\section{Introduction} A classical problem of discrete geometry is to determine the maximum fraction of $d$-dimensional Euclidean space, $\varphi_\mathrm{cp}$, that can be covered by non-overlapping, identical spheres. Determining this densest packing of hard spheres is trivial for $d=1$ and elementary for $d=2$, but otherwise only known rigorously for $d=3$~\cite{hales_proof_2005}, $8$ and $24$~\cite{viazovska_sphere_2017,cohn_sphere_2017}. The behavior of $\varphi_\mathrm{cp}$ as $d \rightarrow \infty$ and the structure of the associated packings are a great mathematical challenge about which relatively little is understood. The best known lower bound is $\varphi_\mathrm{cp} \ge 65963 \cdot d 2^{-d}$~\cite{venkatesh_note_2013} for sufficiently high $d$ (with an additional factor on the order of $\ln (\ln d)$ along a sparse sequence of dimensions), which matches the exponential order of the lower bound $\varphi_\mathrm{cp} \ge 2^{-d}$ trivially obtained by considering any saturated packing~\cite{conway_sphere_1993}. The best upper bound, by contrast, grows exponentially larger with $d$, as $\varphi_\mathrm{cp} \le 2^{-0.599 d}$~\cite{kabatiansky_bounds_1978,cohn_sphere_2014}. Almost all of the known proofs of lower bounds on $\varphi_\mathrm{cp}$ proceed by analyzing lattice packings or random lattice packings (see Ref.~\onlinecite{cohn_packing_2016} for an exposition). These proofs presuppose that lattices provide the backbone of the densest configurations of spheres, but say nothing of the nucleation and coexistence conditions that underlie the ability of a crystal based on such lattices to form and remain stable with respect to the liquid state. While Bravais lattice-based packings are provably optimal in $d = 1$, $2$, $3$, $8$, and $24$, it is far from clear that they remain so for higher $d$~\cite{cohn_packing_2016}. Hence, solely analyzing lattice packings is inadequate to fully capture $\varphi_\mathrm{cp}$. 
We here take a statistical physics approach and analyze $\varphi_\mathrm{cp}$ through the equilibrium properties of the hard sphere model, a uniformly random sphere packing of a given density. We conjecture three possible scenarios for the behavior of the hard sphere model in high dimensions, based on recent work in the physics literature~\cite{radin_structure_2005,koch_most_2005,skoge_packing_2006,van_meel_hard-sphere_2009,estrada_fluidsolid_2011,stevenson_ultimate_2011,charbonneau_thermodynamic_2021,wang_mean-field_2005,finken_freezing_2001,van_meel_geometrical_2009,lue_molecular_2021}: crystallization (scenario A) does not occur, or, if it does, occurs either (scenario B) much after the dynamical glass transition (at which the liquid dynamics becomes arrested~\cite{parisi_theory_2020}) or (scenario C) around that transition. Under some simple assumptions and using recent results from both physics and mathematics, we explore the consequences for $\varphi_\mathrm{cp}$ under each scenario. In A, we conclude that $\varphi_\mathrm{cp} \sim d \ln d \cdot 2^{-d}$; in B, we conclude that $\varphi_\mathrm{cp} $ is only slightly improved to $d^{\g+1} (\ln d)^3 \cdot 2^{-d}$ with some exponent $\g>0$; in C, we have that $\varphi_\mathrm{cp} \ge 2^{-d(1-\epsilon)}$ for some explicit $\epsilon >0$. It is worth noting that Refs.~\onlinecite{torquato_new_2006, torquato_jammed_2010} proposed a series of conjectures based on a different set of arguments from statistical physics that are consistent with scenario C. See also Refs.~\onlinecite{kallus_statistical_2013,andreanov_extreme_2016} and references therein. 
A set of plausibility conditions emerges for each of these scenarios, which are then checked against simulation results and a cell-cluster expansion of the densest known crystals in $d=3$-$10$, which are expected to be the most thermodynamically stable at high pressures (and have been observed to be so for all pressures at which crystals are stable in $d=3$-$6$~\cite{van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}). The inclusion of $d=10$ here is significant, as it is the lowest dimension for which a non-Bravais lattice is the basis for the densest known crystal. While these observations do not suffice to unambiguously declare which scenario is correct, they nevertheless suggest that scenario C is most likely, followed by scenario B. While scenario A remains plausible, no hint of it can be teased from low-dimensional crystallization trends. The rest of this article is organized as follows. In Section~\ref{sec:background}, we provide a series of definitions and describe the first-order liquid-crystal phase transition in hard spheres. In Section~\ref{sec:conjectureSection}, we present the aforementioned conjectures and follow through with their implications for three possible crystallization scenarios. In Section~\ref{sec:simulations}, we analyze low-dimensional crystals in $d=3$-$10$ using cell cluster expansions (where numerically possible) as well as simulations to further constrain the likely scenarios. Section~\ref{sec:final} concludes with a discussion of the likelihood of each of the three scenarios given the low-dimensional trends. \section{Theoretical Background} In this section, we provide a definition of the entropy of the hard sphere model and show that both its first and second derivatives with respect to volume are positive. We then use these properties to derive the relationship between liquid and crystal entropies through a common tangent construction. 
\label{sec:background} \subsection{Definitions} Consider $N$ identical $d$-dimensional hard spheres of diameter $\sigma$ in a box of volume $V$. Sphere positions are specified by a set of $d$-dimensional vectors $\underline{Y} = \{ \mathbf{y}_i \}_{i=1,\cdots,N}$, each $\mathbf{y}_i$ having components $y_{i\m}$ for $\m=1,\cdots,d$. The sphere concentration is equivalently described by the number density $\rho = N/V$, the specific volume $v = 1/\rho = V/N$, and the packing fraction $\varphi = \rho V_d (\sigma/ 2)^d$, where $V_d = \pi^{d/2}/\Gamma(1+d/2)$ is the $d$-dimensional volume of a ball of unit radius. In the following, we consider the thermodynamic limit in which $N\rightarrow\infty$ and $V\rightarrow\infty$, at constant $\varphi \in (0, \varphi_\mathrm{cp})$. Defining $I(\underline{Y})$ as the indicator function specifying that there are no overlaps between spheres, one can introduce \begin{align} Z_N &= \frac1{N!} \int \mathrm d \underline{Y} \, I(\underline{Y}) \ , \qquad Z^{\rm id}_N = \frac{V^N}{N!} \ , \\ Z^{\rm ex}_N &= \frac1{V^N} \int \mathrm d \underline{Y} \, I(\underline{Y}) = \frac{Z_N}{Z^{\rm id}_N} \nonumber \ , \end{align} which are the configurational, ideal gas, and excess partition functions, respectively. Note that $Z^{\rm ex}_N \in [0,1]$ is also the probability that $N$ randomly placed spheres in $V$ have no overlap. 
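Since $Z^{\rm ex}_N$ is the probability that $N$ uniformly placed spheres contain no overlapping pair, it can be estimated directly by Monte Carlo sampling in low $d$. The following sketch (illustrative plain Python in a non-periodic unit box; the function name and parameters are ours, not the paper's) makes this interpretation concrete:

```python
import itertools
import random

def excess_partition(N, d, sigma, trials=20000, seed=0):
    """Monte Carlo estimate of Z^ex_N: the probability that N spheres of
    diameter sigma, placed uniformly at random in a unit box (V = 1),
    contain no overlapping pair."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [[rng.random() for _ in range(d)] for _ in range(N)]
        # no overlap iff every pair of centers is at least sigma apart
        hits += all(
            sum((a - b) ** 2 for a, b in zip(p, q)) >= sigma * sigma
            for p, q in itertools.combinations(pts, 2)
        )
    return hits / trials
```

For instance, $N=3$, $d=2$, $\sigma=0.1$ gives $Z^{\rm ex}_3 \approx 0.9$, consistent with the dilute estimate $1 - \binom{N}{2} V_d \sigma^d \approx 0.91$ (up to boundary effects).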
Similarly, in the thermodynamic limit, the total, ideal-gas, and excess entropies per particle are, respectively, \begin{align} s &=\lim_{N\rightarrow\infty} \frac1N \ln Z_N = s^{\rm id}+ s^{\rm ex} \ , \nonumber \\ \label{eq:excessS} s^{\rm id} &= -\ln(\rho\sigma^d) - d\ln (\Lambda/\sigma) + 1 \ , \\ s^{\rm ex} &=\lim_{N\rightarrow\infty} \frac1N \ln Z^{\rm ex}_N \ , \nonumber \end{align} where $\Lambda$ is the de Broglie wavelength and the sphere diameter $\sigma$ is here introduced purely for notational convenience. The thermodynamic relations for the pressure $P$ and isothermal compressibility $\chi_T$ (for temperature $T=1/\beta$ with the Boltzmann constant set to unity) \begin{equation}\label{eq:thermo} \beta P = \frac{ds}{dv} \geq 0 \ , \qquad \chi_T = - \frac1V \frac{dV}{dP} = -\frac{\rho}{T} \frac{1}{\frac{d^2s}{dv^2}} \geq 0 \ , \end{equation} imply that the total entropy per particle, $s$, must be a monotonically increasing and concave function of the specific volume. \subsection{Crystallization via a first-order phase transition} In all dimensions $d$ for which the information is available, the densest (infinite-pressure) packing of hard spheres is crystalline; that is, given by a (Bravais or not) lattice packing. For $d\geq3$, at finite pressure, this densest packing gives rise to a stable crystalline phase separated from the liquid phase by a first-order transition. Such a liquid-crystal transition means that the liquid and crystal phases have distinct analytic entropy functions, $s_{\ell}$ and $s_{c}$, which are separately monotonically increasing and concave. Because Eqs.~\eqref{eq:thermo} should always be satisfied in equilibrium, the equilibrium state of the system corresponds to the Maxwell construction illustrated in Fig.~\ref{fig:sketch}. At low $P$ (high $v > v_f$), the homogeneous liquid dominates; at high $P$ (low $v < v_m$), the homogeneous crystal dominates. 
In the region $v_m < v < v_f$, the pressure $P_\mathrm{co}$ is constant and the system is formed of coexisting crystalline and liquid domains. The equations determining the three unknowns $v_m, v_f, P_\mathrm{co}$ that characterize the coexistence region can be obtained from the common tangent construction defined as \begin{equation} \begin{split} &\frac{ds_\ell}{dv}(v_f) = \frac{ds_c}{dv}(v_m) = \beta P_\mathrm{co} \ , \\ &s_\ell(v_f) - s_c(v_m) = \beta P_\mathrm{co} (v_f - v_m) \ . \end{split} \label{eq:coex} \end{equation} \begin{figure}[t] \includegraphics[width=\columnwidth]{parallelTangentIllustration.pdf} \caption{Sketch of the liquid (red) and crystal (blue) entropies as a function of the scaled specific volume $\overline{v}$, in the vicinity of the first-order fluid-crystal transition determined by the common tangent construction (black dotted line). The crystal branch terminates at the densest packing density ${\overline{\varphi}_\mathrm{cp}=1/\overline{v}_\mathrm{cp}}$ (dashed line) and remains metastable beyond coexistence up to $\overline{\varphi}_s = 1/\overline{v}_s$. For $d=3$-$10$, $\overline{\varphi}_s>\overline{\varphi}_f$~\cite{charbonneau_thermodynamic_2021}, but no assumption is here made about their ordering in higher $d$. The liquid branch extends from zero density, and its metastable extension beyond coexistence terminates at the Kauzmann density ${\overline{\varphi}_{k}=1/\overline{v}_k}$, whereupon the liquid turns into an ideal glass (purple). The glass phase then terminates at the glass close packing density ${\overline{\varphi}_{\rm gcp}=1/\overline{v}_{\rm gcp}}$~\cite{parisi_theory_2020}.} \label{fig:sketch} \end{figure} \section{Conjectures and Scenarios} \label{sec:conjectureSection} In this section, we describe a set of conjectures that constrain the relationships in Eq.~\eqref{eq:coex} and work through their consequences, hence giving rise to three crystallization scenarios. 
Note that in considering high-$d$ systems, it is convenient to define the scaled packing fraction $\overline{\varphi} =2^d \varphi = \rho V_d \sigma^d$, specific volume $\overline{v} = 1/\overline{\varphi} = v/(V_d \sigma^d)$, and pressure $\overline{P} = \beta P \sigma^d / V_d$. \subsection{Conjectures on the high-\texorpdfstring{$d$}{d} phase behavior} \label{sec:conj} We first make a series of conjectures: \begin{enumerate} \item The high-$d$ equilibrium phase diagram is characterized by a low-density liquid and, possibly, a high-density crystal; no other equilibrium phase intervenes. If there is a high-density crystal phase, then it is separated from the liquid phase by a first-order phase transition, as in Fig.~\ref{fig:sketch}. We have no support for this conjecture, other than the empirical observation that it holds in $d=3$-$10$~\cite{skoge_packing_2006,van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}. \item In the limit of high $d$, the excess entropy of the liquid phase is given by truncating the virial expansion at the lowest order, i.e., \begin{equation} s^{\rm ex}_\ell = - \frac{\overline{\varphi}}{2} = -\frac{1}{2 \overline{v}} \ , \end{equation} which implies that \begin{equation} \label{eq:Sliq} s_\ell = \ln\overline{v} -\frac{1}{2 \overline{v}} + \ln V_d - d \ln (\Lambda/\sigma) + 1 \ . \end{equation} Equation~\eqref{eq:Sliq} holds up to the so-called {\it Kauzmann density} $\overline{\varphi}_k = d \ln d + o(d\ln d)$, at which the liquid state condenses into an ideal glass phase. The ideal glass entropy has a different and less explicit expression---see Ref.~\onlinecite[Eq.~(7.43)]{parisi_theory_2020} and the surrounding discussion---that is continuous at $\overline{\varphi}_k$ and quickly diverges to $-\infty$ upon approaching the {\it glass close packing} density ${\overline{\varphi}_{\rm gcp} = d \ln d + o(d\ln d)}$, as illustrated in Fig.~\ref{fig:sketch}. 
(The difference between $\overline{\varphi}_k$ and $\overline{\varphi}_{\rm gcp}$ is at the level of subleading corrections.) In addition, the liquid dynamics become arrested for ${\overline{\varphi} > \overline{\varphi}_d \approx 4.8 d}$. This conjecture is supported by a large body of physics literature~\cite{frisch_classical_1985, wyler_hard-sphere_1987, frisch_high_1999, parisi_toy_2000, parisi_mean-field_2010, maimbourg_solution_2016, charbonneau_glass_2017, parisi_theory_2020, charbonneau_dimensional_2021}. \item The crystal phase is accurately described by the free-volume entropy. In other words, throughout the crystal phase, particles simply rattle in a cage formed by their neighbors. Consider the close-packed crystal at density $\overline{\varphi}_\mathrm{cp}$, and reduce the diameter of all particles from $\sigma$ to $\sigma(1-\varepsilon)$. The density is correspondingly reduced to $\overline{\varphi} = \overline{\varphi}_\mathrm{cp} (1-\varepsilon)^d$, and each particle gains the possibility of rattling in a region of linear size $a \varepsilon\sigma$ without overlapping its neighbors, $a$ being an unknown proportionality constant close to 1. Moreover, all particles can be permuted, so each particle can access all the $N$ possible cages. Therefore, using $x=\overline{v}_\mathrm{cp}/\overline{v}=\rho/\rho_\mathrm{cp}$, one can estimate \begin{align} \label{eq:cryst} Z^{\rm ex}_N \approx& \left[ \frac{N V_d (a \varepsilon\sigma)^d }{V} \right]^N = \left[ \overline{\varphi}_\mathrm{cp} (a \varepsilon)^d \right]^N \Rightarrow\nonumber\\ s_c \approx& -\ln x + d \ln a + d \ln ( 1 - x^{1/d}) \\ &+\ln V_d - d \ln (\Lambda/\sigma) + 1 \ . \nonumber \end{align} We assume that this expression remains valid for all $\overline{v} \in [ \overline{v}_\mathrm{cp}, \overline{v}_m ]$. 
We have no support for this conjecture, except for the empirical observation that a similar expression provides a good fit to the crystal entropy in $d=3$-$10$~\cite{van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}. The free volume entropy gives a rigorous lower bound on $s^{\rm ex}$, and, if we assume the close-packed crystal to be a lattice packing, then we can allow particles to rattle in regions defined by scaling down the Voronoi cells around each center. A special consideration should be made for lattice packings, such as $\lambda_9$, which contain a set of internal soft (or zero) modes. Along such modes, the packing is allowed to shift freely without generating any overlap. Because the number of such modes is necessarily subextensive, however, the contribution of these modes to the entropy per particle must vanish in the thermodynamic limit (by analogy to the contribution of Goldstone modes in the low-temperature phase of a Heisenberg ferromagnet \cite{patashinskii_fluctuation_1979}). \item The liquid remains the equilibrium phase at least down to a specific volume $\overline{v} = 1/[d \ln(2/\sqrt{3})]$, i.e., $\overline{v}_\ell < 1/[d \ln(2/\sqrt{3})] \approx 6.952/d$ or $\overline{\varphi}_\ell > d \ln(2/\sqrt{3}) \approx 0.144 d$. This conjecture is motivated by the results of Ref.~\onlinecite{jenssen_hard_2019}. \end{enumerate} \subsection{Crystallization in high \texorpdfstring{$d$}{d}} From the conjectures of Sec.~\ref{sec:conj} we can derive bounds on high-$d$ crystallization, which are discussed below. 
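As a quick numerical sanity check on conjecture 2, note that dropping the $\overline{v}$-independent constants of Eq.~\eqref{eq:Sliq} leaves $s_\ell(\overline{v}) = \ln\overline{v} - 1/(2\overline{v})$, whose derivative reproduces the truncated-virial scaled pressure $\overline{P} = \overline{\varphi} + \overline{\varphi}^2/2$ while its second derivative stays negative, as Eq.~\eqref{eq:thermo} requires. A minimal finite-difference sketch (illustrative Python, not part of the original analysis):

```python
import math

def s_liq(vbar):
    # Eq. (Sliq) with the vbar-independent constants dropped
    return math.log(vbar) - 0.5 / vbar

vbar, h = 0.25, 1e-5   # scaled specific volume and finite-difference step
Pbar = (s_liq(vbar + h) - s_liq(vbar - h)) / (2 * h)            # d s / d vbar
curv = (s_liq(vbar + h) - 2 * s_liq(vbar) + s_liq(vbar - h)) / h**2

phibar = 1.0 / vbar    # scaled packing fraction, here 4
# Pbar matches phibar + phibar**2 / 2 = 12, and curv < 0 confirms
# concavity (i.e., a positive isothermal compressibility)
```

The analytic values are $\overline{P} = 1/\overline{v} + 1/(2\overline{v}^2) = 12$ and $s_\ell'' = -1/\overline{v}^2 - 1/\overline{v}^3 = -80$ at $\overline{v} = 1/4$.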
\subsubsection{Coexistence equations} First, by rewriting Eqs.~\eqref{eq:coex} in terms of scaled variables and using Eq.~\eqref{eq:Sliq} for $s_\ell$ and Eq.~\eqref{eq:cryst} for $s_c$, we obtain \begin{equation} \begin{split} &\frac{1}{\overline{v}_f} + \frac{1}{2 \overline{v}_f^2} = \frac1{\overline{v}_m} \frac{1}{ 1 - \left( \overline{v}_\mathrm{cp}/\overline{v}_m \right)^{1/d} } = \overline{P}_\mathrm{co} \ , \\ &\ln(\overline{v}_f) - \frac{1}{2 \overline{v}_f} - \ln( \overline{v}_m / \overline{v}_\mathrm{cp} ) - d \ln a \\ & - d \ln \left[ 1 - \left( \frac{\overline{v}_\mathrm{cp}}{\overline{v}_m} \right)^{1/d} \right] = \overline{P}_\mathrm{co} (\overline{v}_f - \overline{v}_m) \ . \end{split} \end{equation} It is then convenient to rewrite these equations in terms of density $\overline{\varphi}$: \begin{equation}\label{eq:coexolf} \begin{split} &\overline{\varphi}_f + \frac{1}{2} \overline{\varphi}_f^2 = \frac{\overline{\varphi}_m}{ 1 - \left( \overline{\varphi}_{m}/\overline{\varphi}_\mathrm{cp} \right)^{1/d} } = \overline{P}_\mathrm{co} \ , \\ &-\ln(\overline{\varphi}_f) - \frac{1}{2} \overline{\varphi}_f - \ln( \overline{\varphi}_\mathrm{cp} / \overline{\varphi}_{m} ) - d \ln a \\ & - d \ln \left[ 1 - \left( \frac{\overline{\varphi}_{m}}{\overline{\varphi}_\mathrm{cp}} \right)^{1/d} \right] = \overline{P}_\mathrm{co} \left( \frac{1}{\overline{\varphi}_f} -\frac{1}{\overline{\varphi}_m} \right) \ . \end{split} \end{equation} Given $\overline{\varphi}_\mathrm{cp}$, these equations can easily be solved numerically to yield the coexistence parameters. This strategy was employed by Finken et al.~\cite{finken_freezing_2001} (albeit possibly with an erroneous common tangent construction~\cite{van_meel_hard-sphere_2009}) using the close packing densities of laminated lattices up to $d\approx 50$. Here we take a different approach. We use our knowledge of $\overline{\varphi}_f$ to obtain bounds on $\overline{\varphi}_\mathrm{cp}$. 
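In practice, Eqs.~\eqref{eq:coexolf} reduce to a one-dimensional root search: given $\overline{\varphi}_m$, the first line fixes $\overline{P}_\mathrm{co}$ and hence $\overline{\varphi}_f$ (the positive root of $\overline{\varphi}_f + \overline{\varphi}_f^2/2 = \overline{P}_\mathrm{co}$), and the second line leaves a single residual in $\overline{\varphi}_m$. A bisection sketch along these lines (illustrative Python, not the paper's code; the bracket $[\texttt{lo}, \texttt{hi}]$ must be chosen to isolate the physical root with $\overline{\varphi}_f < \overline{\varphi}_m$):

```python
import math

def coexistence(d, phicp, a=1.0, lo=1.0, hi=10.0, iters=200):
    """Solve the scaled coexistence equations (Eq. coexolf) for
    (phi_f, phi_m) by bisection on the scaled melting density phi_m.
    phicp is the scaled close-packing density, a the free-volume constant."""
    def residual(phim):
        g = 1.0 - (phim / phicp) ** (1.0 / d)   # 1 - (phi_m/phi_cp)^(1/d)
        P = phim / g                             # scaled coexistence pressure
        phif = math.sqrt(1.0 + 2.0 * P) - 1.0    # root of phi + phi^2/2 = P
        lhs = (-math.log(phif) - 0.5 * phif - math.log(phicp / phim)
               - d * math.log(a) - d * math.log(g))
        rhs = P * (1.0 / phif - 1.0 / phim)
        return lhs - rhs, phif

    flo, _ = residual(lo)
    fhi, _ = residual(hi)
    assert flo * fhi < 0.0, "bracket does not straddle a root"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid, _ = residual(mid)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    phim = 0.5 * (lo + hi)
    _, phif = residual(phim)
    return phif, phim
```

For instance, with $d=30$, a hypothetical $\overline{\varphi}_\mathrm{cp}=e^{0.2 d}$, $a=1$, and $\overline{\varphi}_m$ bracketed in $[50, 300]$, the solver returns a solution obeying the expected ordering $\overline{\varphi}_f < \overline{\varphi}_m < \overline{\varphi}_\mathrm{cp}$.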
\subsubsection{Asymptotic analysis} According to Sec.~\ref{sec:conj}, one has $\overline{\varphi}_f \in [0.144 d, d \ln d]$. In a stricter setting we could impose that crystallization happens before the liquid is dynamically arrested, which would restrict the upper bound to $4.8 d$. We thus introduce $\widehat{\varphi}_f = \overline{\varphi}_f/d$, which is $\mathcal{O}(1) $ or at most $\mathcal{O}(\ln d)$. For $d\rightarrow\infty$, we have $\ln(\overline{\varphi}_f) \ll \frac{1}{2} \overline{\varphi}_f$ and $\overline{\varphi}_f \ll \frac{1}{2} \overline{\varphi}_f^2$, and also $\overline{P}_\mathrm{co} \sim \frac{1}{2} \overline{\varphi}_f^2$, which thus simplifies Eqs.~\eqref{eq:coexolf} as: \begin{equation} \begin{split} & \frac{d^2}2 \widehat{\varphi}_f^2 = \frac{\overline{\varphi}_m}{ 1 - \left( \overline{\varphi}_{m}/\overline{\varphi}_\mathrm{cp} \right)^{1/d} } \ , \\ & - d \widehat{\varphi}_f - \ln( \overline{\varphi}_\mathrm{cp} / \overline{\varphi}_{m} ) \\ & - d \ln a - d \ln \left[ 1 - \left( \frac{\overline{\varphi}_{m}}{\overline{\varphi}_\mathrm{cp}} \right)^{1/d} \right] = - \frac{d^2}{2} \frac{ \widehat{\varphi}_f^2}{ \overline{\varphi}_m} \ . \end{split} \label{eqn:asymptotics} \end{equation} Two possible asymptotic solutions to these equations exist, depending on the scaling of $\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp}$. \subsection{Crystallization scenarios} \label{sec:3scenarios} Based on the above conjectures and asymptotic analysis, three distinct crystallization scenarios can be identified. \subsubsection{Scenario A} In this scenario, crystallization does not proceed and thus the liquid and the glass phases are the only possible equilibrium phases. The close packing density then equals the glass close packing density, and hence $\varphi_\mathrm{cp} = 2^{-d} \cdot \overline{\varphi}_{\rm gcp} \sim 2^{-d} d \ln d$. 
This scenario happens if the close packing density of the densest crystal remains below $\varphi_{\rm gcp}$. \subsubsection{Scenario B} In this scenario, we suppose that there is a crystalline phase and ${\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp} \sim A/d^\g}$ (with $\g>0$ and $A>0$, or $\g=0$ and $0<A<1$), such that ${1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} \sim ( \g \ln d - \ln A)/d}$. Note that in this scenario $1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} \allowbreak \ll 1$ and the use of the free volume equation of state for the crystal is well justified. Defining $\widehat{\varphi}_m = \overline{\varphi}_m/d$ and neglecting subdominant terms, Eqs.~\eqref{eqn:asymptotics} become \begin{equation} \begin{split} & \frac{1}{2} \widehat{\varphi}_f^2 = \frac{\widehat{\varphi}_m }{ \g \ln d - \ln A} \ , \\ & - \widehat{\varphi}_f - \ln a - \ln(\g \ln d - \ln A) + \ln d = - \frac{1}{2} \frac{ \widehat{\varphi}_f^2}{ \widehat{\varphi}_m} \ . \end{split} \end{equation} The solution is \begin{equation} \label{eq:Bris} \begin{split} \overline{\varphi}_f &\sim d \ln d \ , \\ \overline{\varphi}_m &\sim d \frac{(\ln d)^2}{2} ( \g \ln d - \ln A) \ , \\ \overline{\varphi}_\mathrm{cp} &\sim \frac{d^{\g+1}}{A} \frac{(\ln d)^2}{2} ( \g \ln d - \ln A) \ . \\ \end{split} \end{equation} Note that one should check the subleading corrections to $\overline{\varphi}_f$ to make sure that $\overline{\varphi}_f \leq \overline{\varphi}_k$, which is a strict requirement for the consistency of our approach. It is also somewhat unpleasant that crystallization then takes place much beyond the dynamical arrest of the liquid, i.e., $\overline{\varphi}_f \gg \overline{\varphi}_d$. In this scenario, the close-packed crystal would be only slightly denser than the best amorphous packing, and its exponential scaling would be the same as the Minkowski bound. 
Crystallization would then be extremely unlikely, because the liquid would become dynamically arrested before any sign of crystallization could emerge. Note that the value of $a$, provided it remains finite for $d\rightarrow\infty$, here plays no role. \subsubsection{Scenario C} In this scenario, we suppose there is a crystalline phase, but by contrast to scenario B, here $\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp} \sim e^{-\a d}$ with constant $\alpha > 0$. Hence, $1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} = 1-e^{-\a}$ remains finite, and the use of the free volume equation of state for the crystal is less justified for large $\a$. Then, the first equation gives $\overline{\varphi}_m = (d^2/2) \widehat{\varphi}_f^2 (1-e^{-\a})$. Plugging this expression into the second Eq.~\eqref{eqn:asymptotics} and taking the leading order, we get $\widehat{\varphi}_f = -\a -\ln(1-e^{-\a}) - \ln a$. The final result is then \begin{equation} \label{eq:finC} \begin{split} & \overline{\varphi}_f \sim d [ -\a -\ln(1-e^{-\a}) - \ln a ] \ , \\ & \overline{\varphi}_m \sim \frac{1}{2} \overline{\varphi}_f^2 (1-e^{-\a}) \ , \\ &\overline{\varphi}_\mathrm{cp} \sim e^{\a d} \overline{\varphi}_m \ . \\ \end{split} \end{equation} In this scenario, the beginning of the coexistence region is $\overline{\varphi}_f \propto d$, which is a natural scaling for the liquid state, while the end of the coexistence region is $\overline{\varphi}_m \propto d^2$ and the crystal close packing is $\overline{\varphi}_\mathrm{cp} \propto e^{\a d}$. \begin{figure}[t] \includegraphics[width=\columnwidth]{KLPhaseDiagram.pdf} \caption{Sketch of $\widehat{\varphi}_f$ as a function of $\a=\frac{1}{d}\ln\overline{\varphi}_\mathrm{cp}$ in scenario C, assuming that $\ln a=0$. All areas to the right of and below the red box are forbidden by the KL bound when ${d\rightarrow\infty}$. 
Simulation results for the freezing density in $d=3$-$10$ (magenta) trend towards the asymptotically allowed region as $d$ increases.} \label{fig:bound} \end{figure} The relation between $\widehat{\varphi}_f =\overline{\varphi}_f/d$ and $\alpha=\frac{1}{d}\ln\overline{\varphi}_\mathrm{cp}$ (at fixed $a$), given in Eq.~\eqref{eq:finC}, is illustrated in Fig.~\ref{fig:bound} (for $a=1$). It is a decreasing function, and it can be inverted to give \begin{equation} \alpha = \ln(1+e^{-\widehat{\varphi}_f-\ln a}) \ . \end{equation} Hence, upper bounds on $\alpha$ can be turned into lower bounds for $\widehat{\varphi}_f$, and vice versa. Let us consider upper bounds on $\a$ first. The Kabatiansky-Levenshtein (KL) upper bound on packing~\cite{kabatiansky_bounds_1978} requires that $\a \leq \allowbreak \ln(2) (1-0.5990) \allowbreak = 0.278$, which then implies \begin{equation}\label{eq:wfflb} \widehat{\varphi}_f > 1.138 -\ln a \ . \end{equation} The fourth conjecture in Sec.~\ref{sec:conj} implies a bound $\widehat{\varphi}_f \geq 0.144$; as long as $\ln a \leq 0.994$, this bound is however weaker than the KL one. We then consider lower bounds on $\a$. If we require that crystallization happens before dynamical arrest, i.e., $\widehat{\varphi}_f\leq 4.8$, then we obtain a lower bound on $\alpha$ in terms of~$a$, \begin{equation} \ln\big(1+e^{-4.8-\ln a}\big) \le \alpha \ . \label{eqn:alphaRange} \end{equation} Note that crystallization might well take place after dynamical arrest, as is the case in scenario B discussed above; if $\ln a < -3.662$, then this is necessarily the case, due to Eq.~\eqref{eq:wfflb}. Unfortunately, $a$ is unknown, but if the third conjecture of Sec.~\ref{sec:conj} is correct, then it should be close to unity, which is what our finite-$d$ results suggest (see Sec.~\ref{sec:simulations}). Because the dependence of the above bounds on $a$ is logarithmic, relatively small deviations from unity further do not much affect the results. 
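The inversion is elementary: from Eq.~\eqref{eq:finC}, $\widehat{\varphi}_f = -\alpha - \ln(1-e^{-\alpha}) - \ln a$ gives $e^{-\widehat{\varphi}_f - \ln a} = e^{\alpha} - 1$, hence $\alpha = \ln(1+e^{-\widehat{\varphi}_f-\ln a})$. A short numerical check (illustrative Python) also recovers the quoted endpoints for $a=1$:

```python
import math

def phif_hat(alpha, a=1.0):
    # scenario C freezing density per dimension, from Eq. (finC)
    return -alpha - math.log(1.0 - math.exp(-alpha)) - math.log(a)

def alpha_of(phif, a=1.0):
    # inverse relation: alpha = ln(1 + exp(-phif_hat - ln a))
    return math.log(1.0 + math.exp(-phif - math.log(a)))

# the round trip is exact for any alpha > 0 and a > 0
for alpha in (0.0082, 0.1, 0.278):
    assert abs(alpha_of(phif_hat(alpha, 1.2), 1.2) - alpha) < 1e-12

print(alpha_of(4.8))     # ~0.0082: lower bound if freezing precedes arrest
print(phif_hat(0.278))   # ~1.138: KL bound on alpha turned into a phif bound
```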
We thus assumed $a \rightarrow 1$ as $d \rightarrow \infty$, for illustration, in Fig.~\ref{fig:bound}. With this choice, we obtain $\widehat{\varphi}_f = -\a -\ln(1-e^{-\a})$ (blue line) and \begin{equation} \ln(1+e^{-4.8}) =0.0082 \leq \a \leq 0.278 \ . \end{equation} The true value of $a$ in the high-dimensional limit simply sets the ordinate offset of the blue curve in Fig.~\ref{fig:bound} and, provided it is not too far from unity, only slightly shifts the lower bound. \subsubsection{Summary of the three scenarios} From the analysis so far, we conclude that under the assumptions in Sec.~\ref{sec:conj}, three possible scenarios arise: \begin{itemize} \item[A.] If there is no crystallization, then $\varphi_\mathrm{cp} = 2^{-d} \overline{\varphi}_{\rm gcp} \sim d \ln d \cdot 2^{-d}$, and optimal packings are glasses. \item[B.] If the close packing density of the crystal is not exponentially larger than the melting density, then crystallization happens deep in the dynamically arrested region ($\overline{\varphi}_d \ll \overline{\varphi}_f < \overline{\varphi}_k \sim d \ln d$), and we obtain the results in Eq.~\eqref{eq:Bris}, in particular with $\overline{\varphi}_\mathrm{cp} \sim d^{\g+1} \frac{(\ln d)^2}{2} ( \g \ln d - \ln A)$ being not exponential in $d$, and only slightly larger than $\overline{\varphi}_{\rm gcp}$. \item[C.] If instead crystallization happens on the same scale as the dynamical arrest ($\overline{\varphi}_f\propto d$), then the crystal close packing should be $\overline{\varphi}_\mathrm{cp}\sim e^{\a d}$. Quantitative bounds then depend weakly (logarithmically) on $a$; we assume $a=1$ for simplicity, which gives an upper bound $\a < 0.278$ from the KL bound and also implies $\overline{\varphi}_f > 1.138 d$. 
The additional requirement that crystallization happens {\it before} dynamical arrest ($\overline{\varphi}_f < \overline{\varphi}_d\sim 4.8d$) gives a lower bound $\a>0.0082$ on the close packing density (assuming $\ln a \approx 0$), which improves exponentially over the Minkowski bound. \end{itemize} \section{Insights from low-\texorpdfstring{$d$}{d} crystals} \label{sec:simulations} To obtain insights into which of the above three scenarios is most likely, we examine $\overline{\varphi}_f$, $\overline{\varphi}_m$, and $a$ for each of the densest crystals in $d=3$-$10$, which are the $D_d$ checkerboard lattices in $d=3$-$5$, $E_6$ in $d=6$, $E_7$ in $d=7$, the $E_8$ root lattice in $d=8$, $\lambda_9$ in $d=9$, and $P_{10c}$ in $d=10$~\cite{conway_sphere_1993}. Because the relative distance between $\overline{\varphi}_m$ and $\overline{\varphi}_\mathrm{cp}$ grows with $d$, it is possible (through corrections to the free-volume expressions considered here) that a lower density crystal may be most stable at intermediate pressures for certain $d$. We here consider only the densest crystals, as all other crystal forms would in any case transition to the densest at high pressure given enough time. (Note that low-dimensional studies in $d=2$-$6$ have found no such discrepancy~\cite{van_meel_hard-sphere_2009, lue_molecular_2021}, and that if one were to exist, the densest crystal would nevertheless offer the strongest bound on the stability of the liquid, and thus the scaling analysis would not be impacted.) In this section, we consider three distinct estimates of the constant $a$ for these crystals: (1) a high-$d$ generalization of the Rudd--Stillinger cell-cluster expansion~\cite{rudd_rigid_1968} for nearly perfect crystals; (2) the scaling of the dynamical cage size near close packing; (3) thermodynamic integration of the crystal equation of state from a reference crystal whose absolute entropy is determined by the Frenkel-Ladd scheme~\cite{frenkel_new_1984}. 
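To illustrate estimate (2): conjecture 3 implies that the long-time MSD plateau should scale as $\Delta \sim (a\varepsilon\sigma)^2$ near close packing, so $a$ can be read off a power-law fit of $\Delta$ against $\varepsilon = 1-x^{1/d}$. The sketch below (illustrative Python; the synthetic data and the exact prefactor $\Delta = (a\varepsilon\sigma)^2$ are our assumptions, not the paper's fitting form) demonstrates the extraction on data generated with a known $a$:

```python
import math

# Synthetic MSD plateaus Delta = (a * eps * sigma)^2 near close packing,
# with a known a_true; we then recover a by a log-log least-squares fit
# with the slope fixed to 2 (the assumed scaling exponent).
sigma, a_true = 1.0, 0.9
eps_vals = [0.001 * (k + 1) for k in range(10)]   # eps = 1 - x^(1/d)
delta = [(a_true * e * sigma) ** 2 for e in eps_vals]

# fit log Delta = 2 log eps + 2 log(a sigma) for the intercept
n = len(eps_vals)
mean_x = sum(math.log(e) for e in eps_vals) / n
mean_y = sum(math.log(D) for D in delta) / n
a_fit = math.exp((mean_y - 2.0 * mean_x) / 2.0) / sigma   # recovers a_true
```

In practice the simulated plateaus carry statistical noise and an unknown $\mathcal{O}(1)$ geometric prefactor, so such a fit constrains $a$ only up to that prefactor.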
\begin{table*}[htb]\centering \begin{tabular}{c | c |c | c | c | c | c | c} \hline \hline crystal & $\varphi_\mathrm{cp}$ & $\kappa_0^\mathrm{CC_1}$ & $\kappa_0^\mathrm{CC_2}$ (Dir) & $\kappa_0^\mathrm{fit}$ & $\kappa_1^\mathrm{CC_1}$ & $\kappa_1^\mathrm{CC_2}$ (Dir) & $\kappa_1^\mathrm{fit}$\\ \hline $D_3$ & 0.7405 & 0.125 & 0.3136 & 0.511(18) & 0.6115 & 2.2108 & 3.8(4)\\ $D_4$ & 0.6169 & 0.15 & 0.3403 & 0.491(18) & 0.76 & 2.4845 & 3.3(3)\\ $D_5$ & 0.4653 & 0.2 & 0.4299 & 0.555(12)& 0.9205 & 1.5717 & 3.2(2)\\ $E_6$ & 0.3729 & 0.4286 & -- & 0.54(2) & 0.1228 & -- & 2.8(3)\\ $E_7$ & 0.2953 & 0.4554 & -- & 0.48(3) & -- & -- & 3.2(4)\\ $E_8$ & 0.2537 & 0.4336 & -- & 0.41(3)& -- & -- & 3.1(5)\\ $\lambda_9$ & 0.1458 & 0.4370 & -- & 1.2(2) & -- & -- & -3(2)\\ $P_{10c}$ & 0.0996 & -- & -- & 0.62(5) & -- & -- & 3.1(7)\\ \hline \hline \end{tabular} \caption{\textbf{Constants used and derived for each crystal.} Packing fraction at close packing $\varphi_\mathrm{cp}$ taken from Ref.~\onlinecite{conway_sphere_1993}. Cell cluster equation of state results are given to both first ($\kappa_0^\mathrm{CC_1}$ and $\kappa_1^\mathrm{CC_1}$) and second ($\kappa_0^\mathrm{CC_2}$ and $\kappa_1^\mathrm{CC_2}$) order, and are compared with the crystal equation of state obtained from numerical simulations, $\kappa_0^\mathrm{fit}$ and $\kappa_1^\mathrm{fit}$. Cell cluster results are exact to machine precision, but rounded to four decimal places. 
Otherwise, error bars represent $95\%$ confidence intervals.} \label{table:packingConstants} \end{table*} \subsection{Cell-cluster expansion} \label{sec:cellClusterMain} Rudd and Stillinger proposed to expand the entropy of a high-pressure crystal of hard particles~\cite{rudd_rigid_1968} by ordering terms as \begin{multline} s_c = \lim_{x\rightarrow 1} \bigg[-d\ln(\Lambda/\sigma) + d\ln(1 - x^{1/d}) - \ln x - C \\ - D(1-x^{1/d}) - E(1-x^{1/d})^2 + \mathcal{O}(1-x^{1/d})^3 \bigg] \ , \label{eqn:rudd} \end{multline} where $x=\varphi/\varphi_\mathrm{cp}$. In essence, this scheme proposes a polynomial correction in $1-x^{1/d}$ to the free volume expansion of Eq.~\eqref{eq:cryst}. The coefficients $C$, $D$, and $E$, which depend on crystal symmetry and dimension, can further be expanded in (infinite) series of cell clusters (see Appendix~\ref{sec:cellClusterAppendix}). We here present two such expansions, denoted recursive (Rec) and direct (Dir). Although these series are neither unique nor proven to converge in any $d$, they nevertheless provide a constructive analytical framework. By (admittedly loose) physical analogy with the virial expansion for the liquid, one might even expect their convergence rate to improve as $d\rightarrow\infty$. 
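Differentiating Eq.~\eqref{eqn:rudd} term by term gives the reduced pressure $p = -x\,\partial s_c/\partial x = 1/(1-x^{1/d}) - D/d + [(D-2E)/d]\,(1-x^{1/d}) + \mathcal{O}(1-x^{1/d})^2$, as derived next. This identification can be checked by a finite-difference sketch (illustrative Python with made-up coefficients $C$, $D$, $E$; the $x$-independent $\Lambda$ term, which does not affect the derivative, is dropped):

```python
import math

def s_cell(x, d, C, D, E):
    # Eq. (rudd) truncated at quadratic order, Lambda term dropped
    u = 1.0 - x ** (1.0 / d)
    return d * math.log(u) - math.log(x) - C - D * u - E * u * u

d, C, D, E = 3, 1.0, -1.5, -5.25    # made-up illustrative coefficients
kappa0 = -D / d                      # constant offset of the pressure
kappa1 = (D - 2.0 * E) / d           # linear offset

x, h = 0.99 ** 3, 1e-7               # chosen so that u = 1 - x^(1/3) = 0.01
p_num = -x * (s_cell(x + h, d, C, D, E)
              - s_cell(x - h, d, C, D, E)) / (2.0 * h)
u = 1.0 - x ** (1.0 / d)
p_series = 1.0 / u + kappa0 + kappa1 * u
# p_num and p_series agree up to the O(u^2) truncation error
```

The residual difference is exactly $(2E/d)\,u^2$ for this truncated $s_c$, i.e., negligible at $u = 0.01$.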
Using Eqs.~\eqref{eq:thermo} and~\eqref{eqn:rudd}, the reduced pressure (or compressibility) can then be expressed as \begin{align} p &= \frac{\beta P}{\rho} =-x\left(\frac{\partial s_c}{\partial x}\right)_{\beta}\nonumber\\ &= \frac{1}{1-x^{1/d}} + \kappa_0 + \kappa_1(1-x^{1/d}) + \mathcal{O}(1-x^{1/d})^2, \label{eqn:compressDeriv} \end{align} where the first term $1/(1-x^{1/d})$ is the free volume equation of state~\cite{kirkwood_critique_1950, kamien_entropic_2007}, and its constant and linear corrections are, respectively, \begin{align} \kappa_0 &= - \frac{D}{d} \ ,\label{eq:relationK0} \\ \kappa_1 &= \frac{D-2E}{d} \ .\label{eq:relationK1} \end{align} Comparing Eqs.~\eqref{eq:cryst} and \eqref{eqn:rudd} further identifies \begin{equation} d \ln a = - C - 1 -\ln V_d \ . \label{eq:relationCA} \end{equation} Values of $\kappa_0$ and $\kappa_1$ from the cell cluster expansions are presented in Table~\ref{table:packingConstants}. The standard derivation of the free volume---and $s_c$ by extension---assumes that upon decompressing a close-packed crystal the available free volume, $v_\mathrm{free}$, is that of the Voronoi cell (see Fig.~\ref{fig:curvedFV}). However, the true free volume is larger than this approximation, hence $C > 0$ for all lattices. Because the free-volume boundary is concave, i.e., $\frac{\partial^2 v_\mathrm{free}}{\partial x^2} < 0$, we also have that $\kappa_0 > 0$ for all lattices. No similar constraint, however, obviously fixes the sign of $\kappa_1$. (See Appendix \ref{sec:freeVolumeExpansion} for a fuller presentation.) \begin{figure}[htb] \includegraphics[width=0.5\linewidth]{curvedFreeVolume.pdf} \caption{Free volume schematic, with each sphere (black circle) excluding a spherical volume of radius $\sigma$ (dashed lines) around its center from the centers of all other spheres. The space thus bounded (blue dashed lines) is the free volume, $v_\mathrm{free}$ (blue), available to the center sphere. 
For $x\to1$, the free volume boundary is approximately self-similar to that of the Voronoi cell (red), but upon decompression the curvature of the free volume boundary grows more pronounced. This concavity implies $\frac{\partial^2 v_\mathrm{free}}{\partial x^2}<0$, and thus $\kappa_0 > 0$.} \label{fig:curvedFV} \end{figure} \begin{figure}[ht] \includegraphics[width=0.9\linewidth]{bothDelta.pdf} \caption{\textbf{a)} Long-time MSD plateau for $d=3$-$10$, offset by a multiplicative factor of $2^d$ for visual clarity. \textbf{b)} From the scaling form in Eq.~\eqref{eq:cagescaling} (solid lines), the prefactor $\Delta_0$ is extracted.} \label{fig:deltaScaling} \end{figure} \subsection{Dynamical cage size} The cage size determined from the long-time limit of the mean squared displacement (MSD), $\langle r^2(t) \rangle$, can be used to estimate $a$ under simple assumptions. In order to compute the MSD, perfect crystals are first prepared by trivial planting. Equilibrium configurations are then sampled using the same Metropolis Monte Carlo scheme as in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. A straightforward MSD computation is, however, inappropriate for $\lambda_9$, given that its nine internal soft modes permit unbounded motion along certain directions. For this crystal, the relevant MSD therefore excludes displacements along these soft dimensions, \begin{equation} \langle r^2(t) \rangle = \frac{1}{N} \sum_{i,\vartheta} [r^\vartheta_i(t) - r^\vartheta_i(0) - \Xi^\vartheta h^\vartheta_i - r^\vartheta_\mathrm{com}(t)]^2, \end{equation} where $\mathbf{r}_\mathrm{com}(t)$ denotes the center of mass of sublattice $\vartheta$ and $\{2\Xi^\vartheta\}$ denotes the (global) displacement vector along each mode. (The factor of $2$ is included for symmetrization.) Each of the nine sublattices contains half of all particles moving either in the positive or negative direction away from the center of mass of the sublattice. 
We denote the participation of each particle in each sublattice by the tensor elements $h_i^\vartheta=\pm 1$. The term $\Xi^\vartheta h_i^\vartheta$ then encodes the distance traveled by each particle from the center of mass of each sublattice during collective motions. In all cases, the MSD is fitted to a stretched exponential \begin{equation} \langle r^2(t) \rangle = \Delta\big(1-e^{-(t/\tau_\beta)^{\gamma}}\big) \label{eqn:tauDef} \end{equation} with relaxation time $\tau_\beta$ and stretching exponent $\gamma < 1$, for time $t$ given in number of Monte Carlo sweeps, as the MSD relaxes to its plateau height, $\Delta$. For the range of $d$, $N$, and $x$ considered, $\gamma$ typically varies between $0.4$ and $1$, and systematically increases with $d$ (Appendix~\ref{sec:equilibrationConstants}, Fig.~\ref{fig:equilibration}). The time constant depends only weakly on dimension and $x$---except for $d=3$ near coexistence---and $\tau_\beta$ is $\mathcal{O}(10)$. A system is deemed equilibrated after $t\geq 10\tau_\beta$, which can easily be achieved using standard computational resources. Over $10,000$ independent snapshots for each $x$ and $d$ can thus be efficiently obtained. As $x$ approaches unity, the typical cage size, $\Delta$, scales as (Fig.~\ref{fig:deltaScaling}) \begin{equation} \label{eq:cagescaling} \Delta = \Delta_0(1-x^{1/d})^2, \end{equation} with prefactor $\Delta_0$. Separately, using the partition function in Eq.~\eqref{eq:cryst} to compute the MSD of two random points in a $d$-dimensional sphere of radius $a$ gives \begin{equation} \Delta_0=2a^2\frac{d(d^2+2d-1)}{(1+d)^2(2+d)}. \label{eq:dynamicA} \end{equation} Within this structural assumption, an estimate for $a$ can thus be extracted from the scaling of the MSD plateau height. \begin{figure*}[htb] \includegraphics[width=\linewidth]{pressureConstantCompareOffsetUpdated.pdf} \caption{\textbf{a)} Correction to the free volume equation of state in $d=3$-$10$. 
The constant and linear terms, $\kappa^\mathrm{fit}_0$ and $\kappa^\mathrm{fit}_1$, are estimated from a simple linear fit (lines) of the simulation results (points) over $1-x^{1/d} \in(0, 0.1)$. The growth of the error bars, which denote 95\% confidence intervals, at high densities reflects the numerical difficulty of comparing two diverging quantities. Note that for visual clarity each curve is offset by $d/5$. \textbf{b)} Comparison of $\kappa^\mathrm{fit}_0$ (black) from (a) with the value from the first order cell cluster expansion, $\kappa^\mathrm{CC_1}_0$ (red), and the second, $\kappa^\mathrm{CC_2}_0$ (blue), using both expansions. As $d$ increases, the direct cluster expansion results appear to slowly converge towards the pressure calculation to first order, while the recursive expansion appears to diverge. \textbf{c)} Comparison of $\kappa^\mathrm{fit}_1$ from (a) with $\kappa_1^\mathrm{CC_1}$ and $\kappa_1^\mathrm{CC_2}$ using both cell cluster expansions. Here, the truncated recursive expansion oscillates as expected (see Appendix~\ref{sec:cellClusterAppendix}). In (b) and (c), lines are only provided as guides for the eye.} \label{fig:speedyPDiff} \end{figure*} \subsection{Thermal integration} An assumption-free estimate of $a$ can also be obtained by thermally integrating the crystal equation of state from a state of known entropy. Such high-accuracy entropies are available for $d=3$ up to close packing~\cite{speedy_pressure_1998}, but comparable results are limited to the liquid-crystal coexistence regime for $d=4$-$10$~\cite{van_meel_hard-sphere_2009,charbonneau_thermodynamic_2021}. For succinctness, we here briefly describe the integration scheme, and especially how it differs from that reported in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. The reduced pressure is first computed using the pair correlation at contact, $g(\sigma^+)$, \begin{equation} p = 1 + \frac{\overline{\varphi}}{2}g(\sigma^+) \ . 
\end{equation} These numerical results are then fitted using Eq.~\eqref{eqn:compressDeriv} up to $\mathcal{O}(1-x^{1/d})$, which provides numerical estimates of $\kappa_0$ and $\kappa_1$. The fitted crystal equation of state captures simulation results well for all $d$ in this regime (Fig.~\ref{fig:speedyPDiff}), thus validating the form proposed by Rudd et al.~for expanding the free energy around close packing. Higher-order corrections would, however, be needed to describe pressures down to the fluid-crystal coexistence regime~\cite{charbonneau_thermodynamic_2021}. Absolute entropies at a reference density $x_0$ are computed by performing a Frenkel-Ladd integration at that state~\cite{frenkel_new_1984,polson_finite-size_2000} in all but $d=9$, where the periodic potential defined in Ref.~\onlinecite{charbonneau_thermodynamic_2021} is used instead. In order to optimize numerical accuracy, the reference state is taken near close packing, i.e., $x_0\approx 1$, instead of near melting as in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. However, a larger integration cutoff is then needed to prevent spheres from overlapping in the Einstein crystal limit of the Frenkel-Ladd scheme. These constraints are balanced by taking $x_0$ within the regime of validity of the fitted equation of state, but no denser. Because the reference entropy exhibits a significant size dependence---unlike the crystal equation of state---the thermodynamic $s_c(x_0)$ is further estimated using a standard finite-size scaling analysis (Appendix~\ref{sec:finSizeRef}). A numerical estimate of $a$ can then be obtained via Eqs.~\eqref{eqn:rudd} and \eqref{eq:relationK0}-\eqref{eq:relationCA}, which in the limit $x\rightarrow 1$ yield \begin{equation} \begin{split} d \ln a = s_c(x_0) - d\ln(1 - x_0^{1/d}) + \ln x_0 - 1 - \ln V_d \\ - d\kappa_0 (1-x_0^{1/d}) - \frac{d}{2}(\kappa_0 + \kappa_1)(1-x_0^{1/d})^2 \ . 
\label{eq:thermalIntA} \end{split} \end{equation} \subsection{Summary of results from low-\texorpdfstring{$d$}{d} crystals} The cell-cluster expansion and the numerical estimates of $a$ from both the cage size and thermal integration are compared in Fig.~\ref{fig:tivscc}. The two numerical estimates, $a^{\Delta_0}$ and $a^\mathrm{fit}$, neatly converge as dimension increases. The spherical caging assumption of Eq.~\eqref{eq:dynamicA} on which the former relies, although fairly crude in low $d$, becomes increasingly inconsequential as $d$ increases. Going from the first level of the cell cluster expansion, $a^\mathrm{CC_1}$, to the second, $a^\mathrm{CC_2}$, also suggests a rapid convergence towards $a^\mathrm{fit}$ using both cell cluster expansions. Over the accessible $d$ range, however, the agreement does not markedly increase with dimension. Most importantly, all of these estimates support the conjecture $a \sim \mathcal{O}(1)$. Because standard simulations of higher-dimensional systems become increasingly computationally challenging, a fully controlled version of the expansion of Rudd et al.~could help extend the present analysis. The convergence of the analytical expansion of $\kappa_0$ and $\kappa_1$ offers some hope in this direction, albeit only through the direct expansion strategy. Under the recursive strategy, the $\kappa_0$ and $\kappa_1$ terms appear to diverge from the numerical results at second order, in support of Rudd et al.'s expectation that derivatives of this expansion might never converge in a truncated series~\cite{rudd_rigid_1968}. In this context, carrying the direct expansion strategy to third order might be of interest. At the moment, however, the ability to numerically evaluate integrals of order $n$ in reasonable time is capped at $nd \approx 14$~\cite{koch_most_2005}. Irrespective of any convergence concerns, the results for $\lambda_9$ stand out. 
In particular, thermal integration results for $\kappa_0$ and $a$ in $d=9$ are much larger than those of nearby dimensions, and $\kappa_1$ is of the opposite sign. These features largely track what one might expect of a crystal with soft modes. First, its (effective) cage should be elongated along soft directions, thus making $a$ larger. Second, because the free volume is elongated, its rate of increase with decreasing $x$, $-\frac{\partial^2 v_\mathrm{free}}{\partial x^2}$, should be larger than for standard caging, thus increasing $\kappa_0$. Third, the negative value of $\kappa_1$ might result from spheres in interlocking lattices being relatively less constrained and thus less likely to be in contact at high entropy points (where multiple soft modes are available). That said, unlike the crystal equation of state of other crystals, that of crystals with soft modes is expected to exhibit significant finite-size corrections, as discussed in Sec.~\ref{sec:conj}. Unfortunately, only a single system size is numerically available for this crystal~\cite{charbonneau_thermodynamic_2021}, and thus a systematic examination of these effects is not here feasible. Because the only path towards radical asphericity of the Voronoi cell---and thus significant deviations from $a \sim \mathcal{O}(1)$---is through the presence of a direction in which individual particles or subextensive collections of particles are not constrained or are only weakly constrained, a comparable analysis of lower-dimensional crystals containing soft modes, such as parallel hard cubes~\cite{swol_percolation_1987,jagla_melting_1998}, might be a more promising route to gain insight on this matter. \begin{figure}[ht] \includegraphics[width=0.9\columnwidth]{aAllEstimates.pdf} \caption{Dimensional evolution of the lattice constant $a$ evaluated using three approaches. 
Cell-cluster expansion results are reported using Eq.~\eqref{eq:relationCA} to first ($a^\mathrm{CC_1}$ in red) and second ($a^\mathrm{CC_2}$ in blue) order for both direct (dashed line) and recursive (solid line) expansions (see Appendix~\ref{sec:cellClusterAppendix}). Thermal integration results, $a^\mathrm{fit}$ (black), from Eq.~\eqref{eq:thermalIntA} and dynamical estimates $a^{\Delta_0}$ (green) from Eq.~\eqref{eq:dynamicA} evolve qualitatively similarly. All estimates of $a$ suggest that $a \sim \mathcal{O}(1)$ as $d\rightarrow\infty$. Error bars denote a 95\% confidence interval; the cell cluster calculations are accurate to double precision truncation.} \label{fig:tivscc} \end{figure} \section{Discussion and Conclusion} \label{sec:final} Under the assumptions of Sec.~\ref{sec:conj}, we summarize the three possible scenarios for high-dimensional crystallization: (scenario A) if there is no crystallization, then the optimal packings are glasses; (scenario B) if crystallization occurs but only deep in the dynamically arrested region, then the densest crystal is only slightly denser than the close-packed glass; (scenario C) if instead crystallization happens on the same scale as the dynamical arrest, then the crystal close-packing density is $\overline{\varphi}_\mathrm{cp}\sim e^{\a d}$, an exponential improvement over the Minkowski bound. Of these, scenario C relies on the constant $a$ being of order unity, while the others make no such requirement. Through the use of both cell cluster expansions and numerical simulations of crystals in $d=3$-$10$, we have obtained three independent measures of $a$ that are roughly consistent with each other. We further observe that the crystal entropy is dominated by the free volume description in all $d$. Although we observe a significant polynomial correction that does not markedly decrease with increasing dimension, it does not significantly increase in crystals with soft modes either. 
It is therefore expected that these contributions remain subdominant to the free volume description in all $d$. Three additional low-$d$ observations point to the relative likelihood of each of the three scenarios proffered. First, crystallization is thermodynamically favored at least up to $d=10$ and $\widehat{\varphi}_f \sim 1$, which is well below $\widehat{\varphi}_d$. Second, $\alpha = \frac{1}{d}\ln{\overline{\varphi}_\mathrm{cp}}$ remains finite and approaches the zone of crystallization allowed by scenario C. Third, $a$ appears to remain $\mathcal{O}(1)$. Together, these results indicate that while all scenarios are possible, scenario C is most likely to be true, scenario B less likely, and scenario A less likely still. While future numerical simulations and analysis of higher-dimensional crystals may be possible in a few additional dimensions, it seems improbable that such simulations would upend this ordering. Finally, it should be noted that even if scenario C is correct, crystallization might still be heavily kinetically suppressed. If crystallization proceeds via classical nucleation theory, in particular, then the competition between surface and volume terms in the free energy creates a barrier such that the nucleation time should scale exponentially with $d$; see, e.g., Ref.~\onlinecite[Eq.~(3.28)]{debenedetti_metastable_1996} and Ref.~\onlinecite[Eq.~(8.15)]{parisi_theory_2020}. Although alternative crystallization schemes have been suggested in deeply supercooled liquids in $d=3$~\cite{filion_crystal_2010, sanz_crystallization_2011}, the geometrical peculiarities of three-dimensional space that underlie such mechanisms appear unlikely to find an echo in any higher $d$. \begin{acknowledgments} We thank Yi Hu, Robert Hoy, and Henry Cohn for stimulating discussions. 
This work was supported by grants from the Simons Foundation (\#454937, Patrick Charbonneau; \#454955, Francesco Zamponi), the National Science Foundation (DMS-1847451, Will Perkins) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n. 723955 - GlassUniversality). The computations were carried out on the Duke Compute Cluster (DCC); the authors thank Tom Milledge for his assistance. Data relevant to this work have been archived and can be accessed at the Duke Digital Repository~\cite{data}. \end{acknowledgments} \section{Introduction} A classical problem of discrete geometry is to determine the maximum fraction of $d$-dimensional Euclidean space, $\varphi_\mathrm{cp}$, that can be covered by non-overlapping, identical spheres. Determining this densest packing of hard spheres is trivial for $d=1$ and elementary for $d=2$, but otherwise only known rigorously for $d=3$~\cite{hales_proof_2005}, $8$, and $24$~\cite{viazovska_sphere_2017,cohn_sphere_2017}. The behavior of $\varphi_\mathrm{cp}$ as $d \rightarrow \infty$ and the structure of the associated packings pose a great mathematical challenge about which relatively little is understood. The best known lower bound is $\varphi_\mathrm{cp} \ge 65963 \cdot d 2^{-d}$~\cite{venkatesh_note_2013} for sufficiently high $d$ (with an additional factor on the order of $\ln (\ln d)$ along a sparse sequence of dimensions), which matches the exponential order of the lower bound $\varphi_\mathrm{cp} \ge 2^{-d}$ trivially obtained by considering any saturated packing~\cite{conway_sphere_1993}. The best upper bound, by contrast, grows exponentially larger with $d$, as $\varphi_\mathrm{cp} \le 2^{-0.599 d}$~\cite{kabatiansky_bounds_1978,cohn_sphere_2014}. 
Almost all of the known proofs of lower bounds on $\varphi_\mathrm{cp}$ proceed by analyzing lattice packings or random lattice packings (see Ref.~\onlinecite{cohn_packing_2016} for an exposition). These proofs presuppose that lattices provide the backbone of the densest configurations of spheres, but say nothing of the nucleation and coexistence conditions that underlie the ability of a crystal based on such lattices to form and remain stable with respect to the liquid state. While Bravais lattice-based packings are provably optimal in $d = 1$, $2$, $3$, $8$, and $24$, it is far from clear that they remain so for higher $d$~\cite{cohn_packing_2016}. Hence, solely analyzing lattice packings is inadequate to fully capture $\varphi_\mathrm{cp}$. We here take a statistical physics approach and analyze $\varphi_\mathrm{cp}$ through the equilibrium properties of the hard sphere model, a uniformly random sphere packing of a given density. We conjecture three possible scenarios for the behavior of the hard sphere model in high dimensions, based on recent work in the physics literature~\cite{radin_structure_2005,koch_most_2005,skoge_packing_2006,van_meel_hard-sphere_2009,estrada_fluidsolid_2011,stevenson_ultimate_2011,charbonneau_thermodynamic_2021,wang_mean-field_2005,finken_freezing_2001,van_meel_geometrical_2009,lue_molecular_2021}: crystallization (scenario A) does not occur, or, if it does, occurs either (scenario B) much after the dynamical glass transition (at which the liquid dynamics becomes arrested~\cite{parisi_theory_2020}) or (scenario C) around that transition. Under some simple assumptions and using recent results from both physics and mathematics, we explore the consequences for $\varphi_\mathrm{cp}$ under each scenario. 
In A, we conclude that $\varphi_\mathrm{cp} \sim d \ln d \cdot 2^{-d}$; in B, we conclude that $\varphi_\mathrm{cp}$ is only slightly improved to $d^{\g+1} (\ln d)^3 \cdot 2^{-d}$ with some exponent $\g>0$; in C, we have that $\varphi_\mathrm{cp} \ge 2^{-d(1-\epsilon)}$ for some explicit $\epsilon >0$. It is worth noting that Refs.~\onlinecite{torquato_new_2006, torquato_jammed_2010} proposed a series of conjectures based on a different set of arguments from statistical physics that are consistent with scenario C. See also Refs.~\onlinecite{kallus_statistical_2013,andreanov_extreme_2016} and references therein. A set of plausibility conditions emerges for each of these scenarios, which are then checked against simulation results and a cell-cluster expansion of the densest known crystals in $d=3$-$10$, which are expected to be the most thermodynamically stable at high pressures (and have been observed to be so for all pressures at which crystals are stable in $d=3$-$6$~\cite{van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}). The inclusion of $d=10$ here is significant, as it is the lowest dimension for which a non-Bravais lattice is the basis for the densest known crystal. While these observations do not suffice to unambiguously declare which scenario is correct, they nevertheless suggest that scenario C is most likely, followed by scenario B. While scenario A remains plausible, no hint of it can be teased from low-dimensional crystallization trends. The rest of this article is organized as follows. In Section~\ref{sec:background}, we provide a series of definitions and describe the first-order liquid-crystal phase transition in hard spheres. In Section~\ref{sec:conjectureSection}, we present the aforementioned conjectures and follow through with their implications for three possible crystallization scenarios. 
In Section~\ref{sec:simulations}, we analyze low-dimensional crystals in $d=3$-$10$ using cell cluster expansions (where numerically possible) as well as simulations to further constrain the likely scenarios. Section~\ref{sec:final} concludes with a discussion of the likelihood of each of the three scenarios given the low-dimensional trends. \section{Theoretical Background} In this section, we provide a definition of the entropy of the hard sphere model and show that it is a monotonically increasing and concave function of the specific volume. We then use these properties to derive the relationship between liquid and crystal entropies through a common tangent construction. \label{sec:background} \subsection{Definitions} Consider $N$ identical $d$-dimensional hard spheres of diameter $\sigma$ in a box of volume $V$. Sphere positions are specified by a set of $d$-dimensional vectors $\underline{Y} = \{ \mathbf{y}_i \}_{i=1,\cdots,N}$, each $\mathbf{y}_i$ having components $y_{i\m}$ for $\m=1,\cdots,d$. The sphere concentration is equivalently described by the number density $\rho = N/V$, the specific volume $v = 1/\rho = V/N$, and the packing fraction $\varphi = \rho V_d (\sigma/ 2)^d$, where $V_d = \pi^{d/2}/\Gamma(1+d/2)$ is the $d$-dimensional volume of a ball of unit radius. In the following, we consider the thermodynamic limit in which $N\rightarrow\infty$ and $V\rightarrow\infty$, at constant $\varphi \in (0, \varphi_\mathrm{cp})$. 
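For reference, these geometric definitions translate directly into code. The short sketch below is purely illustrative; the fcc check assumes the standard close-packed number density $\rho\sigma^3=\sqrt{2}$, which reproduces the $D_3$ value of Table~\ref{table:packingConstants}.

```python
import math

def ball_volume(d):
    """V_d = pi^(d/2) / Gamma(1 + d/2), the volume of a d-ball of unit radius."""
    return math.pi**(d / 2) / math.gamma(1 + d / 2)

def packing_fraction(rho, sigma, d):
    """varphi = rho * V_d * (sigma/2)^d for number density rho = N/V."""
    return rho * ball_volume(d) * (sigma / 2)**d

print(ball_volume(3))                          # 4*pi/3
print(packing_fraction(math.sqrt(2), 1.0, 3))  # D_3 (fcc) close packing, ~0.7405
```

The same two helpers also convert between $\varphi$ and the scaled packing fraction $\overline{\varphi}=2^d\varphi$ used later.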
Defining $I(\underline{Y})$ as the indicator function specifying that there are no overlaps between spheres, one can introduce \begin{align} Z_N &= \frac1{N!} \int \mathrm d \underline{Y} \, I(\underline{Y}) \ , \qquad Z^{\rm id}_N = \frac{V^N}{N!} \ , \\ Z^{\rm ex}_N &= \frac1{V^N} \int \mathrm d \underline{Y} \, I(\underline{Y}) = \frac{Z_N}{Z^{\rm id}_N} \nonumber \ , \end{align} which are the configurational, ideal gas, and excess partition functions, respectively. Note that $Z^{\rm ex}_N \in [0,1]$ is also the probability that $N$ randomly placed spheres in $V$ have no overlap. Similarly, in the thermodynamic limit, the total, ideal gas, and excess entropies per particle are, respectively, \begin{align} s &=\lim_{N\rightarrow\infty} \frac1N \ln Z_N = s^{\rm id}+ s^{\rm ex} \ , \nonumber \\ \label{eq:excessS} s^{\rm id} &= -\ln(\rho\sigma^d) - d\ln (\Lambda/\sigma) + 1 \ , \\ s^{\rm ex} &=\lim_{N\rightarrow\infty} \frac1N \ln Z^{\rm ex}_N \ , \nonumber \end{align} where $\Lambda$ is the de Broglie wavelength and the sphere diameter $\sigma$ is here introduced purely for notational convenience. The thermodynamic relations for pressure $P$ and isothermal compressibility $\chi_T$ (for temperature $T=1/\beta$ with the Boltzmann constant set to unity) \begin{equation}\label{eq:thermo} \beta P = \frac{ds}{dv} \geq 0 \ , \qquad \chi_T = - \frac1V \frac{dV}{dP} = -\frac{\rho}{T} \frac{1}{\frac{d^2s}{dv^2}} \geq 0 \ , \end{equation} imply that the total entropy per particle, $s$, must be a monotonically increasing and concave function of the specific volume. \subsection{Crystallization via a first-order phase transition} In all dimensions $d$ for which the information is available, the densest (infinite-pressure) packing of hard spheres is crystalline; that is, given by a (Bravais or not) lattice packing. 
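The probabilistic interpretation of $Z^{\rm ex}_N$ invites a direct Monte Carlo check: drop $N$ spheres uniformly at random in a periodic box and count the overlap-free configurations. The sketch below (with illustrative parameters, not tied to any simulation in this work) does so for $N=2$, for which $Z^{\rm ex}_2 = 1 - V_d\sigma^d/V$ exactly whenever $\sigma < L/2$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d, L, sigma, trials = 3, 4.0, 1.0, 200_000

# N = 2 spheres dropped uniformly at random in a periodic box of side L
y = rng.uniform(0.0, L, size=(trials, 2, d))
dr = y[:, 0] - y[:, 1]
dr -= L * np.round(dr / L)                   # minimum-image convention
z_ex = np.mean(np.sum(dr**2, axis=1) >= sigma**2)

V_d = math.pi**(d / 2) / math.gamma(1 + d / 2)
exact = 1 - V_d * sigma**d / L**d            # excluded-volume result for N = 2
print(z_ex, exact)
```

For larger $N$ the same rejection count estimates $Z^{\rm ex}_N$, but the acceptance probability decays exponentially with $N$, which is precisely why $s^{\rm ex}$ is defined through its logarithm per particle.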
For $d\geq3$, at finite pressure, this densest packing gives rise to a stable crystalline phase separated from the liquid phase by a first-order transition. Such a liquid-crystal transition means that the liquid and crystal phases have distinct analytic entropy functions, $s_{\ell}$ and $s_{c}$, which are separately monotonically increasing and concave. Because Eqs.~\eqref{eq:thermo} should always be satisfied in equilibrium, the equilibrium state of the system corresponds to the Maxwell construction illustrated in Fig.~\ref{fig:sketch}. At low $P$ (high $v > v_f$), the homogeneous liquid dominates; at high $P$ (low $v < v_m$), the homogeneous crystal dominates. In the region $v_m < v < v_f$, the pressure is constant at $P_\mathrm{co}$ and the system is formed of coexisting crystalline and liquid domains. The equations determining the three unknowns $v_m$, $v_f$, and $P_\mathrm{co}$ that characterize the coexistence region can be obtained from the common tangent construction defined as \begin{equation} \begin{split} &\frac{ds_\ell}{dv}(v_f) = \frac{ds_c}{dv}(v_m) = \beta P_\mathrm{co} \ , \\ &s_\ell(v_f) - s_c(v_m) = \beta P_\mathrm{co} (v_f - v_m) \ . \end{split} \label{eq:coex} \end{equation} \begin{figure}[t] \includegraphics[width=\columnwidth]{parallelTangentIllustration.pdf} \caption{Sketch of the liquid (red) and crystal (blue) entropies as a function of the scaled specific volume $\overline{v}$, in the vicinity of the first-order fluid-crystal transition determined by common tangent construction (black dotted line). The crystal branch terminates at the densest packing density ${\overline{\varphi}_\mathrm{cp}=1/\overline{v}_\mathrm{cp}}$ (dashed line) and remains metastable beyond coexistence up to $\overline{\varphi}_s = 1/\overline{v}_s$. For $d=3$-$10$, $\overline{\varphi}_s>\overline{\varphi}_f$~\cite{charbonneau_thermodynamic_2021}, but no assumption is here made about their ordering in higher $d$. 
The liquid branch extends from zero density, and its metastable extension beyond coexistence terminates at the Kauzmann density ${\overline{\varphi}_{k}=1/\overline{v}_k}$, whereupon the liquid turns into an ideal glass (purple). The glass phase then terminates at the glass close packing density ${\overline{\varphi}_{\rm gcp}=1/\overline{v}_{\rm gcp}}$~\cite{parisi_theory_2020}.} \label{fig:sketch} \end{figure} \section{Conjectures and Scenarios} \label{sec:conjectureSection} In this section, we describe a set of conjectures that constrain the relationships in Eq.~\eqref{eq:coex} and work through their consequences, hence giving rise to three crystallization scenarios. Note that in considering high-$d$ systems, it is convenient to define the scaled packing fraction $\overline{\varphi} =2^d \varphi = \rho V_d \sigma^d$, specific volume $\overline{v} = 1/\overline{\varphi} = v/(V_d \s^d)$, and pressure $\overline{P} = \beta P V_d \sigma^d$. \subsection{Conjectures on the high-\texorpdfstring{$d$}{d} phase behavior} \label{sec:conj} We first make a series of conjectures: \begin{enumerate} \item The high-$d$ equilibrium phase diagram is characterized by a low-density liquid and a high-density crystal, and no other equilibrium phase intervenes. If there is a high-density crystal phase, then it is separated from the liquid phase by a first-order phase transition, as in Fig.~\ref{fig:sketch}. We have no support for this conjecture, other than the empirical observation that it holds in $d=3$-$10$~\cite{skoge_packing_2006,van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}. 
\item In the limit of high $d$, the excess entropy of the liquid phase is given by truncating the virial expansion at the lowest order, i.e., \begin{equation} s^{\rm ex}_\ell = - \frac{\overline{\varphi}}{2} = -\frac{1}{2 \overline{v}} \ , \end{equation} which implies that \begin{equation} \label{eq:Sliq} s_\ell = \ln\overline{v} -\frac{1}{2 \overline{v}} + \ln V_d - d \ln (\Lambda/\sigma) + 1 \ . \end{equation} Equation~\eqref{eq:Sliq} holds up to the so-called {\it Kauzmann density} $\overline{\varphi}_k = d \ln d + o(d\ln d)$, at which the liquid state condenses into an ideal glass phase. The ideal glass entropy has a different and less explicit expression---see Ref.~\onlinecite[Eq.~(7.43)]{parisi_theory_2020} and the surrounding discussion---that is continuous at $\overline{\varphi}_k$ and quickly diverges to $-\infty$ upon approaching the {\it glass close packing} density ${\overline{\varphi}_{\rm gcp} = d \ln d + o(d\ln d)}$, as illustrated in Fig.~\ref{fig:sketch}. (The difference between $\overline{\varphi}_k$ and $\overline{\varphi}_{\rm gcp}$ is at the level of subleading corrections.) In addition, the liquid dynamics become arrested for ${\overline{\varphi} > \overline{\varphi}_d \approx 4.8 d}$. This conjecture is supported by a large body of physics literature~\cite{frisch_classical_1985, wyler_hard-sphere_1987, frisch_high_1999, parisi_toy_2000, parisi_mean-field_2010, maimbourg_solution_2016, charbonneau_glass_2017, parisi_theory_2020, charbonneau_dimensional_2021}. \item The crystal phase is accurately described by the free-volume entropy. In other words, throughout the crystal phase, particles simply rattle in a cage formed by their neighbors. Consider the close packed crystal at density $\overline{\varphi}_\mathrm{cp}$, and reduce the diameter of all particles from $\sigma$ to $\sigma(1-\varepsilon)$. 
The density is correspondingly reduced to $\overline{\varphi} = \overline{\varphi}_\mathrm{cp} (1-\varepsilon)^d$, and each particle gains the possibility of rattling in a volume of linear size $a \varepsilon\sigma$ without overlapping its neighbors, $a$ being an unknown proportionality constant close to 1. Moreover, all particles can be permuted, so each particle can access all the $N$ possible cages. Therefore, using $x=\overline{v}_\mathrm{cp}/\overline{v}=\rho/\rho_\mathrm{cp}$, one can estimate \begin{align} \label{eq:cryst} Z^{\rm ex}_N \approx& \left[ \frac{N V_d (a \varepsilon\sigma)^d }{V} \right]^N \approx \left[ \overline{\varphi}_\mathrm{cp} (a \varepsilon)^d \right]^N \Rightarrow\nonumber\\ s_c \approx& -\ln x + d \ln a + d \ln ( 1 - x^{1/d}) \\ &+\ln V_d - d \ln (\Lambda/\sigma) + 1 \ . \nonumber \end{align} We assume that this expression remains valid for all $\overline{v} \in [ \overline{v}_\mathrm{cp}, \overline{v}_m ]$. We have no support for this conjecture, except the empirical observation that a similar expression provides a good fit to the crystal entropy in $d=3$-$10$~\cite{van_meel_hard-sphere_2009, charbonneau_thermodynamic_2021, lue_molecular_2021}. The free volume entropy gives a rigorous lower bound on $s^{\rm ex}$, and, if we assume the close-packed crystal to be a lattice packing, then we can allow particles to rattle in regions defined by scaling down the Voronoi cells around each center. Special consideration should be given to lattice packings, such as $\lambda_9$, that contain a set of internal soft (or zero) modes. Along such modes, the packing is allowed to shift freely without generating any overlap. Because the number of such modes is necessarily subextensive, however, the contribution of these modes to the entropy per particle must vanish in the thermodynamic limit (by analogy to the contribution of Goldstone modes in the low-temperature phase of a Heisenberg ferromagnet \cite{patashinskii_fluctuation_1979}). 
\item The liquid remains the equilibrium phase at least down to a specific volume $\overline{v} = 1/[d \ln(2/\sqrt{3})]$, i.e., $\overline{v}_\ell < 1/[d \ln(2/\sqrt{3})] \approx 6.952/d$ or $\overline{\varphi}_\ell > d \ln(2/\sqrt{3}) \approx 0.144 d$. This conjecture is motivated by the results of Ref.~\onlinecite{jenssen_hard_2019}. \end{enumerate} \subsection{Crystallization in high \texorpdfstring{$d$}{d}} From the conjectures of Sec.~\ref{sec:conj} we can derive bounds on high-$d$ crystallization, which are discussed below. \subsubsection{Coexistence equations} First, by rewriting Eqs.~\eqref{eq:coex} in terms of scaled variables and using Eq.~\eqref{eq:Sliq} for $s_\ell$ and Eq.~\eqref{eq:cryst} for $s_c$, we obtain \begin{equation} \begin{split} &\frac{1}{\overline{v}_f} + \frac{1}{2 \overline{v}_f^2} = \frac1{\overline{v}_m} \frac{1}{ 1 - \left( \overline{v}_\mathrm{cp}/\overline{v}_m \right)^{1/d} } = \overline{P}_\mathrm{co} \ , \\ &\ln(\overline{v}_f) - \frac{1}{2 \overline{v}_f} - \ln( \overline{v}_m / \overline{v}_\mathrm{cp} ) - d \ln a \\ & - d \ln \left[ 1 - \left( \frac{\overline{v}_\mathrm{cp}}{\overline{v}_m} \right)^{1/d} \right] = \overline{P}_\mathrm{co} (\overline{v}_f - \overline{v}_m) \ . \end{split} \end{equation} It is then convenient to rewrite these equations in terms of density $\overline{\varphi}$: \begin{equation}\label{eq:coexolf} \begin{split} &\overline{\varphi}_f + \frac{1}{2} \overline{\varphi}_f^2 = \frac{\overline{\varphi}_m}{ 1 - \left( \overline{\varphi}_{m}/\overline{\varphi}_\mathrm{cp} \right)^{1/d} } = \overline{P}_\mathrm{co} \ , \\ &-\ln(\overline{\varphi}_f) - \frac{1}{2} \overline{\varphi}_f - \ln( \overline{\varphi}_\mathrm{cp} / \overline{\varphi}_{m} ) - d \ln a \\ & - d \ln \left[ 1 - \left( \frac{\overline{\varphi}_{m}}{\overline{\varphi}_\mathrm{cp}} \right)^{1/d} \right] = \overline{P}_\mathrm{co} \left( \frac{1}{\overline{\varphi}_f} -\frac{1}{\overline{\varphi}_m} \right) \ .
\end{split} \end{equation} Given $\overline{\varphi}_\mathrm{cp}$, these equations can easily be solved numerically to yield the coexistence parameters. This strategy was employed by Finken et al.~\cite{finken_freezing_2001} (albeit possibly with an erroneous common tangent construction~\cite{van_meel_hard-sphere_2009}) using the close-packing densities of laminated lattices up to $d\approx 50$. Here we take a different approach. We use our knowledge of $\overline{\varphi}_f$ to obtain bounds on $\overline{\varphi}_\mathrm{cp}$. \subsubsection{Asymptotic analysis} According to Sec.~\ref{sec:conj}, one has $\overline{\varphi}_f \in [0.144 d, d \ln d]$. In a stricter setting, we could impose that crystallization happens before the liquid is dynamically arrested, which would restrict the upper bound to $4.8 d$. We thus introduce $\widehat{\varphi}_f = \overline{\varphi}_f/d$, which is of $\mathcal{O}(1)$ or at most $\mathcal{O}(\ln d)$. For $d\rightarrow\infty$, we have $\ln(\overline{\varphi}_f) \ll \frac{1}{2} \overline{\varphi}_f$ and $\overline{\varphi}_f \ll \frac{1}{2} \overline{\varphi}_f^2$, and also $\overline{P}_\mathrm{co} \sim \frac{1}{2} \overline{\varphi}_f^2$, which thus simplifies Eqs.~\eqref{eq:coexolf} to: \begin{equation} \begin{split} & \frac{d^2}2 \widehat{\varphi}_f^2 = \frac{\overline{\varphi}_m}{ 1 - \left( \overline{\varphi}_{m}/\overline{\varphi}_\mathrm{cp} \right)^{1/d} } \ , \\ & - d \widehat{\varphi}_f - \ln( \overline{\varphi}_\mathrm{cp} / \overline{\varphi}_{m} ) \\ & - d \ln a - d \ln \left[ 1 - \left( \frac{\overline{\varphi}_{m}}{\overline{\varphi}_\mathrm{cp}} \right)^{1/d} \right] = - \frac{d^2}{2} \frac{ \widehat{\varphi}_f^2}{ \overline{\varphi}_m} \ . \end{split} \label{eqn:asymptotics} \end{equation} Two possible asymptotic solutions to these equations exist, depending on the scaling of $\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp}$.
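For any given $d$ and $\overline{\varphi}_\mathrm{cp}$, the full coexistence conditions, Eqs.~\eqref{eq:coexolf}, can also be solved directly. The following Python sketch is our own illustration (not the code used in this work): it sets $a=1$ by default, eliminates $\overline{\varphi}_f$ through the equal-pressure condition, and solves the equal-chemical-potential condition for $\overline{\varphi}_m$ by bracketing. The bracket is a heuristic tuned for moderate $d$, because extrapolating both entropy expressions over all densities also generates spurious, unphysical crossings.

```python
import numpy as np
from scipy.optimize import brentq

def coexistence(d, phibar_cp, a=1.0):
    """Solve the scaled coexistence conditions (Eqs. coexolf) for the
    freezing and melting densities (phibar_f, phibar_m) and the reduced
    coexistence pressure, given the scaled close-packing density phibar_cp.
    Assumes the truncated-virial liquid and free-volume crystal entropies."""

    def gap(phi_m):
        # 1 - (phibar_m / phibar_cp)^(1/d)
        return 1.0 - (phi_m / phibar_cp) ** (1.0 / d)

    def pressure(phi_m):
        # reduced coexistence pressure from the crystal (free-volume) branch
        return phi_m / gap(phi_m)

    def liquid_phi(P):
        # invert the liquid equation of state P = phi_f + phi_f^2 / 2
        return np.sqrt(1.0 + 2.0 * P) - 1.0

    def mu_mismatch(phi_m):
        # equal-chemical-potential condition, with phi_f eliminated
        # through the equal-pressure condition
        P = pressure(phi_m)
        phi_f = liquid_phi(P)
        lhs = (-np.log(phi_f) - 0.5 * phi_f - np.log(phibar_cp / phi_m)
               - d * np.log(a) - d * np.log(gap(phi_m)))
        return lhs - P * (1.0 / phi_f - 1.0 / phi_m)

    # Heuristic bracket for the physical (lowest-pressure) root; both
    # entropy expressions, extended too far, admit spurious crossings
    # outside this range.
    phi_m = brentq(mu_mismatch, d, 0.8 * phibar_cp)
    P = pressure(phi_m)
    return liquid_phi(P), phi_m, P

# Example: d = 10 with the P_10c close-packing fraction 0.0996
phi_f, phi_m, P = coexistence(10, 2**10 * 0.0996)
```

For $d=8$ and $d=10$, with the close-packing fractions quoted below, the root found this way falls in the physically ordered regime $\overline{\varphi}_f < \overline{\varphi}_m < \overline{\varphi}_\mathrm{cp}$; in $d=3$, by contrast, the truncated virial entropy is too crude for the solution to be meaningful.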
\subsection{Crystallization scenarios} \label{sec:3scenarios} Based on the above conjectures and asymptotic analysis, three distinct crystallization scenarios can be identified. \subsubsection{Scenario A} In this scenario, crystallization does not proceed and thus the liquid and the glass phases are the only possible equilibrium phases. The close packing density then equals the glass close packing density, and hence $\varphi_\mathrm{cp} = 2^{-d} \cdot \overline{\varphi}_{\rm gcp} \sim 2^{-d} d \ln d$. This scenario happens if the close packing density of the densest crystal remains below $\varphi_{\rm gcp}$. \subsubsection{Scenario B} In this scenario, we suppose that there is a crystalline phase and ${\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp} \sim A/d^\g}$ (with $\g>0$ and $A>0$, or $\g=0$ and $0<A<1$), such that ${1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} \sim ( \g \ln d - \ln A)/d}$. Note that in this scenario $1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} \allowbreak \ll 1$ and the use of the free volume equation of state for the crystal is well justified. Defining $\widehat{\varphi}_m = \overline{\varphi}_m/d$ and neglecting subdominant terms, Eqs.~\eqref{eqn:asymptotics} become \begin{equation} \begin{split} & \frac{1}{2} \widehat{\varphi}_f^2 = \frac{\widehat{\varphi}_m }{ \g \ln d - \ln A} \ , \\ & - \widehat{\varphi}_f - \ln a - \ln(\g \ln d - \ln A) + \ln d = - \frac{1}{2} \frac{ \widehat{\varphi}_f^2}{ \widehat{\varphi}_m} \ . \end{split} \end{equation} The solution is \begin{equation} \label{eq:Bris} \begin{split} \overline{\varphi}_f &\sim d \ln d \ , \\ \overline{\varphi}_m &\sim d \frac{(\ln d)^2}{2} ( \g \ln d - \ln A) \ , \\ \overline{\varphi}_\mathrm{cp} &\sim \frac{d^{\g+1}}{A} \frac{(\ln d)^2}{2} ( \g \ln d - \ln A) \ . 
\end{split} \end{equation} Note that one should check the subleading corrections to $\overline{\varphi}_f$ to make sure that $\overline{\varphi}_f \leq \overline{\varphi}_k$, which is a strict requirement for the consistency of our approach. It is also somewhat unpleasant that crystallization then takes place much beyond the dynamical arrest of the liquid, i.e., $\overline{\varphi}_f \gg \overline{\varphi}_d$. In this scenario, the close-packed crystal would be only slightly denser than the best amorphous packing, and its exponential scaling would be the same as that of the Minkowski bound. Crystallization would then be extremely unlikely, because the liquid would become dynamically arrested before any sign of crystallization could emerge. Note that the value of $a$, provided it remains finite for $d\rightarrow\infty$, here plays no role. \subsubsection{Scenario C} In this scenario, we suppose there is a crystalline phase, but by contrast to scenario B, here $\overline{\varphi}_m / \overline{\varphi}_\mathrm{cp} \sim e^{-\a d}$ with constant $\alpha > 0$. Hence, $1-\big(\overline{\varphi}_m/\overline{\varphi}_\mathrm{cp}\big)^{1/d} = 1-e^{-\a}$ remains finite, and the use of the free volume equation of state for the crystal is less justified for large $\a$. Then, the first equation gives $\overline{\varphi}_m = (d^2/2) \widehat{\varphi}_f^2 (1-e^{-\a})$. Plugging this expression into the second Eq.~\eqref{eqn:asymptotics} and taking the leading order, we get $\widehat{\varphi}_f = -\a -\ln(1-e^{-\a}) - \ln a$. The final result is then \begin{equation} \label{eq:finC} \begin{split} & \overline{\varphi}_f \sim d [ -\a -\ln(1-e^{-\a}) - \ln a ] \ , \\ & \overline{\varphi}_m \sim \frac{1}{2} \overline{\varphi}_f^2 (1-e^{-\a}) \ , \\ &\overline{\varphi}_\mathrm{cp} \sim e^{\a d} \overline{\varphi}_m \ .
\end{split} \end{equation} In this scenario, the beginning of the coexistence region is $\overline{\varphi}_f \propto d$, which is a natural scaling for the liquid state, while the end of the coexistence region is $\overline{\varphi}_m \propto d^2$ and the crystal close packing is $\overline{\varphi}_\mathrm{cp} \propto e^{\a d}$. \begin{figure}[t] \includegraphics[width=\columnwidth]{KLPhaseDiagram.pdf} \caption{Sketch of $\widehat{\varphi}_f$ as a function of $\a=\frac{1}{d}\ln\overline{\varphi}_\mathrm{cp}$ in scenario C, assuming that $\ln a=0$. All areas to the right of and below the red box are forbidden by the KL bound when ${d\rightarrow\infty}$. Simulation results for the freezing density in $d=3$-$10$ (magenta) trend towards the asymptotically allowed region as $d$ increases.} \label{fig:bound} \end{figure} The relation between $\widehat{\varphi}_f =\overline{\varphi}_f/d$ and $\alpha=\frac{1}{d}\ln\overline{\varphi}_\mathrm{cp}$ (at fixed $a$), given in Eq.~\eqref{eq:finC}, is illustrated in Fig.~\ref{fig:bound} (for $a=1$). It is a decreasing function, and it can be inverted to give \begin{equation} \alpha = \ln(1+e^{-\widehat{\varphi}_f-\ln a}) \ . \end{equation} Hence, upper bounds on $\alpha$ can be turned into lower bounds for $\widehat{\varphi}_f$, and vice versa. Let us consider upper bounds on $\a$ first. The Kabatiansky-Levenshtein (KL) upper bound on packing~\cite{kabatiansky_bounds_1978} requires that $\a \leq \allowbreak \ln(2) (1-0.5990) \allowbreak = 0.278$, which then implies \begin{equation}\label{eq:wfflb} \widehat{\varphi}_f > 1.138 -\ln a \ . \end{equation} The fourth conjecture in Sec.~\ref{sec:conj} implies a bound $\widehat{\varphi}_f \geq 0.144$; as long as $\ln a \leq 0.994$, however, this bound is weaker than the KL one. We then consider lower bounds on $\a$. If we require that crystallization happens before dynamical arrest, i.e.
$\widehat{\varphi}_f\leq 4.8$, then we obtain a lower bound on $\alpha$ in terms of~$a$, \begin{equation} \ln\big(1+e^{-4.8-\ln a}\big) \le \alpha \ . \label{eqn:alphaRange} \end{equation} Note that crystallization might well take place after dynamical arrest, as is the case in scenario B discussed above; if $\ln a < -3.662$, then this is necessarily the case, due to Eq.~\eqref{eq:wfflb}. Unfortunately, $a$ is unknown, but if the third conjecture of Sec.~\ref{sec:conj} is correct, then it should be close to unity, which is what our finite-$d$ results suggest (see Sec.~\ref{sec:simulations}). Because the dependence of the above bounds on $a$ is logarithmic, relatively small deviations from unity further do not much affect the results. For illustration, we thus assume $a \rightarrow 1$ as $d \rightarrow \infty$ in Fig.~\ref{fig:bound}. With this choice, we obtain $\widehat{\varphi}_f = -\a -\ln(1-e^{-\a})$ (blue line) and \begin{equation} \ln(1+e^{-4.8}) =0.0082 \leq \a \leq 0.278 \ . \end{equation} The true value of $a$ in the high-dimensional limit simply sets the ordinate offset of the blue curve in Fig.~\ref{fig:bound} and, provided it is not too far from unity, only slightly shifts the lower bound. \subsubsection{Summary of the three scenarios} From the analysis so far, we conclude that under the assumptions in Sec.~\ref{sec:conj}, three possible scenarios arise: \begin{itemize} \item[A.] If there is no crystallization, then $\varphi_\mathrm{cp} = 2^{-d} \overline{\varphi}_{\rm gcp} \sim d \ln d \cdot 2^{-d}$, and optimal packings are glasses. \item[B.]
If the close packing density of the crystal is not exponentially larger than the melting density, then crystallization happens deep in the dynamically arrested region ($\overline{\varphi}_d \ll \overline{\varphi}_f < \overline{\varphi}_k \sim d \ln d$), and we obtain the results in Eq.~\eqref{eq:Bris}, in particular with $\overline{\varphi}_\mathrm{cp} \sim \frac{d^{\g+1}}{A} \frac{(\ln d)^2}{2} ( \g \ln d - \ln A)$, which is not exponential in $d$ and only slightly larger than $\overline{\varphi}_{\rm gcp}$. \item[C.] If instead crystallization happens on the same scale as the dynamical arrest ($\overline{\varphi}_f\propto d$), then the crystal close packing should be $\overline{\varphi}_\mathrm{cp}\sim e^{\a d}$. Quantitative bounds then depend weakly (logarithmically) on $a$; we assume $a=1$ for simplicity, which gives an upper bound $\a < 0.278$ from the KL bound and also implies $\overline{\varphi}_f > 1.138 d$. The additional requirement that crystallization happens {\it before} dynamical arrest ($\overline{\varphi}_f < \overline{\varphi}_d\sim 4.8d$) gives a lower bound $\a>0.0082$ on the close packing density (assuming $\ln a \approx 0$), which improves exponentially over the Minkowski bound. \end{itemize} \section{Insights from low-\texorpdfstring{$d$}{d} crystals} \label{sec:simulations} To obtain insights into which of the above three scenarios is most likely, we examine $\overline{\varphi}_f$, $\overline{\varphi}_m$, and $a$ for each of the densest crystals in $d=3$-$10$, which are $D_d$ checkerboard lattices in $d=3$-$5$, $E_6$ in $d=6$, $E_7$ in $d=7$, the $E_8$ root lattice in $d=8$, $\lambda_9$ in $d=9$, and $P_{10c}$ in $d=10$~\cite{conway_sphere_1993}. Because the relative distance between $\overline{\varphi}_m$ and $\overline{\varphi}_\mathrm{cp}$ grows with $d$, it is possible (through corrections to the free-volume expressions considered here) that a lower-density crystal may be most stable at intermediate pressures for certain $d$.
We here consider only the densest crystals, as all other crystal forms would in any case transition to the densest at high pressure given enough time. (Note that low-dimensional studies in $d=2$-$6$ have found no such discrepancy~\cite{van_meel_hard-sphere_2009, lue_molecular_2021}, and that if one were to exist, the densest crystal would nevertheless offer the strongest bound on the stability of the liquid, and thus the scaling analysis would not be impacted.) In this section, we consider three distinct estimates of the constant $a$ for these crystals: (1) a high-$d$ generalization of the Rudd--Stillinger cell-cluster expansion~\cite{rudd_rigid_1968} for nearly perfect crystals; (2) the scaling of the dynamical cage size near close packing; (3) thermal integration of the crystal equation of state from a reference crystal whose absolute entropy is determined by the Frenkel-Ladd scheme~\cite{frenkel_new_1984}. \begin{table*}[htb]\centering \begin{tabular}{c | c |c | c | c | c | c | c} \hline \hline crystal & $\varphi_\mathrm{cp}$ & $\kappa_0^\mathrm{CC_1}$ & $\kappa_0^\mathrm{CC_2}$ (Dir) & $\kappa_0^\mathrm{fit}$ & $\kappa_1^\mathrm{CC_1}$ & $\kappa_1^\mathrm{CC_2}$ (Dir) & $\kappa_1^\mathrm{fit}$\\ \hline $D_3$ & 0.7405 & 0.125 & 0.3136 & 0.511(18) & 0.6115 & 2.2108 & 3.8(4)\\ $D_4$ & 0.6169 & 0.15 & 0.3403 & 0.491(18) & 0.76 & 2.4845 & 3.3(3)\\ $D_5$ & 0.4653 & 0.2 & 0.4299 & 0.555(12)& 0.9205 & 1.5717 & 3.2(2)\\ $E_6$ & 0.3729 & 0.4286 & -- & 0.54(2) & 0.1228 & -- & 2.8(3)\\ $E_7$ & 0.2953 & 0.4554 & -- & 0.48(3) & -- & -- & 3.2(4)\\ $E_8$ & 0.2537 & 0.4336 & -- & 0.41(3)& -- & -- & 3.1(5)\\ $\lambda_9$ & 0.1458 & 0.4370 & -- & 1.2(2) & -- & -- & -3(2)\\ $P_{10c}$ & 0.0996 & -- & -- & 0.62(5) & -- & -- & 3.1(7)\\ \hline \hline \end{tabular} \caption{\textbf{Constants used and derived for each crystal.} Packing fraction at close packing $\varphi_\mathrm{cp}$ taken from Ref.~\onlinecite{conway_sphere_1993}. 
Cell cluster equation of state results are given to both first ($\kappa_0^\mathrm{CC_1}$ and $\kappa_1^\mathrm{CC_1}$) and second ($\kappa_0^\mathrm{CC_2}$ and $\kappa_1^\mathrm{CC_2}$) order, and are compared with the crystal equation of state obtained from numerical simulations, $\kappa_0^\mathrm{fit}$ and $\kappa_1^\mathrm{fit}$. Cell cluster results are exact to machine precision, but rounded to the fourth decimal place. Otherwise, error bars represent $95\%$ confidence intervals.} \label{table:packingConstants} \end{table*} \subsection{Cell-cluster expansion} \label{sec:cellClusterMain} Rudd and Stillinger proposed to expand the entropy of a high-pressure crystal of hard particles~\cite{rudd_rigid_1968} by ordering terms as \begin{multline} s_c = \lim_{x\rightarrow 1} \bigg[-d\ln(\Lambda/\sigma) + d\ln(1 - x^{1/d}) - \ln x - C \\ - D(1-x^{1/d}) - E(1-x^{1/d})^2 + \mathcal{O}(1-x^{1/d})^3 \bigg] \ , \label{eqn:rudd} \end{multline} where $x=\varphi/\varphi_\mathrm{cp}$. In essence, this scheme proposes a polynomial correction in $1-x^{1/d}$ to the free volume expansion of Eq.~\eqref{eq:cryst}. The coefficients $C$, $D$, and $E$, which depend on crystal symmetry and dimension, can further be expanded in (infinite) series of cell clusters (see Appendix~\ref{sec:cellClusterAppendix}). We here present two such expansions, denoted recursive (Rec) and direct (Dir). Although these series are neither unique nor proven to converge in any $d$, they nevertheless provide a constructive analytical framework. By (admittedly loose) physical analogy with the virial expansion for the liquid, one might even expect their convergence rate to improve as $d\rightarrow\infty$.
Using Eqs.~\eqref{eq:thermo} and~\eqref{eqn:rudd}, the reduced pressure (or compressibility) can then be expressed as \begin{align} p &= \frac{\beta P}{\rho} =-x\left(\frac{\partial s_c}{\partial x}\right)_{\beta}\nonumber\\ &= \frac{1}{1-x^{1/d}} + \kappa_0 + \kappa_1(1-x^{1/d}) + \mathcal{O}(1-x^{1/d})^2, \label{eqn:compressDeriv} \end{align} where the first term $1/(1-x^{1/d})$ is the free volume equation of state~\cite{kirkwood_critique_1950, kamien_entropic_2007}, and its constant and linear offset corrections are, respectively, \begin{align} \kappa_0 &= - \frac{D}{d} \ ,\label{eq:relationK0} \\ \kappa_1 &= \frac{D-2E}{d} \ .\label{eq:relationK1} \end{align} Comparing Eqs.~\eqref{eq:cryst} and \eqref{eqn:rudd} further identifies \begin{equation} d \ln a = - C - 1 -\ln V_d \ . \label{eq:relationCA} \end{equation} Values of $\kappa_0$ and $\kappa_1$ from the cell cluster expansions are presented in Table~\ref{table:packingConstants}. The standard derivation of the free volume---and $s_c$ by extension---assumes that upon decompressing a close-packed crystal the available free volume, $v_\mathrm{free}$, is that of the Voronoi cell (see Fig.~\ref{fig:curvedFV}). However, the true free volume is larger than this approximation, hence $C > 0$ for all lattices. Because its boundary is concave, i.e., $\frac{\partial^2 v_\mathrm{free}}{\partial x^2} < 0$, we also have that $\kappa_0 > 0$ for all lattices. No similar constraint, however, obviously fixes the sign of $\kappa_1$. (See Appendix \ref{sec:freeVolumeExpansion} for a fuller presentation.) \begin{figure}[htb] \includegraphics[width=0.5\linewidth]{curvedFreeVolume.pdf} \caption{Free volume schematic: each sphere (black circle) excludes a spherical volume of radius $\sigma$ (dashed lines) around its center, which the center of another sphere cannot occupy. The space thus bounded (blue dashed lines) is the free volume, $v_\mathrm{free}$ (blue), available to the center sphere.
For $x\to1$, the free volume boundary is approximately self-similar to that of the Voronoi cell (red), but upon decompression the curvature of the free volume boundary grows more pronounced. This concavity implies $\frac{\partial^2 v_\mathrm{free}}{\partial x^2}<0$, and thus $\kappa_0 > 0$.} \label{fig:curvedFV} \end{figure} \begin{figure}[ht] \includegraphics[width=0.9\linewidth]{bothDelta.pdf} \caption{\textbf{a)} Long-time MSD plateau for $d=3$-$10$, offset by a multiplicative factor of $2^d$ for visual clarity. \textbf{b)} From the scaling form in Eq.~\eqref{eq:cagescaling} (solid lines), the prefactor $\Delta_0$ is extracted.} \label{fig:deltaScaling} \end{figure} \subsection{Dynamical cage size} The cage size determined from the long-time limit of the mean squared displacement (MSD), $\langle r^2(t) \rangle$, can be used to estimate $a$ under simple assumptions. In order to compute the MSD, perfect crystals are first prepared by trivial planting. Equilibrium configurations are then sampled using the same Metropolis Monte Carlo scheme as in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. A straightforward MSD computation is, however, inappropriate for $\lambda_9$, given that its nine internal soft modes permit unbounded motion along certain directions. For this crystal, the relevant MSD therefore excludes displacements along these soft dimensions, \begin{equation} \langle r^2(t) \rangle = \frac{1}{N} \sum_{i,\vartheta} [r^\vartheta_i(t) - r^\vartheta_i(0) - \Xi^\vartheta h^\vartheta_i - r^\vartheta_\mathrm{com}(t)]^2, \end{equation} where $r^\vartheta_\mathrm{com}(t)$ denotes the center of mass of sublattice $\vartheta$ and $\{2\Xi^\vartheta\}$ denotes the (global) displacement vector along each mode. (The factor of $2$ is included for symmetrization.) Each of the nine sublattices contains half of all particles moving either in the positive or negative direction away from the center of mass of the sublattice.
We denote the participation of each particle in each sublattice by the tensor elements $h_i^\vartheta=\pm 1$. The term $\Xi^\vartheta h_i^\vartheta$ then encodes the distance traveled by each particle from the center of mass of each sublattice during collective motions. In all cases, the MSD is fitted to a stretched exponential \begin{equation} \langle r^2(t) \rangle = \Delta\big(1-e^{-(t/\tau_\beta)^{\gamma}}\big) \label{eqn:tauDef} \end{equation} with relaxation time $\tau_\beta$ and stretching exponent $\gamma < 1$, for time $t$ given in number of Monte Carlo sweeps, as it relaxes to its plateau height $\Delta$. For the range of $d$, $N$, and $x$ considered, $\gamma$ typically varies between $0.4$ and $1$, and systematically increases with $d$ (Appendix~\ref{sec:equilibrationConstants}, Fig.~\ref{fig:equilibration}). The time constant depends only weakly on dimension and $x$ (except for $d=3$ near coexistence), and $\tau_\beta$ is $\mathcal{O}(10)$. A system is deemed equilibrated after $t\geq 10\tau_\beta$, which can easily be achieved using standard computational resources. Over $10,000$ independent snapshots for each $x$ and $d$ can thus be efficiently obtained. As $x$ approaches unity, the typical cage size $\Delta$ scales as (Fig.~\ref{fig:deltaScaling}) \begin{equation} \label{eq:cagescaling} \Delta = \Delta_0(1-x^{1/d})^2, \end{equation} with prefactor $\Delta_0$. Using the partition function in Eq.~\eqref{eq:cryst} to compute the MSD of two random points in a $d$-dimensional sphere of radius $a$ gives \begin{equation} \Delta_0=2a^2\frac{d(d^2+2d-1)}{(1+d)^2(2+d)}. \label{eq:dynamicA} \end{equation} Within this structural assumption, an estimate for $a$ can thus be extracted from the scaling of the MSD plateau height. \begin{figure*}[htb] \includegraphics[width=\linewidth]{pressureConstantCompareOffsetUpdated.pdf} \caption{\textbf{a)} Correction to the free volume equation of state in $d=3$-$10$.
The constant and linear terms, $\kappa^\mathrm{fit}_0$ and $\kappa^\mathrm{fit}_1$, are estimated from a simple linear fit (lines) of the simulation results (points) over $1-x^{1/d} \in (0, 0.1)$. The growth of the error bars, which denote 95\% confidence intervals, at high densities reflects the numerical difficulty of comparing two diverging quantities. Note that for visual clarity each curve is offset by $d/5$. \textbf{b)} Comparison of $\kappa^\mathrm{fit}_0$ (black) from (a) with the values from the first-order cell cluster expansion, $\kappa^\mathrm{CC_1}_0$ (red), and the second-order one, $\kappa^\mathrm{CC_2}_0$ (blue), using both expansions. As $d$ increases, the direct cluster expansion results appear to converge slowly towards the pressure calculation at first order, while the recursive expansion appears to diverge. \textbf{c)} Comparison of $\kappa^\mathrm{fit}_1$ from (a) with $\kappa_1^\mathrm{CC_1}$ and $\kappa_1^\mathrm{CC_2}$ using both cell cluster expansions. Here, the truncated recursive expansion oscillates as expected (see Appendix~\ref{sec:cellClusterAppendix}). In (b) and (c), lines are only provided as guides for the eye.} \label{fig:speedyPDiff} \end{figure*} \subsection{Thermal integration} An assumption-free estimate of $a$ can also be obtained by thermally integrating the crystal equation of state from a state of known entropy. Such high-accuracy entropies are available for $d=3$ up to close packing~\cite{speedy_pressure_1998}, but comparable results are limited to the liquid-crystal coexistence regime for $d=4$-$10$~\cite{van_meel_hard-sphere_2009,charbonneau_thermodynamic_2021}. We here briefly describe the integration scheme, emphasizing how it differs from that reported in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. The reduced pressure is first computed using the pair correlation at contact, $g(\sigma^+)$, \begin{equation} p = 1 + \frac{\overline{\varphi}}{2}g(\sigma^+) \ .
\end{equation} These numerical results are then fitted using Eq.~\eqref{eqn:compressDeriv} up to $\mathcal{O}(1-x^{1/d})$, which provides numerical estimates of $\kappa_0$ and $\kappa_1$. The fitted crystal equation of state captures simulation results well for all $d$ in this regime (Fig.~\ref{fig:speedyPDiff}), thus validating the form proposed by Rudd et al.~for expanding the free energy around close packing. Higher-order corrections would, however, be needed to describe pressures down to the fluid-crystal coexistence regime~\cite{charbonneau_thermodynamic_2021}. Absolute entropies at a reference density $x_0$ are computed by performing a Frenkel-Ladd integration at that state~\cite{frenkel_new_1984,polson_finite-size_2000} in all but $d=9$, where the periodic potential defined in Ref.~\onlinecite{charbonneau_thermodynamic_2021} is used instead. In order to optimize numerical accuracy, the reference state is taken near close packing, i.e., $x_0\approx 1$, instead of near melting as in Ref.~\onlinecite{charbonneau_thermodynamic_2021}. However, a larger integration cutoff is then needed to prevent spheres from overlapping in the Einstein crystal limit of the Frenkel-Ladd scheme. These constraints are balanced by taking $x_0$ within the regime of validity of the fitted equation of state, but no denser. Because the reference entropy (unlike the crystal equation of state) exhibits a significant size dependence, the thermodynamic $s_c(x_0)$ is further estimated using a standard finite-size scaling analysis (Appendix~\ref{sec:finSizeRef}). A numerical estimate of $a$ can then be obtained via Eqs.~\eqref{eqn:rudd} and \eqref{eq:relationK0}-\eqref{eq:relationCA}, which in the limit $x\rightarrow 1$ yield \begin{equation} \begin{split} d \ln a = s_c(x_0) - d\ln(1 - x_0^{1/d}) + \ln x_0 - 1 - \ln V_d \\ - d\kappa_0 (1-x_0^{1/d}) - \frac{d}{2}(\kappa_0 + \kappa_1)(1-x_0^{1/d})^2 \ .
\label{eq:thermalIntA} \end{split} \end{equation} \subsection{Summary of results from low-\texorpdfstring{$d$}{d} crystals} The cell-cluster expansion and the numerical estimates of $a$ from both the cage size and thermal integration are compared in Fig.~\ref{fig:tivscc}. The two numerical estimates, $a^{\Delta_0}$ and $a^\mathrm{fit}$, neatly converge as dimension increases. The spherical caging assumption of Eq.~\eqref{eq:dynamicA} on which the former relies, although fairly crude in low $d$, becomes increasingly inconsequential as $d$ increases. Going from the first level of the cell cluster expansion, $a^\mathrm{CC_1}$, to the second, $a^\mathrm{CC_2}$, also suggests a rapid convergence towards $a^\mathrm{fit}$ using both cell cluster expansions. Over the accessible $d$ range, however, the agreement does not markedly increase with dimension. Most importantly, all of these estimates support the conjecture $a \sim \mathcal{O}(1)$. Because standard simulations of higher-dimensional systems become increasingly computationally challenging, were the expansion of Rudd et al.~fully controlled, it could help extend the present analysis. The convergence of the analytical expansion of $\kappa_0$ and $\kappa_1$ offers some hope in this direction, albeit only through the direct expansion strategy. Under the recursive strategy, the $\kappa_0$ and $\kappa_1$ terms appear to diverge from the numerical results at second order, in support of Rudd et al.'s expectation that derivatives of this expansion might never converge in a truncated series~\cite{rudd_rigid_1968}. In this context, further extending the direct expansion strategy to third order might be of interest. At the moment, however, the ability to numerically evaluate integrals of order $n$ in reasonable time is capped at $nd \approx 14$~\cite{koch_most_2005}. Irrespective of any convergence concerns, the results for $\lambda_9$ stand out.
In particular, thermal integration results for $\kappa_0$ and $a$ in $d=9$ are much larger than those of nearby dimensions, and $\kappa_1$ is of the opposite sign. These features largely track what one might expect of a crystal with soft modes. First, its (effective) cage should be elongated along soft directions, thus making $a$ larger. Second, because the free volume is elongated, its rate of increase with decreasing $x$, $-\frac{\partial^2 v_\mathrm{free}}{\partial x^2}$, should be larger than for standard caging, thus increasing $\kappa_0$. Third, the negative value of $\kappa_1$ might result from spheres in interlocking lattices being relatively less constrained and thus less likely to be in contact at high-entropy points (where multiple soft modes are available). That said, unlike the equations of state of other crystals, those of crystals with soft modes are expected to exhibit significant finite-size corrections, as discussed in Sec.~\ref{sec:conj}. Unfortunately, only a single system size is numerically available for this crystal~\cite{charbonneau_thermodynamic_2021}, and thus a systematic examination of these effects is not feasible here. The only path towards radical asphericity of the Voronoi cell (and thus significant deviations from $a \sim \mathcal{O}(1)$) is through the presence of a direction in which individual particles or subextensive collections of particles are not constrained or are only weakly constrained. A comparable analysis of lower-dimensional crystals containing soft modes, such as parallel hard cubes~\cite{swol_percolation_1987,jagla_melting_1998}, might thus be a more promising route to gain insight on this matter. \begin{figure}[ht] \includegraphics[width=0.9\columnwidth]{aAllEstimates.pdf} \caption{Dimensional evolution of the lattice constant $a$ evaluated using three approaches.
Cell-cluster expansion results are reported using Eq.~\eqref{eq:relationCA} to first ($a^\mathrm{CC_1}$ in red) and second ($a^\mathrm{CC_2}$ in blue) order for both direct (dashed line) and recursive (solid line) expansions (see Appendix~\ref{sec:cellClusterAppendix}). Thermal integration results, $a^\mathrm{fit}$ (black), from Eq.~\eqref{eq:thermalIntA} and dynamical estimates $a^{\Delta_0}$ (green) from Eq.~\eqref{eq:dynamicA} evolve qualitatively similarly. All estimates of $a$ suggest that $a \sim \mathcal{O}(1)$ as $d\rightarrow\infty$. Error bars denote a 95\% confidence interval; the cell cluster calculations are accurate to double precision truncation.} \label{fig:tivscc} \end{figure} \section{Discussion and Conclusion} \label{sec:final} Under the assumptions of Sec.~\ref{sec:conj}, we summarize the three possible scenarios available for high-dimensional crystallization: (scenario A) if there is no crystallization, then the optimal packings are glasses; (scenario B) if crystallization occurs but only deep in the dynamically arrested region, then the densest crystal is only slightly denser than the densest glass; (scenario C) if instead crystallization happens on the same scale as the dynamical arrest, then the crystal close packing is $\overline{\varphi}_\mathrm{cp}\sim e^{\a d}$, an exponential improvement over the Minkowski bound. Of these, scenario C relies on the constant $a$ being of order unity, while the others make no such requirement. Through the use of both cell cluster expansions and numerical simulations of crystals in $d=3$-$10$, we have obtained three independent measures of $a$ that are roughly consistent with each other. We further observe that the crystal entropy is dominated by the free volume description in all $d$. Although we observe a significant polynomial correction that does not markedly decrease with increasing dimension, it does not significantly increase in crystals with soft modes either.
It is therefore expected that these contributions remain subdominant to the free-volume description in all $d$. Three additional low-$d$ observations point to the relative likelihood of each of the three scenarios proffered. First, crystallization is thermodynamically favored at least up to $d=10$ and $\widehat{\varphi}_f \sim 1$, which is well below $\widehat{\varphi}_d$. Second, $\alpha = \frac{1}{d}\ln{\overline{\varphi}_\mathrm{cp}}$ remains finite and approaches the zone of crystallization allowed by scenario C. Third, $a$ appears to remain $\mathcal{O}(1)$. Together, these results indicate that while all scenarios are possible, scenario C is the most likely to be true, scenario B less likely, and scenario A less likely still. While future numerical simulations and analysis of higher-dimensional crystals may be possible in a few additional dimensions, it seems improbable that such simulations would upend this ordering. Finally, it should be noted that even if scenario C is correct, crystallization might still be heavily kinetically suppressed. If crystallization proceeds via classical nucleation theory, in particular, then the competition between surface and volume terms in the free energy creates a barrier such that the nucleation time should scale exponentially with $d$; see, e.g., Ref.~\onlinecite[Eq.~(3.28)]{debenedetti_metastable_1996} and Ref.~\onlinecite[Eq.~(8.15)]{parisi_theory_2020}. Although alternative crystallization schemes have been suggested in deeply supercooled liquids in $d=3$~\cite{filion_crystal_2010, sanz_crystallization_2011}, the geometrical peculiarities of three-dimensional space that underlie such mechanisms appear unlikely to find an echo in any higher $d$. \begin{acknowledgments} We thank Yi Hu, Robert Hoy, and Henry Cohn for stimulating discussions.
This work was supported by grants from the Simons Foundation (\#454937, Patrick Charbonneau; \#454955, Francesco Zamponi), the National Science Foundation (DMS-1847451, Will Perkins), and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n.~723955 -- GlassUniversality). The computations were carried out on the Duke Compute Cluster (DCC), and the authors thank Tom Milledge for his assistance. Data relevant to this work have been archived and can be accessed at the Duke Digital Repository~\cite{data}. \end{acknowledgments}
\section{Introduction} The Laser Und XFEL Experiment, also known as LUXE~\cite{Abramowicz:2021zja}, is a new type of experiment that aims at measuring quantum electrodynamics (QED) in a yet unexplored parameter regime. QED is the theory describing the interaction between light and matter. The greatest successes of the theory have been obtained in a regime where experimental predictions can be made and tested using perturbation theory in the fine-structure constant $\alpha_{EM}$. In this regime, QED predicts observables that can be tested experimentally to a very high level of accuracy. For instance, the electron anomalous magnetic moment (g-2) is predicted to a precision better than one part in a trillion, and it agrees with measurements so far~\cite{Hanneke_2011}. LUXE's goal is to explore QED in the presence of a very strong electromagnetic field. In such a case it is impossible to carry out the QED perturbative expansion, and exact calculation methods must be used; this is the parameter regime of strong-field QED (SFQED). One expects such behaviour above the Schwinger limit~\cite{Schwinger:1951nm}: \begin{equation} {E}_\textrm{cr}=\frac{m^{2}_{e}c^{3}}{\hslash{}e}\approx{}1.3\times{}10^{18}~\mathrm{V/m}, \end{equation} where $m_{e}$ is the electron mass. LUXE will study SFQED by colliding the high-quality electron beam of the European X-Ray Free-Electron Laser Facility (European XFEL or Eu.XFEL)~\cite{XFELTDR} accelerator with a high-intensity laser beam. The experiment is sensitive to two characteristic processes, which are shown in fig.~\ref{fig:feynmanDiagrams}. Nonlinear Compton scattering is photon emission from a high-energy electron interacting with multiple photons from the laser. The Breit-Wheeler process occurs when a high-energy photon interacts with multiple low-energy photons from the laser, producing a high-energy electron-positron pair. \begin{figure}[h!]
\centering \includegraphics[width=0.48\linewidth]{fig/feyn_strongbrem.png}\ \includegraphics[width=0.48\linewidth]{fig/feyn_strongpairprod.png}\ \caption{Processes accessible by LUXE: (left) Non-linear Compton scattering, (right) Breit-Wheeler.}\label{fig:feynmanDiagrams} \end{figure} Two dimensionless quantities can be introduced to characterise SFQED~\cite{Fedotov:2022ely}:\\ $\xi$, which measures the electron--laser coupling and the laser intensity, \begin{equation} \xi=\frac{m_e}{\omega_{L}} \frac{{\mathcal E}_\textrm{L}}{{E}_\textrm{cr}}, \end{equation} where $\omega_{L}$ is the laser frequency and ${\mathcal E}_\textrm{L}$ is the instantaneous laser field strength; and $\chi_{i}$, whose square measures the fraction of laser energy transferred to the electron beam, \begin{equation} \chi_{i}=\frac{\epsilon_{i}}{m_e}\frac{{\mathcal E}_\textrm{L}}{{E}_\textrm{cr}}(1+\beta\cos\theta), \end{equation} where the subscript $i$ denotes the particle type (``e'' for an electron parameter and ``$\gamma$'' for a photon parameter), $\epsilon_{i}$ is the particle energy, and $\theta$ is the collision angle of the particle with the laser pulse, such that $\theta= 0$ is ``head-on''. Natural units $\hslash=c=1$ have been used, with $\beta=1$ for photons and $\beta\approx 1$ for electrons. The parameter regime that will be probed by LUXE is shown in fig.~\ref{fig:theoryPlane}. Other experiments, past and planned, that characterise SFQED are also shown. Predictions for observables targeted by LUXE, such as the number of pairs created via the nonlinear Breit-Wheeler process, differ from the predictions of perturbation theory; we will discuss examples below. \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth]{fig/chivsxi_nochiline.png}\ \caption{$\xi$ vs $\chi_\gamma$ plane that can be probed by LUXE.
The sensitivities of other experiments are also shown.}\label{fig:theoryPlane} \end{figure} \section{Laser system} The strong electromagnetic field background in LUXE will be provided by a high-intensity Titanium:Sapphire laser, producing photons with an 800~nm wavelength, corresponding to about 1.55~eV. Such laser systems use the Chirped Pulse Amplification technique, which was developed by Donna Strickland and Gérard Mourou in the 1980s~\cite{strickland1985compression}. This technique allows very high laser powers to be reached at the focus, up to 10~PW at the ELI beam-lines~\cite{ELI} for instance. It relies on optically stretching a short pulse before amplification, after which it is re-compressed into a very high-energy ultrashort pulse, down to 30~fs in LUXE. In more detail, the LUXE laser system is currently planned to operate in two different phases, to save on costs and start physics data-taking early. In Phase~0, one plans to use a commercial 40~TW laser system, which will be upgraded to a 350~TW system in Phase~1. The laser will operate at a repetition rate of 1~Hz. It will interact with the electron beam at an angle of about 20$^{\circ}$. The main parameters that can be reached at LUXE are summarized in tab.~\ref{fig:LaserParameters}. \begin{table}[h!]
\begin{center} \begin{tabular}{|l|c|c|c|} \hline \textbf{Parameter} & \multicolumn{2}{c|}{Phase 0} & Phase 1 \\ \hline \textbf{Laser Power [TW]} & \multicolumn{2}{c|}{40} & 350 \\ \hline \textbf{Laser energy after compression [J]} & \multicolumn{2}{c|}{1.2} & 10 \\ \hline \textbf{Percentage of Laser in focus [\%]} & \multicolumn{3}{c|}{50}\\ \hline \textbf{Laser focal spot size $w_0$ [$\mu$m] } & $>8$ & $>3$ & $>3$\\ \hline \textbf{Peak intensity in focus [$\times10^{19}$ ~Wcm$^{-2}$]} & $1.9$ & $13.3$ & $120$ \\ \hline \textbf{Peak intensity parameter $\xi$} & 3.0 & 7.9 & 23.6\\ \hline \textbf{Peak quantum parameter $\chi$ for $E_e=16.5$~GeV} & 0.56 & 1.50 & 4.45\\ \hline \end{tabular} \caption{Laser parameters used in the different LUXE running phases.}\label{fig:LaserParameters} \end{center} \end{table} It is crucial to characterise the laser pulse at the focus with a high level of accuracy. New laser diagnostics methods are currently being developed to mitigate the pulse-length and position jitter at the focus down to a few $\mu$m, and to measure them with a 1\% shot-to-shot uncertainty. It is also planned to reach a 5\% uncertainty on the laser intensity. While challenging, this accuracy will be achievable at LUXE because the laser interacts only weakly with the electron beam, allowing the laser pulse to be transported outside the interaction area after the focus for a detailed study using dedicated instruments. \section{The European XFEL} \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth]{fig/euxfel_arealview.png}\ \caption{Aerial view of the European XFEL complex. The future position of LUXE, at the end of the linear accelerator, is also shown.}\label{fig:euxfel} \end{figure} The European XFEL is a research facility that has been providing X-ray photons to the photon-science community since 2017.
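The peak values of $\xi$ and $\chi$ quoted in the laser-parameter table can be cross-checked with a short sketch. Note the assumptions, which do not come from the text: the standard rule of thumb $\xi \approx 0.85\,\lambda[\mu\mathrm{m}]\,\sqrt{I/10^{18}\,\mathrm{W\,cm^{-2}}}$ for linear polarisation, the relation $\chi_e = \xi\,\epsilon_e\,\omega_L\,(1+\cos\theta)/m_e^2$ in natural units, and the helper names, which are ours for illustration:

```python
import math

M_E = 0.511e6      # electron mass [eV]
OMEGA_L = 1.55     # laser photon energy [eV], i.e. 800 nm

def xi_from_intensity(intensity_w_cm2, wavelength_um=0.8):
    # Rule-of-thumb intensity parameter for linear polarisation.
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

def chi_e(xi, electron_energy_ev=16.5e9, theta_deg=20.0):
    # Quantum parameter of a 16.5 GeV electron colliding at ~20 degrees.
    return (xi * electron_energy_ev * OMEGA_L / M_E**2
            * (1.0 + math.cos(math.radians(theta_deg))))
```

For the Phase-0 focused intensity of $1.9\times10^{19}$~Wcm$^{-2}$ this gives $\xi \approx 3.0$ and $\chi \approx 0.56$, in agreement with the table.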
The project is internationally funded by twelve countries and extends from the main DESY-Hamburg campus to the neighbouring state of Schleswig-Holstein, through a 3.4~km long complex of tunnels housing an electron accelerator, undulators, and experiments, as shown in fig.~\ref{fig:euxfel}. The X-ray light is produced by self-amplified spontaneous emission when the high-quality electron beam circulates through the undulators. The electrons are then dumped and the photons are used in the Schenefeld experimental hall. LUXE will only use electrons from the Eu.XFEL accelerator. For this reason, the experiment will be placed in an unused gallery located at the end of the 2~km long linear accelerator. This gallery was constructed to allow a future extension of the Eu.XFEL facility after 2030. A new accelerator extraction line, called TD20~\cite{beamlinecdr}, will have to be installed at the end of the linear accelerator to bring the electrons into the experimental area. It will contain a new fast kicker magnet capable of sending a single bunch to the experimental area in about $2~\mu$s. The rest of the accelerator lattice will be built from standard magnets used elsewhere in the Eu.XFEL accelerator complex. The accelerator is capable of bringing 2700 electron bunches up to 17.5~GeV with a maximum bunch charge of 1~nC. The repetition rate of the accelerator is 10~Hz. LUXE has been planned to interfere as little as possible with photon-science operation. Therefore, it is currently planned to run with the standard injection parameters of the Eu.XFEL electron beam. The maximum electron energy will be 16.5~GeV with a bunch charge of 0.25~nC, corresponding to $1.5\times{}10^{9}$ electrons per bunch. Only the last bunch of the train will be used by LUXE. Since the electron beam runs at 10~Hz while the laser runs at only 1~Hz, the remaining 9~Hz of bunches will be recorded by the instruments for background studies.
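As a quick arithmetic check of the quoted bunch population (the elementary-charge value is standard physics input, not from the text):

```python
E_CHARGE = 1.602176634e-19  # elementary charge [C]

# 0.25 nC per bunch, as quoted for the LUXE running conditions:
electrons_per_bunch = 0.25e-9 / E_CHARGE
# ~1.56e9, consistent with the quoted ~1.5e9 electrons per bunch
```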
In order to ensure an optimum electron beam--laser overlap, it is currently foreseen to focus the electron beam to a transverse size of about $5~\mu$m. The bunch length is of the order of $100$~fs. \section{Experiment and SFQED results} LUXE will run in two different data-taking modes, as shown in fig.~\ref{fig:dataTakingMode}. In the electron-laser setup, the electron beam will interact directly with the laser at the interaction point. In the gamma-laser setup, the electron beam will first be converted into a photon beam using a converter target placed upstream of the interaction point. The photon beam will then interact with the laser. \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth]{fig/interaction_sketches_v3.png}\ \caption{Sketches of the data-taking modes planned in LUXE: (a) electron-laser setup, (b) gamma-laser setup.}\label{fig:dataTakingMode} \end{figure} In both setups, the electron-positron pairs created at the interaction point will be separated by a dipole spectrometer magnet. Different detector technologies will be used to precisely measure the particle flux on the electron and positron sides of the spectrometer, depending on the expected background intensity. The electron side will be equipped with radiation-hard technologies such as Cherenkov detectors or scintillating screens in the electron-laser setup, while it will be equipped with precision detectors such as trackers and electromagnetic calorimeters in the gamma-laser setup. Since the particle flux on the positron side of the spectrometer is always expected to be smaller, only precision detectors such as trackers and electromagnetic calorimeters will be used there. The characteristics of the photon flux created at the interaction point will also be measured precisely. The energy spectrum will be determined using a spectrometer.
The shape of the photon beam will be measured with a sapphire strip profiler, and the absolute flux will be obtained with a calorimeter measuring the photons back-scattered from the final dump. \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth]{fig/luxe-elaserresult.png}\ \caption{Expected positron rate in the electron-laser setup for the different data-taking phases of LUXE, comparing different theoretical hypotheses.}\label{fig:ELaserResults} \end{figure} These measurements will be used to characterise the Compton scattering and Breit-Wheeler processes, in both the perturbative and the SFQED phase space. As an example, fig.~\ref{fig:ELaserResults} shows the expected positron rates in the electron-laser setup for the two data-taking phases of LUXE. The pseudo-measurements, scaled to 10 days of data-taking and including the largest expected systematic uncertainties as well as statistical errors, are compared to perturbative QED and to a full QED calculation. \section{Beyond Standard Model physics} LUXE is a unique high-flux multi-GeV photon beam-line. This feature has been exploited to search for physics beyond the Standard Model~\cite{Bai:2021dgm}. Several scenarios of new physics were investigated, one of which concerns the creation of new axion-like particles (ALPs) produced in the dump via the Primakoff effect. These ALPs would then travel through the dump and the air before decaying into two photons. Such a search could be carried out by adding, at the end of the experimental hall after the final photon dump, a high-granularity calorimeter allowing precise measurement of the two photons and of the position of their decay vertex. For such a search, it is necessary that the background after the dump be controlled well enough that it can be completely neglected. The expected limits obtained for this scenario are shown in fig.~\ref{fig:LUXENPOD}.
They are compared to limits from other experiments, past and future. It appears that with one year of LUXE data-taking, the expected reach is similar to, or better than, that of dedicated BSM experiments. \begin{figure}[h!] \centering \includegraphics[width=0.95\linewidth]{fig/reach_luxe_1025_w_phase0_phase1.pdf}\ \caption{Limits on ALPs expected from the beyond-the-Standard-Model search.}\label{fig:LUXENPOD} \end{figure} \section{Conclusions} LUXE has been designed to meet all the experimental requirements for measuring QED in detail in the previously unexplored non-perturbative regime. LUXE will also investigate the presence of new physics beyond the Standard Model. The experiment will rely on innovative diagnostic and control systems for the laser in order to improve its accuracy and stability. To measure electrons, positrons and photons, the experiment will use state-of-the-art detector technologies allowing the produced particles to be characterised over a very wide range of fluxes and energies. Installation of the experiment is expected to take place in the Eu.XFEL complex during an exceptional six-month shutdown of the facility that will happen in the coming years. \section*{Acknowledgments} This work was in part funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306 and the German-Israel Foundation (GIF) under grant number 1492. It has benefited from computing services provided by the German National Analysis Facility (NAF). \bibliographystyle{unsrt}
\section{Introduction} Rotation averaging is a problem that consists of estimating absolute camera orientations that agree as well as possible with a set of pairwise relative orientations. Errors expressing disagreements between estimated absolute orientations and the measured relative orientations are thereby distributed over each pairwise constraint. Rotation averaging is essential in global or hierarchical Structure from Motion (SfM) \cite{DBLP:conf/iccv/MoulonMM13,DBLP:conf/iccv/CuiT15,DBLP:conf/iccv/SweeneySHTP15,zhu2017parallel,DBLP:conf/cvpr/ZhuZZSFTQ18}, as well as in Simultaneous Localization and Mapping (SLAM) \cite{DBLP:conf/icra/BustosCER19}, where it can accelerate camera pose estimation and reduce drift accumulation. In global SfM, we typically start by constructing a view graph $\mathcal{G}$ that encodes each connection between a pair of views $i$ and $j$ by an edge $(i, j)$ carrying the relative motion between image $i$ and image $j$. Rotation averaging then gives us the absolute orientation of each view, and it is typically followed by a translation averaging step \cite{DBLP:conf/iccv/JiangCT13,DBLP:conf/cvpr/OzyesilS15,DBLP:conf/eccv/GoldsteinHLVS16, DBLP:journals/cvpr/ZhuangCL18} to also obtain absolute positions. Triangulation of 3D points and joint optimisation over all parameters (i.e. bundle adjustment~\cite{DBLP:conf/iccvw/TriggsMHF99}) completes the reconstruction. In SLAM, rotation averaging has been used in the back-end pose graph optimisation \cite{rosen2019se,DBLP:conf/icra/BustosCER19} to flexibly counteract large drift accumulation or---more generally---to replace the time-consuming bundle adjustment step.
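As a minimal, self-contained illustration of what rotation averaging solves, the following sketch treats the planar (SO(2)) analogue, where rotations reduce to angles and relative orientations to angle differences. The function name and the simple anchored gradient-descent solver are ours for illustration only and do not correspond to any of the methods discussed in this paper:

```python
import math

def rotation_averaging_so2(n, edges, iters=2000, lr=0.1):
    """Toy SO(2) rotation averaging: recover absolute angles theta[i] from
    relative measurements t_ij ~ theta[j] - theta[i] on graph edges (i, j).
    The global gauge freedom is fixed by anchoring theta[0] = 0."""
    theta = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, t_ij in edges:
            # Edge residual, wrapped to (-pi, pi] so we stay on the circle.
            r = theta[j] - theta[i] - t_ij
            r = math.atan2(math.sin(r), math.cos(r))
            grad[j] += r
            grad[i] -= r
        for k in range(1, n):  # node 0 is the anchor and never moves
            theta[k] -= lr * grad[k]
    return theta
```

With consistent (noise-free) measurements the estimates converge to the true angles in the fixed gauge; with noisy measurements the errors are spread over all edges, which is exactly the averaging effect exploited in SfM.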
\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{Trafalgar.pdf} \caption{Reconstructions generated from the Trafalgar dataset~\cite{DBLP:conf/eccv/WilsonS14}, where 7085 out of 15685 images have been registered, and the rotation averaging step only took 4.2 s.} \label{figure:picca_recon_results} \end{figure} Rotation averaging was first proposed using the quaternion representation~\cite{DBLP:conf/cvpr/Govindu01}. Later solutions can be categorised into approaches based on either local or global optimisation. Local optimisation approaches such as the one presented by Chatterjee and Govindu \cite{DBLP:conf/iccv/ChatterjeeG13} are well studied and practical. However, these methods only return the nearest local minimum. To overcome this limitation, the community has also proposed global optimisation approaches \cite{DBLP:conf/cvpr/TronZD16, rosen2019se, DBLP:conf/cvpr/ErikssonOKC18, EriksonOKC20,DBLP:conf/eccv/DellaertRWMC20}. Though the retrieval of global optima can be guaranteed, these methods incur a large computational cost and are highly sensitive to outliers, which makes them impractical when applied to large-scale SfM problems. In this paper, we focus on improving the efficiency and robustness of the rotation averaging method, and on pushing its application to challenging scenes. We combine a global solver with a local solver to solve rotation averaging, which guarantees global optimality and provides strong resilience against outliers. Rotation averaging based on chordal distances can be reformulated as a semi-definite program (SDP) with a low-rank constraint. Taking advantage of the low-rank factorisation of the original SDP, we can apply globally optimal methods based on the Riemannian Staircase~\cite{DBLP:journals/corr/Boumal15}.
In principle, any global solver~\cite{rosen2019se, DBLP:journals/corr/abs-1903-00597, DBLP:conf/eccv/DellaertRWMC20} can be used in our hybrid approach, and we adopt the block coordinate minimisation method~\cite{DBLP:journals/corr/abs-1903-00597} to better leverage the view graph sparsity. By preprocessing the graph with fast view graph filtering, graph sparsity can be further exploited to accelerate the optimisation. Previous works mainly apply rotation averaging to global SfM. Though global SfM is efficient, translation averaging is often complicated by the unknown scale of relative translations and the difficulty of identifying outliers. In this work, we embed the proposed approach into an incremental SfM pipeline. The strategy is inspired by the work of Cui \emph{et al.}~\cite{DBLP:conf/cvpr/CuiGSH17}, who propose a hybrid SfM scheme in which camera rotations are estimated globally, and camera centers are estimated incrementally by a perspective-2-point (P2P) method. Though the approach is efficient, incorrectly estimated absolute rotations prevent proper registration of camera centers and lead to incomplete scene reconstructions. Inspired by Cui's approach~\cite{DBLP:conf/cvpr/CuiGSH17}, we apply rotation averaging in a traditional incremental SfM pipeline, but use the perspective-3-point (P3P) algorithm to register camera rotations and centers. We further propose a novel cost function for optimising camera poses and landmarks that alleviates drift accumulation, in which the camera rotations obtained from rotation averaging are used as regularisers. The resulting SfM approach, named RA-SfM (Rotation-Averaged Structure from Motion), is highly practical and surpasses state-of-the-art methods in accuracy, as demonstrated by extensive experiments on large-scale real-world datasets. The reconstruction result of the largest dataset is shown in Fig.~\ref{figure:picca_recon_results}.
In summary, the main contributions of our work are: \begin{itemize} \item We propose an outlier-resilient hybrid rotation averaging approach, which combines a global optimiser with fast view graph filtering and a local optimiser. \item We refine the traditional incremental bundle adjustment cost function by adding the obtained global rotations as a regularisation term, which significantly alleviates drift accumulation in incremental SfM. \end{itemize} The practicality and superiority of the proposed scheme are demonstrated by extensive experiments on synthetic datasets and challenging internet datasets. \section{Related Work} \label{sec:related_work} Motion averaging~\cite{DBLP:conf/cvpr/Govindu01,DBLP:conf/cvpr/Govindu04} is widely used in global SfM pipelines~\cite{DBLP:conf/iccv/MoulonMM13,DBLP:conf/iccv/CuiT15,DBLP:conf/iccv/SweeneySHTP15, zhu2017parallel,DBLP:conf/cvpr/ZhuZZSFTQ18} as an answer to the drift problem occurring in incremental SfM~\cite{DBLP:conf/iccv/AgarwalSSSS09, DBLP:conf/3dim/Wu13,DBLP:conf/cvpr/SchonbergerF16, DBLP:conf/3dim/CuiSGH17}. The first solution to rotation averaging goes back to Govindu \cite{DBLP:conf/cvpr/Govindu01}, who uses the quaternion representation and solves the problem by linear least-squares fitting. More reliable results were later obtained by optimising over a Lie algebra~\cite{DBLP:conf/cvpr/Govindu04}. In practice, the problem is complicated by the existence of outliers. To enhance the robustness of rotation averaging, absolute rotations may first be initialised under the $L_1$-norm, and then refined by Iteratively Reweighted Least Squares (IRLS) \cite{DBLP:conf/iccv/ChatterjeeG13, DBLP:journals/pami/ChatterjeeG18}. Despite great progress, all aforementioned approaches can only guarantee a locally optimal solution.
Another local approach was proposed by Crandall \emph{et al.} \cite{DBLP:conf/cvpr/CrandallOSH11,DBLP:journals/pami/CrandallOSH13}, who couple the cost function with regularisation terms to enhance robustness. However, the method is computationally demanding as it relies on discrete belief propagation over a Markov random field. Fredriksson and Olsson \cite{DBLP:conf/accv/FredrikssonO12} exploit Lagrangian duality to become the first to find a globally optimal solution to the rotation averaging problem. In a similar approach, Eriksson \emph{et al.} \cite{EriksonOKC20} perform the optimisation directly on the rotation matrix by minimising chordal distances. By removing the determinant constraint on the rotation from the original SDP, they elegantly prove that there is no duality gap between the primal problem and its dual when residual errors are bounded below an angular residual threshold. Rotation averaging can be converted into an SDP optimisation problem~\cite{DBLP:books/cu/BV2014}. Wang and Singer \cite{DBLP:journals/ini/WangS13} solve it by the Alternating Direction Method of Multipliers (ADMM)~\cite{DBLP:journals/ftml/BoydPCPE11, DBLP:journals/mpc/WenGY10}. Eriksson \emph{et al.} \cite{EriksonOKC20} use a row-by-row block coordinate descent method (BCM) \cite{Wen09rowby}. However, due to the slow convergence of ADMM and the repetitive fill-in procedures of BCM, neither approach proves to be practical when applied to large-scale datasets. A seminal work on the solution of SDP problems is presented by Burer and Monteiro~\cite{DBLP:journals/mp/BurerM03}, where the positive semi-definite variable is replaced by an appropriate factorisation, and the minimal rank variable is chosen to enhance computational speed. 
The Burer-Monteiro factorisation later inspired Boumal~\cite{DBLP:journals/corr/Boumal15}, who proposes a general optimisation technique named the Riemannian staircase algorithm, where the rank variable is augmented until the KKT condition is met, thus guaranteeing global optimality. Rosen \emph{et al.}~\cite{rosen2019se} address the SDP problem of pose graph optimisation in the Special Euclidean space (SE(n)). When translation variables are decoupled from rotations, they first find a second-order critical point by the second-order Riemannian trust-region method, and then adopt the low-rank optimisation framework of~\cite{DBLP:journals/corr/Boumal15} to guarantee global optimality~\cite{rosen2019se}. Inspired by the work of Wang \emph{et al.}~\cite{DBLP:journals/corr/WangCK17}, which solves the low-rank SDP problem by a block coordinate descent method, Tian \emph{et al.}~\cite{DBLP:journals/corr/abs-1903-00597} extend this approach to the Stiefel manifold, and further apply a Riemannian BCM method to pose graph optimisation in distributed settings~\cite{DBLP:journals/corr/abs-1911-03721}. Building on SE-Sync~\cite{rosen2019se}, Dellaert \emph{et al.}~\cite{DBLP:conf/eccv/DellaertRWMC20} propose \textit{Shonan rotation averaging}, a method in which the rotation matrix is vectorised, thus permitting the use of existing gradient-based optimisation methods on the manifold of rotation matrices. \section{Notations and Preliminaries} \label{sec:notation} Let $\mathcal{G} = \{V, E\}$ be an undirected graph, where $V$ represents the collection of nodes and $E$ the set of edges. Let $m = |E|$ be the number of edges and $n = |V|$ be the number of nodes. Let $\tr(\cdot)$ denote the trace of a square matrix. Given two matrices $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{m\times n}$, let $\langle A, B \rangle = \sum_{i} \sum_j A_{ij}B_{ij}$. We therefore have $\tr(A^TB) = \langle A, B \rangle$.
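As a quick numerical sanity check (an illustrative numpy snippet, not part of the method), the identity $\tr(A^TB) = \langle A, B \rangle$ used throughout the derivations can be verified directly:

```python
import numpy as np

# Verify tr(A^T B) = <A, B> = sum_ij A_ij * B_ij on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))

inner = float(np.sum(A * B))       # elementwise inner product <A, B>
trace = float(np.trace(A.T @ B))   # tr(A^T B)
assert np.isclose(inner, trace)

# The Frobenius norm is the induced norm: ||A||_F^2 = <A, A>.
assert np.isclose(np.linalg.norm(A, "fro") ** 2, np.sum(A * A))
```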
Let $\blockdiag(A)$ represent the block diagonal matrix of $A$, and $\symblockdiag(A)=\frac{1}{2} \blockdiag(A + A^T)$. The set of rotations in 3D forms the Special Orthogonal Group $\text{SO}(3)$, i.e., \begin{equation} \text{SO}(3) = \{R \in \mathbb{R}^{3 \times 3} | R^TR = I, \det(R) = 1\}. \end{equation} Since $\text{SO}(3)$ is a Lie group, there exists an exponential mapping between a rotation $R$ and its Lie algebra $\mathfrak{so}(3)$ representation $\bm{w}$~\cite{ma2012invitation}: \begin{equation} R = \exp ([\bm{w}]_{\times}). \end{equation} The absolute rotations are grouped in $\mathcal{R} = \{R_1, R_2, \cdots, R_n\}$, where $R_i \in \text{SO}(3)$, $i \in [n]$. Relative rotations are represented by $\mathcal{R}_{\text{rel}} = \{R_{ij}\}$, where $R_{ij} \in \text{SO}(3)$, $i,j \in [n]$, $i < j$ is the rotation from $R_i$ to $R_j$. The chordal distance between two rotations is measured by~\cite{DBLP:journals/ijcv/HartleyTDL13} \begin{equation} \label{equ:chordal_d} d_{\text{chord}}(R_1, R_2) = \left\|R_1 - R_2\right\|_F, \end{equation} where $\left\|\cdot\right\|_F$ represents the Frobenius norm of a matrix. \section{Hybrid Rotation Averaging} \label{sec:RA_by_BM} Globally optimal rotation averaging is sensitive to outliers, thus requiring an additional step to clean the view graph. In this section, we first present an efficient pre-processing step to filter outliers in the view graph. We then apply the block coordinate minimisation (BCM) method~\cite{DBLP:journals/corr/abs-1903-00597} to optimise the low-rank formulation of rotation averaging. Its global optimality can be guaranteed theoretically. Finally, we apply a local optimisation step to further refine the result for scenes with many erroneous edges. \subsection{Fast View Graph Filtering} \label{subsec:fast_view_graph_filtering} The view graph plays an important role in our SfM pipeline.
We clean the view graph for two main reasons: (1) Solutions of global rotation averaging algorithms can be biased by outliers. Also, global optimality is only guaranteed when the residuals of all edges are bounded below a certain threshold~\cite{EriksonOKC20}. (2) Some view pairs are redundant and even harm the quality of SfM results. Zach \emph{et al.} \cite{DBLP:conf/cvpr/ZachKP10} proposed a view graph filtering (VGF) technique to obtain a high-quality initial view graph, where loop constraints over rotation triplets are utilised to detect outliers. Specifically, an edge of triplet $(i,j,k)$ is classified as an outlier if the angular error of the loop constraint exceeds a given threshold $\epsilon$, i.e., \begin{equation} \label{equ:loop_constraint} d(R_{ij}R_{jk}R_{ki}, I) > \epsilon. \end{equation} Despite its effectiveness, \cite{DBLP:conf/cvpr/ZachKP10} needs to validate all triplets, which is impractical for large-scale datasets. However, \cite{DBLP:conf/eccv/ShenZFZQ16} suggests that it is not necessary to check all triplets to distinguish inliers from outliers, and that an increased number of valid 2D-2D image correspondences usually suggests a more reliable two-view geometry. We propose an efficient view graph filtering method that relies on this observation. In the following, we denote a group of 3 nodes as a \emph{triplet}, and a triplet with two valid edges and one unverified edge as a \emph{weak triplet}. Given an initial view graph $\mathcal{G}$, we start by constructing a maximum spanning tree (MST), where the weight of an edge is the number of valid 2D-2D correspondences. The relative rotations from this MST are all treated as valid. We then check the triplets along the MST. That is, all adjacent edges that share a common node in the MST are used to build triplets. This generates many weak triplets: supposing that edges $(i, j)$ and $(j, k)$ are valid and edge $(i, k)$ exists, we use criterion~\eqref{equ:loop_constraint} to verify the validity of edge $(i, k)$.
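The loop-constraint check of criterion~\eqref{equ:loop_constraint} can be sketched as follows (a minimal numpy sketch; the function names and the convention $R_{ki} = R_{ik}^T$ are our own illustrative assumptions, not taken from the paper):

```python
import numpy as np

def angular_error_deg(R_ij, R_jk, R_ki):
    """Angle (in degrees) of the loop rotation R_ij R_jk R_ki, which should
    be close to the identity for a consistent rotation triplet."""
    R_loop = R_ij @ R_jk @ R_ki
    # For R in SO(3): d(R, I) = arccos((tr(R) - 1) / 2).
    cos_theta = np.clip((np.trace(R_loop) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def verify_weak_triplet(R_ij, R_jk, R_ik, eps_deg=2.0):
    """Weak-triplet test: edges (i, j) and (j, k) are trusted; the
    unverified edge (i, k) is accepted if the loop error stays below eps."""
    return angular_error_deg(R_ij, R_jk, R_ik.T) <= eps_deg
```

The threshold `eps_deg` plays the role of $\epsilon$ and would be tuned per dataset.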
An iteration is completed once all such weak triplets have been verified. After the first iteration, new weak triplets are generated, based on which we can perform another iteration. We empirically found that $3$ iterations are sufficient for successful rotation averaging. \subsection{Global Rotation Averaging} \label{subsec:global_ra_optimization} In this section, we first review a rotation averaging method with guaranteed global optimality~\cite{DBLP:journals/corr/abs-1903-00597}; the sparsity pattern of the view graph is then further exploited to accelerate the algorithm. Given a set of relative rotations $\{R_{ij}\}$, where $i, j \in [n]$, the aim of rotation averaging is to obtain the absolute rotations $\{R_i\}$ that minimise the cost function below: \begin{equation} \label{equ:rotation_averaging} \min_{R_1, \cdots, R_n}\ \sum_{(i,j) \in E} d^p(R_{ij}, R_j R_i^T), \end{equation} where $d^p(\cdot)$ represents a distance measure under a $p$-norm. While many local methods \cite{DBLP:conf/cvpr/Govindu01,DBLP:conf/cvpr/Govindu04,DBLP:conf/iccv/ChatterjeeG13, DBLP:journals/pami/ChatterjeeG18} give a least-squares solution to problem~\eqref{equ:rotation_averaging}, here we exploit a global optimisation approach that can obtain the global optimum. Adopting the chordal distance, the primal problem of rotation averaging is finally given by \footnote{See supplementary material for the complete derivation.} \begin{equation} \label{equ:primal_problem_ra} \begin{split} &\min_R \ \ -\tr(R^TGR) \ \ \ \text{s.t.} \quad R \in \text{SO}(3)^n, \end{split} \end{equation} where $R = [R_1\ R_2\ \cdots\ R_n]$, and $G_{ij} = a_{ij} R_{ij}$, with $a_{ij} = 1$ if the edge between views $i$ and $j$ exists, and $0$ otherwise.
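For illustration, the block matrix $G$ of problem~\eqref{equ:primal_problem_ra} can be assembled as follows (a numpy sketch under our own indexing assumptions; we fill $G$ symmetrically with $G_{ji} = G_{ij}^T$ so that the quadratic form is symmetric):

```python
import numpy as np

def build_G(n, rel_rotations):
    """Assemble the 3n x 3n block matrix G from measured relative rotations.

    `rel_rotations` maps an edge (i, j) with i < j to its measured 3x3
    relative rotation R_ij; absent edges leave zero blocks (a_ij = 0).
    """
    G = np.zeros((3 * n, 3 * n))
    for (i, j), R_ij in rel_rotations.items():
        G[3 * i:3 * i + 3, 3 * j:3 * j + 3] = R_ij       # G_ij = R_ij
        G[3 * j:3 * j + 3, 3 * i:3 * i + 3] = R_ij.T     # G_ji = R_ij^T
    return G
```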
Eriksson \emph{et al.} \cite{EriksonOKC20} solve the following relaxation of the primal problem~\eqref{equ:primal_problem_ra}, obtained by dropping the determinant constraint: \begin{equation} \label{equ:duals_dual_ra} \begin{split} \min_{X} \ \ &-\tr(GX) \\ \text{s.t.} \quad &X_{ii} = I_3,\ i = 1, \cdots, n, \quad X \succeq 0, \end{split} \end{equation} where $X$ can be written as a block matrix with blocks $X_{ij}$, $i, j \in [n]$ and $X_{ij} \in \mathbb{R}^{3 \times 3}$. Problem~\eqref{equ:duals_dual_ra} is an SDP problem \cite{DBLP:books/cu/BV2014}. Since every $X \succeq 0$ can be factored as $Y^TY$ for some $Y$ \cite{DBLP:journals/mp/BurerM03}, and---in the case of rotation averaging---the optimal value of $X$ satisfies $X^{\star} = {R^{\star}}^T R^{\star}$~\cite{EriksonOKC20}, there is an implicit constraint on $X$ such that $\rank (X)=3$. Thus, $X$ can be reformulated as \begin{equation} \label{equ:low_rank_factorization} X = Y^T Y, \end{equation} where $Y = [Y_1\ Y_2\ \cdots\ Y_n], Y_i^T Y_i = I, \forall i \in [n]$. By substituting \eqref{equ:low_rank_factorization} into problem~\eqref{equ:duals_dual_ra}, a new problem is obtained \begin{equation} \label{equ:low_rank_sdp} \begin{split} \min_Y \ \ &-\tr(G Y^T Y) \qquad\qquad\\ \text{s.t.} \quad &Y = [Y_1\ Y_2\ \cdots\ Y_n],\quad Y_i^T Y_i = I, \forall i \in [n]. \end{split} \end{equation} While Tian \emph{et al.}~\cite{DBLP:journals/corr/abs-1903-00597} proposed a block coordinate minimisation (BCM) method to solve the low-rank SDP problem~\eqref{equ:low_rank_sdp}, we give a detailed derivation of the BCM solution, which is further accelerated by exploiting graph sparsity. Note that \begin{align} \label{equ:trace_equation} &\tr(GY^TY) = \tr(YGY^T) = \langle YG, Y \rangle \\ \nonumber &= \sum_{j=1}^n \langle \sum_{i=1}^n Y_iG_{ij}, Y_j \rangle = \sum_{j=1}^n \langle \hat{Q}_j, Y_j \rangle, \end{align} where $\hat{Q}_j = \sum_{i=1}^n Y_iG_{ij}$.
Let $f(Y^k) = \sum_{j=1}^n \langle \hat{Q}_j^k, Y_j^k \rangle$, where superscript $k$ represents the $k$-th iteration in BCM. Since problem~\eqref{equ:low_rank_sdp} minimises $-\tr(GY^TY) = -f$, each step maximises $f$. Since $G_{ii}=\mathbf{0}$, using \eqref{equ:trace_equation} we have \begin{align} &\mathop{\arg\max}_Y f(Y^k) = \mathop{\arg\max}_Y \sum_{j=1}^n \langle \hat{Q}_j^k, Y_j^k \rangle \nonumber \\ = & \mathop{\arg\max}_Y \sum_{j=1}^n \langle \sum_{i \neq j}^n Y_i^k G_{ij}, Y_j^k \rangle = \mathop{\arg\max}_Y \sum_{j=1}^n \langle Q_j^k, Y_j^k \rangle, \nonumber \end{align} where $Q_j = \sum_{i \neq j}^n Y_i G_{ij}$. This leads us to the derivation \begin{align} \label{equ:derivation_low_rank_bcm} & Y_{j_k}^{k+1} = \mathop{\arg\max}_{Y_{j_k}}\ f(Y_1^k, \cdots, Y_{j_k-1}^k, Y_{j_k}, Y_{j_k+1}^k, \cdots, Y_n^k) \nonumber\\ = & \mathop{\arg\max}_{Y_{j_k}}\ \sum_{j=1}^n \langle \sum_{i \neq j}^n Y_i^k G_{ij}, Y_j^k \rangle \nonumber\\ = & \mathop{\arg\max}_{Y_{j_k}}\ \langle Q_{j_k}^k, Y_{j_k} \rangle + \sum_{j \neq j_k}^n \sum_{i \neq j}^n \langle Y_i^k G _{ij}, Y_j^k \rangle \nonumber\\ = & \mathop{\arg\max}_{Y_{j_k}}\ 2\langle Q_{j_k}^k, Y_{j_k} \rangle + \sum_{j \neq j_k}^n \sum_{i \neq j,j_k}^n \langle Y_i^k G _{ij}, Y_j^k \rangle \nonumber\\ = & \mathop{\arg\max}_{Y_{j_k}}\ 2\langle Q_{j_k}^k, Y_{j_k} \rangle = \mathop{\arg\min}_{Y_{j_k}}\ \frac{1}{2} \left\|Y_{j_k} - Q_{j_k}^k\right\|_F^2. \end{align} By solving problem~\eqref{equ:derivation_low_rank_bcm}, the update of $Y_{j_k}$ in problem~\eqref{equ:low_rank_sdp} can be determined by~\cite{DBLP:journals/jscic/LaiO14, DBLP:journals/corr/abs-1903-00597} \begin{equation} \label{equ:svd_rotation} Y_{j_k}^{*} = U_{j_k} I_{3 \times 3} V_{j_k}^T = U_{j_k} V_{j_k}^T, \end{equation} where $U_{j_k}\Sigma V_{j_k}^T$ is the singular value decomposition of $Q_{j_k}^k$. Once the optimal value $Y_{j_k}^{*}$ is obtained, we need to update $Q_j$ for each neighbour $j$ of $j_k$ at each inner iteration.
The update rule is \begin{align} \label{equ:q_update_step} & Q_j^{k+1} = \sum_{i \neq j}^n Y_i^{k+1}G_{ij} = Y_{j_k}^{k+1} G_{j_k j} + \sum_{i \neq j, j_k}Y_i^{k+1}G_{ij} \nonumber\\ = & Y_{j_k}^{k+1} G_{j_k j} + \sum_{i \neq j, j_k} Y_i^k G_{ij} + Y_{j_k}^k G_{j_k j} - Y_{j_k}^k G_{j_k j} \nonumber\\ = & Y_{j_k}^{k+1} G_{j_k j} + \sum_{i \neq j} Y_i^k G_{ij} - Y_{j_k}^k G_{j_k j} \nonumber\\ = & Q_j^k + (Y_{j_k}^{k+1} - Y_{j_k}^k) G_{j_k j}. \end{align} In Algorithm~\ref{alg:rotation_averaging_algorithm}, we outline the BCM with graph sparsity. In steps 6$\sim$7 of Algorithm~\ref{alg:rotation_averaging_algorithm}, the time complexity of $O(n)$ is only an upper bound attained in the general case. In practice, due to the commonly sparse structure of SfM problems, \textit{the time complexity can be further reduced to $O(d)$}, where $d$ is the degree of the node. This property is important for accelerating the optimisation. Notice that our fast view graph filtering makes the graph even sparser, which yields further acceleration. \begin{algorithm} \caption{BCM for SDP~\cite{DBLP:journals/corr/abs-1903-00597} with Graph Sparsity} \label{alg:rotation_averaging_algorithm} \begin{algorithmic}[1] \Require relative rotations $\mathcal{R}_{\text{rel}}$, $maxIterNum$, $Y^0$. \Ensure First-order critical point $Y^{\star}$ \State $k \leftarrow 0$; $Q_j^0 \leftarrow \sum_{i \neq j}^n Y_i G_{ij}, \forall j \in [n]$. \While{$k < maxIterNum$ AND not converged} \For{$i < n$} \State $j_k \leftarrow i$ \State Update $Y_{j_k}^{k+1}$ by Eq.~\eqref{equ:svd_rotation} \For{$\forall j \neq j_k$ AND $G_{j_k j} \neq \mathbf{0}$ } \State Update $Q_j^{k+1}$ by Eq.~\eqref{equ:q_update_step} \EndFor \EndFor \State $k \leftarrow k + 1$; \EndWhile \State return $Y$ \end{algorithmic} \end{algorithm} \paragraph{Discussion of Global Optimality: } Problem~(\ref{equ:low_rank_sdp}) is non-convex, and there is no guarantee that we can obtain the global optimum.
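For concreteness, one realisation of the sweep in Algorithm~\ref{alg:rotation_averaging_algorithm} might look as follows (a simplified numpy sketch, not the authors' implementation: for clarity each $Q_j$ is recomputed from the neighbour blocks instead of being maintained incrementally via Eq.~\eqref{equ:q_update_step}, and the neighbour dictionaries make each inner step cost $O(d)$ as discussed above; we align each block with $Q_j$, i.e. we maximise $\tr(GY^TY)$ so as to minimise $-\tr(GY^TY)$, and the sign should be flipped if $G$ follows the opposite convention):

```python
import numpy as np

def project_stiefel(M):
    """Closest matrix with orthonormal columns: argmin ||Y - M||_F s.t. Y^T Y = I."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def bcm_rotation_averaging(neighbours, n, r=4, max_iters=500, tol=1e-12):
    """Block coordinate minimisation for the low-rank problem (sketch).

    neighbours[j] is a dict {i: G_ij} over the neighbours of j, so each
    inner update touches only deg(j) blocks.  Each Y_j is an r x 3 block
    with orthonormal columns (a point on the Stiefel manifold).
    """
    rng = np.random.default_rng(1)
    Y = [project_stiefel(rng.standard_normal((r, 3))) for _ in range(n)]
    for _ in range(max_iters):
        delta = 0.0
        for j in range(n):
            # Q_j = sum_{i != j} Y_i G_ij, restricted to actual neighbours.
            Q_j = sum(Y[i] @ G_ij for i, G_ij in neighbours[j].items())
            Y_new = project_stiefel(Q_j)   # block update via SVD
            delta = max(delta, float(np.linalg.norm(Y_new - Y[j])))
            Y[j] = Y_new
        if delta < tol:                    # converged: no block moved this sweep
            break
    return Y
```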
In~\cite{DBLP:journals/corr/WangCK17}, the global optimum is guaranteed by selecting an appropriate step size and a random initialisation (Theorem 3.4). However, the theorem only holds for scalar variables. For the rotation averaging problem, global optimality does not hold since we optimise on a manifold. Boumal~\cite{DBLP:journals/corr/Boumal15} proposes a general framework named the \textit{Riemannian Staircase} algorithm (RS), which can find the global optimum. As previous work has applied Riemannian-based methods to ensure global optimality~\cite{rosen2019se, DBLP:journals/corr/abs-1911-03721, DBLP:conf/eccv/DellaertRWMC20}, we refer interested readers to these works for details. \subsection{Local Optimisation Refinement} \label{subsec:local_refine} The global optimisation presented in Sec.~\ref{subsec:global_ra_optimization} assumes that the input relative rotations do not contain any outliers. As a result, it is sensitive to outliers. To further improve the robustness and accuracy of the low-rank BCM method, we follow the \emph{suggest-and-improve} framework of~\cite{park2017general}. The global optimisation approach obtains a good solution close to the global optimum; still, it can be further refined by a gradient descent algorithm. We adopt this framework and use the method of Chatterjee and Govindu \cite{DBLP:conf/iccv/ChatterjeeG13} as a local optimiser. This method performs Iteratively Reweighted Least Squares (IRLS) in the Lie algebra, which leads to an efficient and robust optimiser. The rotations $R_{ij}$, $R_i$, and $R_j$ can be represented by the corresponding Lie algebra elements $\boldsymbol{\omega}_{ij}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_j$, respectively. Using the Baker-Campbell-Hausdorff (BCH) equation, a single constraint in Eq.~\eqref{equ:rotation_averaging} can be approximated to first order by \begin{equation} \label{equ:lie_algebra_ra_constraint} \boldsymbol{\omega}_{ij} = \boldsymbol{\omega}_j - \boldsymbol{\omega}_i.
\end{equation} By collecting the relative constraints, we obtain \begin{equation} \label{equ:linear_ra_equation} A \boldsymbol{\omega}_{\text{global}} = \boldsymbol{\omega}_{\text{rel}}. \end{equation} Here $A$ is a sparse matrix in which, per row of $3 \times 3$ blocks, all blocks are zero except for the two blocks $I$ and $-I$. Encapsulating Eq.~\eqref{equ:linear_ra_equation} with the robust loss $\rho(x) = \frac{x^2}{x^2 + \sigma^2}$, we can optimise a robust cost function in the least-squares sense \begin{equation} \label{equ:robust_l2} \mathop{\arg \min}_{\boldsymbol{\omega}_{\text{global}}} \sum \rho(\left\|A \boldsymbol{\omega}_{\text{global}} - \boldsymbol{\omega}_{\text{rel}}\right\|). \end{equation} \subsection{Hybrid of Global Rotation Averaging} We outline our hybrid rotation averaging algorithm in Algorithm~\ref{alg:hybrid_rotation_averaging_algorithm}. An ablation study of robustness against outliers is shown in Fig.~\ref{fig:outlier_robustness}. The outlier ratio ranges from $0$ to $50\%$ and is incremented in steps of $5\%$. We display the mean rotation error over 30 experiments. As can be observed, both VGF and local refinement improve the robustness of the global rotation averaging approach. \begin{algorithm}[htbp] \caption{Hybrid Rotation Averaging Algorithm} \label{alg:hybrid_rotation_averaging_algorithm} \begin{algorithmic}[1] \Require relative rotations $\mathcal{R}_{\text{rel}}$ \Ensure global rotations $\mathcal{R} = \{R_1, R_2, \cdots, R_n\}$ \State Perform fast VGF as described in Sec.~\ref{subsec:fast_view_graph_filtering}. \State Calculate global rotations using Algorithm~\ref{alg:rotation_averaging_algorithm} (or any other global rotation averaging method). \State Refine global rotations by solving problem~\eqref{equ:robust_l2}. \end{algorithmic} \end{algorithm} \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{robustness} \caption{Ablation study of robustness.
Outliers are generated by perturbing the ground truth by rotations between $60^{\circ}$ and $90^{\circ}$. \textit{global} represents the proposed low-rank BCM method, and \textit{hybrid} represents the proposed hybrid method with VGF and local refinement.} \label{fig:outlier_robustness} \end{figure} \section{Rotation-Averaged Structure from Motion} \label{sec:ra_application} In this section, we apply our rotation averaging method to an incremental SfM pipeline, which is known to suffer from the drift problem. Our hybrid SfM pipeline can be summarised as follows: We first construct the view graph and obtain global rotations from our proposed hybrid rotation averaging approach. Next, we create a seed reconstruction by selecting two appropriate images. We then continue by incrementally registering adjacent camera poses using a RANSAC-based~\cite{DBLP:journals/cacm/FischlerB81} P3P~\cite{DBLP:conf/cvpr/KneipSS11} algorithm, and by triangulating landmarks. To reduce the accumulation of errors in our incremental SfM pipeline, we perform local bundle adjustment after each successful registration of an image, and global bundle adjustment whenever the number of recently added views surpasses a certain threshold. This does not solve the drift problem, however, as each newly computed camera pose is affected by a small error, and these errors accumulate along the graph. Traditional incremental SfM pipelines have no way to rectify these errors. To tackle this problem, we introduce a novel cost function with averaged rotations as regularisers for bundle adjustment. Let $\mathcal{I}_i$ denote the measurements of image $i$. 3D landmarks observed by image $i$ are denoted by the set $\mathcal{P}_i$. Note that the sets $\{\mathcal{P}_{i} | i \in \mathcal{I}\}$ might have repetitive elements, which---in a slight abuse of notation---is ignored for the sake of simplicity. Let $\mathbf{u}_{il}\in\mathcal{I}_i$ furthermore denote the image keypoint measurement of landmark $l$ in frame $i$.
The pre-computed rotation of image $i$ obtained from rotation averaging is denoted $\hat{\mathcal{R}}_{i}$. The proposed cost function is given by \begin{align} \label{eq:map_joint_opt} \sum_{i \in \mathcal{I}} \sum_{l \in \mathcal{I}_i} \rho_v \left( \left\| \mathbf{r}_{\mathcal{I}_{il}} \right\|^2 \right) + \sum_{(i,j) \in \mathcal{E}} w_{ij} \left( \left\| \mathbf{r}_{\mathcal{R}_{ij}} \right\|^2 \right), \end{align} where $\rho_v (\cdot)$ is a robust loss function, and $w_{ij}$ is an individual weight for each known rotation term. In this paper, we fix $w_{ij}$ to a constant. The objective consists of two terms, which are explained as follows. {\bf Visual Term:} We adopt the traditional re-projection error of bundle adjustment as our visual term \begin{equation} \mathbf{r}_{\mathcal{I}_{il}} = \mathbf{u}_{il} - \mathbf{\Pi} (R_i, C_i, \mathcal{P}_i, l), \end{equation} where $R_i$ and $C_i$ are respectively the estimated camera rotation and center, and $\mathbf{\Pi}(\cdot)$ is the projection function that maps landmarks into the image plane. Note that the latter also depends on the camera intrinsics, which---for the sake of a simplified and general notation---are not specified. {\bf Known Rotation Term:} The added known rotation term is \begin{equation} \mathbf{r}_{\mathcal{R}_{ij}} = \log( \hat{\mathcal{R}}_j^T \hat{\mathcal{R}}_{i} R_i^T R_j), \end{equation} where $\log$ is the logarithm map $\mathrm{SO}(3) \rightarrow \mathfrak{so}(3)$. This known rotation term is used as a regulariser in the complete cost function. To better demonstrate the effectiveness of~\eqref{eq:map_joint_opt}, we further explain our cost function and illustrate it with a toy example in Fig.~\ref{figure:incremental_ba_ra}. In Eq.~\eqref{eq:map_joint_opt}, the first term corresponds to the reprojection error, while the second term penalises the large relative pose errors caused by pose drift.
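A sketch of the known rotation term (numpy; the closed-form logarithm map is standard, but the function names are our own and the small-angle branch is a simplification that assumes the residual angle stays away from $\pi$):

```python
import numpy as np

def log_so3(R):
    """Logarithm map SO(3) -> R^3 (axis-angle vector); assumes the angle
    is away from pi, which holds for the small residuals targeted here."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def known_rotation_residual(Rhat_i, Rhat_j, R_i, R_j):
    """r_Rij = log(Rhat_j^T Rhat_i R_i^T R_j); it vanishes whenever the
    incremental estimates R_i, R_j agree with the averaged rotations
    Rhat_i, Rhat_j up to a common global rotation (the gauge freedom)."""
    return log_so3(Rhat_j.T @ Rhat_i @ R_i.T @ R_j)
```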
Note that the reprojection error may remain small even if camera poses are drifting. This can be seen in Fig.~\ref{figure:incremental_ba_ra}~(a): Suppose $C_0$ is correctly registered at first, and $C_1, C_2, C_3$ are registered sequentially. When $C_1$ is wrongly registered, the error is propagated to $C_2$ and then to $C_3$. Traditional BA cannot move the camera poses back to the correct place, because the triangulated 3D points are geometrically coherent with the camera poses, and the reprojection errors are small (measured by the red and green dots in Fig.~\ref{figure:incremental_ba_ra}). In this situation, however, the known rotation term measures the discrepancy between the averaged rotations and the incrementally recovered rotations (see Fig.~\ref{figure:incremental_ba_ra}~(b)). Our optimiser minimises this discrepancy and thereby alleviates pose drift. \begin{figure}[!ht] \centering \subfigure[Incremental camera registration.] { \begin{minipage}{0.8\linewidth} \includegraphics[width=1\linewidth]{incremental.pdf} \end{minipage} } \subfigure[Incremental camera registration with averaged rotations.] { \begin{minipage}{0.82\linewidth} \includegraphics[width=1\linewidth]{incremental_ra.pdf} \end{minipage} } \caption{A toy example to explain our RA-SfM. Camera rotations are drawn as three arrows coloured red, green, and blue. In (a), landmarks are denoted by $\star$, green dots in the image plane represent keypoints, and red dots are reprojected coordinates. In (b), we show the correctly registered camera poses for $C_1, C_2, C_3$. $R_{12}$ and $R_{23}$ denote the relative rotations, which are obtained from (a). $R_{12}^{'}$ and $R_{23}^{'}$ denote the relative rotations obtained from the averaged rotations.} \label{figure:incremental_ba_ra} \end{figure} \section{Experimental Results} \label{sec:experiment} Our experiments aim at demonstrating the accuracy, efficiency, and robustness of the proposed methods.
We implement Levenberg-Marquardt (LM)~\cite{DBLP:books/sp/NocedalW99}, row-by-row block coordinate descent (RBR-BCD)~\cite{EriksonOKC20}, and our hybrid rotation averaging in C++. Besides, the implementations of SE-Sync~\cite{rosen2019se} and Shonan~\cite{DBLP:conf/eccv/DellaertRWMC20} are provided by the authors and publicly available. For HSfM~\cite{DBLP:conf/cvpr/CuiGSH17} and LUD~\cite{DBLP:conf/cvpr/OzyesilS15}, we use \cite{DBLP:journals/pami/ChatterjeeG18} as the rotation averaging solver, and the Ceres solver~\cite{ceres-solver} for bundle adjustment. All approaches are tested on a laptop with a 2.7 GHz CPU and 8GB RAM. \begin{table*}[t] \centering \caption{Comparison of runtime on synthetic datasets. $n$ is the number of rotations, and $\bar{R}$ represents the average rotation error (unit: degree).} \vspace{0.05in} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{| c || c || c || c | c || c | c || c | c || c | c || c | c |} \hline \multirow{2}{*}{$n$} & \multirow{2}{*}{\#edges} & \multirow{2}{*}{$\sigma$} & \multicolumn{2}{c|}{\textbf{LM}~\cite{DBLP:books/sp/NocedalW99}} & \multicolumn{2}{c|}{\textbf{RBR-BCD}~\cite{EriksonOKC20}} & \multicolumn{2}{c|}{\textbf{Shonan}~\cite{DBLP:conf/eccv/DellaertRWMC20}} & \multicolumn{2}{c|}{\textbf{SE-Sync}~\cite{rosen2019se}} & \multicolumn{2}{c|}{\textbf{hybrid RA} } \\ \cline{4-13} & \ & \ & $\bar{R}$ & $time(s)$ & $\bar{R}$ & $time(s)$ & $\bar{R}$ & $time(s)$ & $\bar{R}$ & $time(s)$ & $\bar{R}$ & $time(s)$ \\ \hline \multirow{2}{*}{20} & \multirow{2}{*}{30} & 0.2 & 5.201e-05 & 0.001 & 1.219e-05 & 0.039 & 9.803e-06 & 0.032 & 8.975e-06 & $<$ \textbf{1e-06} & \textbf{7.307e-06} & $<$ \textbf{1e-06} \\ \cline{3-13} & \ & 0.5 & 1.492e-01 & \textbf{0.002} & 7.200e-02 & 0.028 & 1.955e-01 & 0.038 & 2.033e-01 & 0.003 & \textbf{1.550e-01} & \textbf{0.002} \\ \hline \multirow{2}{*}{100} & \multirow{2}{*}{300} & 0.2 & 9.490e-06 & 0.095 & 6.351e-06 & 0.651 & \textbf{5.374e-06} & 0.144 & 5.383e-06 & \textbf{0.005} & 5.938e-06 &
0.006\\ \cline{3-13} & \ & 0.5 & 8.771e-01 & 0.089 & 1.160e-01 & 0.813 & 1.108e-01 & 0.078 & 1.336e-01 & \textbf{0.079} & \textbf{9.400e-02} & 0.189 \\ \hline \multirow{2}{*}{500} & \multirow{2}{*}{1000} & 0.2 & 5.413e-06 & 0.845 & 5.381e-06 & 208.372 & 5.433e-06 & 0.886 & 5.209e-06 & 0.403 & \textbf{5.184e-06} & \textbf{0.159} \\ \cline{3-13} & \ & 0.5 & 7.351e-01 & 0.781 & 1.130e-01 & 245.818 & 1.255e-01 & 0.533 & \textbf{9.417e-02} & \textbf{0.255} & 1.080e-01 & 0.270 \\ \hline \multirow{2}{*}{1000} & \multirow{2}{*}{4000} & 0.2 & 1.800e-01 & 1.213 & \textbf{6.754e-06} & 2,274 & 9.835e-06 & 1.883 & 8.739e-06 & 0.821 & 1.021e-05 & \textbf{0.127}\\ \cline{3-13} & \ & 0.5 & 8.956e-01 & 1.372 & 1.120e-01 & 2,153 & 8.974e-02 & 1.141 & 1.310e-01 & 0.985 & \textbf{8.200e-02} & \textbf{0.949} \\ \hline \multirow{2}{*}{5000} & \multirow{2}{*}{20000} & 0.2 & 1.260e-01 & 4.414 & - & - & 8.341e-06 & 14.870 & \textbf{6.371e-06} & 9.083 & 7.159e-06 & \textbf{0.331} \\ \cline{3-13} & \ & 0.5 & 2.787e-01 & 5.183 & - & - & 1.516e-01 & 12.338 & 1.408e-01 & 3.699 & \textbf{8.200e-02} & \textbf{0.809} \\ \hline \multirow{2}{*}{10000} & \multirow{2}{*}{40000} & 0.2 & 1.410e-01 & 23.714 & - & - & 1.838e-05 & 45.680 & 9.037e-06 & 10.335 & \textbf{7.884e-06} & \textbf{0.362} \\ \cline{3-13} & \ & 0.5 & 3.240e-01 & 27.265 & - & - & 1.209e-01 & 42.627 & 1.386e-01 & 11.128 & \textbf{9.100e-02} & \textbf{1.704} \\ \hline \multirow{2}{*}{50000} & \multirow{2}{*}{200000} & 0.2 & - & - & - & - & 6.013e-06 & 956.821 & - & - & \textbf{6.310e-06} & \textbf{0.515} \\ \cline{3-13} & \ & 0.5 & - & - & - & - & 1.933e-01 & 905.294 & - & - & \textbf{7.500e-02} & \textbf{7.124} \\ \hline \end{tabular} } \label{table:synthetic_data} \end{table*} \subsection{Evaluation of Hybrid Rotation Averaging on Synthetic Datasets} We designed 7 synthetic datasets to evaluate the performance of our rotation averaging approach. The view and relative rotation numbers are shown in Table~\ref{table:synthetic_data}, and
denoted by $n$ and $\#\text{edges}$ respectively. The ground truth absolute rotations are initialised randomly. The relative rotations are constructed from a spanning tree expanded by random edges until the given number of relative poses is reached. All relative rotations are derived from the ground truth, and perturbed by random angular rotations about randomly selected axes. The perturbation angles are normally distributed with zero mean and standard deviation of either $\sigma=0.2$~rad or $\sigma = 0.5$~rad. The initial absolute rotations are chosen randomly. The evaluation results are shown in Table~\ref{table:synthetic_data}, where we compare our method against LM~\cite{DBLP:books/sp/NocedalW99}, RBR-BCD~\cite{EriksonOKC20}, Shonan~\cite{DBLP:conf/eccv/DellaertRWMC20} and SE-Sync~\cite{rosen2019se}. In terms of efficiency, RBR-BCD is the slowest, being almost 1000 times slower than the others when $n = 1000$. SE-Sync is faster than LM but stays within the same order of magnitude. While SE-Sync is slightly faster than ours when the number of cameras is below 500, the hybrid rotation averaging approach is $1\sim 2$ orders of magnitude faster than SE-Sync when the number of views grows beyond 1000. Although Shonan, like SE-Sync and our method, is a low-rank method, it is almost 4 times slower than SE-Sync, and $2\sim3$ orders of magnitude slower than ours, when the number of cameras goes above 5000. In terms of the scale of the solved problems, RBR-BCD failed when the camera number increased to $5000, 10000,$ or $50000$. This is primarily due to insufficient memory for optimisation, and we mark the corresponding cells in the table by ``--''. LM and SE-Sync failed when the camera number reaches $50000$, as there is insufficient memory to perform the CHOLMOD~\cite{DBLP:journals/toms/ChenDHR08} factorisation.
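The synthetic problem construction described at the beginning of this subsection can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the actual benchmark generator: ground-truth rotations are drawn at random, measurements live on a spanning tree plus random extra edges, and each relative rotation is perturbed about a random axis by a normally distributed angle.

```python
# Sketch (illustrative, assumed details): generate a synthetic rotation
# averaging problem with n absolute rotations, a spanning tree plus random
# edges, and noisy relative-rotation measurements R_ij = noise * R_j * R_i^T.
import numpy as np

rng = np.random.default_rng(0)

def rodrigues(axis, angle):
    """Rotation matrix from an axis and an angle (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def random_rotation():
    return rodrigues(rng.normal(size=3), rng.uniform(0.0, np.pi))

def make_problem(n, n_edges, sigma):
    R_gt = [random_rotation() for _ in range(n)]
    edges = {(i - 1, i) for i in range(1, n)}     # spanning tree (a chain)
    while len(edges) < n_edges:                   # expand with random edges
        i, j = sorted(rng.choice(n, size=2, replace=False))
        edges.add((i, j))
    measurements = {}
    for (i, j) in edges:
        # random axis, normally distributed perturbation angle (sigma in rad)
        noise = rodrigues(rng.normal(size=3), rng.normal(0.0, sigma))
        measurements[(i, j)] = noise @ R_gt[j] @ R_gt[i].T
    return R_gt, measurements

R_gt, meas = make_problem(n=20, n_edges=30, sigma=0.2)
```

With $\sigma = 0$ the measurements reduce exactly to the ground-truth relative rotations, which makes the generator easy to sanity-check.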
As our approach only needs to compute the SVD of a small block matrix and evaluate $d$ matrix operations of $3 \times 3$ matrices in each iteration, we can solve all the evaluated large-scale datasets. In terms of accuracy, LM achieves the global optimum only with a certain probability ($30\%-70\%$ as reported in \cite{EriksonOKC20}), and Table~\ref{table:synthetic_data} only shows its best results. All of the evaluated globally optimal methods reach the same accuracy and obtain the global optimum in the successful cases. \subsection{Evaluation of RA-SfM on Real-World Datasets} We evaluate the performance of our RA-SfM on large-scale real-world datasets and compare it against state-of-the-art incremental~\cite{DBLP:conf/cvpr/SchonbergerF16}, global~\cite{DBLP:conf/cvpr/OzyesilS15} and hybrid~\cite{DBLP:conf/cvpr/CuiGSH17} SfM approaches. Since the quasi-convex SfM approach~\cite{DBLP:conf/cvpr/ZhangCL18} is sensitive to outliers and extremely slow on such datasets, we did not evaluate it in our experiments. Figure~\ref{figure:campus_result} shows the reconstruction results of COLMAP and our RA-SfM on the Campus~\cite{DBLP:conf/iccv/CuiT15} dataset. This dataset, which contains a loop, mainly consists of plants that produce many incorrect matching results. COLMAP~\cite{DBLP:conf/cvpr/SchonbergerF16} fails to reconstruct this dataset, as the camera poses drift and the loop is not closed. Our approach closes the loop successfully, as the known rotation optimisation further constrains the camera poses after the initial registration. \begin{figure}[t] \centering { \includegraphics[width=0.99\linewidth]{campus.pdf} } \caption{Reconstruction results for the Campus dataset~\cite{DBLP:conf/iccv/CuiT15}. Left: COLMAP~\cite{DBLP:conf/cvpr/SchonbergerF16}, Right: Our RA-SfM.} \label{figure:campus_result} \vspace{-0.2in} \end{figure} \begin{table*}[!ht] \centering \caption{Comparison of runtime and accuracy on online datasets \cite{DBLP:conf/eccv/WilsonS14}.
$N_i$ and $N_c$ denote the number of images and registered cameras respectively, $N_p$ denotes the number of reconstructed 3D landmarks, and MRE is the mean reprojection error in pixels. $T$ and $T_R$ denote the total reconstruction time and the hybrid rotation averaging time respectively (in seconds). The best MREs are marked in bold font.} \vspace{0.05in} \resizebox{1.0\textwidth}{!}{ \begin{tabular}{| c | | c | | c | c | c | c | | c | c | c | c | | c | c | c | c | | c | c | c | c | c |} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{$N_i$} & \multicolumn{4}{c||}{\textbf{COLMAP} \cite{DBLP:conf/cvpr/SchonbergerF16}} & \multicolumn{4}{c||}{\textbf{LUD} \cite{DBLP:conf/cvpr/OzyesilS15}} & \multicolumn{4}{c||}{\textbf{HSfM} \cite{DBLP:conf/cvpr/CuiGSH17}} & \multicolumn{5}{c|}{\textbf{RA-SfM}}\\ \cline{3-19} & \ & $N_c$ & $N_p$ & MRE & $T (s)$ & $N_c$ & $N_p$ & MRE & $T (s)$ & $N_c$ & $N_p$ & MRE & $T (s)$ & $N_c$ & $N_p$ & MRE & $T_R (s)$ & $T (s)$\\ \hline Alamo & 2,915 & 906 & 138K & 0.69 & 3,180 & 578 & 146K & 1.28 & 260 & 522 & 149K & 1.62 & 1,079 & 895 & 141K & \textbf{0.65} & 0.340 & 2,771 \\ \hline Ellis Island & 2,587 & 801 & 154K & 0.73 & 4,307 & 234 & 16K & 1.54 & 24 & 208 & 34K & 2.53 & 169 & 727 & 146K & \textbf{0.72} & 2.290 & 3,920 \\ \hline Gendarmenmarkt & 1,463 & 1,040 & 209K & 0.71 & 3,737 & 705 & 87K & 1.51 & 104 & 542 & 74K & 1.94 & 377 & 1,023 & 202K & \textbf{0.70} & 1.997 & 3,931 \\ \hline Madrid Metropolis & 1,344 & 460 & 60K & 0.62 & 1,320 & 350 & 51K & 1.08 & 36 & 292 & 51K & 1.48 & 221 & 438 & 66K & \textbf{0.59} & 0.420 & 1,417 \\ \hline Montreal N.D.
& 2,298 & 554 & 107K & \textbf{0.67} & 1,902 & 462 & 166K & 1.64 & 194 & 418 & 155K & 1.95 & 1041 & 528 & 105K & 0.68 & 0.115 & 1,423 \\ \hline Notre Dame & 1,431 & 1,408 & 349K & 0.76 & 22,788 & 550 & 262K & 2.06 & 259 & 526 & 281K & 2.30 & 2,375 & 1,409 & 353K & \textbf{0.75} & 0.131 & 19,943 \\ \hline NYC Library & 2,550 & 556 & 101K & 0.72 & 1,698 & 336 & 70K & 1.52 & 75 & 282 & 74K & 1.99 & 356 & 519 & 100K & \textbf{0.65} & 0.515 & 1,715 \\ \hline Piazza del Popolo & 2,251 & 1,011 & 122K & 0.68 & 2,676 & 329 & 38K & 1.65 & 62 & 286 & 35K & 1.92 & 212 & 966 & 122K & \textbf{0.66} & 0.360 & 3,258 \\ \hline Piccadilly & 7,351 & 3,129 & 362K & \textbf{0.73} & 16,590 & 2,301 & 202K & 1.83 & 262 & 1,665 & 185K & 2.09 & 2,169 & 3,041 & 363K & 0.80 & 1.422 & 15,109 \\ \hline Roman Forum & 2,364 & 1,594 & 284K & \textbf{0.71} & 5,388 & 1,045 & 256K & 1.71 & 182 & 1,071 & 262K & 1.93 & 2,237 & 1,460 & 267K & 0.77 & 1.938 & 5,408 \\ \hline Tower of London & 1,576 & 707 & 140K & 0.61 & 2,767 & 485 & 140K & 1.65 & 95 & 398 & 149K & 1.91 & 816 & 672 & 139K & \textbf{0.58} & 0.800 & 1,979 \\ \hline Trafalgar & 15,685 & 6,980 & 581K & 0.81 & 14,790 & 5,044 & 378K & 1.56 & 713 & 3,446 & 318K & 1.95 & 5,761 & 7,085 & 597K & \textbf{0.72} & 4.213 & 14,831 \\ \hline Union Square & 5,961 & 937 & 69K & 0.66 & 2,604 & 803 & 41K & 1.65 & 107 & 769 & 38K & 1.88 & 1,763 & 809 & 57K & \textbf{0.52} & 1.304 & 1,962 \\ \hline Vienna Cathedral & 6,288 & 1,185 & 290K & 0.74 & 9,714 & 849 & 203K & 1.91 & 173 & 662 & 252K & 2.36 & 2,307 & 1,173 & 303K & \textbf{0.71} & 1.959 & 16,111 \\ \hline Yorkminster & 3,368 & 1,022 & 259K & 0.71 & 10,806 & 421 & 132K & 1.75 & 135 & 417 & 129K & 1.93 & 1,487 & 614 & 183K & \textbf{0.64} & 3.183 & 9,299 \\ \hline \end{tabular} } \label{table:internet_data} \end{table*} \begin{figure*}[t] \centering { \includegraphics[width=0.9\linewidth]{online_data.pdf} } \caption{Visual reconstruction results for some of the online 
datasets~\cite{DBLP:conf/eccv/WilsonS14}. For each subfigure, the top and bottom images are respectively the results obtained from COLMAP~\cite{DBLP:conf/cvpr/SchonbergerF16} and our RA-SfM. (The first two columns are results of two parts of the Ellis Island dataset.) } \label{fig:visual_internet_recon_results} \vspace{-0.15in} \end{figure*} We also evaluated our approach on the online datasets from~\cite{DBLP:conf/eccv/WilsonS14}, which are collections of challenging unordered images. These datasets contain many wrong epipolar geometries due to extreme viewpoint, scale, and illumination changes. The runtime and accuracy results are shown in Table~\ref{table:internet_data}. As can be observed, COLMAP~\cite{DBLP:conf/cvpr/SchonbergerF16} recovers the most camera poses on most of the online datasets. However, our method has the lowest mean reprojection error (MRE) on most of them, which indicates that RA-SfM is more robust and accurate than COLMAP. For our RA-SfM, the time for rotation averaging is given separately in the penultimate column (denoted as $T_R$). While LUD~\cite{DBLP:conf/cvpr/OzyesilS15} is the most efficient of the evaluated methods, it has a large MRE, and it recovers fewer camera poses than ours and COLMAP. HSfM~\cite{DBLP:conf/cvpr/CuiGSH17} is faster than COLMAP and RA-SfM because it only samples 2 correspondences to compute the camera centers in each RANSAC iteration. However, HSfM~\cite{DBLP:conf/cvpr/CuiGSH17} recovers the fewest camera poses and fails to recover the correct camera centers. Some visual results for the online datasets are shown in Fig.~\ref{fig:visual_internet_recon_results}. For each subfigure, the top and bottom images are the results obtained by COLMAP~\cite{DBLP:conf/cvpr/SchonbergerF16} and our RA-SfM, respectively. For the Ellis Island dataset, we show two different parts in the first two columns of Fig.~\ref{fig:visual_internet_recon_results}, where the red rectangle area highlights the comparison.
For the Gendarmenmarkt dataset, the reconstruction result of COLMAP is poor on the left part, which indicates wrongly estimated camera poses. For the Vienna Cathedral dataset, though COLMAP recovers more camera poses than ours, our approach reconstructs more scene details than COLMAP, as indicated by the red rectangle. From Table~\ref{table:internet_data} and Fig.~\ref{fig:visual_internet_recon_results}, we conclude that our RA-SfM can effectively correct wrongly registered camera poses in the incremental SfM pipeline, and achieves state-of-the-art robustness and accuracy. \section{Conclusion} \label{sec:conclusion} \vspace{-0.05in} This paper presents a hybrid rotation averaging method that is robust to outliers. We combine fast view graph filtering to increase graph sparsity with state-of-the-art implementations of both global and local optimization methods. The exposition is rounded off by a soft embedding into an incremental SfM pipeline, leading to accurate, reliable, and highly efficient results. However, our method solves the rotation averaging problem all at once, so it may also hit memory limitations in larger scenes. In future work, we are interested in extending this work to larger scenes in a more memory-efficient manner. \noindent \textbf{Acknowledgement:} We sincerely thank Prof. Frank Dellaert and Jing Wu for the discussion of the experimental details, and also for their help in improving the experimental results. L. Kneip furthermore acknowledges the support of the Natural Science Foundation of Shanghai (grant number 19ZR1434000), and his affiliation with the Shanghai Engineering Research Center of Intelligent Vision and Imaging. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} A realistic description of the electromagnetic response of atomic nuclei is a challenging many-body problem, as it requires an accurate understanding of both the nuclear dynamics and of the interaction vertex. In this regard, a valuable strategy consists in analyzing the scaling properties of nuclear response functions in a variety of kinematic setups \cite{West:1974ua,Day:1987az,Donnelly:1998xg}. Scaling of the first kind is said to occur when the electron-nucleus cross section or the longitudinal/transverse response functions, divided by an appropriate function describing the single-nucleon physics, no longer depend on two variables (for example the energy transfer $\omega$ and the absolute value of the 3-momentum transfer $|{\bf q}|$ in the Laboratory frame), but only upon a specific function of them, which defines the scaling variable. Scaling of the second kind takes place when there is no dependence on the nuclear species. Finally, the simultaneous occurrence of both kinds of scaling is denoted as superscaling~\cite{Alberico:1988bv}. Superscaling is exactly fulfilled by the Global Relativistic Fermi gas (GRFG) model, for which a simple and symmetric scaling function can be derived in terms of the dimensionless scaling variable $\psi$~\cite{Barbaro:1998gu} (explicit expressions are provided in Sec.~\ref{GRFG} below). However, contrary to the GRFG model predictions, the results extracted from experimental data reveal an asymmetric shape of the scaling function, with a tail that extends to high values of $\psi$ (and $\omega$)~\cite{Amaro:2004bs}. These results represent a strong constraint for theoretical models of electron scattering reactions.
Extensive studies with a large variety of models reveal the importance of a proper description of the interaction of the knocked-out nucleons with the residual nucleus---final state interactions (FSI)---to obtain the tail of the scaling function~\cite{Caballero:2005sj,Caballero:2006wi,Caballero:2007tz,Meucci:2009nm,Antonov:2011bi}. The authors of Refs.~\cite{Caballero:2005sj,Caballero:2006wi} argue that, while this asymmetry of the scaling function is largely absent in non-relativistic mean-field models, it can be recovered within the relativistic impulse approximation, provided that FSI are described using a strong relativistic mean field (RMF) potential. Asymmetric scaling functions also emerge in semi-relativistic models when FSI are described by local potentials derived from the RMF one~\cite{Caballero:2007tz}. On the other hand, the comparison between semi-relativistic and relativistic results shows a breakdown of zeroth-kind scaling, i.e., different scaling functions in the longitudinal and transverse channels, only when the fully relativistic mean field approach is employed. In Ref.~\cite{Caballero:2007tz} this effect is ascribed to the dynamical enhancement of the lower components of the Dirac spinors, which is absent in the semi-relativistic approach. In this work we analyze the scaling properties exhibited by Green's Function Monte Carlo (GFMC) calculations. GFMC is an {\em ab initio} method allowing for a very accurate description of the properties of $A\leq 12$ nuclei, in which the dynamics of the constituent nucleons are fully considered~\cite{Lovato:2013cua,Lovato:2014eva,Lovato:2015qka}. The longitudinal and transverse electromagnetic response functions of $^{12}$C, recently computed within GFMC, turn out to be in very good agreement with experiment when two-body currents are accounted for~\cite{Lovato:2016gkq}.
Despite this remarkable result, GFMC is currently limited to $^{12}$C because of the exponentially growing cost of the calculation with the number of nucleons. In addition, the inclusion of relativistic kinematics and baryon resonance production would involve non-trivial difficulties. The study of the behavior of the scaling functions obtained from the GFMC calculations, while being interesting in its own right, is aimed at elucidating the role of initial and final state correlations in the asymmetric shape of the scaling function. In Section~\ref{scaling} we review the derivation of the electron-nucleus cross section, as well as its expression in terms of the longitudinal and transverse response functions, which are necessary to introduce the concept of scaling. In Section~\ref{GFMC}, the main elements of the Green's Function Monte Carlo approach are briefly outlined, while in Section~\ref{GRFG} we explicitly derive the expressions of the longitudinal and transverse scaling functions in the context of the GRFG model, both in the relativistic and non relativistic cases. In Section~\ref{results} we report the results of our analysis of the scaling features of the GFMC response functions for the $^4$He and $^{12}$C nuclei in different kinematics. We then discuss a novel interpretation of the longitudinal and transverse scaling functions in terms of the nucleon-density function. Finally, in Section \ref{conclusion} we summarize our findings and draw our conclusions.
\section{Scaling of the nuclear electromagnetic response within the Green's Function Monte Carlo approach} \label{scaling} In the one-photon-exchange approximation, the double differential electron-nucleus cross section can be written in the form \begin{equation} \label{xsec} \frac{d^2\sigma}{d E_{e^\prime} d\Omega_{e^\prime}}=\frac{\alpha^2}{q^4}\frac{E_{e^\prime}}{E_e}L_{\mu\nu}W^{\mu\nu} \ , \end{equation} where $k_e=(E_e,{\bf k}_e)$ and $k_{e^\prime}=(E_{e^\prime},{\bf k}_{e^\prime})$ are the laboratory four-momenta of the incoming and outgoing electrons, respectively; $\alpha \simeq 1/137$ is the fine structure constant, $d\Omega_{e^\prime}$, the differential solid angle in the direction of ${\bf k}_{e^\prime}$, and $q=k_e - k_{e^\prime} =(\omega,{\bf q})$ the four momentum transfer. The leptonic tensor is given by \begin{align} L^{\mu\nu}=2 \left( k_{e^\prime}^\mu k_e^\nu+ k_e^\mu k_{e^\prime}^\nu- g^{\mu\nu}k_{e^\prime}\cdot k_e \right)\,. \end{align} The hadronic tensor encompasses the electromagnetic transitions from the target nucleus to all possible final states. It is thus given by \begin{align} \label{response:tensor} W^{\mu \nu} =\sum_f \langle 0| {J^\mu}^\dagger(q) | f \rangle \langle f | J^\nu(q) | 0 \rangle \, \delta^{(4)}(P_0+q-P_f) \ , \end{align} where $| 0 \rangle$ and $| f \rangle$ denote the initial and final hadronic states with four-momenta $P_0 = ( E_0,{\bf p}_0 )$ and $P_f = (E_f,{\bf p}_f) $, while $J(q)$ is the electromagnetic nuclear current operator. Equation \eqref{xsec} can be rewritten in terms of two response functions, denoted by $R_L({\bf q}, \omega)$ and $R_T({\bf q},\omega)$, describing interactions with longitudinally (L) and transversely (T) polarized photons, respectively. 
The resulting expression reads \begin{align} \frac{d^2\sigma}{d E_{e^\prime} d\Omega_{e^\prime}} & =\left( \frac{d \sigma}{d\Omega_{e^\prime}} \right)_{\rm{M}} \Big[ A_L(|{\bf q}|,\omega,\theta_{e^\prime}) R_L(|{\bf q}|,\omega) \nonumber \\ & + A_T(|{\bf q}|,\omega,\theta_{e^\prime}) R_T(|{\bf q}|,\omega) \Big] \ , \end{align} where \begin{align} A_L = \Big( \frac{q^2}{{\bf q}^2}\Big)^2 \ \ \ , \ \ \ A_T = -\frac{1}{2}\frac{q^2}{{\bf q}^2}+\tan^2\frac{\theta_{e^\prime}}{2} \ , \end{align} and \begin{align} \label{Mott} \left( \frac{d \sigma}{d \Omega_{e^\prime}} \right)_{\rm{M}}= \left[ \frac{\alpha \cos(\theta_{e^\prime}/2)}{2 E_{e^\prime}\sin^2(\theta_{e^\prime}/2) }\right]^2 \end{align} is the Mott cross section. The L and T response functions can be readily expressed in terms of specific components of the hadronic tensor. Choosing the $z$-axis along the direction of the momentum transfer one finds \begin{align} \label{RL} R_L & = W^{00}\ ,\\ \label{RT} R_T &= \sum_{ij=1}^3\Big(\delta_{ij}-\frac{q_iq_j}{{\bf q}^2}\Big)W^{ij} \ . \end{align} \subsection{The Green's function Monte Carlo approach} \label{GFMC} GFMC provides a suitable framework to carry out accurate calculations of a variety of nuclear properties in the non relativistic regime, typically corresponding to $|{\bf q}| \buildrel < \over {_{\sim}} 500 \ {\rm MeV}$ (for a recent review of Quantum Monte Carlo methods for nuclear physics see, e.g., Ref. \cite{Carlson:2014vla}).
The longitudinal and transverse response functions are given by \begin{align} R_L({\bf q}, \omega)&= \sum_f \langle 0|\rho^\dagger({\bf q})|f\rangle\langle f|\rho({\bf q})|0\rangle\delta(\omega+E_0-E_f)\,, \nonumber\\ R_T({\bf q}, \omega)&= \sum_f \langle 0|{\bf j}_T^\dagger({\bf q})|f\rangle\langle f|{\bf j}_T({\bf q})|0\rangle\delta(\omega+E_0-E_f) \,, \label{resp:GFMC} \end{align} where $\rho({\bf q})$ and ${\bf j}_T({\bf q})$ denote non-relativistic reductions of the nuclear-charge and transverse-current operators, respectively \cite{Carlson:2001mp}. Valuable information on the L and T responses can be obtained from their Laplace transforms, also referred to as Euclidean responses \begin{equation} \widetilde{E}_{T,L}({\bf q}, \tau)= \int_{\omega_{\rm{el}}}^\infty \,{d\omega} e^{-\omega \tau}R_{T,L}({\bf q}, \omega)\ . \end{equation} The lower integration limit $\omega_{\rm{el}}= {\bf q}^2/2M_A$, $M_A$ being the mass of the target nucleus, is the elastic scattering threshold---corresponding to the $|f \rangle = |0 \rangle$ term in the sum of Eq. \eqref{response:tensor}---whose contribution is excluded. Within GFMC, the Euclidean responses are evaluated from \begin{align} \nonumber \widetilde{E}_L({\bf q},\tau) & = \langle 0| \rho^\dagger({\bf q}) e^{-(H-E_0)\tau} \rho({\bf q})|0\rangle \\ & - |\langle 0 | \rho({\bf q}) | 0 \rangle|^2 e^{-\omega_{\rm el} \tau} \ , \label{eq:eucL_mat_el} \end{align} and \begin{align} \nonumber \widetilde{E}_T({\bf q},\tau) & = \langle 0| {\bf j}_T^\dagger({\bf q}) e^{-(H-E_0)\tau} {\bf j}_T({\bf q})|0\rangle \\ & - |\langle 0 | {\bf j}_T({\bf q}) | 0 \rangle|^2 e^{-\omega_{\rm el} \tau} \ . \label{eq:eucT_mat_el} \end{align} Note that, although the states $|f \rangle \neq | 0 \rangle$ do not appear explicitly in Eqs. \eqref{eq:eucL_mat_el} and \eqref{eq:eucT_mat_el}, the Euclidean responses include the FSI effects of the particles involved in the electromagnetic interaction, both among themselves and with the spectator nucleons.
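As a simple numerical illustration of the Euclidean-response definition above, one can Laplace-transform a model response function. The Gaussian below is purely illustrative (it is not a GFMC output, and the peak position and width are arbitrary choices); it only shows how increasing $\tau$ exponentially damps the high-$\omega$ part of the response.

```python
# Toy check of E(tau) = int dω e^{-ω τ} R(ω) for an illustrative Gaussian
# model response (hypothetical parameters, not a GFMC result).
import numpy as np

omega = np.linspace(0.0, 600.0, 6001)        # energy transfer grid (MeV)
d = omega[1] - omega[0]
omega0, width = 200.0, 50.0                  # illustrative peak parameters
R = np.exp(-0.5 * ((omega - omega0) / width) ** 2)   # model R_{L,T}(q, ω)

def euclidean_response(tau):
    """Trapezoidal evaluation of the Laplace transform on the grid."""
    f = np.exp(-omega * tau) * R
    return float(((f[:-1] + f[1:]) * 0.5 * d).sum())

taus = np.array([0.0, 0.005, 0.01, 0.02])    # imaginary time (MeV^-1)
E = np.array([euclidean_response(t) for t in taus])
assert np.all(np.diff(E) < 0)                # damping grows with tau
```

At $\tau=0$ the Euclidean response reduces to the energy integral of the response (here $\approx \sigma\sqrt{2\pi}$ for the Gaussian), which is a convenient normalization check; the hard part in practice is the inverse problem, discussed next.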
The inversion of the Laplace transform, needed to retrieve the energy dependence of the responses, is long known to involve severe difficulties. However, maximum-entropy techniques, based on Bayesian inference arguments, have been successfully exploited to perform accurate inversions, supplemented by reliable estimates of the theoretical uncertainty. In the case of $^{12}$C, particular care has to be devoted to the subtraction of contributions arising from elastic scattering and the transitions to the low-lying $2^+$, $0^+_2$, and $4^+$ states~\cite{Lovato:2015qka}. \subsection{Scaling within the relativistic Fermi gas model} \label{GRFG} The easiest, albeit quite crude, approximation to describe the hadron tensor consists in using the GRFG model. Within this approach the scattering process is assumed to take place on a single nucleon with four-momentum $p=(E({\bf p}),{\bf p})$, where $E({\bf p})=\sqrt{|{\bf p}|^2+m^2}$, $m$ being the nucleon mass. The requirement that the struck nucleon is in the target nucleus implies that $|{\bf p}|$ is smaller than the Fermi momentum $p_F$. Furthermore, the outgoing nucleon with four-momentum ${p^\prime}^\mu=(p+q)^\mu$ should lie above the Fermi surface. The expression of the hadron tensor describing the response of the target nucleus then reads \begin{align} W^{\mu\nu}=& \frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\frac{m^2}{E({\bf p})E({\bf p+q})}\ w^{\mu\nu}(p+q,p)\nonumber\\ &\times \theta(p_F-|{\bf p}|)\theta(|{\bf p+q}|-p_F)\nonumber\\ &\times \delta(\omega+E({\bf p})-E({\bf p+q}))\ . \label{had:tens:FG} \end{align} Since we only discuss symmetric nuclei, $\mathcal{N}$ denotes both the number of protons and the number of neutrons in the nucleus. The single-nucleon response tensor $w^{\mu\nu}(p+q,p)$ encodes the response of a system in which a nucleon with 4-momentum $p$ in the initial state is scattered by a (virtual) photon, leading to a final state with a nucleon carrying a 4-momentum $(p+q)$.
The following general expression \begin{align} \label{w12} w^{\mu\nu}(p+q,p)=& - W_1(\tau)\Big(g^{\mu\nu}-\frac{q^\mu q^\nu}{q^2}\Big)\nonumber\\ &+ W_2(\tau)\frac{1}{m^2}\Big(p^\mu-\frac{p\cdot q}{q^2}q^\mu\Big)\nonumber\\ &\times \Big( p^\nu -\frac{p\cdot q}{q^2}q^\nu\Big)\ , \end{align} where $\tau= -q^2/4m^2= Q^2/4m^2\geq0$, holds. It is well known that the nucleon structure functions $W_{1,2}$ can be written in terms of the proton and neutron electric and magnetic form factors as \begin{align} W_1(\tau)=& \tau G^2_M(\tau)\ ,\nonumber\\ W_2(\tau)=& \frac{G_E^2(\tau)+\tau G_M^2(\tau)}{(1+\tau)}\ , \end{align} and \begin{align} G_E(\tau)&= G^p_{E}(\tau)\frac{1}{2}(1+\tau_{z,i})+ G_E^n(\tau)\frac{1}{2}(1-\tau_{z,i})\ ,\nonumber\\ G_M(\tau) &= G_M^p(\tau)\frac{1}{2}(1+\tau_{z,i})+ G_M^n(\tau)\frac{1}{2}(1-\tau_{z,i})\ , \end{align} where $\tau_{z,p/n} = \pm 1$. Using the GRFG model to parametrize the nuclear amplitudes, the integral entering Eq.~\eqref{had:tens:FG} can be solved analytically. We start by evaluating the function \begin{align} F(p_F,q)= & \frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}(p_F,q,{\bf p})\ , \label{f_scal} \end{align} with \begin{align} \mathcal{F}(p_F,q,{\bf p})=&\frac{m^2}{E({\bf p})E({\bf p+q})}\nonumber\\ &\times \theta(p_F-|{\bf p}|)\theta(|{\bf p+q}|-p_F)\nonumber\\ &\times \delta(\omega+E({\bf p})-E({\bf p+q}))\, \label{eq:scale_integrand} \end{align} resulting in~\cite{Alberico:1988bv,Donnelly:1991qy} \begin{align} F(p_F,q) =\frac{3\mathcal{N}m^2}{2p_F^3|{\bf q}|}\theta(E_F-\Gamma)(E_F-\Gamma)\ , \end{align} where we have introduced $E_F=\sqrt{p_F^2+m^2}$ and \begin{align} \Gamma&={\rm Max}\{\Gamma_1,\Gamma_2,\Gamma_3\}\nonumber\\ &={\rm Max}\Big\{m, E_F-\omega,\frac{-\omega+|{\bf q}|\sqrt{1+1/\tau}}{2}\Big\}\ .
\end{align} It is convenient to introduce the widespread set of dimensionless variables~\cite{Alberico:1988bv} \begin{align} \lambda=\omega/2m\ ,\nonumber\\ \kappa=|{\bf q}|/2m\ ,\nonumber\\ \eta_F=p_F/m\ . \end{align} The minimum value $\Gamma_3/m=1$, attained at \begin{align} \lambda=\lambda^0=\frac{1}{2}\Big[ \sqrt{(1+4 \kappa^2)}-1\Big]\ , \end{align} corresponds to the quasi elastic peak, $\tau = \lambda$~\cite{Alberico:1988bv}. In the limit of large $|{\bf q}|$, the relation $\Gamma=\Gamma_3$ is satisfied for each value of $\omega$. Hence, a dimensionless scaling variable can be defined in terms of this quantity as~\cite{Alberico:1988bv} \begin{align} \psi={\rm sign}(\lambda-\lambda^0)\Big[\frac{1}{\xi_F}\Big(\frac{\Gamma_3}{m}-1\Big)\Big]^{1/2}\ , \end{align} with $\xi_F= E_F/m -1$ and such that $\psi=0$ at the quasi elastic peak. Note that this definition of the scaling variable is equivalent to the more common expression \begin{align} \psi= \frac{1}{\sqrt{\xi_F}}\frac{\lambda-\tau}{\sqrt{(1+\lambda)\tau+\kappa\sqrt{\tau(1+\tau)}}} \,. \end{align} Collecting the previous results one obtains \begin{align} F(p_F,q)=\frac{3\mathcal{N}\xi_F}{4 \eta_F^3 m \kappa}\big(1-\psi^2\big)\theta(1-\psi^2)\ . \end{align} Substituting Eqs.~\eqref{had:tens:FG} and \eqref{w12} into Eqs.~\eqref{RL}, \eqref{RT} leads to the following expressions for the L and T response functions \begin{align} R_L=&\frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}(p_F,q,{\bf p})\Big\{ -W_1(\tau)\Big(1-\frac{{\omega}^2}{q^2}\Big)\nonumber\\ &+\frac{W_2(\tau)}{m^2}\Big[ E({\bf p}) - \frac{p\cdot q}{q^2}\omega\Big]^2\Big\}\ ,\nonumber\\ R_T=&\frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}(p_F,q,{\bf p})\Big\{2 W_1(\tau)+\frac{W_2(\tau)}{m^2}{\bf p}_T^2\Big\}\ .
\end{align} After performing the integrations, the responses can be cast in the form \begin{align} R_L=& \frac{3\mathcal{N}\xi_F}{4 \eta_F^3 m \kappa}\big(1-\psi^2\big)\theta(1-\psi^2)\nonumber\\ &\times \Big\{\frac{\kappa^2}{\tau}[G_E^2(\tau)+W_2(\tau)\Delta ]\Big\}\ ,\nonumber\\ R_T=& \frac{3\mathcal{N}\xi_F}{4 \eta_F^3 m \kappa}\big(1-\psi^2\big)\theta(1-\psi^2)\nonumber\\ &\times \Big\{ 2\tau G^2_M(\tau)+ W_2(\tau)\Delta\Big\}\ , \end{align} where \begin{align} \Delta= \xi_F(1-\psi^2)\Big[\frac{\sqrt{\tau(1+\tau)}}{\kappa}+\xi_F(1-\psi^2)\frac{\tau}{3\kappa^2}\Big]\ . \end{align} The next step consists in the definition of the longitudinal and transverse scaling functions~\cite{Maieron:2001it} \begin{align} f_L(\psi)= p_F\times \frac{R_L}{G_L}\ ,\nonumber\\ f_T(\psi)= p_F\times \frac{R_T}{G_T}\ , \end{align} where \begin{align} G_L=\frac{\mathcal{N}}{2\kappa} \Big\{\frac{\kappa^2}{\tau}[G_E^2(\tau)+W_2(\tau)\Delta ]\Big\}\ ,\nonumber\\ G_T=\frac{\mathcal{N}}{2\kappa}\Big\{ 2\tau G^2_M(\tau)+ W_2(\tau)\Delta\Big\}\ . \label{FG:pre:fact} \end{align} Within the GRFG model, the longitudinal and transverse channels share the same scaling function. This is a symmetric function centered at $\psi=0$, \begin{align} f(\psi)=f_L(\psi)= f_T(\psi)=\frac{3\xi_F}{2 \eta_F^2}\big(1-\psi^2\big)\theta(1-\psi^2)\ .
\end{align} In the non relativistic limit the L and T responses can be expressed as \begin{align} R_L=& \frac{3\mathcal{N}}{4\pi p_F^3}\int d^3p \frac{1}{2}\sum_{s,s^\prime}\Big\{ \chi^\dagger_s \rho^\dagger({\bf q})\chi_{s^\prime}\chi^\dagger_{s^\prime}\rho({\bf q})\chi_s\Big\}\nonumber\\ &\times \theta(p_F-|{\bf p}|)\theta(|{\bf p+q}|-p_F)\nonumber\\ &\times \delta\Big(\omega +\frac{{\bf p}^2}{2m}-\frac{|{\bf p+q}|^2}{2m}\Big)\ ,\nonumber\\ R_T=& \frac{3\mathcal{N}}{4\pi p_F^3}\int d^3p \frac{1}{2}\sum_{s,s^\prime}\Big\{ \chi^\dagger_s {\bf j}_T^\dagger({\bf q})\chi_{s^\prime}\chi^\dagger_{s^\prime}{\bf j}_T({\bf q})\chi_s\Big\}\nonumber\\ &\times \theta(p_F-|{\bf p}|)\theta(|{\bf p+q}|-p_F)\nonumber\\ &\times \delta\Big(\omega +\frac{{\bf p}^2}{2m}-\frac{|{\bf p+q}|^2}{2m}\Big)\ , \end{align} where $s$ and $s^\prime$ are the spin quantum numbers of the nucleon in the initial and final state, respectively. In the following, the non relativistic scaling variable and scaling functions are introduced using the same non relativistic reduction of the current operator and the same relativistic corrections as in the GFMC calculations~\cite{Carlson:2001mp}. Neglecting the very small spin-orbit relativistic correction in the definition of the charge operator, the charge and current operators read \begin{align} \rho({\bf q})=&\frac{G_E(\tau)}{\sqrt{1+\tau}}\ ,\nonumber\\ {\bf j}_T({\bf q})=& \Big[ \frac{G_E(\tau)}{m}{\bf p}_T-i \frac{G_M(\tau)}{2m}{\bf q}\times {\bm \sigma}\Big]\ .
\end{align} As opposed to the semi relativistic model of Ref.~\cite{Amaro:2006if}, in the GFMC relativistic corrections enter only in the definition of the currents, while the kinematics is fully non relativistic.\\ In the non relativistic limit, Eq.~\eqref{f_scal} reduces to \begin{align} F^{nr}(p_F,q)= & \frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}^{nr}(p_F,q,{\bf p})\nonumber\\ = &\frac{3\mathcal{N}m^2}{2p_F^3|{\bf q}|}\theta(E^{nr}_F-\Gamma^{nr})(E^{nr}_F-\Gamma^{nr})\ , \label{f_scal_nr} \end{align} with \begin{align} \mathcal{F}^{nr}(p_F,q,{\bf p})&= \theta(p_F-|{\bf p}|)\theta(|{\bf p+q}|-p_F)\nonumber\\ &\times \delta\Big(\omega +\frac{{\bf p}^2}{2m}-\frac{|{\bf p+q}|^2}{2m}\Big)\ , \end{align} and \begin{align} \Gamma^{nr}&={\rm Max}\{\Gamma^{nr}_1,\Gamma^{nr}_2\}\nonumber\\ &={\rm Max}\Big\{ E^{nr}_F-\omega,m+\frac{1}{2m}\Big( \frac{\omega m}{|{\bf q}|}-\frac{|{\bf q}|}{2}\Big)^2\Big\}\ . \label{eps_nr} \end{align} The non relativistic Fermi energy reads $E^{nr}_F=m+ {p_F^2}/{2m}$. We can then introduce a non relativistic scaling variable given by \begin{align} \psi^{nr}= &\Big[\frac{1}{\xi^{nr}_F}\Big(\frac{\Gamma^{nr}}{m}-1\Big)\Big]^{1/2}= \frac{1}{\sqrt{2\xi^{nr}_F}}\Big(\frac{\lambda}{\kappa}-{\kappa}\Big)\ . \end{align} In the limit of large $|{\bf q}|$, Eq.~\eqref{f_scal_nr} can be written in terms of $\psi^{nr}$ as \begin{align} F^{nr}(p_F,q)=\frac{3\mathcal{N}\xi^{nr}_F}{4 \eta_F^3 m \kappa}\big(1-{\psi^{nr}}^2\big)\theta(1-{\psi^{nr}}^2)\ .
\end{align} In analogy with the relativistic case, the longitudinal and transverse responses are expressed as \begin{align} R^{nr}_L&=\frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}^{nr}(p_F,q,{\bf p})\Big\{\frac{G^2_E(\tau)}{1+\tau}\Big\}\ \nonumber\\ &=\frac{3\mathcal{N}\xi^{nr}_F}{4 \eta_F^3 m \kappa}\big(1-{\psi^{nr}}^2\big)\theta\big(1-{\psi^{nr}}^2\big) \Big\{\frac{G^2_E(\tau)}{1+\tau}\Big\}\ ,\\ R^{nr}_T&=\frac{3 \mathcal{N} }{4\pi p_F^3}\int d^3p\ \mathcal{F}^{nr}(p_F,q,{\bf p})\Big\{ \frac{G^2_E(\tau)}{m^2}p_T^2\nonumber\\ &+ \frac{G^2_M(\tau)}{2m^2}|{\bf q}|^2\Big\}\nonumber\\ &= \frac{3\mathcal{N}\xi^{nr}_F}{4 \eta_F^3 m \kappa}\big(1-{\psi^{nr}}^2\big)\theta\big(1-{\psi^{nr}}^2\big)\nonumber\\ &\times\Big\{ G_E^2(\tau)\Delta^{nr}+ 2 G_M^2(\tau)\kappa^2\Big\}\ , \end{align} where \begin{align} \Delta^{nr}= \xi^{nr}_F (1-{\psi^{nr}}^2)\ . \end{align} We then define the non relativistic longitudinal and transverse scaling functions as \begin{align} f^{nr}_L(\psi^{nr})&= p_F\times \frac{R^{nr}_L}{G^{nr}_L}\ ,\nonumber\\ f^{nr}_T(\psi^{nr})&= p_F\times \frac{R^{nr}_T}{G^{nr}_T}\ , \end{align} where \begin{align} G^{nr}_L&= \frac{\mathcal{N}}{2\kappa}\Big\{\frac{G^2_E(\tau)}{1+\tau}\Big\}\ ,\nonumber\\ G^{nr}_T&= \frac{\mathcal{N}}{2\kappa}\Big\{ G_E^2(\tau)\Delta^{nr}+ 2 G_M^2(\tau)\kappa^2\Big\} \ . \label{g_nr} \end{align} In order to compare our results with the data, we introduce the experimental scaling functions obtained from the extracted longitudinal and transverse responses for $^4$He and $^{12}$C \begin{align} f^{exp}_L&= p_F\times\frac{R^{exp}_L}{G_L}\ ,\nonumber\\ f^{exp}_T&= p_F\times \frac{R^{exp}_T}{G_T}\ . \end{align} It has long been known that $f^{exp}_L$ clearly exhibits scaling behavior in the limit of large momentum transfer. In the transverse channel, by contrast, sizable scaling violations occur, due to significant contributions from two-body currents, resonance excitations, and inelastic scattering. 
Hence, the comparison with the experimental data will be performed considering only the longitudinal contribution, $f^{exp}_L$. \section{Results} \label{results} \begin{figure}[!t] \centering \includegraphics[scale=0.675]{gl_gt_300_rat} \includegraphics[scale=0.675]{gl_gt_380_rat} \includegraphics[scale=0.675]{gl_gt_570_rat} \caption{(color online) Ratio of the non relativistic and relativistic expressions of the prefactors entering the definition of the scaling function plotted as a function of $\psi$, for $|{\bf q}|$= 300, 380, 570 MeV. The blue solid and red dashed lines correspond to the longitudinal and transverse channels, respectively. } \label{prefact} \end{figure} Here we analyze the scaling features of the GFMC responses. In order to highlight the underlying nuclear dynamics we first divide them by the non relativistic prefactors $G^{nr}_{L,T}$. These have been obtained expanding the relativistic-current matrix elements in powers of $1/m$ retaining terms up to $\mathcal{O}[1/m^2]$ \cite{Lovato:2016gkq}. Relativistic corrections appear as terms of $\mathcal{O}[1/m^2]$ in the longitudinal channel while they are $\mathcal{O}[1/m^3]$ in the transverse one, and are therefore neglected in this case. This difference plays a relevant role in the interpretation of the results presented below. For a meaningful comparison with the scaling functions extracted from experimental data, we also present the results obtained using the relativistic prefactors $G_{L,T}$. Figure \ref{prefact} clearly shows the different behavior of $G^{nr}_{L,T}$ and $G_{L,T}$ for three values of the momentum transfer. Relativistic effects are particularly relevant in the transverse case; at $|{\bf q}|$= 570 MeV the ratio $G^{nr}_T/G_T$ significantly differs from 1 for $\psi\geq 0$. 
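Throughout this analysis the responses are plotted against the scaling variable. For definiteness, the non relativistic $\psi^{nr}$ introduced above can be evaluated directly in the dimensionless variables $\lambda=\omega/2m$, $\kappa=|{\bf q}|/2m$, $\xi^{nr}_F=p_F^2/2m^2$; a minimal numerical sketch (the values of $m$ and $p_F$ below are illustrative assumptions, not taken from the calculations):

```python
import math

# Non relativistic scaling variable:
#   psi^nr = (lambda/kappa - kappa) / sqrt(2 xi_F^nr),
# with lambda = omega/(2m), kappa = |q|/(2m), xi_F^nr = p_F^2/(2 m^2).
# All energies and momenta in MeV; m and p_F below are illustrative.
def psi_nr(m, p_F, q, omega):
    lam = omega / (2.0 * m)
    kap = q / (2.0 * m)
    xi_F = p_F * p_F / (2.0 * m * m)
    return (lam / kap - kap) / math.sqrt(2.0 * xi_F)

# At the quasi-elastic peak, omega = q^2/(2m), the variable vanishes;
# omega below (above) the peak gives psi^nr < 0 (> 0).
m, p_F, q = 939.0, 225.0, 570.0
print(psi_nr(m, p_F, q, q * q / (2.0 * m)))
```

Negative $\psi^{nr}$ thus labels the low energy-transfer side of the quasi-elastic peak, the region discussed in the figures below.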
\begin{figure}[h] \centering \includegraphics[scale=0.675]{12C300_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{12C300_rlrt_gfmc_rel} \caption{(color online) Longitudinal (solid blue) and transverse (dashed red) scaling functions obtained from the GFMC calculation of the longitudinal and transverse responses of $^{12}$C at $|{\bf q}|= 300$ MeV. {\bf Upper panel}: the responses have been divided by the non relativistic prefactors and the resulting curves are plotted as a function of $\psi^{nr}$. {\bf Lower panel}: the standard definition of the prefactors given in Eq.~\eqref{FG:pre:fact} has been used to get both the theoretical curves and the experimental points obtained from the data of Ref.~\cite{Barreau:1983ht} . } \label{300_12C} \end{figure} \begin{figure}[] \centering \includegraphics[scale=0.675]{12C380_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{12C380_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_12C} but for $|{\bf q}|= 380$ MeV.} \label{380_12C} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{12C570_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{12C570_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_12C} but for $|{\bf q}|= 570$ MeV.} \label{570_12C} \end{figure} In Figs.~\ref{300_12C}, \ref{380_12C} and \ref{570_12C} we show the longitudinal (blue solid lines) and transverse (red dashed lines) scaling functions extracted from the GFMC calculations of the $^{12}$C response functions. The results in the upper panels, obtained dividing the GFMC calculations by $G^{nr}_{L(T)}$, are plotted as a function of the non relativistic scaling variable $\psi^{nr}$. In the lower panels a comparison between the theoretical curves and the experimental points, in which the relativistic form of the prefactors has been adopted, is presented. It is important to point out that the longitudinal response of $^{12}$C is known to be affected by the elastic and the low lying excited states\textemdash $J^{\pi}=2^+,\ 0_2^+,$ and $4^+$\textemdash contributions. 
In order to compare experiments \textemdash which refer only to the inclusive quasi-elastic response\textemdash with GFMC calculations, these contributions have been explicitly subtracted by using the experimental values of excitation energies and form factors. Because of the fast drop of the form factors with increasing momentum transfer, in Ref.~\cite{Lovato:2016gkq} it is argued that these corrections are expected to be significant in the longitudinal channel at $|{\bf q}|= 300$ MeV, but almost negligible at $|{\bf q}|= 570$ MeV. On the other hand, in the transverse channel such contributions are expected to be always negligible. The scaling functions displayed in the upper panels exhibit a clearly asymmetric shape, with a tail extending in the region $\psi^{nr} > 0$, as opposed to the GRFG model predictions. The difference in magnitude between the longitudinal and transverse GFMC scaling functions, which becomes less evident for larger values of $|{\bf q}|$, is likely to be ascribed to small residual effects of the low lying excited state contributions. For the aforementioned reason, in the lower panels the agreement between the longitudinal GFMC scaling function and the experimental data improves with increasing momentum transfer. The different behavior of the transverse scaling functions displayed in the upper and lower panels deserves some comments. In the lower panels, the red curves present a large non vanishing tail for $\psi >1$, although they are expected to approach zero, as shown in the upper panels. This discrepancy can best be understood by considering the results of Fig.~\ref{prefact}. The relativistic and non relativistic expressions of the transverse prefactors used to extract the scaling functions are sizably different in the kinematic setups considered. 
In particular, for $|{\bf q}|=570$ MeV, these are very similar for $-1.5\leq \psi \leq 0$ where their ratio is almost 1, while in the region $\psi\geq 0$ their trend is significantly different and $G^{nr}_T/G_T$ increases for larger values of $\psi$. \begin{figure}[] \includegraphics[scale=0.675]{12Cfl_exp_all} \vspace*{-.1in} \caption{(color online) Experimental scaling functions of $^{12}$C obtained from the longitudinal responses for $|{\bf q}|=300,\ 380,\ 570$ MeV \cite{Barreau:1983ht}. } \label{fl_exp_12C} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{12Cfl_th_nr_all} \vspace*{-.1in} \caption{(color online) Longitudinal scaling functions of $^{12}$C obtained from GFMC calculations for $|{\bf q}|=300,\ 380,\ 570$ MeV as a function of $\psi^{nr}$. } \label{fl_all_12C} \end{figure} Figure \ref{fl_exp_12C} shows the experimental scaling functions of $^{12}$C extracted from the experimental data of Ref. \cite{Barreau:1983ht} for $|{\bf q}|=300,\ 380,$ and 570 MeV. Although scaling is expected to occur in the limit of large momentum transfer, within the error bars of the different data points, the longitudinal response functions scale to a universal curve over the entire quasi-elastic peak, even in the region of moderate $|{\bf q}|$. In Fig. \ref{fl_all_12C} the longitudinal GFMC scaling functions are shown as a function of $\psi^{nr}$ for $|{\bf q}|=300,\ 380,$ and 570 MeV. The theoretical results seem to indicate that first-kind scaling occurs. However, the interpretation of the differences between the three curves is obscured by the residual effect of the low-lying transitions discussed above. A more meaningful comparison can be carried out in the transverse channel, where the response functions are not affected by this effect. 
\begin{figure}[] \includegraphics[scale=0.675]{12Cft_th_nr_all} \vspace*{-.1in} \caption{(color online) Transverse scaling functions of $^{12}$C obtained from GFMC calculations for $|{\bf q}|=300,\ 380,\ 570$ MeV as a function of $\psi^{nr}$. } \label{ft_all_12C} \end{figure} Figure \ref{ft_all_12C} shows the GFMC results for the transverse scaling functions. The difference between the three curves in the ${\psi}^{nr}<0$ region suggests that, for $|{\bf q}|=300,\ 380$ MeV, the requirement $\Gamma^{nr}=\Gamma^{nr}_2$ [see Eq.~\eqref{eps_nr}]\textemdash which is necessary to introduce the scaling variable\textemdash is not satisfied for all the values of $\omega$. Indeed, the scaling violation in the low-energy transfer region is clearly visible. \begin{figure}[] \includegraphics[scale=0.675]{4hefl_exp} \vspace*{-.1in} \caption{(color online) Experimental scaling functions obtained from the longitudinal responses of $^4$He for $|{\bf q}|=$300, 400, 500, 600 and 700~MeV~\cite{Carlson:2001mp}. The value of the Fermi momentum of $^4$He has been set to $180$ MeV. The black dots correspond to the scaling function obtained from the experimental longitudinal response of $^{12}$C at $|{\bf q}|=570$ MeV \cite{Barreau:1983ht}. } \label{fl_exp_4He} \end{figure} To better elucidate the scaling properties of the GFMC calculations, it is worthwhile to analyze the $^4$He nucleus, whose longitudinal response functions are not affected by low-lying transitions. In Fig.~\ref{fl_exp_4He}, the scaling functions obtained from the experimental data of the longitudinal responses of $^4$He at $|{\bf q}|= 300,\ 400,\ 500,\ 600,$ and $700$ MeV are shown. Choosing the Fermi momentum equal to $180$ MeV, we observe that the points corresponding to different values of the momentum transfer tend to lie on top of each other, and the agreement with the $^{12}$C data at $|{\bf q}|= 570$ MeV is also remarkable. 
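For orientation, combining the Fermi-gas expressions for $R^{nr}_L$ and $G^{nr}_L$ given earlier (with $\xi^{nr}_F=\eta_F^2/2$, $\eta_F=p_F/m$), the Fermi-gas longitudinal scaling function reduces to the symmetric parabola $f(\psi^{nr})=\tfrac{3}{4}\,(1-{\psi^{nr}}^2)\,\theta(1-{\psi^{nr}}^2)$, the GRFG benchmark against which the asymmetry of the curves above is judged. It is unit-normalized in $\psi^{nr}$; a quick numerical check of this normalization (a sketch, using a simple midpoint rule):

```python
# Midpoint-rule check that the Fermi-gas parabola
#   f(psi) = (3/4)(1 - psi^2) theta(1 - psi^2)
# integrates to unity over the scaling variable.
def f_parabola(psi):
    return 0.75 * (1.0 - psi * psi) if abs(psi) < 1.0 else 0.0

def midpoint_integral(f, a, b, steps=100000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

print(midpoint_integral(f_parabola, -1.5, 1.5))  # close to 1.0
```

The GFMC curves, by contrast, carry strength into the $\psi^{nr}>1$ tail that this parabola cuts off sharply.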
\begin{figure}[] \centering \includegraphics[scale=0.675]{4he300_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{4he300_rlrt_gfmc_rel} \caption{(color online) Longitudinal (solid blue) and transverse (dashed red) scaling functions obtained from the GFMC calculation of the longitudinal and transverse responses of $^{4}$He at $|{\bf q}|= 300$ MeV. {\bf Upper panel}: the responses have been divided by the non relativistic prefactors and the resulting curves are plotted as a function of $\psi^{nr}$. {\bf Lower panel}: the standard definition of the prefactors given in Eq.\eqref{FG:pre:fact} has been used to get both the theoretical curves and the experimental points obtained from the data of Ref. \cite{Carlson:2001mp} .} \label{300_4He} \end{figure} \begin{figure}[] \centering \includegraphics[scale=0.675]{4he400_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{4he400_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_4He} but for $|{\bf q}|= 400$ MeV.} \label{400_4He} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{4he500_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{4he500_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_4He} but for $|{\bf q}|= 500$ MeV.} \label{500_4He} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{4he600_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{4he600_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_4He} but for $|{\bf q}|= 600$ MeV.} \label{600_4He} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{4he700_rlrt_gfmc_nr2} \includegraphics[scale=0.675]{4he700_rlrt_gfmc_rel} \caption{ Same as in Fig. \ref{300_4He} but for $|{\bf q}|= 700$ MeV.} \label{700_4He} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{4hefl_all_th} \vspace*{-.1in} \caption{(color online) Longitudinal scaling functions obtained from GFMC calculations of the longitudinal response of $^{4}$He for $|{\bf q}|=400,\ 500,\ 600,\ 700$ MeV and of $^{12}$C at $|{\bf q}|=570$ MeV. 
} \label{fl_all_4He} \end{figure} \begin{figure}[] \includegraphics[scale=0.675]{4heft_all_th} \vspace*{-.1in} \caption{(color online) Transverse scaling functions obtained from GFMC calculations of the transverse response of $^{4}$He for $|{\bf q}|=400,\ 500,\ 600,\ 700$ MeV and of $^{12}$C at $|{\bf q}|=570$ MeV.} \label{ft_all_4He} \end{figure} In Figs. \ref{300_4He}-\ref{700_4He} we show the longitudinal (solid blue) and transverse (dashed red) scaling functions extracted from the GFMC calculations of $^{4}$He at $|{\bf q}|=300,\ 400,\ 500,\ 600,$ and 700 MeV. In the upper and lower panels the same scheme followed to present the $^{12}$C scaling functions has been adopted. In the longitudinal channel, the theoretical calculations and experimental data reported in the lower panels are in very good agreement in all the kinematic setups. Finding this agreement up to $|{\bf q}|= 700$ MeV may appear surprising since the GFMC is a non relativistic approach. This can be understood by noting that all the relativistic corrections coming from both the Dirac spinors and the currents are kept up to $\mathcal{O}[1/m^2]$. However, this is not the case in the transverse channel where relativistic corrections are subleading and have been neglected. Moreover, the differences in magnitude of the transverse scaling functions, following the discussion carried out for $^{12}$C, are likely to be ascribed to relativistic effects in the prefactors. The upper panels of Figs. \ref{300_4He}-\ref{700_4He} clearly show that in the $^4$He case zeroth-kind scaling is manifest when the effects of nuclear dynamics are singled out by using the non relativistic expressions for the prefactors. The absence of low-lying transition contributions makes first-kind scaling apparent. 
The curves of Figs.~\ref{fl_all_4He} and \ref{ft_all_4He}, where we compare the longitudinal and transverse scaling functions of $^4$He for different values of the momentum transfer, present a remarkably good scaling behavior. The $^4$He results for $|{\bf q}|= 600,\ 700$ MeV are almost coincident and in good agreement with the longitudinal scaling function of $^{12}$C computed at $|{\bf q}|= 570$ MeV. Figures \ref{fl_all_4He} and \ref{ft_all_4He} show that the asymmetric shape of the scaling function does not depend upon the momentum transfer. Consequently, it is not likely to be ascribed to collective excitation modes, which can be accounted for within the random phase approximation. This analysis, carried out for a variety of kinematics, suggests that scaling occurs in the GFMC calculations of the longitudinal and transverse response functions of both $^4$He and $^{12}$C nuclei. Comparing the definition of the longitudinal response function with that of the corresponding prefactor, see Eqs.~\eqref{resp:GFMC} and \eqref{g_nr}, while neglecting the spin-orbit contribution, one is led to conclude that the scaling function corresponds to \begin{align} f_L= \frac{2\kappa\ R_\varrho}{\mathcal{N}} \label{n:dens} \end{align} where $R_\varrho$ is the nucleon-density response function defined as \begin{align} R_\varrho\equiv &\sum_f \langle 0| \varrho^\dagger(\mathbf{q}) | f \rangle \langle f | \varrho(\mathbf{q}) | 0 \rangle\, \delta (E_0+\omega-E_f) \ , \end{align} in terms of the nucleon-density operator \begin{equation} \varrho\equiv \sum_i e^{i \mathbf{q}\cdot \mathbf{r}_i} \frac{(1\pm\tau_{i,z})}{2}\, , \end{equation} where the $\pm$ applies to protons and neutrons, respectively. Note that Eq.~\eqref{n:dens} holds also in the relativistic case, provided that relativistic expressions for the energies are used and spinors are normalized as $\bar{u}u= \sqrt{m/E}$ to absorb the factor $m^2/(E({\bf p})E({\bf p+q}))$ of Eq. (\ref{eq:scale_integrand}). 
On the other hand, the transverse scaling function corresponds to the spin-response, which reduces to the nucleon-density response defined above in the limit of high momentum-transfer, where the impulse approximation is expected to be accurate and where $|\mathbf{q}| \gg |{\bf p}_T|$. We found that the longitudinal and transverse response functions obtained by retaining only the one-body current operator scale to the same universal scaling function: the nucleon-density response function. The results presented in Ref. \cite{Lovato:2016gkq} show that two-body currents lead to a significant enhancement of the transverse response of $^{12}$C in the region of the quasi-elastic peak. We expect that the inclusion of this contribution in the scaling analysis, while leaving the longitudinal scaling function unchanged, would contribute to the observed scaling violation of the experimental scaling function in the transverse channel for $\psi\geq0$. \section{Conclusions} \label{conclusion} We have performed a scaling analysis of the GFMC electromagnetic response functions of $^4$He and $^{12}$C for a variety of kinematic setups. Despite the non relativistic nature of the calculation, all the GFMC scaling functions analyzed are strongly asymmetric, with a tail extending to the large $\psi$ region. Within the present picture, this is a consequence of nuclear correlations in both the initial and final states. This is at variance with the findings of Ref.~\cite{Caballero:2007tz}, where the asymmetric shape was ascribed to relativistic effects in the treatment of the final state interactions. In this regard, it is interesting to point out that the symmetry of the scaling function is not recovered even for momentum transfer as low as $|{\bf q}| =300$ MeV in both $^4$He and $^{12}$C. When the nuclear dynamics is properly singled out, the $^{12}$C response function shows a fairly good scaling behavior. 
However, the presence of the low lying transitions, which are known to affect the longitudinal channel, introduces non trivial difficulties in drawing definitive conclusions. A better understanding is given by the analysis of the $^4$He responses, which are free from the uncertainties coming from these contributions. Our results for this nucleus indicate that both zeroth- and first-kind scaling occur. Moreover, the $^4$He and $^{12}$C scaling functions fulfill scaling of the second kind once the Fermi momentum of $^4$He is appropriately tuned. From our analysis, a novel interpretation of the scaling function emerges. If the spin-orbit contribution to the density-current operator is neglected, it can be easily noted that the longitudinal scaling function corresponds to the nucleon-density response. In the transverse channel, for sufficiently large momentum transfer the term proportional to the transverse momentum of the incoming nucleon can be safely neglected and the scaling function is proportional to the spin-response. In nuclei characterized by total spin $S=0$, such as $^4$He, $^{12}$C, $^{16}$O and $^{40}$Ca, in the impulse approximation the spin-response reduces to the nucleon-density response. Our findings on the occurrence of zeroth-kind scaling are consistent with this interpretation. In fact, within GFMC the scaling violation of the transverse response in the quasi-elastic region is likely to come from two-body currents. This was first noted by the authors of Ref.~\cite{Carlson:2001mp}, in which better agreement between the experimental data and the theoretical calculation of the Euclidean responses of $^3$He and $^4$He was found once this term was accounted for. 
The role played by two-body current contributions in the electromagnetic responses of $^{12}$C has recently been investigated in Ref.~\cite{Lovato:2016gkq}, where a significant enhancement of the transverse response is observed at all momentum transfers: not only in the {\it dip} region, but in the whole quasi-elastic peak region, extending below the pion-production threshold. In the pioneering work of Ref. \cite{Fabrocini:1996bu}, it was shown that such enhancement is mainly due to the interference between one- and two-body currents leading to single-nucleon knockout final states. In this case the kinematics would be very similar to those analyzed in this paper, where only the one-body current contributes. Hence, we expect that it would be possible to define an appropriate scaling function for these processes. The consequences of the two-body current contribution in the GFMC scaling functions as well as the study of the scaling properties of the total nuclear response \textemdash including both one- and two-body terms\textemdash will be the subject of a future work. \section*{Acknowledgements} Research partially supported by the Spanish Ministerio de Econom\'ia y Competitividad and the European Regional Development Fund, under contracts FIS2014-51948-C2-1-P and SEV-2014-0398, by Generalitat Valenciana under contract PROMETEOII/2014/0068, and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract DE-AC02-06CH11357 (A.L.). Under an award of computer time provided by the INCITE program, this research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.
\section{Introduction} A femtocell, as shown in Fig.~\ref{fig:femtocell}, is a relatively small cellular network with a femtocell base station (FBS), usually deployed in places where signal reception from the macro base station (MBS) is weak due to long distance or obstacles. An FBS is typically the size of a residential gateway or smaller and connects to the service provider's network via broadband connections. An FBS is designed to serve approved users within its coverage to offload wireless traffic from the MBS. Due to the shortened wireless transmission distance, femtocells have proven very effective in reducing transmit power and boosting signal-to-interference-plus-noise ratio (SINR), which leads to prolonged battery life of mobile devices, improved network coverage, and enhanced network capacity~\cite{Andrews12}. Femtocells have gained a lot of attention from both academia and industry in the recent past. The three largest cellular network operators in the United States (i.e., AT\&T, Sprint and Verizon) have offered commercial femtocell products and service recently. Although highly promising, many problems, both technical and economic in nature, have not yet been fully addressed. In~\cite{Andrews12}, a comprehensive discussion of the challenging technical issues in femtocell networks is provided, ranging from synchronization, cell association, and network organization, to quality of service (QoS) provisioning. Unlike the MBS, whose placement is planned and optimized by operators, FBS's are usually randomly deployed by users. When the chaotic femtocell placement meets randomly distributed mobile users, cell association (or load balancing) becomes a critical problem for the performance of femtocell networks. For example, an FBS might be deployed at a place with high user density. With an inappropriate cell association strategy, this FBS may have to serve all the users within its coverage, leading to very high load at this FBS and high service latency for its users. 
An effective cell association scheme should be used in this case to evenly distribute the load among neighboring FBS's and/or the MBS. The cell association problem is particularly prominent in femtocell networks due to the unreliability of FBS's. The operation of an FBS may be interrupted by its owner (e.g., turned off after office hours); it may also experience power outage or any other faults. Then all the users initially associated with this FBS should be quickly assigned to other neighboring FBS's or the MBS. It is a load balancing problem on how to effectively associate these users with neighboring BS's without introducing a load burst and performance degradation at a particular BS. \begin{figure} [!t] \vspace{0.1in} \centering \includegraphics[width=2.5in]{femtocell.eps} \caption{Illustration of a two-tier femtocell network.} \label{fig:femtocell} \vspace{-0.15in} \end{figure} In this paper, we investigate the problem of cell association and service scheduling in a two-tier femtocell network. In addition to the general goal of offloading wireless traffic from the MBS, we also aim to minimize the latency of service requested by users, while considering both open and closed access strategies. In particular, we consider one MBS and multiple FBS's serving randomly distributed mobile users. Users send requests to the BS's for downlink transmission of data packets. Without loss of generality, we assume that each user is allowed to connect to either the MBS or an FBS. The cell association problem is to assign the users to the BS's such that the transmission of all the data packets can be completed as soon as possible. When multiple users are associated with one BS, we also aim to develop a service scheduling scheme such that the average waiting time for the users will be minimized. 
We provide a general framework for the cell association problem for both open and closed access scenarios, which can be reduced to the classic load balancing problem and is NP-hard~\cite{Kleinberg05}. Therefore, we develop effective near-optimal algorithms with guaranteed performance. In particular, we first provide a sequential fixing algorithm based on a linear programming (LP) relaxation, which can achieve the best performance among the proposed schemes but with a relatively high computational complexity. To reduce the complexity, we propose a rounding approximation algorithm that ensures a $(\rho+1)$-approximation of the optimal solution, and a greedy approximation algorithm that ensures a $(2\rho)$-approximation of the optimal solution. To further reduce the requirement on frequently updated channel state information (CSI), we then develop a randomized algorithm that allows a user to randomly pick a BS to connect to from a reduced BS list. Once the reduced BS list is generated by the randomized algorithm, no information exchange is required among users. An upper bound for the maximum expected service time achieved by the randomized algorithm is then derived. After the users are assigned to the BS's, we next address the service scheduling problem for determining the transmission order of the data packets requested by the users associated with the same BS. We develop a simple algorithm to minimize the average waiting time for the users, and prove its optimality. In addition to rigorous analysis of the proposed algorithms with respect to performance bounds, approximation ratios, and optimality, we also evaluate the proposed schemes with simulations, where superior performance is observed. The remainder of this paper is organized as follows. The related work is discussed in Section~\ref{sec:RelWork}. We present the system model in Section~\ref{sec:sysMod}. The cell association problem formulation and solutions are presented in Section~\ref{sec:ProbSol}. 
The scheduling problem is studied in Section~\ref{sec:SevSchd}. The proposed algorithms are evaluated in Section~\ref{sec:PerfEva}. Section~\ref{sec:Conc} concludes this paper. \section{Related Work}\label{sec:RelWork} Femtocells have been acknowledged as an effective solution to the capacity problem of wireless networks. Ref.~\cite{Andrews12} provided comprehensive discussions of the technical issues, regulatory concerns, and economic incentives in femtocell networks. There are three different access control strategies in femtocell networks: open access, closed access, and hybrid access. The pros and cons of these strategies were studied in~\cite{Roche10}. Deploying femtocells also means introducing interference if no appropriate mitigation strategy is incorporated. Considerable research has been conducted on interference mitigation by assigning users to proper orthogonal channels~\cite{Hu12JSAC}. Apart from the studies on interference mitigation, there are an increasing number of papers on cell association or cell selection under various scenarios~\cite{Dhahri12, Madan10, Corroy12, Zhou13, Jo12, Mukherjee12}. Dhahri and Ohtsuki in~\cite{Dhahri12} proposed a learning-based cell selection method for an open access femtocell network. The authors in~\cite{Madan10} described new paradigms of cell association in heterogeneous networks with the help of third-party backhaul connections. Their simple and lightweight methodologies and algorithms incur very low signaling overhead. In~\cite{Corroy12}, a convex optimization problem was formulated for cell association and a dynamic range extension algorithm was proposed to maximize the minimum rate of users on the downlink of heterogeneous networks. However, this paper did not directly optimize the load balancing in Heterogeneous Networks (HetNet), but rather focused on the sum rate and min rate. 
In~\cite{Zhou13}, a cell association and access control scheme was presented to maximize network capacity while achieving fairness among users. In~\cite{Jo12}, the authors provided an analytical framework for evaluating outage probability and spectral efficiency with flexible cell association in heterogeneous cellular networks. Mukherjee in~\cite{Mukherjee12} analyzed the downlink SINR distribution in heterogeneous networks with biased cell association. There is also some interesting prior work on load balancing in cellular networks. A theoretical framework was presented in~\cite{Kim12} for distributed user association and cell load balancing under spatially heterogeneous traffic distribution. A distributed $\alpha$-optimal algorithm was proposed and it supports different load-balancing objectives, which include rate-optimal, throughput-optimal, delay-optimal, and load-equalizing, as $\alpha$ is set to different values. In~\cite{Son09}, the authors developed an off-line optimal algorithm for load balancing to achieve network-wide proportional fairness in multi-cell networks. They considered partial frequency reuse (PFR) jointly with load-balancing in a multi-cell network to achieve network-wide proportional fairness. An on-line practical algorithm was also proposed and the expected throughput was taken as the decision making metric. On-line assignment, where users arrive one at a time, has been studied extensively in the computer science literature. The competitive ratio analysis in~\cite{Azar92} showed that any deterministic on-line algorithm can achieve a competitive ratio of $\log n$, where $n$ is the number of servers. We find that most of the related research has focused on offloading MBS traffic and improving network capacity with FBS's. In the following sections, we propose several cell association and transmission scheduling schemes with the objective of minimizing service latency in femtocell networks. 
\section{System Model}\label{sec:sysMod} We consider a two-tier femtocell network with $M$ base stations: one MBS (indexed by $1$) and $M-1$ FBS's (indexed from $2$ to $M$). All the BS's are connected to the Internet via broadband wired connections. There are $N$ mobile users randomly located within the coverage of the femtocell network. We assume the MBS and FBS's are well synchronized and they share the same spectrum. Assume each user requests a fixed-length data packet from one of the $M$ BS's. The problem is to assign the users to the BS's and schedule the transmission of their requested data packets at each BS, such that the transmissions can be finished as early as possible. \subsection{Link Capacity} Let $P_m$ be the transmit power of BS $m$ and $G_{m,n}$ the channel gain between the BS and user $n$. According to the Shannon Theorem, the link capacity of user $n$ when connected to BS $m$ is given by \begin{eqnarray}\label{eq:Cmn} C_{m,n}=B\log_2\left(1+\frac{G_{m,n}P_m}{\sigma^2+I_{m,n}}\right), \end{eqnarray} where $B$ is network bandwidth,\footnote{It is well-known from queuing theory that a single server single buffer queue has the lowest delay than splitting the service capacity to multiple servers or maintaining multiple queues.} $\sigma^2$ is noise power density, and $I_{m,n}$ is the interference from all other BS's. We have that \begin{eqnarray}\label{eq:Imn} I_{m,n}=\sum_{i=1}^M G_{i,n}P_i-G_{m,n}P_m=I_n-G_{m,n}P_m, \end{eqnarray} where $I_n$ is the total power received from all BS's at user $n$. It does not depend on which BS user $n$ is connected to and is a constant for each user. Substituting (\ref{eq:Imn}) into (\ref{eq:Cmn}), we have \begin{eqnarray}\label{eq:Cmn1} C_{m,n} &=& B\log_2\left(1+\frac{G_{m,n}P_m}{\sigma^2+I_n-G_{m,n}P_m}\right) \nonumber \\ &=& B\log_2\left(\frac{1}{1-\eta_{m,n}}\right), \end{eqnarray} where $\eta_{m,n}=G_{m,n}P_m/(\sigma^2+I_n)$ is the ratio of the power received from BS $m$ to the total received power, including noise, at user $n$; note that $1/(1-\eta_{m,n})$ equals one plus the SINR. 
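The link capacity computation of Eqs.~(\ref{eq:Cmn})--(\ref{eq:Cmn1}) can be sketched as follows; the function name and the numerical inputs in the check are illustrative assumptions, not part of the system model:

```python
import math

# Link capacity C_{m,n}: the interference seen by user n is the total
# received BS power I_n minus the serving BS's own contribution, so the
# capacity depends only on eta = G_{m,n} P_m / (sigma^2 + I_n).
# G[i][n]: channel gain from BS i to user n; P[i]: transmit power of BS i.
def capacity(m, n, G, P, sigma2, B):
    I_n = sum(G[i][n] * P[i] for i in range(len(P)))  # total BS power at user n
    eta = G[m][n] * P[m] / (sigma2 + I_n)
    return B * math.log2(1.0 / (1.0 - eta))

# Sanity check with one BS, unit gain, power, noise, and bandwidth:
# eta = 1/2, so C = B log2(2) = B.
print(capacity(0, 0, [[1.0]], [1.0], 1.0, 1.0))
```

The same value is obtained from the Shannon form $B\log_2(1+\mathrm{SINR})$, since here $\mathrm{SINR}=1$.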
\subsection{Service Time} For simplicity of notation, we assume all the requested packets have the same length, denoted by $L$. Then the processing/service time of BS $m$ for user $n$ is given by \begin{eqnarray}\label{eq:tmn} t_{m,n}=L/ C_{m,n}. \end{eqnarray} The service time depends on the link capacity $C_{m,n}$ given in~(\ref{eq:Cmn1}). Note that the service time defined here is actually the transmission delay, i.e., the time it takes to finish the transmission of the data packet. The propagation delay is negligible due to the short distances and is ignored. \subsection{Femtocell Access Control} Access control for femtocells can be classified into two categories: closed access and open access. The open access strategy allows all mobile users of an operator to connect to the FBS's; in this case, femtocells are often deployed by an operator to enhance coverage in an area with a coverage hole. With the closed access strategy, only a specific user group can get service from the FBS's~\cite{Golaup09}. Although closed access has been shown to decrease system throughput by 15\%, surveys suggest that closed access is the option preferred by users~\cite{Hasan09}. In this paper, we consider both access strategies. Let $\mathcal{A}_m$ denote the set of users that can connect to BS $m$ and $\mathcal{B}_n$ the set of BS's that user $n$ can connect to. Both open and closed access strategies can be easily modeled by these two sets. Specifically, for open access, we have $\mathcal{A}_m=\{1,\cdots,N\}$ and $\mathcal{B}_n=\{1,\cdots,M\}$; for closed access, $\mathcal{A}_m$ is restricted to the subscriber group of FBS $m$. \section{Cell Association Problem Formulation and Proposed Schemes}\label{sec:ProbSol} To make this complex problem tractable, we divide it into two steps. First, we assign each user to one of the $M$ BS's with the objective of minimizing the maximum total service time over all BS's.
Second, we schedule the service order at each BS to minimize the average waiting time of users. \subsection{Problem Statement} The cell association problem can be formulated as a load balancing problem. We are given a set of $N$ users and a set of $M$ BS's, where each user $n$ incurs a service time $t_{m,n}$ if it is connected to BS $m$. Let $\mathcal{C}_m$ denote the set of users assigned to BS $m$. Then it takes a total amount of time $T_m=\sum_{n\in \mathcal{C}_m}t_{m,n}$ for BS $m$ to transmit all the packets. For optimal network-wide performance, we seek to minimize the maximum load among all the BS's, i.e., \begin{equation} \label{eq:T} \min \; T=\max_m \{T_m\} = \max_m \left\{\sum_{n\in \mathcal{C}_m}t_{m,n} \right\}. \end{equation} Our problem is more challenging than the classic load balancing problem, where the service time of a user is identical no matter which machine serves it. In our cell association problem, the service time is a function of the link capacity, as in~(\ref{eq:tmn}), and thus depends not only on user $n$ but also on BS $m$. The cell association problem is easily seen to be NP-hard: when $t_{m,n}$ is identical for all BS's $m$ (i.e., $t_{m,n}=t_n$), it reduces to the classic load balancing problem, which is NP-hard~\cite{Kleinberg05}. In the remainder of this section, we develop effective algorithms to solve the cell association problem. In particular, we present a sequential fixing algorithm, an approximation algorithm, as well as a randomized algorithm, and derive several approximation ratios and performance bounds. \subsection{Sequential Fixing Algorithm} To solve the above problem, we first define an indicator variable $x_{m,n}$ as \begin{eqnarray}\label{eq:xmn} x_{m,n}=\left\{\begin{array}{l l} 1, & \mbox{if user $n$ is connected to BS $m$}\\ 0, & \mbox{otherwise}. \end{array}\right.
\end{eqnarray} Then we reformulate the problem as follows: \begin{eqnarray}\label{eq:MILP} \min && T \\ \mbox{s.t.} && \sum_m x_{m,n}=1, \; \mbox{for all users} \nonumber\\ && \sum_n t_{m,n}x_{m,n}\le T, \; \mbox{for all BS's} \nonumber\\ && x_{m,n} \in \{0,1\}, \; \mbox{for all $n\in\mathcal{A}_m$} \nonumber\\ && x_{m,n}=0, \; \mbox{for all $n\notin\mathcal{A}_m$}. \nonumber \end{eqnarray} In the formulated problem~(\ref{eq:MILP}), all the indicator variables $x_{m,n}$ are binary, while $T$ is a real variable. Thus it is a mixed integer linear programming (MILP) problem~\cite{Kleinberg05}, which is NP-hard in general. The original MILP is next relaxed to a linear programming (LP) problem, denoted as RLP. Specifically, we allow the binary variables $x_{m,n}$ to take real values in $[0,1]$. Then, the MILP problem can be converted into RLP as follows: \begin{eqnarray}\label{eq:LP} \min && T \\ \mbox{s.t.} && \sum_m x_{m,n}=1, \; \mbox{for all users} \nonumber\\ && \sum_n t_{m,n}x_{m,n}\le T, \; \mbox{for all BS's} \nonumber\\ && x_{m,n}\ge 0, \; \mbox{for all $n\in\mathcal{A}_m$} \nonumber\\ && x_{m,n}=0, \; \mbox{for all $n\notin\mathcal{A}_m$}. \nonumber \end{eqnarray} Since the first (equality) constraint already ensures $x_{m,n}\le 1$, the explicit upper bounds on the $x_{m,n}$'s in the third constraint of MILP can be dropped. Obviously, the optimal value of the RLP problem is a lower bound on that of the original MILP problem, because it is obtained by enlarging the solution space. Unfortunately, the RLP solution is usually fractional and thus infeasible for the original MILP problem. Therefore, we develop a sequential fixing (SF) algorithm~\cite{Hou08} to find a feasible solution to the MILP problem, which is presented in Algorithm~\ref{tab:SeqFixAlgo}.
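For very small instances, problem~(\ref{eq:MILP}) can also be solved exactly by enumeration, which is handy for validating the heuristics that follow (a brute-force Python sketch we provide purely for illustration; it is exponential in $N$ and not one of the proposed schemes):

```python
from itertools import product

def min_makespan_exact(t, allowed):
    """Exhaustive solution of the min-max association problem.

    t[m][n] is the service time of user n at BS m, and allowed[n]
    is the candidate BS set B_n of user n.  Returns the minimum
    achievable makespan T and one optimal assignment (a tuple whose
    n-th entry is the BS serving user n).
    """
    M, N = len(t), len(t[0])
    best_T, best_assign = float('inf'), None
    # Enumerate every feasible choice of one BS per user.
    for assign in product(*[sorted(allowed[n]) for n in range(N)]):
        load = [0.0] * M
        for n, m in enumerate(assign):
            load[m] += t[m][n]
        if max(load) < best_T:
            best_T, best_assign = max(load), assign
    return best_T, best_assign
```

This mirrors the MILP exactly: each enumerated tuple corresponds to a binary assignment $\{x_{m,n}\}$ satisfying the constraints, and the objective is the maximum per-BS load.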
\begin{algorithm} [!t] \small \SetAlgoLined Initialize $\mathcal{N}=\{1,\cdots,N\}$ \; Relax the $x_{m,n}$'s to real numbers \; \While{$\mathcal{N}$ is not empty}{ Solve the RLP problem \; Find the variable closest to an integer: $(m',n')=\argmin_{m,\, n\in \mathcal{A}_m\cap\mathcal{N}}\min\{x_{m,n},1-x_{m,n}\}$ \; Set $x_{m',n'}$ to the closest integer \; \eIf{$x_{m',n'}$ is set to $1$}{ Set $x_{m,n'}=0$ for all $m \neq m'$ \; Remove $n'$ from $\mathcal{N}$ \; }{ Remove $n'$ from $\mathcal{A}_{m'}$ \; } } \caption{Sequential Fixing for Cell Association} \label{tab:SeqFixAlgo} \end{algorithm} In Algorithm~\ref{tab:SeqFixAlgo}, we solve the RLP problem iteratively. During each iteration, we find the $x_{m',n'}$ with the minimum value of $\min\{x_{m,n},1-x_{m,n}\}$ among all fractional $x_{m,n}$'s, and round it to the nearest integer. Setting $x_{m',n'}$ to $1$ means user $n'$ is connected to BS $m'$; user $n'$ then cannot be connected to any other BS, so the remaining $x_{m,n'}$'s are set to $0$ for all $m \neq m'$. This procedure repeats until all the $x_{m,n}$'s are fixed. The complexity of SF depends on the specific LP algorithm. With Karmarkar's algorithm, the worst-case polynomial bound for solving LP problems is $O({n_v}^{3.5}L_b)$, where $n_v$ is the number of variables and $L_b$ is the number of bits of input to the algorithm. We have the following proposition. \begin{propo} The computational complexity of the sequential fixing algorithm is $O((MN)^{4.5}L_b)$. \end{propo} \begin{IEEEproof} The number of binary variables in MILP is at most $MN$, so the number of iterations of sequential fixing is at most $MN$. In each iteration, the complexity of solving the RLP is $O((MN)^{3.5}L_b)$, that of finding the variable closest to an integer is $O(MN)$, and that of the remaining steps is $O(1)$. Moreover, each iteration reduces the number of free variables by at least one. Therefore, the complexity of SF is given by $\sum_{i=1}^{MN} O((MN-i+1)^{3.5}L_b) =\sum_{i=1}^{MN} O(i^{3.5}L_b)=O((MN)^{4.5}L_b)$.
\end{IEEEproof} \subsection{Approximation Algorithm} \label{subsec:aa} Although the sequential fixing algorithm solves the MILP problem in polynomial time, its complexity may be high even for small femtocell networks. In this section, we propose low-complexity approximation algorithms for the MILP problem. Before introducing them, we first give the lemma below. \begin{lemma}\label{lemma:LB} The optimal solution, denoted by $T^\ast$, to the MILP problem is lower bounded by $T^\ast\ge \frac{1}{M}\sum_{n=1}^N \underline{t}_n$, where $\underline{t}_n=\min_{m\in \mathcal{B}_n} t_{m,n}$. \end{lemma} \begin{IEEEproof} Given the optimal allocation $\mathcal{C}^{\ast}_m$ for BS $m$, we have $T^\ast=\max_m \sum_{n\in \mathcal{C}^{\ast}_m}t_{m,n}$. Then we have \begin{eqnarray} T^\ast\ge\max_m \sum_{n\in \mathcal{C}^{\ast}_m}\underline{t}_n\ge\frac{1}{M}\sum_{m=1}^M \sum_{n\in \mathcal{C}^{\ast}_m}\underline{t}_n=\frac{1}{M}\sum_{n=1}^N\underline{t}_n. \nonumber \end{eqnarray} The first inequality is due to the definition of $\underline{t}_n$. The second inequality holds because the maximum is always no less than the mean. The last equality holds because every user must be connected to one of the BS's, so $\cup_{m=1}^M\mathcal{C}^{\ast}_m$ is the set of all users. \end{IEEEproof} Intuitively, the maximum total service time is at least the service time of any single user, which gives the following lemma. \begin{lemma}\label{lemma:LB2} The optimal solution $T^\ast$ to the MILP problem is lower bounded by $T^\ast\ge \max_n \underline{t}_n$, where $\underline{t}_n=\min_{m\in \mathcal{B}_n} t_{m,n}$. \end{lemma} These lemmas will be used in analyzing the approximation ratios of the proposed approximation algorithms, which are presented in the following subsections.
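Both lower bounds are trivial to compute from the service-time matrix; a minimal Python sketch (the function name is ours):

```python
def makespan_lower_bounds(t, allowed):
    """Lower bounds on the optimal makespan T* from Lemmas 1 and 2.

    t[m][n] is the service time of user n at BS m; allowed[n] is B_n.
    Returns (mean_bound, max_bound) where
      Lemma 1:  T* >= (1/M) * sum_n min_{m in B_n} t[m][n]
      Lemma 2:  T* >=         max_n min_{m in B_n} t[m][n]
    """
    M, N = len(t), len(t[0])
    # underline{t}_n: best (smallest) service time of each user
    t_under = [min(t[m][n] for m in allowed[n]) for n in range(N)]
    return sum(t_under) / M, max(t_under)
```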
\smallskip \subsubsection{Rounding Approximation Algorithm} \label{subsubsec:raa} To ensure the required SINR for each user, $\mathcal{B}_n$ should not include all the FBS's in a real femtocell network; for example, a faraway FBS should not be considered by a user. Thus, we use a threshold $\rho$ to obtain the subsets $\mathcal{A}'_m$ and $\mathcal{B}'_n$ ($\mathcal{A}'_m$ is updated once $\mathcal{B}'_n$ is determined): \begin{eqnarray} \label{eq:rhodef} \mathcal{B}'_n=\mathcal{B}_n \cap \left\{m \,|\, t_{m,n}/\underline{t}_n \le \rho\right\}, \;\; \mathcal{A}'_m=\{n \,|\, m\in \mathcal{B}'_n\}. \end{eqnarray} Usually only a limited number of FBS's will be taken into consideration for a user. With this threshold, not only are users' SINR requirements satisfied, but the computational complexity is also greatly reduced. Once $\mathcal{A}'_m$ and $\mathcal{B}'_n$ are determined, the following relaxed LP problem can be solved by any LP solver. \begin{eqnarray}\label{eq:RLP} \min && T \\ \mbox{s.t.} && \sum_m x_{m,n}=1, \; \mbox{for all users} \nonumber\\ && \sum_n t_{m,n}x_{m,n}\le T, \; \mbox{for all BS's} \nonumber\\ && x_{m,n}\ge 0, \; \mbox{for all $n\in\mathcal{A}'_m$} \nonumber\\ && x_{m,n}=0, \; \mbox{for all $n\notin\mathcal{A}'_m$}. \nonumber \end{eqnarray} We denote the optimal objective value of this RLP by $T$. Since the $x$-variables are allowed to take fractional values, we have $T \le T^\ast$. Instead of sequentially fixing these fractional values, we adopt a rounding method from~\cite{Shmoys93} to obtain a feasible solution for the MILP problem. In this method, an undirected bipartite graph $G(\mathcal{A}\cup \mathcal{B},E)$ is constructed from the RLP solution: each node in the disjoint set $\mathcal{A}$ represents a user $n$, while the other disjoint set $\mathcal{B}$ consists of BS nodes.
We create $k_m=\lceil \sum_n x_{m,n} \rceil$ nodes in $\mathcal{B}$ for BS $m$, denoted by $\{ b_{m,1},b_{m,2},\cdots,b_{m,k},\cdots,b_{m,k_m}\}$. The edges are determined in the following way. For BS $m$, we sort the users in non-increasing order of service time $t_{m,n}$ and rename them $\{u_1,u_2,\cdots\}$. Let $X_{m,u_j}=\sum_{i=1}^{j} x_{m,u_i}$, with $X_{m,u_0}=0$. For each BS, we divide the users associated with it into $k_m$ groups, $G_1, G_2, \cdots, G_{k_m}$: user $u_j$ is included in group $k$ ($1 \le k \le k_m$) if $k-1 < X_{m,u_j} \le k$ or $k-1 \le X_{m,u_{j-1}} < k$. If a user $u_j$ is included in two groups $G_{k-1}$ and $G_k$, the association $x$-variables are adjusted such that $x'_{b_{m,k},u_j}=X_{m,u_j}-k+1$ and $x'_{b_{m,k-1},u_j}=x_{m,u_j}-x'_{b_{m,k},u_j}$. Then we insert edges between BS node $b_{m,k}$ and all the user nodes in group $k$. Once the bipartite graph is created, we find a maximum matching $\mathcal{M}$ that matches each user node to a BS node in the other disjoint set. This maximum matching $\mathcal{M}$ indicates a feasible solution for the MILP problem: for each edge $(n,b_{m,k})$ in $\mathcal{M}$, we associate user $n$ with BS $m$. Let $T_{(b_{m,k})}$ denote the total (fractional) service time at node $b_{m,k}$ before the matching operation and $T'_{(b_{m,k})}$ the service time at node $b_{m,k}$ obtained by the above rounding method. We have the following lemma. \begin{lemma}\label{lemma:Tint} For each node $b_{m,k}$, where $ k_m \ge k>1$, we have $T_{(b_{m,k-1})} \ge T'_{(b_{m,k})}$. \end{lemma} \begin{IEEEproof} First, observe that the minimum service time in group $(k-1)$ is always no less than the maximum service time in group $k$, because the users are sorted in non-increasing order of service time. According to the above bipartite graph construction, for any $k<k_m$, we have $\sum_{i\in G_k} x'_{b_{m,k},u_i}=1$; for $k=k_m$, we have $\sum_{i\in G_k} x'_{b_{m,k},u_i}\le1$.
$T'_{(b_{m,k})}$ is no greater than the maximum service time in group $k$, and thus no greater than the minimum service time in group $(k-1)$, which in turn is no greater than $\sum_{i\in G_{k-1}} x'_{b_{m,k-1},u_i} t_{m,u_i}$ (since the weights $x'_{b_{m,k-1},u_i}$ sum to one). Since $T_{(b_{m,k-1})}=\sum_{i\in G_{k-1}} x'_{b_{m,k-1},u_i} t_{m,u_i}$, we conclude that $T_{(b_{m,k-1})} \ge T'_{(b_{m,k})}$. \end{IEEEproof} We now show that the solution produced by this rounding approximation algorithm is at most $(\rho+1)$ times the optimal solution. \begin{theorem} The approximation algorithm based on linear programming and the rounding method ensures a $(\rho+1)$-approximation of the optimal solution. \end{theorem} \begin{IEEEproof} For each BS $m$, we create $k_m$ nodes, and there are $k_m$ corresponding groups of user nodes adjacent to them. Thus the total service time on BS $m$ is $\sum_{k=1}^{k_m} T'_{(b_{m,k})}$. According to Lemma~\ref{lemma:Tint}, we have $T_{(b_{m,k-1})} \ge T'_{(b_{m,k})}$ for $k_m \ge k>1$. It follows that \begin{eqnarray} \sum_{k=2}^{k_m} T'_{(b_{m,k})} \le \sum_{k=1}^{k_m-1} T_{(b_{m,k})} \le \sum_{k=1}^{k_m} T_{(b_{m,k})} \le T. \nonumber \end{eqnarray} For the first group, the load after matching is at most the maximum service time among the users associated with BS $m$. According to Lemma~\ref{lemma:LB2} and the definition of $\rho$ in~(\ref{eq:rhodef}), we have $T'_{(b_{m,1})} \le \max_n t_{m,n} \le \rho \max_n \underline{t}_n \le \rho T^\ast $. Then, the total service time on any BS computed by our association algorithm satisfies $\sum_{k=1}^{k_m} T'_{(b_{m,k})} \le \rho T^\ast + T \le (\rho+1) T^\ast$. The last inequality is due to $T \leq T^\ast$, since $T$ is the optimal value of the relaxed problem~(\ref{eq:RLP}). Our proof is complete. \end{IEEEproof} The complexity of computing a maximum matching is $O(VE)$, where $V$ and $E$ are the numbers of nodes and edges, respectively.
Since we only need to run the matching algorithm once to obtain the association, the total computational complexity of this algorithm is dominated by solving the RLP, i.e., $O((MN)^{3.5}L_b)$, which is better than that of the sequential fixing algorithm. \begin{propo} The computational complexity of the rounding approximation algorithm is $O((MN)^{3.5}L_b)$. \end{propo} \smallskip \subsubsection{Greedy Approximation Algorithm} We next present a low-complexity approximation algorithm, which greedily chooses the BS with the lowest load and assigns to it the unserved user with the smallest service time at this BS. With a slight abuse of notation, we define $\rho_{m,n}=t_{m,n} / \underline{t}_{n}$ and $\rho=\max_{\{m,n\}}\rho_{m,n}$, which will be used in the optimality analysis. The greedy approximation algorithm is presented in Algorithm~\ref{tab:ApproxAlgo}. In Step $4$, we find the candidate BS $m'$ that has the minimum load $T_m$. Then, in Step $5$, we pick the user with the minimum service time $t_{m',n}$ at the chosen BS. The computational complexity of the greedy approximation algorithm is $O(MN)$, which is much lower than that of sequential fixing. \begin{algorithm} [!t] \small \SetAlgoLined Initialize $T_m=0$ and $\mathcal{C}_m=\emptyset$ for all BS's \; Set the user set $\mathcal{N}=\{1,\cdots,N\}$ \; \While{$\mathcal{N}$ is not empty}{ Find the BS $m'$ that has the minimum $T_m$: $m'=\argmin_{m\in (\cup_{n\in\mathcal{N}}\mathcal{B}_n)}T_m$ \; Find the user $n'$ that has the minimum $t_{m',n}$: $n'=\argmin_{n\in\{\mathcal{A}_{m'}\cap\mathcal{N}\}} t_{m',n}$ \; Set $\mathcal{C}_{m'}=\mathcal{C}_{m'}\cup\{n'\}$ \; Set $T_{m'}=T_{m'}+t_{m',n'}$ \; Set $\rho_{m',n'}=\frac{t_{m',n'}}{\underline{t}_{n'}}$ \; Remove $n'$ from $\mathcal{N}$ \; } \caption{Greedy Approximation Algorithm for Cell Association} \label{tab:ApproxAlgo} \end{algorithm} \begin{propo} The computational complexity of the greedy approximation algorithm is $O(MN)$.
\end{propo} We have the following lemma on the performance of the greedy approximation algorithm. \begin{lemma}\label{lemma:UB} The greedy approximation algorithm solution, denoted by $T$, is upper bounded by $\frac{\rho}{M}\sum_{n=1}^N \underline{t}_n + \rho T^\ast$. \end{lemma} \begin{IEEEproof} We first consider the open access strategy, where each user can connect to any of the BS's. In the $l$-th iteration of Algorithm~\ref{tab:ApproxAlgo}, we choose the BS with the minimum $T_m$ in Step $4$. Thus we have \begin{eqnarray} T_{m'}^{(l-1)} &\le& \frac{1}{M}\sum_{m=1}^M T_m^{(l-1)}=\frac{1}{M}\sum_{m=1}^M\sum_{n\in \mathcal{C}_m^{(l-1)}}t_{m,n} \nonumber \\ &=& \frac{1}{M}\sum_{m=1}^M\sum_{n\in \mathcal{C}_m^{(l-1)}}\rho_{m,n}\underline{t}_n \le \frac{\rho^{(l-1)}}{M}\sum_{m=1}^M\sum_{n\in \mathcal{C}_m^{(l-1)}}\underline{t}_n, \nonumber \end{eqnarray} where $\rho^{(l-1)}=\max_{\{m,n\in \mathcal{C}_m^{(l-1)}\}}\rho_{m,n}$ and $\mathcal{C}_m^{(l-1)}$ is the set of users that have been assigned to BS $m$ by the end of the $(l-1)$-th iteration. In Step $5$, we pick user $n'$ and connect it to BS $m'$. Since $\rho^{(l)}$ is always no less than $\rho^{(l-1)}$ and $t_{m',n'}=\rho_{m',n'}\underline{t}_{n'}\le\rho^{(l)}\underline{t}_{n'}$, we have \begin{eqnarray} T_{m'}^{(l-1)}+t_{m',n'}\le\frac{\rho^{(l)}}{M}\sum_{m=1}^M\sum_{n\in \mathcal{C}_m^{(l)}}\underline{t}_n + \rho^{(l)} \underline{t}_{n'}. \nonumber \end{eqnarray} The algorithm stops after $N$ iterations, and the final makespan is the largest per-BS load produced over all iterations. Since $\mathcal{C}_m^{(l)}\subseteq\mathcal{C}_m$ and $\rho^{(l)}\le\rho$ for every $l$, and $\underline{t}_{n'}\le T^\ast$ by Lemma~\ref{lemma:LB2}, each iteration's load is bounded by the same quantity, and we conclude that \begin{eqnarray} T&=&\max_{1\le l\le N}\left\{T_{m'}^{(l-1)}+t_{m',n'}\right\} \nonumber\\ &\le&\frac{\rho}{M}\sum_{m=1}^M\sum_{n\in \mathcal{C}_m}\underline{t}_n + \rho T^\ast =\frac{\rho}{M}\sum_{n=1}^N \underline{t}_n+ \rho T^\ast. \nonumber \end{eqnarray} With the closed access strategy, we set $t_{m,n}=\infty$ for any BS $m$ that user $n$ cannot connect to.
The proof then follows the same procedure and yields the same conclusion. \end{IEEEproof} Combining Lemmas~\ref{lemma:LB} and~\ref{lemma:UB}, we have the following theorem on the performance of Algorithm~\ref{tab:ApproxAlgo}. \begin{theorem}\label{lemma:Bounds} The greedy approximation algorithm in Algorithm~\ref{tab:ApproxAlgo} ensures a $(2\rho)$-approximation of the optimal solution. \end{theorem} \begin{IEEEproof} The proof is straightforward. We have \begin{eqnarray} T^\ast\le T\le\frac{\rho}{M}\sum_{n=1}^N\underline{t}_n + \rho T^\ast \le 2 \rho T^\ast, \nonumber \end{eqnarray} where $T^\ast$ is the optimal solution and $T$ is the greedy approximation algorithm solution. Note that, unlike in Section~\ref{subsubsec:raa}, we have $T^\ast\le T$ since there is no relaxation here. \end{IEEEproof} From Theorem~\ref{lemma:Bounds}, $\rho$ is an important parameter for the performance of the greedy approximation algorithm: the smaller the $\rho$, the smaller the optimality gap. In order to make the greedy approximation algorithm solution more competitive, we only allow users to choose from a subset $\mathcal{B}'_n$ of the original BS set $\mathcal{B}_n$. The new subsets $\mathcal{B}'_n$ and $\mathcal{A}'_m$ are given by \begin{equation}\label{eq:SetAB} \mathcal{B}'_n=\mathcal{B}_n \cap \left( \left\{ m \,\Big|\, \frac{t_{m,n}}{\underline{t}_n} \le \Gamma \right\}\cup \{1\} \right), \; \mathcal{A}'_m=\{n \,|\, m\in \mathcal{B}'_n\}, \end{equation} where $\Gamma$ is a predefined threshold and $1$ is the index of the MBS, which is always kept as a candidate. $\Gamma$ can also be used to reflect the SINR requirement of users. The set $\mathcal{A}_m$ is replaced by $\mathcal{A}'_m$ accordingly. This way, the greedy approximation algorithm solution satisfies \begin{equation} T^\ast \leq T \leq 2 \Gamma T^\ast. \end{equation} \subsection{Randomized Algorithm} Both the rounding and greedy approximation algorithms are centralized algorithms that require frequent channel state information (CSI) updates.
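Before turning to the decentralized alternative, we note that the greedy assignment of Algorithm~\ref{tab:ApproxAlgo} admits a very compact implementation. The Python sketch below is illustrative only (data structures and names are ours, and ties are broken arbitrarily):

```python
def greedy_assign(t, allowed):
    """Greedy cell association in the spirit of Algorithm 2.

    Repeatedly pick the BS with the smallest current load among those
    that can still serve an unserved user, then assign to it the
    unserved user with the smallest service time there.
    t[m][n] is the service time of user n at BS m; allowed[n] is B_n.
    Returns (T, cells): the makespan and the per-BS user lists.
    """
    M, N = len(t), len(t[0])
    load = [0.0] * M
    cells = [[] for _ in range(M)]
    unserved = set(range(N))
    while unserved:
        # BS's that appear in some unserved user's candidate set
        candidates = {m for n in unserved for m in allowed[n]}
        m_star = min(candidates, key=lambda m: load[m])
        n_star = min((n for n in unserved if m_star in allowed[n]),
                     key=lambda n: t[m_star][n])
        cells[m_star].append(n_star)
        load[m_star] += t[m_star][n_star]
        unserved.remove(n_star)
    return max(load), cells
```

Each loop iteration serves one user, matching the $O(MN)$ flavor of the algorithm; the closed access case is handled simply by restricting the sets `allowed[n]`.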
In this section, we introduce a randomized algorithm for the cell association problem, with which each user $n$ randomly chooses a BS from a subset of $\mathcal{B}_n$ to connect to. Once the subsets are determined, no information exchange is required among the users. Assume user $n$ connects to BS $m$ with probability $p_{m,n}$. We tune the $p_{m,n}$'s such that the expected service time contributed by user $n$ is identical on every BS it may connect to, i.e., \begin{eqnarray} p_{m,n}t_{m,n}=H_n, \; \mbox{for all} \; m\in \mathcal{B}_n. \nonumber \end{eqnarray} This sets $p_{m,n}$ proportional to $1/t_{m,n}$, so that a BS with a smaller $t_{m,n}$ is chosen with higher probability. Since each user has to choose a BS to connect to, we have $\sum_{m\in \mathcal{B}_n}p_{m,n}=1$ for all $n$. It follows that \begin{eqnarray}\label{eq:Hn} H_n=\frac{1}{\sum_{m\in\mathcal{B}_n}1/t_{m,n}}, \; \mbox{for all} \; n. \end{eqnarray} The expected load on BS $m$, denoted by $\overline{T}_m$, is \begin{eqnarray}\label{eq:TmE} \overline{T}_m=\mathbb{E}[T_m]=\sum_{n\in\mathcal{A}_m} t_{m,n} p_{m,n}= \sum_{n\in\mathcal{A}_m} H_n, \; \mbox{for all} \; m. \end{eqnarray} Since users are randomly connected to the BS's, our objective is to minimize the maximum expected load $\overline{T}_{max}$: \begin{eqnarray}\label{eq:minTmE} \min \; \overline{T}_{max}=\min \, \max_{m} \overline{T}_m. \end{eqnarray} It can be seen from~(\ref{eq:TmE}) that minimizing $\overline{T}_m$ amounts to reducing the number of users in $\mathcal{A}_m$. The randomized algorithm consists of two phases. In Phase I, we use a threshold $\Lambda$ to obtain the subsets $\mathcal{A}'_m$ and $\mathcal{B}'_n$: \begin{eqnarray}\label{eq:SetAB2} \mathcal{B}'_n=\mathcal{B}_n \cap (\{m \,|\, t_{m,n}\le \Lambda\}\cup \{1\}), \; \mathcal{A}'_m=\{n \,|\, m\in \mathcal{B}'_n\}.
\end{eqnarray} Note that the subsets $\mathcal{A}'_m$ and $\mathcal{B}'_n$ here are different from those defined in~(\ref{eq:SetAB}): $\Lambda$ is an upper bound on the service times $t_{m,n}$ themselves, while $\Gamma$ is an upper bound on the service time ratios. Thus we have $t_{m,n}\le \Lambda$ for all $m$ and all $n\in\mathcal{A}'_m$. We then derive upper bounds for $H_n$, $\overline{T}_m$ and $\overline{T}_{max}$ as \begin{eqnarray}\label{eq:HnTm} \left\{ \begin{array}{l} H_n = \frac{1}{\sum_{m\in\mathcal{B}'_n}1/t_{m,n}} \le \frac{1}{\sum_{m\in\mathcal{B}'_n}1/\Lambda} = \frac{\Lambda}{|\mathcal{B}'_n|} \\ \overline{T}_m = \sum_{n\in\mathcal{A}'_m} H_n\le \frac{|\mathcal{A}'_m|}{\min_n|\mathcal{B}'_n|}\Lambda \\ \overline{T}_{max} = \max_m \overline{T}_m\le \frac{\max_m|\mathcal{A}'_m|}{\min_n|\mathcal{B}'_n|}\Lambda, \end{array} \right. \end{eqnarray} where $|\mathcal{A}'_m|$ and $|\mathcal{B}'_n|$ are the cardinalities of subsets $\mathcal{A}'_m$ and $\mathcal{B}'_n$, respectively. In Phase II, we aim to further reduce the sizes of $\mathcal{A}'_m$ and $\mathcal{B}'_n$. From~(\ref{eq:Hn}), $H_{n'}$ increases when BS $m'$ is removed from set $\mathcal{B}'_{n'}$ and user $n'$ is simultaneously removed from set $\mathcal{A}'_{m'}$. The increase, denoted by $\Delta_{m',n'}$, is given by \begin{eqnarray}\label{eq:Delta} \Delta_{m',n'} &=& \frac{1}{\sum_{m\in\mathcal{B}'_{n'}} 1/t_{m,n'} - 1/t_{m',n'}} - \frac{1}{\sum_{m\in\mathcal{B}'_{n'}} 1/t_{m,n'}} \nonumber \\ &=& \frac{1/t_{m',n'}}{\left(\sum_{m\in\mathcal{B}'_{n'}} 1/t_{m,n'} - 1/t_{m',n'}\right)\left(\sum_{m\in\mathcal{B}'_{n'}} 1/t_{m,n'}\right)}.
\end{eqnarray} For the BS's in the set $\{m \,|\, m\in \mathcal{B}'_{n'}, m\neq m'\}$, the expected loads $\overline{T}_m$ become larger when BS $m'$ is removed from set $\mathcal{B}'_{n'}$ and user $n'$ is removed from set $\mathcal{A}'_{m'}$. On the other hand, $\overline{T}_{m'}$ is reduced by $H_{n'}$ according to~(\ref{eq:TmE}). The randomized algorithm is presented in Algorithm~\ref{tab:RandomAlgo}. In Step $2$, we find the users that each have more than one BS in their candidate list $\mathcal{B}''_n$. Then, from Step $5$ to Step $18$, we find the BS $m'$ with the largest load $\overline{T}_{m'}$ and, for each user $n$ that might be connected to BS $m'$, compute the maximum load $\overline{T}^{max}_{m',n}$ that would result if user $n$ were removed from $\mathcal{A}''_{m'}$. In Step $19$, we pick the user $n'$ with the minimum $\overline{T}^{max}_{m',n}$ value. If this value is no greater than the original $\overline{T}_{m'}$, we remove the BS-user pair $\{m',n'\}$ from sets $\mathcal{A}''_{m'}$ and $\mathcal{B}''_{n'}$; otherwise, the algorithm terminates. Throughout the execution, sets $\mathcal{A}''_{m'}$ and $\mathcal{B}''_{n'}$ remain subsets of $\mathcal{A}'_{m'}$ and $\mathcal{B}'_{n'}$, respectively. Since the complexity from Step $5$ to Step $18$ is $O(MN)$ in the worst case, the complexity of the entire randomized algorithm is $O(MN^2)$. \begin{propo} The computational complexity of the randomized algorithm is $O(MN^2)$. \end{propo} Finally, we have the following theorem on the performance of the randomized algorithm. \begin{theorem} The maximum expected service time achieved by the randomized algorithm is upper bounded by \begin{eqnarray} \overline{T}_{max}\le \frac{\max_m|\mathcal{A}''_m|}{\min_n|\mathcal{B}''_n|}\times \max_n\max_{m\in\mathcal{B}''_n}t_{m,n}.
\end{eqnarray} \end{theorem} \begin{IEEEproof} The proof is similar to the derivation of~(\ref{eq:HnTm}), except that the new upper bound on the service times, $\max_n\max_{m\in\mathcal{B}''_n}t_{m,n}$, is used instead of the bound $\Lambda$. \end{IEEEproof} \begin{algorithm} [!t] \small \SetAlgoLined Initialize $\mathcal{A}''_m=\mathcal{A}'_m$, $\mathcal{B}''_n=\mathcal{B}'_n$ \; Set the user set $\mathcal{N}=\{n \,|\, |\mathcal{B}''_n|>1\}$ \; Compute $\overline{T}_m$ according to (\ref{eq:TmE}) \; \While{$\mathcal{N}$ is not empty}{ Find the BS $m'$ with $m'=\argmax_{m} \overline{T}_m$ \; \For{user $n$ in ($\mathcal{A}''_{m'}\cap\mathcal{N}$)}{ Compute $\Delta_{m',n}$ according to (\ref{eq:Delta}) \; \For{$m=1$ to $M$}{ \uIf{$m=m'$}{ Set $\overline{T}'_{m'}=\overline{T}_{m'}-H_n$ \; }\uElseIf{$m$ in $\{m|m\in \mathcal{B}''_{n}\}$}{ Set $\overline{T}'_m=\overline{T}_m+\Delta_{m',n}$ \; }\Else{ Set $\overline{T}'_m=\overline{T}_m$ \; } } Set $\overline{T}_{m',n}^{max}=\max_m \overline{T}'_m$ \; } Find user $n'$ with $n'=\argmin_n \overline{T}_{m',n}^{max}$ \; \eIf{$\overline{T}_{m'}\ge\overline{T}_{m',n'}^{max}$}{ Remove $m'$ from $\mathcal{B}''_{n'}$ and $n'$ from $\mathcal{A}''_{m'}$ \; Update all $\overline{T}_m$'s \; \If{$|\mathcal{B}''_{n'}| =1$}{ Remove $n'$ from $\mathcal{N}$ } }{ Terminate the algorithm \; } } \caption{Randomized Algorithm for Cell Association} \label{tab:RandomAlgo} \end{algorithm} \section{Service Scheduling}\label{sec:SevSchd} Once the cell association problem is solved as in Section~\ref{sec:ProbSol}, we study how to schedule the transmissions of the multiple users connected to the same BS. Since we assume the bandwidth $B$ is fully utilized for transmitting a user's data packet (see~(\ref{eq:Cmn1})), the packets are transmitted consecutively, and we need to determine the service order of the users associated with the same BS. Consider a tagged BS to which $K$ users are connected.
The user service times are $\{t_1,t_2,\cdots,t_K\}$. If the service order follows the user index, the average waiting time is given by \begin{eqnarray} \overline{T}_{wait}=\frac{1}{K}\sum_{n=1}^K \sum_{i=1}^n t_i. \end{eqnarray} We have the following theorem on minimizing the average waiting time $\overline{T}_{wait}$. \begin{theorem} Given $K$ users with service times $\{t_1, t_2, \cdots, t_K\}$, the average waiting time is minimized when the users are served in increasing order of their service times. \end{theorem} \begin{IEEEproof} First, we sort the users in increasing order of their service times; the ordered service times are denoted by $\{t'_1,\cdots,t'_K\}$. Consider two ordered users $i$ and $j$, where $1\le i<j \le K$, so that $t'_i \le t'_j$. If the positions of $i$ and $j$ are swapped, the waiting times at positions $1$ to $i-1$ and $j$ to $K$ are not affected, while the waiting time at each position from $i$ to $j-1$ is increased by $t'_j-t'_i$. Therefore, the average waiting time is minimized when the users are served in increasing order of their service times. \end{IEEEproof} \section{Performance Evaluation}\label{sec:PerfEva} In this section, we evaluate the performance of the proposed cell association and service scheduling algorithms using MATLAB simulations. The channel models from~\cite{Moon10} are adopted in our simulations. The channel gain (in dB) from the BS's to the users can be expressed as $10\log_{10}(G_{m,n})=-PL_m(d_{m,n})-u_m$, where $PL_m(\cdot)$ is the path loss model of BS $m$, $d_{m,n}$ is the distance from BS $m$ to user $n$, and $u_m$ is the shadowing effect, which is normally distributed with zero mean and variance $\delta_m$. The simulation parameters are presented in Table~\ref{tb:Parameter}. In the figures, each point is the average of $10$ simulation runs; $95\%$ confidence intervals are included as error bars.
\begin{table} [!t] \begin{center} \caption{Simulation Parameters} \begin{tabular}{l|l} \hline {\em Parameter} & {\em Value} \\ \hline Number of BS's & $6$ \\ Total network bandwidth & $10 \mbox{ MHz}$ \\ Transmit power of the MBS & $43 \mbox{ dBm}$ \\ Transmit power of the FBS & $31.5 \mbox{ dBm}$ \\ Path loss model for MBS & $28+35\log_{10}(d)$\\ Path loss model for FBS & $38.5+20\log_{10}(d)$ \\ Shadowing effect & $6 \mbox{ dB}$\\ Packet length & $1 \mbox{ KByte}$ \\ Threshold $\rho$ & 5 \\ \hline \end{tabular} \end{center} \label{tb:Parameter} \vspace{-0.15in} \end{table} \begin{figure*}[!t] \begin{center} \subfigure[Total service time vs. number of users]{ \label{fig:TotTimeOpen} \includegraphics[width=2.25in]{makespan_open} } \hfil \subfigure[Average waiting time vs. number of users]{ \label{fig:WaitTimeOpen} \includegraphics[width=2.25in]{waittime_open} } \subfigure[Fairness vs. number of users]{ \label{fig:FairnessOpen} \includegraphics[width=2.25in]{fairness_open} } \end{center} \caption{Performance evaluation of the open access strategy.} \end{figure*} We present simulation results for the following two scenarios: (i) open access femtocells; and (ii) closed access femtocells. For comparison purposes, we also developed and simulated a selfish scheme, in which every user simply connects to the BS with the best channel condition. \subsection{Open Access Strategy} In the first scenario, there are $M=6$ BS's, i.e., one MBS and five FBS's. The number of users ranges from $30$ to $80$ with step size $10$; the users are randomly located in the network area and each user can connect to one of the BS's. \begin{table} [!t] \begin{center} \caption{Execution Times of the Proposed Algorithms under the Open Access Strategy (s)} \setlength{\tabcolsep}{5pt} \begin{tabular}{r|c|c|c|c|c|c} \hline No. users & 30&40&50&60&70&80 \\ \hline \hline Greedy &0.024&0.034&0.024&0.030&0.026&0.038\\ Approx.
& & & & & & \\ \hline Sequential&16.532&24.020&30.809&48.713&47.842&50.654\\ Fixing & & & & & & \\ \hline Randomized &0.030&0.048&0.077&0.136&0.132&0.151\\ Algorithm & & & & & & \\ \hline Selfish User&0.035&0.035&0.035&0.035&0.036&0.026\\ Scheme & & & & & & \\ \hline Rounding &0.133&0.148&0.160&0.168&0.176&0.213\\ Approx. & & & & & & \\ \hline \end{tabular} \end{center} \label{tb:Runtime_open} \vspace{-0.15in} \end{table} \begin{figure*}[!t] \begin{center} \subfigure[Total service time vs. number of users]{ \label{fig:TotTimeClose} \includegraphics[width=2.25in]{makespan_close} } \hfil \subfigure[Average waiting time vs. number of users]{ \label{fig:WaitTimeClose} \includegraphics[width=2.25in]{waittime_close} } \subfigure[Fairness vs. number of users]{ \label{fig:FairnessClose} \includegraphics[width=2.25in]{fairness_close} } \end{center} \caption{Performance evaluation of the closed access strategy.} \end{figure*} We first examine the impact of the number of users on the total service time. In Fig.~\ref{fig:TotTimeOpen}, we plot the maximum total service time for the five algorithms along with the lower bound found by solving the relaxed LP. As expected, the total service time at the BS's increases with the number of users. Except for the lower bound, the sequential fixing algorithm achieves the smallest total service time. The rounding approximation algorithm performs slightly better than the greedy approximation algorithm, which is consistent with the approximation ratios proven in Section~\ref{subsec:aa}. Both approximation algorithms always achieve a lower load than the randomized algorithm and the selfish scheme. We also observe that beyond $50$ users, all the proposed algorithms have lower service times than the simple selfish scheme. As the number of users grows, the selfish scheme becomes less competitive; the rounding approximation algorithm achieves almost $50\%$ less total service time in the case of $80$ users.
After cell association, users should be properly scheduled for service at the BS's to minimize the average waiting time. In Fig.~\ref{fig:WaitTimeOpen}, we investigate the impact of the number of users on the average waiting time. For the greedy approximation, randomized, and sequential fixing algorithms, we use the service scheduling policy in Section~\ref{sec:SevSchd} to schedule users at the BS's and obtain the corresponding waiting times. For comparison, we randomly schedule users at the BS's in the selfish scheme and the rounding approximation scheme. Intuitively, the larger the number of users, the larger the average waiting time. We can see from the figure that the average waiting time obtained by the greedy approximation algorithm is very close to that of the sequential fixing algorithm, while, without appropriate scheduling, the rounding approximation algorithm yields the largest waiting time, almost twice as large as that achieved by the greedy approximation algorithm. To evaluate the fairness performance, we adopt Raj Jain's fairness index, given by $\mathcal{J}(C_1,C_2,\cdots,C_N)=\frac{(\sum_{n=1}^NC_n)^2}{N\times\sum_{n=1}^NC_n^2}$, where $C_n$ is the network throughput for user $n$~\cite{Zhou13}. The value of the index ranges from $1/N$ (worst case) to $1$ (best case). It can be seen from Fig.~\ref{fig:FairnessOpen} that the fairness indices decrease as the number of users increases. We notice that the selfish scheme and the randomized algorithm achieve better fairness than the other three schemes. Figs.~\ref{fig:TotTimeOpen} and~\ref{fig:FairnessOpen} show that, from the operator's viewpoint, the selfish and randomized schemes are not preferred since they produce a less balanced load on the BS's. From the users' viewpoint, these two schemes may be appealing due to their fairness performance. We list the execution times of the five schemes in Table~\ref{tb:Runtime_open}. We find that the execution times generally increase with the number of users.
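Jain's fairness index defined above is straightforward to compute; a minimal sketch, with made-up throughput values, is:

```python
def jain_index(throughputs):
    """Raj Jain's fairness index J = (sum C_n)^2 / (N * sum C_n^2).

    Ranges from 1/N (one user gets all the throughput) to 1
    (all users get the same throughput).
    """
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(c * c for c in throughputs))

assert jain_index([1.0, 1.0, 1.0, 1.0]) == 1.0               # best case
assert abs(jain_index([4.0, 0.0, 0.0, 0.0]) - 0.25) < 1e-12  # worst case, 1/N
```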
The selfish scheme always has the smallest execution time, while sequential fixing has the largest. Although the rounding approximation algorithm achieves a smaller load on the BS's, its execution time is greater than that of the greedy approximation algorithm. This result is consistent with the complexity analysis of the proposed schemes. The running times of the greedy approximation algorithm and the selfish scheme are always much smaller than those of the other schemes and do not increase noticeably with the number of users. For the closed access simulations presented in Section~\ref{subsec:cas}, the execution times of the proposed algorithms are all much smaller than those shown in Table~\ref{tb:Runtime_open}, since the user lists include fewer users in the closed access case. We omit these results for brevity. \subsection{Closed Access Strategy}\label{subsec:cas} We next investigate the second scenario with closed access femtocells. Now each FBS maintains a user list and only serves the listed users. Note that the MBS always serves all the users inside its coverage. In Fig.~\ref{fig:TotTimeClose}, we evaluate the impact of the number of users on the total service time. Intuitively, the total service time increases with the number of users. However, we find that it also depends on the user list at each FBS. In the simulation, we randomly choose the user set $\mathcal{A}_m$ for BS $m$. Moreover, the user list at each FBS is further reduced due to the SINR threshold. Consequently, all the proposed algorithms achieve similar performance in the closed access scenario, and their total service times are close to the lower bound. Nevertheless, all the proposed algorithms outperform the selfish scheme, as we can see in Fig.~\ref{fig:TotTimeClose}. We next show the impact of the number of users on the average waiting time in Fig.~\ref{fig:WaitTimeClose}.
The scheduling policy setting is the same as that in the open access scenario. The results are thus similar to the open access case: the selfish scheme and the rounding approximation scheme yield the largest waiting times. With the proposed optimal service scheduling, the approximation algorithms achieve waiting times as low as that of the sequential fixing scheme. Finally, we plot the fairness indices in Fig.~\ref{fig:FairnessClose}. The randomized algorithm, although not better than the selfish scheme, achieves better fairness than the other proposed schemes. Despite its good performance in minimizing the maximum service time, the rounding approximation algorithm is not competitive with respect to fairness. Due to the randomness of the user lists at the BS's, the confidence intervals are larger than those in the open access scenario. \section{Conclusion}\label{sec:Conc} In this paper, we investigated the problem of cell association and service scheduling in two-tier femtocell networks. We developed several algorithms and analyzed their performance. The sequential fixing algorithm achieves the best total service time but has a relatively high complexity. We then presented two approximation algorithms with lower complexity and proven approximation ratios. We also proposed a randomized algorithm with a proven performance bound that requires the least information exchange among users. In addition, we solved the service scheduling problem optimally. The proposed algorithms were validated with simulations in both open and closed access scenarios. \bibliographystyle{IEEEtran}
\section{Introduction} To initiate most of the biological processes in living cells, protein molecules, the workhorses of the cell, have to bind to specific sites on nucleic acid molecules \cite{alberts,bressloff}. One common example, which is also crucially important for the functioning of a cell, is the binding of transcription factor (TF) proteins to a specific sequence site on the DNA. The protein has to `search' for this binding site, but such a `search' carried out by a TF is neither guided by any external cues nor does it benefit from past experience, because of the absence of any memory. Instead, this search is believed to be a random (stochastic) process \cite{bressloff,halford04,mirny09,kolomeisky11}. A one-dimensional diffusive scanning of the DNA strand by the TF protein constitutes one mode of search that is combined with other possible modes, including dissociation and diffusion in the bulk solution, re-association back to DNA, and other possibilities \cite{bressloff,halford04,mirny09,kolomeisky11}. For proper biological function, the search strategy should not only be fast but must also rule out the possibility of erroneous recognition of any other site as the intended target site of binding. Enormous progress has been made in the last few decades in understanding the strategies, evolved by nature, that combine various possible modes of search by a TF so as to optimize the opposing demands of speed and accuracy of search in a cell \cite{halford04,mirny09,kolomeisky11,veksler13,tafvizi11,sheinman12,kolomeisky12,kolomeisky16}. In live cells, the search by a TF is made difficult by the fact that the target binding sites are usually located in an extremely crowded environment. The molecules surrounding the DNA strand reduce the accessibility of the target site, while those bound to DNA create a steric hindrance against scanning of the DNA chain \cite{shvets16,shvets15b}.
Since dissociation of a TF from DNA, followed by a subsequent re-attachment elsewhere on the DNA, is an integral part of its search strategy, a TF does not remain permanently obstructed by any DNA-bound molecule. Nevertheless, the blockages created by such DNA-bound particles against the diffusive search by TFs can have significant non-trivial effects on the dynamics of the search process. This phenomenon has already attracted the attention of theorists in recent years \cite{veksler13,shvets16,shvets15b}. Often what makes the search problem even more challenging is that many DNA-bound molecules are themselves mobile, so that, during its diffusive scanning of the DNA chain for the target binding site, the TF encounters the mobile obstacles either co-directionally or head-on. For example, RNA polymerases (RNAPs), for which a segment of DNA serves as the template for the synthesis (polymerization) of a specific molecular species of RNA, use the template DNA strand also as a track for their motor-like \cite{chowdhury13,kolomeisky15} directed walk \cite{alberts,buc}. The process of synthesis of RNA, as directed by a DNA template, is called transcription (of a gene) \cite{alberts}. The traffic of RNAPs \cite{tripathi08,klumpp08,klumpp11,sahoo11,ohta11,wang14,belitsky18} engaged in transcription would act as an oncoming stream of mobile roadblocks against a TF that simultaneously searches for its specific binding site on the same segment of DNA. Besides, at any given time, many TFs can search the same segment of DNA for their respective binding sites and, therefore, their mutual interactions are also likely to affect their individual search efficiencies. To our knowledge, no theoretical model has been developed so far to study the effects of DNA-bound mobile molecules on the diffusive search by TFs on the same segment of DNA.
In this paper we develop a kinetic model motivated by the search for specific binding sites by a single TF, as well as by several TFs simultaneously, on a segment of DNA that is also undergoing transcription by a traffic of RNAP motors. This minimal model is not intended to comprehensively describe {\it in-vitro} or {\it in-vivo} experimental observations for any specific cell or organism. Instead, the main aim of our biologically motivated kinetic model is to clarify a complex molecular picture and reveal the interesting physics that such stochastic systems are likely to exhibit. In our model, we represent the TFs and RNAPs by two distinct species of particles. Although not all the features of real TFs and RNAPs are taken into account in our analysis, we still call these particles TFs and RNAPs for simplicity of terminology. The RNAPs hop forward uni-directionally on a discrete lattice of DNA sites while the TFs perform an unbiased random walk on the same lattice. There is a pair of specially designated sites for the entry and exit of the RNAPs; these particles cannot attach to, or detach from, the lattice at any other site in between. In contrast, the TFs can attach to, and detach from, any site on the lattice. The model captures the key features of RNAP traffic by a totally asymmetric simple exclusion process (TASEP) \cite{derrida98,schutz00,Schadschneider10,mallick15}, which is one of the simplest models of the collective stochastic movement of interacting self-propelled particles on a one-dimensional lattice. Since no site can be occupied simultaneously by more than one particle, irrespective of the identity of the occupant, mutual exclusion is the only intra-species (RNAP-RNAP and TF-TF) as well as inter-species (RNAP-TF) interaction in our model.
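The dynamical rules just described can be sketched as a simple Monte Carlo update. All parameter values below are illustrative, and the random-sequential update scheme shown is only one of several equivalent discretizations, not necessarily the one used for the simulations reported later:

```python
import random

L = 50                      # lattice length (illustrative)
u = b = 1.0                 # RNAP forward rate, TF hopping rate
alpha, beta = 0.3, 1.0      # RNAP entry (site 1) and exit (site L) rates
k_on, k_off = 0.01, 0.1     # TF attachment/detachment rates

def sweep(lattice, rng):
    """One random-sequential update sweep; None = empty, 'R' = RNAP, 'T' = TF."""
    if lattice[0] is None and rng.random() < alpha:
        lattice[0] = 'R'                      # RNAP entry, exclusion respected
    for i in rng.sample(range(L), L):
        occ = lattice[i]
        if occ == 'R':
            if i == L - 1:
                if rng.random() < beta:       # RNAP exit at the last site
                    lattice[i] = None
            elif lattice[i + 1] is None and rng.random() < u:
                lattice[i], lattice[i + 1] = None, 'R'   # forward hop only
        elif occ == 'T':
            if rng.random() < k_off:
                lattice[i] = None             # TF detaches into the solution
            else:
                j = i + rng.choice((-1, 1))   # unbiased diffusion
                if 0 <= j < L and lattice[j] is None and rng.random() < b:
                    lattice[i], lattice[j] = None, 'T'
        elif rng.random() < k_on:
            lattice[i] = 'T'                  # TF attaches at any empty site

rng = random.Random(1)
lattice = [None] * L
for _ in range(1000):
    sweep(lattice, rng)
rnap_density = sum(site == 'R' for site in lattice) / L
```

Mutual exclusion is enforced at every move: an RNAP or TF hops only onto an empty target site, and an RNAP enters only if site $1$ is empty.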
The process of target search by a TF is treated in our theoretical framework as a {\it first-passage} process; starting from a given unique initial state, the time required for the completion of each successful search is a {\it first-passage time} (FPT) \cite{bressloff,redner,redner14}. Since the search process is stochastic, the search time is a random variable whose probability distribution is one of the main quantities of our interest here. The goal of this paper is to investigate the effects of the intra-species and inter-species interactions among the particles on the search dynamics of the TFs. Our theoretical studies of the model are based on (approximate) analytical calculations and computer simulations. Among the various phenomena that we observe, the following are the most notable: (i) over a range of parameter values, the search by the TF follows a mechanism that is similar to a Brownian ratchet; (ii) for given values of all the other parameters, there is an optimum rate of detachment of the TF from the lattice at which its mean search time attains a minimum. We explain the underlying physical principles that give rise to these interesting phenomena. \section{Model} \begin{figure}[h] \includegraphics[angle=0,width=1.0\columnwidth]{FIG1.pdf} \caption{Schematic representation of the model. The model consists of a one-dimensional lattice and multiple particles of two different species. Particles of the first species represent TFs, whereas particles of the second species represent RNAPs. A TF can attach at any site of the lattice with rate $ k_{on} $ and can detach from any site with rate $ k_{off} $. Inside the lattice, a TF can hop in both the forward and backward directions with identical rates $b$. Unlike a TF, a RNAP can attach only at site $ i=1 $, with rate $ \alpha $, and it can detach only from site $ i=L $, with rate $ \beta $. Inside the lattice, a RNAP can jump only in the forward direction, with rate $u$.
All the particles follow the exclusion principle, i.e., no two particles can occupy the same site simultaneously.} \label{fig-model} \end{figure} The kinetics of the model is shown schematically in Fig.~\ref{fig-model}. The model consists of a one-dimensional lattice with equispaced sites labeled by the integer index $i$ ($1 \leq i \leq L$). Thus, the length of the lattice is $L$ in units of the lattice spacing. Throughout this paper we assume that there is a single TF searching for a specific site located on the one-dimensional lattice. A TF can attach to an arbitrary lattice site, with rate $k_{on}$, provided that this site is empty. Once attached to the lattice, the TF can hop forward or backward until it detaches from the lattice; it detaches with rate $k_{off}$. In the bulk of the system, i.e., at sites $2 \leq i \leq L-1$, the TF can hop both forward and backward, with rate $b$. This unbiased random walk (RW) of the particle captures the one-dimensional diffusion of the TF on the DNA chain. At the edges of the lattice, i.e., at $i=1$ and $i=L$, the TF can hop, with rate $b$, only in the forward and backward directions, respectively. The position of the TF is marked by an integer index $n$, where $n$ is allowed to vary over the range $0 \leq n \leq L$. The positions $n=1,2, \dots, L$ of the TF coincide with the lattice sites $i=1,2, \dots, L$, whereas the position $n=0$ indicates being in the solution (i.e., the medium in which the lattice is embedded). Unlike the TFs, a RNAP can attach to the lattice, with rate $\alpha$, only at the site $i=1$, provided that this site is not already occupied by another RNAP or a TF. Once attached to the lattice, the RNAP can hop only in the forward direction, with rate $u$, while respecting the exclusion principle at each step.
The RNAP continues its forward hopping at the given rate till it reaches the last lattice site, labeled by $i=L$, from where it is allowed to detach from the lattice with rate $\beta$. Throughout the paper, we assume that the TASEP of RNAPs attains its non-equilibrium steady state, characterized by a time-independent flux, before the search by the TF begins. \section{Results} Since it has not been possible to obtain a single analytical expression that would be valid in all regimes of the parameters of our model, we derive these expressions in two different limits, namely the low and high rates of detachment from the lattice. We also check the accuracy of the analytical expressions derived in these two limits by comparing them with the corresponding numerical data obtained from Monte Carlo (MC) simulations of the model. \subsection{Vanishing rate of detachment: insight from the extreme limit} In order to gain insight into the effects of the traffic of the RNAPs on the kinetics of the TF, let us begin with the extreme limiting case $k_{off}=0$. If the TF is assumed to begin at $t=0$ in a state where it is already attached to the lattice at a site $n \neq m$, it remains attached at all times $t > 0$. The kinetics is still interesting and depends on the dynamical phase that the RNAPs would exhibit, for the chosen set of parameter values, in the complete absence of any TF. \begin{figure}[h] \includegraphics[angle=0,width=1.0\columnwidth]{FIG2.pdf} \caption{Position ($x$) of the TF and of the RNAP immediately on its left, plotted against time ($t$) in the LD phase of the RNAP traffic for four different values of the ratio $u/b$. The four triangles correspond to the RNAP whereas the remaining four symbols correspond to the TF. The corresponding data in the HD phase and MC phase of the RNAP traffic are plotted in the insets; the data are practically independent of the value of the ratio $u/b$.
The solid black lines indicate the average velocities of the RNAPs while the black dashed lines correspond to $v_{eff}$.} \label{fig-xt0koff} \end{figure} Intuitively, it is obvious that the RNAPs prevent the backward (leftward, according to Fig.~\ref{fig-model}) steps of the TF. Therefore, the TF behaves as a Brownian ratchet and exhibits forward-directed (i.e., rightward) motion. Consequently, the TF can never reach the target site in this extreme limit if initially $n > m$, i.e., if the searcher is located initially on the right side of the target. In contrast, if initially $n < m$, i.e., if the search begins from a site on the left of the target site, the searcher TF would certainly hit the target after some time, during which it is closely followed by a RNAP that rectifies the Brownian motion of the searcher. In Fig.~\ref{fig-xt0koff} we plot the positions of the TF and of the closest RNAP following it, both as functions of time. The asymptotic linear increase of the position $x$ with time $t$ establishes that the TF moves effectively ballistically, instead of performing its natural diffusive search, because of the rectification of its backward steps by the RNAP following it from behind. Suppose the parameters $\alpha$, $\beta$ and $u$ are such that the RNAPs would be in the LD phase in the absence of any TF. In this case the mean search time is \begin{equation} \langle t_s \rangle \simeq (m-n)/v_{eff} \end{equation} where the effective average velocity of the TF should be \begin{equation} v_{eff} = \biggl(\frac{b}{b+u}\biggr) v = \biggl(\frac{1}{1 + (u/b)}\biggr) v \label{eq-v0koff} \end{equation} with \begin{equation} v = u \biggl(1- \frac{\alpha}{u}\biggr) \label{eq-LDavVel} \end{equation} being the corresponding average velocity that the RNAP would have in the LD phase in the absence of any TF.
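A quick numerical sanity check of Eqs.~(\ref{eq-v0koff}) and (\ref{eq-LDavVel}), using the LD-phase rates quoted in the text ($\alpha = 10$ s$^{-1}$, $u = 10^{3}$ s$^{-1}$); the value of $b$ below is chosen only for illustration:

```python
# Check v = u*(1 - alpha/u) and v_eff = v/(1 + u/b) in the LD phase.
alpha, u = 10.0, 1000.0        # RNAP entry and hopping rates (from the text), /s
v = u * (1.0 - alpha / u)      # average RNAP velocity without the TF
assert abs(v - 990.0) < 1e-9   # matches the value quoted in the text

b = 1000.0                     # TF diffusion rate, illustrative (u/b = 1)
v_eff = v / (1.0 + u / b)      # ratchet-driven effective TF velocity
assert abs(v_eff - 495.0) < 1e-9   # half of v when u = b
```

As $u/b \to 0$ the prefactor tends to one and $v_{eff} \to v$, consistent with the observation below that a fast TF leaves the following RNAP practically unaffected.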
Indeed, the RNAPs do achieve the average velocity (\ref{eq-LDavVel}) if $u/b$ is sufficiently small (see Fig.~\ref{fig-xt0koff}), because the faster moving TF vacates the site in front of the following RNAP sooner, thereby leaving the RNAP practically unaffected by the TF. The formula (\ref{eq-v0koff}) is in excellent agreement with the simulation data plotted in Fig.~\ref{fig-xt0koff}, where we have used $\alpha= 10$ s$^{-1}$, $\beta=10^3$ s$^{-1}$, $u = 10^3$ s$^{-1}$ and, hence, $v = 990$ s$^{-1}$. Note that in the HD phase the mean search time is $\langle t_s \rangle = (m-n)/v$, where $v=\beta$, irrespective of the magnitude of $b/u$, because the gap between successive RNAPs is so small that the TF is essentially dragged by the flow of the RNAPs with the same velocity as that of the RNAPs. For the $x$-$t$ plots in both the insets of Fig.~\ref{fig-xt0koff} we have used the parameters $u= b = 10^{3}$ s$^{-1}$. The other parameters, namely $\alpha$ and $\beta$, were selected so as to attain the desired phase of the RNAPs in the absence of the TF. For attaining the HD phase we chose $\alpha =1000$ s$^{-1}$ and $\beta=100$ s$^{-1}$. In contrast, for attaining the MC phase we selected $\alpha=1000$ s$^{-1}$ and $\beta=1000$ s$^{-1}$. So, in the steady state, the average velocity of the RNAPs in the HD phase would be $v = u (\beta/u) = \beta = 10^{2}$ s$^{-1}$. The slope of the straight line in the corresponding inset of Fig.~\ref{fig-xt0koff} is, indeed, $10^{2}$ s$^{-1}$. Similarly, the average velocity of the RNAPs in the MC phase is $u (1 - \rho) = u/2$, as $\rho = 1/2$. Moreover, since we have taken $u = 1000$ s$^{-1}$, the average velocity of the RNAPs in the MC phase would be $500$ s$^{-1}$ which, indeed, is the slope of the $x$-$t$ straight line in the corresponding inset of Fig.~\ref{fig-xt0koff}. \subsection{Non-vanishing rate of detachment} \subsubsection{Low detachment limit: heuristic analytical argument} Let us consider low, but non-vanishing, values of the detachment rate $k_{off}$.
\subsubsection{High detachment limit: approximate theory based on first-passage analysis} Next, let us consider the opposite limit, namely, the high detachment limit. In this case, the searcher TF dissociates even before it feels the strong directional push of the unidirectional flow of RNAPs. Therefore, in this limit the recently developed theoretical framework of Ref.~\cite{veksler13} is expected to provide a reasonable description of the search dynamics. To keep the discussion self-contained, we first summarize the main steps of the calculations reported in Ref.~\cite{veksler13} before presenting the new analytical formulas that we use for our work. Following Veksler and Kolomeisky \cite{veksler13}, we define the probability $F_{n|m}(t)$ of reaching the target at site $m$ {\it for the first time} at time $t$, given that at $t = 0$ the TF was at site $n$ ($n = 0, 1, \ldots, L$). The time evolution of these first-passage probabilities is governed by the backward master equations \cite{veksler13}, \begin{eqnarray} \frac{dF_{n|m}(t)}{dt}&=&b\bigl[F_{n+1|m}(t)+F_{n-1|m}(t)\bigr]+k_{off} F_{0|m}(t)\nonumber\\ &-& (2b + k_{off})F_{n|m}(t) ~({\rm for}~ 2 \le n \le L-1), \nonumber \\ \end{eqnarray} while for the two ends of the lattice at $n = 1$ and $n = L$ the equations are \begin{eqnarray} \frac{dF_{1|m}(t)}{dt}&=&bF_{2|m}(t)+k_{off}F_{0|m}(t)-(b+k_{off})F_{1|m}(t) \nonumber \\ \end{eqnarray} \begin{eqnarray} \frac{dF_{L|m}(t)}{dt}&=&bF_{L-1|m}(t)+k_{off}F_{0|m}(t)-(b+k_{off})F_{L|m}(t) \nonumber \\ \end{eqnarray} If the TF starts from the solution, i.e., $n = 0$ according to our notation, then the corresponding backward master equation is given by \cite{veksler13}, \begin{eqnarray} \frac{dF_{0|m}(t)}{dt}&=&\frac{k_{on}}{L}\sum_{n=1}^{L}F_{n|m}(t)-k_{on}F_{0|m}(t). \end{eqnarray} These equations can be analyzed by introducing the Laplace transforms of the first-passage probability functions, $\tilde{\cal F}_{n|m}(s)= \int_{0}^{\infty} e^{-st} F_{n|m} (t) dt$.
Then, the backward master equations can be rewritten as a set of simpler algebraic expressions, \begin{eqnarray} (s+2b+k_{off})\tilde{\cal F}_{n|m}(s)&=&b[\tilde{\cal F}_{n+1|m}(s)+\tilde{\cal F}_{n-1|m}(s)] \nonumber \\ &+&k_{off}\tilde{\cal F}_{0|m}(s) \end{eqnarray} \begin{equation} (s+b+k_{off})\tilde{\cal F}_{1|m}(s) = b\tilde{\cal F}_{2|m}(s)+k_{off}\tilde{\cal F}_{0|m}(s) \end{equation} \begin{equation} (s+b+k_{off})\tilde{\cal F}_{L|m}(s) = b\tilde{\cal F}_{L-1|m}(s)+k_{off}\tilde{\cal F}_{0|m}(s) \end{equation} \begin{equation} (s+k_{on})\tilde{\cal F}_{0|m}(s) = \frac{k_{on}}{L}\sum_{n=1}^{L}\tilde{\cal F}_{n|m}(s) \end{equation} These equations are solved by assuming that the general form of the solution is $\tilde{\cal F}_{n|m}(s) = Ay^{n} + B$; using the boundary and initial conditions, this yields \begin{eqnarray} \tilde{\cal F}_{n|m}(s)&=&\frac{(1-B)(y^{n} +y^{-n})}{y^{m}+y^{-m}}+B \end{eqnarray} for $1 \le n \le m$, and \begin{eqnarray} \tilde{\cal F}_{n|m}(s)&=&\frac{(1-B)(y^{1+L-n} +y^{n-L-1})}{y^{1+L-m}+y^{m-L-1}}+B \end{eqnarray} for $m \le n \le L$. Here, the parameters $y$ and $B$ are given by \begin{eqnarray} y&=&\frac{s+2b+k_{off}-\sqrt{(s+2b+k_{off})^{2} -4b^{2}}}{2b},\\ B&=&\frac{k_{off}\tilde{\cal F}_{0|m}(s)}{k_{off}+s}. \end{eqnarray} One can also show that \begin{eqnarray} \tilde{\cal F}_{0|m}(s)&=&\frac{k_{on}(k_{off} + s)S(s)}{Ls(k_{off} + k_{on} + s) + k_{off}k_{on}S(s)} \end{eqnarray} where the new auxiliary function $S(s)$ is given by \begin{eqnarray} S(s)&=&\frac{y(1 + y)(y^{-L} - y^{L})}{(1-y)(y^{1-m}+y^{m} )(y^{m-L}+y^{1+L-m})} \end{eqnarray} More specifically, the mean first-passage time to reach the target located at site $m$, starting from any other site $n$ on the lattice, can be computed from \begin{eqnarray} T_{n|m}&=&-\frac{d}{ds}\tilde{\cal F}_{n|m}(s)\bigg|_{s=0}. \end{eqnarray} \begin{figure}[h!]
(a)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG3a.pdf} \\ (b)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG3b.pdf} \\ (c)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG3c.pdf} \\ \caption{Variation of $T_{n|m}$ with respect to $m-n$ for different values of $k_{off}$, for (a) the LD phase ($\rho=0.3$), (b) the MC phase ($\rho=0.5$) and (c) the HD phase ($\rho=0.7$). All the other parameters are kept fixed at the values $b=10^{3}~ s^{-1}$, $k_{on}=10^{4}~ s^{-1}$, $m=L/2$ and $L=10^{3}$. Continuous lines have been obtained from the extended Veksler-Kolomeisky formulas and discrete data points have been obtained from MC simulations. Blue, red and orange lines correspond to $k_{off}=1000~ s^{-1}$, $k_{off}=400~ s^{-1}$ and $k_{off}=200~ s^{-1}$, respectively. Triangles, circles and squares correspond to $k_{off}=1000~ s^{-1}$, $k_{off}=400~ s^{-1}$ and $k_{off}=200~ s^{-1}$, respectively. The inset shows the variation of $T_{n|m}$ for $m-n$ ranging from $-L/2$ to $L/2$.
Line colors are same for same parameters as before.} \label{fig-T_n_vs_n_Kolomeisky_theory} \end{figure} \begin{widetext} \begin{eqnarray} T_{n|m}&=&\biggl[2^{-1 - 2 m - n} \biggl(k_{on} + k_{off}\biggr) L x^{-1 - 2 m - n} \biggl(-k_{off} + \sqrt{k_{off} (k_{off} + 4 b)} \biggr)\nonumber\\ &&\biggl(-2^{m + 2 n} x^{m} + 2^{2 m + n} x^{n} + 2^{n} x^{2 m + n} - 2^{m} x^{m + 2 n}\biggr)\nonumber\\ &&\biggl\{-4^{m} k_{off} + 4^{m} \sqrt{k_{off} (k_{off} + 4 b)} - 2 b \biggl(4^{m} +x^{2 m}\biggr)\biggr\} \biggl\{-4^{m} k_{off} x^{2 L} \nonumber\\ &&+ 4^m \sqrt{k_{off} (k_{off} + 4 b)} x^{2 L}+2 b \biggl(4^{m} x^{2 L} + 4^{L} x^{2 m}\biggr)\biggr\}\biggr]\bigg/ \nonumber\\ &&\biggl[k_{on} k_{off} b^{2} \biggl\{k_{off} + 4 b - \sqrt{k_{off} (k_{off} + 4 b)}\biggr\} \biggl(4^{L} - x^{2 L}\biggr) \biggl(4^{m} + x^{2 m}\biggr)\biggr]~~~~\rm (for ~1 \le n \le m), \end{eqnarray} and \begin{eqnarray} T_{n|m}&=&\biggl[2^{-1 - 2 m - n} \biggl(k_{on} + k_{off}\biggr) L x^{-1 - 2 m - n} \biggl\{-k_{off} + \sqrt{k_{off} (k_{off} + 4 b)}\bigg\}\nonumber\\ && \biggl(2^{n} x^{m} - 2^{m}x^{n}\biggr) \biggl\{-4^{m} k_{off} + 4^{m} \sqrt{k_{off} (k_{off} + 4 b)} - 2 b \biggl(4^{m} + x^{2 m}\biggr)\biggr\}\nonumber\\ && \biggl\{-4^{m} k_{off} x^{2 L} + 4^{m} \sqrt{k_{off} (k_{off} + 4 b)} x^{2 L} - 2 b \biggl(4^{m} x^{2 L} + 4^{L} x^{2 m}\biggr)\biggr\}\nonumber\\ &&\biggl\{-2^{m + n}k_{off}^{2} x^{2 L} + 2^{m + n} k_{off} x^{2 L} \biggl(-4 b + \sqrt{k_{off} (k_{off} + 4 b)}\biggr) \nonumber\\ &&+ 2 b \biggl(-2^{m + n} b x^{2 L} +2^{m + n} \sqrt{k_{off} (k_{off} + 4 b)} x^{2 L} + 4^{L} b x^{m + n}\biggr)\biggr\}\biggr]\bigg/\nonumber\\ &&\biggl[k_{on} k_{off} b^{2} \biggl(k_{off} + 4 b - \sqrt{k_{off} (k_{off} + 4 b)}\biggr) \biggl(4^{L} - x^{2 L}\biggr) \biggl\{4^{m} k_{off}^{2} x^{2 L} - 4^{m} k_{off} x^{2 L}\nonumber\\ &&\biggl(-4 b + \sqrt{k_{off} (k_{off} + 4 b)}\biggr)+ 2 b \biggl(4^{m} b x^{2 L} - 4^{m} \sqrt{k_{off} (k_{off} + 4 b)} x^{2 L} + 4^{L} b x^{2 
m}\biggr)\biggr\}\biggr]~\rm (for ~m \le n \le L) \end{eqnarray} \end{widetext} where \begin{equation} x=\biggl(k_{off}+2b-\sqrt{k_{off}(k_{off}+4b)}\biggr)\bigg/b. \end{equation} The average time to find the target, starting from the solution, $T_{0}$, can easily be found using the following equality, \begin{eqnarray} T_{0}&=&-\frac{d}{ds}\tilde{\cal F}_{0|m}(s)\bigg|_{s=0}\nonumber\\ &=&\frac{k_{off} L+k_{on}(L-S(0))}{k_{on}k_{off}S(0)}. \end{eqnarray} The theoretical treatment summarized above does not include the traffic-like flow of the RNAPs. In order to capture these effects, the diffusion rate $b$ and the attachment rate $k_{on}$ have been replaced by the effective rates $b(1-\rho)$ and $k_{on}(1-\rho)$, respectively, where $\rho$ is the steady-state density of the RNAPs. Note that, in the absence of the TF, the density $\rho$ of the RNAPs is determined by the magnitudes of the three rates $\alpha$, $\beta$ and $u$. In order to check how well this theory works in the limit of high detachment rates, we have carried out MC simulations. First, in the absence of any TF, the system of RNAPs is allowed to evolve for one million MC steps, allowing the traffic flow to reach its steady-state value during that period. Then a TF is added to the solution (i.e., in the state $n=0$) and the composite system consisting of the TF and the RNAPs is allowed to evolve following the dynamical rules summarized in Fig.~\ref{fig-model}. Each round of the search process ends as soon as the TF reaches the target site; the next round of search begins with a new TF in solution. The time taken by the TF in each round of search is recorded and the data are finally averaged over $2000$ rounds of search to calculate the mean first-passage time, i.e., the average search time needed by a single TF searcher to reach the target. The numerical values of the rates $\alpha$, $\beta$ and $u$ were chosen in such a way that the traffic of the RNAPs, in the absence of the TF, would attain the desired dynamical phase.
The particular sets of values chosen for the three phases are as follows: for LD, $\alpha =300~s^{-1}$, $\beta=1000~s^{-1}$; for MC, $\alpha =1000~s^{-1}$, $\beta=1000~s^{-1}$; and for HD, $\alpha =1000~s^{-1}$, $\beta=300~s^{-1}$. We have used the value $u=1000~s^{-1}$ for all three phases. For these parameter values, over the range $1000~ s^{-1} \geq k_{off} \geq 200~ s^{-1}$ the detachment rate $k_{off}$ is still sufficiently high, and excellent agreement between the theoretical predictions and the data obtained from MC simulations is seen in Fig.~\ref{fig-T_n_vs_n_Kolomeisky_theory}. This level of agreement with the MC data establishes that the approximations made in the theoretical derivations are well justified in this regime. Note that, because of the rapid detachments of the TF from the lattice in the parameter regime used for the plots in Fig.~\ref{fig-T_n_vs_n_Kolomeisky_theory}, the duration of each of its rounds of scanning is too short to be significantly affected by the flow of the RNAPs. Therefore, it is not surprising that the theory is in good agreement with the MC simulation data in this regime. \begin{figure}[h!] (a)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG4a.pdf} \\ (b)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG4b.pdf} \\ (c)\\[0.02cm] \includegraphics[angle=0,width=0.85\columnwidth]{FIG4c.pdf} \\ \caption{Variation of $T_{n|m}$ with respect to $m-n$ for different values of $k_{off}$, for (a) the LD phase ($\rho=0.3$), (b) the MC phase ($\rho=0.5$) and (c) the HD phase ($\rho=0.7$). All the other parameters are kept fixed at the values $b=10^{3}~ s^{-1}$, $k_{on}=10^{4}~ s^{-1}$, $m=L/2$ and $L=10^{3}$. The discrete data points have been obtained from MC simulations. Blue triangles, red circles and orange squares correspond to $k_{off}=100~ s^{-1}$, $k_{off}=10~ s^{-1}$ and $k_{off}=1~ s^{-1}$, respectively.
Dashed lines are drawn just as a guide to the eyes.} \label{fig-T_n_vs_n_Kolomeisky_theory2} \end{figure} But, as the detachment rate $k_{off}$ decreases, the TF spends longer times scanning the lattice, and most of its attempts to step against the flow of the RNAPs become unsuccessful in comparison with its steps in the direction of flow. The resulting motion of the TF is similar to that of a Brownian ratchet \cite{reimann02,julicher97}. The curves showing $T_{n|m}$ are now asymmetric about $m-n=0$; the lower the value of $k_{off}$, the stronger this asymmetry (see Fig.\ref{fig-T_n_vs_n_Kolomeisky_theory2}). Since in this regime the Veksler-Kolomeisky theory \cite{veksler13} shows large deviations from the MC data, and the deviation increases with decreasing $k_{off}$, we have plotted only the MC data in Fig.\ref{fig-T_n_vs_n_Kolomeisky_theory2}; the lines connecting these discrete data points serve merely as a guide to the eye. When the TF is attached to the lattice it scans a distance of the order of $2 \lambda$, where $\lambda=\sqrt{b_{eff}/k_{off}}$, during each encounter, which lasts for a time of the order of $1/k_{off}$. Here $b_{eff}$ is the effective diffusion rate of the TF. If we take the smallest $k_{off} = 1$s$^{-1}$ and the largest $b_{eff}=1000$s$^{-1}$, the scanning distance is of the order of $70$. For $k_{off}=10$ s$^{-1}$ it will be about $27$, while for $k_{off}=100$ s$^{-1}$ it will be about $7$. One would see the asymmetry displayed in Fig.\ref{fig-T_n_vs_n_Kolomeisky_theory2} if the TF is not farther than this distance from the target, irrespective of whether it is located to the left or the right of the target. This is because, on the average, a TF downstream of the target has difficulty finding it (colliding with the oncoming RNAPs) and has to dissociate, while a TF upstream can find it without dissociating. Let us call the regions extending a distance $2\lambda$ upstream and $2\lambda$ downstream of the target the two `antenna' zones.
It is easy to explain the observed asymmetry almost quantitatively. Suppose the TF starts in the middle of the upstream antenna zone, say with initial $m-n=35$. Then the search time can be estimated as $T=1/(2k_{off})$. The coefficient $1/2$ appears because the TF starts in the middle of the antenna region. If we take $k_{off}=1$ s$^{-1}$, then we get the search time to be about $0.5$ s. This is exactly what we see in Fig.\ref{fig-T_n_vs_n_Kolomeisky_theory2}. Now if the TF starts in the middle of the downstream antenna zone (i.e., say, with initial $m-n=-35$), then the searcher cannot reach the target, because of the oncoming traffic of RNAPs, unless it dissociates. It will spend $0.5$ s there and then, after dissociation, it will have a chance to bind to the upstream region and do better. But, on the average, it will arrive at a position $L/4$ from the target, i.e., in the middle of the left half of the chain. On the average, it will then have to perform $(L/4)/(2\lambda)$ cycles, each of approximate average duration $1/k_{off}$ (because the attachment rate $k_{on}=10000$ s$^{-1}$ is very large). For $k_{off}=1$ s$^{-1}$ the searcher will need to make about $3.5$ cycles, on the average. So, the total search time for a TF starting in the downstream antenna region is $0.5~{\rm s}+3.5~{\rm s}=4$ s, which is in excellent agreement with the corresponding simulation data plotted in Fig. \ref{fig-T_n_vs_n_Kolomeisky_theory2} for the LD phase. The search takes longer in the MC and HD phases because the effective diffusion rates are smaller there, due to the smaller average spacings between the RNAPs. Since $\rho_{HD}=0.7$ and $\rho_{LD}=0.3$, the search should take longer by a factor $\sqrt{\rho_{HD}/\rho_{LD}}=1.5$, i.e., $T=6$ s. This is also in excellent agreement with the simulation data. In reality, the search time available to a TF is finite. What is the effect of such {\it finite time of search} on the success in hitting the target site?
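The back-of-the-envelope numbers above are easy to verify; the following few lines of Python (ours) simply restate the arithmetic, taking the antenna size $2\lambda \approx 70$ quoted in the text for $k_{off}=1$ s$^{-1}$.

```python
k_off = 1.0        # detachment rate (s^-1)
L = 1000           # lattice length
antenna = 70       # 2*lambda, the antenna size quoted above for k_off = 1 s^-1

t_up = 1.0 / (2.0 * k_off)          # start mid-antenna upstream: 0.5 s
cycles = (L / 4.0) / antenna        # detach/re-attach cycles needed: ~3.5
t_down = t_up + cycles / k_off      # each cycle lasts ~1/k_off

print(t_up, cycles, t_down)         # 0.5, ~3.57, ~4.07 s
```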
If a TF begins search from a site located downstream from the target site, it cannot reach the target site if $k_{off}=0$. In other words, the probability of hitting the target is zero for all $n > m$ in the special case $k_{off}=0$. Even for small values of $k_{off}$, only a fraction of the search attempts are successful in reaching the target site {\it over a finite duration of search}. In our MC simulations, for each given initial value of $|n-m|$, we define the probability of successful events ($P_{\alpha}$) as the number of successful events divided by the total number of events, when each unsuccessful search attempt is aborted after $10^6$ MC steps. The parameter values chosen for this study are $\alpha =10~s^{-1}, ~\beta=1000~s^{-1}$, $u=1000~s^{-1}$ which, in the absence of any TF, would lead to the LD phase of the RNAPs in the steady state. All other parameters are kept fixed at the values $b=10^{3}~ s^{-1}$, $k_{on}=10^{6}~ s^{-1}$, $m=L/2$ and $L=10^{3}$. In Fig.\ref{fig-prob} we have plotted the logarithm of the probability of the successful events as a function of $|n-m|$ for several different values of the parameter $k_{off}$. The data fit well with straight lines, indicating that \begin{equation} P_{\alpha} \propto \exp(-|n-m|/\xi) \end{equation} where the normalized range $\xi/L$ of successful search increases from 0 to 1 with increasing $k_{off}$. \begin{figure}[th] (a)\\[0.02cm] \includegraphics[angle=0,width=0.9\columnwidth]{FIG5a.pdf} \\[0.02cm] (b)\\[0.02cm] \includegraphics[angle=0,width=0.9\columnwidth]{FIG5b.pdf} \\ \caption{(a) LD ($\rho=0.01$) : Logarithm of the probability of the successful search events {\it in finite time} is plotted with respect to the relative distance of the initial position of the TF from the target. Data points are obtained from MC simulation. Lines correspond to the best-fit curves. Different lines are for different values of $k_{off}$. (b) The inverses of the slopes of these lines (the correlation lengths) are plotted against $k_{off}$.
Dashed line is drawn just to guide the eyes.} \label{fig-prob} \end{figure} \subsection{Mean search time starting from solution} Let the symbols $b_{eff}$ and $v_{eff}$ denote the effective hopping rate of the TF and the effective velocity of the RNAPs, respectively. Similarly, $(k_{on})_{eff}$ is the effective attachment rate of the TF on the track per site, while $k_{off}$ is the detachment rate of the TF from any site on the track. Suppose $\rho$ is the steady state number density of the RNAPs while $\alpha$ and $\beta$ denote the rates of their attachment and detachment, respectively, with the track. From known TASEP results we can write \begin{eqnarray} \rho = \alpha/u {\rm~in ~LD}\\ b_{eff} = b(1-\rho)\\ v_{eff} = u(1-\rho)\\ (k_{on})_{eff}=k_{on}(1-\rho) \end{eqnarray} where $b$ and $u$ are the hopping rates of the TF and RNAPs, respectively, and $k_{on}$ is the attachment rate of the TF on the track per site. Since the oncoming traffic of RNAPs reduces the effective velocity of the TF, the effective speed of the TF is reduced to $b_{eff}-v_{eff}$ when it moves against the flow of the RNAPs. In contrast, the TF can hop in the direction of flow of the RNAPs with the effective speed $b_{eff}$ since, in this case, apart from the fact that $b_{eff} \neq v_{eff}$, the TF and RNAPs move in the same direction, subject to the same exclusion principle. We use the integer indices $n=1,2,3,\ldots,L$ to label the equispaced sites on the track, where the integers increase in the direction of movement of the RNAPs.
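For concreteness, the mean-field relations above amount to the following trivial computation (the function name is our own illustration):

```python
def effective_rates(alpha, u, b, k_on):
    """Effective rates in the LD phase, using rho = alpha/u."""
    rho = alpha / u
    return {"rho": rho,
            "b_eff": b * (1.0 - rho),        # effective TF hopping rate
            "v_eff": u * (1.0 - rho),        # effective RNAP velocity
            "k_on_eff": k_on * (1.0 - rho)}  # effective attachment rate per site

r = effective_rates(alpha=10.0, u=1000.0, b=1000.0, k_on=1e6)
print(r)   # rho = 0.01, b_eff = v_eff = 990, k_on_eff = 990000
```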
Let $P(n,t)$ denote the probability that there is a TF at site $n$ at time $t$, where $n=0$ represents the solution. The stochastic time evolution of the system, in terms of these probabilities, is governed by the master equations \begin{widetext} \begin{eqnarray} \frac{dP(0,t)}{dt}&=&k_{off}\sum_{n=1}^{L/2-1}P(n,t)+k_{off}\sum_{n=L/2+1}^{L}P(n,t)- L~(k_{on})_{eff}~P(0,t)~ ,\\ \frac{dP(1,t)}{dt}&=&\bigl(b_{eff}-v_{eff}\bigr)~P(2,t)+(k_{on})_{eff}~P(0,t)-\bigl(b_{eff}+k_{off}\bigr)~P(1,t)~ ,\\ \frac{dP(L/2-1,t)}{dt}&=&b_{eff}~P(L/2-2,t)+(k_{on})_{eff}~P(0,t)-\bigl(2~b_{eff}-v_{eff}+k_{off}\bigr)~P(L/2-1,t)~ ,\\ \frac{dP(L/2,t)}{dt}&=&b_{eff}~P(L/2-1,t)+\bigl(b_{eff}-v_{eff}\bigr)~P(L/2+1,t)+(k_{on})_{eff}~P(0,t)~ ,\\ \frac{dP(L/2+1,t)}{dt}&=&\bigl(b_{eff}-v_{eff}\bigr)~P(L/2+2,t)+(k_{on})_{eff}~P(0,t)-\bigl(2~b_{eff}-v_{eff}+k_{off}\bigr)~P(L/2+1,t)~ ,\\ \frac{dP(L,t)}{dt}&=&b_{eff}~P(L-1,t)+(k_{on})_{eff}~P(0,t)-\bigl(b_{eff}-v_{eff}+k_{off}\bigr)~P(L,t)~ ,\\ \frac{dP(n,t)}{dt}&=&b_{eff}~P(n-1,t)+\bigl(b_{eff}-v_{eff}\bigr)~P(n+1,t)+(k_{on})_{eff}~P(0,t)\nonumber\\ &-&\bigl(2~b_{eff}-v_{eff}+k_{off}\bigr)~P(n,t)~~~~~({\rm for}~ 1<n<L/2-1~ {\rm and}~L/2+1<n<L)~ . \end{eqnarray} By solving these master equations iteratively, starting with the initial condition $P(n=0,t=0)=1$, i.e., with the TF initially in the solution, we can get the above probabilities for all subsequent instants of time $t > 0$. Utilizing these solutions, we get the probability density of the search times. More specifically, the probability density $f(t)$ to reach the target site located at $m=L/2$ between the times $t$ and $t+dt$ is obtained from \begin{eqnarray} f(t)&=&b_{eff}~P(L/2-1,t)+\bigl(b_{eff}-v_{eff}\bigr)~P(L/2+1,t)+(k_{on})_{eff}~P(0,t)~ . \end{eqnarray} In the actual numerical calculation, we stop the process of iterative solution of the set of coupled master equations at a time $t=T_{max}$ when $P(m=L/2,t)$ becomes $\approx$ 1.
The mean search time for a TF, defined by \begin{eqnarray} <T_{s}>&=&\int_{0}^{\infty}t~f(t) dt \end{eqnarray} \end{widetext} is then computed numerically by evaluating the integral after replacing its upper limit by $T_{max}$. In our computation of the mean search time by MC simulation, we start with an empty lattice and switch on the flow of RNAPs. We monitor the flow of the RNAPs for the first one million time steps to ensure that the system reaches the steady state. Then an absorbing boundary condition for the TF is imposed at the designated target site at $m=L/2$. We place a TF in the solution ($n=0$) and start our clock ($t=0$). The updating of the state of the combined system of RNAPs and TF is continued till the TF reaches the target site; at that instant the simulation is stopped and the corresponding clock reading gives the search time for that particular MC run. We repeat this same procedure for 1000 MC runs and then average over all the MC runs to calculate the mean search time $(<T_{s}>)$ for the TF to find the target located at $m=L/2$. The results of the numerical solutions of the master equations and those obtained from MC simulations are plotted in Fig.\ref{fig-5}. The predictions of the MFA are in good agreement with the MC data. An interesting feature of the $<T_{s}>$-vs-$k_{off}$ curves is the occurrence of a (local) minimum, indicating an optimal search strategy. Moreover, for a given $k_{on}$, the search can be made even more efficient than that implied by the local minimum by tuning $k_{off}$ to an appropriately large value. \begin{figure}[h] \includegraphics[angle=0,width=0.75\columnwidth]{FIG6.pdf} \caption{LD: Mean search time is plotted against $k_{off}$ for four different values of $k_{on}$. Parameters are kept fixed at the values $\alpha =10~s^{-1}$, $\beta=10^{3}~s^{-1}$, $u=10^{3}~s^{-1}$, $\rho=0.01$, $b=10^{3}~ s^{-1}$, $dt=5 \times 10^{-5}~s$, $m=L/2$ and $L=10^{3}$. Initially the TF was in the solution $(n=0)$.
Continuous lines correspond to mean-field (MF) theory. Discrete data points are obtained from simulation; $T_{s}$ has been averaged over 1000 MC runs to get $<T_{s}>$. Dashed lines are drawn just to guide the eyes. } \label{fig-5} \end{figure} \section{Summary and Conclusion} In this paper we have developed a kinetic model for the search of a specific binding site on a linear chain by a single particle that executes diffusive motion along the chain in the presence of a uni-directional traffic flow of another distinct species of particles. This phenomenon resembles the diffusive search conducted by a protein, called a transcription factor (TF), for its specific binding site on a DNA while a stream of RNA polymerase (RNAP) motors move collectively in a uni-directional traffic-like manner on the same segment of DNA. At first sight, one might expect that such a huge crowd of RNAPs would cause strong hindrance to the natural movement of the TF, thereby increasing the time it requires to hit the target site on the DNA. Contrary to this naive expectation, we find that, over a wide range of values of the kinetic parameters of this model, the search requires a shorter time, because the RNAPs can reduce wasteful excursions in the wrong direction by pushing the TF in the correct direction. More precisely, the Brownian diffusion of the TF gets rectified to a pattern of movement that can be identified with a Brownian ratchet \cite{reimann02,julicher97}. Thus, in the presence of the RNAP traffic, the mean time of successful search can be even shorter than that required in the absence of RNAP traffic. Once a TF detaches from the DNA, it can resume its diffusive search only after re-attaching to the DNA. However, if the site of re-attachment is random, the distance between the site of its re-attachment and the target site may be longer than that between its location just before detachment and the target site. Thus, the TF does not draw any benefit from its earlier search history.
So, detachment may appear to disrupt the search process. But, as we show in this paper, that is not true. When the TF finds it practically impossible to move towards the target site by hopping against the flow of RNAP traffic, detachment from the DNA gives it a fresh opportunity to restart the search from another location, from where it can move co-directionally with the RNAP traffic. In fact, in the latter situation, instead of hindering the search by the TF, the RNAPs assist its search by dragging it downstream along with them. However, too frequent detachment can be detrimental to the successful completion of the search process. Based on these intuitive arguments, one would expect an optimal rate of detachment that corresponds to the fastest search, i.e., the shortest mean search time. This is exactly what we show by analyzing our kinetic model analytically under a mean-field approximation, as well as by direct MC simulation. The model developed here does not include non-motor crowders on the lattice. Therefore, it does not account for the effects of DNA-bound proteins, such as histones, on the time of search by the TF. Moreover, the effects of elastic forces arising from bending and possible twisting of the DNA are not incorporated in our calculations. We hope to extend our model in future, including both these features, to describe the search by the TF more realistically. Nevertheless, we hope our work would motivate tests of the validity of the theoretically predicted phenomena mentioned above by carrying out experiments {\it in-vitro} using a single DNA strand stretched by applying tension at its two ends with optical tweezers. \section*{Acknowledgments} Work of one of the authors (BM) has been supported by a Senior Research Fellowship from UGC. ABK acknowledges the support from the Welch Foundation (Grant No. C-1559), from the NSF (Grant No. CHE-1664218), and from the Center for Theoretical Biological Physics sponsored by the NSF (Grant No. PHY-1427654).
DC acknowledges support from SERB through a J.C. Bose National Fellowship. The authors thank ICTS for the hospitality in Bangalore, where this work was initiated during the ICTS program ``Collective Dynamics of-, on- and around Filaments in Living Cells: Motors, MAPs, TIPs and Tracks''.
\section{Introduction} The idea of localization dates back to Anderson's seminal work in 1958 \cite{Anderson1958}. While Anderson's discussion was general, the phenomenon was nonetheless widely assumed to be particular to systems of non-interacting particles (but see Refs.~\cite{Fleishman, MaksimovKagan}). A decade ago strong perturbative arguments were put forward \cite{AGKL, Mirlin, BAA} in support of the idea that interacting many body quantum systems could be in a localized phase, where they failed to equilibrate even at infinite times - a phenomenon that was dubbed `many body localization' (MBL). Numerical works \cite{Prosen, OganesyanHuse, PalHuse} and a recent mathematical proof \cite{Imbrie} have not only put this {\it many body localized} phase on firm ground, but have established that MBL can even persist into the regimes of strong interactions and high energy densities that are inaccessible to perturbation theory. More recently, it has been realized \cite{LPQO, VoskAltman2014, Pekkeretal2014, Bahrietal2013, Chandranetal2014} that MBL can support exotic forms of quantum order at high energy densities, even when such types of ordering are forbidden in thermal equilibrium. MBL is also associated with a rich phenomenology, including an emergent integrability \cite{HNO, Serbynlbits}, an unusual pattern of entanglement \cite{geraedts}, a nonlocal response to local perturbations \cite{nonlocal} and novel behavior in linear response \cite{gopalakrishnan}. For a review, see \cite{ARCMP}. For all these reasons and more, MBL has excited tremendous interest, which has only heightened since the phenomenon was potentially observed in ultracold atom experiments \cite{bloch1, bordia, bloch2}. However, despite the enormous interest in MBL, most investigations of this phenomenon have focused on {\it closed} quantum systems, perfectly isolated from any environment. Any `realistic' experimental system will always be coupled, however weakly, to a thermalizing environment.
Additionally, in a great many settings, including systems with protected delocalized states \cite{QHMBL, BanerjeeAltman, spinbath} and continuum systems \cite{Aleiner, 2dcontinuum} an `internal' heat bath may also be inevitably present in the system. How then should one understand MBL systems coupled to baths? In a series of works \cite{QHMBL, Aleiner2, BanerjeeAltman, Aleiner, 2dcontinuum, NGH, GN, JNB, HNPRS, proximity, Hyatt, Fischer, spinbath, lesanovsky2016, everest2016role, Znidaric}, a partial answer to the above question has emerged. In \cite{NGH}, the `generic' case of an MBL system coupled to a `good' bath was considered (this situation was also studied numerically in \cite{JNB}). It was pointed out that while weak coupling to a bath causes the {\it eigenstates} to become effectively thermal, nonetheless signatures of MBL survive in the {\it dynamics} (as characterized by spectral functions) as long as the coupling is weaker than the characteristic energy scales in the system. A logarithmic enhancement of the relaxation rate particular to MBL systems was also identified, as were various experimental diagnostics of MBL. Recently, this situation has also been analyzed within the Lindblad formalism in \cite{Fischer, Znidaric, lesanovsky2016, everest2016role}. In \cite{GN}, a {\it narrow bandwidth} bath (able to supply only small amounts of energy) was considered, and it was pointed out that the relaxation rate should have an additional power law smallness in the bandwidth of the bath. Additionally, this perspective was used to develop a self consistent mean field theory of the many body localization transition. All of the above works stayed in the regime where the `back action' of the system on the bath was negligible. In \cite{HNPRS}, single particle localized systems coupled to baths were investigated, including in the regime of strong back action. 
In \cite{proximity}, it was pointed out that in this strong back action regime, the system could localize the bath instead of the bath delocalizing the system, a phenomenon that was dubbed the `MBL proximity effect.' This scenario was investigated numerically in \cite{Hyatt}. There has thus emerged a large body of work examining MBL systems coupled to baths. However, each of these works operates in a particular corner of parameter space, and the full spectrum of possibilities has never been organized or systematically surveyed. Additionally, all of the above works have focused on situations where the system and bath are coupled {\it everywhere}, ignoring the interesting issue of a system and bath coupled only on the {\it boundary} (a partial discussion of this `codimension one' problem was recently provided in \cite{Chandran}). Nor has it been clarified under what circumstances the bath may be modeled as a source of classical noise, as is done when using the Lindblad formalism with dephasing noise, as in Ref.~\cite{Fischer, Znidaric, lesanovsky2016, everest2016role}. In this paper we provide a general theory of MBL systems coupled to baths. We begin in Sec.\ref{models} by introducing the basic models we will use to illustrate our discussion. In Sec.\ref{noise} we discuss MBL systems coupled to {\it classical} stochastic noise, where the noise serves as a minimal model for the effect of a bath. We discuss the limits in which classical noise is a good model for the bath, and when quantum effects become important. In Sec.\ref{mblbath} we consider MBL systems coupled to thermalizing quantum systems in the traditional `co-dimension zero' setting where the system and bath are of the same dimensionality and are coupled everywhere, and where the bath can be characterized by a single timescale. We point out that the behavior of the resulting coupled systems is governed by a small number of parameters. 
We organize the parameter space in terms of dimensionless ratios of parameters, and point out that the previous works \cite{NGH, GN, JNB, HNPRS, proximity, Hyatt, Fischer} can all be understood as describing particular corners of parameter space. In Sec.\ref{specialcases} we discuss the `spectral diffusion' scenario, where the correlation time in the bath is much longer than the inverse bandwidth. This situation obtains when the bath is close to a localization transition, or for situations where a bath is strongly coupled to the system but `protected' against localization by symmetry, topology, or long range interactions. We also identify a regime that had not previously been explored, and discuss the behavior therein. In Sec.\ref{boundary} we turn our attention to the `co-dimension one' case. We discuss the phenomenology of MBL systems with `boundary baths' and thermal systems with MBL layers deposited on the boundary, paying particular attention to counterintuitive near-boundary phenomena that can arise in certain regimes. We conclude in Sec.\ref{conclusions} by summarizing the general principles guiding our understanding of MBL systems coupled to baths. The appendix provides technical details referred to in the main text, and also discusses the special case of non-interacting systems and baths. \section{Model} \label{models} The setup we consider is the following: a system on a $d$-dimensional lattice with two species of particles - A and B. The particles can be spinless fermions for specificity, although we are working at high temperatures where the particle statistics are likely unimportant.
The A particles are present with density $n_A$ and have Hamiltonian \begin{equation} H_A = \sum_{\langle ij \rangle} t_A c^{\dag}_{i} c_{j} + U_A c^{\dag}_i c_i c^{\dag}_j c_j + \sum_i \epsilon^A_i c^{\dag}_i c_i \end{equation} where $t_A$ is the hopping, $U_A$ is a nearest neighbor interaction, and $\epsilon^A_i$ is a random potential, drawn from a distribution of width $\mathcal{W}$. The width of the distribution is sufficiently large that the A particles in isolation are in an MBL phase, with a localization length $\xi_A$ and an associated energy scale $ W = \mathcal{W} \exp(-s \xi_A^{d})$, where $s$ is the entropy density. We do assume we are working well away from the trivial limit $t_A = 0$, and also well away from the non-interacting limit, so that the `many body level spacing' is the only relevant quantity. Meanwhile, the B particles have $N$ flavors, are present with total density $n_B \approx N n_A$, and have Hamiltonian \begin{eqnarray} H_B &=& \sum_{\langle ij \rangle, \alpha } t_B d^{\dag}_{i, \alpha} d_{j, \alpha} + \sum_{i, \alpha} \epsilon^B_i d^{\dag}_{i,\alpha} d_{i,\alpha} \\&+& \sum_{i \alpha \beta} U_B d^{\dag}_{i, \alpha} d_{i, \beta} d^{\dag}_{i, \beta} d_{i, \alpha} +\sum_{\langle ij \rangle, \alpha \beta} U_B' d^{\dag}_{i, \alpha} d_{i, \beta} d^{\dag}_{j, \beta} d_{j, \alpha} \nonumber \end{eqnarray} where $\alpha$ and $\beta$ are flavor labels, $i, j$ label lattice sites, and $\langle ...\rangle$ denotes that the sum goes over nearest neighbor pairs of sites $i$ and $j$. However, the B particles see a weaker disorder potential, and the parameters are such that the $B$ particles in isolation are in a thermal phase. The B system in isolation has a characteristic local bandwidth $\Delta$ and a dynamical timescale $\tau$.
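For orientation, $H_A$ is easy to represent numerically at small $L$. The sketch below is our own illustration, not code from the paper: it builds the dense many-body matrix for an open chain using bit-mask Fock states; for nearest-neighbor hopping in one dimension the Jordan-Wigner signs are trivial, so they are omitted.

```python
import numpy as np

def build_HA(L=8, tA=1.0, UA=1.0, W=8.0, seed=0):
    """Dense many-body matrix for H_A on an open chain of L sites:
    nearest-neighbor hopping tA, nearest-neighbor interaction UA, and a
    random on-site potential drawn uniformly from [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, size=L)   # random potential epsilon_i
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):                       # s encodes occupations as bits
        for i in range(L):
            n_i = (s >> i) & 1
            H[s, s] += eps[i] * n_i            # disorder term
            if i < L - 1:
                n_j = (s >> (i + 1)) & 1
                H[s, s] += UA * n_i * n_j      # nearest-neighbor interaction
                if n_i != n_j:                 # hopping exchanges i and i+1
                    s2 = s ^ (1 << i) ^ (1 << (i + 1))
                    H[s, s2] += tA
    return H

H = build_HA(L=6)
print(H.shape)   # (64, 64)
```

At these sizes the full spectrum is accessible with `np.linalg.eigvalsh`, which is how localization diagnostics such as level statistics are usually computed.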
We couple the two systems together with a coupling Hamiltonian \begin{equation} H_{int} = \frac{g}{\sqrt{N}} \sum_{i, \alpha \leq N} c^{\dag}_i c_i d^{\dag}_{i, \alpha} d_{i, \alpha} \end{equation} The parameter $N$ controls the strength of back-action of the MBL system on the bath. In the limit of large $N$ (at constant $g$), back-action can be ignored; if we also take the bath to be at infinite temperature, as we discuss below, the bath can be treated as a classical external noise source. When $N$ is small and $n_B \lesssim n_A$, however, it is possible for the system to substantially alter the properties of the bath. In the latter sections of this paper, where we consider codimension one, we will restrict either the $A$ or the $B$ particles to live entirely on a `boundary layer' of the lattice. In the earlier part of the paper however (codimension zero), both $A$ and $B$ particles can live anywhere on the $d$ dimensional lattice. \section{MBL system coupled to classical noise} \label{noise} To simplify our analysis, we first discuss the case of large $N$ and infinite temperature, where back-action is negligible and the bath can therefore be regarded as a classical noise source. In this case, one can rewrite the system-bath interaction as $\sum_i c^\dagger_i c_i \phi_i(t)$, where $\phi_i(t)$ is a fluctuating classical field. Without loss of generality, we can take $\langle \phi_i \rangle = 0$, since any constant shift can be absorbed into $H_A$. In this approximation, the coupling constant $g$ can be absorbed into $\phi$, and determines the ``strength'' of the noise, $\Lambda^2 = \langle \phi^2(t) \rangle$. We note that when $W/\Lambda \ll 1$, then the noise strength is the largest energy scale in the A system. In this regime, the appropriate starting point is to account for the noise \emph{exactly} and treat the Hamiltonian of system A as a perturbation - see e.g. Ref.~\cite{alp}. 
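Throughout this section it helps to keep a concrete noise model in mind. One convenient choice, which is ours and not prescribed by the references, is an Ornstein-Uhlenbeck process: it has $\langle \phi \rangle = 0$, $\langle \phi^2 \rangle = \Lambda^2$, and a Lorentzian power spectrum of width $\sim 1/\tilde\tau$.

```python
import numpy as np

def ou_noise(Lam=0.1, tau=1.0, dt=0.01, n_steps=200_000, seed=1):
    """Ornstein-Uhlenbeck trajectory: a stand-in for phi_i(t) with
    zero mean, variance Lam**2 and correlation time tau."""
    rng = np.random.default_rng(seed)
    phi = np.empty(n_steps)
    x = 0.0
    kick = Lam * np.sqrt(2.0 * dt / tau)   # noise amplitude per step
    for n in range(n_steps):
        x += -x * dt / tau + kick * rng.standard_normal()
        phi[n] = x
    return phi

phi = ou_noise()
print(phi.var())   # close to Lam**2 = 0.01
```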
As this situation does not fall within the framework of `MBL + bath' we will not discuss it further here, restricting our attention to situations where $\Lambda \ll W$. Apart from the strength, the properties of the noise that are most relevant for our purposes are its bandwidth $\tilde \Delta$ and its correlation time $\tilde \tau$. The bandwidth is defined as the frequency-space width of the power spectrum, viz. $C_i(\omega) = \int dt e^{i\omega t} \langle \phi_i(t) \phi_i(0) \rangle$. In general, $\tilde \Delta$ and $1/\tilde \tau$ are distinct concepts, although $\tilde \Delta \agt 1/\tilde \tau$. A noise profile like that shown in Fig.\ref{linewidth}, corresponding, e.g., to a noise source with slow spectral diffusion, can have $\tilde \Delta \gg 1/\tilde \tau$ (in this situation, $\tilde \tau$ is a correlation time that is given by the decay time of the fourth-order correlator $\langle \phi^2(t) \phi^2(0) \rangle$). \begin{figure} \includegraphics[width = \columnwidth]{decayrates2} \caption{\label{noisefig} Figure illustrating the parameter space of MBL + classical noise models, which is controlled by two parameters: $\Lambda \tilde \tau$ and $W \tilde \tau$. The color code is such that brighter colors correspond to faster relaxation. We restrict ourselves to the regime $W/\Lambda \gg 1$, since $W/\Lambda \ll 1$ does not fall into the framework of MBL+bath. The parameter $\Lambda \tilde \tau$ controls the general framework within which relaxation should be understood. When $\Lambda \tilde \tau \gg 1$ (but $W \gg \Lambda$) then relaxation is best understood in terms of Landau Zener transitions. When $\Lambda \tilde \tau \ll 1$ (and $W \gg \Lambda$) then relaxation is best understood in terms of the Golden Rule. In this latter case $W \tilde \tau$ controls the nature of the Golden Rule relaxation. For $W \tilde \tau \ll 1$ relaxation is dominated by the lowest order rearrangements, whereas when $W \tilde \tau \gg 1$ then relaxation is dominated by highly collective rearrangements (i.e. is bottlenecked by the small bandwidth of the bath).
These different regimes are discussed in detail in the text. } \end{figure} In the case of $\tilde \Delta \sim 1/\tilde \tau$, there are two dimensionless parameters governing the behavior of the system: these are the quantities $W \tilde \tau$ (which determines whether the noise is ``narrow-band'' or ``broad-band''), and $\Lambda \tilde \tau$ (which determines whether the Golden Rule is applicable). Thus there are three limiting behaviors consistent with our assumption that $ \Lambda \ll W$. These are (i) $\Lambda \ll W \ll \frac{1}{\tilde \tau}$, (ii) $\Lambda \ll \frac{1}{\tilde \tau} \ll W$, and (iii) $\frac{1}{\tilde \tau} < \Lambda < W$. We discuss all three in turn (see also Fig.\ref{noisefig} for a summary). {\bf The simplest limit is that of weak coupling to rapidly fluctuating noise: $\Lambda \ll W \ll 1/\tilde\tau$}. Here, Fermi's Golden Rule is evidently applicable and suggests a transition rate $\sim \Lambda^2 \tilde \tau$. This corresponds to an essentially Markovian bath, as considered in Ref.~\cite{NGH}. For an MBL system with exponentially decaying interactions, there is a log enhancement to Fermi's Golden Rule (\cite{NGH}) such that the `true' decay rate is \begin{equation} \label{enhance} \Gamma = \Lambda^2 \tilde \tau s \xi^d \ln^d \frac{W}{\Lambda^2 \tilde \tau}. \end{equation} where $s$ is the entropy density, i.e., a measure of the fraction of degrees of freedom that are `active.' This enhancement is obtained from the following line of reasoning: for a single degree of freedom coupled to classical noise, Fermi's Golden Rule predicts a decay rate $\Lambda^2 \tilde \tau$. For $N$ strongly coupled degrees of freedom, this decay rate is multiplied by $N$, since any of the degrees of freedom can couple to the noise. For a system with interactions that fall off as $\exp(-r/\xi)$, two degrees of freedom should be considered strongly coupled if their mutual interaction $W \exp(-r/\xi)$ exceeds the decay rate.
Self consistency then yields the expression above. Parenthetically, we note that for stretched exponential interactions $W \exp(-(r/\xi)^{\alpha})$, the logarithm is raised to a power $d/\alpha$, whereas for power law interactions $W (r/\xi)^{-\beta}$ one obtains a {\it power law} enhancement \begin{equation} \Gamma = \Lambda^2 \tilde \tau s \xi^d \left(\frac{W}{\Lambda^2 \tilde \tau} \right)^{d/\beta} \end{equation} (assuming, of course, that the exponent $\beta$ is large enough to be compatible with MBL \cite{Burin, yaodipoles}). All of the above expressions are accurate only when the `enhancement factor' is large compared to one (otherwise the relaxation rate is simply $\Lambda^2\tilde \tau$), and correspond to an inverse `$T_2$' time for the system (the `$T_1$' time is simply $1/(\Lambda^2 \tilde \tau)$ \cite{NGH}). {\bf A second simple limit is that of very weak coupling to slowly fluctuating noise, i.e., $\Lambda \ll 1/\tilde \tau \ll W$}. This is the limit considered in Ref.~\cite{GN}; its key feature is that the frequency of a typical nearest-neighbor system transition greatly exceeds $1/\tilde \tau$. Again, the Golden Rule applies here, but it predicts that decay rates are strongly suppressed. The precise nature of the resulting behavior depends on the large-$\omega$ behavior of $C_i(\omega)$: if it falls off faster than $1/\omega^2$, relaxation is dominated by large-scale rearrangements and the relaxation rate is power law small in $1/\tilde \tau$, with a continuously varying exponent that depends on $W$ and temperature. If $C_i(\omega)$ falls off as $1/\omega^2$ or slower, the dominant channel remains lowest-order, but has a suppressed rate $\sim \Lambda^2 C(W)$. In both cases, however, we expect the $\Lambda$-dependence of the decay rate to be quadratic (up to the enhancement factors associated with MBL \cite{NGH}, which take the same form as discussed above, but with $\Lambda^2 \tilde \tau$ replaced by the Golden Rule relaxation rate).
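The self-consistency argument above can be made concrete in a few lines of Python (ours; the parameter values are purely illustrative, and the formula only makes sense when the enhancement factor exceeds one). Iterating $\Gamma = \Lambda^2 \tilde\tau\, s\, \xi^d \ln^d(W/\Gamma)$ to a fixed point reproduces Eq.~(\ref{enhance}) at leading order.

```python
import math

def enhanced_rate(Lam, tau, s, xi, d, W, n_iter=200):
    """Fixed-point solution of Gamma = Lam^2*tau * s*xi^d * ln^d(W/Gamma),
    starting from the bare Golden Rule rate Gamma0 = Lam^2*tau."""
    gamma0 = Lam ** 2 * tau
    gamma = gamma0
    for _ in range(n_iter):
        gamma = gamma0 * s * xi ** d * math.log(W / gamma) ** d
    return gamma

g = enhanced_rate(Lam=1e-3, tau=1.0, s=0.5, xi=2.0, d=1, W=1.0)
print(g / 1e-6)   # enhancement over the bare rate, ≈ 11.4 here
```

The iteration is a contraction whenever the enhancement factor is large, so it converges rapidly to the self-consistent rate.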
{\bf The third and final limit falling within the framework of `MBL + noise' is $1/\tilde \tau \ll \Lambda \ll W$}. This is the regime of strong, slowly fluctuating noise. In the limit of $\tilde \tau \rightarrow \infty$, the reasoning of Ref.~\cite{nonlocal} suggests that the Golden Rule breaks down, and the dominant transitions are instead Landau-Zener crossings. In this case, the lifetime of the MBL system is dominated by the transition probability of its nearest adiabatic crossing. The distance to a Landau-Zener crossing is given~\cite{gkd} by $x \sim (1/s) \log(W/\Lambda)$, where $s$ is the entropy density, and the probability of an adiabatic crossing is $P_{ad} \sim \min(1, W^2 \tilde \tau \exp(-2x/\zeta)/\Lambda) \sim \min(1, (W^2 \tilde \tau/\Lambda) (\Lambda/W)^{2/(s\zeta)})$. The resulting relaxation rate in this limit will be $\sim (1/\tilde \tau) P_{ad}$. We now briefly consider the ``spectral-diffusion'' scenario in which $\Delta \gg 1/\tilde \tau$. We specialize to the case $\Delta \agt W$. In the Golden-Rule limit $\Lambda \tilde \tau \rightarrow 0$ we find that $\tilde \tau$ is irrelevant and the rates are given by substituting $\Delta$ for $1/\tilde \tau$ in the expressions above. This is because on the (very long) timescale associated with the decay, spectral diffusion allows the spectrum of the bath to `fill in' everywhere in a region of width $\Delta$. However, a crossover takes place when the resulting Golden-Rule rate becomes comparable to $1/\tilde \tau$. When the decay rate is comparable to (or faster than) $1/\tilde \tau$, the time-averaged bandwidth $\Delta$ is irrelevant to the physics: on these timescales the drive is close to monochromatic on each site, and does not ``sample'' over the bandwidth $\Delta$. Thus one expects the decay rate to be bottlenecked by $1/\tilde \tau$ in this regime, provided that $\Lambda \ll W$.
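As a rough illustration of where this crossover lies (keeping only the lowest-order Golden Rule rate, and dropping the MBL enhancement factors; we write $\Gamma_{\rm GR}$ for this estimate), the filled-in bath spectrum of width $\Delta$ yields
\begin{equation}
\Gamma_{\rm GR} \sim \frac{\Lambda^2}{\Delta},
\end{equation}
so the bottlenecked regime $\Gamma_{\rm GR} \agt 1/\tilde \tau$ is entered once $\Lambda^2 \tilde \tau \agt \Delta$; beyond this point the decay rate saturates at $\sim 1/\tilde \tau$.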
(In the $\tilde \tau \rightarrow \infty$ limit, the physics corresponds to that of a system driven at a large number of incommensurate frequencies; whether such a system remains localized is at present an open question.) There is also an intermediate regime, when spectral diffusion is partially but not wholly effective on the timescale set by the decay: the relevant energy scale is then intermediate between $1/\tilde \tau$ and $W$, and is in fact set self-consistently by the decay rate. This regime was discussed at length in \cite{gnarxivv2}. \begin{figure} \includegraphics[width=\columnwidth]{SpectralDiffusion} \includegraphics[width = \columnwidth]{spectraldiffinrealtime} \caption{\label{linewidth}Upper panel: An illustration of the spectral diffusion scenario. When the power spectrum of the noise is evaluated over a time window of order $\tau$ it may take the form above, with a collection of narrow spectral lines (of width $1/\tau$) spread over a frequency window of width $\Delta$. When the power spectrum is evaluated over a time window $\gg \tau$, these spectral lines will move around and the power spectrum will fill in everywhere in a window of width $\Delta$. The width of the power spectrum of the noise may thus depend on the timescale on which the noise is being probed, but will be lower bounded by $1/\tau$ and upper bounded by $\Delta$. Lower panel: a sample real-time noise trajectory in the spectral-diffusion regime, showing the separation of timescales between the rapid oscillations (with period $\sim 1/\Delta$) and the longer timescale $\tau$ on which their frequency changes.} \end{figure} \subsection{Quantum effects: discreteness and back-action} In the previous discussion we took the $N \rightarrow \infty$ limit, which allowed us to make two important simplifications. First, we were able to neglect the back-action of the system on the bath; and second, we were able to ignore possible complications related to the discreteness of the bath energy levels.
These complications arise because of the following logic. Assuming the bath is in a thermal diffusive phase, entanglement spread takes place ballistically, so that on a timescale $t$ one can regard the bath as consisting of causally disconnected blocks of size $L_t \sim vt$. (In the subdiffusive Griffiths phase near the transition \cite{Kartiek, voskhusealtman, pvp}, $L_t \sim t^\alpha$ with $\alpha$ approaching zero at the MBL transition.) Thus, if a system level decays into the bath at a rate $\Gamma$, it can only entangle with a region of size $L_{1/\Gamma}$ on this timescale. For the Golden Rule (or any other continuum approximation) to be consistent, we must require that $\Gamma$ exceed the level spacing of the bath on scale $L_{1/\Gamma}$, or in other words that \begin{equation} \label{consist} \Gamma \agt \frac{\Delta v^d}{\Gamma^d} \exp[- s v^d/\Gamma^d]. \end{equation} This condition is always satisfied at sufficiently weak coupling -- although this weak-coupling requirement becomes increasingly stringent as one approaches the MBL transition. Note that Eq.~\ref{consist} is a consistency condition: it is \emph{necessary} for Golden-Rule reasoning to apply but not sufficient. Note also that when there are $N$ flavors in the bath, $s \sim N$, so that in the large $N$ limit this consistency condition is automatically satisfied. See Fig.\ref{discreteness} for an illustration of these points. When this weak-coupling approximation fails, the leading interactions between the system and the bath are ``off-resonant'' processes such as Hartree and Stark shifts. Such shifts can in general {\it enhance} the effective disorder in the system, increasing it to $\sqrt{W^2+\Lambda^2}$, in effect {\it strengthening} localization (this is akin to the `Zeno localization' phenomenon discussed in \cite{HNPRS}). Another effect that is ignored when the bath is modeled as a classical noise source is the effect of {\it back action} on the bath.
In particular, the coupling to the system causes the effective disorder strength in the bath to be increased. If the `bare' disorder strength in the bath is $w'$, then the disorder strength in the presence of coupling to the system becomes $\sqrt{(w')^2 + g^2/N}$ (adding scales in quadrature). In the limit $N\rightarrow \infty$ the back action is asymptotically weak and may be neglected, but at finite $N$, and particularly if the bath is only weakly ergodic, this increase in the effective disorder strength in the bath can be sufficient to drive the bath itself into a localized phase. This is the MBL proximity effect discussed in \cite{proximity}. \begin{figure} \includegraphics[width=\columnwidth]{1overNplot} \caption{\label{discreteness} Figure illustrating how, for not too large $N$ and not too weak noise, one can enter a regime where naive estimates of the relaxation rate violate the consistency criterion (\ref{consist}). In this regime relaxation is bottlenecked by the discreteness of the accessible states in the bath. This regime is discussed at length in Sec.\ref{specialcases} and Sec.\ref{boundary}.} \end{figure} \section{MBL+Bath: Fully quantum treatment} \label{mblbath} In this section we consider a fully quantum treatment of the `MBL+Bath' problem. The model we are considering was laid out in Sec.\ref{models}; we now enumerate the relevant energy scales and dimensionless parameters. The dynamics of the $A$ system (which would, in isolation, be localized) is characterized by a characteristic local energy scale $W$ (which can generally be associated with the disorder bandwidth). The dynamics of the $B$ system (which would, in isolation, be thermal) is characterized by a local energy bandwidth $\Delta$, a correlation time $\tau$, and an entropy density $s \propto N$. Moreover, the spread of entanglement in the isolated $B$ system would proceed as $S \sim (t/\tau)^\alpha$, where $\alpha = 1$ in diffusive systems and $\alpha < 1$ in subdiffusive Griffiths phases.
Deep in the thermal phase, $\alpha = 1$ and $\Delta \simeq 1/\tau$. In this section we restrict ourselves to the scenario $\Delta \sim 1/\tau$, deferring a discussion of the spectral diffusion scenario $\Delta \gg 1/\tau$ to the next section. It is helpful to rewrite these parameters as scale-dependent quantities. A block of linear size $L$ in the $B$ subsystem has a bandwidth $L^d \Delta$ [and correspondingly a many-body level spacing $L^d \Delta \exp(-s L^d)$], and becomes entangled on the timescale $t(L) \equiv L^{1/\alpha} \tau$. As discussed in the previous section, the relaxation rate determines a characteristic length-scale $L_\Gamma \sim (\Gamma \tau)^{-\alpha}$. This length-scale specifies a bandwidth, $s L^d_\Gamma \Delta$, as well as a level spacing \begin{equation} \label{discrete} \delta_{\Gamma} \sim \min\left(\frac{1}{\tau}, \frac{s L_{\Gamma}^d}{\tau } \exp[- s L_{\Gamma}^d]\right). \end{equation} This energy scale will be an important point of reference in our analysis. We further denote by $\delta_g$ the level spacing associated with taking the Golden Rule result for the relaxation rate, $\Gamma \sim g^2 \tau$. Finally, there is the coupling $g/\sqrt{N}$ between the two systems. Thus there are overall three independent dimensionful parameters $W, 1/\tau, g/\sqrt{N}$ and two dimensionless numbers $s$ and $N$ on which the physics may depend. Additionally there is an energy scale $\delta_{\Gamma}$ which is fully determined by the above parameters but is nonetheless important. Note that we have also assumed that the system and bath are interacting on the timescale set by the coupling. When this is not true and the system or bath is effectively non-interacting on the relevant timescale, then a different analysis must be used; this is discussed in Appendix \ref{noninteracting}. {\bf The dimensionless ratio $W / g$ controls how strongly the A system is coupled to the B system}.
When $W/g \ll 1$, the coupling to the B system is the largest energy scale in the problem, and should be diagonalized first, before incorporating the Hamiltonian $H_A$ as a perturbation. This situation does not fall within the framework of `MBL+bath' and will not be discussed here. We will restrict our attention to $W/g \gg 1$. {\bf The dimensionless ratio $\tau g /\sqrt{N}$ controls the strength of the back action on the B system}. When $\tau g/ \sqrt{N} \ll 1$ the back action on the B system is weak, and when $\tau g/\sqrt{N} \gg 1$ the back action on the B system is strong. Note that $\tau g/\sqrt{N} \ll 1$ is a necessary (but not sufficient) condition for us to be able to model the B system as a classical noise source. Note also that $\tau g/\sqrt{N} \ll 1$ automatically guarantees $\delta_g \ll 1/\tau$, whereas in the strong back action regime $\delta_g \approx 1/\tau$, and there is little entanglement spreading in the bath on the timescale $t_g \sim 1/(g^2 \tau)$. {\bf This straightaway allows us to identify $g \tau / \sqrt{N} \gg 1$ as a regime of strong back action}, where the bath cannot be modeled as a classical noise source. The coupling to the A system is the dominant energy scale for the B system, but the coupling is only a weak perturbation to the A system (since $W/g \gg 1$ by postulate). In this scenario, the bath is likely localized by an `MBL proximity effect' \cite{proximity}. We henceforth specialize to the regime $g \tau /\sqrt{N} \ll 1$, where the back action on the bath is weak. Note that $g \tau / \sqrt{N} \ll 1$ automatically ensures $\delta_g < \Gamma_g$, so the discreteness of the bath is not an issue in this regime. The behavior in this regime is controlled by the two parameters $g \tau$ and $W \tau$.
{\bf The parameter $g \tau$ controls whether the B system is slowly or rapidly fluctuating on the timescale relevant for the A system.} This parameter is obtained by comparing the Golden Rule decay rate $g^2 \tau$ to the dynamical timescale in the bath. When $g \tau \ll 1$ the B system is rapidly fluctuating on the timescale relevant for the A system. Meanwhile, {\bf the parameter $W \tau $ controls whether we are in the broad or narrow band regime}. $ W \tau \ll 1$ is the broad band regime, where the B system can easily supply enough energy to place rearrangements in the $A$ system on shell. In this limit the physics is essentially independent of $\tau$ (although not of $\delta$). Meanwhile, $ W \tau \gg1$ is the narrow band regime, where the relaxation rate (if the system delocalizes) is bottlenecked by $1/ \tau$. There are three limits compatible with our assumptions $W/g \gg1 $ and $g \tau/\sqrt{N} \ll 1$: (i) $W \tau \ll 1$ and $ g \tau \ll 1$, (ii) $W \tau \gg 1$ and $g \tau \ll 1$, and (iii) $W \tau \gg 1$ and $g \tau \gg 1$. These three distinct regimes map onto the three models of MBL + classical noise discussed in Sec.\ref{noise}, namely \begin{enumerate} \item {\bf $ W\tau \ll 1$ and $g \tau \ll 1$}. This is the regime of a broad bandwidth bath where relaxation proceeds via the Golden Rule. Back action on the bath is weak and discreteness of the bath spectrum is unimportant, so the bath can be modeled as a rapidly fluctuating classical noise source. In this limit, the bath generically delocalizes the system, and the behavior is as discussed in \cite{NGH} and \cite{JNB}, and also in Sec.\ref{noise} as the regime $\Lambda \ll W \ll 1/\tau$. \item {\bf $ W \tau \gg 1$ and $g \tau \ll 1$}. This is the regime of a good but narrow bandwidth bath that is able to place rearrangements in the A system on shell. Back action on the bath is weak and the bath can be modeled as a rapidly fluctuating classical noise source.
In this limit, the bath delocalizes the system, but the relaxation rate is bottlenecked by $1/\tau$. This is the regime that was discussed in \cite{GN}, and also in Sec.\ref{noise} as the regime $\Lambda \ll 1/\tau \ll W$. \item {\bf $ W \tau \gg 1$ and $g \tau/\sqrt{N} \ll 1 \ll g \tau$}. In this limit, the bath delocalizes the system, but the dominant relaxation mechanism involves Landau-Zener transitions rather than the Golden Rule. This is the regime that was discussed in \cite{gkd}, and also in Sec.\ref{noise} as the regime $1/\tau \ll \Lambda \ll W$. \end{enumerate} This concludes the survey of possibilities in the case when the bath is characterized by a single parameter and is not `protected' in any way. We have restricted ourselves to situations where the relevant energy scales are widely separated and the behavior can be straightforwardly deduced. Intermediate regimes where e.g. $g \tau/\sqrt{N} \approx 1$ are beyond the scope of the current analysis. Additionally, we have restricted ourselves to a regime where the A system is weakly coupled. When the A system is strongly coupled to the B system a different approach is called for, and either a localized or delocalized phase may result \cite{proximity}. \section{Baths with multiple intrinsic timescales} \label{specialcases} \label{spectral} Thus far we have assumed that the B system is fully characterized by an entanglement spreading time $\tau$ and an associated energy scale $1/\tau$. However, as has been discussed in Sec.\ref{noise}, when the timescale on which the B system is being probed is long compared to $\tau$, spectral diffusion can lead to the emergence of a second energy scale $\mathcal{E}$, which is lower bounded by $1/\tau$, upper bounded by the local bandwidth $\Delta$ of the B system, and is self-consistently determined taking into account the relaxation rate in the system.
This makes a difference if we are in the regime $ 1/\tau \ll W$ and $g \tau \ll 1$, where the bath is narrow bandwidth, rapidly fluctuating, and there is weak back action on the bath. In this case, the bottleneck on the relaxation becomes $\mathcal{E}$ instead of $1/\tau$, i.e., relaxation is faster than one would naively expect. This situation was analyzed in detail in \cite{gnarxivv2}. There are two generic situations when spectral diffusion is expected to be relevant. One is when the $B$ system is close to a localization transition. As the $B$ subsystem approaches its MBL transition, the timescale $\tau$ becomes much larger than $1/\Delta$, as the diffusion constant vanishes. In this regime, the structure of local spectral functions in the B system is as follows. A typical local operator has $\sim s$ spectral lines, in a bandwidth $\Delta$. Each spectral line has a characteristic ``width'' $\sim 1/\tau$, and decays exponentially or faster at frequencies $\gg \Delta$; this is implied, e.g., by rigorous results on absorption~\cite{ADH}. At intermediate timescales, the \emph{typical} spectral function is a Lorentzian (as one expects in the diffusive phase) or possibly a Levy-stable distribution (in the subdiffusive phase). The spectral diffusion scenario can also obtain in the regime where the bath is strongly coupled, $g \tau /\sqrt{N} \gg 1$, but the bath is protected against localization because of symmetry, topology, or the existence of sufficiently long range interactions. {\bf In the case of protected baths, there is an additional {\it intermediate coupling regime} that can arise that has not been hitherto discussed. This is a regime where $g \tau/\sqrt{N} \gg 1$} (but $W/g \gg 1$). Even though this is a regime of strong back action on the bath, if the bath is protected against localization then the `proximity effect' is evaded.
We can however enter a regime where $\Gamma_0 < \delta_{\Gamma_0}$, where $\Gamma_0$ is the relaxation rate determined from either the broad band Golden Rule, narrow band Golden Rule, or Landau-Zener formulae, according to the relative sizes of $W$, $N$, $\tau$ and $g$. (This regime disappears in the large $N$ limit, since $s \propto N$.) In this regime, discreteness of the spectrum of the B system is important, and the bath thus cannot be modeled as a classical noise source. Instead, the B system enables relaxation in the A system by going to {\it high orders} in the coupling to the bath or by coupling to {\it highly collective rearrangements} in the system, which have a correspondingly smaller matrix element and relaxation rate. This situation is analyzed in detail in Appendix \ref{bulk}. A key result is that in this regime the relaxation rate is bottlenecked by the discreteness of the bath spectrum and is effectively {\it independent} of $g$. The analysis in \cite{BanerjeeAltman} was in a similar regime, except that the analysis there was developed for a non-interacting system (such that the collective rearrangements were absent) and for a non-interacting bath (such that $\delta$ scales differently with $\Gamma$ from Eq.\ref{discrete}). We now discuss the various types of `protected baths' that can arise, and comment briefly on each. \subsection{Topologically protected baths} In Sec.\ref{mblbath} we assumed that the bath {\it could} get localized in a strong back action regime. However, if the bath is topologically protected against localization, then this conclusion must be revisited. Even though such a problem might naively be in the regime of strong back action on the bath (and weak coupling for the system), the end result must be delocalization of the composite. This scenario is relevant for e.g. the analysis in Ref.\cite{QHMBL}, where the bath in question was the (topologically protected) critically delocalized state at the center of a Landau level.
Of course, the analysis in Ref.\cite{QHMBL} also differs in that the `bath' and system were not cleanly separated, as in Sec.\ref{models}, but were different parts of the same single particle spectrum. \subsection{Symmetry protected baths} If the bath is protected against localization by a symmetry, then delocalization of the composite system must again result, even in the strong back action regime. One realization of such a scenario is when the bath consists of Goldstone modes. This scenario was discussed in e.g. Ref.\cite{BanerjeeAltman}, where the bath in question was the phonons in a Dyson chain. Another realization of a protected bath involves a bath made out of spin degrees of freedom for which the Hamiltonian has SU(2) symmetry, since such a system is also protected against localization \cite{vpp}. This latter realization was discussed in \cite{spinbath} in the context of the spin incoherent Luttinger liquid, where the charge and thermal transport properties in the presence of this spin bath were deduced. \subsection{Baths protected by long range interactions} Systems with long range interactions that decay as power laws in space may support percolating networks of resonances \cite{Burin, yaodipoles} that may act as a heat bath for the problem, triggering delocalization. If this does happen, then the heat bath in question will `live' on a sparse network of sites, and will be exceedingly narrow band. Transport in the presence of such a bath has unusual properties that have been explored in the low temperature limit in Ref.\cite{gpbgsm}. \section{MBL+Bath in codimension one} \label{boundary} The preceding discussion was for systems and baths coupled together with codimension zero, i.e., the system and bath have the same dimensionality and are coupled everywhere in space. In this section we consider codimension one: a layer of thermal phase deposited on the surface of an MBL bulk, and a layer of MBL phase deposited on a thermal bulk.
Note that this setup only makes sense if the bulk is in dimension $d > 1$, so that the boundary can itself be thermodynamically large (and hence capable of supporting either an MBL or a thermal phase). \subsection{MBL Bulk with thermal boundary} In this subsection we consider the behavior of an MBL system where a thermalizing quantum system is placed on its boundary. Such a situation can be modeled within the framework outlined in Sec.\ref{models}, if the B particles are restricted to living on the boundary of the lattice on which the A particles live. What sort of behavior should we expect from such a setup? We note that a similar setup was analyzed in Ref.\cite{Chandran}, in the regime where the boundary bath was good (broad bandwidth, weak back action, rapidly fluctuating), and where the overall geometry was that of a $d$-dimensional cubic lattice of linear size $L$. We consider the generalized version of this problem, where we do not restrict the nature of the boundary bath or the system geometry. One possibility is that the `thermal' B system gets localized by the disorder coming from its coupling to the A system. Such a scenario may play out if the bath is in the `strong back action' regime $g \tau / \sqrt{N} \gg 1$, where $g$ is the coupling on the boundary and $\tau$ is the entanglement time for the bath. In this case, one simply has an MBL system with some extra localized degrees of freedom on the boundary. A more interesting possibility is that the boundary is in the weak coupling regime $g \tau/ \sqrt{N} \ll 1$, such that the B particles remain in a thermal phase, where they could in principle act as a heat bath for the A system. How should one then understand relaxation in the A system? We assume in the following that the A system is characterized by a single localization length $\xi_A$ (which we henceforth denote simply by $\xi$), ignoring the possible complications of multiple localization lengths \cite{HNO}.
Recall that modes deep in the MBL bulk will be well localized, with exponentially small weight on the boundary. The matrix elements for coupling to the bath will thus fall off exponentially with distance from the boundary, $g(r) \sim g \exp(-r/\xi)$. Meanwhile the parameters $W$, $\tau$, $s$ and $N$ are defined as previously, but $\delta$ is defined as \begin{equation} \label{boundarydiscrete} \delta_{\Gamma} \sim \min\left(\frac{1}{\tau}, \frac{s L_{\Gamma}^{d-1}}{\tau } \exp[- s L_{\Gamma}^{d-1}]\right), \end{equation} where, recall, $L_{\Gamma} \sim (\Gamma(r) \tau)^{-\alpha}$. In the Golden Rule regime $\Gamma(r) \sim g^2(r) \tau$ and $L_{\Gamma} \sim \exp(2\alpha r/\xi)$, such that $\delta_{\Gamma}$ decays as a {\it double exponential} function of $r$ as we go into the bulk. \footnote{One could argue that perhaps the degrees of freedom in the MBL system `on the way' to the boundary should be included in the size of the effective bath. However, this simply changes $L_{\Gamma}^{d-1}$ to $L_{\Gamma}^{d-1} \log L_{\Gamma}$ in the above formula, and does not significantly alter the behavior. } We note that back action will always be weak, since $g \tau / \sqrt{N} \ll 1$ at the boundary and since $g$ decays exponentially going into the bulk. We note also that $W/g \gg 1$ is ensured deep in the bulk, so the behavior deep in the bulk is guaranteed to fall into the `MBL + bath' formalism. Finally, $g \tau \ll 1$ deep in the bulk, so that the bath is rapidly fluctuating, and $\delta \ll g \ll W, 1/\tau$. Deep in the bulk we are thus inevitably in the regime where the B system can be modeled as a rapidly fluctuating classical noise source, and the only question remaining is whether this noise source is narrow band or broad band. The latter is determined by whether $W \tau \ll 1$ or $W\tau \gg 1$, and the relaxation rate is obtained from Fermi's Golden Rule, substituting $g(r)$ for $g$ in the expressions of Sec.\ref{noise}. Note that the relaxation rate will decline {\it exponentially} with distance from the boundary.
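To exhibit the double-exponential decay explicitly: combining $\Gamma(r) \sim g^2(r) \tau = g^2 \tau\, e^{-2r/\xi}$ with $L_{\Gamma} \sim (\Gamma(r) \tau)^{-\alpha}$ gives $L_{\Gamma} \sim (g^2 \tau^2)^{-\alpha}\, e^{2\alpha r/\xi}$, and hence, dropping prefactors,
\begin{equation}
\delta_{\Gamma}(r) \sim \frac{s L_{\Gamma}^{d-1}}{\tau} \exp\left[ - s\, c\, e^{2\alpha (d-1) r/\xi} \right], \qquad c \sim (g^2 \tau^2)^{-\alpha(d-1)},
\end{equation}
where $c$ simply collects the $r$-independent factors.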
Ref.\cite{Chandran} discussed the case when the boundary is {\it finite}, such that $L_{\Gamma}$ saturates to a maximum lengthscale $L$. In this case, $\delta$ saturates to a minimum value $\delta_{min} = \frac{L^{d-1}}{\tau} \exp(-s L^{d-1})$. There then emerges a depth $R$ beyond which $g(r>R) < \delta_{min}$, such that the Golden Rule becomes inapplicable and the relaxation rate drops to zero. This depth may be estimated as \begin{equation} R \approx \xi s L^{d-1} \end{equation} At depths greater than $R$, the MBL system is effectively decoupled from the bath (no relaxation). Note however that this critical depth $R$ diverges as $L \rightarrow \infty$. This concludes our discussion of the dynamics deep in the bulk. We now consider the behavior of the $A$ system {\it near} the boundary. We assume that $W/g \gg 1$ even at the boundary, so that the A system can be described in the MBL + bath framework everywhere. If the boundary also satisfies $g \tau/\sqrt{N} \ll 1$, such that the back action on the bath is weak, then it can be readily verified that $\delta_{\Gamma} < \Gamma$ everywhere, so that the discreteness of the bath is also unimportant. In this case the effect of the bath can be modeled everywhere as classical noise. There are the usual three cases: \begin{enumerate} \item If $W \tau \ll 1$ then the bath can be everywhere modeled as a classical rapidly fluctuating broad bandwidth noise source, and the relaxation rate is given by the Golden Rule (with log enhancements as in Sec.\ref{noise} and with a matrix element that decays as $\exp(-r/\xi)$). \item If $W \tau \gg 1$ and $g \tau \ll 1$ then the relaxation rate is given by the Golden Rule as before, but bottlenecked by $1/\tau$, and with a matrix element that decays as $\exp(-r/\xi)$. \item If $W\tau \gg 1$ and $g(0) \tau/\sqrt{N}\ll 1\ll g(0)\tau $, then {\bf there is a crossover behavior as a function of depth}. 
Deep in the bulk $g(r) \tau \ll 1$, such that the noise source is rapidly fluctuating and relaxation is described by the Golden Rule, whereas close to the boundary $g(r) \tau \gg 1$, such that the noise source is slowly fluctuating and relaxation is described in terms of Landau-Zener transitions. The crossover between the two pictures is at a radius $r_c \approx \xi \ln (g \tau)$. In both regimes the relaxation rate decreases exponentially with distance from the boundary, but with different decay lengths. \end{enumerate} {\bf A different behavior can arise when the bath is protected against localization, and we are in the strong back action regime $g(0) \tau /\sqrt{N} \gg 1$}. Even though the bath is protected against localization, the relaxation rate obtained from the Golden Rule (or Landau-Zener) calculation can violate Eq.\ref{consist} near the boundary, such that in the near-boundary regime the discreteness of the bath becomes important. In this event the bath cannot be modeled as a classical noise source close to the boundary (although it can be so modeled deep in the bulk). Relaxation close to the boundary then requires going to high orders in the coupling $g$, or making use of highly collective rearrangements that couple only weakly to the boundary bath. This scenario is analyzed in detail in Appendix \ref{boundaryapp}, and leads to a relaxation rate that is bottlenecked by the discreteness of the bath spectrum, and is not only independent of $g$ but saturates to a constant for distances $r < r_c$ from the boundary. For distances $r > r_c$, of course, the relaxation rate continues to decay exponentially with distance from the boundary. The final possibility (which we do not discuss, since it does not fall into the MBL+bath framework) is that close to the boundary the system is strongly coupled to the bath, $W/g \ll 1$, and the MBL+bath framework only starts to apply at depths greater than $\xi \ln (W/g)$.
This concludes our survey of MBL systems coupled to boundary baths. \subsection{Thermal phase with MBL boundary} We now consider the situation where the B particles live on a $d$-dimensional lattice with $d > 1$, and the $A$ particles are restricted to the $(d-1)$-dimensional boundary. If $g\tau/\sqrt{N} \ll 1$ at the boundary (weak back action at the boundary), then the MBL system simply gets delocalized, and falls into the appropriate class discussed in Sec.\ref{mblbath} according to the boundary values of the relevant parameters. A more interesting regime is when $g \tau / \sqrt{N} \gg 1$ at the boundary but $W/g \gg 1$. This further implies that at the boundary $W \gg 1/\tau$. This situation still falls into the MBL + bath framework, but if the B system consisted of only the boundary layer, the end result would be that the bath ends up localized by the `proximity effect.' However, the B system lives in a higher dimensional space than the A system, and the coupling to the A system will fall off as we go deep into the bulk, such that sufficiently far into the bulk we will be in the weak back action regime, and will end up with a bath capable of delocalizing the MBL boundary. We are assuming here that the B system {\it does} see disorder, so that the correlation length $r_0$ in the B system is finite. If we assume that the coupling falls off as $g (r/r_0)^{-\chi}$ with distance from the boundary, then the shell of depth $r_0 (g \tau)^{1/\chi} > r > r_0 (g \tau /\sqrt{N})^{1/\chi}$ will constitute a slowly fluctuating bath that will lead to relaxation in the MBL boundary via Landau-Zener transitions, with coupling $\Lambda \approx \sqrt{N}/\tau$, whereas the region at depth $r > r_0 (g \tau)^{1/\chi}$ will constitute a rapidly fluctuating bath that will lead to relaxation via the Golden Rule, with coupling $\Lambda \approx 1/\tau$.
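The shell boundaries quoted above follow directly from the coupling profile (writing $r_{\rm in}$ and $r_{\rm out}$ for the inner and outer radii of the slowly fluctuating shell): setting $g(r) \tau/\sqrt{N} = 1$ and $g(r) \tau = 1$ with $g(r) = g (r/r_0)^{-\chi}$ gives
\begin{equation}
r_{\rm in} = r_0 \left( \frac{g \tau}{\sqrt{N}} \right)^{1/\chi}, \qquad r_{\rm out} = r_0 (g \tau)^{1/\chi},
\end{equation}
and the local coupling at these two depths is $g(r_{\rm in}) = \sqrt{N}/\tau$ and $g(r_{\rm out}) = 1/\tau$, which are the two effective couplings $\Lambda$ quoted above.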
These two channels should be added in quadrature to determine the relaxation rate in the A system, which will be power-law small in $1/\tau$, but will be independent of $g$. We conclude by speculating as to the possibilities for a stronger result. If the `bath' were extremely weak, right on the cusp of a localization transition, could the application of an MBL boundary layer trigger a `localization avalanche' whereby the whole system gets localized? Such a scenario may play out as follows: the coupling to the A system causes the boundary layer to become frozen. The next layer then sees additional disorder coming from the boundary layer and freezes in turn, itself constituting a source of disorder for the third layer, and so on. If the localization transition is indeed second order (as is widely believed), then this scenario seems highly unlikely, since the application of an MBL boundary layer will not alter the disorder strength in the bulk, and the disorder strength is (presumably) the parameter driving the transition. However, if the MBL transition were {\it first} order, so that there was a co-existence regime, and the B system happened to be in the thermal phase in the co-existence regime, then indeed applying an appropriate boundary condition could trigger a localization avalanche causing the entire bulk to localize. We note too that recent numerical results \cite{Luitz} appear to support a scenario where the localization transition has at least some first order character, in that the local entropy density appears to show a discontinuity. Since we are not aware of any arguments conclusively establishing that the MBL transition must be second order, we cannot exclude the possibility of such a `localization avalanche,' which would seem to be an interesting topic for future work. \section{Conclusions and open questions} \label{conclusions} We have discussed the behavior of an MBL system weakly coupled to a bath.
When the back action on the bath is weak and the discreteness of the bath levels is unimportant, the bath can be modeled as a classical noise source. In Sec.\ref{noise} we discussed the behavior of an MBL system coupled to a classical noise source, and highlighted three different regimes of relaxation. In Sec.\ref{mblbath} we introduced a fully quantum treatment of an MBL system weakly coupled to a heat bath. We identified a small number of parameters that control the physics, and discussed limiting regimes where these parameters were widely separated. There turned out to be four distinct regimes: three corresponding to the three distinct relaxation regimes for an MBL system subjected to classical noise, and an intrinsically quantum regime of {\it strong back action}, where the bath can itself get localized by the disorder coming from the MBL system. In Sec.\ref{specialcases} we discussed the spectral diffusion scenario where the inverse correlation time of the bath is much less than the local bandwidth. This scenario obtains in baths close to the localization transition and in baths protected against localization by topological or symmetry considerations, or by long-range interactions. In this case there is also an additional intrinsically quantum regime that can arise. This is a regime where the discreteness of the bath energy levels is important, such that the dominant relaxation mechanisms involve high-order coupling or highly collective rearrangements, and where the relaxation rate is independent of the coupling $g$. Finally, in Sec.\ref{boundary} we discussed the behavior of the `codimension one' problem, where the system and bath do not have the same dimensionality. We discussed first the case of an MBL system with a boundary bath. When the bath is unprotected and in the weak back action regime there arise the usual three distinct regimes of effectively classical noise.
However, one of these three cases involves a crossover between different models of noise as a function of distance from the boundary. When the bath is protected against localization and in the strong back action regime there arises an intrinsically quantum regime where the discreteness of the bath matters close to the boundary and leads to a relaxation rate that is depth-independent in the near-boundary region. We also discussed the case of a thermal system with an MBL boundary, and speculated as to the possibilities for a localization avalanche. We trust that the framework introduced in this paper will prove useful for future investigations of MBL systems coupled to thermalizing environments. {\bf Acknowledgements:} We acknowledge useful conversations with P.W. Anderson, Ravin Bhatt, Anushya Chandran, Eugene Demler, Sonika Johri, Vedika Khemani, Michael Knap, Andrew Potter, Antonello Scardicchio, S.L. Sondhi and especially David A. Huse.
\section{THE THERMAL NON-EQUILIBRIUM (TNE) DEBATE\label{sec:_tne_debate}} The very existence of solar and stellar coronae remains one of the great problems in astrophysics. In particular, the heating mechanism(s) capable of keeping the plasma confined in magnetic loops at temperatures of several million degrees still resist comprehensive understanding \citep{Klimchuk2015}. Despite considerable effort and progress \citep[for a review, see][]{Reale2014}, we do not know for sure where the heating occurs or how it evolves with time. In the Parker field-line tangling scenario \citep{Parker1972, Parker1988}, one can expect the heating to be highly stratified, i.e. concentrated at the footpoints of the loops \citep{Rappazzo2007}. If in addition it is quasi-steady, i.e. varying slowly (or impulsively with a high repetition rate) compared to the cooling time, numerical simulations consistently show that the loops are susceptible to entering a regime of thermal non-equilibrium \citep[e.g.][]{Kuin1982, Antiochos1991, Karpen2001, Muller2003, Mok2008, Klimchuk2010, Lionello2013}. For specific combinations of the heating conditions and geometry, the footpoint heating drives evaporative upflows, hot plasma accumulates in the loop, and as it cools, a condensation grows quickly near the apex, falls down one leg, hits the chromosphere, and the cycle repeats with periods from several tens of minutes to several hours. This process is thought to play a significant role in the formation of prominences \citep{Antiochos1991, Karpen2006} and coronal rain \citep{Muller2003, Muller2004, Muller2005, Antolin2010, Antolin2015}. However, \cite{Klimchuk2010} argued that TNE models fail to reproduce simultaneously the key observational properties of coronal loops, thus discarding the possibility that highly stratified, quasi-constant heating could be the norm in active regions. 
But other studies \citep{Mikic2013, Lionello2013, Lionello2016, Winebarger2014} have shown that the inconsistencies with observations can be resolved if the geometry is more complex than the constant cross-section, semicircular vertical loops used by \cite{Klimchuk2010}. In particular, if the loops are expanding and asymmetric, the condensations do not fully develop. The plasma thus remains at coronal temperatures and densities, which results in unstructured intensity profiles, as observed in the extreme ultraviolet (EUV). Still, this does not prove that quasi-steady stratified heating is commonplace, because outside TNE conditions it leads to hydrostatic solutions that, at least for monolithic loops, seem incompatible with several observational constraints~\citep{Reale2014}. Other scenarios involving more sporadic heating, such as the nanoflare storms, have been developed to resolve these issues \citep{Klimchuk2009, Viall2013}. Surprisingly, in this debate, the most striking characteristic of TNE conditions -- the predicted periodicity of the temperature and density and hence of the plasma emissivity -- has not yet been searched for in the observations in order to test the models. We processed more than 13 years of observations at 19.5~nm with the Extreme-ultraviolet Imaging Telescope \citep[EIT;][]{Delaboudiniere1995} of the {\it Solar and Heliospheric Observatory} \citep[{\it SOHO};][]{Domingo1995} and discovered hundreds of long-period (3-16 hr) pulsation events in coronal loops, some lasting for up to six days \citep{Auchere2014}. \citet{Froment2015} analyzed in detail three other events observed in the six coronal bands of the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} on the {\it Solar Dynamics Observatory} \citep[{\it SDO};][]{Pesnell2012}. 
The differential emission measure (DEM) tools developed by \citet{Guennou2012a, Guennou2012b, Guennou2013} revealed periodic variations of the total emission measure and DEM peak temperature that resemble those in the TNE simulations of \citet{Mikic2013}. However, despite the similarities, we cannot yet unambiguously conclude that the observed pulsations are caused by TNE without additional evidence, such as spectroscopic observations of the predicted outflows. For example, while no magnetohydrodynamic mode can explain periods of several hours in coronal loops \citep{Auchere2014}, it is difficult to exclude the possibility of slow beats resulting from the coupling between adjacent loops of similar eigenfrequencies. But in this paper, we demonstrate that the power spectral densities (PSDs) of the time series in which \citet{Froment2015} detected pulsations do not have the characteristics expected from waves or damped waves, but instead those of signals known as random pulse trains. This reinforces the idea that the observed system undergoes a cyclic evolution in a constantly varying environment, as expected if TNE is at play in coronal loops. \section{WAVES {\it VS.} PULSE TRAINS\label{sec:waves_vs_pulses}} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure_aia_mid_frames.eps} \caption{Left: middle frame of the one-minute cadence, 6.4 day-long, AIA 33.5~nm sequence corresponding to Case 1 of \citet{Froment2015}. Right: middle frame of the one-minute cadence, 4 day-long, AIA 17.1~nm sequence corresponding to Case 3 of \citet{Froment2015}. The regions where excess Fourier power was automatically detected (white contours) delineate two bundles of loops, one in the outskirts of NOAA AR 11499 (left), the other one in the core of NOAA AR 11268 (right).
Figures~\ref{fig:case_1_335} and~\ref{fig:case_3_171} present the time series obtained by averaging the intensity over the black boxes, along with their Fourier and wavelet power spectra.} \label{fig:aia_mid_frames} \end{figure*} The Fourier power spectra of time series of coronal intensity commonly exhibit an overall power-law behavior caused by a background of stochastic plasma processes \citep{Gruber2011, Auchere2014, Inglis2015}. A hump superimposed on this basic shape is also frequently observed \citep{Ireland2015, Auchere2016}. Two examples of such power spectra are given in the rightmost panels of Figures~\ref{fig:case_1_335} and~\ref{fig:case_3_171} (gray histograms), the corresponding time series being shown in the top left panels. The latter have been obtained from two sequences of {\it SDO}/AIA\ images (Figure~\ref{fig:aia_mid_frames}) by averaging the intensity over the regions selected by \citet{Froment2015} for plasma diagnostics (black boxes). These time series and their spectral analysis are described in detail in \S~\ref{sec:aia_detection}. There are two fundamentally different possibilities to explain the humps in the power spectra. First, they can be due to periodic damped oscillations: from the convolution theorem, the power spectrum of a damped wave is obtained from the convolution of the Fourier transform of the damping function with that of the wave. For example, the power spectrum of an exponentially decaying sine is a Lorentzian centered on the sine frequency. While the resulting hump represents excess power compared to a background power law, its presence is not sufficient in itself to infer the presence of a periodic phenomenon.
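The damped-wave case can be illustrated with a few lines of code (a minimal sketch, not from the paper; all numerical values are arbitrary): the periodogram of an exponentially decaying sine indeed shows a hump centered on the sine frequency, with a width set by the damping time.

```python
import numpy as np

# Exponentially damped sine: by the convolution theorem its PSD is a
# Lorentzian centered on the sine frequency f0, of width ~1/(damping time).
dt = 1.0        # sampling step (arbitrary units)
n = 4096        # number of samples
f0 = 0.05       # sine frequency, well below the Nyquist frequency 0.5
tau = 100.0     # damping time, several periods long
t = np.arange(n) * dt
signal = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

# One-sided periodogram
psd = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(n, d=dt)

f_peak = freq[np.argmax(psd)]   # hump centered near f0, not at zero
print(f_peak)
```

Running the same periodogram on a single non-oscillatory pulse instead yields a hump centered at zero frequency, which is the crux of the distinction drawn in this section.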
Indeed, the second possibility is that the hump is due to the presence in the time series of one or a few pulses\footnote{The term {\it pulse} is used throughout the paper to describe a rapid, transient increase in intensity followed by a rapid return to the original value.} of similar widths, even if they are not periodic, as in the reference region of \citet{Auchere2016}. For example, the power spectrum of a single exponential pulse is also a Lorentzian, but unlike the case for a damped wave -- and this is a major difference -- the hump is now centered at zero frequency. In all the cases that we have studied \citep[and also in the moss regions examined by][]{Ireland2015}, the width of the hump is comparable to its central frequency. For example, fitting the power spectrum of the rightmost panel of Figure~\ref{fig:case_1_335} with the sum of a power law and a Gaussian without forcing the latter to be centered at zero, yields a central frequency of 26~$\mu$Hz\ and a full width at $1/e$ of 31~$\mu$Hz. If interpreted as a damped wave, this would correspond to a damping time shorter than the period itself. In addition, as shown in \S~\ref{sec:aia_detection}, a Gaussian centered at zero is just as valid a fit to this PSD. Therefore, the best explanation for the presence of several peaks (more than 15 in the time series of Figures~\ref{fig:case_1_335} and~\ref{fig:case_3_171}) is that the physical phenomenon at their origin repeats itself, potentially with different initial and boundary conditions each time. This leads to the idea that the time series should in fact be interpreted as a periodic succession of pulses of random amplitudes. \section{PSDs OF RANDOM-AMPLITUDE PULSE TRAINS\label{sec:pulse_trains_psds}} \begin{figure}[t] \centering \includegraphics[width=0.456\textwidth]{figure_theoretical_pulse_train.eps} \caption{Top: sample random pulse trains for rounded ($\kappa=50$, blue) and pointed ($\kappa=5$, red) pulses defined by Equation~\ref{eq:kappa_pulse}.
Bottom: corresponding expected (red and blue) and actual (lighter shades of red and blue) PSDs. The $\chi_3^2$ distribution of amplitudes creates a continuum that is a scaled version of the PSD of the elementary pulse. The contrast between the spectral lines and the continuum depends only on the number of pulses and on the statistical distribution of their amplitudes.} \label{fig:sim_pulse_train} \end{figure} A periodic succession of pulses of random amplitudes, called a random pulse train, can be expressed as \begin{equation} f(t) = \sum_{m=0}^{M-1} a_m p(t - mT), \label{eq:pulse_train_from_annex} \end{equation} \noindent where $t$ is the time and $T$ is the repetition period of $M$ copies of an elementary pulse $p(t)$ with random amplitudes $a_m$. The corresponding expected PSD is given by (see the Appendix and references therein) \begin{equation} \Psi(\nu) = \abs{P(\nu)}^2\left\lbrace M\sigma^2 + \mu^2\left(\frac{\sin(\pi\nu TM)}{\sin(\pi\nu T)}\right)^2\right\rbrace \label{eq:random_pulse_train_power_from_annex} \end{equation} \noindent where $\abs{P(\nu)}^2$ is the power spectrum of the elementary pulse $p(t)$, and $\sigma$ and $\mu$ are respectively the standard deviation and the mean of the statistical distribution of the amplitudes of the pulses. The power spectrum of the pulse train is thus the power spectrum of the elementary pulse modulated by a function periodically peaked in frequency with period $1/T$. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure_wavelet_cas_1_light_curve_clara_335.eps} \caption{The time series corresponding to the left panel of Figure~\ref{fig:aia_mid_frames} is shown in the top left panel. Its Fourier and time-averaged wavelet power spectra (rightmost panel, gray histograms and black curves) exhibit a broad hump superimposed on a power law leveling off at high frequencies. 
The 26.3~$\sigma$ peak of Fourier power labeled h1 at 30~$\mu$Hz\ stands out in the whitened spectra (middle panel) and has a probability of random occurrence of $1.7\times 10^{-8}$. The corresponding Fourier component is overplotted on the time series in magenta. The whitened wavelet spectrum (left panel) shows a matching strip of significant power lasting for most of the sequence. The elementary pulse reconstructed by inverse Fourier transform of the kappa function component (dashed red) of the mean power fit (solid red) resembles the shape of the pulsations in the light curve. Power within the cone of influence of the Morlet wavelet is shown in lighter shades of gray.} \label{fig:case_1_335} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure_wavelet_cas_3_light_curve_12_171.eps} \caption{Same as Figure~\ref{fig:case_1_335} but for the time series corresponding to the right panel of Figure~\ref{fig:aia_mid_frames}. } \label{fig:case_3_171} \end{figure*} \cite{Auchere2016} found that the PSDs of many coronal time series can be represented by the following model: \begin{equation} \sigma(\nu)=A\nu^s+B\text{K}_{\rho,\kappa}(\nu)+C, \label{eq:kappa_model} \end{equation} \noindent where the first term is a power law of slope $s$ representing the background power, the second term is a kappa function representing the hump, and the third term is a constant representing high-frequency white noise. In order to illustrate the properties of the power spectra given by Equation~\ref{eq:random_pulse_train_power_from_annex}, we thus consider trains whose elementary pulses $p(t)$ have power spectra proportional to the kappa function term of the above power model, \begin{equation} \abs{P(\nu)}^2=\text{K}_{\rho,\kappa}(\nu)=\left(1+\frac{\nu^2}{\kappa \rho^2}\right)^{-\frac{\kappa+1}{2}}. 
\label{eq:kappa_psd} \end{equation} \noindent The analytic expression of these pulses is obtained by taking the inverse Fourier transform of the square root of the kappa function:\footnote{The expression was obtained with the Mathematica software.} \begin{equation} p(t) =\frac{2\pi^{\frac{\kappa+1}{4}}(\kappa\rho^2)^{\frac{\kappa+3}{8}}}{\Gamma\left(\frac{\kappa+1}{4}\right)} \abs{t}^{\frac{\kappa-1}{4}}K_{\frac{\kappa-1}{4}}\left(2\pi\rho\sqrt{\kappa}\abs{t}\right) \label{eq:kappa_pulse} \end{equation} \noindent where $K_\alpha(x)$ denotes the modified Bessel function of the second kind and $\Gamma(x)$ denotes the gamma function. The initial fraction ensures normalization to unity. For a given width $\rho$, the pulses tend to a Gaussian as $\kappa$ tends to infinity, and they become increasingly peaked as $\kappa$ decreases. For $\kappa=3$ the pulse is a double-exponential. Two sample pulse trains, normalized to their standard deviation $\sigma_0$, are plotted in the top panel of Figure~\ref{fig:sim_pulse_train} as a function of $t/T$, in blue for rounded (nearly Gaussian, $\kappa=50$) pulses and in red for pointed (nearly double-exponential, $\kappa=5$) pulses. Apart from the pulse shape, all parameters are equal: $M=17$ pulses equally spaced by $T$, of width\footnote{Corresponding to a full width at half maximum of $\approx T/2$ for $\kappa=50$ and $\approx T/3$ for $\kappa=5$.} $\rho=T/120$, and of amplitudes drawn -- as an example and to ensure positiveness -- from a chi-squared distribution of degree 3, which has mean $\mu=3$ and variance $\sigma^2=6$. 
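As an aside, Equation~\ref{eq:kappa_pulse} is easy to evaluate numerically (here with SciPy's modified Bessel function, as an illustrative sketch rather than the authors' code). Since $\abs{P(0)}=\text{K}_{\rho,\kappa}(0)^{1/2}=1$, the normalized pulse should integrate to unity, which provides a quick check of the prefactor:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gamma, kv

def kappa_pulse(t, rho, kappa):
    """Elementary pulse whose power spectrum is the kappa function
    (1 + nu^2/(kappa*rho^2))**(-(kappa+1)/2)."""
    a = (kappa - 1.0) / 4.0
    norm = (2.0 * np.pi ** ((kappa + 1.0) / 4.0)
            * (kappa * rho ** 2) ** ((kappa + 3.0) / 8.0)
            / gamma((kappa + 1.0) / 4.0))
    x = np.abs(t)
    return norm * x ** a * kv(a, 2.0 * np.pi * rho * np.sqrt(kappa) * x)

# The pulse is even: integrate over t > 0 and double. The grid starts
# just off t = 0, where the x**a prefactor cancels the Bessel divergence.
t = np.linspace(1e-6, 5.0, 200001)
for kappa in (5.0, 50.0):
    area = 2.0 * trapezoid(kappa_pulse(t, 1.0, kappa), t)
    print(kappa, area)   # ~1 in both cases
```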
The bottom panel shows, with the same color coding, the corresponding expected PSDs\footnote{Note that any distribution of amplitudes with the same coefficient of variation $\sigma/\mu$ would result in identical expected PSDs (see the Appendix and Equation~\ref{eq:random_pulse_train_power}).} computed using Equation~\ref{eq:random_pulse_train_power_from_annex} after substitution of $\abs{P(\nu)}^2$ by its expression given in Equation~\ref{eq:kappa_psd}. They are the {\it average} PSDs that one would expect from an infinite number of realizations of the amplitudes, {\it not} the PSDs of the curves of the top panel.\footnote{The analytic expressions for the PSDs of the two particular pulse trains of the top panel of Figure~\ref{fig:sim_pulse_train} can be derived from the Fourier transform of Equation~\ref{eq:kappa_pulse} and the time-shifting theorem (Equation~\ref{eq:generic_pulse_train_power}). They are represented in light shades of blue and red, but they are of no practical use.} Since the number of pulses, the period, and the distribution of amplitudes are identical in both cases, so is the periodic modulation term between brackets in Equation~\ref{eq:random_pulse_train_power_from_annex}. Only the PSDs of the elementary pulses -- which correspond to the lower envelopes -- are different. For nearly Gaussian pulses, $\abs{P(\nu)}^2$ is also nearly Gaussian, while for nearly double-exponential pulses, $\abs{P(\nu)}^2$ has an extended high-frequency power-law wing of slope -6. Interestingly, while the harmonic peaks are due to the periodicity of the pulses, the continuum $M\sigma^2\abs{P(\nu)}^2$ arises from the randomness of their amplitudes ($\sigma\neq 0$). The probability density function (PDF) of the amplitudes of an observed pulse train cannot be completely determined from the PSD because only the mean and variance appear in Equation~\ref{eq:random_pulse_train_power_from_annex}. 
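The expected PSD of Equation~\ref{eq:random_pulse_train_power_from_annex} can be verified by averaging periodograms over many realizations of the amplitudes. The sketch below is our own illustration, not the authors' code: it uses a discrete periodic grid with circular shifts (so that the discrete shift theorem holds exactly), a Gaussian elementary pulse instead of the kappa pulse, and $M=16$ pulses with $\chi^2_3$ amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024          # samples per series
S = 64            # pulse spacing T, in samples
M = N // S        # M = 16 pulses fill the periodic grid
R = 4000          # realizations to average over

# Elementary pulse (Gaussian), circularly shifted so it is centered at 0
n = np.arange(N)
p = np.exp(-0.5 * ((n - N // 2) / 4.0) ** 2)
p = np.roll(p, -(N // 2))
P2 = np.abs(np.fft.rfft(p)) ** 2          # |P(nu)|^2 on the discrete grid

# chi^2_3 amplitudes: mean mu = 3, variance sigma^2 = 6
mu, sig2 = 3.0, 6.0
avg = np.zeros(N // 2 + 1)
for _ in range(R):
    a = rng.chisquare(3, size=M)
    f = np.zeros(N)
    for m in range(M):
        f += a[m] * np.roll(p, m * S)     # pulse train of Eq. (pulse train)
    avg += np.abs(np.fft.rfft(f)) ** 2
avg /= R

# Expected PSD: |P|^2 * { M sigma^2 + mu^2 |sum_m exp(-2i pi k m S/N)|^2 }
k = np.arange(N // 2 + 1)
phase = np.exp(-2j * np.pi * np.outer(k, np.arange(M)) * S / N)
bracket = M * sig2 + mu ** 2 * np.abs(phase.sum(axis=1)) ** 2
expected = P2 * bracket

# Compare where the expected power is non-negligible
mask = expected > 1e-3 * expected.max()
ratio = avg[mask] / expected[mask]
print(ratio.mean())   # close to 1
```

The continuum term $M\sigma^2\abs{P(\nu)}^2$ and the harmonic peaks both emerge from the average, in agreement with the text: the peaks come from the periodic spacing, the continuum from the scatter of the amplitudes.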
Nonetheless, for a given number of pulses, the contrast between the peaks and the continuum, given by \begin{equation} 1 + M\frac{\mu^2}{\sigma^2}, \label{eq:random_pulse_contrast} \end{equation} \noindent provides the coefficient of variation $c_v=\sigma/\mu$, which quantifies the extent of variability of a random variable in relation to its mean. While the contrast increases with the number of pulses, it decreases with the square of the coefficient of variation, which implies that the periodic component of the PSD tends to vanish if the amplitudes are highly variable. In the example of Figure~\ref{fig:sim_pulse_train}, the PDF is $\chi_3^2$, $c_v=\sqrt{2/3}$ and the contrast is $1+3M/2=26.5$. The only signature of the periodicity of the pulses in the PSD is the presence of harmonic peaks. Since the PSD of a single pulse is proportional to $\abs{P(\nu)}^2$, it is indistinguishable from the continuum of the PSD of a random pulse train. Therefore, the hump in the PSD of a real time series should in all cases be accounted for as background power. This is essential in order to derive proper confidence levels for the detection of the peaks, and justifies {\it a posteriori} the model of Equation~\ref{eq:kappa_model}. \section{EVIDENCE FOR PULSE TRAINS IN CORONAL LOOPS\label{sec:pulses_evidence}} \subsection{Detection In AIA Data.\label{sec:aia_detection}} In this section, we re-examine two of the three cases (Case 1 and Case 3) studied in detail by~\citet{Froment2015} in the light of the properties of random pulse trains described in \S\ref{sec:pulse_trains_psds}. We picked these two cases because, as demonstrated in \S\ref{sec:comparison_simulations}, they exhibit the two types of pulses shown in Figure~\ref{fig:sim_pulse_train}: nearly Gaussian and nearly double-exponential. Figure~\ref{fig:aia_mid_frames} shows the middle frames of the two input AIA sequences. 
Case 1 corresponds to the one-minute cadence, 9202 frame-long, 33.5~nm sequence (left panel), starting 2012 June 3 at 18:00~UT and ending 2012 June 10 at 04:29~UT. Case 3 corresponds to the one-minute cadence, 4611 frame-long, AIA 17.1~nm sequence (right panel) starting 2011 August 8 at 04:01 UT and ending 2011 August 12 at 03:59~UT. Each original image has been binned over $4\times 4$ pixels and remapped in heliographic coordinates \citep{Auchere2005} with a $0^{\circ}.05$ sampling pitch in longitude and latitude for feature tracking. The white contours delineate the regions of excess Fourier power \citep[Figure 4 of][]{Froment2015}. The time series have been obtained by averaging the intensity over the black boxes. The Fourier and wavelet analyses of these time series, including a critical reassessment of confidence levels, have been described in detail in \citet{Auchere2016} and are summarized below. The time series of Case 1 is plotted in dark gray in the top left panel of Figure~\ref{fig:case_1_335}. Data gaps, defined as the intervals during which no data exist within 30 s of an integer number of minutes since the beginning, represent 0.7\% of the sequence and are represented by the vertical gray bars, the height of which also represents the range of variation of the intensity. The gaps have been filled with linear interpolations between the nearest data points. Since we used a one-minute cadence sample of the original 12~s cadence AIA data, the remainder of the time series was considered to be evenly spaced and thus kept as-is. The histogram-style curve of the right panel is the Fourier power spectrum of the Hann-apodized time series. The solid red curve is the least-squares fit (of reduced $\chi^2=1.7$) of this spectrum with the three-component (dashed red curves) model $\sigma(\nu)$ of Equation~\ref{eq:kappa_model}. The hump formed by the kappa function term dominates the expected background power law between 6 and 80~$\mu$Hz. 
The peak of Fourier power at 30~$\mu$Hz\ (9 hr) labeled h1 exceeds the 95\% {\it global}\footnote{{\it Global} confidence levels take into account the total number of degrees of freedom in the spectra, as opposed to {\it local} confidence levels that apply to individual frequencies and/or dates \citep{Auchere2016}.} confidence level (gray curve) and reaches 26.3~$\sigma$, which corresponds to a probability of random occurrence of $1.7\times 10^{-8}$ \citep{Scargle1982, Auchere2016}. The same information is displayed in the middle panel after whitening of the spectrum, i.e. normalization to $\sigma(\nu)$. The bottom left panel shows the whitened wavelet spectrum of the zero-padded time series. The power at 30~$\mu$Hz\ (magenta line) exceeds the 95\% {\it local} confidence level (orange contours) during most of the sequence, with a maximum above the 95\% {\it global} confidence level (yellow contours) 39~hr after the beginning. Such a long-lived structure has a probability of random occurrence of $7\times 10^{-11}$. This produces a 6~$\sigma$ peak in the time-averaged wavelet spectrum (black curves in the middle and right panels) that lies above the 95\% global confidence levels (yellow curves), with an associated random occurrence probability of $6\times 10^{-7}$. Figure~\ref{fig:case_3_171} is identical to Figure~\ref{fig:case_1_335} for Case~3. In the fitted model (red curves), the kappa function dominates the power law between 13 and 3000~$\mu$Hz. The peak of Fourier power at 72~$\mu$Hz\ (3.9 hr) labeled h1 exceeds the 95\% global confidence level (gray curves) and reaches 38.9~$\sigma$, which corresponds to a probability of random occurrence of $1.5\times 10^{-14}$. The power at 72~$\mu$Hz\ in the wavelet spectrum of the bottom left panel exceeds the 95\% global confidence level (yellow contour) during most of the sequence. This produces a 16.3~$\sigma$ peak in the time-averaged wavelet spectrum (black curves).
The associated probabilities are too low to be meaningful. A second peak of Fourier power surpasses the 95\% global confidence level at 158~$\mu$Hz\ (1.8 hr). At 18.9~$\sigma$, it has a probability of random occurrence of $1.4\times 10^{-5}$. It lies 14~$\mu$Hz, or 8\%, higher than the theoretical frequency of the second-order harmonic -- labeled h2 -- of the primary peak (h1, the fundamental, or first harmonic). The expected frequencies of the higher undetected orders are marked by gray ticks. The h2 peak corresponds in the wavelet spectrum to the secondary band of power that exceeds the 95\% local confidence level between 23 and 43~hr after the beginning of the sequence, preceded by an isolated peak at the same frequency around 15~hr. The secondary band of wavelet power actually lies at exactly twice the frequency of the fundamental between 23 and 31~hr, both peaks being shifted by about 14~$\mu$Hz\ toward the high frequencies compared to h1 and h2 (magenta lines). It is thus likely that the secondary peak of Fourier power is indeed the second harmonic, the offset from h2 resulting from a combination of the noise and of the temporal variations of the fundamental frequency. As we will see in the next section, this explanation is corroborated by the more pointed shape of the pulses at the times where the harmonic is visible in the wavelet spectrum. Other explanations would require either a physical mechanism of frequency-doubling or the presence along the line of sight of a second structure pulsating at twice the frequency of the other. Combined with our analysis of the possible artefacts~\citep{Auchere2014}, all confidence levels indicate beyond reasonable doubt that the periodicities detected in the two time series of Figures~\ref{fig:case_1_335} and~\ref{fig:case_3_171} are of solar origin. Unlike in most observational studies of coronal loops, we did not subtract an estimate of the background and foreground emission.
Background subtraction is notably difficult and different methods can yield contrasting conclusions on the physical properties of loops \citep{Terzo2010}. In any case, by definition, the neighboring loops do not pulsate \citep[Figure 4 of][]{Froment2015, Auchere2016}. Therefore the pulsations would still be present after subtraction of a co-spatial background estimated from neighboring loops \citep[e.g.][]{Aschwanden2011}. In addition, since the automatically detected regions of excess Fourier power clearly take the shape of visible bundles of loops and of the corresponding extrapolated magnetic field lines~\citep[][]{Froment2016}, the detected pulsations can safely be attributed to these bundles of loops. The associated Fourier and wavelet power spectra present all the characteristics expected from random pulse trains (see \S~\ref{sec:pulse_trains_psds}): a broad hump centered on zero frequency, a primary peak of power a few tens of $\sigma$ above, and possibly the presence of higher-order harmonics. \subsection{Comparison With Simulated Pulse Trains\label{sec:comparison_simulations}} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure_wavelet_sim_gauss_light_curve_revised.eps} \caption{Fourier and wavelet analysis of simulated data based on a train of nearly Gaussian pulses of random amplitudes. This Figure is to be compared with Figure~\ref{fig:case_1_335}. See text for details.} \label{fig:sim_wavelet_gauss} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figure_wavelet_sim_expo_light_curve_revised.eps} \caption{Same as Figure~\ref{fig:sim_wavelet_gauss} for a nearly double-exponential pulse train. 
To be compared with Figure~\ref{fig:case_3_171}.} \label{fig:sim_wavelet_expo} \end{figure*} In order to determine whether, and under what conditions, the fundamental and harmonic peaks expected in the PSDs of random pulse trains can be detected in real data, we simulated observations of the two pulse trains of Figure~\ref{fig:sim_pulse_train} by adding background emissions and photon noise, and we analyzed the resulting light curves using the exact same code as the real time series of \S~\ref{sec:aia_detection}. We set $T=8$~hr, a cadence $\delta t$ of 1 minute and a total of $N=8192$ data points, i.e. a total duration of 137~hr or 5.7 days. As for real data (see Equation~\ref{eq:kappa_model}), the PSDs of the simulated data have three components: the PSD of the pulse train, that of the background emission, and a constant produced by photon noise. The pulse trains and the background emissions were scaled so that the relative variances of the three components are similar to those in observed PSDs. The variance of the photon noise is equal to the mean of the signal, which was set to be comparable to that in the real AIA data (3~s exposures, $4\times 4$ binned images, summation over 231 heliographic pixels and 16~photons~s$^{-1}$~pixel$^{-1}$ at 33.5~nm, summation over 55~pixels and 750~photons~s$^{-1}$~pixel$^{-1}$ at 17.1~nm). The background emissions are random time series synthesized using the algorithm of \citet{Timmer1995} to have PSDs following power laws of exponent -2. The zero-mean backgrounds were scaled to have variances 672 and 128 times that of the photon noise at 17.1~nm and 33.5~nm respectively (higher signal-to-noise ratio at 17.1~nm than at 33.5~nm). The zero-mean pulse trains were normalized to both have variances ten times that of their respective background. Next we included photon noise by replacing the intensity in the total signal at each time step by a random deviate drawn from a Poisson distribution with that mean.
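The background-synthesis and photon-noise steps can be sketched as follows (our own illustration of the \citet{Timmer1995} algorithm, not the authors' code; all scaling numbers are placeholders): each Fourier coefficient gets independent Gaussian real and imaginary parts with variance proportional to the target power law, the series is obtained by inverse transform, and a Poisson deviate is then drawn per sample.

```python
import numpy as np

def power_law_noise(n, slope=-2.0, rng=None):
    """Timmer & Koenig (1995): Gaussian random Fourier coefficients with
    E|F(nu)|^2 proportional to nu**slope; rfft layout enforces Hermitian
    symmetry, so the inverse transform is real."""
    rng = np.random.default_rng() if rng is None else rng
    freq = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros(n // 2 + 1)
    amp[1:] = freq[1:] ** (slope / 2.0)       # sqrt of the target PSD
    coef = amp * (rng.standard_normal(n // 2 + 1)
                  + 1j * rng.standard_normal(n // 2 + 1))
    coef[0] = 0.0                              # zero-mean series
    coef[-1] = coef[-1].real                   # Nyquist term must be real
    return np.fft.irfft(coef, n)

rng = np.random.default_rng(1)
n = 4096
background = power_law_noise(n, slope=-2.0, rng=rng)

# Scale to a chosen variance, add a mean level, then Poisson-deviate each
# sample to mimic photon noise (illustrative numbers only).
mean_level = 1000.0
signal = mean_level + background * (50.0 / background.std())
counts = rng.poisson(np.clip(signal, 0, None))
```

Averaging the periodograms of many such realizations recovers the input power-law slope, which is how one can check the synthesis step.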
Finally, we removed 30 randomly chosen data points to mimic data gaps. The resulting light curves, normalized to their standard deviation $\sigma_0$, are shown in gray in the top left panels of Figures~\ref{fig:sim_wavelet_gauss} and~\ref{fig:sim_wavelet_expo} for, respectively, the nearly Gaussian ($\kappa=50$) and nearly double-exponential ($\kappa=5$) cases. The background emissions are shown in green and the pulse trains (identical to those of Figure~\ref{fig:sim_pulse_train}) in blue. These two figures are to be compared individually with Figures~\ref{fig:case_1_335} and~\ref{fig:case_3_171}: \begin{enumerate} \item In the bottom right panels, the Fourier (gray histograms) and global wavelet (black lines) spectra have identical shapes in simulated and real data: an overall power-law behavior flattening out at high frequencies with a hump between 10 and 100~$\mu$Hz. The latter is more pronounced in the AIA 33.5~nm and Gaussian pulse train spectra (Figures~\ref{fig:case_1_335} and~\ref{fig:sim_wavelet_gauss}). In simulated data, the hump matches the expected PSDs of the pulse trains\footnote{We use the following normalizations for the Fourier transform and the fast Fourier transform (FFT): $F(\nu)=\int_{-\infty}^\infty f(x)\exp\left(-2\text{i}\pi\nu x\right) \mathrm{d}x$, $\mathrm{FFT}(k)=\frac{1}{N}\sum_{n=0}^{N-1}x_n\exp\left(-2\text{i}\pi kn/N\right)$. The analytic PSDs are scaled by $(N\delta t)^2$ to match those computed by FFT.} (superimposed in blue and shown in the bottom panel of Figure~\ref{fig:sim_pulse_train}). \item A fundamental frequency (marked h1 in magenta) is detected in the Fourier spectra in all cases with comparable significance levels (10-30 times the local mean power). A second harmonic (marked h2) is also detected for the AIA 17.1~nm series and for the double-exponential train (Figures~\ref{fig:case_3_171} and~\ref{fig:sim_wavelet_expo}). 
\item The model of Equation~\ref{eq:kappa_model} is a good fit to the mean power in all cases, as shown by the reduced $\chi^2$ values and by the flatness of the whitened spectra (middle panels). In the simulations, while the $s=-2$ slope of the power law of the background emission and the width of the hump ($\rho=0.07$) are correctly recovered, the values of $\kappa$ differ significantly from the input. The reason is that the parameters of the kappa function are constrained only over a limited range of frequencies. Nonetheless, the fit correctly identifies the simulated pulses as nearly Gaussian ($\kappa=31.6$, Figure~\ref{fig:sim_wavelet_gauss}) and nearly double-exponential ($\kappa=2.9$, Figure~\ref{fig:sim_wavelet_expo}), as shown also by the elementary pulses reconstructed by inverse Fourier transformation of the kappa function component. \item From Equation~\ref{eq:kappa_psd} and Figure~\ref{fig:sim_pulse_train}, the more peaked the pulses, the more extended the high-frequency wing of their PSD. It is thus easier to detect high-order harmonics for peaked pulses than for rounded ones. The hump in Figures~\ref{fig:case_1_335} and~\ref{fig:sim_wavelet_gauss} drops too rapidly ($\kappa=29.3$ and $\kappa=31.6$) for the second harmonic to be detected. Conversely, in Figures~\ref{fig:case_3_171} and~\ref{fig:sim_wavelet_expo}, the extended wing ($\kappa=2.6$ and $\kappa=2.9$) remains above the power-law component for longer and the second harmonic is visible. \item The high-frequency wing of the kappa function is itself a power law (Equation~\ref{eq:kappa_psd}), so it can be difficult to distinguish the hump from the background in the case of peaked pulses (Figures~\ref{fig:case_3_171} and~\ref{fig:sim_wavelet_expo}). \item Significant power is detected in all the wavelet spectra (bottom left panels), but intermittently.
Indeed, the power in each pulse scales with the square of its amplitude, while the model of background power used to derive the confidence levels is constant with time. \item The strongest peaks of power in the wavelet spectra of Figures~\ref{fig:sim_wavelet_gauss} and~\ref{fig:sim_wavelet_expo} present high-frequency extensions (most visible between 25 and 60~hr in the AIA 17.1~nm data). These correspond to the enhanced visibility of the high-frequency wing of the PSD of strong individual pulses with respect to the background power law. \end{enumerate} This comparison shows that the fundamental characteristics of the Fourier and wavelet spectra of Figure~\ref{fig:case_1_335} (respectively Figure~\ref{fig:case_3_171}) can be explained by the presence of a nearly Gaussian (respectively nearly double-exponential) pulse train in the AIA 33.5~nm (respectively 17.1~nm) time series. \section{CONCLUSIONS\label{sec:conclusions}} Numerical simulations of coronal loops indicate that periodic thermal non-equilibrium (TNE) cycles are an unambiguous tracer of quasi-steady footpoint heating. TNE has been proposed as a viable explanation of the intensity pulsations that we recently detected in coronal loops~\citep{Froment2015, Froment2016}. Since the boundary conditions relevant to TNE (loop geometry, heating rate and localization, etc.) are likely to vary randomly over time, it is expected that each TNE cycle will be different from the preceding one, effectively producing periodic intensity pulses of random amplitudes. In this paper, we demonstrated that the PSDs of the time series reported by~\cite{Auchere2014} and~\cite{Froment2015} indeed exhibit the characteristic harmonics and continuum expected from random pulse trains. We thus explicitly use the terminology {\it periodic pulses}, as opposed to {\it oscillations}, which would incorrectly suggest that the observed periodicities correspond to vibrational modes.
The theoretical PSD of pulse trains to which we compared our observations presupposed that the amplitudes are not correlated (see the Appendix). However, correlated amplitudes -- e.g. resulting from a remnant of the conditions of past cycles -- would only modify the contrast between the harmonic peaks and the continuum of the PSD~\citep{Xiong2000}. In all cases, the harmonics are the signature of the periodicity of the pulses, the continuum is the signature of the randomness of their amplitudes, and the ratio between the two constrains the PDF of the latter. The identification of random pulse trains in the data reinforces TNE as the correct explanation for the slow pulsations observed in coronal loops. \citet{Auchere2014} estimated that half of the active regions in the year 2000 underwent a pulsation event. Considering that many events may have been missed by the automatic detection algorithm (because of, e.g., the high detection thresholds, data gaps, or the bias toward strictly periodic events inherent to working in Fourier space), it is reasonable to think that the vast majority of active regions exhibit this type of behavior at least once in their lifetime. In addition, using one-dimensional hydrodynamic simulations with realistic loop geometries from photospheric magnetic field extrapolations, \citet{Froment2016} have shown that the region of parameter space for which TNE cycles develop is very limited, thus explaining why only some of the loops of an active region exhibit pulsations even if all were heated quasi-steadily at their footpoints. Since TNE is already the standard model of prominence formation and coronal rain, we now have a growing body of evidence that quasi-steady footpoint heating is more common in active regions than previously thought, even though the fundamental mechanism could still be anything from truly continuous wave dissipation to high-frequency nanoflares.
As a final point, about half of the pulsation events reported by \citet{Auchere2014} were located in the quiet Sun, which tantalizingly hints that TNE may be at play in these regions too. \begin{acknowledgements} The authors acknowledge the use of the wavelet code by \cite{Torrence1998}. The authors acknowledge the use of {\it SDO}/AIA\ data. This work used data provided by the MEDOC data and operations centre (CNES/CNRS/Univ. Paris-Sud), http://medoc.ias.u-psud.fr/. \end{acknowledgements} \bibliographystyle{apj}
\section{Introduction} \label{intro} $\delta\;{\rm Scuti}$ stars constitute a large class of pulsating stars representative of chemically normal intermediate-mass stars on and near the main sequence (see e.g. \cite{Breger2000}, \cite{Rodriguez2001}). Their seismic exploitation, however, meets major difficulties, often referred to as `the mode identification problem', `the fast rotation treatment', and `the selection effects'. These various expressions refer to two main difficulties. First, $\delta\;{\rm Scuti}$ stars, being chemically normal A and early F stars, are characterized by large rotation rates (\cite{Royer2014}), and the theoretical modelling of their pulsation spectra cannot rely on classical perturbative approaches (\cite{Lignieres2006}, \cite{Reese2006}). Then, although we understand the process responsible for their pulsational instability, we have very little insight into the process responsible for the amplitude limitation (see however recent studies by \cite{Barcelo2015} and \cite{Bowman2016}) and thus no clue about how amplitudes are distributed between modes and for different stars. However, we do know a couple of things about these stars. First, there is increasing evidence that periodicities or regular spacings can be found in $\delta\;{\rm Scuti}$ spectra (\cite{GarciaH2013}, \cite{Paparo2016}, \cite{Breger2011}), and recently \cite{GarciaH2015} demonstrated that this spacing is a good proxy for the mean density, just as the large separation $\Delta \nu$ is for solar-like pulsators. Then, linear stability calculations provide reliable results (\cite{Dupret2004}), and it is appealing to use them to characterize stars in terms of mass range or evolution stage (see e.g. \cite{Michel1999} for stars in clusters and \cite{Zwintz2014} in the case of pre-main sequence $\delta\;{\rm Scuti}$ stars).
In the present paper, we use a large set of homogeneous spectra observed with CoRoT (\cite{Baglin2016}) to revisit these questions and see what CoRoT data tell us about $\delta\;{\rm Scuti}$ stars. \section{The observational sample and the determination of $f_{\rm min}$, $f_{\rm max}$ and $a_{\rm max}$.} \label{sec-1} The automated supervised classification of variable stars in the CoRoT programme (ASCVC hereafter, \cite{Debosscher2009}) yields about 1860 objects classified as $\delta\;{\rm Scuti}$ stars with a probability higher than 80$\%$. In comparison, catalogues before the space-photometry era gathered about 700 objects (\cite{Rodriguez2000}), among which a large fraction had been discovered by large surveys like the Hipparcos (\cite{Perryman1997}), OGLE (\cite{Udalski1997}) and MACHO (\cite{Alcock2000}) projects. The present CoRoT sample is thus very valuable in terms of number of objects and also in terms of homogeneity. We computed the Fourier spectrum for each of these light curves and set to zero amplitude the parts of the spectra possibly hampered by instrumental/environmental artefacts induced by the orbital period (see \cite{Auvergne2009}). These consist of narrow intervals around the orbital frequency and its harmonics, each associated with a few daily aliases, as can be seen in figure~\ref{fig-6}. For each spectrum, we also exclude from our study the part below $f_{Lcut}=25\;{\rm {\mu}Hz}$ (also set to zero amplitude) in order to avoid the influence on our analysis of possible power of instrumental or environmental origin at low frequency. We define a limit amplitude criterion for peaks to be considered: we take the maximum between 10 times the mean amplitude level and the amplitude of the highest peak divided by 8. This last constraint aims at avoiding artefacts from large-amplitude peaks convolved with the observational window.
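As a minimal sketch of this peak-selection criterion (the toy spectrum, its peak values, and all variable names below are ours for illustration only, not the actual analysis code):

```python
import numpy as np

def select_peaks(freqs, amps, f_lcut=25e-6):
    """Boolean mask of peaks passing the amplitude criterion of the text.

    A peak is retained if its amplitude exceeds the maximum of
    (a) 10 times the mean amplitude level and (b) 1/8 of the highest peak,
    considering only frequencies above f_lcut (here 25 muHz).
    """
    keep = freqs >= f_lcut
    threshold = max(10.0 * amps[keep].mean(), amps[keep].max() / 8.0)
    return keep & (amps >= threshold)

# Toy spectrum: flat noise floor plus two strong peaks (illustrative values)
freqs = np.linspace(0.0, 1000e-6, 2001)          # 0 to 1000 muHz
amps = np.full_like(freqs, 0.01)
amps[np.searchsorted(freqs, 300e-6)] = 2.0       # peak near 300 muHz
amps[np.searchsorted(freqs, 600e-6)] = 1.0       # peak near 600 muHz

mask = select_peaks(freqs, amps)
f_min, f_max = freqs[mask].min(), freqs[mask].max()   # range of detected signal
a_max = np.sqrt(np.sum(amps[mask] ** 2))              # quadratic-sum amplitude index
```

On this toy spectrum only the two strong peaks survive the threshold; the frequency range of the retained peaks and their quadratic-sum amplitude then follow directly.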
Then, we determine for each spectrum the range of detected signal, denoted $[f_{\rm min},f_{\rm max}]$, as the frequency range encompassing all peaks satisfying the previous amplitude criterion. We also use these spectra to produce an index characterizing the amplitude of the oscillations. We considered two versions of this index. One is simply the amplitude of the highest peak. The second is the square root of the quadratic sum of the amplitudes of all peaks satisfying our amplitude criterion. Interestingly, the results were found to vary at most by a factor of two from one version to the other. In the present work, all results for $a_{\rm max}$ refer to the square root of the quadratic sum, which we expect to be a more stable measurement and more representative of the energy involved in pulsation. The results are presented in figure~\ref{fig-1} and their interpretation in the light of theoretical models is discussed in Sect.~\ref{sec-3}. \begin{figure}[h] \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig1.eps} \caption{Diagram $f_{\rm min}\_f_{\rm max}$ for the set of CoRoT stars described in the text. Oblique lines indicate equal frequency width $f_{\rm max}-f_{\rm min}$. The values mentioned on the y-axis indicate both the $f_{\rm max}$ value for a horizontal line and the $f_{\rm max}-f_{\rm min}$ width for an oblique line. Values of $a_{\rm max}$ are given by the colour code.} \label{fig-1} \end{figure} \section{Theoretical estimates of $f_{\rm min}$ and $f_{\rm max}$} \label{sec-2} We used a grid of theoretical models representative of the whole $\delta\;{\rm Scuti}$ instability strip for the main-sequence evolution stage (see figure~\ref{fig-2}). This grid is the one used and described in \cite{Dupret2004}.
The models have been computed with the code CLES (\cite{Scuflaire2008}). The physics of the models is standard, and the only specific aspects of interest at the level of the present study are the use of overshooting with $\alpha_{ov}=0.2\,H_p$ and a mixing-length parameter set to the solar-calibrated value of 1.8. The metallicity ($Z=0.02$) had been chosen as representative of the solar one in \cite{Dupret2004}. Here again, changing to a more up-to-date value would not significantly change our results. The linear stability of the modes has been obtained following \cite{Dupret2004} and \cite{Grigahcene2005}. \begin{figure}[h] \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig2.eps} \caption{Hertzsprung-Russell diagram featuring models described in the text. Models showing at least one unstable mode are marked by red star symbols delimiting the theoretical instability strip. For each sequence, mass is indicated in solar units.} \label{fig-2} \end{figure} These linear stability calculations are used to derive theoretical counterparts of the $f_{\rm min}$ and $f_{\rm max}$ values obtained for observed stars in Sect.~\ref{sec-1}. They have been determined for modes of degree $l=0,1,2$ and are illustrated in figure~\ref{fig-3}. \begin{figure}[h] \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig3.eps} \caption{Diagram $f_{\rm min}\_f_{\rm max}$, as in figure~\ref{fig-1}, but for sequences of models illustrated in figure~\ref{fig-2} and described in the text. The colours stand for sequences of models of a given mass.} \label{fig-3} \end{figure} When looking at figure~\ref{fig-3}, it is clear that, for a sequence of models of a given mass, the position in the $f_{\rm min}\_f_{\rm max}$ diagram changes under two effects. One is the decrease of the mean density (or the decrease of $\Delta {\nu}_0 \propto (G M/R^3)^{1/2}$) with evolution.
The second is associated with the gradual decrease of the radial orders of the unstable modes as the star crosses the instability strip from the blue to the red border, as described in \cite{Dupret2004}. Both effects globally induce a decrease of $f_{\rm min}$ and $f_{\rm max}$ with age on the main sequence. It is also worth noticing that evolution sequences entering the instability strip through the so-called blue border (i.e. masses higher than $1.9M_\odot$) correspond, in the $f_{\rm min}\_f_{\rm max}$ diagram, to sequences starting on the $(f_{\rm min}=f_{\rm max})$ central axis, i.e. with a very narrow frequency range of unstable modes. On the contrary, evolution sequences starting on the ZAMS inside the instability strip (i.e. masses between 1.5 and 1.9$M_\odot$) correspond, in the $f_{\rm min}\_f_{\rm max}$ diagram, to sequences starting at high $f_{\rm max} - f_{\rm min}$ values, i.e. with a larger frequency range of unstable modes and thus farther from the $(f_{\rm min}=f_{\rm max})$ central axis. Finally, we can also notice that, with evolution, all sequences tend to converge at low $f_{\rm min}$ and $f_{\rm max}$ in the diagram, those going out of the instability strip through the red border corresponding to sequences terminating on the $(f_{\rm min}=f_{\rm max})$ central axis. We take as a working hypothesis that rotation will not drastically impact these theoretical estimates of the $f_{\rm min}$ and $f_{\rm max}$ values. Rotational splitting is expected to extend the range of observed peaks by an amount which is difficult to estimate precisely in the present state of our knowledge, but which should be, in the worst case, of the order of a few times the rotation frequency, i.e. a few times 10 to 20$\;{\rm {\mu}Hz}$ for the fastest rotating objects.
This is not negligible, but if this extension remains below 50$\;{\rm {\mu}Hz}$ for most of the stars, the $f_{\rm min}\_f_{\rm max}$ diagram remains discriminant in terms of evolution stage and mass range, as can be seen in figure~\ref{fig-3}. \section{Comparison of observational and theoretical $f_{\rm min}\_f_{\rm max}$ diagrams} \label{sec-3} When comparing figure~\ref{fig-1} and figure~\ref{fig-3} in the light of the previous remarks, we notice a few differences and similarities. The set of observed stars shows components in the $f_{\rm min}\_f_{\rm max}$ diagram which are not present for the models. The main difference consists of a vertical ridge observed at low $f_{\rm min}$ (below 50$\;{\rm {\mu}Hz}$) in figure~\ref{fig-1}. This accumulation ridge could be due to an edge effect induced by the processing of the spectra described in Sect.~\ref{sec-1}, where we fix the $f_{Lcut}$ value to shield our analysis from the influence of possible low-frequency noise. However, we should keep in mind that this ridge is also the place where we should expect hybrid $\delta\;{\rm Scuti}$-$\gamma\;{\rm Doradus}$ stars. In addition to the modes characterizing $\delta\;{\rm Scuti}$ stars, these objects present low-frequency pulsation modes not found to be unstable in our models. A closer inspection of these spectra will be necessary to explore this possibility. The second component presented by the observed set of stars and not by the models is much less important in number. It appears as a cloud of points around the upper left corner of figure~\ref{fig-1}, i.e. stars with $f_{\rm min}$ values below 200$\;{\rm {\mu}Hz}$ and $f_{\rm max}$ values between 500 and 900$\;{\rm {\mu}Hz}$. As can be read from the oblique lines in figure~\ref{fig-1}, this corresponds to extremely large frequency-range estimates ($f_{\rm max}-f_{\rm min} > 500 \;{\rm {\mu}Hz}$).
The inspection of several spectra suggests that, to a large extent, these values could be due to the poor handling of particularly severe window artefacts by the data processing presented in Sect.~\ref{sec-1}. This is supported by the presence of a significant fraction of spectra with high $a_{\rm max}$ values among those points. Besides this, the $f_{\rm min}\_f_{\rm max}$ diagram for the observed stars and the one for the models are in reasonably good agreement, with a sparse distribution of points at high $f_{\rm min}$ and $f_{\rm max}$ values, where evolution sequences are highly spread in mass and age in figure~\ref{fig-3}, and a denser concentration of points at low values, where the evolution sequences tend to converge. \subsection{ Amplitude of the oscillations versus evolution.} \label{sec-30} It is worth noticing that the distribution of $a_{\rm max}$ shows a clear gradient (from 0.1 to 40 parts per thousand, ppt hereafter), with increasing amplitude values toward low values of $f_{\rm min}$ and $f_{\rm max}$. As we already commented, according to the theoretical diagram, this domain of the $f_{\rm min}\_f_{\rm max}$ diagram is expected to host rather evolved stars. The fact that evolved $\delta\;{\rm Scuti}$ stars tend to show higher amplitudes than main-sequence ones has long been known (see e.g. \cite{Rodriguez2001}), but to our knowledge, this trend has never been observed with such a resolution in amplitude and on such a large sample of objects. In the case of {\it Kepler} data, \cite{Balona2011} considered about 1570 objects, but a large fraction of them ($\sim$1150) were observed in the so-called long-cadence mode, i.e. with a 30-minute sampling time, which is not suited to address frequencies higher than 270$\;{\rm {\mu}Hz}$. These data will thus be very helpful to study $\delta\;{\rm Scuti}$ stars, but mostly evolved ones.
\subsection{ Evidence of a regular frequency pattern in early main sequence $\delta\;{\rm Scuti}$ stars.} \label{sec-31} Models off the main sequence are not presented here, but we have inspected several such sequences, and they are always found in the domain of low $f_{\rm min}$ and $f_{\rm max}$. Even with $\alpha_{ov}=0$ (no overshooting), which corresponds to the shortest extension of the main sequence, post-main sequence models seem to remain below the $f_{\rm max}=400\;{\rm {\mu}Hz}$ limit. We now consider stars with $f_{\rm max} > 400 \;{\rm {\mu}Hz}$, i.e. for which, according to the models, the $f_{\rm min}$ and $f_{\rm max}$ values correspond unambiguously to main-sequence models, whatever the amount of overshooting considered. As illustrated in figure~\ref{fig-4}, we also avoid spectra with dubious values, as discussed in Sect.~\ref{sec-1}. The domain of the HR diagram corresponding to this selection, in the case of the models presented here, is illustrated in figure~\ref{fig-5}. For this set of about 200 stars, we build the image presented in figure~\ref{fig-6}, in which each line is an amplitude spectrum clipped to an arbitrary value $a_{thr}$ (here $a_{thr}=0.2$~ppt) and coded in grey scale. The full image thus presents more than 200 spectra, sorted by increasing $f_{\rm max}$ value from the bottom to the top. In figure~\ref{fig-6}, we distinguish a few ridges approximately parallel to the ridge drawn by the peaks associated with the $f_{\rm max}$ values. These ridges, separated by a few tens of ${\rm {\mu}Hz}$, immediately recall the quasiregular pattern of axisymmetric island modes as described by non-perturbative calculations for fast rotators (\cite{Reese2008} and \cite{Ouazzani2015}).
They also recall the various detections of spacings of the order of the large separation in $\delta\;{\rm Scuti}$ stars (\cite{GarciaH2015}, \cite{Paparo2016}). The pattern associated with these ridges has to be common to a sufficient fraction of our stellar sample to show up in figure~\ref{fig-6}. This suggests that this quasiregular pattern of peaks has to be, to some extent, independent of rotation, which necessarily varies from one star to another. Axisymmetric modes are good candidates to explain this pattern, as we will show hereafter. An appealing idea is that the eigenspectra of axisymmetric modes of the different stars could be, to first order, homologous and thus distributed according to a common pattern of peaks, simply multiplied by a different value of the large separation (or mean density). In order to test this idea a bit further, we rescaled the (observational) spectra, taking as abscissa the logarithm of the frequency instead of the frequency itself. If our hypothesis is correct, the rescaled eigenspectra should show a similar pattern, just shifted by an amount depending on (the logarithm of) the individual large separation value. These rescaled spectra are presented in figure~\ref{fig-7}, where they have been shifted according to their individual $f_{\rm max}$ value. Here the sequence of ridges is more visible, even if the pattern remains blurred at low abscissae. We treated in the same way the theoretical spectra associated with the set of models shown in figure~\ref{fig-5} (models satisfying the same criterion $f_{\rm max}>400\;{\rm {\mu}Hz}$ as our subsample of stars). Here again, we considered only unstable axisymmetric ($m=0$) $l=0,1,2$ modes. The result is presented in figure~\ref{fig-8}. Figure~\ref{fig-7} and figure~\ref{fig-8} show great similarities, with a few clear ridges near zero in abscissa followed by a less clear pattern.
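The rescaling just described can be sketched numerically. In this illustrative Python fragment (the comb-of-Gaussians spectra, all parameter values, and the variable names are toy assumptions of ours, not the analysis code), two homologous combs of peaks with different large separations line up once each spectrum is resampled onto a common $\log_{10}(f/f_{\rm max})$ axis:

```python
import numpy as np

def rescale_to_log(freqs, amps, f_max, n_bins=2048, span=(-1.0, 1.0)):
    """Resample an amplitude spectrum onto a common log10(f / f_max) axis.

    Spectra that are homologous (same peak pattern times a different large
    separation) then differ only by a horizontal shift, removed here by
    normalizing each spectrum by its own f_max.
    """
    x = np.log10(freqs / f_max)
    grid = np.linspace(span[0], span[1], n_bins)
    return np.interp(grid, x, amps, left=0.0, right=0.0)

def comb(f, delta):
    """Toy spectrum: Gaussian peaks at radial-order-like multiples of delta."""
    return sum(np.exp(-0.5 * ((f - k * delta) / 5e-6) ** 2) for k in range(4, 9))

grid_f = np.linspace(1e-6, 1000e-6, 4000)        # frequency axis, 1 to 1000 muHz

# Two "stars" with different large separations but the same underlying pattern
spec_a = rescale_to_log(grid_f, comb(grid_f, 50e-6), f_max=8 * 50e-6)
spec_b = rescale_to_log(grid_f, comb(grid_f, 80e-6), f_max=8 * 80e-6)

image = np.vstack([spec_a, spec_b])              # one spectrum per image line
alignment = np.corrcoef(spec_a, spec_b)[0, 1]    # ridges coincide after rescaling
```

The high correlation between the two rescaled rows illustrates why homologous spectra produce common ridges in such an image, whereas without the $f_{\rm max}$ normalization the peak combs would fall at unrelated abscissae.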
We must bear in mind, however, that these stars do not all have the same range of unstable radial orders, which makes the superposition of the spectra less clear. This is demonstrated in figure~\ref{fig-9}, where the theoretical spectra are this time normalized by $\Delta \nu$ instead of $f_{\rm max}$. The ridges are much clearer now. This shows that (for models without rotation) a common pattern exists and is not too strongly distorted by the structural changes of the models in the early main sequence. In order to illustrate the anticipated impact of rotation on this comparison, we added at the bottom of figure~\ref{fig-8} (between lines 10 and 18) the pattern obtained for axisymmetric (\~l=0) island modes (see \cite{Lignieres2009}) in 9 models based on the Self-Consistent Field (SCF) method (\cite{MacGregor2007}). These models are ZAMS 2$M_{\odot}$ models, ranging from 0 to 80\% of the break-up rotation rate. These eigenfrequencies are shown to illustrate the effect of rotation on the pattern, but these modes are not necessarily unstable. We see that the regular pattern is preserved to a good extent over this large range of rotation. In fact, at low radial order, the distribution of the modes appears even more regular than that of the models without rotation. To conclude on this point, it seems important to stress that the spectra considered in this study obviously show numerous peaks outside of the common quasiregular pattern revealed by figure~\ref{fig-6} or figure~\ref{fig-7}. Nor do these figures suggest that the modes associated with this pattern are systematically expressed (with detectable amplitude). However, these results suggest that this pattern is characteristic of the eigenspectra of chemically normal early main sequence stars.
\begin{figure}[h] \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig4.eps} \caption{Same as in figure~\ref{fig-1}, but with purple lines marking the selection of the set of early main sequence objects with $f_{\rm max} > 400\;{\rm {\mu}Hz}$ and $f_{\rm max}/f_{\rm min} < 4.5$, as discussed in Sect.~\ref{sec-31}. } \label{fig-4} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig5.eps} \caption{Same as in figure~\ref{fig-2}, but here only unstable models from the selection described in Sect.~\ref{sec-31} and illustrated in figure~\ref{fig-4} are marked by red symbols. } \label{fig-5} \end{figure} \begin{figure*} \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig6.eps} \caption{ Each horizontal line of this image is a grey-scale-coded amplitude spectrum of one of the stars belonging to the set of early main sequence stars selected as described in Sect.~\ref{sec-31} and illustrated in figure~\ref{fig-4}. Each spectrum is truncated in amplitude to values lower than a common limit value (here $2\times 10^{-4}$). The spectra clearly show thin vertical dark ridges resulting from setting to zero amplitude the parts of the spectra associated with the orbital period and its harmonics, plus the day-aliases around them, as explained in Sect.~\ref{sec-1}. } \label{fig-6} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig7.eps} \caption{ Same as in figure~\ref{fig-6}, but the abscissae of the spectra have been normalized by $f_{\rm max}$ and converted to logarithmic scale as described in the text. In addition, the spectra have been ordered by increasing $f_{\rm max}/f_{\rm min}$ value from the bottom to the top. The upper part of the image has been extended to host a few additional lines showing (also in grey scale) the mean of all individual spectra.
} \label{fig-7} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig8.eps} \caption{ Same as in figure~\ref{fig-7} (except for the 20 bottom rows), but for the set of models shown in figure~\ref{fig-5}, i.e. models satisfying the same criterion $f_{\rm max} > 400\;{\rm {\mu}Hz}$ as the subsample of stars considered in figure~\ref{fig-7}. The 20 bottom rows are dedicated to \~l=0 axisymmetric modes of models with rotation rates going from zero to 80\% of the break-up rotation rate, as described in Sect.~\ref{sec-31}. } \label{fig-8} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize,clip]{MichelDSC_Fig9.eps} \caption{ Same as in figure~\ref{fig-8}, but here the theoretical spectra are normalized by $\Delta \nu$ instead of $f_{\rm max}$. } \label{fig-9} \end{figure*} \section{Conclusions} \label{sec-con} We used a homogeneous set of about 1860 stellar light curves collected with CoRoT for stars classified as $\delta\;{\rm Scuti}$ stars with a probability higher than 80$\%$ by the automated supervised classification of variable stars in the CoRoT archive. We have characterized these spectra in terms of the range of observed frequencies, defining two parameters, $f_{\rm min}$ and $f_{\rm max}$. The distribution of our sample of stars in the $f_{\rm min}\_f_{\rm max}$ diagram appears to be consistent with the one obtained from a grid of theoretical models and linear stability calculations. This suggests that the $f_{\rm min}\_f_{\rm max}$ values could be used to characterize a specific mass range or evolution stage. Based on this criterion, we have selected stars that we consider to be on the early main sequence. We have shown that their spectra reveal a common pattern modulated by individual large separation values. The existence of regularities in the spectra of $\delta\;{\rm Scuti}$ stars has already been demonstrated by several studies.
It has even been demonstrated that this spacing is compatible with the classical large separation index used for solar-type pulsators and that rotation does not hamper the use of this index. Here, the ridges we find for a large sample of stars confirm these results, but in addition we show, on a large set of objects, that these regularities are due to a regular pattern of consecutive peaks which is compatible with the patterns expected for axisymmetric island modes, as suggested by recent non-perturbative modelling of fast rotating stars (\cite{Reese2009}). We still need to improve our knowledge and parametrisation of this pattern in synergy with further theoretical work on fast rotation. This study would obviously benefit from being extended to an even larger set of stars. The extension of this work to {\it Kepler} data is not expected to change the case of the early main sequence stage considerably, since the high frequencies characterizing this domain are not accessible with the sampling time of the long-cadence data which constitute the bulk of {\it Kepler} data. However, in the difficult part of the $f_{\rm min}\_f_{\rm max}$ diagram corresponding to evolved $\delta\;{\rm Scuti}$ stars, the {\it Kepler} data might be of great help. This suggests that $f_{\rm min}$ and $f_{\rm max}$, as well as large separation values, might be used as seismic indices to characterize stars, and this opens the perspective of ensemble seismology using $\delta\;{\rm Scuti}$ stars. \begin{acknowledgement} The CoRoT space mission, launched on December 27th 2006, has been developed and is operated by the Centre National d’Etudes Spatiales (CNES), with contributions from Austria, Belgium, Brazil, the European Space Agency (RSSD and Science Programme), Germany and Spain. We acknowledge the support from the EC Project SpaceInn (FP7-SPACE-2012-312844). EM, KB, RS and DR acknowledge the support from the Programme de Physique Stellaire (PNPS).
AGH acknowledges support from Fundação para a Ciência e a Tecnologia (FCT, Portugal) through the fellowship SFRH/BPD/80619/2011. JCS acknowledges funding support from Spanish public funds for research under project ESP2015-65712-C5-5-R (MINECO/FEDER), and under Research Fellowship program “Ramón y Cajal” (MINECO/FEDER). \end{acknowledgement}
\section{Introduction} Quantum spin liquids (QSLs) constitute an exotic class of many-body states without symmetry-breaking spin order, in which a number of unconventional properties such as fractionalized excitations and long-range entanglement emerge \cite{Anderson1973,Balents2010,Zhou2017,JW2019QMats,Broholm2020Science}. The celebrated, exactly solvable Kitaev model has attracted enormous attention due to its QSL ground state, whose localized and itinerant Majorana fermions are useful for fault-tolerant quantum computing~\cite{Kitaev2003,Kitaev2006}. Such remarkable properties have incited a flurry of works on the materialization of the Kitaev model in, e.g., certain 4$d$- and 5$d$-electron compounds containing cations with a low-spin $d^5$ electron configuration and edge-shared ligand octahedra. This configuration yields the Kitaev interaction through the synergy of large spin-orbit coupling and Coulomb repulsion on the honeycomb lattice~\cite{Jackeli2009}. Moreover, some high-spin $d$- and $f$-electron systems beyond the Jackeli-Khaliullin mechanism have recently come forth that may also realize the Kitaev interaction \cite{Trebst2017arXiv,Winter2017,Janssen2019,Motome2020b}. The ruthenium halide $\alpha$-RuCl$_3$ is arguably the most studied Kitaev material~\cite{Sears2015,Banerjee2017,Do2017, Banerjee2018,Balz2021,Sears2017, Zheng2017,Baek2017,Jansa2018, Wulferding2020,Ponomaryov2020,Leahy2017,Modic2021, Kasahara2018Unusual,Kasahara2018,Yokoi2021Science, Yamashita2020sample}. Although it has a long-range zigzag antiferromagnetic ordered state below 7~K~\cite{Sears2015,Banerjee2017, Do2017}, proximate Kitaev QSL behaviors were observed at elevated temperatures~\cite{Do2017,Banerjee2018}.
The zigzag spin order is suppressed under an in-plane field of around 7~T~\cite{Sears2017,Zheng2017,Banerjee2018,Balz2021}, and the possible field-induced QSL phase has been intensively studied via multiple experimental probes, including Raman scattering \cite{Wulferding2020}, terahertz absorption measurements \cite{Ponomaryov2020}, nuclear magnetic resonance~\cite{Zheng2017, Baek2017,Jansa2018}, magnetic torque \cite{Leahy2017,Modic2021}, and thermal Hall conductivity measurements~\cite{Kasahara2018Unusual, Kasahara2018,Yokoi2021Science,Yamashita2020sample}. \begin{figure}[h!] \includegraphics[angle=0,width=1\linewidth]{Fig1.pdf} \renewcommand{\figurename}{\textbf{Fig. }} \caption{The angle-field phase diagram of the realistic $K$-$J$-$\Gamma$-$\Gamma'$ model for $\alpha$-RuCl$_3$. There are three phases: the zigzag (ZZ), quantum spin liquid (QSL), and polarized (P) states, as indicated in the figure. The phase boundaries are determined from the responses in the Gr\"uneisen parameter $\Gamma_B$, magnetic torque $\tau$, and magnetotropic susceptibility $k$ at $T/|K| \simeq 0.01$, consistent with those determined from the ground-state magnetization curves~\cite{Zhou2022arXiv}. We reveal that the phase boundaries between the ZZ (a ``solid''-like order), QSL (a liquid-like phase), and P (a weakly interacting ``gas''-like system) phases meet at a tricritical point. The inset illustrates the honeycomb lattice defined on a cylinder of width $W=4$, where the $x$, $y$, and $z$ bonds with bond-directional Kitaev interactions are marked in blue, green, and red, respectively. The in-plane $a$-axis, out-of-plane $c^*$-axis, and the angle $\theta$ of the applied field within the ${ac}^*$-plane are indicated by arrows.
} \label{Fig:Illus} \end{figure} On the theoretical side, an accurate microscopic model of $\alpha$-RuCl$_3$ is essential for understanding the compound, yet such a description long remained unsettled~\cite{Laurell2020}. Recently, some of the authors proposed a Kitaev-Heisenberg-Gamma-Gamma' ($K$-$J$-$\Gamma$-$\Gamma'$) model with dominant Kitaev interaction $K=-25$~meV, nearest-neighbor Heisenberg coupling $J=-0.1|K|$, and off-diagonal terms $\Gamma=0.3|K|$ and $\Gamma'=-0.02|K|$, which places the major experimental observations in a coherent picture and predicts QSL states induced by strong out-of-plane fields~\cite{Han2021}. This high-field QSL phase is separated from the zigzag antiferromagnetic and polarized phases by two quantum phase transitions (QPTs) at 35~T and 130~T, respectively, a prediction recently confirmed in pulsed high-field experiments~\cite{Zhou2022arXiv}. In this work, we extend the previous theoretical studies to the angle-field phase diagram of the realistic $K$-$J$-$\Gamma$-$\Gamma'$ model using the thermal tensor network approach~\cite{Chen.b+:2017:SETTN, Chen2018,Lih2019}. Through finite-temperature simulations of the specific heat $C_{\rm m}$, Gr\"uneisen parameter $\Gamma_B$, magnetic torque $\tau$, and magnetotropic susceptibility $k$, we find a high-field QSL phase residing between the zigzag antiferromagnetic and field-polarized phases. We determine the transition fields from their prominent thermodynamic responses and offer a concrete theoretical proposal for experimental probes of such spin-liquid transitions in $\alpha$-RuCl$_3$ and potentially also other Kitaev candidate magnets.
\section{Model and methods} The effective spin Hamiltonian of $\alpha$-RuCl$_3$~\cite{Han2021} considered in this work reads \begin{equation} \begin{split} H=& \sum_{\langle i,j\rangle_{\gamma}} [K S_i^{\gamma}S_j^{\gamma} + J\,\textbf{S}_i\cdot \textbf{S}_j + \Gamma(S_i^{\alpha}S_j^{\beta}+S_i^{\beta}S_j^{\alpha}) \\ & +\Gamma'(S_i^{\gamma}S_j^{\alpha}+S_i^{\gamma}S_j^{\beta} + S_i^{\alpha}S_j^{\gamma}+S_i^{\beta}S_j^{\gamma})], \end{split} \label{Eq:HamRuCl3} \end{equation} where the summation is over the nearest-neighbor (NN) bond $\langle i, j \rangle_\gamma$ with $\gamma = \{ x,y,z\}$ (see inset in Fig.~\ref{Fig:Illus}). $K$ denotes the bond-dependent Kitaev interactions, $J$ is the Heisenberg term, and $\Gamma$, $\Gamma'$ are the off-diagonal symmetric couplings with $\{ \alpha, \beta, \gamma\}$ being the three spin components under a cyclic permutation. The magnetic field $B$ is applied along the direction $[l\ m\ n]$ in the spin space $(S^x, S^y, S^z)$, i.e., the Zeeman term is $H_{\rm Zeeman} = \frac{B}{\sqrt{l^2+m^2+n^2}}[S^x, S^y, S^z]\cdot [l,m,n]^{T}$. Therefore, $H_{[11\bar2]}$ and $H_{[111]}$ correspond to the fields applied along the $a$- and $c^*$-axis, respectively. The angle between the applied field $H_{[11n]}$ and $c^*$-axis within the $ac^*$-plane can be represented by $\theta = \arccos(\frac{2+n}{\sqrt{6+3n^2}})\times \frac{180^{\circ}}{\pi}$, as depicted in the inset of Fig.~\ref{Fig:Illus}. 
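As a quick numerical check of this geometry, the tilt angle for any $[1\,1\,n]$ field direction follows directly from the expression above (a minimal sketch; the function name is ours, not from the paper):

```python
import numpy as np

def field_angle_deg(n):
    """Tilt angle theta (in degrees, measured from the c*-axis [111]) of a
    magnetic field applied along the [1 1 n] direction in spin space,
    i.e. theta = arccos((2 + n) / sqrt(6 + 3 n^2))."""
    return np.degrees(np.arccos((2 + n) / np.sqrt(6 + 3 * n**2)))

# [1 1 1] is the out-of-plane c*-axis, [1 1 -2] the in-plane a-axis
print(field_angle_deg(1))   # -> 0.0
print(field_angle_deg(-2))  # -> 90.0
```

For instance, $n = 0$ (field along $[110]$) corresponds to $\theta \simeq 35.3^{\circ}$.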
Simulations based on the $K$-$J$-$\Gamma$-$\Gamma'$ model reproduce well the low-temperature zigzag order~\cite{Sears2015, Banerjee2017,Do2017}, double-peaked specific heat~\cite{Kubota2015, Do2017,Widmann2019}, magnetic anisotropy~\cite{Sears2015,Kubota2015, Johnson2015,Weber2016,Banerjee2017,Lampen-Kelley2018,Sears2020}, magnetization curves~\cite{Kubota2015,Johnson2015,Zheng2017, Banerjee2018}, and the prominent M-star dynamical spin structure factors~\cite{Banerjee2017,Do2017} of $\alpha$-RuCl$_3$ (see a brief recapitulation in Appendix \ref{Sec:KJGGP}). Moreover, one remarkable prediction based on this realistic model is the presence of a high-field QSL driven by out-of-plane fields~\cite{Han2021}, whose nature is still under intensive investigation~\cite{Yu2022}. Below we employ the exponential tensor renormalization group (XTRG)~\cite{Chen2018,Lih2019} method and perform finite-temperature calculations on honeycomb-lattice cylinders with $N=W\times L\times 2$ sites in total, where the width is fixed at $W=4$ and the length $L$ ranges from 6 to 12, as illustrated in the inset of Fig.~\ref{Fig:Illus}. We retain up to $D = 400$ bond states with truncation errors $\epsilon \simeq 10^{-4}$, which guarantees well-converged results down to the lowest temperature $T/|K| \simeq 0.0085$ (c.f., Appendix \ref{Sec:Benchmark}). \begin{figure*}[t!] \includegraphics[angle=0,width=0.99\linewidth]{Fig2.pdf} \renewcommand{\figurename}{\textbf{Fig. }} \caption{(a,b) Contour plots of the specific heat in fields applied along $\theta=0.8^{\circ}$ and $5^{\circ}$, respectively. The solid dots marked on the $T=0$ axis denote the QPTs obtained from DMRG calculations \cite{Zhou2022arXiv}. The white dashed lines separating the ZZ, QSL, P, Kitaev fractional liquid (KFL), and paramagnetic (PM) phases are guides for the eye.
(c,d) show the isentropes $S/\ln{2}$ for the two $\theta$ angles, with the critical fields ($B_{c_1,c_2}$ and $B_c$) indicated by dots. These field-scan calculations are performed on the YC$4\times6\times2$ lattice. (e) The field-dependent Gr\"uneisen parameter $\Gamma_B$ at various $\theta$ angles with fixed initial temperatures $T_{\rm i} \simeq 0.015$, 0.012, and 0.01. The data are calculated via $\Gamma_B = 1/T (dT/dH)_S$ and are shifted vertically by 30 for clarity. For small $\theta$, e.g., $0.8^{\circ}$, $1.4^{\circ}$, and $2.8^{\circ}$, the two critical fields $B_{c1}$ and $B_{c2}$ indicated by the red and blue arrows denote the low- and high-field phase transitions, respectively, while only a single phase transition $B_c$, indicated by a black arrow, is observed for $\theta \geq 4^{\circ}$. The segment around each arrow gives the error-bar range of the determined transition fields. } \label{Fig:Gruneisen} \end{figure*} \section{Finite-temperature characteristics of quantum spin states and transitions} \subsection{Specific heat and isentropes} We start with conventional thermodynamic quantities, the specific heat $C_{\rm m}$ and magnetic entropy $S/\ln{2}$, shown in Fig.~\ref{Fig:Gruneisen}(a-d), where the contour plots map out the temperature-field phase diagram at various angles $\theta$. As shown in Fig.~\ref{Fig:Gruneisen}(a), when the field is applied along the $\theta=0.8^{\circ}$ direction, a double-peaked $C_{\rm m}$ structure is observed over a finite range of fields ($B/|K| \lesssim 0.22$), with the high-$T$ and low-$T$ peaks corresponding to two temperature scales $T_{\rm H}$ and $T_{\rm L}$: short-range spin correlations are established at $T_{\rm H}$, and long-range antiferromagnetic zigzag order forms below $T_{\rm L}$.
When the field $B/|K|$ is increased from 0 to 0.22, the low-$T$ $C_{\rm m}$ peak moves towards lower temperatures, indicating that the zigzag order is gradually suppressed by the magnetic field. On the other hand, once the field exceeds $B/|K|=0.22$ and remains below the polarization field, a low-$T$ peak emerges at the scale $T_{\rm L}^{''}$, below which there exists a field-induced QSL phase (c.f., Appendix \ref{Sec:KJGGP}). The corresponding isentropes for $\theta = 0.8^{\circ}$ are shown in Fig.~\ref{Fig:Gruneisen}(c). The adiabatic $T$-$B$ curves exhibit distinct changes when entering (a rapid increase of $T$) and leaving (a dip) the intermediate QSL regime. They clearly signal two QPTs, from the zigzag order to the QSL phase and then to the field-polarized phase, at {$B/|K| \simeq 0.22$ and 0.62}, respectively. The transition fields determined by density matrix renormalization group (DMRG) calculations on the same geometry \cite{Zhou2022arXiv} are denoted on the $T=0$ axis with solid dots and agree excellently with the present finite-temperature results. The situation changes dramatically when the field angle increases to $\theta = 5^{\circ}$. As shown in Fig.~\ref{Fig:Gruneisen}(b,d), the results suggest only one critical field between the zigzag ordered and field-polarized phases, with no intermediate phase. The behaviors of $C_{\rm m}$ and $S$ are quite similar to those of the in-plane-field case \cite{Han2021}, except that the transition field is higher. Thus the intermediate QSL phase depends very sensitively on the angle $\theta$. To accurately determine the phase boundaries in the angle-field phase diagram, we resort below to thermodynamic, experimentally accessible quantities and parameters.
\subsection{Gr\"uneisen parameter} The magnetic Gr\"uneisen parameter $\Gamma_B$ has been employed to accurately determine the critical in-plane fields in $\alpha$-RuCl$_3$ \cite{Bachus2021}, yet it poses challenges to many-body calculations. Here, with the state-of-the-art XTRG method, we are able to compute this thermodynamic ratio, with results shown in Fig.~\ref{Fig:Gruneisen}(e). The field-dependent $\Gamma_B = 1/T (dT/dH)_S$ is derived from the simulated isentropes starting from various initial temperatures (at a fixed field). A sign-change structure in $\Gamma_B$ is observed in Fig.~\ref{Fig:Gruneisen}(e) near the higher transition field $B_{c2}/|K| \simeq 0.62$ (indicated by the blue arrows), and it becomes more pronounced as the temperature is lowered, revealing a second-order phase transition from the QSL to the polarized phase. On the other hand, in the relatively low-field regime near {$B_{c1}/|K| \simeq 0.22$}, a peak in $\Gamma_B$ is observed (indicated by a red arrow) that corresponds to a first-order QPT between the ZZ and QSL phases. When the field is rotated within the ${ac}^*$-plane, the higher transition field shifts from $B/|K| \simeq 0.62$ to 0.06 as the angle $\theta$ changes from $0.8^{\circ}$ to ${20}^{\circ}$, reflecting that the polarization field is very sensitive to the angle $\theta$. The first-order QPT stays around $B_{c1}/|K| \simeq 0.23$ for small angles and merges with the second-order QPT at around $\theta \simeq 4^{\circ}$, where a tricritical point emerges. In Fig.~\ref{Fig:Illus}, we gather the transition fields estimated from $\Gamma_B$ and obtain the angle-field phase diagram. As also indicated in Fig.~\ref{Fig:Gruneisen}(e), the error bars of the phase boundaries are estimated from the difference in field strength between the dips and peaks in $\Gamma_B$.
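Extracting $\Gamma_B$ from discrete isentrope data $T(B)$ at fixed entropy amounts to a simple finite-difference derivative; a schematic sketch with a made-up toy isentrope (not our XTRG data):

```python
import numpy as np

def grueneisen_from_isentrope(B, T):
    """Gamma_B = (1/T) (dT/dB)_S from a sampled isentrope (B[i], T[i]),
    using central finite differences."""
    return np.gradient(T, B) / T

# toy isentrope with a minimum at B = Bc = 0.5: Gamma_B changes sign there,
# as expected at a critical field where the entropy is maximal
B = np.linspace(0.0, 1.0, 101)
T = 0.10 + (B - 0.5) ** 2
Gamma_B = grueneisen_from_isentrope(B, T)
```

The sign change of $\Gamma_B$ at the minimum of the adiabat is exactly the diagnostic used in the main text to locate $B_{c2}$.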
\subsection{Magnetic torque and magnetotropic susceptibility} Torque magnetometry constitutes a sensitive technique for probing magnetic anisotropies in quantum materials, and it has recently been used to study the intricate quantum spin states and transitions in $\alpha$-RuCl$_3$~\cite{Modic2018,Modic2021}. However, numerical results for these quantities are scarce, partly owing to the challenges they pose to many-body simulations. With thermal tensor networks, we can compute the magnetic torque and its derivative, the magnetotropic susceptibility, with high accuracy. As the free energy $F$ can be written as ${\rm d}F = -S{\rm d}T - P{\rm d}V - M{\rm d}B + \tau{\rm d}\theta$, where $\theta$ is the tilt angle of the magnetic field, the first derivative $\tau = \partial F/ \partial \theta$ represents the magnetic torque, which can be measured in $\alpha$-RuCl$_3$ experiments through $M\times B$~\cite{Modic2018}. Recently, the resonant torsion magnetometry technique has also been used to measure the magnetotropic susceptibility $k = \partial ^2F /\partial \theta^2$, the second derivative of the free energy~\cite{Modic2021, Modic2022arXiv}. Following this line, below we perform XTRG calculations of the $K$-$J$-$\Gamma$-$\Gamma'$ model for $\alpha$-RuCl$_3$, investigate $\tau$ and $k$ at various temperatures and fields, and predict salient features of the two QPTs in the magnetotropic quantities to be checked in future high-field measurements. \begin{figure}[h!] \includegraphics[angle=0,width=0.9\linewidth]{Fig3.pdf} \renewcommand{\figurename}{\textbf{Fig. }} \caption{ (a) The calculated magnetic torque $\tau$ (upper curves, left axis) and the absolute value of its derivative $|{\rm d}\tau/{\rm d}B|$ (lower curves, right axis) of the $\alpha$-RuCl$_3$ model with fields applied along $\theta = 0.4^{\circ}$ and $0.7^{\circ}$ at $T\simeq0.03$, 0.02 and 0.01.
Two transition fields $B_{c1}$ and $B_{c2}$ are identified from the peak positions of $|{\rm d}\tau/{\rm d}B|$, indicated by the red and blue arrows, respectively. (b) The static spin-structure factor $S(\textbf{k})$ (see the main text) for $\theta \simeq 0.8^{\circ}$ with $\textbf{k} =$ M and $\Gamma$ in the Brillouin zone (shown in the inset). The red arrow denotes a fast drop of $S({\rm M})$, indicating the suppression of the zigzag antiferromagnetic order at low temperatures, while the blue arrow corresponds to the field where both $S({\rm M})$ and $S(\Gamma)$ decrease towards zero. (c) The calculated magnetotropic susceptibility $k$ for $\theta \simeq 0.8^{\circ}$ at various low temperatures. The sharp dip corresponds to the second-order phase transition denoted by the blue arrow, while a kink occurs at around {$B/|K| = 0.19$}, signposted by the red arrow and zoomed in in the inset. } \label{Fig:Magnetotropic} \end{figure} In Fig.~\ref{Fig:Magnetotropic}(a), we show the magnetic torque $\tau(\theta/2) = (F_{\theta} - F_{0}) / \theta$ (where $F_{0}$ is the free energy at tilt angle $\theta = 0$) for $\theta=0.8^{\circ}$ and $1.4^{\circ}$, computed at low temperatures $T/|K|=0.03$, 0.02 and 0.01. At low fields, $B < B_{c1}$, we find a relatively small $\tau$, which is understandable as the torques from the two sublattices are expected to cancel in the antiferromagnetic ZZ phase, resulting in a nearly vanishing net torque. As the field increases further, the calculated $\tau$ is rapidly enhanced as the ZZ order is suppressed in the intermediate QSL regime, and it eventually drops back to small values at high fields as the system enters the polarized phase, since the induced moments are then nearly parallel to the field.
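Given free energies $F(\theta_i)$ on a grid of tilt angles at fixed field strength and temperature, both $\tau = \partial F/\partial\theta$ and $k = \partial^2 F/\partial\theta^2$ reduce to finite differences; a schematic sketch with a toy twofold-anisotropic free energy (for illustration only, not the XTRG data):

```python
import numpy as np

def torque_and_magnetotropic(F, theta):
    """tau = dF/dtheta and k = d^2F/dtheta^2 from free energies sampled on
    an angular grid, via central finite differences."""
    tau = np.gradient(F, theta)
    k = np.gradient(tau, theta)
    return tau, k

# toy anisotropy F(theta) = -cos(2 theta), for which the exact results are
# tau = 2 sin(2 theta) and k = 4 cos(2 theta)
theta = np.linspace(0.0, np.pi, 2001)
F = -np.cos(2 * theta)
tau, k = torque_and_magnetotropic(F, theta)
```

The same two-point differencing underlies the expression $\tau(\theta/2) = (F_\theta - F_0)/\theta$ used below for the model data.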
The transition fields can thus be determined from where the torque changes most rapidly, i.e., from the derivative ${\rm d}\tau/{\rm d} B$ shown in Fig.~\ref{Fig:Magnetotropic}(a). The red and blue arrows indicate the transition fields from the ZZ to the QSL phase and from the QSL to the polarized phase, respectively. The behavior of the magnetic torque is also consistent with the static spin-structure factor \begin{equation} {S}(\textbf{k})=\sum_{j\in {N}, j \neq i_0} e^{ i \textbf{k} \cdot (\textbf{r}_j-\textbf{r}_{i_0})} (\langle S_{i_0} S_j\rangle - \langle S_{i_0}\rangle\langle S_{j}\rangle), \end{equation} where $i_0$ denotes a central reference site; the results are computed at relatively low temperatures $T/|K| \simeq 0.02$ and 0.01. As shown in Fig. \ref{Fig:Magnetotropic}(b), the zigzag spin correlations at small fields $B < B_{c1}$ are evidenced by the large $S({\rm M})$ value [with the $\rm M$ and $\Gamma$ points indicated in the inset of Fig.~\ref{Fig:Magnetotropic}(b)], which becomes suppressed in the intermediate QSL phase. The enhancement of $S(\Gamma)$ near $B_{c1}$ signals the buildup of uniform magnetization, where the torque $\tau$ also increases rapidly in Fig.~\ref{Fig:Magnetotropic}(a). When the system enters the spin-polarized phase at $B_{c2}$, the structure-factor peaks at the M and $\Gamma$ points both vanish as expected [Fig.~\ref{Fig:Magnetotropic}(b)]. The magnetotropic susceptibility $k$ can also be used to sensitively probe the two quantum phase transitions. In Fig.~\ref{Fig:Magnetotropic}(c), we plot the results for $\theta = 0.8^{\circ}$ at $T/|K|=0.03$, 0.02, and 0.01. The quantity $k$, the second derivative of the free energy with respect to the field orientation $\theta$, is intimately related to the susceptibility $\chi$~\cite{Modic2022arXiv} and exhibits discontinuities at second-order phase transitions.
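The lattice Fourier sum defining $S(\textbf{k})$ above is straightforward to evaluate once the connected correlations and site positions are known; a minimal numpy sketch with made-up N\'eel-like correlation data (for illustration only):

```python
import numpy as np

def structure_factor(k, r, r0, corr):
    """S(k) = sum_j exp(i k.(r_j - r_0)) * C_j, with C_j the connected
    correlation <S_0 S_j> - <S_0><S_j> about the reference site at r0.
    `r` is an (N, d) array of site positions (reference site excluded)."""
    phase = np.exp(1j * (r - r0) @ k)
    return (phase * corr).sum().real

# toy 1D chain with staggered (Neel-like) correlations C_j = 0.25 (-1)^j:
# the sum peaks at k = pi and cancels at k = 0
r = np.column_stack([np.arange(1, 11, dtype=float), np.zeros(10)])
corr = 0.25 * (-1.0) ** np.arange(1, 11)
S_pi = structure_factor(np.array([np.pi, 0.0]), r, np.zeros(2), corr)  # -> 2.5
S_0 = structure_factor(np.array([0.0, 0.0]), r, np.zeros(2), corr)     # -> 0.0
```

Replacing the toy positions and correlations by the honeycomb-cylinder data evaluated at the M and $\Gamma$ points yields the curves in Fig.~3(b).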
In Fig.~\ref{Fig:Magnetotropic}(c), the sharp dip at {$B\simeq B_{c2}$}, denoted by the blue arrow, corresponds to a second-order transition, while the low-field feature, as emphasized in the inset, shows a kink at $B \simeq B_{c1}$ that corresponds to a first-order phase transition. From the magnetotropic quantities $\tau$ and $k$, we determine the transition fields at $\theta=0.7^{\circ}$ and $0.8^{\circ}$ and show them also in Fig.~\ref{Fig:Illus}. Besides, we have also computed the matrix product operator (MPO) entanglement of the system, which provides accurate estimates of the transition fields in accordance with the results above (see Appendix \ref{Sec:SE}). With these finite-temperature simulations, we show that high-field torque magnetometry measurements can be used to sensitively detect the two QPTs associated with the intermediate QSL phase in future experimental studies. \section{Gapless nature of the high-field QSL identified from thermodynamics} \label{Sec:CmS} As indicated by the dome-like feature in Fig.~\ref{Fig:Gruneisen}(a,c), there exists an intermediate QSL regime below the emergent low-temperature scale $T_{\rm L}^{''}$. To further reveal the nature of this intermediate phase, we push the calculations of $C_{\rm m}$ and $S/\ln{2}$ to longer YC4 cylinders with $L$ up to 12. In Fig.~\ref{Fig:CmS}(a), we find that the high- and low-temperature scales $T_{\rm H}$ and $T_{\rm L}^{''}$ change only slightly as we elongate the system from $L=6$ to $L=12$. The height of the peak at $T_{\rm L}^{''}$ is lowered, while the $C_{\rm m}$ values for $T< T_{\rm L}^{''}$ are actually enhanced, giving rise to a shoulder-like structure for the largest system size $L=12$, as indicated by the grey arrow below $T/|K| \simeq 0.03$.
The corresponding entropy curves are shown in Fig.~\ref{Fig:CmS}(b), where we see a considerable amount of low-temperature entropy below $T/|K| \simeq 0.03$, indicating strong spin fluctuations and a large density of states of spin excitations. In the inset of Fig.~\ref{Fig:CmS}(b), we subtract the results of two YC4 lattices with adjacent lengths; e.g., $[8-6]$ represents the result obtained by subtracting the YC$4\times 6 \times2$ data from the YC$4\times 8 \times2$ data. The subtracted entropy reflects the bulk properties of the central columns and suffers less from boundary effects, and a power-law behavior $S \sim T^{\alpha}$ can be clearly seen, which indicates that the high-field QSL has gapless low-energy excitations and that considerable entropy is released only below the temperature $T \simeq 0.03 |K|$. Overall, the thermodynamic results along the tilted angle point to the conclusion of a gapless QSL, consistent with previous DMRG results (restricted to out-of-plane fields)~\cite{Han2021}. \begin{figure}[t!] \includegraphics[angle=0,width=0.88\linewidth]{Fig4.pdf} \renewcommand{\figurename}{\textbf{Fig.}} \caption{ (a) The computed specific heat $C_{\rm m}$ on YC$4\times L\times 2$ geometries with lengths $L$ ranging from 6 to 12 under a field $B/|K| = 0.38$ applied along $\theta=0.8^{\circ}$ away from the $c^*$-axis. The high- and low-temperature scales $T_{\rm H}$ and $T_{\rm L}^{''}$ are indicated. The grey arrow stresses the enhancement of $C_{\rm m}$ at very low temperatures $T/|K|<0.03$ as the system length $L$ increases. (b) The corresponding thermal entropy $S/\ln{2}$, with the subtracted data reflecting the low-temperature bulk properties shown in the inset. The dashed line at $T/|K|<0.02$ represents a power-law fit with $\alpha \simeq 1.5$, serving as a guide for the eye.
} \label{Fig:CmS} \end{figure} \section{Conclusions and discussions} In the present work, we have calculated experimentally relevant thermodynamic properties, i.e., the magnetic specific heat, the magnetocaloric effect characterized by the Gr\"uneisen parameter, the magnetic torque, and the magnetotropic susceptibility, of the primary candidate Kitaev magnet $\alpha$-RuCl$_3$, based on the realistic $K$-$J$-$\Gamma$-$\Gamma'$ model and the highly accurate XTRG method. Recently, a high-field magnetization measurement on $\alpha$-RuCl$_3$ up to 102~T has revealed two phase transitions enclosing an intermediate phase \cite{Zhou2022arXiv}, in agreement with the prediction based on the model calculations~\cite{Han2021}. Here we have calculated further thermodynamic properties that provide a comprehensive angle-field phase diagram and a useful guide for future experimental studies. For $\theta<4^{\circ}$, we find two field-induced quantum phase transitions evidenced by various quantities. (i) The diverging Gr\"uneisen parameter $\Gamma_B$ changes sign at the high-field transition point $B_{c2}$, suggesting a second-order phase transition; exactly at the same field, the magnetotropic susceptibility $k$ features a sharp dip. (ii) The hump in $\Gamma_B$ at around $B_{c1}$ reflects a quantum phase transition, possibly of first order; a peak in $|{\rm d}\tau/{\rm d}B|$ and a kink in $k$ point to the same conclusion. On the other hand, for large $\theta \gtrsim 4^{\circ}$, only a single phase transition from the antiferromagnetic to the polarized phase is found, indicating the absence of an intermediate QSL phase.
Moreover, it is noteworthy that besides the conventional candidate materials with Kitaev interactions, e.g., X$_2$IrO$_3$ (X = Na, Li, Cu)~\cite{Singh2010,Chaloupka2010,Singh2012, Katukuri2014,Yamaji2014,Winter2016,Mehlawat2017,Abramchuk2017,Choi2019}, X$_3$LiIr$_2$O$_6$ (X = Ag, Cu, H) with Ir$^{4+}$ \cite{Todorova2011, Roudebush2016,Kitagawa2018}, and XR$_3$ (X = Ru, Yb, Cr; R = Cl, I, Br) \cite{Winter2016, Winter2017NC, Wu2018, Cookmeyer2018, Kim2016, Suzuki2019, Ran2017, Wang2017, Ozel2019, Banerjee2016, HSKim2015, Danrui2022, Imai2021, Hao2021, Xing2020, Sala2020, McGuire2015}, some newly reported Kitaev families, such as the rare-earth chalcohalides REChX (RE = rare earth; Ch = O, S, Se, Te; X = F, Cl, Br, I)~\cite{Zhang2021CPL, Zhang2022PRR} and the cobalt honeycomb oxides Na$_2$Co$_2$TeO$_6$ \cite{Lin2021NC, Yao2022PRL}, Na$_3$Co$_2$SbO$_6$~\cite{Liu2020PRL}, and BaCo$_2$(AsO$_4$)$_2$~\cite{Zhong2020SA}, also offer platforms exhibiting highly anisotropic, bond-dependent exchange couplings. It would be worthwhile to explore their field-induced quantum spin states along the out-of-plane direction and at generally tilted angles, and the present study of the angle-field phase diagram of the $K$-$J$-$\Gamma$-$\Gamma'$ model provides a theoretical guide for experimental explorations of these intriguing quantum magnets. \begin{acknowledgments} {This work was supported by the National Natural Science Foundation of China (Grant Nos. 12222412, 11974036, 11834014, 12047503, and 12174386), Strategic Priority Research Program of CAS (Grant No. XDB28000000), National Key R$\&$D Program of China (Grant No. 2018YFA0305800), CAS Project for Young Scientists in Basic Research (Grant No.~YSBR-057), and China National Postdoctoral Program for Innovative Talents (Grant No. BX20220291). H.L. and W.L.
are indebted to Xu-Guang Zhou, Shun-Yao Yu, Shou-Shu Gong, and Zheng-Xin Liu for stimulating discussions, and thank the HPC-ITP for the technical support and generous allocation of CPU time.}\\ \end{acknowledgments}
\section{Introduction} One of the central quantities associated with a Riemannian metric is the Ricci tensor. In Einstein's field equations, the energy-momentum tensor yields the Ricci tensor, and this determines the metric of space-time. In Riemannian geometry, the importance of the Ricci tensor came to the fore in particular through the work of Gromov \cite{Gromov1981}. The Ricci flow, introduced by Hamilton \cite{Hamilton1986}, culminated in the work of Perelman \cite{Perelman2002,Perelman2003}, which solved the Poincar\'e and the more general Geometrization Conjecture for three-dimensional manifolds. On the other hand, there have been important developments extending the notion of Ricci curvature axiomatically to metric spaces more general than Riemannian manifolds \cite{Bakry2014,Lott2009,Sturm2006}. More precisely, one identifies metric properties on a Riemannian manifold that can be formulated in terms of local quantities such as the growth of volumes of distance balls, transportation distances between balls, divergence of geodesics, and meeting probabilities of coupled random walks. On Riemannian manifolds such local quantities are implied by, or even equivalent to, Ricci curvature inequalities. Moreover, when such metric properties are satisfied on some metric space, one says that the space satisfies the corresponding generalized Ricci curvature inequality. This research paradigm has been remarkably successful, and the geometry of metric spaces with such inequalities is currently a very active and fertile field of mathematical research (see for instance \cite{Bauer2017}). Of course, on Riemannian manifolds various such properties are equivalent to Ricci curvature inequalities and therefore also to each other. However, when passing to a discrete, metric setting, each approach captures different aspects of the classical Ricci curvature, and thus the various discretizations need no longer be equivalent.
One such approach to Ricci curvature inequalities is Ollivier's \cite{Ollivier2007,Ollivier2009,Ollivier2010,Ollivier2013} construction on metric spaces. There is also an older line of research \cite{Stone1976} that seeks discretizations of Ricci curvature on graphs and more general objects with a combinatorial structure. Here, one has exact quantities rather than only inequalities as in the aforementioned research. One elegant approach, by Chow and Luo \cite{Chow2003}, is based on circle packings and has lent itself to many practical applications in graphics, medical imaging and communication networks \cite{Jin2007,Gu2013,Gao2014}. On the other hand, Ollivier's \cite{Ollivier2007,Ollivier2009,Ollivier2010,Ollivier2013} discretization has proven suitable for modelling complex networks, as well as yielding interesting theoretical results with potential for future applications \cite{Lin2010,Lin2011,Bauer2012,Jost2014,Loisel2014,Ni2015,Sandhu2015a}. Yet another approach to the discretization of Ricci curvature on polyhedral complexes, and more generally $CW$ complexes, is due to Forman \cite{Forman2003}. In recent work \cite{Sreejith2016,Sreejith2016directed,Sreejith2017,Weber2017,Saucan2018}, we have introduced Forman's \cite{Forman2003} discretization into the realm of graphs and have systematically explored the Forman-Ricci curvature in complex networks. A crucial advantage of Forman-Ricci curvature is that, while it also captures important geometric properties of networks, it is far simpler to evaluate on large networks than Ollivier-Ricci curvature \cite{Sreejith2016,Saucan2018}. In this contribution, we perform an extensive empirical comparison of Forman-Ricci curvature and Ollivier-Ricci curvature in complex networks. In addition, we also analyze in complex networks the augmented Forman-Ricci curvature, which accounts for two-dimensional simplicial complexes arising in graphs.
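To make Ollivier's construction concrete on an unweighted graph: with the common non-lazy convention of a uniform probability measure $\mu_v$ on the neighbours of each vertex $v$ (assumed here; applications often add a laziness parameter and weights), the curvature of an edge $(x,y)$ is $\kappa(x,y) = 1 - W_1(\mu_x,\mu_y)/d(x,y)$, where $W_1$ is the optimal-transport cost with shortest-path distances. A minimal sketch:

```python
from collections import deque
import numpy as np
from scipy.optimize import linprog

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from `source` in an unweighted graph,
    given as a dict mapping each vertex to its set of neighbours."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ollivier_ricci(adj, x, y):
    """kappa(x,y) = 1 - W1(mu_x, mu_y)/d(x,y), with mu_v uniform on N(v);
    W1 is solved as a small transportation linear program."""
    Nx, Ny = sorted(adj[x]), sorted(adj[y])
    dists = {u: bfs_distances(adj, u) for u in Nx}
    cost = np.array([[dists[u][v] for v in Ny] for u in Nx], float)
    n, m = cost.shape
    # coupling pi >= 0 with row sums mu_x and column sums mu_y
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([np.full(n, 1 / n), np.full(m, 1 / m)])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return 1.0 - res.fun / bfs_distances(adj, x)[y]

# triangle K3 (positively curved) vs. 6-cycle (flat)
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

With this convention every edge of $K_3$ has $\kappa = 1/2$, while every edge of the 6-cycle has $\kappa = 0$, matching the geometric intuition of positive versus flat curvature.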
We find that the Forman-Ricci curvature, especially the augmented version, is highly correlated with Ollivier-Ricci curvature in many model and real networks. This renders Forman-Ricci curvature a preferred tool for the analysis of very large networks with various practical applications. Although, in this contribution, we show that Forman-Ricci curvature is highly correlated with Ollivier-Ricci curvature in many networks, one should not construe from this observation that we introduce Forman-Ricci curvature as a substitute (and certainly not as a ``proxy'' \cite{Pal2018}) for Ollivier-Ricci curvature. As mentioned above, and as we shall further explain in the following section, the two discretizations of Ricci curvature capture quite different aspects of network behavior. Indeed, the specific definitions of both Ollivier's and Forman's discretizations of Ricci curvature prescribe some of their respective essential properties that have important consequences in certain significant applications. Therefore, we shall detail these definitions and not restrict ourselves to the mere technical defining formulas. Given that networks permeate almost every field of research \cite{Wasserman1994,Watts1998,Barabasi1999,Albert2002,Feng2007,Newman2010,Fortunato2010}, an important challenge has been to unravel the architecture of complex networks. In particular, the development of geometric tools \cite{Eckmann2002,Ollivier2009,Lin2010,Lin2011,Bauer2012,Jost2014,Wu2015,Ni2015,Sandhu2015a,Sreejith2016,Bianconi2017}, mainly curvature, allows us to gain deep insights into the structure, dynamics and evolution of networks. It is in the very nature of the discretization of differential-geometric properties that each such discrete notion sheds a different light on the studied object, for example, a network.
In particular, Ollivier's curvature is related to clustering and network coherence via the distribution of the eigenvalues of the graph Laplacian, giving insights into the global and local structure of networks. In contrast, Forman's curvature captures the geodesic dispersal property and also gives information on the algebraic-topological structure of the network. Furthermore, Forman's curvature is simple to compute and can easily be extended to analyze both directed networks and hyper-networks \cite{Sreejith2016,Sreejith2016directed,Sreejith2017,Weber2017}. Given the contrast between the two discretizations of Ricci curvature at hand, the empirically observed correlation in many networks is quite surprising and encouraging. Moreover, both types of curvature admit natural Ricci curvature flows \cite{Gu2013,Weber2017} that enable the study of the long-time evolution and prediction of networks, and the observed correlation further increases the relevance and importance of future investigations of discrete Ricci flows for a better understanding of the structure and evolution of complex networks. Note that in Riemannian geometry, the Ricci tensor encodes all the essential properties of a Riemannian metric. Similarly, it is an emerging principle that Ricci curvature, because it evaluates edges instead of vertices, also captures the basic structural aspects of a network. Both Ollivier-Ricci curvature and Forman-Ricci curvature are {\em edge-based} measures which assign a number to each edge of a (possibly weighted and directed) network that encodes local geometric properties in the vicinity of that edge. We highlight that {\em edges} are what networks are made of, as the {\em vertices} alone do not yet constitute a network. \begin{figure} \includegraphics[width=.7\columnwidth]{Figure-1.pdf} \caption{{\bf (a)} The geometric interpretation of Ricci curvature.
Ricci curvature measures the growth of volumes, more precisely, the growth of $(n-1)$-dimensional solid angles in the direction of the vector ${\bf v}$. It also measures the dispersion rate of the family of geodesics with the same initial point that are contained within the given solid angle. {\bf (b)-(d)} The interpretation of Ollivier-Ricci curvature. {\bf (b)} Given two close points $x$ and $y$ in a Riemannian manifold of dimension $n$, defining a tangent vector $\bar{v}_{xy}$, one can consider the parallel transport in the direction $\bar{v}_{xy}$. Then points on an infinitesimal sphere $S_\varepsilon(x)$ centered at $x$ are transported to points on the corresponding sphere $S_\varepsilon(y)$ by a distance equal to $d(x,y)\left(1 - \frac{\varepsilon^2}{2n}{\rm Ric}(\bar{v}_{xy})\right)$, on the average. {\bf (c)} In Riemannian manifolds of positive (respectively, negative) Ricci curvature, balls and spheres are closer (respectively, farther) than their centers. {\bf (d)} To generalize this idea to metric measure spaces, one has to replace the (volumes of) spheres or balls by measures $m_x$, $m_y$. Points will be transported by a distance equal to $(1 - \kappa)d(x,y)$, on the average, where $\kappa = \kappa(x,y)$ represents the coarse (Ollivier) curvature along the geodesic segment $xy$. This illustration is an adaptation of the original figure \cite{Ollivier2013}. {\bf (e)-(f)} Forman-Ricci curvature of an edge $e$ connecting the vertices $v_1$ and $v_2$ and contributions from edges parallel to the edge $e$ under consideration. An edge is said to be parallel to a given edge $e$ if it has in common with $e$ either a \textit{child} (i.e., a lower dimensional face) or a \textit{parent} (i.e., a higher dimensional face), but not both simultaneously.
In part {\bf (e)}, all the edges $e_{11}, \ldots, e_{15}$ are parallel to $e$ because they share the vertex $v_1$, while the edges $e_{21}, \ldots, e_{25}$ are parallel to $e$ because they share the vertex $v_2$. In contrast, in part {\bf (f)}, the edges $e_{11}, e_{21}, e_{15}, e_{25}$ are no longer parallel to the edge $e$, because they have a common \textit{child} with $e$ (namely, $v_1$ or $v_2$) and a common parent with $e$ (namely, $f_1$ or $f_2$). Consequently, the edges $e_{11}, e_{21}, e_{15}, e_{25}$ do not contribute to the computation of the Augmented Forman-Ricci curvature of the edge $e$, which also accounts for the two-dimensional simplicial complexes $f_1$ and $f_2$.} \label{fig:ricciinterpret} \end{figure} \section{Discrete Ricci curvatures on networks} We briefly present here the geometric meaning of the notion of Ricci curvature, as well as the two discretizations considered herein. For other discretizations of this type of curvature and their applications, see for instance \cite{Gu2013}. \subsection{Ricci curvature} In Riemannian geometry, curvature measures the deviation of the manifold from being locally Euclidean. Ricci curvature quantifies that deviation for tangent directions. It controls the average dispersion of geodesics around a given direction. It also controls the growth of the volume of distance balls and spheres. In fact, these two properties are related, as can be seen from the following formula \cite{Heintze1978}: \begin{equation} \label{eq:Ricci} {\rm Vol}_\alpha(\varepsilon) = d\alpha\,\varepsilon^{n-1}\left(1 - \frac{{\rm Ric}({\bf v})}{3}\varepsilon^2 + o(\varepsilon^2)\right)\,. \end{equation} Here, $n$ is the dimension of the Riemannian manifold in question, and ${\rm Vol}_\alpha(\varepsilon)$ is the $(n-1)$-volume generated within an $n$-solid angle $d\alpha$ by geodesics of length $\varepsilon$ in the direction of the vector ${\bf v}$.
Thus, Ricci curvature controls both the divergence of geodesics and volume growth (Figure \ref{fig:ricciinterpret}(a)). In dimension $n=2$, Ricci curvature reduces to the classical {\it Gauss curvature}, and can therefore be easily visualized. As we shall see, the two discretizations of Ricci curvature by Ollivier and Forman considered here for networks capture different properties of the classical (smooth) notion. Forman's definition expresses dispersal (diffusion), while Ollivier's definition compares the averaged distance between balls to the distance between their centers. Thus, the two definitions lead to different generalizations of classical results regarding Ricci curvature. In this respect, Ollivier's version seems to be advantageous, since, in addition to certain geometric properties, analytic inequalities also hold, whereas Forman's version encapsulates mainly the topology of the underlying space. Nevertheless, in our specific context of complex networks, as we shall show in the sequel, the definitions by Ollivier and Forman are highly correlated in many networks. Therefore, for the empirical analysis of large networks, one can, at least to a first approximation, also make inferences from Forman's definition about the properties encoded by Ollivier's definition. For instance, Ollivier's curvature is, by its very definition, excellently suited to capture diffusion and stochastic properties of a given network. Unfortunately, the computation of Ollivier-Ricci curvature can be prohibitively expensive for many large complex networks. In contrast, due to its simple, combinatorial formula, Forman-Ricci curvature is easy and fast to compute \cite{Sreejith2016}. Given the basic equivalence, at least on a statistical level, between these two discretizations, one can therefore determine, at least to a first approximation, many properties encapsulated by Ollivier's curvature via simple computations with Forman's curvature.
However, for a finer analysis, each of the two discrete Ricci curvatures should be employed in the context that best befits the geometrical phenomenology it encapsulates. \subsection{Ollivier-Ricci curvature} Ollivier's approach \cite{Ollivier2007,Ollivier2009,Ollivier2010,Ollivier2013} interprets Eq. \ref{eq:Ricci} as follows: If a small ball $B_x$ of radius $\varepsilon$ and centered at $x$ is mapped, via parallel transport \cite{Jost2017}, to a corresponding ball $B_y$ centered at $y$, then the average distance between points on $B_x$ and their corresponding points on $B_y$ is: \begin{equation} \label{eq:Ollivier1} \delta\left(1 - \frac{\varepsilon^2}{2(n+2)}{\rm Ric}(v) + O(\varepsilon^3 + \varepsilon^2\delta) \right)\,, \end{equation} where $d(x,y) = \delta$, and where $\varepsilon, \delta \rightarrow 0$. Thus, we can {\em synthetically} characterize Ollivier-Ricci curvature \cite{Ollivier2013} by the following phrase: ``In positive (negative) curvature, balls are closer (farther) than their centers are''. Balls are given by their volume measures, and in fact, one may define a transportation distance for any two (normalized) measures. In this sense, Ollivier's notion compares the distance between the centers of two balls with the transportation distance between their measures (Figure \ref{fig:ricciinterpret}(b)-(d)). For the distance between the centers one takes (of course) the given metric of the underlying space, i.e., manifold, mesh, network, etc. As for the distance between measures, there is a natural choice, the Wasserstein transportation metric $W_1$ \cite{Vaserstein1969}. More formally, Ollivier's curvature is defined as: \begin{equation} \label{eq:Ollivier2} \kappa(x,y) = 1 - \frac{W_1(m_x,m_y)}{d(x,y)}\,; \end{equation} where $m_x, m_y$ represent the measures of the balls around $x$ and $y$, respectively.
Here, since the measure $m$ associated to the discrete set of vertices of a graph (network) is itself a discrete measure, the Wasserstein distance $W_1(m_x, m_y)$, i.e., the transportation distance between the two probability measures $m_x$ and $m_y$, is given by \begin{equation} \label{eq:Ollivier3} W_1(m_x, m_y)=\inf_{\mu_{x,y}\in \Pi(m_x, m_y)}\sum_{(x',y')\in V\times V}d(x', y')\mu_{x,y}(x', y'), \end{equation} with $\Pi(m_x, m_y)$ being the set of probability measures $\mu_{x,y}$ that satisfy: \begin{equation} \label{eq:Ollivier4} \sum_{y'\in V}\mu_{x,y}(x', y')=m_x(x'), \,\,\sum_{x'\in V}\mu_{x,y}(x', y')=m_y(y'). \end{equation} Measures satisfying Eq. \ref{eq:Ollivier4} start with the measure $m_x$ and end up with $m_y$, and represent all the possibilities of transporting the mass (measure) $m_x$ to the measure $m_y$, by \textit{disassembling} it, transporting it along all possible paths, and \textit{reassembling} it as $m_y$. $W_1(m_x, m_y)$ is the minimal cost (measured in terms of distances) to transport the mass of $m_x$ to that of $m_y$. Note that the distance $d$ in Eq. \ref{eq:Ollivier3} above can be any useful or expressive graph metric. However, in practice, when considering the Wasserstein metric and Ollivier-Ricci curvature for unweighted networks, the {\it combinatorial} metric is naturally considered. In the Riemannian setting, Ollivier's definition reduces to the classical one. More precisely, if $M^n$ is a Riemannian manifold, with its natural measure $d{\rm Vol}$, then for $d(x,y)$ small enough and $v$ the unit tangent vector at $x$ on the geodesic $\overline{xy}$, \begin{equation} \kappa(x,y) = \frac{\varepsilon^2}{2(n+2)}{\rm Ric}(v) + O(\varepsilon^3 + \varepsilon^2d(x,y))\,. \end{equation} The Wasserstein distance \cite{Vaserstein1969} between two vertices in a network depends on the triangles, quadrangles and pentagons that they are contained in (see for instance \cite{Jost2014,Bhattacharya2015}).
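For small unweighted graphs, the transportation problem of Eq. \ref{eq:Ollivier3} can be solved directly. The following minimal Python sketch (not the implementation used in this work; all names are illustrative) computes $\kappa(x,y)$ with the combinatorial metric and with $m_x$ taken uniform on the neighbors of $x$. For two uniform measures with equally many atoms, an optimal transport plan can be taken to be a bijection, so a brute-force search over matchings suffices when the two endpoints have equal degree.

```python
from itertools import permutations
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (combinatorial metric) distances from source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ollivier_ricci(adj, x, y):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y) for m_x uniform on the
    neighbours of x. Brute-force matching; needs deg(x) == deg(y)."""
    nbrs_x, nbrs_y = sorted(adj[x]), sorted(adj[y])
    assert len(nbrs_x) == len(nbrs_y), "matching trick needs equal degrees"
    dist = {a: bfs_distances(adj, a) for a in nbrs_x}
    k = len(nbrs_x)
    # For uniform measures with k atoms each, an optimal plan is a bijection.
    w1 = min(sum(dist[a][b] for a, b in zip(nbrs_x, perm)) / k
             for perm in permutations(nbrs_y))
    return 1.0 - w1 / bfs_distances(adj, x)[y]

def graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# Complete graph K4: kappa = (n-2)/(n-1) = 2/3, approaching 1 for large n.
k4 = graph([(i, j) for i in range(4) for j in range(i + 1, 4)])
print(ollivier_ricci(k4, 0, 1))   # prints 0.666...

# 5-cycle: the two optimal matchings give W1 = 1, hence kappa = 0.
c5 = graph([(i, (i + 1) % 5) for i in range(5)])
print(ollivier_ricci(c5, 0, 1))   # prints 0.0
```

For general measures (e.g., lazy walks with mass left at the center, or unequal degrees), $W_1$ must instead be obtained from the full linear program of Eqs. \ref{eq:Ollivier3}-\ref{eq:Ollivier4}.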
The Wasserstein distance can also be computed in terms of random walks on a graph, where one has the choice between the {\em lazy} \cite{Ni2015} and the {\em non-lazy} \cite{Jost2014} random walk. While the two variants are clearly equivalent from a theoretical viewpoint, the choice may lead to differences in implementation. For our computations, we have used the lazy random walk option of the open-source implementation of Ollivier-Ricci curvature in the SageMath software (http://www.sagemath.org/), originally developed by P. Romon and improved by E. Madsen. While Ollivier-Ricci curvature is essentially defined on edges, one can define the Ollivier-Ricci curvature of a vertex \cite{Sandhu2015a} as the sum of the Ollivier-Ricci curvatures of the edges incident on that vertex in the network, and this is analogous to scalar curvature in Riemannian geometry \cite{Jost2017}. \subsection{Forman-Ricci curvature} Forman's definition is conceptually quite different from Ollivier's definition. To begin with, Forman's definition works in the framework of \textit{weighted $CW$ cell complexes}, rather than in that of Markov chains and metric measure spaces underlying Ollivier's definition. Weighted $CW$ cell complexes are of fundamental importance in topology and include both polygonal meshes and weighted graphs. In the setting of weighted $CW$ cell complexes, Forman's definition develops an abstract version of a classical formula in differential geometry or geometric analysis, the so-called \textit{Bochner-Weitzenb\"{o}ck formula} (see for instance \cite{Jost2017}), which relates curvature to the classical (Riemannian) Laplace operator. Forman \cite{Forman2003} derived an analogue of the Bochner-Weitzenb\"{o}ck formula that holds in the setting of $CW$ complexes. In the 1-dimensional case, i.e.
of graphs or networks, it takes the following form \cite{Sreejith2016}: \begin{equation} \label{eq:Forman1} {\rm F}(e) = w_e \left( \frac{w_{v_1}}{w_e} + \frac{w_{v_2}}{w_e} - \sum_{e_{v_1}\ \sim\ e,\ e_{v_2}\ \sim\ e} \left[\frac{w_{v_1}}{\sqrt{w_e w_{e_{v_1} }}} + \frac{w_{v_2}}{\sqrt{w_e w_{e_{v_2} }}} \right] \right)\, \end{equation} where $e$ denotes the edge under consideration between two nodes $v_1$ and $v_2$, $w_e$ denotes the weight of the edge $e$, $w_{v_1}$ and $w_{v_2}$ denote the weights associated with the vertices $v_1$ and $v_2$, respectively, and $e_{v_1} \sim e$ and $e_{v_2} \sim e$ denote the sets of edges incident on the vertices $v_1$ and $v_2$, respectively, excluding the edge $e$ itself (Figure \ref{fig:ricciinterpret}(e)). Since edges in the discrete setting of networks naturally correspond to vectors or directions in the smooth context, the above formula represents, in view of the classical Bochner-Weitzenb\"{o}ck formula, a discretization of Ricci curvature. To gain further intuition about this discretization of Ricci curvature in its generality, the reader is referred to Forman's original work \cite{Forman2003}, and to our previous papers \cite{Sreejith2016,Saucan2018} for more insight on its adaptation to networks. In the combinatorial case, i.e., for $w_e = w_v = 1, \; e \in E(G), v \in V(G)$, where $E(G)$ and $V(G)$ represent the set of edges and vertices, respectively, in graph $G$, the above formula (Eq. \ref{eq:Forman1}) reduces to the quite simple and intuitive expression: \begin{equation} \label{eq:Forman2} {\rm F} (e) = 4 - \sum_{v \sim e} \deg(v) \; \end{equation} where $v \sim e$ denotes the two vertices anchoring the edge $e$.
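As an illustration, the combinatorial formula of Eq. \ref{eq:Forman2} can be evaluated in a few lines of Python; the sketch below (with the graph stored as adjacency sets, and all names illustrative) makes explicit how simple the computation is.

```python
def forman_ricci(adj, e):
    """Combinatorial Forman-Ricci curvature of an unweighted edge
    e = (v1, v2):  F(e) = 4 - deg(v1) - deg(v2)."""
    v1, v2 = e
    return 4 - len(adj[v1]) - len(adj[v2])

# A small toy graph: a triangle {0, 1, 2} with a pendant vertex 3 on 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(forman_ricci(adj, (0, 1)))  # 4 - 2 - 2 = 0
print(forman_ricci(adj, (2, 3)))  # 4 - 3 - 1 = 0
print(forman_ricci(adj, (0, 2)))  # 4 - 2 - 3 = -1
```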
This simple case captures the role of Ricci curvature as a measure of the flow through an edge and illustrates how Ricci curvature captures the \textit{social behavior} of geodesic dispersal depicted in Figure \ref{fig:ricciinterpret}. While Forman-Ricci curvature is essentially defined on edges, one can easily define the Forman-Ricci curvature of a vertex \cite{Sreejith2017} as the sum of the Forman-Ricci curvatures of the edges incident on that vertex in the network. \subsubsection*{Augmented Forman-Ricci curvature} From a graph, one may construct two-dimensional polyhedral complexes by inserting a two-dimensional simplex into any connected triple of vertices (or cycle of length 3), a tetragon into any cycle of length 4, a pentagon into a cycle of length 5, and so on. This is natural if, for instance, one wants to represent higher order correlations between vertices in the network. Again, Forman's scheme assigns a Ricci curvature to such a complex, via the following formula, which also includes possible weights $w$ of simplices, edges, and vertices: \begin{equation} \label{eq:Forman3} {\rm F}^{\#} (e) = w_e \left[ \left( \sum_{e < f} \frac{w_e}{w_f}+\sum_{v < e} \frac{w_v}{w_e} \right) \right. - \left. \sum_{\hat{e} \parallel e} \left| \sum_{\hat{e},e < f} \frac{\sqrt{w_e \cdot w_{\hat{e}}}}{w_f} - \sum_{v < e, v < \hat{e}} \frac{w_v}{\sqrt{w_e \cdot w_{\hat{e}}}} \right| \right] \; ; \end{equation} where $w_e$ denotes the weight of the edge $e$, $w_v$ denotes the weight of the vertex $v$, $w_f$ denotes the weight of the face $f$, $\sigma < \tau$ means that $\sigma$ is a face of $\tau$, and $\parallel$ signifies \textit{parallelism}, i.e., the two cells have a common \textit{parent} (higher dimensional face) or a common \textit{child} (lower dimensional face), but not both a common parent and a common child. In particular, we have employed Eq.
\ref{eq:Forman3} to define an \textit{Augmented Forman-Ricci curvature} of an edge which also accounts for two-dimensional simplicial complexes or cycles of length 3 arising in graphs while neglecting cycles of length 4 and greater (Figure \ref{fig:ricciinterpret}(f)). In unweighted networks, $w_f = w_e = w_v = 1, \; \forall f \in F(G), e \in E(G), v \in V(G)$, where $F(G)$, $E(G)$ and $V(G)$ represent the set of faces, edges and vertices, respectively, in graph $G$. In such unweighted networks, there is a simple relationship \cite{Webere2017triangle} between the Forman-Ricci curvature ${\mathrm F} (e)$ and the Augmented Forman-Ricci curvature ${\mathrm F}^{\#} (e)$ of an edge $e$, namely, \begin{equation} \label{eq:Forman4} {\rm F}^{\#} (e) = {\rm F} (e) + 3m \end{equation} where $m$ is the number of triangles in the network containing the edge $e$ under consideration. In this work, we have explored both Forman-Ricci curvature and its augmented version in model and real-world networks. \subsection{Ollivier's vs. Forman's Ricci curvature: A first comparison} As we have seen in detail in the previous section, and already explained in the Introduction, the two types of discrete Ricci curvature, Ollivier's and Forman's, express different geometric properties of a network, and they can therefore be quite different from each other for specific networks. In this section, let us consider some simple examples. As the first example, consider a complete graph on $n$ vertices. Any two vertices share $n-2$ neighbors in the complete graph, and therefore, the corresponding balls largely overlap. The transportation distance between the balls is thus very small in a complete graph, and thus, the Ollivier-Ricci curvature (Eq. \ref{eq:Ollivier2}) is almost 1 for large $n$, the largest possible value. On the other hand, the degree of any vertex is $n-1$ in a complete graph, and therefore, the Forman-Ricci curvature (Eq.
\ref{eq:Forman2}) takes the most negative possible value. Thus, for such complete graphs, the two types of Ricci curvature behave in opposite ways. The reason is that Ollivier-Ricci curvature is positively affected by triangles, whereas Forman-Ricci curvature is not affected by them at all. Thus, it is not surprising that locally they can numerically diverge from each other. As the second example, consider a star graph, that is, a graph consisting of a central vertex $v_0$ that is connected to all other vertices $v_1,\dots ,v_m$, while these vertices have no further connections. Consider an edge, for example, $e=(v_0,v_1)$ in the star graph. The neighborhood of $v_1$ consists of $v_0$ only, while that of $v_0$ contains all the vertices $v_1,\dots ,v_m$ in the star graph. Since each of these vertices $v_1,\dots ,v_m$ has distance $1$ from $v_0$ in the star graph, the transportation cost is $1$, and hence the Ollivier-Ricci curvature is $0$. In this example of a star graph, there are no triangles. In contrast, the Forman-Ricci curvature of the edge in the star graph is $3-m$. As the third example, consider a double star graph, that is, take two stars with vertices $v_0, v_1,\dots ,v_m$ and $v'_0, v'_1,\dots ,v'_{m'}$, where the two central vertices $v_0$ and $v'_0$ of the stars are connected by an edge. In this double star graph, almost all vertices in the respective neighborhoods of $v_0$ and $v'_0$ are a distance $3$ apart, and so the Ollivier-Ricci curvature of the edge $(v_0,v'_0)$ is quite negative, and so is the Forman-Ricci curvature, which equals $2 - m - m'$. Thus, the second example of a star graph is an intermediate between the first example of a complete graph and the third example of a double star graph.
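The worked examples above can be checked directly with a short script. The sketch below (pure Python, with illustrative names and parameter values) evaluates Eq. \ref{eq:Forman2} and, via Eq. \ref{eq:Forman4}, its augmented version; note that for the complete graph the augmented value ${\rm F}^{\#}(e) = (6-2n) + 3(n-2) = n$ is positive, in line with the near-maximal Ollivier-Ricci curvature discussed above.

```python
def forman(adj, u, v):
    # Combinatorial Forman-Ricci curvature: F(e) = 4 - deg(u) - deg(v).
    return 4 - len(adj[u]) - len(adj[v])

def augmented_forman(adj, u, v):
    # F#(e) = F(e) + 3m, where m = number of triangles containing e,
    # i.e. the number of common neighbours of u and v.
    return forman(adj, u, v) + 3 * len(adj[u] & adj[v])

def graph(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

n, m, mp = 6, 5, 4   # illustrative sizes
complete = graph([(i, j) for i in range(n) for j in range(i + 1, n)])
star = graph([('v0', f'v{i}') for i in range(1, m + 1)])
double = graph([('v0', f'v{i}') for i in range(1, m + 1)]
               + [("v0'", f"v{i}'") for i in range(1, mp + 1)]
               + [('v0', "v0'")])

print(forman(complete, 0, 1))            # 6 - 2n = -6
print(augmented_forman(complete, 0, 1))  # F + 3(n-2) = n = 6
print(forman(star, 'v0', 'v1'))          # 3 - m = -2
print(forman(double, 'v0', "v0'"))       # 2 - m - m' = -7
```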
While these examples suggest an equivocal picture, wherein the two discretizations of Ricci curvature are sometimes aligned but in other cases may show an opposite behavior, our numerical results for complex networks reported in the following sections show that Ollivier-Ricci and Forman-Ricci curvature are highly correlated in many networks. Thus, in several model and real networks that we have investigated, large degrees of the vertices bounding an edge do not correlate highly with large fractions of triangles or other short loops containing these vertices. Furthermore, if we augment the definition of the Forman-Ricci curvature to account for two-dimensional simplicial complexes (i.e., triads or cycles of length 3) arising in graphs (Eqs. \ref{eq:Forman3} and \ref{eq:Forman4}), then such an Augmented Forman-Ricci curvature is even better correlated at \textit{small scale} to Ollivier-Ricci curvature, as in the augmented definition triangles no longer contribute negatively to Forman-Ricci curvature. In the sequel, we shall also show that the Augmented Forman-Ricci curvature is better correlated to Ollivier-Ricci curvature in both model and real-world networks. \section{Benchmark dataset of complex networks} \label{dataset} We have considered four models of undirected networks, namely, Erd\H{o}s-R\'{e}nyi (ER) \cite{Erdos1961}, Watts-Strogatz (WS) \cite{Watts1998}, Barab\'{a}si-Albert (BA) \cite{Barabasi1999} and the Hyperbolic Graph Generator (HGG) \cite{Krioukov2010}. The ER model \cite{Erdos1961} produces an ensemble of random graphs $G(n,p)$, where $n$ is the number of vertices and $p$ is the probability that each possible edge exists between any pair of vertices in the network. The WS model \cite{Watts1998} generates small-world networks which exhibit both a high clustering coefficient and a small average path length.
In the WS model, an initial regular graph is generated with $n$ vertices on a ring lattice, with each vertex connected to its $k$ nearest neighbours. Subsequently, an endpoint of each edge in the regular ring graph is rewired with probability $\beta$ to a new vertex selected from all the vertices in the network with uniform probability. The BA model \cite{Barabasi1999} generates scale-free networks which exhibit a power-law degree distribution. In the BA model, an initial graph is generated with $m_0$ vertices. Thereafter, a new vertex is added to the initial graph at each step of this evolving network model such that the new vertex is connected to $m \le m_0$ existing vertices, selected with a probability proportional to their degree. Thus, the BA model implements a preferential attachment scheme whereby high-degree vertices have a higher chance of acquiring new edges than low-degree vertices. The HGG model \cite{Krioukov2010,Aldecoa2015} can produce random hyperbolic graphs with a power-law degree distribution and non-vanishing clustering. In the HGG model, the $n$ vertices of the network are placed randomly on a hyperbolic disk, and thereafter, pairs of vertices are connected with a probability that depends on the hyperbolic distance between the vertices. In the HGG model, the input parameters \cite{Krioukov2010,Aldecoa2015} are the number of vertices $n$, the target average degree $k$, the target exponent $\gamma$ of the power-law degree distribution and the temperature $T$. In this work, we have used the HGG model with the default input parameters $\gamma=2$ and $T=0$ to generate hyperbolic random geometric graphs. Note that the input parameters $\gamma$ and $T$ of the HGG model \cite{Krioukov2010,Aldecoa2015} can be varied to produce other random graph ensembles such as the configuration model, random geometric graphs on a circle and ER graphs.
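As an illustration of the simplest of these generative models, an ER graph $G(n,p)$ can be sampled in a few lines. The sketch below is a minimal pure-Python version (the sizes, target degree and seed are illustrative, not the parameter combinations used in this work):

```python
import random
from itertools import combinations

def erdos_renyi(n, p, seed=None):
    """Sample an ER graph G(n, p): each of the C(n, 2) possible edges
    is included independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2)
            if rng.random() < p]

# For a target average degree k, choose p = k / (n - 1).
n, k = 200, 10
edges = erdos_renyi(n, k / (n - 1), seed=42)
print(2 * len(edges) / n)  # empirical average degree, close to k = 10
```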
Supplementary Table S1 lists the model networks analyzed in this work along with the number of vertices, number of edges, average degree and edge density of each network. In each model, we have chosen different combinations of input parameters to generate networks with different sizes and average degrees (Supplementary Table S1). Moreover, we have sampled 100 networks, starting with different random seeds, for each specific combination of input parameters from each generative model, and the results reported in the next section for model networks are averages over the sample of 100 networks with the chosen input parameters (Supplementary Tables S2-S5). We have also considered seventeen widely-studied real undirected networks. These are six communication or infrastructure networks, the Chicago road network \cite{Eash1979}, the Euro road network \cite{Subelj2011}, the US Power Grid network \cite{Leskovec2007}, the Contiguous US States network \cite{Knuth2005}, the autonomous systems network \cite{Leskovec2007} and an Email communication network \cite{Guimera2003}. In the Chicago road network, the 1467 vertices correspond to transportation zones within the Chicago region, and the 1298 edges are roads in the region linking them. In the Euro road network, the 1174 vertices are cities in Europe, and the 1417 edges are roads in the international E-road network linking them. In the US Power Grid network, the 4941 vertices are generators, transformers or substations in the western states of the USA, and the 6594 edges are power supply lines linking them. In the Contiguous US States network, the 48 vertices correspond to the 48 contiguous states of the USA (excluding Alaska and Hawaii, which are not connected by land to the contiguous states), and the 107 edges represent land borders between the states.
In the autonomous systems (AS) network, the 26475 vertices are autonomous systems of the Internet, and the 53381 edges represent communication between autonomous systems connected to each other, as recorded by the CAIDA project. In the Email communication network, the 1133 vertices are users at the University Rovira i Virgili in Tarragona, Spain, and the 5451 edges represent direct communication between them. We have considered five social networks, the Zachary karate club \cite{Zachary1977}, the Jazz musicians network \cite{Gleiser2003}, the Hamsterster friendship network, the Dolphin network \cite{Lusseau2003} and the Zebra network \cite{Sundaresan2007}. In the Zachary karate club, the 34 vertices correspond to members of a university karate club, and the 78 edges represent ties between members of the club. In the Jazz musicians network, the 198 vertices correspond to Jazz musicians, and the 2742 edges represent collaborations between musicians. In the Hamsterster friendship network, the 2426 vertices are users of hamsterster.com, and the 16631 edges represent friendship or family links between them. In the Dolphin network, the 62 vertices correspond to bottlenose dolphins living off Doubtful Sound in South West New Zealand, and the 159 edges represent frequent associations among dolphins observed between 1994 and 2001. In the Zebra network, the 27 vertices correspond to Grevy's zebras in Kenya, and the 111 edges represent observed interactions between zebras during the study \cite{Sundaresan2007}. We have also considered a scientific co-authorship network based on papers from the arXiv's Astrophysics (astro-ph) section \cite{Leskovec2007}, where the 18771 vertices correspond to authors and the 198050 edges represent common publications among authors. We have also considered the PGP network \cite{Boguna2004}, an online contact network, where the 10680 vertices are users of the Pretty Good Privacy (PGP) algorithm, and the 24316 edges represent interactions between the users.
We have also considered a linguistic network, an adjective-noun adjacency network \cite{Newman2006}, where the 112 vertices are nouns or adjectives, and the 425 edges represent their presence in adjacent positions in the novel David Copperfield by Charles Dickens. We have considered three biological networks, the yeast protein interaction network \cite{Jeong2001}, the PDZ domain interaction network \cite{Beuming2005} and the human protein interaction network \cite{Rual2005}. In the yeast protein interaction network, the 1870 vertices are proteins in the yeast \textit{Saccharomyces cerevisiae}, and the 2277 edges are interactions between them. In the PDZ domain interaction network, the 212 vertices are proteins, and the 244 edges are PDZ-domain mediated interactions between proteins. In the human protein interaction network, the 3133 vertices are proteins, and the 6726 edges are interactions between human proteins as captured in an earlier release of the proteome-scale map of human binary protein interactions. The seventeen empirical networks analyzed here were downloaded from the KONECT \cite{Kunegis2013} database. Supplementary Table S1 lists the real networks analyzed in this work along with the number of vertices, number of edges, average degree and edge density of each network. We remark that the above-mentioned model and real-world networks considered in this work are unweighted graphs, and thus, the weights of vertices, edges and two-dimensional simplicial complexes are taken to be 1 while computing the Forman-Ricci curvature and its augmented version. Furthermore, the largest connected component of the above-mentioned model and real-world networks is considered while computing the Ollivier-Ricci curvature of edges. In earlier work \cite{Sreejith2016,Sreejith2017}, we characterized the Forman-Ricci curvature of edges and vertices in some of the above-mentioned networks.
In the present work, we have compared the Forman-Ricci curvature and its augmented version with the Ollivier-Ricci curvature in the above-mentioned networks. \section{Results and Discussion} \label{results} \subsection{Comparison between Forman-Ricci and Ollivier-Ricci curvature in model and real networks} We have compared the Ollivier-Ricci with the Forman-Ricci and Augmented Forman-Ricci curvature of edges in model networks (Table \ref{tab:ORFRedge} and Supplementary Table S2). In random ER networks, small-world WS networks and scale-free BA networks, we find a high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of edges, and between the Ollivier-Ricci and Augmented Forman-Ricci curvature of edges, when the model networks are sparse with small average degree; however, the observed correlation vanishes as the average degree of the model networks increases (Table \ref{tab:ORFRedge} and Supplementary Table S2). In hyperbolic random geometric graphs, we also find a high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of edges, and between the Ollivier-Ricci and Augmented Forman-Ricci curvature of edges; however, the observed correlation in the hyperbolic graphs seems relatively less dependent on the average degree of the networks, based on our limited exploration of the parameter space (Table \ref{tab:ORFRedge} and Supplementary Table S2). We remark that hyperbolic random geometric graphs, unlike ER, WS and BA networks, have an explicit geometric structure. Note that the Augmented Forman-Ricci curvature of edges typically has a higher positive correlation with the Ollivier-Ricci curvature of edges than the Forman-Ricci curvature does in the ER, WS and BA models (Table \ref{tab:ORFRedge} and Supplementary Table S2).
Moreover, WS networks have a higher clustering coefficient (and thus a higher proportion of triads) than ER or BA networks with the same number of vertices and average degree, and thus it is not surprising to observe that the Augmented Forman-Ricci curvature of edges has a much higher positive correlation with the Ollivier-Ricci curvature of edges than the Forman-Ricci curvature does in WS networks, especially when the networks become denser as the average degree increases (Table \ref{tab:ORFRedge} and Supplementary Table S2). This last result is expected because the Augmented Forman-Ricci curvature of edges also accounts for two-dimensional simplicial complexes or cycles of length 3 arising in graphs (see the discussion in the Theory section and Figure \ref{fig:ricciinterpret}(e)-(f)). We have also compared the Ollivier-Ricci with the Forman-Ricci and Augmented Forman-Ricci curvature of edges in seventeen real-world networks. In several of the analyzed real-world networks, we find a moderate to high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of edges (Table \ref{tab:ORFRedge} and Supplementary Table S2). We highlight that some of the real-world networks, such as the Astrophysics co-authorship network, the Email communication network, the Jazz musicians network and the Zebra network, have very weak or no correlation between the Ollivier-Ricci and Forman-Ricci curvature of edges (Table \ref{tab:ORFRedge} and Supplementary Table S2). However, in most real-world networks analyzed here, we find a moderate to high positive correlation between the Augmented Forman-Ricci and Ollivier-Ricci curvature of edges (Table \ref{tab:ORFRedge} and Supplementary Table S2).
Interestingly, we also find that the Augmented Forman-Ricci curvature has a moderate to high correlation with the Ollivier-Ricci curvature of edges in the Astrophysics co-authorship network, the Email communication network, the Jazz musicians network and the Zebra network, where the Forman-Ricci curvature has very weak or no correlation with the Ollivier-Ricci curvature of edges (Table \ref{tab:ORFRedge} and Supplementary Table S2). Thus, at the level of edges, we observe a positive correlation between Ollivier-Ricci and Forman-Ricci curvature, especially the augmented version, in many networks (Table \ref{tab:ORFRedge} and Supplementary Table S2). From the definition of the Ollivier-Ricci and Forman-Ricci curvature of edges, it is straightforward to define the Ollivier-Ricci and Forman-Ricci curvature of vertices in networks \cite{Sandhu2015a,Sreejith2017} as the sum of the Ricci curvatures of the edges incident on the vertex in the network. Note that this definition of the Ollivier-Ricci and Forman-Ricci curvature of vertices in networks \cite{Sandhu2015a,Sreejith2017} is a direct discrete analogue of the scalar curvature in Riemannian geometry \cite{Jost2017}. We have compared the Ollivier-Ricci with the Forman-Ricci and Augmented Forman-Ricci curvature of vertices in model networks (Table \ref{tab:ORFRvertex} and Supplementary Table S3). In random ER networks, small-world WS networks and scale-free BA networks, we find a high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices, and between the Ollivier-Ricci and Augmented Forman-Ricci curvature of vertices, and the observed correlation seems to have only a minor dependence on the size or average degree of the networks, based on our limited exploration of the parameter space (Table \ref{tab:ORFRvertex} and Supplementary Table S3).
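The Pearson and Spearman coefficients reported in the tables can be sketched in a few lines of pure Python; in the example below the input arrays are illustrative (not curvature values from our networks), chosen so that a monotone but nonlinear relation yields a rank (Spearman) correlation of exactly 1 while the linear (Pearson) correlation stays below 1.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Average (1-based) ranks, handling ties as in Spearman's rho."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1       # average rank of the tie block
        for t in range(i, j + 1):
            r[order[t]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]
print(spearman(x, y))  # 1.0 (up to float rounding)
print(pearson(x, y))   # ~0.94
```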
In most hyperbolic random geometric graphs analyzed here, we also find a moderate positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices as well as between the Ollivier-Ricci and Augmented Forman-Ricci curvature of vertices (Table \ref{tab:ORFRvertex} and Supplementary Table S3). Note that in random ER networks, small-world WS networks and scale-free BA networks, the Spearman correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices is typically higher than the Pearson correlation, whereas in the hyperbolic random geometric graphs the Spearman correlation is typically lower than the Pearson correlation (Tables \ref{tab:ORFRedge}-\ref{tab:ORFRvertex} and Supplementary Tables S2-S3). We have also compared the Ollivier-Ricci with the Forman-Ricci and Augmented Forman-Ricci curvature of vertices in seventeen real-world networks. In several of the analyzed real-world networks, we find a moderate to high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices (Table \ref{tab:ORFRvertex} and Supplementary Table S3). Also, in most real-world networks analyzed here, the positive correlation between the Augmented Forman-Ricci and Ollivier-Ricci curvature of vertices is higher than that between the Forman-Ricci and Ollivier-Ricci curvature of vertices (Table \ref{tab:ORFRvertex} and Supplementary Table S3). Thus, at the level of vertices, we observe a positive correlation between the Ollivier-Ricci and Forman-Ricci curvature, especially its augmented version, in many networks (Table \ref{tab:ORFRvertex} and Supplementary Table S3). Importantly, we find that the correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices is higher than that between the corresponding edge curvatures in most networks analyzed here (Tables \ref{tab:ORFRedge}-\ref{tab:ORFRvertex} and Supplementary Tables S2-S3). An intuitive explanation is the following observation.
The curvature of a vertex $v_0$ in an unweighted network is a sum over all edges $(v_0,v)$ that have that vertex as one of their endpoints. The Forman-Ricci curvature of each such edge $(v_0,v)$ has the form $4-\deg v_0 -\deg v$ (see Eq. \ref{eq:Forman2}), so the curvatures of all edges $(v_0,v)$ share the term $\deg v_0$, which decreases the variance. For example, we even find a high positive correlation between the Ollivier-Ricci and Forman-Ricci curvature of vertices in the Email communication network, where only a weak positive correlation exists between the Ollivier-Ricci and Forman-Ricci curvature of edges (Tables \ref{tab:ORFRedge}-\ref{tab:ORFRvertex} and Supplementary Tables S2-S3). In a nutshell, although the two discretizations of Ricci curvature, Ollivier-Ricci and Forman-Ricci, capture different geometrical properties, our empirical analysis intriguingly finds a high positive correlation between them in many networks, especially real-world networks. Deeper investigations are needed in the future to better understand this empirically observed correlation. \begin{table}[ht] \caption{Comparison of Ollivier-Ricci curvature (OR) with Forman-Ricci curvature (FR) or Augmented Forman-Ricci curvature (AFR) of edges in model and real networks. In this table, we list the Spearman correlation between the edge curvatures. In case of model networks, the reported correlation is mean (rounded off to two decimal places) over a sample of 100 networks generated with specific input parameters. Supplementary Table S2 also contains results from additional analysis of model networks with an expanded set of chosen input parameters.
Moreover, Supplementary Table S2 also lists the Pearson correlation between the edge curvatures in model and real networks.} \label{tab:ORFRedge} \begin{tabular}{|l|c|c|} \hline {\textbf{\small Network}} & {\textbf{\small OR versus FR of edges}} & {\textbf{\small OR versus AFR of edges}} \\ \hline \textbf{Model networks} & & \\ \small{ER model with $n=1000$, $p=0.003$} & 0.89 & 0.90 \\ \small{ER model with $n=1000$, $p=0.007$} & 0.39 & 0.43 \\ \small{ER model with $n=1000$, $p=0.01$} & -0.03 & 0.04 \\ \small{WS model with $n=1000$, $k=2$ and $p=0.5$} & 0.92 & 0.92 \\ \small{WS model with $n=1000$, $k=8$ and $p=0.5$} & 0.18 & 0.70 \\ \small{WS model with $n=1000$, $k=10$ and $p=0.5$} & 0.10 & 0.69 \\ \small{BA model with $n=1000$, $m=2$} & 0.74 & 0.74 \\ \small{BA model with $n=1000$, $m=4$} & 0.33 & 0.36 \\ \small{BA model with $n=1000$, $m=5$} & 0.13 & 0.16 \\ \small{HGG model with $n=1000$, $k=3$, $\gamma=2$, $T=0$} & 0.78 & 0.66 \\ \small{HGG model with $n=1000$, $k=5$, $\gamma=2$, $T=0$} & 0.82 & 0.76 \\ \small{HGG model with $n=1000$, $k=10$, $\gamma=2$, $T=0$} & 0.85 & 0.87 \\ \hline \textbf{Real networks} & & \\ \small{Autonomous systems} & 0.43 & 0.42 \\ \small{PGP} & 0.32 & 0.83 \\ \small{US Power Grid} & 0.60 & 0.76 \\ \small{Astrophysics co-authorship} & 0.25 & 0.70 \\ \small{Chicago Road} & 0.98 & 0.98 \\ \small{Yeast protein interactions} & 0.70 & 0.74 \\ \small{Euro Road} & 0.81 & 0.88 \\ \small{Human protein interactions} & 0.48 & 0.52 \\ \small{Hamsterster friendship} & 0.23 & 0.30 \\ \small{Email communication} & 0.19 & 0.53 \\ \small{PDZ domain interactions} & 0.72 & 0.71 \\ \small{Adjective-Noun adjacency} & 0.15 & 0.35 \\ \small{Dolphin} & 0.07 & 0.71 \\ \small{Contiguous US States} & 0.68 & 0.91 \\ \small{Zachary karate club} & 0.75 & 0.81 \\ \small{Jazz musicians} & 0.11 & 0.90 \\ \small{Zebra} & -0.04 & 0.62 \\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Comparison of Ollivier-Ricci curvature (OR) with Forman-Ricci curvature 
(FR) or Augmented Forman-Ricci curvature (AFR) of vertices in model and real networks. In this table, we list the Spearman correlation between the vertex curvatures. In case of model networks, the reported correlation is mean (rounded off to two decimal places) over a sample of 100 networks generated with specific input parameters. Supplementary Table S3 also contains results from additional analysis of model networks with an expanded set of chosen input parameters. Moreover, Supplementary Table S3 also lists the Pearson correlation between the vertex curvatures in model and real networks.} \label{tab:ORFRvertex} \begin{tabular}{|l|c|c|} \hline {\textbf{\small Network}} & {\textbf{\small OR versus FR of vertices}} & {\textbf{\small OR versus AFR of vertices}} \\ \hline \textbf{Model networks} & & \\ \small{ER model with $n=1000$, $p=0.003$} & 0.97 & 0.97 \\ \small{ER model with $n=1000$, $p=0.007$} & 0.97 & 0.97 \\ \small{ER model with $n=1000$, $p=0.01$} & 0.96 & 0.96 \\ \small{WS model with $n=1000$, $k=2$ and $p=0.5$} & 0.90 & 0.90 \\ \small{WS model with $n=1000$, $k=8$ and $p=0.5$} & 0.80 & 0.93 \\ \small{WS model with $n=1000$, $k=10$ and $p=0.5$} & 0.77 & 0.92 \\ \small{BA model with $n=1000$, $m=2$} & 0.61 & 0.61 \\ \small{BA model with $n=1000$, $m=4$} & 0.59 & 0.60 \\ \small{BA model with $n=1000$, $m=5$} & 0.63 & 0.64 \\ \small{HGG model with $n=1000$, $k=3$, $\gamma=2$, $T=0$} & 0.48 & 0.57 \\ \small{HGG model with $n=1000$, $k=5$, $\gamma=2$, $T=0$} & 0.34 & 0.41 \\ \small{HGG model with $n=1000$, $k=10$, $\gamma=2$, $T=0$} & 0.09 & 0.13 \\ \hline \textbf{Real networks} & & \\ \small{Autonomous systems} & 0.64 & 0.64 \\ \small{PGP} & 0.37 & 0.74 \\ \small{US Power Grid} & 0.68 & 0.82 \\ \small{Astrophysics co-authorship} & 0.43 & 0.78 \\ \small{Chicago Road} & 0.96 & 0.96 \\ \small{Yeast protein interactions} & 0.85 & 0.92 \\ \small{Euro Road} & 0.90 & 0.92 \\ \small{Human protein interactions} & 0.83 & 0.84 \\ \small{Hamsterster friendship} & 0.85 & 
0.86 \\ \small{Email communication} & 0.79 & 0.86 \\ \small{PDZ domain interactions} & 0.91 & 0.91 \\ \small{Adjective-Noun adjacency} & 0.47 & 0.50 \\ \small{Dolphin} & 0.04 & 0.49 \\ \small{Contiguous US States} & 0.61 & 0.89 \\ \small{Zachary karate club} & 0.24 & 0.70 \\ \small{Jazz musicians} & -0.79 & 0.01 \\ \small{Zebra} & -0.72 & 0.99 \\ \hline \end{tabular} \end{table} \subsection{Comparison of Forman-Ricci and Ollivier-Ricci curvature with other edge-based measures} We emphasize that the Ollivier-Ricci and Forman-Ricci curvature are edge-based measures of complex networks. We compared the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature with three other edge-based measures for complex networks: edge betweenness centrality \cite{Freeman1977,Girvan2002,Newman2010}, embeddedness \cite{Marsden1984} and dispersion \cite{Backstrom2014}. Edge betweenness centrality \cite{Freeman1977,Girvan2002,Newman2010} measures the number of shortest paths that pass through an edge in a network and can be used to identify bottlenecks for flows in a network. Embeddedness \cite{Marsden1984} of an edge quantifies the number of neighbours shared by the two vertices anchoring the edge under consideration and is a measure of the strength of ties in social networks \cite{Marsden1984}. Dispersion \cite{Backstrom2014} quantifies the extent to which the neighbours of the two vertices anchoring an edge are not themselves well connected and has been used to predict romantic relationships in social networks \cite{Backstrom2014}. In model networks, we find that the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature have a significant negative correlation with edge betweenness centrality (Table \ref{tab:edge} and Supplementary Table S4).
In most real networks considered here, we find that the Ollivier-Ricci curvature has a moderate to high negative correlation with edge betweenness centrality, while the Forman-Ricci curvature has a weak to moderate negative correlation with edge betweenness centrality (Table \ref{tab:edge} and Supplementary Table S4). Moreover, in most real networks considered here, the negative correlation between the Ollivier-Ricci curvature and edge betweenness centrality is higher than that between the Forman-Ricci curvature and edge betweenness centrality (Table \ref{tab:edge} and Supplementary Table S4). This may be explained by the fact that the Ollivier-Ricci curvature is also affected by cycles of length 3, 4 and 5 containing the two vertices of an edge, and these are relevant for edge betweenness centrality. Interestingly, in the real networks considered here, the Augmented Forman-Ricci curvature has a much higher negative correlation with edge betweenness centrality than the Forman-Ricci curvature (Table \ref{tab:edge} and Supplementary Table S4). Our results suggest that the augmented version of the Forman-Ricci curvature, which also accounts for two-dimensional simplicial complexes arising in graphs, is better suited for the analysis of complex networks. In both model and real networks considered here, we find no consistent relationship between the Ollivier-Ricci, Forman-Ricci, or Augmented Forman-Ricci curvature of an edge and embeddedness (Table \ref{tab:edge} and Supplementary Table S4). Similarly, in both model and real networks considered here, we find no consistent relationship between the Ollivier-Ricci, Forman-Ricci, or Augmented Forman-Ricci curvature of an edge and dispersion (Table \ref{tab:edge} and Supplementary Table S4). In summary, the two discrete notions of Ricci curvature are negatively correlated with edge betweenness centrality but have no consistent relationship with embeddedness or dispersion in the analyzed networks.
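As an illustration of this comparison, the three edge-based measures and their rank correlation with the unweighted Forman-Ricci curvature can be sketched with networkx and scipy. Here `nx.dispersion` is networkx's implementation of dispersion (used unnormalized), and the karate-club graph merely stands in for the networks of Table \ref{tab:edge}:

```python
import networkx as nx
from scipy.stats import spearmanr

G = nx.karate_club_graph()
edges = list(G.edges())

# unweighted Forman-Ricci curvature of each edge
fr = [4 - G.degree(u) - G.degree(v) for u, v in edges]

ebc = nx.edge_betweenness_centrality(G)                # edge betweenness
emb = [len(set(G[u]) & set(G[v])) for u, v in edges]   # embeddedness
dis = [nx.dispersion(G, u, v, normalized=False) for u, v in edges]

# Spearman (rank) correlation of curvature against each measure
rho_ebc, _ = spearmanr(fr, [ebc[e] for e in edges])
rho_emb, _ = spearmanr(fr, emb)
rho_dis, _ = spearmanr(fr, dis)
```

On this illustrative graph the curvature-betweenness correlation comes out clearly negative, in line with the trend reported in Table \ref{tab:edge}.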
\begin{table}[ht] \caption{ Comparison of Ollivier-Ricci curvature (OR), Forman-Ricci curvature (FR) and Augmented Forman-Ricci curvature (AFR) of edges with other edge-based measures, edge betweenness centrality (EBC), embeddedness (EMB) and dispersion (DIS), in model and real networks. In this table, we list the Spearman correlation between the edge-based measures. In case of model networks, the reported correlation is mean (rounded off to two decimal places) over a sample of 100 networks generated with specific input parameters. Supplementary Table S4 also contains results from additional analysis of model networks with an expanded set of chosen input parameters. Moreover, Supplementary Table S4 also lists the Pearson correlation between the edge-based measures in model and real networks.} \label{tab:edge} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline {\textbf{\small Network}} & \multicolumn{3}{c|}{\textbf{\small OR versus}} & \multicolumn{3}{c|}{\textbf{\small FR versus}} & \multicolumn{3}{c|}{\textbf{\small AFR versus}} \\ \cline{2-10} & \textbf{\small EBC} & \textbf{\small EMB} & \textbf{\small DIS} & \textbf{\small EBC} & \textbf{\small EMB} & \textbf{\small DIS} & \textbf{\small EBC} & \textbf{\small EMB} & \textbf{\small DIS}\\ \hline \textbf{Model networks} & & & & & & & & & \\ \small{ER model with $n=1000$, $p=0.003$} & -0.86 & 0.08 & 0.00 & -0.81 & -0.07 & 0.00 & -0.82 & 0.04 & 0.00 \\ \small{ER model with $n=1000$, $p=0.007$} & -0.53 & 0.25 & 0.05 & -0.80 & -0.11 & -0.03 & -0.82 & 0.06 & 0.02 \\ \small{ER model with $n=1000$, $p=0.01$} & -0.34 & 0.32 & 0.10 & -0.76 & -0.13 & -0.05 & -0.79 & 0.07 & 0.03 \\ \small{WS model with $n=1000$, $k=2$ and $p=0.5$} & -0.75 & 0.00 & 0.00 & -0.57 & 0.00 & 0.00 & -0.57 & 0.00 & 0.00 \\ \small{WS model with $n=1000$, $k=8$ and $p=0.5$} & -0.85 & 0.79 & 0.44 & -0.52 & -0.05 & -0.08 & -0.89 & 0.68 & 0.42 \\ \small{WS model with $n=1000$, $k=10$ and $p=0.5$} & -0.87 & 0.82 & 0.49 & -0.45 & -0.05 & -0.07 & -0.89 & 0.73 & 
0.47 \\ \small{BA model with $n=1000$, $m=2$} & -0.73 & -0.09 & -0.11 & -0.76 & -0.30 & -0.16 & -0.77 & -0.26 & -0.15 \\ \small{BA model with $n=1000$, $m=4$} & -0.45 & 0.18 & 0.14 & -0.83 & -0.48 & -0.35 & -0.84 & -0.43 & -0.33 \\ \small{BA model with $n=1000$, $m=5$} & -0.30 & 0.30 & 0.25 & -0.85 & -0.54 & -0.41 & -0.86 & -0.48 & -0.39 \\ \small{HGG model with $n=1000$, $k=3$, $\gamma=2$, $T=0$} & -0.47 & -0.30 & -0.15 & -0.67 & -0.04 & -0.18 & -0.76 & 0.27 & -0.07 \\ \small{HGG model with $n=1000$, $k=5$, $\gamma=2$, $T=0$} & -0.62 & -0.20 & -0.13 & -0.73 & -0.08 & -0.17 & -0.81 & 0.20 & -0.10 \\ \small{HGG model with $n=1000$, $k=10$, $\gamma=2$, $T=0$} & -0.78 & -0.03 & -0.06 & -0.79 & -0.15 & -0.12 & -0.87 & 0.14 & -0.08 \\ \hline \textbf{Real networks} & & & & & & & & & \\ \small{Autonomous systems} & -0.17 & -0.37 & -0.25 & -0.26 & -0.44 & -0.18 & -0.27 & -0.41 & -0.16 \\ \small{PGP} & -0.64 & 0.20 & -0.13 & 0.11 & -0.69 & -0.17 & -0.56 & 0.21 & -0.15 \\ \small{US Power Grid} & -0.61 & 0.16 & 0.06 & -0.26 & -0.41 & -0.19 & -0.45 & 0.09 & 0.04 \\ \small{Astrophysics co-authorship} & -0.78 & 0.47 & -0.16 & -0.23 & -0.58 & -0.23 & -0.63 & 0.07 & -0.27 \\ \small{Chicago Road} & -0.65 & 0.00 & 0.00 & -0.65 & 0.00 & 0.00 & -0.65 & 0.00 & 0.00 \\ \small{Yeast protein interactions} & -0.83 & 0.06 & -0.01 & -0.52 & -0.15 & -0.13 & -0.59 & 0.14 & 0.00 \\ \small{Euro Road} & -0.54 & 0.05 & 0.02 & -0.40 & -0.31 & -0.07 & -0.43 & 0.00 & 0.03 \\ \small{Human protein interactions} & -0.46 & 0.07 & 0.01 & -0.38 & -0.22 & -0.19 & -0.43 & -0.07 & -0.10 \\ \small{Hamsterster friendship} & -0.53 & 0.12 & 0.00 & -0.35 & -0.61 & -0.40 & -0.42 & -0.47 & -0.32 \\ \small{Email communication} & -0.61 & 0.55 & 0.24 & -0.32 & -0.45 & -0.41 & -0.57 & 0.01 & -0.16 \\ \small{PDZ domain interactions} & -0.79 & -0.04 & 0.00 & -0.55 & -0.02 & 0.00 & -0.55 & 0.06 & 0.00 \\ \small{Adjective-Noun adjacency} & -0.51 & 0.22 & 0.09 & -0.42 & -0.72 & -0.55 & -0.57 & -0.42 & -0.37 \\ 
\small{Dolphin} & -0.66 & 0.51 & 0.28 & 0.11 & -0.58 & -0.21 & -0.61 & 0.59 & 0.31 \\ \small{Contiguous US States} & -0.68 & -0.10 & -0.15 & -0.49 & -0.72 & -0.71 & -0.64 & -0.03 & -0.08 \\ \small{Zachary karate club} & -0.79 & 0.10 & -0.06 & -0.64 & -0.29 & -0.37 & -0.80 & 0.43 & 0.14 \\ \small{Jazz musicians} & -0.84 & 0.57 & -0.03 & -0.22 & -0.66 & -0.18 & -0.76 & 0.47 & -0.05 \\ \small{Zebra} & -0.94 & 0.52 & 0.13 & 0.04 & -0.71 & -0.15 & -0.65 & 0.97 & 0.09 \\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{ Comparison of Ollivier-Ricci curvature (OR), Forman-Ricci curvature (FR) and Augmented Forman-Ricci curvature (AFR) of vertices with other vertex-based measures, degree, betweenness centrality (BC) and clustering coefficient (CC), in model and real networks. In this table, we list the Spearman correlation between the vertex-based measures. In case of model networks, the reported correlation is mean (rounded off to two decimal places) over a sample of 100 networks generated with specific input parameters. Supplementary Table S5 also contains results from additional analysis of model networks with an expanded set of chosen input parameters. 
Moreover, Supplementary Table S5 also lists the Pearson correlation between the vertex-based measures in model and real networks.} \label{tab:vertex} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline {\textbf{\small Network}} & \multicolumn{3}{c|}{\textbf{\small OR versus}} & \multicolumn{3}{c|}{\textbf{\small FR versus}} & \multicolumn{3}{c|}{\textbf{\small AFR versus}} \\ \cline{2-10} & \textbf{\small Degree} & \textbf{\small BC} & \textbf{\small CC} & \textbf{\small Degree} & \textbf{\small BC} & \textbf{\small CC} & \textbf{\small Degree} & \textbf{\small BC} & \textbf{\small CC} \\ \hline \textbf{Model networks} & & & & & & & & & \\ \small{ER model with $n=1000$, $p=0.003$} & -0.94 & -0.94 & -0.07 & -0.94 & -0.94 & -0.13 & -0.94 & -0.94 & -0.08 \\ \small{ER model with $n=1000$, $p=0.007$} & -0.98 & -0.98 & -0.18 & -0.99 & -0.98 & -0.26 & -0.99 & -0.98 & -0.21 \\ \small{ER model with $n=1000$, $p=0.01$} & -0.98 & -0.98 & -0.16 & -0.99 & -0.98 & -0.25 & -0.99 & -0.98 & -0.21 \\ \small{WS model with $n=1000$, $k=2$ and $p=0.5$} & -0.71 & -0.82 & 0.00 & -0.75 & -0.73 & 0.00 & -0.75 & -0.73 & 0.00 \\ \small{WS model with $n=1000$, $k=8$ and $p=0.5$} & -0.81 & -0.96 & 0.51 & -0.98 & -0.91 & 0.05 & -0.91 & -0.98 & 0.38 \\ \small{WS model with $n=1000$, $k=10$ and $p=0.5$} & -0.79 & -0.95 & 0.57 & -0.99 & -0.91 & 0.09 & -0.92 & -0.98 & 0.41 \\ \small{BA model with $n=1000$, $m=2$} & -0.90 & -0.90 & -0.18 & -0.59 & -0.77 & -0.39 & -0.59 & -0.78 & -0.37 \\ \small{BA model with $n=1000$, $m=4$} & -0.94 & -0.88 & -0.08 & -0.73 & -0.84 & -0.49 & -0.73 & -0.85 & -0.45 \\ \small{BA model with $n=1000$, $m=5$} & -0.94 & -0.90 & -0.05 & -0.78 & -0.85 & -0.40 & -0.79 & -0.86 & -0.37 \\ \small{HGG model with $n=1000$, $k=3$, $\gamma=2$, $T=0$} & -0.28 & -0.30 & -0.14 & -0.86 & -0.60 & -0.45 & -0.79 & -0.58 & -0.37 \\ \small{HGG model with $n=1000$, $k=5$, $\gamma=2$, $T=0$} & -0.15 & -0.17 & -0.03 & -0.89 & -0.61 & -0.21 & -0.85 & -0.60 & -0.18 \\ \small{HGG model with $n=1000$, 
$k=10$, $\gamma=2$, $T=0$} & 0.06 & -0.06 & 0.01 & -0.93 & -0.68 & 0.31 & -0.91 & -0.66 & 0.30 \\ \hline \textbf{Real networks} & & & & & & & & & \\ \small{Autonomous systems} & -0.85 & -0.70 & -0.39 & -0.51 & -0.38 & -0.55 & -0.50 & -0.38 & -0.55 \\ \small{PGP} & -0.12 & -0.49 & 0.29 & -0.73 & -0.51 & -0.51 & -0.35 & -0.46 & -0.05 \\ \small{US Power Grid} & -0.68 & -0.80 & 0.03 & -0.79 & -0.62 & -0.49 & -0.69 & -0.68 & -0.13 \\ \small{Astrophysics co-authorship} & -0.39 & -0.72 & 0.62 & -0.95 & -0.64 & 0.25 & -0.64 & -0.66 & 0.41 \\ \small{Chicago Road} & -0.33 & -0.34 & 0.00 & -0.42 & -0.42 & 0.00 & -0.42 & -0.42 & 0.00 \\ \small{Yeast protein interactions} & -0.54 & -0.67 & -0.05 & -0.57 & -0.56 & -0.33 & -0.45 & -0.54 & -0.07 \\ \small{Euro Road} & -0.82 & -0.75 & -0.22 & -0.82 & -0.64 & -0.38 & -0.80 & -0.65 & -0.24 \\ \small{Human protein interactions} & -0.77 & -0.78 & -0.23 & -0.71 & -0.65 & -0.43 & -0.67 & -0.64 & -0.34 \\ \small{Hamsterster friendship} & -0.87 & -0.87 & -0.30 & -0.92 & -0.76 & -0.45 & -0.91 & -0.76 & -0.42 \\ \small{Email communication} & -0.80 & -0.88 & 0.06 & -0.97 & -0.87 & -0.31 & -0.93 & -0.88 & -0.19 \\ \small{PDZ domain interactions} & -0.50 & -0.58 & -0.12 & -0.62 & -0.64 & -0.14 & -0.61 & -0.64 & -0.09 \\ \small{Adjective-Noun adjacency} & -0.57 & -0.76 & 0.07 & -0.96 & -0.84 & -0.50 & -0.95 & -0.84 & -0.45 \\ \small{Dolphin} & -0.04 & -0.39 & 0.44 & -0.98 & -0.77 & -0.45 & -0.73 & -0.72 & -0.04 \\ \small{Contiguous US States} & -0.59 & -0.74 & 0.71 & -0.98 & -0.82 & 0.55 & -0.78 & -0.79 & 0.70 \\ \small{Zachary karate club} & 0.10 & -0.09 & 0.35 & -0.84 & -0.76 & 0.40 & -0.47 & -0.60 & 0.52 \\ \small{Jazz musicians} & 0.78 & 0.34 & 0.08 & -0.99 & -0.72 & 0.33 & -0.49 & -0.56 & 0.56 \\ \small{Zebra} & 0.78 & 0.35 & -0.33 & -0.94 & -0.73 & 0.70 & 0.76 & 0.33 & -0.31 \\ \hline \end{tabular} \end{table} \subsection{Comparison of Forman-Ricci and Ollivier-Ricci curvature with vertex-based measures} We compared Ollivier-Ricci, 
Forman-Ricci and Augmented Forman-Ricci curvature of vertices with three other vertex-based measures, namely degree, betweenness centrality \cite{Freeman1977,Newman2010} and clustering coefficient \cite{Holland1971,Watts1998}, in a network. The degree of a vertex gives the number of edges incident to that vertex in a network. Betweenness centrality \cite{Freeman1977,Newman2010} of a vertex quantifies the fraction of shortest paths between all pairs of vertices in the network that pass through that vertex. The clustering coefficient \cite{Holland1971,Watts1998} of a vertex is the number of edges realized between the neighbours of the vertex divided by the number of edges that could possibly exist between those neighbours. We remark that the clustering coefficient has been proposed as a measure to quantify the curvature of networks \cite{Eckmann2002}. Not surprisingly, we find that the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature of vertices have a high negative correlation with degree in most model as well as real networks analyzed here (Table \ref{tab:vertex} and Supplementary Table S5). After all, the vertex degree is intrinsic to the definition of the Ollivier-Ricci or Forman-Ricci curvature of a vertex, as it appears implicitly in the sum over adjacent edges in the defining formula. Similarly, in the model as well as real networks analyzed here, we find that the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature of vertices have a high negative correlation with betweenness centrality (Table \ref{tab:vertex} and Supplementary Table S5). In contrast, we do not find any consistent relationship between the Ollivier-Ricci, Forman-Ricci or Augmented Forman-Ricci curvature of vertices and the clustering coefficient in the model and real networks analyzed here (Table \ref{tab:vertex} and Supplementary Table S5).
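The analogous vertex-level comparison can be sketched as follows, assuming the unweighted Forman-Ricci vertex curvature (sum of $4-\deg v-\deg w$ over incident edges); the karate-club graph is only a stand-in for the networks of Table \ref{tab:vertex}:

```python
import networkx as nx
from scipy.stats import spearmanr

G = nx.karate_club_graph()
nodes = list(G)

# unweighted Forman-Ricci curvature of a vertex: sum over incident edges
fr = [sum(4 - G.degree(v) - G.degree(w) for w in G[v]) for v in nodes]

deg = [G.degree(v) for v in nodes]
bc = nx.betweenness_centrality(G)
cc = nx.clustering(G)

rho_deg, _ = spearmanr(fr, deg)                      # curvature vs degree
rho_bc, _ = spearmanr(fr, [bc[v] for v in nodes])    # curvature vs betweenness
rho_cc, _ = spearmanr(fr, [cc[v] for v in nodes])    # curvature vs clustering
```

Since the vertex curvature contains a $-(\deg v)^2$ term, the strong negative correlation with degree (and, via degree, with betweenness centrality) is expected by construction.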
\begin{figure} \includegraphics[width=.7\columnwidth]{Figure-2.pdf} \caption{Communication efficiency as a function of the fraction of edges removed in model and real networks. (a) Erd\H{o}s-R\'{e}nyi (ER) model. (b) Watts-Strogatz (WS) model. (c) Barab\'{a}si-Albert (BA) model. (d) Hyperbolic random geometric graph (HGG) model. (e) US Power Grid. (f) Yeast protein interactions. (g) Euro road. (h) Email communication. } \label{fig:rob_edge} \end{figure} \begin{figure} \includegraphics[width=.7\columnwidth]{Figure-3.pdf} \caption{Communication efficiency as a function of the fraction of vertices removed in model and real networks. (a) Erd\H{o}s-R\'{e}nyi (ER) model. (b) Watts-Strogatz (WS) model. (c) Barab\'{a}si-Albert (BA) model. (d) Hyperbolic random geometric graph (HGG) model. (e) US Power Grid. (f) Yeast protein interactions. (g) Euro road. (h) Email communication.} \label{fig:rob_node} \end{figure} \subsection{Relative importance of Forman-Ricci and Ollivier-Ricci curvature for topological robustness of networks} We employ a global network measure, communication efficiency \cite{Latora2001}, to quantify the effect of removing edges or vertices on the large-scale connectivity of networks. The communication efficiency $E$ of a graph $G$ is given by: \begin{equation} E = \frac{1}{n(n-1)}\sum_{i < j \in V(G)}\frac{1}{d_{ij}}, \end{equation} where $d_{ij}$ denotes the shortest path length between the pair of vertices $i$ and $j$, $n$ is the number of vertices in the graph, and $V(G)$ denotes the set of vertices in the graph. Note that communication efficiency captures the resilience of a network to failure in the face of perturbations, as it essentially corresponds locally to the clustering coefficient and globally to the inverse of the characteristic path length.
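A direct Python transcription of the efficiency formula above (pairs of vertices in different components contribute $1/\infty = 0$; networkx is assumed):

```python
import networkx as nx
from itertools import combinations

def communication_efficiency(G):
    # E = (1 / (n (n - 1))) * sum_{i < j} 1 / d_ij, where pairs of
    # vertices lying in different components contribute 0
    n = G.number_of_nodes()
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    total = sum(1.0 / lengths[i][j]
                for i, j in combinations(G, 2) if j in lengths[i])
    return total / (n * (n - 1))
```

With this normalization a complete graph attains $E = 1/2$; summing instead over ordered pairs $i \neq j$ would double the value without changing any of the removal-order comparisons below.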
We investigated the relative importance of the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature of edges for the large-scale connectivity of networks by removing edges according to the following criteria: random order, increasing order of the Forman-Ricci curvature of an edge, increasing order of the Augmented Forman-Ricci curvature of an edge, increasing order of the Ollivier-Ricci curvature of an edge, and decreasing order of edge betweenness centrality. In both model and real networks, we find that removing edges in increasing order of the Ollivier-Ricci, Forman-Ricci or Augmented Forman-Ricci curvature, or in decreasing order of edge betweenness centrality, leads to faster disintegration than the random removal of edges (Figure \ref{fig:rob_edge}). Furthermore, in most cases, removing edges in increasing order of the Ollivier-Ricci curvature or in decreasing order of edge betweenness centrality typically leads to faster disintegration than removing edges in increasing order of the Forman-Ricci curvature (Figure \ref{fig:rob_edge}). We remark that both the Ollivier-Ricci curvature of an edge and edge betweenness centrality are global measures, while the Forman-Ricci curvature of an edge is a local measure dependent on the nearest neighbors of an edge. We also investigated the relative importance of the Ollivier-Ricci, Forman-Ricci and Augmented Forman-Ricci curvature of vertices for the large-scale connectivity of networks by removing vertices according to the following criteria: random order, increasing order of the Forman-Ricci curvature of a vertex, increasing order of the Augmented Forman-Ricci curvature of a vertex, increasing order of the Ollivier-Ricci curvature of a vertex, decreasing order of betweenness centrality of a vertex, decreasing order of vertex degree, and decreasing order of the clustering coefficient of a vertex.
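A removal experiment of this kind can be sketched as follows, here for edges ranked by the unweighted Forman-Ricci curvature. As a simplification, the ranking is computed once on the intact network, and the efficiency helper and example graph are illustrative assumptions:

```python
import networkx as nx
from itertools import combinations

def efficiency(G, n):
    # communication efficiency with the i < j normalization
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    total = sum(1.0 / lengths[i][j]
                for i, j in combinations(G, 2) if j in lengths[i])
    return total / (n * (n - 1))

def removal_curve(G, edge_key):
    # remove edges one at a time in the order given by edge_key and
    # record the communication efficiency after each removal
    H = G.copy()
    n = H.number_of_nodes()
    curve = [efficiency(H, n)]
    for e in sorted(G.edges(), key=edge_key):
        H.remove_edge(*e)
        curve.append(efficiency(H, n))
    return curve

G = nx.karate_club_graph()
# increasing Forman-Ricci curvature: most negative edges first
fr_curve = removal_curve(G, lambda e: 4 - G.degree(e[0]) - G.degree(e[1]))
```

The same driver accepts any ranking (Ollivier-Ricci, edge betweenness, random shuffles), so the curves of Figure \ref{fig:rob_edge} differ only in the `edge_key` passed in.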
In both model and real networks, we find that removing vertices in increasing order of the Ollivier-Ricci, Forman-Ricci or Augmented Forman-Ricci curvature, or in decreasing order of betweenness centrality or degree, leads to faster disintegration than the random removal of vertices (Figure \ref{fig:rob_node}). Furthermore, in most model as well as real networks, removing vertices in increasing order of the Ollivier-Ricci curvature typically leads to faster disintegration than removing vertices in increasing order of the Forman-Ricci or Augmented Forman-Ricci curvature (Figure \ref{fig:rob_node}). Also, in most model as well as real networks, removing vertices in increasing order of the Ollivier-Ricci curvature typically leads to at least slightly faster disintegration than removal based on any other measure considered here (Figure \ref{fig:rob_node}). In summary, vertices or edges with highly negative Ollivier-Ricci curvature are found to be more important than vertices or edges with highly negative Forman-Ricci curvature for maintaining the large-scale connectivity of most networks analyzed here. \section{Conclusions} We have performed an empirical investigation of two discretizations of Ricci curvature, Ollivier's Ricci curvature and Forman's Ricci curvature, in a number of model and real-world networks. The two discretizations were derived using different theoretical considerations and methods, and thus convey insights into quite different geometrical properties and behaviors of complex networks. Specifically, the Ollivier-Ricci curvature captures clustering and coherence in networks, while the Forman-Ricci curvature captures dispersal and topology.
Moreover, in the context of weighted networks, the Ollivier-Ricci curvature, by its very definition, implicitly treats edge weights as probabilities, while Forman's Ricci curvature fundamentally views edge weights as abstractions of lengths and vertex weights as, for instance, concentrated area measures. This suggests that the Ollivier-Ricci curvature is intrinsically better suited to study probabilistic phenomena on networks, while the Forman-Ricci curvature is better suited to investigate networks where edge weights correspond to distances. Still, our results, obtained in a wide range of both model and real-world networks, consistently demonstrate that the two types of Ricci curvature are highly correlated in many networks. The immediate benefit of this realization is that one can compute the Forman-Ricci curvature in large networks to gain some first insight into the computationally much more demanding Ollivier-Ricci curvature. Furthermore, the state-of-the-art computational implementation of the Ollivier-Ricci curvature can handle weights only on edges rather than on vertices in weighted networks. In addition, while computing the Ollivier-Ricci curvature of an edge in a weighted network, a necessary step is the normalization of the neighboring edge weights. In contrast, the mathematical definition of the Forman-Ricci curvature can incorporate any set of positive weights, placed simultaneously at the vertices and the edges. Furthermore, the Augmented Forman-Ricci curvature can also account for higher-dimensional simplicial complexes, making it a natural and simple-to-employ tool for understanding networks with explicit geometric structure, especially hyper-networks. Therefore, our empirical observations on the correlation between these two different notions of Ricci curvature in networks warrant deeper investigation in the future.
We remark that while the present manuscript was in the final stages of submission, a preprint \cite{Pouryahya2017} devoted to the comparison problem in biological networks appeared on the arXiv server, independently of our present study. \section*{Acknowledgments} We thank the anonymous reviewers for their constructive comments, which have helped improve the manuscript. E.S. and A.S. thank the Max Planck Institute for Mathematics in the Sciences, Leipzig, for their warm hospitality. A.S. would like to acknowledge support from the Max Planck Society, Germany, through the award of a Max Planck Partner Group in Mathematical Biology.
\section{Introduction} \label{sec:sec1} In a 2001 paper,~\cite{Kitaev2001} Kitaev discovered the one-dimensional topological superconductor (1DTSC) -- a one-dimensional proximity-induced $p$-wave superconductor hosting a single Majorana quasiparticle (MQP) at each of its ends. More recently, experimentally feasible realizations of the 1DTSC phase have been proposed \cite{LutchynTSC,OppenTSC} in semiconducting nanowires in proximity to an $s$-wave superconductor. In these settings, the interplay of Rashba spin-orbit coupling and a magnetic-field-induced Zeeman splitting in the nanowire gives rise to an effective $p$-wave pairing. By now, several groups have reported first experimental signatures of MQPs in InSb nanowires.~\cite{LeoMaj,LarssonXu,HeiblumMaj} Besides the fundamental interest attached to the experimental discovery of Majorana fermions in nature, MQPs as realized in a 1DTSC also have intriguing features relating to various aspects of fundamental quantum physics: on the one hand, the non-Abelian anyonic nature of MQPs shows great promise for topological quantum information processing architectures.~\cite{Kitaev2001,Nayak:2008p51,Alicea:2011p260} On the other hand, the delocalized pair of MQPs at the ends of a 1DTSC can be viewed as a single ordinary (spinless Dirac) fermionic zero mode, leading to electron teleportation mechanisms,~\cite{Semenoff:2007p1479,Fu:2010} i.e., coherent long-range quantum effects. In a hybrid system of a 1DTSC and two single-level quantum dots, ground-state entanglement of the occupation number of the quantum dots has been reported in Ref.~\onlinecite{XuDots}.
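Kitaev's result can be illustrated numerically: diagonalizing the Bogoliubov-de Gennes (BdG) matrix of a finite open Kitaev chain at the sweet spot $\mu = 0$, $t = \Delta$ yields exactly two zero-energy eigenvalues, corresponding to the MQP at each end. The following numpy sketch uses one common basis convention and omits an overall factor of $1/2$; it is a generic illustration, not the specific model of this article:

```python
import numpy as np

def kitaev_bdg(N, mu, t, delta):
    # BdG matrix of an open Kitaev chain in the
    # (c_1 ... c_N, c_1^dag ... c_N^dag) basis (common convention;
    # the overall factor of 1/2 is omitted)
    h = -mu * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    d = delta * (np.eye(N, k=1) - np.eye(N, k=-1))  # antisymmetric pairing
    return np.block([[h, d], [-d, -h]])

# sweet spot mu = 0, t = delta: one Majorana quasiparticle per end,
# i.e. two exact zero eigenvalues in the particle-hole symmetric spectrum
E = np.linalg.eigvalsh(kitaev_bdg(N=20, mu=0.0, t=1.0, delta=1.0))
n_zero_modes = int(np.sum(np.abs(E) < 1e-10))
```

Moving away from the sweet spot (e.g. $|\mu| > 2t$) gaps out the zero modes, which is the hallmark of the topologically trivial phase.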
The understanding of genuine quantum effects on macroscopic length scales is one of the main motivations to study nano-electromechanical systems~\cite{Poot} and nano-optomechanical systems.~\cite{AKM} In recent years, decisive progress towards cooling nanomechanical resonators to the ground state has been reported.~\cite{GroundStateCooling1,GroundStateCooling2,GroundStateCooling3,GroundStateCooling4} However, long-distance entanglement of nanomechanical systems, which would be another experimental hallmark in fundamental quantum physics, has not been achieved yet, although a variety of theoretical proposals have been made.~\cite{Eisert:2004aa,Vitali:2007is,Cavities,MirrorMirrorEntanglement1,MirrorMirrorEntanglement2,MirrorMirrorEntanglement3,MirrorMirrorEntanglement4,Hammerer:2009tf,BEC1,BEC2,StefanJanJensBjoern} In the interest of quantum coherence, different interaction mechanisms between spatially separated systems have been suggested, ranging from coupling to a common optical mode \cite{Cavities,MirrorMirrorEntanglement1,MirrorMirrorEntanglement2,MirrorMirrorEntanglement3,MirrorMirrorEntanglement4} to exploiting the large coherence length of a Bose-Einstein condensate \cite{BEC1,BEC2} and a Cooper-pair condensate,~\cite{StefanJanJensBjoern} respectively. A hybrid system of a 1DTSC and one nano-electromechanical oscillator (NEMO) was studied in Ref.~\onlinecite{WalterNEMSmsb}. The article is organized as follows. In Sec.~\ref{sec:sec2}, we summarize our main results. We propose the setup, discuss a possible realization of it, and introduce the Hamiltonian of the underlying model in Sec.~\ref{sec:sec3}. We present and discuss the results for the generated entanglement in Sec.~\ref{sec:sec4}. Finally, we summarize in Sec.~\ref{sec:sec5}. \begin{figure}[ht] \centering \includegraphics[width=0.95\columnwidth]{setup} \caption{\label{fig:setup} (Color online) Schematic of the proposed setup.
Two nano-electromechanical oscillators (blue), each tunnel coupled to one end of a one-dimensional topological superconductor. The one-dimensional topological superconductor is sketched as a nanowire (yellow) placed on top of a mesoscopic superconductor (gray). At each end of the wire, a single Majorana quasiparticle (orange) is located. A gate voltage $V_{g}$ is applied (across a gate capacitance $C_{g}$) to the mesoscopic superconductor, resulting in a finite charging energy $E_{c} = e^{2}/2 C_{g}$ of the superconductor. The nano-electromechanical oscillators are modeled as normal metal leads at the chemical potentials $\mu_{L/R}$. } \end{figure} \section{Main results} \label{sec:sec2} In this work, we bridge the research fields of topological superconductivity and entanglement in nanomechanical systems by proposing a mechanism to entangle two nano-electromechanical oscillators (NEMOs). More concretely, we demonstrate that the electron teleportation mechanism reported in Ref.~\onlinecite{Fu:2010} can lead to an effective superexchange coupling of two distant NEMOs located in the vicinity of the opposite ends of a mesoscopic 1DTSC. The combination of electron-phonon coupling on the NEMOs and a finite Coulomb charging energy $E_{c} = e^{2}/2 C_{g}$ on the 1DTSC is shown to be the crucial ingredient for achieving long-range entanglement in the proposed setup. The teleportation mechanism guarantees coherence at length scales that significantly exceed those of the superconducting condensate wave function. In the proposed setup (see Fig.~\ref{fig:setup}), entanglement between two distant conducting NEMOs can be generated by simply driving a current through the device. Using a non-Markovian master equation approach, we demonstrate that for NEMOs in their ground states, switching on a tunneling current induces entanglement that persists over many oscillator periods.
In the Markovian limit, we derive a Lindblad master equation which provides an intuitive understanding of how number states of the NEMOs are dynamically entangled by the superexchange coupling via the 1DTSC. \section{Model} \label{sec:sec3} We will now show how an effective coupling between two NEMOs can be generated via an electron teleportation mechanism involving the MQPs located at the ends of a 1DTSC. The proposed setup is shown in Fig.~\ref{fig:setup} and is modeled by the following Hamiltonian (we put $\hbar$ $=$ $e$ $=$ $k_{B}$ $=$ $1$) \begin{align} H &= \sum_{\alpha=L,R} H_{\trm{osc}}^{(\alpha)} + H_{\trm{lead}}^{(\alpha)} + H_{\rm{tun}} + H_{c} \, , \nn \end{align} where $ H_{\trm{osc}}^{(\alpha)} = \op_{\alpha}^{2}/2 m_{\alpha} +m_{\alpha} \Omega_{\alpha}^{2} \ox_{\alpha}^{2}/2 $ describes the two NEMOs denoted by $\alpha=L,R$~ with effective mass $m_{\alpha}$, frequency $\Omega_{\alpha}$, and position and momentum operators $\ox_{\alpha}$ and $\op_{\alpha}$, respectively. For simplicity, we assume that $m_{L}=m_{R}$~and $\Omega_{L}=\Omega_{R}$. The conducting NEMOs act as two independent normal metal leads which are characterized by the Hamiltonians $ H_{\trm{lead}}^{(\alpha)} = \sum_{k} \ve_{k}^{\pd} \psi_{\alpha k}^{\dag} \psi_{\alpha k}^{\pd} $ and which are held at the chemical potentials $\mu_{L/R}$. The tunneling Hamiltonian $ H_{\rm{tun}}$~ from a normal metal lead into a 1DTSC without charging energy can be written as~\cite{Bolech:2007} \begin{align}\label{eqn:mo2} H_{\rm{tun}} = \sum_{k} [ i T^{\pd}_{L} ( \psi^{\pd}_{Lk} + \psi^{\dag}_{Lk} ) \gamma_{L}^{\pd} + (L \to R) ] \, , \end{align} where, in general, the tunneling amplitudes $T_{\alpha}$ have an exponential dependence on the displacement of the NEMOs, i.e., $T_{\alpha} \sim e^{-x_{\alpha}/x_{0}}$. 
As the oscillation amplitude is assumed to be small compared to the mean distance between the edge of the 1DTSC and the NEMO, we approximate $T_{\alpha}$ to depend linearly on the oscillator displacement: $T_{\alpha} = t_{0\alpha} + t_{x\alpha} \ox_{\alpha}$. Such a tunneling gap between a suspended gold beam and an electronic reservoir was realized in Ref.~\onlinecite{Flowers:2007}. Other possibilities include, for instance, replacing the suspended metallic beam by a vibrating metallic tip or by a shuttle-like device.~\cite{shuttle} As yet another possibility to achieve such a coupling, the suspended point contacts could be replaced by an electrostatically gated connection to the 1DTSC that is modulated piezoelectrically or capacitively by the NEMO. The left ($\gamma_{L}$) and right ($\gamma_{R}$) MQPs satisfy $\left\{ \gamma_{i}, \gamma_{j} \right\} = 2 \delta_{ij}$ and can be expressed as $ \gamma_{L} = (c^{\pd} + c^{\dag}) $ and $ \gamma_{R} = -i(c^{\pd} - c^{\dag}) $, where $c$ and $c^{\dag}$ are the annihilation and creation operators, respectively, of a single spinless Dirac fermion that is delocalized over the two ends of the 1DTSC. Equation~(\ref{eqn:mo2}) contains so-called anomalous terms, which break particle number conservation in the mean-field picture of superconductivity, as they microscopically involve the creation or annihilation of a Cooper pair that is not explicitly accounted for at that level of description. In a 1DTSC with zero charging energy $E_{c}=0$, the NEMOs independently couple locally to the two ends of the 1DTSC, and the effective coupling necessary for entangling the oscillators is absent. However, the situation is different in a mesoscopic superconductor with a finite charging energy $E_{c}$, which gives rise to an explicit dependence of the energy on the number of electrons. Hence, one has to go beyond the effective description of Eq.
(\ref{eqn:mo2}) and explicitly keep track of the change in the number of Cooper pairs in the condensate during anomalous tunneling processes. The gate voltage $V_{g}$~is assumed to be adjusted such that the number of Cooper pairs $N_C$~ in the ground state of the 1DTSC is $N_0$~and the occupation number $n_c=c^\dag c$~of the delocalized fermionic bound state is zero. The charging Hamiltonian $H_{c}$ then reads $ H_{c} = E_{c} \left(2 N_{C} + n_{c} - 2 N_{0} \right)^{2} $. We would like to point out that $n_{c} = \frac{1}{2}( i \gamma_{L} \gamma_{R} + 1)$~as appearing in $H_c$ effectively couples the two MQPs $\gamma_L$~and $\gamma_R$~even if the direct overlap of the two bound state wave functions is negligible. This coupling is crucial for the electron teleportation mechanisms as it prevents the dynamical independence of the two MQPs. We would like to focus on the parameter regime $T_{\alpha}, V < \Omega_{\alpha} < E_{c} < \Delta \to \infty$. In this limit, non-local tunneling processes involving continuum states of the superconductor (e.g. Crossed Andreev Reflection or electron cotunneling) are suppressed. Moreover, in this scenario, there are no resonant levels in the superconductor for first-order tunneling processes. However, second-order cotunneling processes via virtual states with energies on the order of $E_c$~are allowed and lead to an effective superexchange coupling between the NEMOs as we will derive now. We neglect processes containing intermediate states with two or more excess electrons on the superconductor which are suppressed by an energy denominator of at least $4 E_{c}$~and are hence less relevant. This approximation excludes all terms where an extra Cooper pair is created. The only anomalous second order tunneling process which is then allowed is the anomalous cotunneling depicted in Fig. \ref{fig:anomalouscot}. 
\begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{anomalouscot2} \caption{\label{fig:anomalouscot} (Color online) Anomalous cotunneling process decomposed into two (virtual) steps. Looking only at input and output states, one electron tunnels from the left to the right lead (blue). The state of the superconductor is unchanged. This is the only anomalous second-order process contributing to the current that does not contain states with $E>E_{c}$. The oval denotes a Cooper pair in the condensate. The level in the middle is the subgap fermion $c$. The tunnel processes depicted here appear in the effective tunnel Hamiltonian Eq.~(\ref{eqn:mo3}). } \end{figure} After truncating the Hilbert space of the superconductor to the eigenstates with $E\le E_c$, we obtain a three-dimensional Hilbert space with the basis \begin{align*} &\lvert 0\rangle=\lvert N_C=N_0,n_c=0\rangle &&E_{0}=0 \\ &\lvert 1\rangle=\lvert N_C=N_0,n_c=1\rangle &&E_{1}=E_{c}\\ &\lvert 2\rangle=\lvert N_C=N_0-1,n_c=1\rangle &&E_{2}=E_{c} \end{align*} In this basis, $H_c$ can be represented as $ H_{c} = \trm{diag}\{ 0, E_{c}, E_{c} \} $. The tunneling Hamiltonian (\ref{eqn:mo2}) constrained to the truncated Hilbert space of the superconductor reads \begin{align}\label{eqn:mo3} H_{\rm{tun}} \approx~ & iT_{L} \sum_{k} \left(\lvert 1 \rangle \langle 0 \rvert \psi_{Lk} - \lvert 2 \rangle \langle 0 \rvert \psi_{Lk}^{\dag} \right) \nn \\ +~&T_{R} \sum_{k} \left(-\lvert 1 \rangle \langle 0 \rvert \psi_{Rk} + \lvert 2 \rangle \langle 0 \rvert \psi_{Rk}^{\dag} \right) + \text{h.c.} \, . \end{align} The terms in Eq.~(\ref{eqn:mo3}) that involve the breaking and recombination of a Cooper pair, respectively, are illustrated in Fig.~\ref{fig:anomalouscot}. Assuming that the superconductor is initially in its ground state $\lvert 0\rangle$, we can integrate out the first-order tunnel coupling to the excited states $\lvert 1\rangle,\lvert 2\rangle$.
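Schematically, this elimination takes the standard second-order form (a sketch; the lead energies $\ve_{k}$ are neglected against $E_{c}$ in the energy denominators, and $P = \lvert 0 \rangle \langle 0 \rvert$ denotes the projector onto the ground state of the superconductor)
\begin{align}
H_{\trm{tun}}^{(\trm{eff})} \approx - \frac{1}{E_{c}} \, P H_{\rm{tun}} \left( \lvert 1 \rangle \langle 1 \rvert + \lvert 2 \rangle \langle 2 \rvert \right) H_{\rm{tun}} P \, , \nn
\end{align}
since both intermediate states $\lvert 1 \rangle$ and $\lvert 2 \rangle$ lie at the energy $E_{c}$ above $\lvert 0 \rangle$.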
That way, we obtain an effective direct tunneling Hamiltonian between the left and the right lead containing the leading second-order cotunneling processes in the original tunnel coupling Eq.~(\ref{eqn:mo2}). Explicitly, we get \begin{align}\label{eqn:mo4} H_{\trm{tun}}^{(\trm{eff})} = -\frac{T^{2}_{L} + T^{2}_{R}}{E_{c}} - \frac{2T_{L} T_{R}}{E_{c}} \sum_{k} [ i \psi_{Lk}^{\dag} \psi^{\pd}_{Rk} + \trm{h.c.} ] \, . \end{align} Recalling the position dependence $T_{\alpha} = t_{0\alpha} + t_{x\alpha} \ox_{\alpha}$ of the tunnel couplings, it becomes clear that Eq.~(\ref{eqn:mo4}) also contains an effective direct coupling between the NEMOs. This formally mimics the superexchange coupling that could also be achieved using a single quantum dot with a finite charging energy. However, we would like to stress two conceptual advantages of the electron teleportation-induced superexchange coupling. First, it guarantees phase-coherent coupling between the NEMOs over distances where the confinement-induced level spacing on a quantum dot would become very small. Second, the tunneling density of states associated with the delocalized fermion $c$ in our setting is spatially strongly peaked around the interface between the NEMO and the 1DTSC. In a large single-level quantum dot, in contrast, the same spectral weight would be smeared out over the ``bulk'' of the dot. In the following, we will demonstrate how this teleportation-induced superexchange coupling can be employed to generate entanglement between the oscillators over distances that are not limited by the coherence length of the superconducting condensate. \section{Entanglement} \label{sec:sec4} As shown above (see Eq.~(\ref{eqn:mo4})), tunnel coupling two NEMOs to a 1DTSC leads to an effective direct coupling between the NEMOs. Therefore, we expect the generation of entanglement in the bipartite continuous-variable system consisting of the two NEMOs.
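The origin of this effective oscillator-oscillator coupling is transparent upon inserting the position dependence $T_{\alpha} = t_{0\alpha} + t_{x\alpha} \ox_{\alpha}$ into the prefactor of the cotunneling term of Eq.~(\ref{eqn:mo4}) (written here, for brevity, for symmetric, real couplings $t_{0\alpha} = t_{0}$ and $t_{x\alpha} = t_{x}$):
\begin{align}
T_{L} T_{R} = t_{0}^{2} + t_{0} t_{x} \left( \ox_{L} + \ox_{R} \right) + t_{x}^{2} \, \ox_{L} \ox_{R} \, , \nn
\end{align}
where the last term multiplies the electronic cotunneling operator and thereby couples the two displacements directly.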
We study the time evolution of entanglement between the two NEMOs using the logarithmic negativity as an entanglement measure: $E_{N}(\rho_{\trm{osc}}) = \log_{2}(\| \rho_{\trm{osc}}^\Gamma\|_{1})$.~\cite{Negativity1,Negativity2,Negativity3} Here, $\rho_{\trm{osc}}^\Gamma$ is the partial transpose of the state of the bipartite system. For a Gaussian state, the logarithmic negativity can be computed from the covariance matrix $\fc_{j,k}(t) = \tr[ \rho_{\trm{osc}} (t) \{R_j, R_k \} ]$, where ${R} = (\ox_{1},\op_{1},\ox_{2},\op_{2})^{T}$ is the vector of quadratures. We compute the time dependence of the entries of $\fc(t)$ by solving the equation of motion for the system's density matrix $\rho_{\trm{osc}}(t)$ employing a time-convolutionless master equation method.~\cite{Breuer:2002wp} Within our effective tunneling Hamiltonian approach (see Eq.~(\ref{eqn:mo4})), the master equation in the Born approximation is given by \begin{align}\label{eqn:nemsEnt4} \dot{\rho}_{\trm{osc}}(t) = &-i \com{H_{\trm{osc}},\rho_{\trm{osc}}(t)} \\ - \int_{0}^{t} d\tau \, & \tr_{\trm{leads}} \left[ H_{\trm{tun}}^{(\trm{eff})}, \left[ H_{\trm{tun}}^{(\trm{eff})}(\tau-t),\rho_{\trm{osc}}(t) \otimes \rho_{\trm{leads}} \right] \right] \, . \nn \end{align} For the sake of simplicity, in the following, we assume identical NEMOs ($\Omega_{\alpha}=\Omega$ and $m_{\alpha}=m$). We also choose a symmetric coupling and real tunneling amplitudes ($t_{0\alpha} = t_{0}$ and $t_{x\alpha} = t_{x}$). Up to second order in $t_x$, i.e., only taking into account terms $\sim (t_{0} t_{x})^{2}$ in Eq.~(\ref{eqn:nemsEnt4}), the time dependence of the covariance matrix $\fc(t)$ can be obtained similarly to Ref.~\onlinecite{StefanJanJensBjoern}; for technical details, we refer to the Appendix. In Fig.~\ref{fig:Enall}, we show results for the logarithmic negativity, taking for simplicity the vacuum state as an initial state. The Gaussian character of this initial state is preserved at all times of the dynamics.
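For reference, the logarithmic negativity of a two-mode Gaussian state follows from the symplectic spectrum of the partially transposed covariance matrix. The following minimal NumPy sketch is our own illustration (not the computation of the main text); quadratures are taken dimensionless and normalized such that the vacuum covariance matrix is the identity, and the test state is a two-mode squeezed vacuum rather than the covariance matrices obtained from Eq.~(\ref{eqn:nemsEnt4}):

```python
import numpy as np

def log_negativity(sigma):
    """Logarithmic negativity of a two-mode Gaussian state.

    sigma: 4x4 covariance matrix in the ordering (x1, p1, x2, p2),
    normalized so that the vacuum state has sigma = identity.
    """
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    # Partial transposition of mode 2 flips the sign of det C.
    delta_pt = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    # Smallest symplectic eigenvalue of the partially transposed state.
    nu_minus = np.sqrt((delta_pt - np.sqrt(delta_pt**2 - 4.0 * np.linalg.det(sigma))) / 2.0)
    return max(0.0, -np.log2(nu_minus))

def two_mode_squeezed_cm(r):
    """Covariance matrix of a two-mode squeezed vacuum with squeezing r."""
    A = np.cosh(2 * r) * np.eye(2)
    C = np.sinh(2 * r) * np.diag([1.0, -1.0])
    return np.block([[A, C], [C, A]])

print(log_negativity(two_mode_squeezed_cm(0.0)))  # 0.0 (vacuum is separable)
print(log_negativity(two_mode_squeezed_cm(1.0)))  # ~2.885, i.e., 2r/ln(2)
```

For the two-mode squeezed vacuum, this reproduces the known result $E_{N} = 2r/\ln 2$.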
Figure~\ref{fig:Enall}a) shows the time dependence of the logarithmic negativity $E_{N}$ for a fixed bias voltage $V = \mu_{L} - \mu_{R}$ and for various values of the charging energy $E_{c}$. We see that the two NEMOs become entangled right after the tunneling has been suddenly switched on. The generated entanglement is higher but decays faster for smaller values of $E_{c}$ compared to larger values of $E_{c}$. Figure~\ref{fig:Enall}b) shows $E_{N}$ over time for a fixed charging energy $E_{c}$ for different bias voltages $V$. Here, we see that lower voltages lead to a higher logarithmic negativity. This can be understood by recognizing that the bias voltage acts like an effective temperature of the leads and thereby causes decoherence. As a first result, we conclude that an effective interaction mediated by an electron teleportation mechanism involving MQPs leads to the generation of entanglement of two distant NEMOs. \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{Enall} \caption{\label{fig:Enall} (Color online) a) Logarithmic negativity as a function of $E_{c}$ and time for $V/\Omega=0.5$. For small $E_{c}$, the generated entanglement is higher but decays faster than for large $E_{c}$. b) Logarithmic negativity as a function of $V$ and time for $E_{c}/\Omega = 4$. For lower bias voltages, the entanglement is higher. In both cases, the other parameters are $T_{el} = 0$, $L_{c}/\Omega=1$, and $t_{0} t_{x}/\sqrt{m \Omega} = 0.1$. } \end{figure} To lowest order in tunneling $(t_{0} t_{x})^{2}$, the entanglement is due to damping and decoherence mechanisms described by time-dependent kernels $\fg(t)$, cf. the Appendix. However, the effective tunneling Hamiltonian, Eq.~(\ref{eqn:mo4}), together with the equation of motion for $\rho_{\trm{osc}}$ leads to contributions of order $t_{x}^{4}$ in the equation of motion for the NEMOs. Next, we analyze exactly these contributions.
Restricting ourselves to the low-bias limit, we show that entanglement between the NEMOs can be generated in a purely dissipative fashion, described by a Lindblad master equation. In the limit of low bias voltages, the applied bias cannot excite either of the NEMOs. This allows us to employ the rotating-wave approximation, i.e., excitations can only be interchanged between the two NEMOs. Taking, in addition, the Markovian limit, the equation of motion reduces to \begin{align} &\dot \rho_{\trm{osc}} = \fl [\rho_{\trm{osc}}] = -i [H_{\trm{osc}}, \rho_{\trm{osc}}] + \gamma \mathcal{D}[O] \rho_{\trm{osc}} \nn \end{align} with the Liouvillian superoperator $\fl$ and a Lindblad dissipator $\fd[O]\rho = O \rho O^{\dag} - \frac{1}{2} \left\{ O^{\dag} O, \rho \right\}$. In our case, we have $\gamma = \frac{\pi \, t_{x}^{4}}{E_{c}^{2}} \frac{\rho_{L} \rho_{R}}{(m \Omega)^{2}} V$ and $O = {a}_{L}^{\dag} {a}_{R}^{\pd} + {a}_{R}^{\dag}{a}_{L}^{\pd}$, where ${a}_{\alpha} = (\ox_{\alpha} \sqrt{m \Omega} + i \op_{\alpha}/\sqrt{m\Omega} ) /\sqrt{2}$ and ${a}_{\alpha}^{\dag} = (\ox_{\alpha} \sqrt{m \Omega} - i \op_{\alpha}/\sqrt{m\Omega} ) /\sqrt{2}$ are bosonic annihilation and creation operators, respectively. $\rho_{\alpha}$ is the density of states in lead $\alpha$, which we assume to be constant in the relevant energy window. If other dissipation channels, such as an additional bosonic heat bath, are absent, the steady state of the system ($\fl [\rho_{ss}] = 0$) is not unique. However, if the number of excitations ($N_{\trm{tot}}$ = $\sum_{\alpha} {a}_{\alpha}^{\dag} {a}_{\alpha}^{\pd}$ = $\sum_{\alpha} n_{\alpha}$) is kept fixed, the steady state is unique.
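The fixed-sector steady state can be checked numerically. The following small NumPy sketch is our own illustration: the oscillators are truncated to three Fock states each (which is exact here, since $O$ conserves $N_{\trm{tot}}$ and the initial state has two excitations), and the Hamiltonian part is dropped because, for identical frequencies, it acts trivially within a fixed-$N_{\trm{tot}}$ sector:

```python
import numpy as np

# Two oscillators, each truncated to Fock states {0, 1, 2}.
dim = 3
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator
I = np.eye(dim)
aL, aR = np.kron(a, I), np.kron(I, a)
O = aL.conj().T @ aR + aR.conj().T @ aL         # Hermitian Lindblad operator

def fock(nL, nR):
    v = np.zeros(dim * dim)
    v[nL * dim + nR] = 1.0
    return v

# Initial state |n_L = 1, n_R = 1>.
psi = fock(1, 1)
rho0 = np.outer(psi, psi)

# For a Hermitian jump operator, D[O] dephases in the eigenbasis of O:
# rho_ij(t) = rho_ij(0) * exp(-gamma/2 * (l_i - l_j)**2 * t).
lam, U = np.linalg.eigh(O)
decay = np.exp(-0.5 * (lam[:, None] - lam[None, :])**2 * 50.0)  # gamma*t = 50
rho_ss = U @ (decay * (U.conj().T @ rho0 @ U)) @ U.conj().T

# Expected steady state: (|1,1><1,1| + |Phi><Phi|)/2 with
# |Phi> = (|2,0> + |0,2>)/sqrt(2).
phi = (fock(2, 0) + fock(0, 2)) / np.sqrt(2)
rho_expected = 0.5 * rho0 + 0.5 * np.outer(phi, phi)
print(np.allclose(rho_ss, rho_expected, atol=1e-8))  # True

# Logarithmic negativity E_N = log2 ||rho^Gamma||_1 = log2(3/2).
rho_pt = rho_ss.reshape(dim, dim, dim, dim).transpose(0, 3, 2, 1).reshape(dim**2, dim**2)
E_N = np.log2(np.abs(np.linalg.eigvalsh(rho_pt)).sum())
print(E_N)  # ~0.585 = log2(3/2)
```

The dephasing solution used here is the exact long-time limit of $\fd[O]$ for Hermitian $O$; the rate-time product is an illustrative value chosen large enough to reach the steady state.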
For instance, the pure initial state $\lvert \Psi \rangle = \lvert n_{L}=1,n_{R}=1 \rangle$ is dissipatively driven to the (mixed) entangled state \begin{align} \rho_{ss} = \frac{1}{2} \lvert \Psi \rangle \langle \Psi \rvert + \frac{1}{2} \lvert \Phi \rangle \langle \Phi \rvert \, , \nn \end{align} where $\lvert \Phi \rangle = \frac{1}{\sqrt{2}} ( \lvert 2,0 \rangle + \lvert 0,2 \rangle)$ is a maximally entangled state. The degree of entanglement of $\rho_{ss}$ is readily quantified by calculating $E_{N}(\rho_{ss}) = \log_{2}(3/2)$. In the presence of a finite-temperature heat bath, sectors of different particle number will start to couple. Thereby, the stationary state becomes unique, and its entanglement is unsurprisingly lost. However, processes destroying and generating the entanglement now compete with each other. This still allows for the generation of entanglement in a dissipative fashion. The rates of the entanglement-generating and entanglement-destroying processes (characterized by an independent rate determined by the microscopic environment of the NEMOs) are governed by their respective Liouvillian gaps; for details, we also refer to Ref.~\onlinecite{StefanJanJensBjoern}. \section{Concluding discussion} \label{sec:sec5} To summarize, we have shown that entanglement between two distant NEMOs can be achieved by tunnel coupling the NEMOs to two MQPs residing at the ends of a 1DTSC. A finite charging energy on the 1DTSC leads to an effective superexchange coupling between the NEMOs via the non-local MQPs. This electron teleportation mechanism guarantees phase coherence over length scales $\sim 1/E_c$ that are significantly larger than the superconducting coherence length $\sim 1/\Delta$. Our proposal allows for entangling two mesoscopic NEMOs initially cooled to their ground states in an all-electronic setup by driving a current through the device.
In the Markov approximation, the equation of motion for the system's density matrix $\rho_{\trm{osc}}$ reduces to a Lindblad master equation. In this limit, NEMOs initially prepared in number states can be entangled by purely dissipative means. We briefly elaborate on the conceptual difference between our work and Ref.~\onlinecite{XuDots}, where the non-local nature of a pair of MQPs was exploited to create a charge-entangled ground state of two single-level quantum dots in the Coulomb blockade regime. In contrast, in our proposal, the electron charge degrees of freedom are in fact only used to generate an effective superexchange coupling between two rather \emph{macroscopic mechanical degrees of freedom}. In our setting, entanglement is not a ground-state property of a closed system but is dynamically generated by driving a current between the two metallic leads. Remarkably, the thermalization (decoherence) of the electrons after their tunneling into these reservoirs does not affect the coherence times of the entangled NEMOs. Our analysis relies crucially on the hierarchy $V,T_{\alpha}<\Omega<E_c<\Delta$ of the involved energy scales. Finally, we would like to discuss experimentally relevant energy scales in the proposed setup, thereby demonstrating the feasibility of the assumed parameter regime. For an $\trm{InSb}$ wire proximity coupled to a $\trm{NbTiN}$ superconductor, experimental data reported in Ref.~\onlinecite{LeoMaj} indicate an induced gap on the order of $\Delta = 250\,\mu\trm{eV}$. By varying the size of the superconductor, the charging energy $E_{c}$ can be adjusted. Here, we assume $E_{c} = 20\,\mu\trm{eV}$. Frequencies of doubly clamped NEMOs can be as high as $\Omega = 500\,\trm{MHz} \approx 2\,\mu\trm{eV}$,~\cite{Li2008} i.e., one order of magnitude smaller than a typical charging energy. Still, such NEMOs could be passively cooled to their ground state at typical dilution refrigerator temperatures.
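A quick numerical sanity check of these scales (our own back-of-the-envelope conversion; the frequency and charging energy are the values quoted above):

```python
# Convert the quoted NEMO frequency of 500 MHz to an energy scale and
# compare it with the assumed charging energy E_c = 20 micro-eV.
h_eVs = 4.135667696e-15          # Planck constant in eV*s
f_Hz = 500e6                     # NEMO frequency, 500 MHz
E_osc_ueV = h_eVs * f_Hz * 1e6   # oscillator quantum in micro-eV
E_c_ueV = 20.0                   # assumed charging energy in micro-eV

print(round(E_osc_ueV, 2))            # 2.07 (micro-eV)
print(round(E_c_ueV / E_osc_ueV, 1))  # 9.7, i.e., about one order of magnitude
```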
Taking these estimates, the localization length of the MQPs at the ends of the 1DTSC is about $2\,\mu\trm{m}$. For the assumed charging energies, the MQPs could be separated by at least $20\,\mu\trm{m}$; hence, direct tunneling between them is negligible. \begin{acknowledgements} We would like to thank Christoph Bruder, Patrik Recher, Thomas Schmidt, and Bj{\"o}rn Trauzettel for stimulating discussions. SW acknowledges financial support from the Swiss SNF and the NCCR Quantum Science and Technology. JCB acknowledges financial support from the Swedish Research Council (VR) and the ERC Synergy Grant UQUAM. \end{acknowledgements}
\section{Experiments} \label{sec:result} \subsection{Environments \& data collection} We perform experiments on three continuous control tasks with state-based inputs. \paragraph{UMaze~\citep{mujocomaze}.} The first environment, shown in \autoref{fig:umaze}, is a two-dimensional U-shaped maze with a continuous action space and a fixed initial position. We generate the training data for this environment by deploying a random policy with a randomized start position in the maze. We collect 10k trajectories of length 1k. We evaluate the goal-conditioned agent by giving the agent a goal sampled at random in the environment and computing the final Euclidean distance to the goal. \paragraph{RoboYoga Walker~\citep{mendonca2021discovering}.} Introduced by \citet{mendonca2021discovering}, the challenging RoboYoga benchmark is based on the Walker domain of the DeepMind control suite~\citep{tassa2018deepmind}, and consists of 12 goals that correspond to body poses inspired by yoga (\textit{e}.\textit{g}., lying down, raising one leg, or balancing). We consider the state-based version of the task, and use the task-agnostic dataset from \citet{yarats2022don} generated with an unsupervised exploration policy. It contains 10k trajectories of length 1k obtained by deploying the ``proto''~\citep{yarats2021reinforcement} algorithm in the Walker domain. The success of the evaluation policy is assessed from the pose of the humanoid at the end of the episode. \paragraph{Pusher~\citep{nair2018visual}.} We also apply our method to \emph{Pusher}, a realistic robotic environment shown in \autoref{fig:pusher_perf}~(left), where a robot arm (red) needs to push a puck (blue) to a specified location on a table. To build the offline dataset, we generated 10k random trajectories of length 200.
Similar to prior works~\citep{nair2018visual, pong2020skew, mezghani2022walk}, we generated 500 goals at random in the state space, and we measured the performance as the final Euclidean distance between the puck and its target location. \subsection{Ablation \& design choices} \label{sec:ablation} We first show that the graph structure is necessary for long-term planning. Then, we explain the importance of the directedness of the graph for tasks with asymmetric behaviors. Finally, we show the impact of transition augmentation techniques when labeling data for the goal-conditioned policy. \paragraph{Necessity of graph-based rewards.} An important component of our method is the construction of the graph $\mathcal{M}$ that enables computing a distance with good global properties. To empirically validate this hypothesis, we performed a comparison between the goal-conditioned policy trained with RNet rewards (\textit{i}.\textit{e}., by using the distance $d_l$ from equation (5)) and the one trained with both distance terms as reward. We run this experiment on the UMaze environment, and show results in \autoref{fig:graph_vs_rnet}. We note that the model trained with graph rewards outperforms the one trained with RNet rewards overall, particularly for distant goals (\textit{i}.\textit{e}., rooms 3 and 4). We also notice that the model trained with RNet rewards is slightly better for goals that are close to the initial position. This highlights the fact that the RNet is good at estimating local distances. The qualitative visualization in \autoref{fig:graph_dist_qual1}~\&~\ref{fig:graph_dist_qual2} confirms this observation, as it shows low values between states in the first and fourth rooms.
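The role of a graph distance as a dense reward can be sketched with a self-contained toy example (our own illustration, not the paper's implementation: states are abstract nodes, edge weights stand in for locally estimated transition costs, and the negative graph distance plays the role of the dense reward):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a directed weighted graph.

    adj: {node: [(neighbor, weight), ...]}
    """
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy directed graph: going "down" (a -> b) is cheap, climbing back is
# costly, mimicking dynamics that are not fully reversible.
adj = {
    "a": [("b", 1.0)],
    "b": [("a", 5.0), ("c", 1.0)],
    "c": [("b", 1.0)],
}

d_from_a = dijkstra(adj, "a")
d_from_c = dijkstra(adj, "c")
print(d_from_a["c"], d_from_c["a"])  # 2.0 6.0 -- asymmetric distances

# A dense goal-conditioned reward can be the negative graph distance
# from the current state to the goal.
reward = lambda s, g: -dijkstra(adj, s).get(g, float("inf"))
print(reward("a", "c"))  # -2.0
```

The asymmetry of the resulting distances ($d(a,c) \neq d(c,a)$) is exactly what an undirected graph would discard.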
\begin{figure}[t] \centering \begin{subfigure}{.16\linewidth} \centering \includegraphics[width=\linewidth]{fig/maze_U4rooms.png} \caption{UMaze} \label{fig:umaze} \end{subfigure} \begin{subfigure}{.25\linewidth} \centering \includegraphics[width=\linewidth]{fig/values_nograph.png} \caption{RNet distance} \label{fig:graph_dist_qual1} \end{subfigure} \begin{subfigure}{.25\linewidth} \centering \includegraphics[width=\linewidth]{fig/values_graph.png} \caption{Graph distance} \label{fig:graph_dist_qual2} \end{subfigure} \begin{subfigure}{.26\linewidth} \centering \includegraphics[width=\linewidth]{fig/graph_vs_rnet_rewards.pdf} \caption{RNet vs. Graph Rewards} \label{fig:graph_vs_rnet} \end{subfigure} \caption{(a) UMaze environment; (b), (c) heatmaps of rewards computed with the RNet and graph distances, respectively; and (d) performance of the goal-conditioned policy trained with RNet and graph-based rewards on UMaze. In (b) and (c), high rewards are shown in yellow, and low rewards in black.} \vspace{-10pt} \end{figure} \paragraph{Importance of graph directedness.} We then investigate the importance of the asymmetry of the RNet and the directedness of the graph. To this end, we implement an undirected version of our method where the RNet is symmetric and the graph is undirected. All other components of our method are unchanged. First, we compare the performance of both variants in the UMaze task in \autoref{fig:directed_vs_undirected_maze}, and note that the asymmetric RNet and directed graph in our approach significantly improve the goal-conditioned policy performance ($+11\%$ on success rate), especially on goals close to the initial location, \textit{i}.\textit{e}., goals in rooms 1 and 2. We then analyze qualitative visualizations of the shortest path in the undirected and directed graphs in the RoboYoga task, as shown in \autoref{fig:shortest_path}.
In the undirected case, the humanoid defies the laws of gravity and is encouraged to stand on its head by flipping backwards, which might be extremely difficult or even infeasible. In the directed case, the shortest path encourages the agent to first get back on its legs and then lean forward. In this example, gravity makes the dynamics of the environment asymmetric and not fully reversible, which justifies the directed formulation described in our method. \begin{figure}[t] \captionsetup[subfigure]{justification=centering} \begin{subfigure}{.35\linewidth} \includegraphics[width=\linewidth]{fig/directed_vs_undirected_maze.pdf} \caption{Comparison on the UMaze task} \label{fig:directed_vs_undirected_maze} \end{subfigure} \begin{subfigure}{.64\linewidth} \includegraphics[width=\linewidth]{fig/walker_path_undirected.png} \includegraphics[width=\linewidth]{fig/walker_path_directed.png} \caption{Shortest-path visualization for undirected (top) and directed (bottom) graphs} \label{fig:shortest_path} \end{subfigure} \caption{Importance of graph directedness on (a) the UMaze task and (b) the RoboYoga Walker task.} \vspace{-10pt} \end{figure} \paragraph{Transition sampling strategy.} As a final ablation, we study the utility of the transition augmentation techniques described in~\autoref{sec:policy_training}. We evaluate four possible variants of our method: (i) without any augmentation, (ii) with edge augmentation only, (iii) with subgoal augmentation only, and (iv) with both augmentations. We run this experiment on the RoboYoga task, and show results in \autoref{fig:transition_comp}. We observe that both of the augmentation techniques improve the performance of the goal-conditioned agent, with subgoal augmentation showing greater improvement. Moreover, we note that combining both augmentations improves the performance further. For the remainder of the experiments, we use both augmentation techniques.
\begin{figure}[h] \centering \includegraphics{fig/maze_perf.pdf} \caption{Performance on the UMaze task. We show the success rate for goals sampled at random in each of the four rooms, as well as the average over all rooms.} \label{fig:maze_perf} \end{figure} \subsection{Comparison to prior work} \textbf{Baselines.} We compare our method to prior work on unsupervised goal-conditioned policy learning. We perform an apples-to-apples comparison by implementing the baselines using the same learning framework as our method, and changing the reward relabeling process. We compare with the following baselines: \begin{itemize} \item \textbf{Hindsight Experience Replay [HER]~\citep{andrychowicz2017hindsight}.} This is a re-implementation of the standard unsupervised RL technique, adapted to the offline setting. More precisely, we relabel sub-trajectories from $\mathcal{D}$ with a sparse reward, which is equal to 1 only for the final transition of the sub-trajectory, and 0 everywhere else. Following \citet{chebotar2021actionable}, we also label sub-trajectories with goals sampled at random in $\mathcal{D}$ and zero reward. \item \textbf{HER~\citep{andrychowicz2017hindsight} with random negative action} is a variant of HER where, for a transition in $\mathcal{D}$, we sample an action uniformly at random in the action space and label it with zero reward. This helps overcome the problem of overestimation of the Q-values for unseen actions mentioned in \citet{chebotar2021actionable}. \item \textbf{Actionable Models~\citep{chebotar2021actionable}.} This approach is based on goal-conditioned Q-learning with hindsight relabeling. We re-implemented the goal relabeling procedure that uses the Q-value at the final state of sub-trajectories in $\mathcal{D}$ to enable goal chaining, as well as the negative action sampling trick. \end{itemize} \textbf{Comparison on UMaze.} We compare our method to the baselines on the UMaze task, and show results in~\autoref{fig:maze_perf}.
We observe that our model outperforms all baselines overall, and shows greater improvements on challenging goals that are far from the initial position. Interestingly, we note that Actionable Models reaches goals in the first room only. This confirms the intuition that sparse rewards make it difficult for the policy to learn long-horizon tasks. \begin{figure}[t] \centering \begin{subfigure}{.49\linewidth} \centering \includegraphics{fig/walker_perf.pdf} \caption{Comparison to baselines} \label{fig:walker_perf} \end{subfigure} \begin{subfigure}{.49\linewidth} \centering \includegraphics[width=\linewidth]{fig/transition_augmentation.pdf} \caption{Impact of Transition Augmentation} \label{fig:transition_comp} \end{subfigure} \caption{Performance on the RoboYoga Walker task} \vspace{-10pt} \end{figure} \textbf{Comparison on RoboYoga Walker.} In a second experiment, we compare our method to baselines on the RoboYoga task, as shown in \autoref{fig:walker_perf}. Here again, our method outperforms prior work, and Actionable Models does not make any significant improvement over HER. The results broken down by goal are shown in the supplementary material. Overall these results suggest that our dense reward shaping method allows for faster and more robust offline goal-conditioned policy training. \begin{figure}[h] \centering \includegraphics{fig/pusher_perf.pdf} \caption{Performance on the Pusher task (lower is better). We report the final average, hand, and puck distance to the goal for our model and all baselines.} \label{fig:pusher_perf} \end{figure} \textbf{Comparison on Pusher.} As a final experiment, we compared our method to prior work on a realistic robotic environment, as shown in~\autoref{fig:pusher_perf}. 
Our policy trained offline is evaluated by sampling a goal at random in the state space, and measuring three different metrics: (i) the \emph{hand distance}, which corresponds to the final distance between the end of the robot arm and the target, (ii) the \emph{puck distance}, which measures the distance between the final puck location and the target, and (iii) the \emph{average distance}, the average of the first two metrics. Our method outperforms the baselines on this task, and our goal-conditioned agent is able to sequentially place the puck at the goal location, and then place the hand at its target location. In contrast, \textbf{HER~\citep{andrychowicz2017hindsight}} places the puck at the target location with a performance similar to our method, but lacks precision on the hand location. \section{Introduction} While the goal of realizing general autonomous agents requires mastery of a large and diverse set of skills, achieving this by focusing on each skill individually with standard reinforcement learning (RL) frameworks is prohibitive. This is primarily due to the need for manually designed reward functions and environment interactions for each skill. Unsupervised RL has opened a way for learning agents that can execute diverse abilities without supervision (\textit{i}.\textit{e}., hand-crafted rewards), and then be further adapted to downstream tasks through few-shot or zero-shot generalization \citep{pathak2017curiosity, burda2018exploration, yarats2021reinforcement, eysenbach2018diversity}. However, learning policies with such methods is impractical with real robots, as they require millions of interactions when trained online. Recently, a line of study has emerged that uses pre-collected datasets of trajectories and trains policies offline (\textit{i}.\textit{e}., without additional interactions with the environment) \citep{yarats2022don, lambert2022challenges}.
More precisely, given a dataset of reward-free trajectories and a reward function designed to solve a specific task, the agent learns offline by relabeling the transitions in the dataset with the reward function. This setting is particularly relevant in robotics, where data collection is extremely time-consuming: disentangling data collection and policy learning in this context allows for faster policy iteration. However, this setting would require designing one specific reward function and learning one policy for each individual task. An important question for scaling offline robot learning is therefore to find ways of learning multi-task policies from already collected datasets. Recent works~\citep{chebotar2021actionable, yang2022rethinking, li2022hierarchical} have targeted this problem from a goal-conditioned perspective: given a dataset of previously collected trajectories, the objective is to learn a goal-oriented agent that can reach any state in the dataset. The advantages of this formulation are two-fold: first, it makes skills easy to interpret, and second, it does not require any adaptation at test time. Making this framework unsupervised requires breaking free from hand-crafted rewards, as proposed by \citet{chebotar2021actionable}, who learn goal-conditioned policies offline through hindsight relabeling~\citep{andrychowicz2017hindsight}. However, their approach is subject to the pitfall of learning from sparse rewards, and can be inefficient in long-horizon tasks. In this work, we present a self-supervised reward shaping method that enables building an offline dataset with dense rewards. To this end, we develop a self-supervised learning phase that aims at learning the structure and dynamics of the environment before training the policy.
During this phase, we: (i) train a reachability network~\citep{savinov2018episodic} to estimate the local distance in the state space $\mathcal{S}$, then (ii) extract a set of representative states that covers $\mathcal{S}$, and finally (iii) build a graph on this set to approximate the global distance in $\mathcal{S}$. When training the goal-conditioned policy, we use the graph in two ways: to compute rewards through shortest path distance, and to create transitions of intermediate difficulty on the path to the goal. We evaluate our method on complex continuous control tasks, and compare it to previous state-of-the-art offline approaches~\citep{chebotar2021actionable, andrychowicz2017hindsight}. We show that our graph-based reward method learns good goal-conditioned policies by leveraging transitions from a dataset of past experience, without any additional interactions with the environment or manually designed rewards. Moreover, we show that, contrary to prior work that uses datasets collected with a policy trained with supervised rewards~\citep{chebotar2021actionable}, our method allows for learning goal-conditioned policies even from datasets of poor quality, \textit{e}.\textit{g}., containing trajectories sampled with a random policy. Our work is thus the first to learn goal-conditioned policies from offline datasets without any supervision, as it does not require any hand-crafted reward function at any stage: data collection, policy training, or evaluation. \section{Limitations} Our work presents a method that learns a goal-conditioned state-reaching policy without any supervised reward signal or online interactions with the environment. However, it relies on the availability of a pre-collected dataset of trajectories, whose coverage of the state space should be large enough for proper policy training.
Although such data may already be available, as for the RoboYoga Walker task, or may be collected with random policies, as we did on the UMaze and Pusher tasks, this step can be challenging for other environments. Moreover, we evaluated our method exclusively on simulated environments, and we did not perform any experiments on real robots, for which pre-collected datasets with expert demonstrations may be available~\citep{chebotar2021actionable}. \section{Conclusion: Summary and Limitations} \label{sec:conclusion} We proposed a method for learning multi-task policies from pre-generated datasets in an offline and unsupervised fashion, \textit{i}.\textit{e}., without requiring any additional interaction with the environment or manually designed rewards. Our method leverages a self-supervised stage that aims at learning the dynamics of the environment from the offline dataset, and that allows for shaping a dense reward function. It shows significant improvement over prior work based on hindsight relabeling, especially on long-horizon tasks, where dense rewards are crucial for learning a good policy. The main limitation of our method is that it relies on the availability of a pre-collected dataset of trajectories, with a sufficiently large coverage of the state space for proper policy learning. Although such data may already be available, as for the RoboYoga Walker task, or may be collected with random policies, as we did on the UMaze and Pusher tasks, this step can be challenging for other environments. Another limitation is that we evaluated our method exclusively on simulated environments, and we did not perform any experiments on real robots, for which pre-collected datasets with expert demonstrations may be available~\citep{chebotar2021actionable}.
\clearpage \acknowledgments{Karteek Alahari is supported in part by the ANR grant AVENUE (ANR-18-CE23-0011).} \section{Self-supervised Reward Shaping} \begin{SCfigure} \includegraphics[width=.5\linewidth]{fig/graph_building.pdf} \caption{Overview of the graph building algorithm. Given a transition $(s_i, s_{i+1}) \in \mathcal{D}$, we add $s_i$ as a node if it is distant enough from existing nodes in the graph. Moreover, we add an edge in the graph between the incoming nearest neighbor of $s_i$ and the outgoing nearest neighbor of $s_{i+1}$.} \label{fig:graph_building} \end{SCfigure} We now describe our self-supervised reward shaping method. It comprises three stages that we detail below. In the first stage, we train a Reachability Network (RNet)~\citep{savinov2018episodic} on the trajectories in $\mathcal{D}$ to predict whether two states are reachable from one another. The second stage consists of building a directed graph $\mathcal{M}$ whose nodes are a subset of states in $\mathcal{D}$, and whose edges connect reachable states. We employ the RNet as a criterion to avoid adding similar states to $\mathcal{M}$, so that its nodes cover the states in $\mathcal{D}$ uniformly. The final stage consists of training the goal-conditioned policy on transitions and goals sampled from $\mathcal{D}$. It is trained with dense rewards computed as the sum of a global distance term (based on the graph distance in $\mathcal{M}$) and a local one (based on the RNet). A key aspect of our method is that the whole training only uses trajectories from the pre-collected dataset $\mathcal{D}$, without running a single action in the environment. We now describe each component in more detail. \subsection{Reachability network} In order to learn a good local distance between states in $\mathcal{D}$, we adopt an asymmetric version of the Reachability Network (RNet)~\citep{savinov2018episodic}.
The general idea of RNet is to approximate the distance between states in the environment by the average number of steps it takes for a random policy to go from one state to another. We adapted the original formulation with two modifications: first, we use exploration trajectories from $\mathcal{D}$ instead of random trajectories, and second, we leverage the temporal direction, because a state can be reachable from another without the converse being true. Let $(s^a_1, ..., s^a_T)$ denote a trajectory in $\mathcal{D}$, where $a$ is a trajectory index. We define a \textit{reachability label} $y^{ab}_{ij}$ for each pair of observations $(s^a_i, s^b_j)$ by \begin{equation} y^{ab}_{ij} = \begin{cases} 1 \quad \text{if} \ a = b \ \text{and} \ 0 \leq j - i \leq \tau_\text{reach}, \\ 0 \quad \text{otherwise}, \end{cases} \quad \text{for } 1 \leq i, j \leq T, \end{equation} where the \textit{reachability threshold} $\tau_\text{reach}$ is a hyperparameter. The reachability label is equal to 1 \textit{iff} the states are in the same trajectory and the number of steps from $s^a_i$ to $s^b_j$ is below $\tau_\text{reach}$, as shown in \autoref{fig:rnet_labels}. Note that $y^{ab}_{ij} \neq y^{ab}_{ji}$ in general. We train a siamese neural network $R$, the RNet, to predict the reachability label $y^{ab}_{ij}$ from a pair of observations $(s^a_i, s^b_j)$ in $\mathcal{D}$. The RNet consists of an embedding network $g$, and a fully-connected network $f$ to compare the embeddings, \textit{i}.\textit{e}., \begin{equation} R(s^a_i, s^b_j) = \sigma \left[ f(g(s^a_i), g(s^b_j)) \right], \end{equation} where $\sigma$ is a sigmoid function. A higher $R$ value indicates that one state is easily reachable from the other via a random walk, so the two states can be considered close in the environment. More precisely, $R$ takes values in $(0, 1)$ and $s'$ is reachable from $s$ if $R(s, s') \geq 0.5$. RNet is learned in a self-supervised fashion, as the ground-truth labels needed to train the network are generated automatically.
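To make the labeling rule concrete, here is a minimal Python sketch of the label generation; the function and argument names are illustrative, not identifiers from our implementation:

```python
def reachability_label(a, i, b, j, tau_reach):
    """Label for the ordered pair (s^a_i, s^b_j): 1 iff both states belong to
    the same trajectory and s^b_j lies at most tau_reach steps ahead of s^a_i."""
    return 1 if (a == b and 0 <= j - i <= tau_reach) else 0

# The label is asymmetric: y_ij and y_ji can differ.
assert reachability_label(0, 3, 0, 5, tau_reach=5) == 1  # two steps forward
assert reachability_label(0, 5, 0, 3, tau_reach=5) == 0  # backwards in time
assert reachability_label(0, 3, 1, 5, tau_reach=5) == 0  # different trajectories
```

Since the label only depends on trajectory identity and the (signed) step gap, positive and negative pairs can be mined from $\mathcal{D}$ without any manual annotation.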
\subsection{Directed graph} \label{sec:memory} In the next phase, we use trajectories in $\mathcal{D}$ to build a directed graph $\mathcal{M}$ that captures the high-level dynamics of the environment, as illustrated in \autoref{fig:graph_building}. We want the nodes of $\mathcal{M}$ to evenly represent the states in $\mathcal{D}$. This is achieved by filtering the states in $\mathcal{D}$: a state is added to $\mathcal{M}$ only if it is distant enough from all the other nodes in $\mathcal{M}$. More precisely, a state $s \in \mathcal{D}$ is added to $\mathcal{M}$ if and only if \begin{equation} R(s, n) < 0.5 \; \text{and} \; R(n, s) < 0.5, \quad \text{for all} \; n \in \mathcal{M} . \end{equation} Note that we require both directions to be novel. This filtering avoids redundancy by preventing similar states from being added to the memory. It also has a balancing effect, because it limits the number of states that can be added from a certain area even if it is visited by the agent many times in $\mathcal{D}$. Once the nodes are selected, we connect pairs that are reachable from one another. To this end, we employ trajectories in $\mathcal{D}$ because they contain actual feasible transitions. Given a transition $s_i \to s_j$ in $\mathcal{D}$, we add an edge $n_i \rightarrow n_j$ if $s_i$ can be reached from node $n_i$ and node $n_j$ can be reached from $s_j$. This way, we have a chain $n_i \rightarrow s_i \rightarrow s_j \rightarrow n_j$ and can assume $n_j$ is reachable from $n_i$. Concretely, we select node $n_i$ to be the incoming nearest neighbor ($\text{NN}_\text{in}$) to $s_i$, and $n_j$ to be the outgoing nearest neighbor ($\text{NN}_\text{out}$) from $s_j$, \textit{i}.\textit{e}., \begin{equation} n_i = \text{NN}_\text{in}(s_i)= \operatorname*{argmax}_{n \in \mathcal{M}} R(n, s_i), \quad n_j = \text{NN}_\text{out}(s_j) = \operatorname*{argmax}_{n \in \mathcal{M}} R(s_j, n).
\end{equation} By performing this procedure over all the transitions in $\mathcal{D}$, we turn $\mathcal{M}$ into a directed graph whose edges represent reachability from one node to another. \begin{figure}[t] \centering \begin{subfigure}{.3\linewidth} \centering \includegraphics[width=\linewidth]{fig/rnet_labels.png} \caption{Training labels for RNet} \label{fig:rnet_labels} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[width=\linewidth]{fig/reward.pdf} \caption{Visualization of the reward computation} \label{fig:reward} \end{subfigure} \caption{ Visualization of our dense reward shaping method. (a) shows how training labels are generated for training the RNet: given a state $s_i$, positive pairs are sampled in the same trajectory within a threshold $\tau_\text{reach}$, and the rest of the trajectory forms negative pairs. (b) presents how rewards are implemented as a combination of a global distance term (green), computed with the shortest path in the graph between the outgoing nearest neighbor ($\text{NN}_\text{out}$) of the state $s_{t+1}$ and the incoming nearest neighbor of the goal ($\text{NN}_\text{in}$), and a local distance term (red) computed using the RNet value between $\text{NN}_\text{in}$ and $g$. } \label{fig:other_figures} \end{figure} \subsection{Distance function for policy training} We then use the obtained directed graph to compute a global distance in the state space. Indeed, RNet predicts reachability between $s_i$ and $s_j$, so we can directly use it as a local distance metric \begin{equation} d_l(s_i, s_j) = 1 - R(s_i, s_j), \quad \forall s_i, s_j \in \mathcal{S}. \end{equation} However, this reachability metric is local by construction, confined by the reachability threshold, so there is no guarantee that the RNet predictions have good global properties. In contrast, the directed graph $\mathcal{M}$ captures the high-level global dynamics of the environment.
We can easily derive a distance function $d_\mathcal{M}(n_i, n_j)$ between any pair of nodes in $\mathcal{M}$ by computing the length of the shortest path in this graph, provided the graph is connected. In practice, we can use a trick to connect the graph if necessary, by adding an edge between the pair of nodes from different connected components that has the maximum RNet value. Moreover, we can extend this distance $d_\mathcal{M}$ to a global distance function $d_g$ in the state space $\mathcal{S}$ by finding, for any pair $s_i$ and $s_j$ in $\mathcal{S}$, their nearest neighbors in the corresponding direction. More precisely, \begin{equation} d_g(s_i, s_j) = d_\mathcal{M}(\text{NN}_\text{out}(s_i),\text{NN}_\text{in}(s_j)), \quad \forall s_i, s_j \in \mathcal{S}. \end{equation} The distance $d_g$ between two states in the state space becomes the length of the shortest path between their respective closest nodes in the graph. This process, summarized in \autoref{fig:reward}, propagates the good local properties of RNet to obtain a well-shaped distance function for states that are further away. Since $d_g$ captures global distances while $d_l$ captures fine-grained local distances, we use their combination as the final distance function: $\forall s_i, s_j \in \mathcal{S}, \quad d(s_i, s_j) = d_g(s_i, s_j) + d_l(s_i, s_j)$. \subsection{Policy training} \label{sec:policy_training} The last phase of our method is training the goal-conditioned policy offline. Here, we create an offline replay buffer $\mathcal{B}$ that is filled with relabeled data. We randomly sample a transition $(s_t, a_t, s_{t+1})$ from $\mathcal{D}$ as well as a goal $g$, and relabel the transition with reward $r_t=-d(s_{t+1}, g)$. We then push the relabeled transition $(s_t, a_t, g, r_t, s_{t+1})$ to $\mathcal{B}$.
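As an illustration, the combined reward $r_t = -(d_g + d_l)$ described above can be sketched in a few lines of Python. We assume a successor map `succ` over node indices, a trained predictor `rnet`, and nearest-neighbor helpers `nn_out` / `nn_in`; all names are placeholders rather than our actual implementation:

```python
from collections import deque

def shortest_path_length(succ, src, dst):
    """Breadth-first search over the directed graph M; succ[u] lists the
    successors of node u. Returns the number of edges on a shortest path."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in succ.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return float("inf")  # in practice the graph is made connected beforehand

def reward(s_next, goal, succ, rnet, nn_out, nn_in):
    d_local = 1.0 - rnet(s_next, goal)                                  # d_l
    d_global = shortest_path_length(succ, nn_out(s_next), nn_in(goal))  # d_g
    return -(d_global + d_local)

# Toy graph 0 -> 1 -> 2 with stub networks.
succ = {0: [1], 1: [2]}
r = reward("s", "g", succ, rnet=lambda s, g: 0.5,
           nn_out=lambda s: 0, nn_in=lambda g: 2)
assert r == -2.5  # d_g = 2 edges, d_l = 0.5
```

Since the graph is unweighted, plain BFS suffices for the shortest path; a weighted variant would use Dijkstra's algorithm instead.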
In order to create a curriculum that artificially guides the agent towards the goal, we experimented with two different transition augmentation techniques: \paragraph{Sub-goal augmentation.} Let $(s_t, a_t, g, r_t, s_{t+1})$ denote a relabeled transition and $(n_0, ..., n_{P-1})$ the shortest path in the graph $\mathcal{M}$ between $n_0 = \text{NN}_\text{out}(s_t)$ and $n_{P-1} = \text{NN}_\text{in}(g)$. The augmentation technique consists of adding to the replay buffer every transition $(s_t, a_t, n_i, r_t^i, s_{t+1})$ for all $i~\in~\{0, \dots, P - 1\}$, where $r_t^i=-d(s_{t + 1}, n_i)$. In other words, given a transition $(s_t, a_t, s_{t+1})$ and a goal $g$ from $\mathcal{D}$, we push to the replay buffer a set of relabeled transitions with all goals on the shortest path from $s_t$ to $g$ (and their corresponding rewards). \paragraph{Edge augmentation.} Similar to the subgoal augmentation technique, we consider a relabeled transition $(s_t, a_t, g, r_t, s_{t+1})$ and the associated shortest path $(n_0, ..., n_{P-1})$. This time, we keep the same goal $g$ for every augmented transition, but for every edge $(n_{i-1}, n_i), i \in \{1, \dots, P-1\}$, we add the relabeled transition $(s^i_t, a^i_t, g, r^i_t, s^i_{t+1})$ to $\mathcal{B}$, where $(s^i_t, a^i_t, s^i_{t+1}) \in \mathcal{D}$, $\text{NN}_\text{in}(s^i_t) = n_{i - 1}$, $\text{NN}_\text{out}(s^i_{t + 1}) = n_i$ and $r_t^i = -d(s^i_{t+1}, g)$. Note that the existence of such a transition in $\mathcal{D}$ is guaranteed by construction: an edge is added to the graph from one node to another \textit{iff} there exists a transition in $\mathcal{D}$ whose corresponding nearest neighbors are these two nodes (in the same order). Once the replay buffer $\mathcal{B}$ is filled, the goal-conditioned policy can be trained using any off-policy algorithm. In our implementation, we chose Soft Actor-Critic~\citep{haarnoja2018soft}, as it is known to require little hyper-parameter tuning, and is widely used in the literature.
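The sub-goal augmentation step can be sketched as follows; `path` stands for the shortest-path nodes $(n_0, \dots, n_{P-1})$ and `dist` for the combined distance $d$, and both names are illustrative rather than taken from our implementation:

```python
def subgoal_augment(buffer, s_t, a_t, s_next, path, dist):
    """Relabel one transition once per node on the shortest path to the goal,
    yielding intermediate goals with their associated dense rewards."""
    for n in path:
        buffer.append((s_t, a_t, n, -dist(s_next, n), s_next))
    return buffer

# Toy usage with string states and a constant distance stub.
buf = subgoal_augment([], "s0", "a0", "s1",
                      path=["n0", "n1"], dist=lambda s, n: 1.0)
assert buf == [("s0", "a0", "n0", -1.0, "s1"), ("s0", "a0", "n1", -1.0, "s1")]
```

Edge augmentation would follow the same pattern but iterate over consecutive path edges, looking up a dataset transition per edge while keeping the original goal fixed.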
\section{Preliminaries} Let $\mathcal{E} = (\mathcal{S}, \mathcal{A}, P, p_0, \gamma, T)$ define a reward-free Markov decision process (MDP), where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces respectively, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \ensuremath{\mathrm{I\! R}}_+$ is a state-transition probability function, $p_0: \mathcal{S} \rightarrow \ensuremath{\mathrm{I\! R}}_+$ is an initial state distribution, $\gamma$ is the discount factor, and $T$ is the task horizon. In the goal-conditioned setting, the objective is to learn a policy $\pi : \mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A}$ that maximizes the expectation of the cumulative return over the goal distribution, where $\mathcal{G}$ denotes the goal space. Here, we make the common assumption that states and goals are defined in the same form, \textit{i}.\textit{e}., $\mathcal{G} \subset \mathcal{S}$. We assume that we have access to a dataset $\mathcal{D}$ of pre-collected episodes generated by using any data collection algorithm in $\mathcal{E}$. Each episode is stored in $\mathcal{D}$ as a series of $(s, a, s')$ tuples, where $s, s' \in \mathcal{S}$ and $a \in \mathcal{A}$. In the general offline formulation introduced by \citet{yarats2022don}, the dataset $\mathcal{D}$ can be relabeled by evaluating any reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \ensuremath{\mathrm{I\! R}}$ at each tuple in $\mathcal{D}$, and adding the resulting tuple $(s, a, r(s, a), s')$ to the relabeled dataset $\mathcal{D}_r$. We can extend this protocol to the goal-oriented setting by considering a goal distribution $p_\mathcal{G}$ over the goal space, and any goal-conditioned reward function $r : \mathcal{S} \times \mathcal{A} \times \mathcal{G} \rightarrow \ensuremath{\mathrm{I\! R}}$.
Given a tuple $(s, a, s')$ in $\mathcal{D}$, we relabel it by sampling a goal $g \sim p_\mathcal{G}$, computing $r(s, a, g)$ and adding the resulting tuple $(s, a, g, r(s, a, g), s')$ to the relabeled dataset $\mathcal{D}_{r, p_\mathcal{G}}$. Once the relabeled dataset $\mathcal{D}_{r, p_\mathcal{G}}$ is generated, we can learn a goal-conditioned policy by executing any offline RL algorithm. The algorithm runs completely offline, by sampling tuples from $\mathcal{D}_{r, p_\mathcal{G}}$ and without any interaction with the environment. The goal-conditioned policy is then evaluated online in $\mathcal{E}$ on a set of fixed evaluation goals that is not known during training. \section{Related Work} \paragraph{Goal-conditioned RL.} In its original formulation, goal-conditioned reinforcement learning has been tackled by several methods~\citep{kaelbling1993learning, schaul2015universal, andrychowicz2017hindsight, nasiriany2019planning}. The policy learning process is supervised in these works: the set of evaluation goals is available at train time, as well as a reward function that guides the agent to the goal. Several works propose solutions for generating goals automatically when training goal-conditioned policies, including self-play~\citep{sukhbaatar2018intrinsic, sukhbaatar2018learning, openai2021asymmetric}, and adversarial student-teacher policies~\citep{campero2020learning}. A recent line of research~\citep{warde2018unsupervised, nair2018visual, ecoffet2019go, pong2020skew, venkattaramanujam2019self, hartikainen2019dynamical, mendonca2021discovering, mezghani2022walk} focuses on learning goal-conditioned policies in an unsupervised fashion. The objective is to train general agents that can reach any goal state in the environment without any supervision (reward, goal-reaching function) at train time.
In particular, \citet{mendonca2021discovering} train a model-based agent that learns to discover novel goals with an explorer model, and reach them with an achiever policy via imagined rollouts. \paragraph{Offline RL.} The data collection technique is an important aspect when studying the training of policies from pre-collected datasets. In this context, the first works assumed access to policies trained with task-specific rewards~\citep{fu2020d4rl, gulcehre2020rl}. More recently, several methods have proposed leveraging unsupervised exploration to collect datasets for offline RL~\citep{yarats2022don,lambert2022challenges}. In particular, \citet{yarats2022don} create ExoRL, a dataset of trajectories collected on the DeepMind control suite~\citep{tassa2018deepmind} without any hand-crafted rewards. Similar to URLB~\citep{laskin2021urlb}, ExoRL benchmarks a number of exploration algorithms~\citep{pathak2017curiosity, eysenbach2018diversity, pathak2019self,yarats2021reinforcement}, and evaluates the performance of a policy trained on the corresponding offline datasets relabeled with task-specific rewards. \paragraph{Multi-task Offline RL.} Recent works proposed to learn multiple tasks from pre-collected datasets, starting with methods~\citep{endrawis2021efficient} that generate goals to improve the offline data collection process in a self-supervised way. This problem has also been studied in the supervised setting~\citep{yang2022rethinking, ma2022far} and when learning hierarchical policies~\citep{li2022hierarchical}. In a setting closely related to our work, Actionable Models~\citep{chebotar2021actionable} considers the problem of learning goal-conditioned policies from offline datasets without interacting with the environment, and with no task-specific rewards. They employ goal-conditioned Q-learning with hindsight relabeling~\citep{andrychowicz2017hindsight}.
As opposed to their work, which relies on learning from sparse rewards, we propose to leverage a self-supervised training stage to densely shape rewards. \section{Implementation details} \subsection{Self-supervised reward shaping} We provide pseudo-code for the two stages of our approach: the graph building process is shown in \autoref{alg:graph_building}, and the steps for filling the replay buffer are shown in \autoref{alg:rb_filling}. Here we write $R(s, \mathcal{M})$ for the maximum RNet value between $s$ and all nodes in $\mathcal{M}$, and symmetrically $R(\mathcal{M}, s)$, \textit{i}.\textit{e}., \begin{equation*} R(s, \mathcal{M}) := \max\limits_{m \in \mathcal{M}} {R(s, m)}, \qquad R(\mathcal{M}, s) := \max\limits_{m \in \mathcal{M}} {R(m, s)}. \end{equation*} \begin{minipage}{0.6\textwidth} \begin{algorithm}[H] \caption{Building the directed graph $\mathcal{M}$} \label{alg:graph_building} \begin{algorithmic} \State \textbf{Input:} pre-collected dataset $\mathcal{D}$, Reachability Network $R$ \State \textbf{Initialize:} $\mathcal{M} = \{\}$ \\ \State \textcolor{gray}{/ * Build the set of nodes * /} \For{each state $s$ in $\mathcal{D}$} \If{$R(s, \mathcal{M}) < 0.5$ and $R(\mathcal{M}, s) < 0.5$} \State Update $\mathcal{M} := \mathcal{M} \cup \{s\}$ \EndIf \EndFor \\ \State \textcolor{gray}{/ * Build edges * /} \For{each transition $(s_t, s_{t + 1})$ in $\mathcal{D}$} \State Let $n_t := \text{NN}_\text{in}(s_t) = \operatorname*{argmax}_{n \in \mathcal{M}} R(n, s_t)$ \State Let $n_{t+1} := \text{NN}_\text{out}(s_{t+1}) = \operatorname*{argmax}_{n \in \mathcal{M}} R(s_{t + 1}, n)$ \State Create directed edge from $n_t$ to $n_{t+1}$ \EndFor \end{algorithmic} \end{algorithm} \end{minipage} \begin{minipage}{0.65\textwidth} \begin{algorithm}[H] \caption{Building replay buffer $\mathcal{B}$ for offline policy training} \label{alg:rb_filling} \begin{algorithmic} \State \textbf{Input:} pre-collected dataset $\mathcal{D}$, Reachability Network $R$, directed graph $\mathcal{M}$ \State \textbf{Initialize:} $\mathcal{B} = \{\}$ \\ \While{$\mathcal{B}$ is not full} \State
Sample a transition $(s_t, a_t, s_{t + 1})$ at random in $\mathcal{D}$ \State Sample a goal $g$ at random in $\mathcal{D}$ \\ \State Compute $d_l(s_{t+1}, g) := 1 - R(s_{t+1}, g)$ \State Let $n_{t+1} := \text{NN}_\text{out}(s_{t+1}) = \operatorname*{argmax}_{n \in \mathcal{M}} R(s_{t + 1}, n)$ \State Let $n_g := \text{NN}_\text{in}(g) = \operatorname*{argmax}_{n \in \mathcal{M}} R(n, g)$ \State Compute $d_g(s_{t + 1}, g) := \text{ShortestPathLength}(n_{t+1}, n_g)$ \\ \State Compute reward $r_t := -(d_l(s_{t+1}, g) + d_g(s_{t + 1}, g))$ \State Relabel transition with goal $g$ and reward $r_t$, and \State Push $(s_t, a_t, g, r_t, s_{t + 1})$ to $\mathcal{B}$ \EndWhile \end{algorithmic} \end{algorithm} \end{minipage} \subsection{Actionable Models baselines re-implementation} In this section, we provide details for our re-implementation of Actionable Models~\citep{chebotar2021actionable} and HER~\citep{andrychowicz2017hindsight}. Since we are using the same optimization algorithm for offline policy training for these baselines and our method, the only difference lies in how the transitions in the pre-collected dataset~$\mathcal{D}$ are relabeled to build the replay buffer~$\mathcal{B}$. In HER~\citep{andrychowicz2017hindsight}, the idea is to sample a trajectory and a goal $g$ at random in $\mathcal{D}$, and to cut the trajectory at a step $i$. Each transition in the trajectory (until step $i$) is then relabelled twice: once with the goal $g$ and reward 0 for all transitions, and once with goal $s_i$ (the final state of the trajectory) and reward 0 for all transitions except the final one that gets a reward of 1. The pseudo-code for this method is shown in \autoref{fig:baseline_algo}. Actionable Models~\citep{chebotar2021actionable} relies on a similar idea as in HER~\citep{andrychowicz2017hindsight}, but contains two additional steps to improve the method. 
The first step is a form of \textit{goal chaining}: it consists of using the Q-value at the final state of the trajectory to compute the reward for the final transition. The second step aims at mitigating the effect of unseen actions, in order to regularize the action space. In practice, it consists of sampling negative actions from the policy and labeling zero-reward transitions with these actions. The implementation of both tricks is shown in \autoref{fig:baseline_algo}. For the implementation of the third baseline, HER + random negative action, the overall algorithm is the same as HER, except that we also generate transitions with negative actions, similar to Actionable Models. This time, the negative actions are not sampled from the policy, but are generated uniformly at random from the action space. \begin{figure}[h] \begin{minipage}{0.49\textwidth} \begin{algorithm}[H] \caption{HER} \label{alg:her} \begin{algorithmic} \State \textbf{Input:} dataset $\mathcal{D}$ \State \State \State \textbf{Initialize:} $\mathcal{B} = \{\}$ \\ \While{$\mathcal{B}$ is not full} \State Sample trajectory $\tau \in \mathcal{D}$ \State Sample goal $g \in \mathcal{D}$ \State Randomly cut $\tau$ at step $i$ \For{$j \in \{0,...,i-2\}$} \State $(s_j, a_j, g, 0, s_{j+1}) \rightarrow \mathcal{B}$ \State \State \State $(s_j, a_j, s_i, 0, s_{j+1}) \rightarrow \mathcal{B}$ \State \State \EndFor \State $(s_{i-1}, a_{i-1}, g, 0, s_i) \rightarrow \mathcal{B}$ \State \State \State $(s_{i-1}, a_{i-1}, s_i, 1, s_i) \rightarrow \mathcal{B}$ \State \State \EndWhile \end{algorithmic} \end{algorithm} \end{minipage} \begin{minipage}{0.49\textwidth} \begin{algorithm}[H] \caption{Actionable Models} \label{alg:am} \begin{algorithmic} \State \textbf{Input:} dataset $\mathcal{D}$, \State \textcolor{red}{goal-conditioned critic network $Q$}, \State \textcolor{blue}{goal-conditioned policy $\pi$} \State \textbf{Initialize:} $\mathcal{B} = \{\}$ \\ \While{$\mathcal{B}$ is not full} \State Sample trajectory $\tau
\in \mathcal{D}$ \State Sample goal $g \in \mathcal{D}$ \State Randomly cut $\tau$ at step $i$ \For{$j \in \{0,...,i-2\}$} \State $(s_j, a_j, g, 0, s_{j+1}) \rightarrow \mathcal{B}$ \State \textcolor{blue}{$a_1 \sim \pi(s_j, g)$} \State \textcolor{blue}{$(s_j, a_1, g, 0, s_{j+1}) \rightarrow \mathcal{B}$} \State $(s_j, a_j, s_i, 0, s_{j+1}) \rightarrow \mathcal{B}$ \State \textcolor{blue}{$a_2 \sim \pi(s_j, s_i)$} \State \textcolor{blue}{$(s_j, a_2, s_i, 0, s_{j+1}) \rightarrow \mathcal{B}$} \EndFor \State $(s_{i-1}, a_{i-1}, g, \textcolor{red}{Q(s_i, a_{i-1}, g)}, s_i) \rightarrow \mathcal{B}$ \State \textcolor{blue}{$a_3 \sim \pi(s_{i-1}, g)$} \State \textcolor{blue}{$(s_{i-1}, a_3, g, 0, s_i) \rightarrow \mathcal{B}$} \State $(s_{i-1}, a_{i-1}, s_i, 1, s_i) \rightarrow \mathcal{B}$ \State \textcolor{blue}{$a_4 \sim \pi(s_{i-1}, s_i)$} \State \textcolor{blue}{$(s_{i-1}, a_4, s_i, 0, s_i) \rightarrow \mathcal{B}$} \EndWhile \end{algorithmic} \end{algorithm} \end{minipage} \caption{Pseudo-code for replay buffer filling with HER~\citep{andrychowicz2017hindsight} and Actionable Models~\citep{chebotar2021actionable} methods. We compare both implementations by showing in \textcolor{red}{red} modifications related to goal chaining, and in \textcolor{blue}{blue} edits related to unseen action regularization.} \label{fig:baseline_algo} \end{figure} \clearpage \subsection{Hyper-parameters} We first list the hyper-parameters for the self-supervised reward shaping phase in \autoref{tab:hp_rnet}. \autoref{tab:hp_sac} details the hyper-parameters for the offline policy training stage with SAC~\cite{haarnoja2018soft}. For the Actionable Models~\citep{chebotar2021actionable} and HER~\citep{andrychowicz2017hindsight} baselines, we used the same parameters as in our approach, with the exception of some parameters specific to these methods, shown in \autoref{tab:hp_AM}. 
These hyper-parameters were obtained by performing a random search for all the methods over several parameter values. All the experiments in this work were performed on 3 random seeds. \begin{table}[h] \centering \begin{tabular}{ @{} l c c c @{} } \toprule Common hyper-parameter & \multicolumn{3}{c}{Value} \\ \midrule Task & UMaze & RoboYoga & Pusher \\ \midrule Number of training pairs & $5 \times 10^5$ & $5 \times 10^5$ & $5 \times 10^5$ \\ Ratio of negatives & 0.5 & 0.5 & 0.5 \\ Ratio of negatives from same trajectory & 0.5 & 0.5 & 0.5 \\ Reachability threshold ($\tau_\text{reach}$) & 5 & 2 & 10 \\ Weight of local distance in reward & 1 & 1 & 100 \\ Batch size & 512 & 512 & 512 \\ Learning rate & 0.001 & 0.0003 & 0.001 \\ Weight decay & 0.00001 & 0.00001 & 0.00001 \\ Total number of training epochs & 100 & 100 & 100 \\ Capacity of the directed graph & 1000 & 10000 & 1000\\ \bottomrule \end{tabular} \caption{Hyper-parameters for reachability network training and directed graph construction.} \label{tab:hp_rnet} \end{table} \begin{table}[h] \centering \begin{tabular}{ @{} l c c c @{} } \toprule Hyper-parameter & \multicolumn{3}{c}{Value} \\ \midrule Task & UMaze & RoboYoga & Pusher \\ \midrule Replay buffer capacity & $10^6$ & $10^6$ & $10^6$\\ Batch size & 2048 & 2048 & 2048 \\ Discount ($\gamma$) & 0.90 & 0.95 & 0.95 \\ Number of updates per epoch & 1000 & 1000 & 1000 \\ Total number of epochs & 1000 & 1000 & 1000 \\ Target update interval & 1 & 1 & 1 \\ Soft update coefficient ($\tau$) & 0.005 & 0.005 & 0.005 \\ SAC entropy parameter ($\alpha$) & 0.05 & 0.01 & 0.0001 \\ Optimizer & Adam & Adam & Adam \\ Learning rate & 0.0003 & 0.0003 & 0.0005 \\ Action repeat & 1 & 2 & 1 \\ Reward scaling factor & 0.1 & 0.5 & 0.01 \\ \bottomrule \end{tabular} \caption{Hyper-parameters for offline policy learning with SAC~\citep{haarnoja2018soft} with our method.} \label{tab:hp_sac} \end{table} \begin{table}[h] \centering \begin{tabular}{ @{} l c c c @{} } \toprule 
Hyper-parameter & \multicolumn{3}{c}{Value} \\ \midrule Task & UMaze & RoboYoga & Pusher \\ \midrule Discount ($\gamma$) & 0.99 & 0.99 & 0.99 \\ SAC entropy parameter ($\alpha$) & 0.01 & 0.001 & 0.005 \\ Learning rate & 0.0001 & 0.0001 & 0.0001 \\ Reward scaling factor & 1 & 10 & 1 \\ \bottomrule \end{tabular} \caption{Hyper-parameters for offline policy learning with SAC~\citep{haarnoja2018soft} specific to Actionable Models~\citep{chebotar2021actionable} and HER~\citep{andrychowicz2017hindsight} baselines.} \label{tab:hp_AM} \end{table} \clearpage \subsection{Architecture details} \paragraph{Reachability Network~\citep{savinov2018episodic}} The RNet has a siamese architecture with two embedding heads (one for each observation of the pair) with tied weights, and a comparator network that compares both embeddings and returns a reachability score. For the UMaze task, we used an embedding head with 3 fully-connected layers with batch normalization and Tanh activations, with a hidden size of 64 and an embedding size of 16. For the RoboYoga Walker task, the embedding network has the same architecture, but we increased both the hidden and embedding sizes to 128. The comparator network is also a fully-connected network. It contains batch normalization and ReLU activations. The hidden size for the UMaze (respectively the RoboYoga Walker) task is set to 16 (resp.\ 128) and the number of layers is 2 (resp.\ 4). \paragraph{Policy Network} The goal-conditioned policy network takes as input both the observation and the goal, in separate heads with the same architecture but independent weights. These heads are implemented as 3-layer fully-connected networks with Tanh activations, a hidden size of 64 and a feature size of 16. The outputs of the two heads are then concatenated and fed into a 2-layer fully-connected network of width 256.
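The two-head policy forward pass can be sketched as follows in pure Python with randomly initialised weights. The layer sizes follow the description above (3-layer heads with hidden size 64 and feature size 16, a 2-layer trunk of width 256), while the observation/action dimensions and the final linear action layer are illustrative assumptions:

```python
import math
import random

random.seed(0)

def dense(n_in, n_out):
    """A randomly initialised fully-connected layer: (weights, biases)."""
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

def forward(x, layers, act):
    for w, b in layers:
        x = [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(w, b)]
    return x

OBS_DIM, ACT_DIM = 10, 4                       # illustrative dimensions
sizes = [OBS_DIM, 64, 64, 16]                  # 3-layer head, hidden 64, features 16
obs_head = [dense(a, b) for a, b in zip(sizes, sizes[1:])]
goal_head = [dense(a, b) for a, b in zip(sizes, sizes[1:])]  # independent weights
trunk = [dense(32, 256), dense(256, 256)]      # 2-layer trunk of width 256
out_layer = dense(256, ACT_DIM)                # assumed linear action output

def policy(obs, goal):
    # Concatenate the 16-d features of the two heads, then run the trunk.
    feat = forward(obs, obs_head, math.tanh) + forward(goal, goal_head, math.tanh)
    h = forward(feat, trunk, math.tanh)
    return forward(h, [out_layer], lambda z: z)
```

With zero inputs and zero biases every activation is tanh(0) = 0, so the sketch returns a zero action vector of length `ACT_DIM`.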
The critic network has the same architecture for both observation and goal heads, and is followed by 3 fully-connected layers of width 256. \section{Full results on RoboYoga Walker task} We show the comparison of our method against the aforementioned baselines on each of the 12 goals of the RoboYoga Walker task in \autoref{fig:walker_by_goal}. These goals are illustrated in \autoref{fig:walker_all_goals}. We see that our method masters most of the goals that do not require balancing (Lie Back \& Front, Legs Up, Lunge), and performs quite well on harder goals like Side Angle, Lean Back and Bridge, but is unable to make progress on the most difficult goals like Head Stand or Arabesque. \begin{figure}[h] \centering \includegraphics{fig/walker_by_goal.pdf} \caption{Performance on the RoboYoga Walker task for each of the 12 goals.} \label{fig:walker_by_goal} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{fig/walker_all_goals1.pdf} \includegraphics[width=\linewidth]{fig/walker_all_goals2.pdf} \caption{Visualization of the 12 evaluation goals for the RoboYoga Walker task.} \label{fig:walker_all_goals} \end{figure}
\section{Introduction} \label{sec:introduction}} \IEEEPARstart{M}{embership} Inference (MI) attacks pose serious privacy risks to Machine Learning-as-a-Service (MLaaS). In a nutshell, MI attacks apply a binary classifier to the prediction vectors (outputs) of data samples obtained from a non-private machine learning (ML) model to infer whether the data samples are members of the training set or not. In this paper, we assume the baseline ML model is a deep neural network (DNN). There are two general types of MI attacks, black-box MI attacks~\cite{ShokriR2017} and white-box MI attacks~\cite{NasrM2018C}, depending on the adversary's access to the target ML model trained on some training set. Compared to white-box MI attacks, black-box attacks require less~\cite{NasrM2018C} (or even no~\cite{SalemA2018}) auxiliary information about the online MLaaS, which does not release model details but only the queried prediction output. Our focus is on providing privacy against black-box MI attacks. To prevent membership disclosure, differential privacy (DP)~\cite{DworkC2006C} is a promising technique, which (informally) shields the existence of any arbitrary data sample in a dataset, thereby preserving membership privacy for each member in the training set. There are three broad categories of applying DP to deep learning -- objective perturbation, gradient perturbation and output perturbation -- according to Jayaraman et al.~\cite{JayaramanB2019}. In particular, objective perturbation injects DP noise into the objective function of a learning task~\cite{ZhangJ2012}; gradient perturbation injects DP noise into each epoch during gradient descent~\cite{AbadiM2016}; output perturbation injects DP noise into the elements (edges or nodes) of a trained non-private neural network~\cite{ChaudhuriK2011,WuX2017} or into the prediction results following the sample-and-aggregate mechanism of DP~\cite{PapernotN2018}.
Due to the non-convexity of the loss function, applying objective perturbation or output perturbation mainly relies on convexification techniques. For example, Phan et al.~\cite{PhanN2016,PhanN2017} convexify the loss function of convolutional deep belief networks~\cite{LeeH2009}, then inject DP noise into the coefficients via the functional mechanism (originally designed for non-deep learning tasks)~\cite{ZhangJ2012}. More generally, we could implement output perturbation by training a baseline non-private neural network with a universal convexified loss function~\cite{LoJ2012,DvijothamK2014}, then injecting DP noise into the elements of the trained network. However, there are some weaknesses that limit the application and performance of existing DP approaches to deep learning. Objective perturbation approaches (combining a convexified loss function~\cite{PhanN2016,PhanN2017} and a DP objective function~\cite{ZhangJ2012}) only work for one specific learning task, the convolutional deep belief network~\cite{LeeH2009}, which makes them hard to apply to general deep learning tasks. The gradient perturbation approaches (including those using different DP variants and composition theorems) suffer from over-injected noise, mainly because the overall noise injection depends on the number of training epochs, which is usually large in deep learning~\cite{HarderF2020}. PATE~\cite{PapernotN2018}, the only work implementing the sample-and-aggregate mechanism of DP for output perturbation, works in a special configuration requiring additional publicly available data to assist the differentially private aggregation of the distributed learning outputs. The output perturbation framework (universal convexification plus DP noise injection) relies on a tight upper bound on the DP noise scale parameter (the global sensitivity). Existing theoretical results~\cite{ChaudhuriK2011,WuX2017} provide loose upper bounds assuming convexity of the loss function.
To tighten their results, more conditions should be introduced in addition to a convex loss function, such as normalised training sets with binary classes~\cite{ChaudhuriK2011} or smooth loss functions with a decreasing step size during the process of stochastic gradient descent (SGD)~\cite{WuX2017}. These conditions on existing upper bounds are shown in Table~\ref{tab:comparison}. \begin{table*}[!th] \centering \captionsetup{justification=centering} \caption{Comparison between the Upper Bounds on the $L_{2}$ Global Sensitivity of Trained Model Parameters. \\($\rho$-Lipschitz loss functions, $\lambda$-strongly convex $L_{2}$ regularisers, $\eta$: constant SGD step, $C$: number of classes, \\$\lVert \mathbf{x} \rVert_{2}$: maximum $L_{2}$ norm of a data sample in the training data space, $n$: size of the training set.)} \label{tab:comparison} \scalebox{0.9}{ \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Work} & \multirow{2}{*}{Upper Bound} & \multicolumn{5}{c}{Conditions} \\ \cline{3-7} & & Convex loss & Smooth loss & Step size in SGD & Normalised data & Binary classes \\ \hline\hline Chaudhuri et al.~\cite{ChaudhuriK2011} & $2\rho/\lambda n$ & $\checkmark$ & - & - & $\checkmark$ & $\checkmark$ \\ \hline Chaudhuri et al.~\cite{ChaudhuriK2011} & $C\lVert \mathbf{x} \rVert_{2}\rho/\lambda n$ & $\checkmark$ & - & - & - & - \\ \hline Wu et al.~\cite{WuX2017} & $2\rho/\lambda n$ & $\checkmark$ & $\checkmark$ & decreasing & - & - \\ \hline Wu et al.~\cite{WuX2017} & $2\eta\rho/(1 - (1 - \eta\lambda)^{n})$ & $\checkmark$ & $\checkmark$ & constant & - & - \\ \hline This Work & $2\rho/\lambda n$ & $\checkmark$ & - & - & - & - \\ \hline \end{tabular} } \end{table*} \descr{Contributions.} To address the problems in existing DP practice in deep learning, we propose a novel DP framework in the black-box setting. We summarise our main contributions below.
\begin{enumerate} \setlength\itemsep{0em} \item We mathematically analyse a tighter upper bound on the global sensitivity of the trained model parameters, which, like existing upper bounds, assumes a convex loss function, but without other conditions (see Table~\ref{tab:comparison} for a brief comparison). In contrast to existing works, we bound the maximum change of the model parameters by analysing the stability~\cite{ShalevS2014} of a trained model after removing an arbitrary data sample from a training set. \item We propose a novel framework in the black-box setting, where DP noise is injected into an individual neuron at the output layer of a trained neural network. Specifically, at the training stage (for a baseline non-private model), we apply a universal convexification of the loss function~\cite{LoJ2012,DvijothamK2014} to meet the convexity condition. At the prediction stage (for a differentially private prediction probability vector), we first sample a neuron with the exponential mechanism of DP (Exp-DP)~\cite{McsherryF2007} at the output layer, and then inject DP noise into the sampled neuron, where we scale the noise to the global sensitivity of an individual neuron (bounded by the global sensitivity of the trained model parameters). \item We empirically compare the privacy-utility trade-off of our framework with existing open-source differentially private stochastic gradient descent (DP-SGD) approaches for classification on six commonly used real-world datasets, following the same experimental configurations as existing studies~\cite{ShokriR2017,JayaramanB2019}. 
The experimental results show that, when the baseline non-private models have observable privacy leakage under MI attacks, our framework achieves a better privacy-utility trade-off than existing DP-SGD implementations, \reviseA{given an overall privacy budget $\epsilon \leq 1$ for a large number of queries, which is the regime we expect in practice.} \end{enumerate} \section{Related Work} \label{sec:rw} Differentially private deep learning can be mainly categorised into objective perturbation, gradient perturbation and output perturbation~\cite{JayaramanB2019}. The difference between the three categories lies in where the DP noise is injected. Objective perturbation injects DP noise into the objective function; gradient perturbation injects DP noise into the process of gradient descent; output perturbation injects DP noise into the output, i.e., the elements of a trained neural network. There are two works for differentially private deep learning using objective perturbation~\cite{JayaramanB2019}. Phan et al.~\cite{PhanN2016,PhanN2017} replace the non-convex loss function with polynomial approximations via Taylor expansions, then inject DP noise into the coefficients of the approximate function via the functional mechanism~\cite{ZhangJ2012}. However, this framework only works on a specific machine learning task, the convolutional deep belief network~\cite{LeeH2009}, which makes it hard to apply to general deep learning tasks. Additionally, it is hard to find the optimal polynomial degree to achieve an acceptable trade-off between utility and privacy in practice. Gradient perturbation is the most widely used DP technique in deep learning (DP-SGD~\cite{AbadiM2016}), including an open-source Python library provided by Google's TensorFlow Privacy~\cite{TensorflowPrivacy}. DP-SGD perturbs the minibatch stochastic optimisation process, i.e., it injects DP noise into the gradients at each epoch of the training process.
Specifically, in each epoch, DP-SGD bounds the global sensitivity of the gradients (the maximum change of the gradients caused by an individual data sample) by norm clipping. However, since DP-SGD injects noise into every epoch, \reviseA{the overall injected noise depends heavily on the number of training epochs. If this number is large, which it usually is, DP-SGD ends up injecting too much noise.} For output perturbation, there are two main streams of techniques. One line of work applies the sample-and-aggregate mechanism of DP to produce differentially private prediction probability vectors. For example, Papernot et al.~\cite{PapernotN2018} propose a framework for differentially private aggregate machine learning, called Private Aggregation of Teacher Ensembles (PATE). Specifically, PATE splits the private dataset into several \reviseA{(at least 100 reported in~\cite{PapernotN2018})} disjoint subsets, which are inputs for training sub-models/teacher models. These teacher models predict the labels of publicly available data \reviseA{(following the same distribution as the original dataset)}. PATE then injects DP noise into the count of each predicted label (vote histogram) and takes the label with the maximum noisy count as the final label. Such noisily labelled data are then used to train a student model, which provides differentially private predictions (MLaaS) for new observations. \reviseA{However, PATE may not always be realistic in real-world applications~\cite{HarderF2020}. First, data curators may not have a private dataset large enough to split such that each subset can still train a model. Second, data curators may not have any publicly accessible dataset to train the student model.
Third, in case there is no such publicly accessible dataset, data curators may use a Generative Adversarial Network (GAN)~\cite{GoodfellowI2014} to generate artificial datasets; however, this introduces additional computational costs, as well as a potential impact on utility.} The other line of work in output perturbation is to inject DP noise into a baseline non-private neural network trained with a convexified loss function. To do so, we must analyse the upper bound on the global sensitivity of the trained model parameters for noise injection. There are two theoretical results~\cite{ChaudhuriK2011,WuX2017} for this upper bound, subject to the convexity of the loss function. To tighten these bounds, more conditions need to be introduced, such as normalised training data for binary classification tasks~\cite{ChaudhuriK2011} or a smooth loss function with a decreasing step size during the SGD process~\cite{WuX2017}. However, these additional conditions may not always fit real-world machine learning tasks. Based on the above analyses, we conclude that existing DP approaches based on objective perturbation, gradient perturbation and output perturbation suffer from three problems. First, conditions on objective functions limit objective perturbation in general deep learning tasks. Second, gradient perturbation does not achieve a satisfactory privacy-utility trade-off due to over-injected noise in each epoch. Third, the utility of output perturbation is not guaranteed because of loose upper bounds on the global sensitivity of the trained model parameters. Therefore, in this paper, we aim to provide a DP framework based on output perturbation that controls the overall noise injection via a tighter upper bound on the global sensitivity of the model parameters, achieving a better trade-off between utility and privacy in general deep learning tasks.
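The per-step clipping-and-noising recipe of DP-SGD discussed above can be sketched in a few lines of pure Python. The clipping norm, noise multiplier, and gradient values are illustrative assumptions, and privacy accounting across steps is omitted; this is not the TensorFlow Privacy implementation:

```python
import math
import random

random.seed(0)

def clip(grad, c):
    """Clip one per-example gradient to L2 norm at most c
    (the per-step sensitivity used by DP-SGD)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, c, sigma):
    """Average the clipped per-example gradients and add Gaussian noise
    calibrated to the clipping norm c. Since this happens at every step,
    the total injected noise grows with the number of training epochs."""
    n = len(per_example_grads)
    clipped = [clip(g, c) for g in per_example_grads]
    avg = [sum(col) / n for col in zip(*clipped)]
    return [a + random.gauss(0.0, sigma * c / n) for a in avg]
```

Because a fresh noise draw is added at every step, composing the per-step guarantees over many epochs is exactly what makes the overall noise budget grow, as criticised above.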
\section{Preliminaries} \label{sec:preliminaries} A dataset $X$ is a collection of \emph{rows}: $(\mathbf{x}_{1}, \mathbf{x}_2, \ldots, \mathbf{x}_{n})$, where each $\mathbf{x}_i$ is from some fixed data domain. Two datasets $X$ and $X^{\prime}$ are called neighbouring datasets, denoted $X \sim X^{\prime}$, if one is obtained from the other by adding or removing a single row. Given a dataset $X = (\mathbf{x}_{1}, \dots, \mathbf{x}_{i-1}, \mathbf{x}_{i}, \mathbf{x}_{i+1}, \dots, \mathbf{x}_{n})$, $X^{(-i)}$ represents the dataset $X^{(-i)} = (\mathbf{x}_{1}, \dots, \mathbf{x}_{i-1}, \mathbf{x}_{i+1}, \dots, \mathbf{x}_{n})$, i.e., the neighbouring dataset of $X$ with an arbitrary row $\mathbf{x}_i$ removed. Similarly, $X^{(i)}$ denotes the dataset $X \setminus \{\mathbf{x}_{i}\} \cup \{\mathbf{x}^{\prime}\} = (\mathbf{x}_{1}, \dots, \mathbf{x}_{i-1}, \mathbf{x}^{\prime}, \mathbf{x}_{i+1}, \dots, \mathbf{x}_{n})$, i.e., the row $\mathbf{x}_i$ replaced with an arbitrary row $\mathbf{x}^{\prime}$. Note that the Hamming distance between $X$ and $X^{(-i)}$, $d(X, X^{(-i)}) = 1$ and datasets $X$ and $X^{(i)}$ are \emph{not} neighbouring datasets according to our definition. \subsection{Differential Privacy} Informally, differential privacy (DP)~\cite{DworkC2006C} ensures that two outputs, produced by a randomised algorithm on two neighbouring datasets, are almost indistinguishable. \begin{definition}[$\epsilon$-DP~\cite{DworkC2006C}] \label{def:dp} A randomised mechanism $\mathcal{T}$ is $\epsilon$-differentially private if for all neighbouring datasets $X$ and $X^{(-i)}$, $i \sim U(n)$, and for all outputs $S \in \text{Range}(\mathcal{T})$, $\mathcal{T}$ satisfies: \begin{equation} \label{eq:dp} \Pr[\mathcal{T}(X) = S]\leq \exp(\epsilon) \times \Pr[\mathcal{T}(X^{(-i)}) = S], \end{equation} where $\epsilon > 0$ is the privacy budget. 
\end{definition} Two prototypical $\epsilon$-DP mechanisms considered in this paper are the Laplace mechanism (Lap-DP)~\cite{DworkC2006C} and the exponential mechanism (Exp-DP)~\cite{McsherryF2007}. The former adds additive noise drawn from the Laplace distribution of scale $\Delta f/\epsilon$ to the numeric computation $f(\cdot)$, where $\Delta f$ is the $L_{1}$ global sensitivity of $f$. The $L_{p}$ global sensitivity of $f$ is: \begin{equation} \label{eq:gs} \Delta_{p} f = \max_{\forall X \in \mathcal{D},d(X,X^{(-i)})=1}\lVert f(X) - f(X^{(-i)}) \rVert_{p}. \end{equation} where $\lVert \cdot \rVert_{p}$ is the $L_{p}$-norm. For simplicity, in this paper, $\Delta f$ denotes the $L_{1}$ global sensitivity. The Exp-DP mechanism is a weighted sampling scheme to generate the output from a set of (arbitrary) candidates, and is well-suited for non-numeric computations. The mechanism simply outputs a given candidate $r$ from a set of candidates $R$ computed over a dataset $X$, with probability proportional to $\exp(\epsilon q(X,r)/2\Delta q)$, where $q(X,r)$ is the score/quality function of the candidate $r$, and $\Delta q$ is the maximum possible change in $q$ over all neighbouring datasets. An important property of differential privacy is that the mechanisms compose~\cite{DworkC2014}: \begin{theorem}[Sequential Composition~\cite{DworkC2014}] \label{thm:Lap-DP_composition} Let $M_{i}(X)$ provide $\epsilon_{i}$-DP. Then the sequence of $M_{i}(X)$ provides $\sum_{i}\epsilon_{i}$-DP. \end{theorem} The recently proposed $\epsilon$-Gaussian DP ($\epsilon$-GDP)~\cite{DongJ2019}, injecting noise following a Gaussian distribution $\mathcal{N}(0, (\sfrac{\Delta f}{\epsilon})^{2})$, is a relaxation of traditional $\epsilon$-DP. 
We have \begin{theorem}~\cite{DongJ2019} \label{thm:gdp} A mechanism is $\epsilon$-GDP if and only if it is $(\epsilon, \delta(\epsilon))$-DP, $\forall \epsilon > 0$, where $\delta(\epsilon) = \Phi(-1 + \sfrac{\epsilon}{2}) - \exp(\epsilon)\Phi(-1 - \sfrac{\epsilon}{2})$, $\Phi(\cdot)$ is the cumulative distribution function of $\mathcal{N}(0, 1)$. \end{theorem} The composition of GDP is as follows. \begin{theorem}[Composition of GDP~\cite{DongJ2019}] \label{thm:GDP_composition} The $n$-fold composition of $\epsilon_{i}$-GDP is $\sqrt{\sum_{i=1}^{n}\epsilon_{i}^{2}}$-GDP, where each $\epsilon_{i}$-GDP injects Gaussian noise following $\mathcal{N}(0, (\sfrac{\Delta f}{\epsilon_{i}})^{2})$. \end{theorem} \subsection{Machine Learning Basics} Let $A$ be a learning algorithm, $X = (\mathbf{x}_{1}, \dots, \mathbf{x}_{n})$ be a training set drawn from a distribution $\mathcal{D}$ ($X \sim \mathcal{D}^{n}$), $W_{X} (= (\omega_{1}, \dots, \omega_{|E|}) \in \mathbb{R}^{|E|}) = A(X)$ be the output of the learning algorithm $A$ on the training set $X$, i.e., trained model parameters, where $|E|$ is the number of model parameters. Once the model parameters are set after training, we predict the class label $y$ of an observation $\mathbf{x}$ as $y = \argmax \{p_{1}, \dots, p_{C}\}$, where $\mathbf{p} = \{p_{1}, \dots, p_{C}\} = h(W_{X}, \mathbf{x})$, $h$ is the learnt hypothesis and $C$ is the number of classes. We use a loss function $l(W_{X},\mathbf{x}_{i})$ on an observation $\mathbf{x}_{i} \in X \sim \mathcal{D}^{n}$ to measure the empirical error of $W_{X}$ on $\mathbf{x}_{i}$. Note that each $\mathbf{x}_{i}$ is a composition of a set of features and an accompanying class label. Accordingly, we have the following equation to measure the training error of the learning algorithm $A$ on the dataset $X$. The learning algorithm $A$ aims to minimise the training error. 
\begin{equation} \label{eq:loss} L_{X}(W_{X}) = \frac{1}{|X|}\sum_{i=1}^{|X|}l(W_{X}, \mathbf{x}_{i}) \end{equation} \descr{Measurement for overfitting.} A fundamental issue to avoid in machine learning is overfitting of a learning algorithm. One way to measure overfitting of a learning algorithm is the notion of On-Average-Replace-One stability of a model~\cite{ShalevS2014}. That is, replacing a single data sample does not result in a large difference in the value of the loss function. \begin{definition}[On-Average-Replace-One Stability~\cite{ShalevS2014}] \label{def:replace_stable} Let $\sigma:\mathbb{N} \to \mathbb{R}$ be a monotonically decreasing function. We say that a learning algorithm $A$ is on-average-replace-one stable with rate $\sigma(n)$ if for every distribution $\mathcal{D}$, \begin{equation} \label{eq:replace_stable} \mathbb{E}_{(X, \mathbf{x}^{\prime}) \sim \mathcal{D}^{n+1}, i \sim U(n)}[l(W_{X^{(i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})] \leq \sigma(n), \end{equation} where $X^{(i)} = X \setminus \{\mathbf{x}_{i}\} \cup \{\mathbf{x}^{\prime}\}$, and $U(n)$ is the uniform distribution over $[n]$. \end{definition} We extend On-Average-Replace-One stability to On-Average-Remove-One stability to fit the neighbouring dataset requirement of Definition~\ref{def:dp}. \begin{definition}[On-Average-Remove-One (OARO) Stability] \label{def:remove_stable} Given a monotonically decreasing function $\sigma:\mathbb{N} \to \mathbb{R}$ and an arbitrary distribution $\mathcal{D}$, a learning algorithm $A$ is on-average-remove-one stable with rate $\sigma(n)$ if \begin{equation} \label{eq:remove_stable} \mathbb{E}_{X \sim \mathcal{D}^{n}, i \sim U(n)}[l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})] \leq \sigma(n), \end{equation} where $U(n)$ is the uniform distribution over $[n]$. \end{definition} Based on Inequality~\eqref{eq:remove_stable}, a smaller $\sigma(n)$ indicates higher OARO stability.
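As a toy illustration of Definition~\ref{def:remove_stable}, the sketch below empirically estimates the left-hand side of Inequality~\eqref{eq:remove_stable} for the mean estimator under squared loss. Both the learner and the data distribution are illustrative choices, not used elsewhere in the paper:

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

def loss(w, x):
    return (w - x) ** 2   # squared loss of the learnt parameter w on sample x

def oaro_gap(X):
    """Empirical left-hand side of the OARO bound: the average of
    l(W_{X^(-i)}, x_i) - l(W_X, x_i) over every removed sample x_i."""
    w_full = mean(X)
    gaps = [loss(mean(X[:i] + X[i + 1:]), x) - loss(w_full, x)
            for i, x in enumerate(X)]
    return mean(gaps)

data = [random.uniform(0.0, 1.0) for _ in range(200)]
```

For the mean estimator the gap is always nonnegative (removing $x_i$ moves the mean away from $x_i$) and shrinks roughly like $1/n$, i.e., the learner is OARO stable with a monotonically decreasing rate $\sigma(n)$.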
\descr{Topology of deep neural networks.} A deep neural network is a graph, $G = (V, E)$, formed by connecting $T+1$ layers. A layer $V_{t}$ $(t \in [0, T])$ is a set of disconnected vertices, $V_{t} = \{v_{t,1}, v_{t,2}, \dots, v_{t, |V_{t}|}\}$. Each vertex is a neuron in the neural network; one of the neurons serves as the bias term. The vertex set $V$ is a union of disjoint layers, $V = \bigcup_{t=0}^{T} V_{t} $ and the edge set $E$ is the set of weighted edges connecting vertices in two adjacent layers. For example, an edge $e_{t,i,j}$ (with a weight $\omega_{t,i,j}$) connects $v_{t-1,j}$ and $v_{t,i}$. We call $V_{0}$ the input layer, $V_{t}$ $(t \in [1, T-1])$ the hidden layers and $V_{T}$ the output layer. Note that $|V_{0}| = m + 1$, where $m$ is the dimension of a data sample, $|V_{T}| = C$, where $C$ is the number of classes, and $|E| = |W_{X}|$ is the number of model parameters. When using a trained neural network (trained model parameters/weights and given topology) to make a prediction for a given data sample $\mathbf{x} = (x_{1}, \dots, x_{m})$, we calculate the output of each vertex/neuron, $a_{t,i}$ at layer $V_{t}$ as follows. \begin{equation} \label{eq:neuron} \begin{cases} z_{t,i} & = \sum_{\forall e_{t,i,j}} \omega_{t,i,j}a_{t-1,j}, \\ a_{t,i} & = \phi(z_{t,i}), \end{cases} \end{equation} where $\phi(\cdot)$ is an activation function for the neurons, $a_{0,i} = x_{i}$ and $a_{t-1,|V_{t-1}|}$ is also known as the bias term. At the output layer, the final prediction probability vector of the given data sample $\mathbf{x}$ would be $\mathbf{p} = (a_{T,1}, \dots, a_{T,C})$, where $\sum_{i=1}^{C} a_{T,i} = 1$. In our experiments in Section~\ref{sec:exp}, $\phi(\cdot)$ is Tanh for $t < T$ and Softmax for $t = T$. Furthermore, we also use the concepts of \textit{convex set}, \textit{convex function}, $\lambda$-\textit{strong convexity} and $\rho$-\textit{Lipschitzness} in this paper. We follow the standard definitions of these concepts.
\begin{definition}[Convex set~\cite{ShalevS2014}] $X$ is a convex set if and only if for any two vectors $\mathbf{u}, \mathbf{v} \in X$ and any $\alpha \in [0, 1]$, we have $\alpha\mathbf{u} + (1 - \alpha)\mathbf{v} \in X$. \end{definition} \begin{definition}[Convex function~\cite{ShalevS2014}] \label{def:convex} A function $f: X \to \mathbb{R}$ is convex if $X$ is convex and for any two vectors $\mathbf{u}$ and $\mathbf{v}$ in $X$ and any $\alpha \in [0, 1]$, we have \begin{equation} f(\alpha\mathbf{u} + (1 - \alpha)\mathbf{v}) \leq \alpha f(\mathbf{u}) + (1 - \alpha)f(\mathbf{v}). \end{equation} \end{definition} \begin{definition}[$\lambda$-Strong convexity~\cite{ShalevS2014}] \label{def:strong_convex} A convex function $f: X \to \mathbb{R}$ is $\lambda$-strongly convex if for any two vectors $\mathbf{u}$ and $\mathbf{v}$ in $X$ and $\alpha \in [0, 1]$, we have \begin{equation} f(\alpha\mathbf{u} + (1 - \alpha)\mathbf{v}) \leq \alpha f(\mathbf{u}) + (1 - \alpha)f(\mathbf{v}) - \frac{\lambda}{2}\alpha(1 - \alpha)\lVert \mathbf{u} - \mathbf{v} \rVert^{2}_{2}. \end{equation} \end{definition} \begin{definition}[$\rho$-Lipschitzness~\cite{ShalevS2014}] \label{def:lip} A function $f: X \to \mathbb{R}^{d}$ is $\rho$-Lipschitz over $X$ if for any two vectors $\mathbf{u}$ and $\mathbf{v}$ in $X$, we have \begin{equation} \lVert f(\mathbf{u}) - f(\mathbf{v}) \rVert \leq \rho\lVert \mathbf{u} - \mathbf{v} \rVert. \end{equation} \end{definition} \subsection{Membership Inference Attacks} \label{subsec:mia} The goal of Membership Inference (MI) attacks is to infer the membership of a given data sample from its prediction output. In this paper we specify MI attacks as black-box MI attacks implemented via the shadow model technique by Shokri et al.~\cite{ShokriR2017}. That is, the MI adversaries only have access to the distribution of the target/private dataset and the prediction output of a given data sample.
For simplicity, a black-box MI attack via shadow models is a function $\mathcal{A}: \mathcal{D}^{kn+1} \to \{0, 1\}$. The adversary performs a shadow models-based MI attack in three steps. Firstly, the adversary trains $k$ shadow models with shadow datasets $X^{\prime}_{i} \sim \mathcal{D}^{n}$, $i \in [1, k]$, mimicking the behaviour of the target model trained with the target/private dataset $X$. Then the adversary uses the $k$ shadow models to make predictions for training data (members) and test data (non-members). These prediction vectors will then be the training data to train an attack model (a binary classifier). Finally, for a given data sample $\mathbf{x} \sim \mathcal{D}$, the adversary queries the prediction probability vector $\mathbf{p}$ of $\mathbf{x}$ from the target model, then feeds $\mathbf{p}$ to the trained attack model to predict the membership ($\{0,1\}$) of $\mathbf{x}$. Table~\ref{tab:notations} summarises the notations used in this paper. \begin{table}[!ht] \centering \caption{Summary of Notations.} \scalebox{0.9}{ \begin{tabular}{c|l} \hline Notation & Description \\ \hline\hline $A$ & Learning algorithm \\ \hline $C$ & Number of classes \\ \hline $\mathcal{D}$ & Data distribution \\ \hline $\epsilon$ & Privacy budget \\ \hline $e_{t,i,j}$ & Edge connecting $v_{t-1,j}$ and $v_{t,i}$ \\ \hline $G = (V, E)$ & Neural network \\ \hline $\lambda$ & Strongly convex constant \\ \hline $l(W_{X}, \mathbf{x})$ & Loss function on $\mathbf{x}$ \\ \hline $L_{X}(W_{X})$ & Training error on $X$ \\ \hline $m$ & Dimension of the training set \\ \hline $n$, $|X|$ & Size of the training set \\ \hline $\rho$ & Lipschitzness constant \\ \hline $U(n)$ & Uniform distribution over $[1, n]$ \\ \hline $V_{t}$ & Neurons set at layer $t$ \\ \hline $W_{X}$, $A(X)$ & Model parameters trained on $X$ \\ \hline $\Delta_{2} W$ & $L_{2}$ global sensitivity of weights vector \\ \hline $\Delta \omega$ & $L_{1}$ global sensitivity of an individual weight \\ \hline 
$\omega_{t,i,j}$ & Weight on edge $e_{t,i,j}$ \\ \hline $v_{t,i}$ & $i$th neuron at $t$th layer in $G$ \\ \hline $X$ & Training set \\ \hline $X^{(-i)}$ & Neighbouring dataset of $X$ \\ \hline $\mathbf{x}$ & Data sample (vector) of $X$ \\ \hline $\mathbf{x} \sim \mathcal{D}$ & $\mathbf{x}$ drawn from distribution $\mathcal{D}$ \\ \hline $\Delta z_{T}$ & $L_{1}$ global sensitivity of a neuron (output layer) \\ \hline $\lVert \mathbf{a}, \mathbf{b} \rVert_{p}$ & $L_{p}$ distance between $\mathbf{a}$ and $\mathbf{b}$ \\ \hline $|S|$ & Cardinality of a set $S$ \\ \hline \end{tabular} } \label{tab:notations} \end{table} \section{Differential Privacy via Output Perturbation and Theoretical Results} \label{sec:main} In this section, we present our differentially private algorithm and then mathematically analyse its properties, including the global sensitivity of the model parameters and the effect of differential privacy against membership inference attacks. \subsection{Idea Overview} \label{sec:overview} \reviseA{Based on our discussion in the related works section (Section~\ref{sec:rw}), an output perturbation-based DP solution has one potential benefit over gradient and input perturbation-based approaches. In the latter two approaches, the entire privacy budget $\epsilon$ is consumed during model training. Any subsequent use of the trained model does not further consume the privacy budget, due to the post-processing property of differential privacy. This, however, means that the utility offered is fixed regardless of the number of queries, whether we ask 10 queries or 10,000 (a query means one data sample given as input to the model; the model in turn returns the corresponding probability vector). The output perturbation approach, on the other hand, consumes privacy budget with every query to the trained model.
This means that if the trained model is used for a fixed number of queries (as opposed to an unlimited number of queries), then we can offer better utility by virtue of allocating larger chunks of the budget per query. This motivates our focus on output perturbation.} \begin{figure}[!ht] \centering \begin{tikzpicture}[xscale=0.5, yscale=0.5] \node [draw, circle, name=a1] at (0,0) {$z_{1}$}; \node [right, name=z1] at (1.3,0) {$\hat{z}_{1} (= z_{1} + \text{Dist}(0,\sfrac{\Delta z_{T}}{\epsilon}))$}; \path[->,draw] (a1) to (z1); \path[->, dashed] (-5,0) edge node[above left, pos=1] {$z_{1} = \sum_{i}\omega_{T-1,1,i}a_{T-1,i}$} (a1); \node [draw, circle, name=a2] at (0,-2) {$z_{2}$}; \node [right, name=z2] at (1.3,-2) {$\hat{z}_{2} (= z_{2})$}; \path[->,draw] (a2) to (z2); \path[->, dashed] (-5,-2) edge node[above left, pos=1] {$z_{2} = \sum_{i}\omega_{T-1,2,i}a_{T-1,i}$} (a2); \node [draw, circle, name=an] at (0,-6) {$z_{C}$}; \node [right, name=zn] at (1.3,-6) {$\hat{z}_{C} (= z_{C})$}; \path[->,draw] (an) to (zn); \path[->, dashed] (-5,-6) edge node[above left, pos=1] {$z_{C} = \sum_{i}\omega_{T-1,C,i}a_{T-1,i}$} (an); \path (a2) -- (an) node [midway, sloped] {$\dots$}; \node [draw, rectangle, name=c] at (6,-4) {$\text{Softmax}$}; \path[->, draw] (z1) to (c); \path[->, draw] (z2) to (c); \path[->, draw] (zn) to (c); \node [name=p, right] at (8,-4) {$\mathbf{p}$}; \path[->, draw] (c) to (p); \end{tikzpicture} \caption{Idea Overview (assume $z_{1}$ is the sampled neuron).} \label{fig:overview} \end{figure} In this section, we propose an output perturbation-based DP algorithm for general deep learning tasks. To provide a better privacy-utility trade-off, we inject DP noise into a randomly sampled neuron at the output layer. Figure~\ref{fig:overview} depicts our general idea, where neuron $z_{1}$ is the randomly sampled neuron and the DP noise follows the distribution $\text{Dist}(0, \sfrac{\Delta z_{T}}{\epsilon})$. 
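A minimal sketch of this idea: compute the logits, sample one output neuron with exponential-mechanism-style weighting over the softmax probabilities, perturb only that logit with Laplace noise, and re-apply Softmax. The sensitivity values `delta_z` and `delta_p` below are placeholders; the following subsections derive the actual bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def dp_prediction(z, eps_sampling, eps_neuron, delta_z, delta_p):
    """Perturb a single output neuron: sample a neuron index with
    probability proportional to exp(eps * p_v / (2 * delta_p)), add
    Laplace(0, delta_z / eps_neuron) noise to that logit only, and
    re-apply Softmax to obtain a noisy probability vector."""
    p = softmax(z)
    weights = np.exp(eps_sampling * p / (2 * delta_p))
    v = rng.choice(len(z), p=weights / weights.sum())
    z_hat = z.copy()
    z_hat[v] += rng.laplace(0.0, delta_z / eps_neuron)
    return softmax(z_hat)

z = np.array([2.0, 0.5, -1.0])   # logits z_{T,1..C} from a trained network
p_noisy = dp_prediction(z, eps_sampling=0.5, eps_neuron=0.5,
                        delta_z=0.1, delta_p=0.05)
```

Note that the noise touches only one coordinate of the logit vector, so the output remains a valid probability vector and, for small $\Delta z_{T}$, stays close to the non-private prediction.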
\reviseA{ We shall use both the Laplace and the Gaussian mechanisms as examples of this distribution in our experiments, providing $\epsilon$-DP and $\epsilon$-GDP (or $(\epsilon, \delta(\epsilon))$-DP) guarantees, respectively.} \begin{algorithm}[!t] \small \caption{DP Prediction Probability Vector for Non-Private Deep Learning Models.} \label{alg:dpdnn} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{ $G = (V, E)$: an artificial neural network; \\ $C$: number of classes; \\ $W_{X} = \{\omega_{t,i,j} | t \in [1, T], i \in [1, |V_{t}|], j \in [1, |V_{t-1}|]\}$: trained model parameters on dataset $X$; \\ $\phi(\cdot)$: activation function at hidden layers; \\ $\epsilon_{\text{sampling}}$: privacy budget for sampling a neuron; \\ $\epsilon_{\text{neuron}}$: privacy budget for injecting noise into a neuron; \\ $\Delta z_{T}$: $L_{1}$ global sensitivity of the neurons at layer $T$; \\ $\Delta p$: $L_{1}$ global sensitivity of the prediction probability vector; \\ $\mathbf{x}$: a data sample (query). } \Output{ $\mathbf{p} = (p_{1}, \dots, p_{C})$: differentially private prediction probability vector. } \BlankLine $\mathbf{a}_{0}$ $\gets$ $\mathbf{x}$\; \For{$t$ $\gets$ $0$ to $T-1$}{ \For{$i$ $\gets$ $1$ to $|V_{t+1}|$}{ \uIf{$t \neq T-1$}{ $a_{t+1,i}$ $\gets$ $\phi(\sum_{j=1}^{|V_{t}|}\omega_{t+1,i,j}a_{t,j})$\; } \Else{ $z_{t+1,i}$ $\gets$ $\sum_{j=1}^{|V_{t}|}\omega_{t+1,i,j}a_{t,j}$\; } } } $\mathbf{p}$ $\gets$ $\text{Softmax}(z_{T,1}, \dots, z_{T,C})$\; $v$ $\gets$ $\text{Exp-DP}(\{z_{T,1}, \dots, z_{T,C}\}, \Pr[v] \propto \exp(\sfrac{\epsilon_{\text{sampling}} p_{v}}{2\Delta p}))$\; $\hat{z}_{T,v}$ $\gets$ $z_{T,v} + \text{Dist}(0, \sfrac{\Delta z_{T}}{\epsilon_{\text{neuron}}})$\; $(\hat{z}_{T,1}, \dots, \hat{z}_{T,C})$ $\gets$ $(z_{T,1}, \dots, \hat{z}_{T,v}, \dots, z_{T,C})$\; $\mathbf{p}$ $\gets$ $\text{Softmax}(\hat{z}_{T,1}, \dots, \hat{z}_{T,C})$\; \Return: $\mathbf{p}$\; \end{algorithm} Specifically, our DP algorithm works in three steps. 
Firstly, we feed the given data sample to a trained non-private neural network to calculate the values of the neurons at the output layer. Next, we inject DP noise into a randomly sampled neuron at the output layer. Finally, we apply the Softmax function to the noisy neuron vector to produce a differentially private probability vector. Algorithm~\ref{alg:dpdnn} shows the implementation details, where Lines 1 to 7 are the first step, Lines 8 to 10 are the second step, and Lines 11 to 13 are the third step. \reviseA{Note that Algorithm~\ref{alg:dpdnn} introduces only marginal overhead compared to the non-private neural network, namely Lines 9 and 10. Sampling (Line 9) is efficiently implementable even with a large number of classes, and the injected noise depends on neither the number of classes nor the number of training data samples.} \reviseA{Also, since noise is not injected into the rest of the neurons in the neural network, the output perturbation technique is useful only in the black-box setting, where model parameters are not released.} The key point in implementing Algorithm~\ref{alg:dpdnn} is to find a tight upper bound on $\Delta z_{T}$ for deep learning models. In the following sections, we derive our upper bound on $\Delta z_{T}$ by analysing the upper bound on the global sensitivity of the model parameters, $\Delta_{2} W$ (which is tighter than existing results~\cite{ChaudhuriK2011,WuX2017}, requiring only the convexity of the loss function). We then show the DP guarantee of Algorithm~\ref{alg:dpdnn} and the effect of DP against black-box MI attacks. \subsection{Analysis of the Upper Bound on Global Sensitivity} \label{sec:global_sens} This section analyses the upper bounds on the $L_{2}$ global sensitivity of the trained model parameters and the $L_{1}$ global sensitivity of an individual neuron at the output layer.
We analyse these upper bounds based on the fundamental properties of convex and strongly convex functions. \begin{lemma}~\cite{ShalevS2014} \label{lm:l2_norm_regulariser} The $L_{2}$-norm regulariser $2\lambda\lVert W_{X} \rVert^{2}_{2}$ is $\lambda$-strongly convex. \end{lemma} \begin{lemma}~\cite{ShalevS2014} \label{lm:lambda_strong} Let each function $h_{i}$ be convex and let $g$ be $\lambda$-strongly convex. Then their sum $f = \frac{1}{n}\sum_{i=1}^{n}h_{i} + g$ is $\lambda$-strongly convex. \end{lemma} \begin{lemma}~\cite{ShalevS2014} \label{lm:minimiser} Let $\mathbf{u}$ be a minimiser of a $\lambda$-strongly convex function $f$ ($f^{\prime}(\mathbf{u}) = 0$); then for any $\mathbf{v}$, we have \begin{equation} f(\mathbf{v}) - f(\mathbf{u}) \geq \frac{\lambda}{2}\lVert \mathbf{v}-\mathbf{u} \rVert^{2}_{2}. \end{equation} \end{lemma} Based on Lemma~\ref{lm:l2_norm_regulariser}, Lemma~\ref{lm:lambda_strong} (with $h$ the loss function and $g$ the $L_{2}$-norm regulariser) and Lemma~\ref{lm:minimiser}, the following theorem bounds the On-Average-Remove-One (OARO) stability (Definition~\ref{def:remove_stable}) of the loss function and the global sensitivity of the model parameters, where the latter is measured in $L_{2}$-norm (a.k.a. $L_{2}$-sensitivity~\cite{DworkC2014}). \begin{theorem} \label{thm:global_sensitivity} Given a convex and $\rho$-Lipschitz loss function and a $\lambda$-strongly convex regulariser, the upper bound on the $L_{2}$ global sensitivity of the model parameters (of the neural network) $\Delta_{2} W$ is $\sfrac{2\rho}{\lambda n}$; the On-Average-Remove-One stability of the loss function $l(W_{X}, \mathbf{x})$ is bounded by $\sfrac{2\rho^{2}}{\lambda n}$.
\end{theorem} \begin{proof} Based on Lemma~\ref{lm:l2_norm_regulariser}, Lemma~\ref{lm:lambda_strong} and Lemma~\ref{lm:minimiser}, we have \begin{align} & \lVert W_{X^{(-i)}} - W_{X} \rVert^{2}_{2} \nonumber \\ \leq & \frac{2}{\lambda}\left(L_{X}(W_{X^{(-i)}}) + \frac{\lambda}{2}\lVert W_{X^{(-i)}} \rVert^{2}_{2}\right) \nonumber \\ & - \frac{2}{\lambda}\left(L_{X}(W_{X}) + \frac{\lambda}{2}\lVert W_{X} \rVert^{2}_{2}\right) \label{subeq:lambda_strong} \\ = & \frac{2}{\lambda}\left(L_{X^{(-i)}}(W_{X^{(-i)}}) + \frac{\lambda}{2}\lVert W_{X^{(-i)}} \rVert^{2}_{2}\right) \nonumber \\ & - \frac{2}{\lambda}\left(L_{X^{(-i)}}(W_{X}) + \frac{\lambda}{2}\lVert W_{X} \rVert^{2}_{2}\right) \nonumber \\ & + \frac{2}{\lambda n}\left(l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})\right) \label{subeq:change_measure} \\ \leq & \frac{2}{\lambda n}\left(l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})\right) \label{subeq:minimiser} \\ \leq & \frac{2\rho}{\lambda n} \lVert W_{X^{(-i)}} - W_{X} \rVert_{2}, \label{subeq:rho_lip} \end{align} where Inequality~\eqref{subeq:lambda_strong} follows from Lemma~\ref{lm:minimiser}; in Equation~\eqref{subeq:change_measure} we change the dataset over which the training error is measured, which adds one extra term per training error; Inequality~\eqref{subeq:minimiser} holds because $W_{X^{(-i)}}$ is the minimiser of $L_{X^{(-i)}}(W_{X^{(-i)}}) + \frac{\lambda}{2}\lVert W_{X^{(-i)}} \rVert^{2}_{2}$; and Inequality~\eqref{subeq:rho_lip} follows from the $\rho$-Lipschitzness of the loss function, for any $\mathbf{x}_{i} \in X$. Dividing both sides by $\lVert W_{X^{(-i)}} - W_{X} \rVert_{2}$ (the bound holds trivially when this norm is zero), for any $i \sim U(|X|)$ we immediately have \begin{equation} \label{eq:global_sensitivity_weights} \Delta_{2} W = \lVert W_{X^{(-i)}} - W_{X} \rVert_{2} \leq \frac{2\rho}{\lambda n}.
\end{equation} Furthermore, since the loss function $l(W_{X}, \mathbf{x})$ is $\rho$-Lipschitz, we have \begin{align} l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i}) \leq \rho \lVert W_{X^{(-i)}} - W_{X} \rVert_{2} \leq \frac{2\rho^{2}}{\lambda n}. \end{align} Since this inequality holds for any $X$ and $\mathbf{x}_{i}$ ($i \sim U(|X|)$), we then have \begin{align} \label{eq:upper_loss} \mathbb{E}_{(X, \mathbf{x}_{i}) \sim \mathcal{D}^{n+1}, i \sim U(n)}[l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})] \leq \frac{2\rho^{2}}{\lambda n}. \end{align} Combining Inequality~\eqref{eq:global_sensitivity_weights} and Inequality~\eqref{eq:upper_loss} concludes the proof. \end{proof} We note that Chaudhuri et al.~\cite{ChaudhuriK2011} provide a similar result to Theorem~\ref{thm:global_sensitivity}. However, their upper bound on the $L_{2}$ global sensitivity of the model parameters is $\sfrac{2\rho}{\lambda n}$ only under the conditions of a binary classification task and a normalised training set ($\lVert \mathbf{x}_{i} \rVert \leq 1$). In addition, Wu et al.~\cite{WuX2017} provide the same upper bound on the global sensitivity of the model parameters as ours. However, their result requires the loss function to be $\beta$-smooth and a decreasing step size during stochastic gradient descent. Once these additional conditions are removed, the two upper bounds become loose. Table~\ref{tab:comparison} in Section~\ref{sec:introduction} shows a brief comparison between these upper bounds and their conditions. Compared to \cite{ChaudhuriK2011} and \cite{WuX2017}, our result achieves the same tight bound while relying only on the convexity of the loss function, without the additional conditions introduced in \cite{ChaudhuriK2011,WuX2017}. Since $\Delta_{2} W$ is the $L_{2}$ global sensitivity, we have $(\Delta_{2} W)^{2} = \sum_{i=1}^{|W_{X}|}(\Delta \omega_{i})^{2}$, where $|W_{X}|$ is the number of edges in a deep neural network.
Since each $\Delta \omega_{i}$ has the same upper bound $\Delta \omega$, the global sensitivity ($L_{1}$-norm) of an individual model parameter is \begin{equation} \label{eq:global_sensitivity_weight} \Delta \omega = \frac{\Delta_{2} W}{\sqrt{|W_{X}|}} \leq \frac{2 \rho}{\lambda n \sqrt{|W_{X}|}}. \end{equation} Once we have the $L_{1}$ global sensitivity of each model parameter/weight, we can further analyse the upper bound on the global sensitivity of a neuron (input to the Softmax function) at the output layer. \begin{corollary} \label{cr:global_sensitivity_output} Consider a fully connected $(T+1)$-layer neural network $G = (V, E)$ whose output layer $V_{T}$ has no activation function and whose hidden-layer activation function is bounded by $a_{u}$. Given the $L_{1}$ global sensitivity $\Delta \omega$ of an individual model parameter, the $L_{1}$ global sensitivity of a neuron $v_{T}$ at the output layer is bounded by $a_{u}|V_{T-1}|\Delta \omega$. \end{corollary} \begin{proof} Let $z_{T}$ be the value of an arbitrary neuron $v_{T}$ at the output layer; then $z_{T} = \sum_{j=1}^{|V_{T-1}|}a_{T-1,j}\omega_{T,j}$, where $|V_{T-1}|$ is the incoming degree of $v_{T}$ in a fully connected neural network. Since the activation function is bounded by $a_{u}$, the maximum difference between $z_{T}$ and $z_{T}^{(-i)}$ is achieved when $\omega_{T,j} > 0 > \omega_{T,j}^{(-i)}$ and $a_{T-1,j} = a_{T-1,j}^{(-i)} = a_{u}$, $\forall j \sim U(|V_{T-1}|)$, that is, \begin{align} \label{eq:global_neuron} \Delta z_{T} & = \sum_{j=1}^{|V_{T-1}|}a_{u}\omega_{T,j} - \sum_{j=1}^{|V_{T-1}|}a_{u}\omega_{T,j}^{(-i)} \nonumber \\ & \leq a_{u}\sum_{j=1}^{|V_{T-1}|}\left(\omega_{T,j} - \omega_{T,j}^{(-i)}\right) \nonumber \\ & \leq a_{u}|V_{T-1}|\Delta \omega \end{align} as required. \end{proof} In practice, some commonly used activation functions, such as the Tanh, Sigmoid, Binary step and Gaussian functions, satisfy $a_{u} = 1$.
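Putting these pieces together, the sensitivity chain $\Delta_{2} W \rightarrow \Delta \omega \rightarrow \Delta z_{T}$ can be evaluated mechanically. A sketch follows; the network dimensions, $\rho$ and $\lambda$ below are purely illustrative values, not taken from the paper's experiments.

```python
import math

def sensitivity_bounds(rho, lam, n, num_weights, fan_in, a_u=1.0):
    """Chain of bounds:
    Delta_2 W    = 2*rho / (lam*n)              (L2 sensitivity of weights),
    Delta_omega  = Delta_2 W / sqrt(|W_X|)      (per-weight L1 sensitivity),
    Delta_z_T   <= a_u * |V_{T-1}| * Delta_omega (output-layer neuron)."""
    delta2_W = 2 * rho / (lam * n)
    delta_omega = delta2_W / math.sqrt(num_weights)
    delta_z = a_u * fan_in * delta_omega
    return delta2_W, delta_omega, delta_z

# Illustrative values: rho = 1, lam = 0.1, n = 10,000 training samples,
# 50,000 weights, 100 neurons feeding the output layer, Tanh (a_u = 1).
d2W, dw, dz = sensitivity_bounds(rho=1.0, lam=0.1, n=10_000,
                                 num_weights=50_000, fan_in=100)
```

Note how both $n$ and $\sqrt{|W_{X}|}$ appear in the denominators, so the per-neuron sensitivity shrinks as the training set and the network grow.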
In our experiments (Section~\ref{sec:exp}), we use the Tanh function as the activation function in the hidden layers, following existing works~\cite{ShokriR2017,SalemA2018,YeomS2018,JayaramanB2019} on MI attacks. \begin{corollary} \label{cr:sensitivity_softmax} Given the global sensitivity $\Delta z_{T}$ of an individual neuron at the output layer of a neural network, the upper bound on the global sensitivity of $p \in \mathbf{p}$, $\Delta p$, is $\min\{\exp(2\Delta z_{T}) - 1, 1\}$, where $\mathbf{p}$ is the prediction probability vector provided by the Softmax function. \end{corollary} \begin{proof} For an arbitrary neuron $v$ at the output layer, we have $p_{v} = \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})}$, where $C$ is the number of classes. Since $z_{T,j} - \Delta z_{T} \leq z_{T,j}^{(-i)} \leq z_{T,j} + \Delta z_{T}$, the global sensitivity of $p$ is \begin{align} \Delta p = & \sup\left|p_{v}^{(-i)} - p_{v}\right| \nonumber \\ = & \sup\left|\frac{\exp(z_{T,v}^{(-i)})}{\sum_{j=1}^{C}\exp(z_{T,j}^{(-i)})} - \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})}\right| \nonumber \\ = & \frac{\exp(z_{T,v} + \Delta z_{T})}{\exp(z_{T,v} + \Delta z_{T}) + \sum_{j \neq v}\exp(z_{T,j} - \Delta z_{T})} \nonumber \\ & \quad - \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})} \nonumber \\ = & \frac{\exp(z_{T,v})}{\exp(z_{T,v}) + \sum_{j \neq v}\exp(z_{T,j} - 2\Delta z_{T})} \nonumber \\ & \quad - \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})} \nonumber \\ \leq & \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j} - 2\Delta z_{T})} - \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})} \nonumber \\ = & (\exp(2 \Delta z_{T}) - 1) \times \frac{\exp(z_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})} \nonumber \\ < & \exp(2 \Delta z_{T}) - 1. \end{align} Since both $p_{v}$ and $p_{v}^{(-i)}$ lie in $(0, 1)$, their difference is also at most $1$; hence $\Delta p = \min\{\exp(2 \Delta z_{T}) - 1, 1\}$, which concludes this corollary.
\end{proof} \reviseA{Note that when $\Delta z_{T} > 0.5 \ln{2}$, the bound on $\Delta p$ becomes trivial, i.e., $\Delta p = 1$ (the maximum possible change in a probability $p_{v}$ of a neuron $v$ at the output layer). However, based on the way the Exp-DP behaves (in Algorithm~\ref{alg:dpdnn}), even if the bound on $\Delta p$ is trivial, the highest-probability neuron will still have a relatively high probability of being sampled. That is, Algorithm~\ref{alg:dpdnn} still guarantees the quality of the sampled neuron. Moreover, in Table~\ref{tab:sensitivity} in Section~\ref{sec:exp_dp}, we show that there are datasets where this upper bound on $\Delta p$ is non-trivial, and we therefore obtain even better utility.} \subsection{Lipschitz Constant of the Cross-entropy Loss Function} \label{sec:lip_cons} In this section, we show how to calculate the upper bound on the Lipschitz constant $\rho$, which is required when computing the upper bounds on the global sensitivities $\Delta \omega$, $\Delta z_{T}$ and $\Delta p$ (see Equation~\eqref{eq:global_sensitivity_weight}). \begin{lemma}~\cite{VirmauxA2018,GoukH2020} \label{lm:lip_constant} Given a fully connected neural network containing $T$ layers, $f_{T}: X \to \mathbb{R}^{C}$, where the hidden layers apply $1$-Lipschitz activation functions (e.g., ReLU, Tanh and Sigmoid), the output layer applies the Softmax function, and $\mathbf{X}_{i}$ denotes the vector of neuron values at layer $i$, the Lipschitz constant (with respect to the model parameters) of $f_{T}$ is bounded by $\prod_{i=1}^{T} \lVert \mathbf{X}_{i} \rVert_{2}$. \end{lemma} \begin{lemma}~\cite{YedidaR2019} \label{lm:one_layer_lip} For a one-layer neural network with the cross-entropy loss function, the Lipschitz constant of the cross-entropy loss (with respect to the model parameters) is $\frac{C-1}{C|V|}\lVert \mathbf{X} \rVert_{2}$, where $C$ is the number of classes, $|V|$ is the number of neurons at the input layer and $\mathbf{X}$ is a given data sample.
\end{lemma} Based on Lemma~\ref{lm:lip_constant} and Lemma~\ref{lm:one_layer_lip}, Proposition~\ref{prop:lip_constant} calculates the upper bound on the Lipschitz constant of the cross-entropy loss function for a $(T+1)$-layer neural network; we use this loss function in our experiments, and it is also commonly used in existing works~\cite{ShokriR2017,SalemA2018,YeomS2018,JayaramanB2019}. \begin{proposition} \label{prop:lip_constant} Using the cross-entropy function as the loss function of the aforementioned $(T+1)$-layer neural network, the upper bound on the Lipschitz constant (with respect to the model parameters) is \begin{align} \label{eq:lip_constant} \rho \leq \frac{(C-1)\prod_{t=0}^{T-1}\sqrt{|V_{t}|}x_{t}}{C|V_{T-1}|}, \end{align} where $C$ is the number of classes, $|V_{t}|$ is the number of neurons at layer $t$ and $x_{t}$ is the maximum absolute value of the neurons at layer $t$. \end{proposition} \reviseA{Based on Equation~\eqref{eq:lip_constant}, the Lipschitz constant grows exponentially with the number of layers. However, when calculating the global sensitivities $\Delta \omega$, $\Delta z_{T}$ and $\Delta p$, this exponential growth (in the number of layers and the maximum neuron value per layer) is somewhat compensated by the terms $\sqrt{|W_{X}|}$ and $n$ (the number of data samples in the training set), as can be seen by substituting the value of $\rho$ from Equation~\eqref{eq:lip_constant} into Equation~\eqref{eq:global_sensitivity_weight}. This is also empirically demonstrated in Table~\ref{tab:sensitivity} in Section~\ref{sec:exp_dp}. } \subsection{Convexified Loss Function} In our proofs of the upper bounds on the global sensitivities, a key assumption is that the objective function of the learning algorithm is a combination of a convex loss function and a $\lambda$-strongly convex regulariser. Lemma~\ref{lm:l2_norm_regulariser} guarantees the strong convexity of an $L_{2}$-norm regulariser.
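As a quick numerical sanity check of this strong-convexity claim, one can verify the inequality of Definition~\ref{def:strong_convex} for the regulariser $2\lambda\lVert \mathbf{w} \rVert^{2}_{2}$ on random points (the value of $\lambda$ below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5  # illustrative strong-convexity constant

def reg(w):
    # L2-norm regulariser 2*lambda*||w||_2^2 from the Lemma.
    return 2 * lam * np.dot(w, w)

# Check f(a*u + (1-a)*v) <= a*f(u) + (1-a)*f(v)
#        - (lam/2)*a*(1-a)*||u - v||^2  on random points.
ok = True
for _ in range(1000):
    u, v = rng.normal(size=10), rng.normal(size=10)
    a = rng.uniform()
    lhs = reg(a * u + (1 - a) * v)
    rhs = (a * reg(u) + (1 - a) * reg(v)
           - (lam / 2) * a * (1 - a) * np.dot(u - v, u - v))
    ok = ok and bool(lhs <= rhs + 1e-9)
```

The inequality holds with room to spare here, since $c\lVert \mathbf{w} \rVert^{2}_{2}$ is in fact $2c$-strongly convex, and $\lambda$-strong convexity only demands the weaker $\lambda/2$ deficit term.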
However, loss functions of deep learning models are non-convex due to the large number of model parameters~\cite{BengioY2006}. To circumvent this, we follow existing results on convexifying multi-layer (more than one hidden layer) neural networks~\cite{LoJ2012,DvijothamK2014} via risk-averse optimisation. \begin{theorem}~\cite{LoJ2012,DvijothamK2014} \label{thm:convexification} For a non-convex loss function $l(W_{X},\mathbf{x})$ and its training error $L_{X}(W_{X}) = \frac{1}{n}\sum_{i=1}^{n}l(W_{X},\mathbf{x}_{i})$, their risk-averting error functions, $l^{(\alpha)}(W_{X},\mathbf{x})$ and $L^{(\alpha)}_{X}(W_{X})$, are convex, where \begin{align} & l^{(\alpha)}(W_{X},\mathbf{x}) = \exp(\alpha \times l(W_{X},\mathbf{x})), \nonumber \\ & L^{(\alpha)}_{X}(W_{X}) = \frac{1}{\alpha}\ln\left[\frac{1}{n}\sum_{i=1}^{n}l^{(\alpha)}(W_{X},\mathbf{x}_{i})\right]. \end{align} Here, $\alpha$ is the risk factor, which measures the size of the convex region; a larger $\alpha$ indicates a larger convex region. \end{theorem} The regularised objective therefore becomes \begin{align} \label{eq:convexification} & L^{(\alpha)}_{X}(W_{X}) + 2\lambda\lVert W_{X} \rVert^{2}_{2} \nonumber \\ = & \frac{1}{\alpha}\ln\left[\frac{1}{n}\sum_{i=1}^{n}\exp(\alpha \times l(W_{X},\mathbf{x}_{i}))\right] + 2\lambda\lVert W_{X} \rVert^{2}_{2}, \end{align} where $l(W_{X},\mathbf{x})$ can be a conventional non-convex loss function, such as the quadratic or cross-entropy loss. We use the convexified loss function rather than the traditional non-convex loss function in our experimental evaluations. \reviseA{Intuitively, the convexification constant $\alpha$ might impact the trade-off between privacy and utility of the non-private model against MI attacks (since it might impact the performance/overfitting of the \revise{convexified} loss function~\cite{YeomS2018}).
In this paper, we treat $\alpha$ as another hyper-parameter of the machine learning model; hence, studying the relationship between $\alpha$ and the privacy-utility trade-off is left as future work.} \subsection{Effect of Differential Privacy} \label{sec:dp_effect} In this section, we study the effect of differential privacy, including the DP guarantee of Algorithm~\ref{alg:dpdnn} and the \reviseA{OARO stability of a DP neural network.} \reviseA{In the $\epsilon$-DP proof of the algorithm, we assume the Laplace mechanism as an instance of $\text{Dist}(0, \sfrac{\Delta z_{T}}{\epsilon})$. This guarantee can, however, be converted into an $\epsilon$-GDP guarantee by invoking the advanced composition theorem of $(\epsilon, \delta(\epsilon))$-DP and subsequently applying Theorem~\ref{thm:gdp}. Thus, the result holds for both the Laplace and the Gaussian mechanisms.} \begin{theorem} \label{thm:final_dp} The prediction probability vector $\mathbf{p}$ of a given observation $\mathbf{x}$, produced by Algorithm~\ref{alg:dpdnn} (injecting Laplace noise), is $(2C + 1)\epsilon$-differentially private, where $C$ is the number of classes. \end{theorem} \begin{proof} Let the activation function at the output layer be Softmax and the values of the neurons prior to the Softmax be $\mathbf{z}_{T} = (z_{T,1}, \dots, z_{T,C})$; then the prediction probability vector is $\mathbf{p} = (p_{1}, \dots, p_{C})$, where $p_{i} = \frac{\exp(z_{T,i})}{\sum_{j=1}^{C}\exp(z_{T,j})}$ and $C$ is the number of classes. Consider two sets of model parameters $W_{X}$ and $W_{X^{(-i)}}$ trained on two neighbouring datasets $X$ and $X^{(-i)}$, respectively. We run Algorithm~\ref{alg:dpdnn} on $W_{X}$ and $W_{X^{(-i)}}$ with the same privacy budget, the same data sample $\mathbf{x}$ for prediction and the same topology of the neural network. Based on Algorithm~\ref{alg:dpdnn}, we analyse the DP guarantees of the weighted sampling step and the noise injection step below.
Let the score function of the Exp-DP for sampling the neuron $v$ be $q(X,v) = p_{v}$. Based on Exp-DP and Corollary~\ref{cr:sensitivity_softmax}, the sampling probability of a neuron $v$ is $\Pr[\text{sample}(X,q,\epsilon) = v] = \frac{\exp(\epsilon q(X,v)/2\Delta q)}{\sum_{i=1}^{C}\exp(\epsilon q(X,i)/2\Delta q)}$, where $\Delta q = \Delta p$. Following the standard proof of the Exp-DP~\cite{McsherryF2007}, we have \begin{equation} \frac{\Pr[\text{sample}(X,q(X,v),\epsilon) = v]}{\Pr[\text{sample}(X^{(-i)},q(X^{(-i)},v),\epsilon) = v]} \leq \exp(\epsilon). \end{equation} For a neuron $v$ into which noise has been injected and an arbitrary $r_{v} \in (0,1)$, we have the following worst-case upper bound on the probability ratio for $\hat{p}_{v} \in \mathbf{p}$ ($\mathbf{p}$ is the noisy prediction probability vector), where the first inequality applies the reverse triangle inequality to the Laplace densities. \begin{align} \label{eq:noisy_neuron} & \frac{\Pr[\hat{p}_{v} = r_{v} | \text{sample}(X,q(X,v),\epsilon) = v]}{\Pr[\hat{p}_{v}^{(-i)} = r_{v} | \text{sample}(X^{(-i)},q(X^{(-i)},v),\epsilon) = v]} \nonumber \\ = & \frac{\Pr\left[\frac{\exp(\hat{z}_{T,v})}{\exp(\hat{z}_{T,v}) + \sum_{j \neq v}\exp(z_{T,j})} = r_{v}\right]}{\Pr\left[\frac{\exp(\hat{z}_{T,v}^{(-i)})}{\exp(\hat{z}_{T,v}^{(-i)}) + \sum_{j \neq v}\exp(z_{T,j}^{(-i)})} = r_{v}\right]} \nonumber \\ = & \frac{\Pr\left[\hat{z}_{T,v} = \ln\left(\frac{r_{v}}{1-r_{v}}\sum_{j \neq v}\exp(z_{T,j})\right)\right]}{\Pr\left[\hat{z}_{T,v}^{(-i)} = \ln\left(\frac{r_{v}}{1-r_{v}}\sum_{j \neq v}\exp(z_{T,j}^{(-i)})\right)\right]} \nonumber \\ = & \frac{\exp\left(-\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{r_{v}}{1-r_{v}}\sum_{j \neq v}\exp(z_{T,j})\right) - z_{T,v}\right|\right)}{\exp\left(-\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{r_{v}}{1-r_{v}}\sum_{j \neq v}\exp(z_{T,j}^{(-i)})\right) - z_{T,v}^{(-i)}\right|\right)} \nonumber \\ \leq & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{\sum_{j \neq v}\exp(z_{T,j})}{\sum_{j \neq v}\exp(z_{T,j}^{(-i)})}\right) - z_{T,v} + z_{T,v}^{(-i)}\right|\right) \nonumber \\ \leq & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{\sum_{j \neq v}\exp(z_{T,j}^{(-i)} + \Delta z_{T})}{\sum_{j \neq v}\exp(z_{T,j}^{(-i)})}\right) + \Delta z_{T}\right|\right) \nonumber \\ = & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\exp(\Delta z_{T})\frac{\sum_{j \neq v}\exp(z_{T,j}^{(-i)})}{\sum_{j \neq v}\exp(z_{T,j}^{(-i)})}\right) + \Delta z_{T}\right|\right) \nonumber \\ = & \exp\left(\frac{\epsilon}{\Delta z_{T}}\big|\Delta z_{T} + \Delta z_{T}\big|\right) \nonumber \\ = & \exp(2\epsilon). \end{align} For an arbitrary neuron $u$ into which noise has not been injected (noise was injected into neuron $v$) and an arbitrary $r_{u} \in (0,1)$, we have the following worst-case upper bound on the probability ratio for $\hat{p}_{u} \in \mathbf{p}$ ($\mathbf{p}$ is the noisy prediction probability vector), conditioned on $\hat{p}_{v} = r_{v}$ and $\hat{p}_{v}^{(-i)} = r_{v}$. \begin{align} & \frac{\Pr[\hat{p}_{u} = r_{u} | \hat{p}_{v} = r_{v}]}{\Pr[\hat{p}_{u}^{(-i)} = r_{u} | \hat{p}_{v}^{(-i)} = r_{v}]} \nonumber \\ = & \frac{\Pr\left[\frac{\exp(z_{T,u})}{\sum_{j=1}^{C}\exp(z_{T,j})} = r_{u} \Big| \frac{\exp(\hat{z}_{T,v})}{\sum_{j=1}^{C}\exp(z_{T,j})} = r_{v}\right]}{\Pr\left[\frac{\exp(z_{T,u}^{(-i)})}{\sum_{j=1}^{C}\exp(z_{T,j}^{(-i)})} = r_{u} \bigg| \frac{\exp(\hat{z}_{T,v}^{(-i)})}{\sum_{j=1}^{C}\exp(z_{T,j}^{(-i)})} = r_{v}\right]} \nonumber \\ = & \frac{\Pr\left[r_{v} \cdot \frac{\exp(z_{T,u})}{\exp(\hat{z}_{T,v})} = r_{u}\right]}{\Pr\left[r_{v} \cdot \frac{\exp(z_{T,u}^{(-i)})}{\exp(\hat{z}_{T,v}^{(-i)})} = r_{u}\right]} \nonumber \\ = & \frac{\Pr\left[\hat{z}_{T,v} = \ln\left(\frac{r_{v}\exp(z_{T,u})}{r_{u}}\right)\right]}{\Pr\left[\hat{z}_{T,v}^{(-i)} = \ln\left(\frac{r_{v}\exp(z_{T,u}^{(-i)})}{r_{u}}\right)\right]} \nonumber \\ = & \frac{\exp\left(-\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{r_{v}\exp(z_{T,u})}{r_{u}}\right) - z_{T,v}\right|\right)}{\exp\left(-\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{r_{v}\exp(z_{T,u}^{(-i)})}{r_{u}}\right) - z_{T,v}^{(-i)}\right|\right)} \nonumber \\ \leq & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{\exp(z_{T,u})}{\exp(z_{T,u}^{(-i)})}\right) - z_{T,v} + z_{T,v}^{(-i)}\right|\right) \nonumber \\ \leq & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\frac{\exp(z_{T,u}^{(-i)} + \Delta z_{T})}{\exp(z_{T,u}^{(-i)})}\right) + \Delta z_{T}\right|\right) \nonumber \\ = & \exp\left(\frac{\epsilon}{\Delta z_{T}}\left|\ln\left(\exp(\Delta z_{T})\frac{\exp(z_{T,u}^{(-i)})}{\exp(z_{T,u}^{(-i)})}\right) + \Delta z_{T}\right|\right) \nonumber \\ = & \exp\left(\frac{\epsilon}{\Delta z_{T}}\big|\Delta z_{T} + \Delta z_{T}\big|\right) \nonumber \\ = & \exp(2\epsilon). \end{align} We can now bound the privacy guarantee for the final prediction result: \begin{align} \label{eq:final_dp} & \frac{\Pr[\hat{\mathbf{p}} = \mathbf{r}]}{\Pr[\hat{\mathbf{p}}^{(-i)} = \mathbf{r}]} \nonumber \\ = & \frac{\Pr[\text{sample}(X,\mathbf{p},\epsilon) = v]}{\Pr[\text{sample}(X^{(-i)},\mathbf{p}^{(-i)},\epsilon) = v]} \times \prod_{j=1}^{C}\frac{\Pr[\hat{p}_{j} = r_{j}]}{\Pr[\hat{p}^{(-i)}_{j} = r_{j}]} \nonumber \\ \leq & \exp(\epsilon) \times \frac{\Pr[\hat{p}_{v} = r_{v}]}{\Pr[\hat{p}^{(-i)}_{v} = r_{v}]} \times \prod_{j \neq v}\frac{\Pr[\hat{p}_{j} = r_{j} | \hat{p}_{v} = r_{v}]}{\Pr[\hat{p}^{(-i)}_{j} = r_{j} | \hat{p}^{(-i)}_{v} = r_{v}]} \nonumber \\ \leq & \exp(\epsilon) \times \prod_{j=1}^{C}\exp(2\epsilon) \nonumber \\ = & \exp((2C+1)\epsilon). \end{align} This concludes the proof.
\end{proof} \begin{corollary} \label{cr:dp_effect} \reviseA{A differentially private neural network is more On-Average-Remove-One stable than a non-private neural network.} \end{corollary} \begin{proof} Injecting DP noise, whether by objective perturbation~\cite{PhanN2016,PhanN2017}, gradient perturbation (as in DP-SGD~\cite{AbadiM2016}) or output perturbation (as in Algorithm~\ref{alg:dpdnn}), can be modelled as modifying the model parameters to $\hat{W}_{X}$. Since $\hat{W}_{X}$ is not a minimiser of $L_{X}(W_{X}) + \frac{\lambda}{2}\lVert W_{X} \rVert^{2}_{2}$, replacing $W_{X}$ by $\hat{W}_{X}$ in Inequality~\eqref{subeq:lambda_strong} decreases the values of Inequality~\eqref{subeq:lambda_strong} and Equation~\eqref{subeq:change_measure}. Following the same proof as Theorem~\ref{thm:global_sensitivity}, we obtain a tighter upper bound for $\lVert W_{X^{(-i)}} - \hat{W}_{X} \rVert_{2}$; that is, for some $c > 0$, $\lVert W_{X^{(-i)}} - \hat{W}_{X} \rVert_{2} = \lVert W_{X^{(-i)}} - W_{X} \rVert_{2} - c \leq \sfrac{2\rho}{\lambda n} - c$. Since we assume the loss function is $\rho$-Lipschitz, we have \begin{align} l(W_{X^{(-i)}},\mathbf{x}_{i}) - l(\hat{W}_{X},\mathbf{x}_{i}) & \leq \rho\lVert W_{X^{(-i)}} - \hat{W}_{X} \rVert_{2} \nonumber \\ & = \rho\lVert W_{X^{(-i)}} - W_{X} \rVert_{2} - c\rho \nonumber \\ & \leq \frac{2\rho^{2}}{\lambda n} - c\rho. \end{align} Since this is valid for any $X$ and $\mathbf{x}_{i}$, $i \sim U(|X|)$, $|X| = n$, we immediately have \begin{equation} \mathbb{E}_{(X, \mathbf{x}_{i}) \sim \mathcal{D}^{n+1}, i \sim U(n)}[l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(\hat{W}_{X}, \mathbf{x}_{i})] \leq \frac{2\rho^{2}}{\lambda n} - c\rho.
\end{equation} Since $c\rho > 0$ and the upper bound on $\mathbb{E}_{(X, \mathbf{x}_{i}) \sim \mathcal{D}^{n+1}, i \sim U(n)}[l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})]$ is $\sfrac{2\rho^{2}}{\lambda n}$ (Theorem~\ref{thm:global_sensitivity}), injecting DP noise makes the model more On-Average-Remove-One stable. \end{proof} \reviseA{Based on Definition~\ref{def:remove_stable}, the more On-Average-Remove-One stable a model is, the less likely it is to overfit. As shown in \cite{YeomS2018}, a less overfitted (or more generalised) model is more resistant to MI attacks.} \section{Experimental Evaluation} \label{sec:exp} In this section, we show experimental evaluations of the proposed algorithm. We start with a description of the datasets, the performance metrics and the experimental configurations, followed by the evaluation results and analyses. \reviseA{We provide an open-source implementation of our algorithm to aid future research.\footnote{See \url{https://github.com/suluz/dp_ml_API}}} \subsection{Datasets} In the experiments, we use the same datasets as Shokri et al.~\cite{ShokriR2017} and Jayaraman et al.~\cite{JayaramanB2019}, i.e., US Adult (Income)\footnote{\url{http://archive.ics.uci.edu/ml/datasets/Adult}}, MNIST\footnote{\url{http://yann.lecun.com/exdb/mnist/}}, Location (Bangkok restaurant check-ins)\footnote{\url{https://sites.google.com/site/yangdingqi/home/foursquare-dataset}}, Purchases\footnote{\url{https://www.kaggle.com/c/acquire-valued-shoppers-challenge/data}}, CIFAR\footnote{\url{https://www.cs.toronto.edu/~kriz/cifar.html}} and Texas Hospital\footnote{\url{https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm}}. Since these datasets are commonly used in the fields of machine learning and MI attacks, we only show their statistics in Table~\ref{tab:datasets}, where \#Rec. is the number of records (randomly sampled from the raw datasets) and \#Feat. is the number of features/attributes in the training sets.
For the details of these datasets, please refer to Section IV-A of \cite{ShokriR2017} and Section 4.1 of \cite{JayaramanB2019}. \begin{table}[!ht] \centering \caption{Dataset Statistics.} \label{tab:datasets} \scalebox{0.9}{ \begin{tabular}{l|c|c|c|c} \hline Dataset & \#Rec. & \#Feat. & \#Classes & \#Shadow Models\\ \hline\hline US Adult & 10,000 & 14 & 2 & 20 \\ \hline MNIST & 10,000 & 784 & 10 & 50 \\ \hline Location & 1,200 & 446 & 30 & 30 \\ \hline Purchase-2 & 10,000 & 600 & 2 & 20 \\ \hline Purchase-10 & 10,000 & 600 & 10 & 20 \\ \hline Purchase-20 & 10,000 & 600 & 20 & 20 \\ \hline Purchase-50 & 10,000 & 600 & 50 & 20 \\ \hline Purchase-100 & 10,000 & 600 & 100 & 20 \\ \hline CIFAR-10 & 10,000 & 3,072 & 10 & 100 \\ \hline CIFAR-100 & 10,000 & 3,072 & 100 & 100 \\ \hline Texas Hospital & 10,000 & 6,169 & 100 & 10 \\ \hline \end{tabular} } \end{table} \subsection{Performance Metrics} \descr{Metrics for overfitting of the baseline non-private models.} In our experiments, we use the upper bound on the On-Average-Remove-One (OARO) Stability (Definition~\ref{def:remove_stable} and Equation~\eqref{eq:upper_loss} in Theorem~\ref{thm:global_sensitivity}) to measure the overfitting of the baseline non-private models, that is, \begin{equation} \label{eq:oaro} \mathbb{E}_{(X, \mathbf{x}_{i}) \sim \mathcal{D}^{n+1}, i \sim U(n)}[l(W_{X^{(-i)}}, \mathbf{x}_{i}) - l(W_{X}, \mathbf{x}_{i})] \leq \frac{2\rho^{2}}{\lambda n}, \end{equation} where $\rho$ is the maximum Lipschitz constant for a given training set and a given neural network, $\lambda$ is the strong convexity constant shown in Lemma~\ref{lm:l2_norm_regulariser} and $n$ is the size of the training set. \reviseA{In practice, for a given training set and a given neural network, $C$, $T$ and $|V_{t}|$ ($t \in [0, T]$) are fixed and pre-determined.
We further take the maximum value of $x_{t}$ at each layer to calculate the upper bound on the Lipschitz constant $\rho$ by Equation~\eqref{eq:lip_constant} in Proposition~\ref{prop:lip_constant}. We then use this empirical maximum $\rho$ to calculate the upper bound on the global sensitivities.} Note that a tighter upper bound in Inequality~\eqref{eq:oaro} indicates a more OARO-stable model. \descr{Metrics for DP models.} Following existing studies~\cite{YeomS2018,JayaramanB2019} on measuring DP performance against MI attacks, we use the same metrics in this paper: accuracy loss, i.e., a DP model's accuracy loss on the test set with respect to the baseline non-private model, and privacy leakage, i.e., the difference between the true positive rate and the false positive rate of the MI attacks (viewed as binary classifiers). They are defined as follows. \begin{align} & \text{Acc\_Loss} = 1 - \frac{\text{Test\_Acc}_{\text{DP}}}{\text{Test\_Acc}_{\text{Baseline (S)}}}, \label{subeq:acc_loss}\\ & \text{Priv\_Leak} = \frac{|\text{TP}_{\text{MI}}|}{|\text{P}|} - \frac{|\text{FP}_{\text{MI}}|}{|\text{N}|}, \label{subeq:priv_leak} \end{align} where Baseline (S) indicates the baseline non-private model trained on the \underline{S}urrogate loss function, $\text{TP}_{\text{MI}}$ and $\text{FP}_{\text{MI}}$ are the true positives and false positives of the MI attacks (specified in this paper as the black-box shadow model-based approach from Shokri et al.~\cite{ShokriR2017}), and P and N are the positive/member and negative/non-member labels of a test data sample. \reviseA{Note that, based on Equations~\eqref{subeq:acc_loss} and \eqref{subeq:priv_leak}, both $\text{Acc\_Loss}$ and $\text{Priv\_Leak}$ are in $[0, 1]$. $\text{Acc\_Loss} = 0$ indicates that a DP model achieves the maximum prediction accuracy, i.e., the same as the baseline model, whereas $\text{Acc\_Loss} = 1$ indicates that a DP model has no prediction utility.
$\text{Priv\_Leak} = 0$ indicates no privacy leakage of a given model, whereas $\text{Priv\_Leak} = 1$ indicates that a given model leaks the maximum privacy under MI attacks.} \subsection{Experimental Configurations} Since existing upper bounds on the global sensitivity of the trained model parameters are looser than ours under the same condition -- convexity of the loss function -- our upper bound is expected to outperform existing upper bounds given the same output perturbation-based algorithm. Hence, we do not perform an experimental comparison between our upper bound and existing ones. In our experiments, we compare the performance achieved by Algorithm~\ref{alg:dpdnn} and Google's open-source implementation~\cite{TensorflowPrivacy} of the DP-SGD approaches on the real-world datasets. \begin{table*}[!ht] \centering \caption{Configurations in the Experiments.} \label{tab:configurations} \scalebox{0.9}{ \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{7}{c}{Non-Private/DP-SGD/MI Shadow Models} \\ \hline \#hidden neurons & $L_{2}$ reg. & learning rate & \#hidden layer & optimiser & batch size & \#epoch \\ \hline \textit{128} & \textit{0.001} & \textit{0.001} & \textit{1} & \textit{ADAM} & \textit{100} & \textit{100} \\ \hline activation func. & loss func.
& $\epsilon$ & $\delta$ & \multicolumn{3}{c}{DP Implementation} \\ \hline \textit{Tanh}, \textit{Softmax} & \textit{cross-entropy} & \textit{[0.01, 1000]} & $\sfrac{1}{10 \times |X|}$ & \multicolumn{3}{c}{\textit{RDP}, \textit{zCDP}, \textit{AC}, \textit{NC}} \\ \hline \multicolumn{7}{c}{Convexification Constant ($\alpha$ in Equation~\eqref{eq:convexification})} \\ \hline \multicolumn{5}{c|}{$\alpha = 1$: Location, MNIST, US Adult, Purchase-2, CIFAR-100} & \multicolumn{2}{c}{$\alpha = 2$: CIFAR-10} \\ \hline \multicolumn{2}{c|}{$\alpha = 4$: Texas Hospital} & \multicolumn{5}{c}{$\alpha = 5$: Purchase-100, Purchase-50, Purchase-20, Purchase-10} \\ \hline\hline \multicolumn{7}{c}{MI Attack Models} \\ \hline \#hidden neurons & $L_{2}$ reg. & learning rate & \multicolumn{4}{c}{Other Configurations} \\ \hline \textit{64} & \textit{$10^{-6}$} & \textit{0.01} & \multicolumn{4}{c}{\textit{The Same as MI Shadow Models}} \\ \hline \end{tabular} } \end{table*} \descr{Hyper-parameters.} In general, we mainly follow the configurations of existing studies on MI attacks (the shadow model-based approach from Shokri et al.~\cite{ShokriR2017}) and DP-SGD~\cite{JayaramanB2019}, but train the baseline non-private models, MI shadow models and DP models with a convexified loss function. Specifically, for training the baseline non-private models and MI shadow models~\cite{ShokriR2017}, we keep the same configurations as Shokri et al.~\cite{ShokriR2017}; to implement the DP-SGD (Gaussian noise injection) approaches, we call Google's open-source tensorflow-privacy package with the same configurations as Jayaraman et al.~\cite{JayaramanB2019}. We also follow the same computation as Jayaraman et al.~\cite{JayaramanB2019} to plot the theoretical upper bound on the privacy leakage of $\epsilon$-DP, which is $\exp(\epsilon) - 1$. When plotting it, we cap it at the privacy leakage of the baseline non-private model.
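As an illustrative sketch (not part of our evaluation pipeline; the function name and the sample leakage values are our own), the plotted theoretical bound can be computed as follows:

```python
import math

def theoretical_leakage_bound(epsilon, baseline_leak):
    """Theoretical upper bound on the privacy leakage of an eps-DP model,
    exp(eps) - 1, capped at the baseline non-private model's leakage."""
    return min(math.exp(epsilon) - 1.0, baseline_leak)

# For small budgets the bound is roughly eps itself; for large budgets
# it is capped at the (hypothetical) baseline leakage of 0.36.
print(theoretical_leakage_bound(0.01, 0.36))  # ~0.01005
print(theoretical_leakage_bound(10, 0.36))    # 0.36
```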
We empirically search the value of the convexification constant ($\alpha$ in Equation~\eqref{eq:convexification}) \reviseA{in $(0, 10]$ with a step of $0.5$} to ensure that the \revise{convexified} loss function achieves (almost) the same training \reviseA{and test accuracy} as the original (non-convex) loss function. Table~\ref{tab:configurations} shows the detailed configurations, where \textit{RDP}, \textit{zCDP}, \textit{AC} and \textit{NC} represent R{\'e}nyi DP~\cite{MironovI2017}, zero-Concentrated DP~\cite{BunM2016}, DP with advanced composition~\cite{DworkC2014} and DP with na{\"i}ve composition~\cite{DworkC2006O}, respectively, and $|X|$ is the size of the training set. \descr{Privacy budget.} \reviseA{In our experiments, we report the average performance of all DP models with the same privacy budget $\epsilon$ answering a single query (the prediction vector of a single data sample). This is mostly for ease of presentation. See Section~\ref{sub:budget-multiple-queries} for a discussion of how this can be extended to multiple queries.} We apply both the Laplace and Gaussian mechanisms to generate DP noise for Algorithm~\ref{alg:dpdnn}. In particular, when applying the Laplace distribution, according to the sequential composition of the privacy budget (Theorem~\ref{thm:Lap-DP_composition}), we split the overall privacy budget $\epsilon$ for a single query as in Theorem~\ref{thm:final_dp}, that is, $\epsilon_{\text{neuron}} (= \epsilon_{\text{sampling}}) = \sfrac{\epsilon}{(2C + 1)}$. When applying the Gaussian distribution, according to the relationship between $\epsilon$-GDP and $(\epsilon, \delta)$-DP and the composition theorem of Gaussian differential privacy (Theorem~\ref{thm:GDP_composition}), we split the overall privacy budget $\epsilon$ as $\epsilon_{\text{neuron}} (= \epsilon_{\text{sampling}}) = \sfrac{\epsilon}{\sqrt{2^{2} \times C + 1}}$.
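A minimal sketch of this per-mechanism budget split (the helper names are ours; $C$ is the number of classes):

```python
import math

def split_budget_laplace(epsilon, num_classes):
    # Sequential composition: one Exp-DP sampling step plus 2*C
    # per-neuron terms share the budget equally, eps / (2C + 1).
    return epsilon / (2 * num_classes + 1)

def split_budget_gaussian(epsilon, num_classes):
    # GDP composition: budgets compose in an L2 fashion, so each
    # sub-mechanism receives eps / sqrt(2^2 * C + 1).
    return epsilon / math.sqrt(4 * num_classes + 1)

eps = 1.0
print(split_budget_laplace(eps, 10))   # 1/21  ~ 0.0476
print(split_budget_gaussian(eps, 10))  # 1/sqrt(41) ~ 0.156
```

Note how the Gaussian split leaves a noticeably larger per-neuron budget than the Laplace split for the same overall $\epsilon$, which is consistent with the Gaussian variant's better utility in the multi-class experiments below.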
\reviseA{When implementing DP-SGD, we use the source code of Jayaraman et al.~\cite{JayaramanB2019} to split $\epsilon$ across epochs.} The privacy budget values in this paper are the same as in Jayaraman et al.~\cite{JayaramanB2019}, i.e., $\epsilon = \{0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000\}$. \descr{Experimental setup.} To evaluate the DP models (Algorithm~\ref{alg:dpdnn} and the DP-SGD models) on a given dataset with a privacy budget $\epsilon$ \reviseA{for a single data sample for prediction}, we first randomly sample two disjoint, equally sized sets from the dataset to be the training set and the test set. Using the training set, we train the baseline non-private model and the $\epsilon$-DP-SGD models. We then run Algorithm~\ref{alg:dpdnn} with $\epsilon$ on the test set. We calculate the accuracy loss of each DP model based on the test accuracy obtained on the test set. Then we perform the MI attack via shadow models~\cite{ShokriR2017} against the baseline non-private model and the DP models to calculate the privacy leakage. Finally, we repeat the training and attacking process 10 times and report the average accuracy loss and privacy leakage. \subsection{Experimental Results and Analysis} In this section, we empirically study the non-private model and the DP models on real-world datasets. We first report the performance, including training and test accuracy, OARO stability and privacy leakage, of the baseline non-private model using the \revise{convexified} loss function. Then we present the comparison between Algorithm~\ref{alg:dpdnn} and DP-SGD in terms of accuracy loss and privacy leakage. Finally, we analyse the experimental results (accuracy loss and privacy leakage) of the DP models (Algorithm~\ref{alg:dpdnn} and existing DP-SGD) obtained on the real-world datasets.
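For concreteness, the accuracy loss and privacy leakage metrics of Equations~\eqref{subeq:acc_loss} and \eqref{subeq:priv_leak} can be sketched as follows (function and variable names are ours; the counts are hypothetical):

```python
def accuracy_loss(test_acc_dp, test_acc_baseline):
    """Acc_Loss = 1 - Test_Acc_DP / Test_Acc_Baseline(S)."""
    return 1.0 - test_acc_dp / test_acc_baseline

def privacy_leakage(tp, fp, num_members, num_nonmembers):
    """Priv_Leak = TPR - FPR of the MI attack viewed as a binary classifier."""
    return tp / num_members - fp / num_nonmembers

# A DP model at half the baseline's test accuracy loses half the utility;
# an MI attack with TPR 0.7 and FPR 0.34 leaks 0.36.
print(accuracy_loss(0.45, 0.90))              # ~0.5
print(privacy_leakage(700, 340, 1000, 1000))  # ~0.36
```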
\subsubsection{Performance of the Convexified Loss Function} \label{sec:exp_surrogate} We compare the performance of models trained on the original non-convex loss function and on the convexified loss function with the same hyper-parameters and the same training and test sets. Table~\ref{tab:accuracy} shows the average training accuracy and the average test accuracy achieved on the real-world datasets over $10$ experiments. \begin{table*}[!ht] \centering \caption{Performance of the Baseline Non-Private Models\\(sorted in decreasing order of Acc. Gap/Priv. Leak. for overfitted and fitted models).} \label{tab:accuracy} \scalebox{0.9}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{Original Loss Func.} & \multicolumn{3}{c|}{\revise{Convexified} Loss Func.} & \multirow{2}{*}{Priv. Leak.} & \multirow{2}{*}{OARO Stability} \\ \cline{2-7} & Training Acc. & Test Acc. & Acc. Gap & Training Acc. & Test Acc. & Acc. Gap & & \\ \hline\hline Location & 1.0000 & 0.6068 & 0.3932 & 0.9916 & 0.6484 & 0.3432 & 0.3600 & 10.8532 \\ \hline Texas Hospital & 0.7990 & 0.5689 & 0.2301 & 0.8770 & 0.5364 & 0.3406 & 0.2295 & 18.8945 \\ \hline Purchase-100 & 0.9992 & 0.7985 & 0.2007 & 0.9942 & 0.7723 & 0.2219 & 0.1664 & 1.8193 \\ \hline Purchase-50 & 0.9994 & 0.8636 & 0.1358 & 0.9985 & 0.8315 & 0.1670 & 0.1236 & 1.7827 \\ \hline\hline Purchase-20 & 0.9986 & 0.9022 & 0.0964 & 0.9796 & 0.8759 & 0.1037 & 0.0727 & 1.6750 \\ \hline Purchase-10 & 0.9973 & 0.9203 & 0.0770 & 0.9735 & 0.8903 & 0.0832 & 0.0292 & 1.5036 \\ \hline Purchase-2 & 0.9963 & 0.9642 & 0.0321 & 0.9951 & 0.9642 & 0.0309 & 0.0073 & 0.4641 \\ \hline MNIST & 0.9863 & 0.9528 & 0.0335 & 0.9494 & 0.9297 & 0.0197 & 0.0035 & 1.9845 \\ \hline US Adult & 0.8310 & 0.8300 & 0.0010 & 0.8260 & 0.8262 & 0.0002 & 0.0023 & 0.0109 \\ \hline\hline CIFAR-10 & 0.6198 & 0.4453 & 0.1745 & 0.5731 & 0.4236 & 0.1495 & 0.0111 & 7.7760 \\ \hline CIFAR-100 & 0.3224 & 0.1677 & 0.1647 & 0.2026 & 0.1389 & 0.0637 & 0.0099
& 9.4090 \\ \hline \end{tabular} } \end{table*} As shown in Table~\ref{tab:accuracy}, the \revise{convexified} loss function provides approximately the same model performance (training accuracy and test accuracy) as the original non-convex loss function. Such a result is also confirmed by the experimental analysis in \cite{DvijothamK2014}. Therefore, we can use the \revise{convexified} loss function to provide reliable baseline non-private models to further analyse the performance of the DP models. \subsubsection{Performance of the On-Average-Remove-One Stability on Measuring Overfitting} \label{sec:exp_stability} In this section, we examine the performance of the OARO stability in measuring overfitting by checking its relationship with two empirical rules for detecting overfitting~\cite{YeomS2018}, i.e., the accuracy gap between training accuracy and test accuracy, and the privacy leakage under MI attacks. Table~\ref{tab:accuracy} and Figure~\ref{fig:stability} show that the OARO stability is significantly correlated with both the accuracy gap and the privacy leakage when the accuracy of a baseline non-private model is acceptable (test accuracy greater than $0.5$). Specifically, in Figure~\ref{subfig:acc_gap_stability}, the Pearson correlation coefficient is 0.8277 (p-value = 0.0059). In Figure~\ref{subfig:priv_leak_stability}, the Pearson correlation coefficient is 0.7412 (p-value = 0.0223). In both Figures~\ref{subfig:acc_gap_stability} and \ref{subfig:priv_leak_stability}, the Spearman's rank correlation coefficient is 0.7333 (p-value = 0.0246) and the Kendall's rank correlation coefficient is 0.6667 (p-value = 0.0127). Therefore, we conclude that \textbf{the OARO stability can estimate the overfitting of a model when the model has a high training accuracy,} since we calculate the OARO stability during the training process.
This could be beneficial for detecting overfitting of a model when there are not enough data to prepare a held-out set for computing the accuracy gap. \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[OARO Stability \\vs. Acc. Gap]{ \includegraphics[width=0.15\textwidth]{./figs/acc_gap_vs_stability} \label{subfig:acc_gap_stability} } \hfill \subfloat[OARO Stability \\vs. Priv. Leak.]{ \includegraphics[width=0.15\textwidth]{./figs/priv_leak_vs_stability} \label{subfig:priv_leak_stability} } \hfill \subfloat[Priv. Leak. \\vs. Acc. Gap]{ \includegraphics[width=0.15\textwidth]{./figs/priv_leak_vs_acc_gap} \label{subfig:priv_leak_acc_gap} } \caption{OARO Stability vs. Empirical Rules of Overfitting.} \label{fig:stability} \end{figure} \subsubsection{Performance of the Differentially Private Models} \label{sec:exp_dp} Figures~\ref{fig:location} to \ref{fig:cifar_100} depict the accuracy loss and privacy leakage on the six real-world datasets, where \textit{Alg.~\ref{alg:dpdnn} (Gaus)} and \textit{Alg.~\ref{alg:dpdnn} (Lap)} represent Algorithm~\ref{alg:dpdnn} implemented with Gaussian noise and Laplace noise, respectively. \reviseA{Table~\ref{tab:sensitivity} shows the Lipschitz constants and the global sensitivities we calculated and used in our experiments based on our theoretical results.} We give our key findings below. \textbf{Finding 1: Avoiding overfitting is still the rule of thumb to mitigate the effect of MI attacks (via shadow models).} Based on the observed accuracy gap and privacy leakage of the non-private models on different datasets (Figure~\ref{subfig:priv_leak_acc_gap}, where the Pearson correlation coefficient is 0.9592, the Spearman's rank and Kendall's rank correlation coefficients are 1, and the p-values of all three correlation coefficients are 0), we reach the same conclusion as existing works~\cite{YeomS2018,TruexS2019,TonniS2020}.
That is, when a model is not overfitted, the privacy leakage of the non-private model is rather marginal (almost zero) under MI attacks. \textbf{Finding 2: The accuracy loss of Algorithm~\ref{alg:dpdnn} follows a stable, monotonically decreasing curve when tuning the privacy budget.} This property gives us a predictable accuracy loss when configuring a specific value of the privacy budget to generate DP predictions. This stability is two-fold. First, on a single query, the maximum accuracy loss of Algorithm~\ref{alg:dpdnn} is about $0.5$, which is much smaller than that of existing DP-SGD approaches. Second, the accuracy loss of Algorithm~\ref{alg:dpdnn} (Gaussian noise) decreases significantly from its maximum value to $0$ when $\epsilon \in [0.1, 10]$, which is the commonly used range for tuning the privacy budget in practice (for Laplace noise, $\epsilon \in [1, 100]$). We observe exceptions on the Location (Figure~\ref{fig:location}) and Purchase-2 (Figure~\ref{fig:purchase_2}) datasets, where such ranges of the privacy budget are $[1, 100]$ and $[0.01, 1]$, respectively. In contrast, we cannot observe such a stable curve for existing DP-SGD approaches in any aspect, e.g., the maximum accuracy loss (varying from $1.0$ to $0.1$ across datasets) or the range of privacy budget over which the accuracy loss decreases (varying from $[0.05, 100]$ to $[0.01, 1]$ across datasets). \textbf{Finding 3: Algorithm~\ref{alg:dpdnn} achieves a good privacy-utility trade-off when the privacy leakage of the baseline model is not approximately 0.} Specifically, on most datasets (except the two best fitted models, on the MNIST and US Adult datasets, Figures~\ref{fig:mnist} and \ref{fig:adult}, where the privacy leakage is less than $0.02$), for a given privacy budget, Algorithm~\ref{alg:dpdnn} (Gaussian noise and Laplace noise) provides the least accuracy loss and achieves the privacy leakage closest to the theoretical bound of $\epsilon$-DP for a single query.
When tuning the privacy budget to large values, say over $10$ (Gaussian noise) or $100$ (Laplace noise), Algorithm~\ref{alg:dpdnn} converges to the same privacy leakage as the non-private models on most of the datasets, which is expected of a DP algorithm. \textbf{Finding 4: Algorithm~\ref{alg:dpdnn} (Laplace noise) achieves an accuracy loss curve similar to that of Algorithm~\ref{alg:dpdnn} (Gaussian noise) on the binary-class datasets.} On the two binary-class datasets, Purchase-2 (Figure~\ref{fig:purchase_2}) and US Adult (Figure~\ref{fig:adult}), the performance of Gaussian noise injection and Laplace noise injection is close. In contrast, on the multi-class datasets, Gaussian noise injection always outperforms Laplace noise injection in the privacy-utility trade-off. \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_locations_30} \label{subfig:location_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_locations_30} \label{subfig:location_priv_leak} } \caption{Performance Evaluations on Location Dataset.} \label{fig:location} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_texas_100} \label{subfig:texas_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_texas_100} \label{subfig:texas_priv_leak} } \caption{Performance Evaluations on Texas Hospital Dataset.} \label{fig:texas} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_purchases_100} \label{subfig:purchase_100_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_purchases_100} \label{subfig:purchase_100_priv_leak} }
\caption{Performance Evaluations on Purchase-100 Dataset.} \label{fig:purchase_100} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_purchases_50} \label{subfig:purchase_50_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_purchases_50} \label{subfig:purchase_50_priv_leak} } \caption{Performance Evaluations on Purchase-50 Dataset.} \label{fig:purchase_50} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_purchases_20} \label{subfig:purchase_20_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_purchases_20} \label{subfig:purchase_20_priv_leak} } \caption{Performance Evaluations on Purchase-20 Dataset.} \label{fig:purchase_20} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_purchases_10} \label{subfig:purchase_10_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_purchases_10} \label{subfig:purchase_10_priv_leak} } \caption{Performance Evaluations on Purchase-10 Dataset.} \label{fig:purchase_10} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_purchases_2} \label{subfig:purchase_2_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_purchases_2} \label{subfig:purchase_2_priv_leak} } \caption{Performance Evaluations on Purchase-2 Dataset.} \label{fig:purchase_2} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ 
\includegraphics[width=0.23\textwidth]{./figs/acc_loss_mnist_10} \label{subfig:mnist_10_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_mnist_10} \label{subfig:mnist_10_priv_leak} } \caption{Performance Evaluations on MNIST Dataset.} \label{fig:mnist} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_adults_2} \label{subfig:adult_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_adults_2} \label{subfig:adult_priv_leak} } \caption{Performance Evaluations on US Adult Dataset.} \label{fig:adult} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_cifar_10} \label{subfig:cifar_10_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_cifar_10} \label{subfig:cifar_10_priv_leak} } \caption{Performance Evaluations on CIFAR-10 Dataset.} \label{fig:cifar_10} \end{figure} \begin{figure}[!ht] \centering \captionsetup{justification=centering} \subfloat[Accuracy Loss]{ \includegraphics[width=0.23\textwidth]{./figs/acc_loss_cifar_100} \label{subfig:cifar_100_acc_loss} } \hfill \subfloat[Privacy Leakage]{ \includegraphics[width=0.23\textwidth]{./figs/priv_leak_cifar_100} \label{subfig:cifar_100_priv_leak} } \caption{Performance Evaluations on CIFAR-100 Dataset.} \label{fig:cifar_100} \end{figure} \reviseA{ \textbf{Finding 5: For a given neural network topology, the upper bounds on the global sensitivities depend heavily on the number of classes and the number of records in the training set, rather than on the Lipschitz constant.} As reported in Table~\ref{tab:sensitivity}, the value of the Lipschitz constant $\rho$ varies across datasets, whereas the global sensitivities $\Delta
\omega$ and $\Delta z_{T}$ remain relatively stable. Exceptions are the US Adult, Purchase-2 and Location datasets: the former two are binary-class datasets (whose global sensitivities are about half of those of the datasets with the same training set size), and the latter contains only 1,200 records (whose global sensitivities are about 10 times those of the other multi-class datasets). } \begin{table}[!th] \centering \caption{\reviseA{Sensitivities and Lipschitz Constant of Alg.~\ref{alg:dpdnn} Calculated and Used in Experiments.}} \label{tab:sensitivity} \scalebox{0.85}{ \begin{tabular}{l|c|c|c|c} \hline Dataset & $\rho$ & $\Delta \omega$ & $\Delta z_{T}$ & $\Delta p$ ($\exp(2\Delta z_{T})-1$) \\ \hline\hline US Adult & 0.1654 & 0.0015 & 0.1871 & 0.4538 (0.4538) \\ \hline MNIST & 2.2274 & 0.0028 & 0.3577 & 1.0000 (1.0450) \\ \hline Location & 1.8044 & 0.0244 & 3.1190 & 1.0000 (510.8338) \\ \hline Purchase-2 & 1.0771 & 0.0016 & 0.2000 & 0.4918 (0.4918) \\ \hline Purchase-10 & 1.9388 & 0.0028 & 0.3570 & 1.0000 (1.0421) \\ \hline Purchase-20 & 2.0465 & 0.0029 & 0.3738 & 1.0000 (1.1119) \\ \hline Purchase-50 & 2.1111 & 0.0029 & 0.3765 & 1.0000 (1.1234) \\ \hline Purchase-100 & 2.1327 & 0.0029 & 0.3664 & 1.0000 (1.0809) \\ \hline CIFAR-10 & 4.4091 & 0.0028 & 0.3594 & 1.0000 (1.0520) \\ \hline CIFAR-100 & 4.8500 & 0.0030 & 0.3897 & 1.0000 (1.1802) \\ \hline Texas Hospital & 6.8729 & 0.0031 & 0.3928 & 1.0000 (1.1937) \\ \hline \end{tabular} } \end{table} \subsubsection{Analysis of Experimental Results} The observations in Section~\ref{sec:exp_surrogate} and Section~\ref{sec:exp_stability} reflect the theoretical properties of the \revise{convexified} loss function shown in \cite{LoJ2012,DvijothamK2014} and of the OARO stability in Definition~\ref{def:remove_stable} and Corollary~\ref{cr:dp_effect}.
Findings 1, 2, 3, 4 \reviseA{and 5} in Section~\ref{sec:exp_dp} are explained by the noise injection scheme of Algorithm~\ref{alg:dpdnn}: applying Exp-DP to sample an individual neuron for noise injection, together with a tight upper bound on the global sensitivity, provides a better privacy-utility trade-off for an output perturbation-based DP algorithm. First, given how model accuracy is measured, injecting large positive noise into the neuron representing the accurate prediction class (or injecting small negative noise into the neurons representing the inaccurate prediction classes) does not impact the test accuracy of Algorithm~\ref{alg:dpdnn}. When $\epsilon \leq 10^{-2}$, the sampling probabilities (applying Exp-DP) of all neurons at the output layer are similar. In this case, Algorithm~\ref{alg:dpdnn}, whether injecting Laplace or Gaussian noise, gives the same (or a similar) prediction as the baseline model about half the time. Hence, we observe $\text{Acc\_Loss} \approx 0.5$ and $\text{Priv\_Leak} > 0$ for Algorithm~\ref{alg:dpdnn} when $\epsilon \leq 10^{-2}$. With a relatively larger privacy budget, the neuron representing the accurate prediction class has a much higher probability of being sampled; together with the tight upper bound on the global sensitivities, this ensures that Algorithm~\ref{alg:dpdnn} achieves a better privacy-utility trade-off for a single query. Second, on the binary-class datasets, since there are only two neurons at the output layer, the Exp-DP samples the neuron representing the accurate prediction class with high probability. In this case, the amount of noise does not impact the final prediction outcome. Hence, we cannot observe significant differences between Gaussian noise and Laplace noise on the binary-class datasets.
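This effect can be illustrated with a small sketch of exponential-mechanism sampling over the output-layer neurons; the scoring rule, the score values and the sensitivity used here are illustrative assumptions of ours, not the exact quantities of Algorithm~\ref{alg:dpdnn}:

```python
import math

def exp_mechanism_probs(scores, epsilon, sensitivity):
    """Sampling probabilities of the exponential mechanism:
    Pr[j] proportional to exp(eps * score_j / (2 * sensitivity))."""
    weights = [math.exp(epsilon * s / (2 * sensitivity)) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical output-layer values; class 0 is the accurate prediction.
scores = [3.2, 0.4, 0.1]
print(exp_mechanism_probs(scores, 0.01, 0.36))  # nearly uniform
print(exp_mechanism_probs(scores, 10.0, 0.36))  # concentrates on class 0
```

At $\epsilon \leq 10^{-2}$ the three probabilities are almost indistinguishable, so the sampled neuron (and hence the noisy prediction) is close to a coin flip; at a larger budget the accurate class dominates the sampling.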
\reviseA{ Third, as discussed in Section~\ref{sec:lip_cons}, roughly $\frac{C-1}{Cn}$ is the factor impacting the global sensitivities, where $C$ is the number of classes and $n$ is the number of training records. $\frac{C-1}{Cn}$ is monotonically increasing in $C \in [2, +\infty)$ and monotonically decreasing in $n \in [1, +\infty)$. When $C = 2$ (binary-class datasets), we get $\frac{C-1}{C} = \frac{1}{2}$. On the other hand, when $C \geq 10$, we get $\frac{C-1}{C} \approx 1$. This implies that the global sensitivities obtained on the US Adult and Purchase-2 datasets (binary-class datasets) are about half of those on the other datasets ($C > 2$, $n = 10,000$), as can be seen in Table~\ref{tab:sensitivity}. On the other hand, the Location dataset is the only dataset with $n = 1,200$ records, compared to $n = 10,000$ records for all the other datasets (see Table~\ref{tab:datasets}). As a result, the global sensitivities obtained on the Location dataset are about 10 times those on the other datasets ($C > 2$, $n = 10,000$). } \reviseA{ \subsubsection{Privacy Budget Consumption for Multiple Queries} \label{sub:budget-multiple-queries} Since Algorithm~\ref{alg:dpdnn} is based on output perturbation, it consumes privacy budget for each query (a single data sample for prediction), and hence can only be used for a fixed number of queries before exhausting the privacy budget. On the other hand, DP-SGD, as a gradient perturbation approach, consumes privacy budget during the training process, so it can answer an unlimited number of queries without further privacy budget consumption. However, we can always scale the privacy budget according to the number of queries we are willing to answer.
Based on the experimental results observed in Figures~\ref{fig:location} to \ref{fig:purchase_50}, we conclude that when the privacy leakage of a non-private model is no less than $0.1$, given an overall privacy budget $\epsilon \leq 1$ for multiple queries, Algorithm~\ref{alg:dpdnn} (Gaussian) can answer a large number of queries (likely as many as expected in practice), while still outperforming DP-SGD in the privacy-utility trade-off. Note that while we report these results for single queries, we can use the same results assuming a larger number of queries for a fixed budget. For instance, consider the results on the Location dataset in Figure~\ref{fig:location}. If we have $\epsilon = 10$ and we would like to answer 1,000 queries, then we can look at the accuracy loss (privacy leakage) at $\epsilon = 10^{-2} = 10/1000$ in the figure for the accuracy loss (privacy leakage) per query. Thus, again on the Location dataset (Figure~\ref{fig:location}), given $\epsilon = 10$ for $1,000$ queries, based on the sequential composition of the privacy budget (Theorem~\ref{thm:Lap-DP_composition}), each single query consumes $\epsilon = 10^{-2}$. At $\epsilon = 10^{-2}$, Algorithm~\ref{alg:dpdnn} (Gaussian) has about $0.5$ accuracy loss and $0.2$ privacy leakage, whereas the best DP-SGD (RDP) at $\epsilon = 1$ (since DP-SGD does not consume privacy budget during the test/prediction phase) has about $0.9$ accuracy loss and $0.0$ privacy leakage. That is, Algorithm~\ref{alg:dpdnn} has less accuracy loss but more privacy leakage, hence a better privacy-utility trade-off overall in this example. For an even larger number of queries (say 10,000), the privacy-utility loss is comparable to DP-SGD.
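The per-query budget computation in this example, assuming plain sequential composition, can be sketched as:

```python
def per_query_budget(total_epsilon, num_queries):
    """Sequential composition: an overall budget split evenly
    across a fixed number of prediction queries."""
    return total_epsilon / num_queries

# Worked example from the text: eps = 10 over 1,000 queries.
print(per_query_budget(10, 1000))  # 0.01 per query
```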
Based on the analysis of Finding 2, for a single query with $\epsilon < 10^{-2}$, Algorithm~\ref{alg:dpdnn} should still have $\text{Acc\_Loss} \approx 0.5$ and $\text{Priv\_Leak} > 0$, so in practice we can extend the number of queries answered by Algorithm~\ref{alg:dpdnn} to a large number. Thus, the advantage of our method is that we can provide higher accuracy if the model is required to answer a small number of queries, which is not the case with input perturbation or gradient perturbation-based methods such as DP-SGD.} \section{Conclusion} \label{sec:conclusion} \revise{\textbf{Concluding Remarks.}} In this paper we propose a framework that provides a differentially private prediction probability vector for general deep learning tasks. Our approach injects DP noise into only one neuron (sampled with the Exponential mechanism of differential privacy) at the output layer of a given neural network. To implement our approach, we mathematically analyse the upper bound on the $L_{1}$ global sensitivity of an individual neuron via an upper bound on the $L_{2}$ global sensitivity of the trained model parameters that is tighter than existing results. Our empirical studies show that our approach achieves a better trade-off between utility and privacy than existing DP-SGD approaches on six commonly used real-world datasets, \reviseA{given an overall privacy budget $\epsilon \leq 1$ for a large number of queries.} \revise{ \textbf{Limitations and Future Work.} Our approach provides DP predictions only up to a pre-defined number of queries, since we consume privacy budget per query. For an output perturbation-based solution to answer an unlimited number of queries while guaranteeing DP, DP noise must be injected directly into all the trained model parameters. However, due to the complexity of the topology of neural networks, this would result in over-injected noise that adversely impacts the utility of prediction.
Additionally, a tight upper bound on the $L_{2}$ global sensitivity of the trained model parameters does not always yield a tight upper bound on the $L_{1}$ global sensitivity of an individual neuron, as shown in Section~\ref{sec:global_sens}. Indeed, the experimental results show trivial upper bounds for most datasets (Table~\ref{tab:sensitivity}), which have no more than $10,000$ training data samples. To improve our results, i.e., to answer more queries, we should consume less privacy budget per query, either by exploring other noise injection schemes with better privacy budget composition or by further tightening our upper bounds on the global sensitivity. This is an open question for future work. } \section*{Acknowledgements} We thank Bargav Jayaraman for clarifications on the use of their implementation of DP-SGD. This work was conducted with funding received from the Macquarie University Cyber Security Hub, in partnership with the Defence Science \& Technology Group and Data61-CSIRO, through the Next Generation Technologies Fund. The experiments of this work were partially supported by the Australasian Leadership Computing Grants scheme, with computational resources provided by NCI Australia, an NCRIS enabled capability supported by the Australian Government. Hassan Jameel Asghar is the corresponding author. \bibliographystyle{IEEEtran}
\section{Statement of the problem and preliminary results}\label{1} \vs 0.5cm In \cite{[S]}, the second author studied the following class of problems: \begin{equation*}(P_0) \begin{cases} & \text{Find } (u, \chi) \in H^{1}(\Omega)\times L^\infty (\Omega) \text{ such that}:\\ & (i)\quad u\geq 0, \quad 0\leq \chi\leq 1 , \quad u(\chi -1 ) = 0 \,\,\text{ a.e. in } \Omega\\ & (ii)\quad u= 0 \quad \text{ on } \Gamma_2 \\ & (iii)\quad \displaystyle{\int_\Omega }\big( a(x) \nabla u + \chi h(x) \big) .\nabla\xi dx \,\leq\, \int_{\Gamma_3}\beta(x,\varphi-u)\xi d\sigma(x)\\ &\hskip 1.7cm \forall \xi \in H^{1}(\Omega),\quad\xi \geq 0 \text{ on } \Gamma_2 \end{cases} \end{equation*} \n where $\Omega=\{(x_1,x_2)\in \mathbb{R}^2\,\,/\,\,x_1\in(a_0,b_0),\,\, d_0<x_2<\gamma(x_1)\}$, with $\gamma\in C^{0,1}(a_0,b_0)$, $\Gamma_2\cup \Gamma_3=\{(x_1,\gamma(x_1))\,\,/\,\,x_1\in(a_0,b_0)\}$, $\Gamma_2\cap \Gamma_3=\emptyset$, $\Gamma_3$ is a nonempty connected and relatively open subset of $\partial\Omega$, $a=[a_{ij}]$ is a $2\times2$ matrix, and $h$ is a nonnegative function. \n In \cite{[C]}, \cite{[ChaL2]}, and \cite{[S]}, the monotonicity of $\chi$ with respect to the variable $x_2$ has allowed the authors to define the free boundary $\partial \{u>0\} \cap\Omega$ as the graph of a function $\Phi(x_1)$. Moreover, under suitable assumptions, it was proven that $\Phi$ is continuous both for Dirichlet and Neumann conditions. \vs0.2cm\n In this paper, we consider a more general class of free boundary problems in the same spirit as \cite{[ChaL3]}, namely we replace the real-valued function $h$ by a vector function $H$: \begin{equation*}(P) \begin{cases} & \text{Find } (u, \chi) \in H^{1}(\Omega)\times L^\infty (\Omega) \text{ such that}:\\ & (i)\quad u\geq 0, \quad 0\leq \chi\leq 1 , \quad u(1-\chi) = 0 \,\,\text{ a.e.
in } \Omega\\ & (ii)\quad u= 0 \quad \text{ on } \Gamma_2 \\ & (iii)\quad \displaystyle{\int_\Omega }\big( a(x) \nabla u + \chi H(x)\big) .\nabla\xi dx \,\leq\, \int_{\Gamma_3}\beta(x,\varphi-u)\xi d\sigma(x)\\ &\hskip 1.7cm \forall \xi \in H^{1}(\Omega),\quad\xi \geq 0 \text{ on } \Gamma_2 \end{cases} \end{equation*} \n where $\Omega$ is a bounded domain of $\mathbb{R}^2$ whose boundary $\partial\Omega$ is of class $C^1$, and $\Gamma_2$ and $\Gamma_3$ are disjoint nonempty subsets of $\partial\Omega$, with $\Gamma_3$ connected and relatively open in $\partial\Omega$. \n $a=[a_{ij}]$ is a $2\times2$ matrix that satisfies, for two positive constants $\lambda$ and $\Lambda$, \begin{eqnarray} & \vert a_{ij}(x)\vert \leq \Lambda, \quad\text{ for a.e. }x\in \Omega, \quad \forall i, j=1, 2\\ &{ a}(x)\xi.\xi \geq \lambda\vert \xi\vert^2\quad\forall\xi\in \mathbb{R}^{2},\quad\text{ for a.e. }x\in \Omega, \end{eqnarray} \n $H=(H_1,H_2)$ is a $C^1(\overline{\Omega})$ vector function that satisfies, for some positive constants $\overline{H}>\underline{H}$: \begin{eqnarray} & |H_1(x)|\leq \overline{H} \quad\text{ in } \Omega \\ & \underline{H} \leq H_2(x)\leq \overline{H} \quad\text{ in } \Omega \\ & \text{div}(H(x))\geq 0\quad\text{ in }\Omega\\ & H(x).\nu>0\quad \text{ on } \Gamma_3 \end{eqnarray} \n and $\beta$ satisfies \begin{eqnarray} && \beta(x,.)\quad\text{ is continuous for a.e. }x\in \Gamma_3 \\ && \beta(x,0)=0\quad\text{for a.e. }x\in \Gamma_3 \\ && \beta(x,.)\quad\text{ is non-decreasing for a.e. }x\in \Gamma_3 \end{eqnarray} \n Many free boundary problems belong to the above class of problems, for example the dam problem with a Neumann boundary condition on the reservoir bottoms (see \cite{[ChiL1]}, \cite{[ChiL2]}, \cite{[L1]}, \cite{[L2]}, \cite{[L3]}, \cite{[L4]}). Another problem arises from the thermoelectrical modelling of aluminium electrolytic cells (see \cite{[BMQ]}). \vs0.2cm\n In these problems we investigate the free boundary $\partial[u>0]\cap\Omega$ that separates two different regions.
In the dam problem, it separates the wet and non-wet parts of the porous medium. In the aluminium electrolysis problem, it separates liquid and solid aluminium. \begin{remark}\label{r1.1} Under assumptions (1.1)-(1.4) and (1.7)-(1.9), we can prove the existence of a solution for the problem $(P)$ as in \cite{[ChiL1]}. For a more general situation, we refer to \cite{[L1]}. \end{remark} \vs0.3cm \n We begin with the following proposition, which can be obtained as in \cite{[ChaL3]}. \begin{proposition}\label{p1.1} For any solution $(u,\chi)$ of $(P)$, we have: \begin{eqnarray*} && i)\quad \text{div}(a(x)\nabla u)=-\text{div}(\chi H(x)) \quad\text{in}\quad \mathcal{D}'(\Omega).\\ &&ii)\quad \text{div}(\chi H(x))-\chi_{\{u>0\}}\text{div}(H(x))\leq0\quad\text{in}\quad\mathcal{D}'(\Omega). \end{eqnarray*} \end{proposition} \begin{remark}\label{r1.2} As a consequence of Proposition 1.1 i), we obtain (see \cite{[GT]}): \vs 0.2cm\n $i)$ $u\in C_{loc}^{0,\alpha}( \Omega)$ for some $\alpha\in(0,1)$. In particular $u$ is continuous in $\Omega\cup\Gamma_2$ and the set $\{u>0\}$ is open. \vs 0.2cm \n $ii)$ If $a\in C_{loc}^{0,\alpha}( \Omega)$ $(0<\alpha<1)$, then we have $u\in C_{loc}^{1,\alpha} (\{u>0\})$.
\end{remark} \vs 0,5cm\n Following \cite{[ChaL1]}, we introduce for each $h\in \pi_{x_{2}}(\Omega)$ and $w\in \pi_{x_1}(\Omega\cap\{x_{2}=h\})$, the following differential equation: $$(E(w,h))\left\{\begin{array}{l} X' (t,w,h)= H(X(t,w,h))\\ X(0,w,h)=(w,h)\\ \end{array}\right.$$ \n It is well known that this equation has a maximal solution $X(.,w,h)$ defined on a maximal interval $(\alpha_{-}(w,h), \alpha_{+}(w,h))$ and continuous on the open set: $$\{(t,w,h):~\alpha_{-}(w,h)< t < \alpha_{+}(w,h), h\in \pi_{x_2}(\Omega) , w\in \pi_{x_1}(\Omega\cap\{x_2=h\})\}$$ \n Moreover, due to (1.4), we have: \[X(\alpha_{-}(w,h),w,h) \in \partial\Omega\cap\{x_2<h\} \quad \text{and}\quad X(\alpha_{+}(w,h),w,h) \in \partial\Omega\cap\{x_2>h\}\] \n In the sequel, we will denote the functions $X(t,w,h), \alpha_{-}(w,h)$, and $\alpha_{+}(w,h)$ respectively by $X(t,w),\alpha_{-}(w),$ and $\alpha_{+}(w)$. \vs 0.3cm \n The function $\alpha_{-}$ (resp. $\alpha_{+}$) is upper (resp. lower) semi-continuous. The next result gives more regularity for $\alpha_{+}$. \begin{theorem}\label{1.1} For every $h\in\pi_{x_2}(\Omega)$, $\alpha_{+}$ is continuously differentiable at each $w_0\in\pi_{x_1}(\Omega\cap \{x_2=h\})$ such that $x_0=X(\alpha_{+} (w_0),w_0)\in \Gamma_3$. \end{theorem} \n \emph{Proof.} Let $h$ and $w_0$ be as in the theorem. Since $\partial\Omega$ is a $C^1$ curve, there exists an open set $U\subset \mathbb{R}^2$ that contains $x_0$ and a $C^1-$diffeomorphism $\Upsilon=(\Upsilon_1,\Upsilon_2):~U~\rightarrow~B_1$ such that \begin{equation}\label{1.10} \Upsilon(U\cap\Omega)=B_1\cap\{y_2>0\}\quad\text{and}\quad \Upsilon(U\cap\partial\Omega)=B_1\cap\{y_2=0\}, \end{equation} where $B_1$ is the unit ball. \vs 0.2 cm\n Let $x_0^-\in (U\cap\partial\Omega)\setminus\{x_0\}$ such that $(x_0^--x_0).\tau(x_0)<0$, where $\tau(x_0)$ is the unit tangent vector to $\partial\Omega$ at $x_0$.
\vs 0.2 cm\n Since $H\in C^1(\overline{\Omega})$, there exists an open set $\Omega^*$ and an extension $H^*$ of $H$ such that $\bar\Omega\subset \Omega^*$ and $H^*\in C^1( \Omega^*)$. Then we consider the unique maximal solution $Z(t)$ of the differential equation: \[\left\{\begin{array}{l} Z' (t)= H^*(Z(t))\\ Z(0)=x_0^-\\ \end{array}\right.\] which is defined on a maximal open interval $(\gamma,\delta)$. \vs 0.2 cm\n Taking into account (1.6), we can see that $Z(t)\in \Omega$ for all $t\in(\gamma,0)$. Now if we assume that $h$ is close enough to $x_{02}$, and denote by $t_h$ the real number at which the curve $Z(t)$ intersects the line $x_2=h$, then there exists $w_0^-\in \pi_{x_1}(\Omega\cap \{x_2=h\})$ such that $Z(t_h)=(w_0^-,h)$. Moreover, it is easy to see that \[\left\{\begin{array}{l} X(t,w_0^-)=Z(t+t_h)\\ X(0,w_0^-)=(w_0^-,h)\\ \end{array}\right.\] \n Since $(x_0^--x_0).\tau(x_0)<0$, we necessarily have $w_0^-<w_0$. Furthermore, for each $w_0^-<w<w_0$, the curve $X(t,w)$ is located between the curves $X(t,w_0)$ and $X(t,w_0^-)$. Therefore we have \begin{equation}\label{1.11} X( \alpha_{+}(w),w)\in U\cap\partial\Omega\quad \forall w\in (w_0^-,w_0) \end{equation} \vs 0.2 cm\n Let now $x_0^+\in (U\cap\partial\Omega)\setminus\{x_0\}$ such that $(x_0^+-x_0).\tau(x_0)>0$.
Arguing as above, we can prove that there exists $w_0^+\in \pi_{x_1}(\Omega\cap \{x_2=h\})$ such that \begin{equation}\label{1.12} X( \alpha_{+}(w),w)\in U\cap\partial\Omega\quad \forall w\in (w_0,w_0^+) \end{equation} \n Taking into account (1.10)-(1.12), we see that there exists $\eta>0$ small enough such that \begin{equation}\label{1.13} \Upsilon_2(X( \alpha_{+}(w),w))=0\quad \forall w\in (w_0-\eta,w_0+\eta) \end{equation} \vs 0.2cm\n For each $w\in \pi_{x_1}(\Omega^*\cap\{x_2=h\})$, let $X^*(t,w)$ be the unique maximal solution of the differential equation: \[(E^*(w,h))\left\{\begin{array}{l} X^{*\prime} (t,w)= H^*(X^*(t,w))\\ X^*(0,w)=(w,h)\\ \end{array}\right.\] $X^*(t,w)$ is defined on a maximal interval $(\alpha_{-}^* (w), \alpha_{+}^*(w))$, and we obviously have $X^*_{\vert_{(\alpha_{-}(w), \alpha_{+}(w))}}=X$. Moreover, we have $\alpha_{-}^* (w)<\alpha_{-}(w)$ and $\alpha_{+}(w)<\alpha_{+}^*(w)$. \vs 0.2cm\n Let $D^*=\{ (t,w) \,/\, w\in (w_0-\eta,w_0+\eta),\, t\in (\alpha_{-}^* (w), \alpha_{+}^*(w))\}$. Since $X^*\in C^1(D^*)$ and $\Upsilon_2\in C^1(U)$, the function $F^*=\Upsilon_2 \circ X^*$ is in $C^1(D^*)$. In addition, $F^*$ is an extension of $F=\Upsilon_2 \circ X$ to $D^*$ and we have \begin{eqnarray*}{{\partial F^*}\over{\partial t}}(t,w) &=& \nabla \Upsilon_2(X^*(t,w)).X^{*\prime}(t,w)\\ &=& \nabla \Upsilon_2(X^*(t,w)).H^*(X^*(t,w)) \end{eqnarray*} \n In particular, we obtain from (1.6) and (1.13) \[{{\partial F^*}\over{\partial t}}(\alpha_{+} (w_0), w_0) =\nabla \Upsilon_2(X(\alpha_{+} (w_0), w_0)) . H(X(\alpha_{+} (w_0), w_0))\neq0 \] Therefore by the implicit function theorem, there exists $\delta \in(0,\eta)$ and a unique function $f : (w_0 -\delta , w_0+\delta) \rightarrow \mathbb{R}$ such that \begin{eqnarray*} F^* (t,w) &=& 0 \quad\text{iff}\quad t=f(w)\\ f(w_0) &=&\alpha_{+}(w_0) \quad\text{ and }\quad f\in C^1 (w_0-\delta,w_0+\delta).\end{eqnarray*} Since $F^* (\alpha_{+} (w),w)= F (\alpha_{+} (w),w)=0$, we have $\alpha_{+}(w)= f(w)$ for all $w\in(w_0-\delta,w_0+\delta)$. We conclude that $\alpha_{+}$ is $C^1$ in a neighborhood of $w_0$. \qed \vs 0.3cm\n Following \cite{[ChaL1]}, we define for each $h\in\pi_{x_2}(\Omega)$, the set: $$D_{h}=\{(t,w): w\in\pi_{x_1}(\Omega\cap\{x_2= h\}), t\in (\alpha_{-}(w), \alpha_{+}(w))\}$$ \n and the $C^1$ mapping: $$\begin{array}{llll} T_{h} :& D_{h} & \longrightarrow & T_{h}(D_{h})\\ & (t,w) &\longmapsto& T_{h}(t,w)= X(t,w)\\ \end{array}$$ \n whose Jacobian determinant is denoted by $Y_{h}(t,w)$. \vs 0,3cm\n The next proposition can be established as in \cite{[ChaL1]}: \begin{proposition}\label{p1.2} \n $i)$ $T_{h}$ is a $C^1-$diffeomorphism. \n $ii)$ $\displaystyle \frac{\partial Y_{h}}{\partial t}(t,w)= Y_{h}(t,w)\,\text{div}(H(X(t,w))).$ \n $iii)$ $Y_{h}(t,w)=-H_2(w,h)\exp\left[\displaystyle\int_{0}^{t} \text{div}(H(X(s,w)))\,ds\right].$ \end{proposition} \vs0.2cm\n In Section 3, we will use the $C^1-$diffeomorphism $T_h$ as a change of variable to transform the problem $(P)$ into a problem of type $(P_0)$. As a consequence, we obtain from \cite{[S]} that the free boundary is locally represented by graphs of a family of continuous functions. \section{Parametrization of the free boundary}\label{2} \vs 0.5cm\n For each $h\in\pi_{x_2}(\Omega)$ and any function $f$ defined in $\Omega$, we shall denote by $\widetilde{f}$ the function $f\circ T_{h}$. \n The first result of this section is a monotonicity property of $\widetilde{\chi}$ with respect to $t$, which translates into the fact that $\chi$ is non-increasing along the orbits of the differential equation $E(w,h)$. For the proof we refer to the one of Theorem 2.1 in \cite{[ChaL3]}. \begin{proposition}\label{p2.1} Let $(u,\chi)$ be a solution of $(P)$.
Then we have for each $h\in \pi_{x_2}(\Omega)$: \[\frac{\partial\widetilde{\chi}}{\partial t} \leq0 \;\;\;\text{in}\;\;\;\mathcal{D}'(D_{h})\] \end{proposition} \vs 0,5cm\n The next proposition is a consequence of the monotonicity of $\widetilde{\chi}$ and the continuity of $\widetilde{u}$. For the proof we refer to the one of Proposition 3.1 in \cite{[ChaL3]}. \begin{proposition} \label{p2.2} Let $(u,\chi)$ be a solution of $(P)$ and $(t_0,w_0)\in D_h$. \vs 0,2cm \n $i)$\quad If $\widetilde{u}(t_0,w_0)>0$, then there exists $\epsilon>0$ such that: $$\widetilde{u}(t,w)>0\quad \forall (t,w)\in \mathcal{C}_{\epsilon}=\{(t,w)\in D_h: |w-w_{0}|<\epsilon,~ t<t_0+\epsilon\}$$ \vs 0,2cm \n $ii)$\quad If $\widetilde{u}(t_0,w_0)=0$, then: $$\widetilde{u}(t,w_0)=0, \quad\forall t\geq t_0$$ \end{proposition} \vs 0,5cm \n Thanks to Proposition 2.2, we can define for each $h\in\pi_{x_2}(\Omega)$, the following function in $\pi_{x_1}(\Omega\cap\{x_2=h\})$: \begin{equation*} \Phi_{h}(w)=\left\{\begin{array}{lll} \sup \{t: (t,w)\in D_{h},~\widetilde{u}(t,w)>0\} & : & \text{if this set is not empty} \\ \alpha_{-}(w) & : & \text{otherwise} \\ \end{array}\right. \end{equation*} \vs 0,2cm\n Arguing as in \cite{[ChaL1]}, we can see that $\Phi_h$ is well defined and satisfies \vs 0,2cm \begin{proposition}\label{p2.3} $\Phi_h$ is lower semi-continuous on $\pi_{x_1} (\Omega\cap\{x_2=h\})$ and $$\{\widetilde{u}>0\}\cap D_h=\{t<\Phi_h(w)\}$$ \end{proposition} \begin{remark}\label{r1.3} If the functions $\Phi_h$ are smooth, then the family of functions $\{\Phi_h\}$ is a local parametrization of the free boundary $\partial \{u>0\} \cap\Omega$. \end{remark} \vs 0,5cm\n The next result gives a description of $\chi$ in the interior of the set $\{u=0\}$.
\begin{theorem} \label{th2.1} Let $(u,\chi)$ be a solution of $(P)$, $(x_{01},x_{02})=T_{h}(t_0,w_0) \in T_{h}(D_h)$, $B_r(t_0,w_0)$ the ball of center $(t_0,w_0)$ and radius $r$, $Z_0=\big((t_0,\infty)\times (w_0-r,w_0+r)\big)\cap D_h$ and $C_r=Z_0 \cup B_{r}(t_0,w_0)$. \n If $\widetilde{u}=0$ in $B_r(t_0,w_0)\subset D_h$, then we have $\widetilde{u}=0$ in $C_r$. Moreover \begin{enumerate} \item If $\overline{T_{h} (Z_{0})} \cap \Gamma_{3} =\emptyset,$ then $\widetilde{\chi}=0$ in $C_r$. \item If $\overline{T_{h} (Z_{0})} \cap \Gamma_{2} = \emptyset,$ then: \[ \widetilde{\chi}(t,w)=\displaystyle\frac{Y_{h}(\alpha_{+}(w),w)} {Y_{h}(t,w)}\frac {\beta (., \varphi(.))} {H.\nu}(X(\alpha_{+}(w),w)). \] \end{enumerate} \end{theorem} \n To prove the theorem, we need two lemmas. \vs 0.3cm \begin{lemma}\label{lem2.1} For each $x_0\in \Gamma_3$, there exists $\eta>0$ small enough and a $C^1$ function $\sigma$ such that one of the following situations holds \begin{eqnarray*} i) &\quad \Gamma_3\cap B(x_0,\eta)\subset\{(x_1,\sigma(x_1))\} \\ ii)&\quad \Gamma_3\cap B(x_0,\eta)\subset\{(\sigma(x_2),x_2)\} \\ \end{eqnarray*} \end{lemma} \n\emph{Proof.} Since $\Gamma_3$ is a $C^1-$curve, there exists an open set $U\subset \mathbb{R}^2$ that contains $x_0=(x_{01},x_{02})$ and a $C^1-$diffeomorphism $\Upsilon:~U~\rightarrow~B_1$ such that $\Upsilon(U\cap\Omega)=B_1\cap\{y_2>0\}$ and $\Upsilon(U\cap\Gamma_3)=B_1\cap\{y_2=0\}$. \vs0.2cm\n If $\Upsilon=(\Upsilon_1,\Upsilon_2)$, then we have: \[\Upsilon_2(x)=0\quad \forall x\in U\cap\Gamma_3\] \vs0.2cm\n Due to (1.6), we have $\nabla\Upsilon_2(x_0)\neq0$. Therefore either $\displaystyle{{{\partial \Upsilon_2}\over{\partial x_1}}(x_0)\neq0}$, or $\displaystyle{{{\partial \Upsilon_2}\over{\partial x_2}}(x_0)\neq0}$. \vs0.2cm\n Assume for example that we have $\displaystyle{{{\partial \Upsilon_2}\over{\partial x_2}}(x_0)\neq0}$.
Then by the implicit function theorem, there exist $\delta>0$ small enough and a unique $C^1-$function $\sigma : (x_{01} -\delta , x_{01}+\delta) \rightarrow \mathbb{R}$ such that \begin{eqnarray*} &&\Upsilon_2(x_1,x_2)= 0 \quad\text{iff} \quad x_2=\sigma(x_1)\\ &&\quad\text{for all } x_1\in(x_{01}-\delta,x_{01}+\delta). \end{eqnarray*} \n So $i)$ holds. \vs 0.2cm\n If $\displaystyle{{{\partial \Upsilon_2}\over{\partial x_1}}(x_0)\neq0}$, then we can show in the same fashion that $ii)$ holds. \qed \begin{lemma}\label{lem2.2} Let $w_1, w_2\in \pi_{x_1}(\Omega\cap\{x_2=h\})$ such that $w_1<w_2$ and $T_h(\alpha_+(w_i),w_i)\in \Gamma_3$, for $i=1, 2$. Then we have: \begin{eqnarray*} &&\int_{Z}\big( \texttt{a}(t,w) \nabla\tilde{u} + \tilde{\chi} \texttt{h}(t,w)e_t\big) .\nabla\xi dtdw \,=\, \int_{\tilde{\Gamma}_3}\lambda(.,\tilde{\varphi}-\tilde{u})\xi d\tilde{\sigma}\\ &&\qquad\forall \xi \in H^1(Z),\quad\xi=0 ~~\text{ on }~~ \partial Z\cap D_h \end{eqnarray*} where \begin{eqnarray*} && Z=\{(t,w):~w_1<w<w_2 ~\text{ and }~ h< t < \alpha_{+}(w)\}\\ && \tilde{\Gamma}_3=\{(\alpha_{+}(w),w):~w_1<w<w_2\} \end{eqnarray*} \begin{eqnarray*} && \lambda((t,w),z)=\mu(w)\beta(T_h(t,w),z)\\ &&\mu(w)={{|Y_h|(\alpha_{+}(w),w)}\over \sqrt{1+\alpha_{+}^{\prime 2}(w)}(H.\nu)(T_h(\alpha_{+}(w),w)) }\\ && \texttt{h}(t,w) = |Y_h(t,w)|, \qquad e_t=(1,0) \\ && \texttt{a}(t,w)= |Y_h(t,w)| ^{t}P(t,w).a(X(t,w)).P(t,w) \\ &&\hbox{with } \quad P= (^{t}\mathcal{J}T_h)^{-1} = \displaystyle{1\over Y_h(t,w)} \left(\begin{array}{cc} \displaystyle{\partial X_2\over \partial w} (t,w) & -H_2(X(t,w))\\ -\displaystyle{\partial X_1\over \partial w} (t,w) & H_1(X(t,w))\\ \end{array}\right). \end{eqnarray*} \end{lemma} \vs0,3cm\n\emph{Proof.} Let $\xi\in H^1(Z)$ such that $\xi=0$ on $\partial Z\cap D_h$. Then $\pm\xi \circ T_h^{-1}\, \chi_{T_h(Z)}$ are test functions for $(P)$ and we have \begin{equation}\label{2.1} \int_{T_h(Z)}( a(x)\nabla u + \chi H(x)).
\nabla(\xi \circ T_h^{-1}) dx=\int_{\Gamma_3\cap T_h(\partial Z)}\beta(x,\varphi-u)\,\xi \circ T_h^{-1}\, d\sigma(x) \end{equation} \n The left hand side of (2.1) can be written using the change of variable $T_h$ (see \cite{[ChaL3]}) as \begin{equation}\label{2.2} \int_{Z}( \texttt{a}(t,w)\nabla (u \circ T_h) + \chi \circ T_h \, \texttt{h}(t,w) e_t). \nabla \xi \,dt dw \end{equation} where the matrix $\texttt{a}$ and the function $\texttt{h}$ are given by \begin{eqnarray*} && \texttt{h}(t,w) = |Y_h(t,w)|, \qquad e_t=(1,0) \\ && \texttt{a}(t,w)= |Y_h(t,w)| ^{t}P(t,w).a(X(t,w)).P(t,w) \\ &&\hbox{with } \quad P= (^{t}\mathcal{J}T_h)^{-1} = \displaystyle{1\over Y_h(t,w)} \left(\begin{array}{cc} \displaystyle{\partial X_2\over \partial w} (t,w) & -H_2(X(t,w)) \\\\ -\displaystyle{\partial X_1\over \partial w} (t,w) & H_1(X(t,w))\\ \end{array}\right). \end{eqnarray*} \n To handle the right hand side of (2.1), we first observe that \begin{equation}\label{2.3} \{T_h(\alpha_{+}(w),w):~ w_1<w<w_2\}=\Gamma_3\cap T_h(\partial Z) \end{equation} \n Shrinking the interval $(w_1,w_2)$ if necessary, we can assume, by Lemma 2.1, that there exists a $C^1-$function $\sigma$ such that one of the following situations holds \begin{eqnarray*} i)&\quad \sigma(X_1 (\alpha_{+}(w),w))= X_2(\alpha_{+}(w),w)\quad \forall w\in (w_1,w_2) , \\ ii)&\quad \sigma(X_2(\alpha_{+}(w),w))= X_1 ( \alpha_{+}(w),w)\quad \forall w\in (w_1,w_2). \end{eqnarray*} Assume for example that $i)$ holds. The case $ii)$ can be treated in the same way.
Since $x_1\mapsto (x_1,\sigma(x_1))$ is a $C^1-$parametrization of $\Gamma_3\cap\partial (T_h (Z))$, the integral in the right hand side of (2.1) can be written as \begin{eqnarray}\label{2.4} &&\int_{\Gamma_3\cap T_h(\partial Z)}\beta(x,\varphi-u)\,\xi \circ T_h^{-1}\, d\sigma(x)\nonumber\\ &&=\int_{\pi_{x_1}(\Gamma_3\cap\partial (T_h(Z)))}\beta((x_1,\sigma (x_1)), \varphi (x_1,\sigma(x_1)))\,\xi \circ T_h^{-1}(x_1,\sigma (x_1))\sqrt{1\!+\!\sigma^{\prime 2 }(x_1) }dx_1\nonumber\\ \end{eqnarray} \n Now observe that $(x_1,\sigma(x_1))= T_h (\alpha_{+}(w),w)$ for $w\in (w_1,w_2)$, and let $\theta(w) = x_1= X_1( \alpha_{+}(w),w)$. Then $\theta$ is a $C^1-$function and $\theta'(w) = \alpha_{+}'(w) H_1(X(\alpha_{+}(w),w)) +{\partial X_1\over \partial w}(\alpha_{+}(w),w) $. Using Theorem 1.1 and arguing as in \cite{[ChaL1]}, we can show via implicit differentiation that $$\alpha_{+}'(w)={\sigma' (X_{1}(\alpha_{+}(w),w)){\partial X_1 /\partial w}(\alpha_{+}(w),w) - {\partial X_2/\partial w}(\alpha_{+}(w),w)\over H_2( X(\alpha_{+}(w),w)) - \sigma'(X_{1}(\alpha_{+}(w),w))H_1(X(\alpha_{+}(w),w)) }$$ which leads to \begin{eqnarray*} &&\theta'(w) = { -Y_h(\alpha_{+}(w),w)\over H_2( X(\alpha_{+}(w),w))-\sigma'(X_{1}(\alpha_{+}(w),w))H_1(X(\alpha_{+}(w),w)) }\\ && = { |Y_h|(\alpha_{+}(w),w)(1+\sigma^{\prime 2 }(x_1))^{-1/2}\over H(X(\alpha_{+}(w),w)).\nu (X(\alpha_{+}(w),w)) }\end{eqnarray*} where $\nu(x)={( -\sigma'(x_1),1)\over \sqrt{1+\sigma^{\prime 2 }(x_1)}}$ is the outward unit normal to $\Gamma_3$.
\n Lastly we apply the change of variable $\theta$ to (2.4) to show that \begin{eqnarray}\label{2.5} &&\int_{\Gamma_3\cap T_h(\partial Z)}\beta(x,\varphi-u)\,\xi \circ T_h^{-1}\, d\sigma(x)\nonumber\\ &&=\int_{w_1}^{w_2} {\beta(T_h(\alpha_{+}(w),w), \varphi (T_h(\alpha_{+}(w),w))) |Y_h|(\alpha_{+}(w),w)\over H(T_h(\alpha_{+}(w),w)).\nu(T_h(\alpha_{+}(w),w)) }\xi(\alpha_{+}(w),w) dw\nonumber\\ &&=\int_{w_1}^{w_2} {{\beta(T_h(\alpha_{+}(w),w), \varphi (T_h(\alpha_{+}(w),w)))|Y_h|(\alpha_{+}(w),w)}\over \sqrt{1+\alpha_{+}^{\prime 2}(w)}H(T_h(\alpha_{+}(w),w)).\nu(T_h(\alpha_{+}(w),w)) } \xi(\alpha_{+}(w),w) d\sigma(w)\nonumber\\ &&=\int_{\widetilde{\Gamma}_3} \lambda((\alpha_{+}(w),w),\widetilde{\varphi}-\widetilde{u})\xi d\sigma(w) \end{eqnarray} \n Combining (2.1), (2.2) and (2.5), the result follows. \qed \vs0,3cm\n\emph{Proof of Theorem 2.1.} We first observe that $\widetilde{u}=0$ in $C_r$ and statement 1) can be established as in \cite{[ChaL3]}. \vs0,2cm\n Next we assume that $\overline{T_{h} (Z_0)} \cap \Gamma_{2} = \emptyset$. \vs0,2cm\n From Lemma 2.2 and Proposition 2.4 of \cite{[S]}, we obtain for all $(t,w)$ in $C_r$: \begin{eqnarray*} \widetilde{\chi}(t,w)&=&\frac {\lambda((\alpha_{+}(w),w), \widetilde{\varphi}(\alpha_{+}(w),w))} {\texttt{h}(t,w)\nu_2(\alpha_{+}(w),w)}\\ &=&\frac {{{|Y_h|(\alpha_{+}(w),w)}\over \sqrt{1+\alpha_{+}^{\prime 2}(w)}H(T_h(\alpha_{+}(w),w)).\nu(T_h(\alpha_{+}(w),w)) }\beta(X(\alpha_{+}(w),w),\varphi(X(\alpha_{+}(w),w)))} {|Y_h(t,w)|\,\nu_2(\alpha_{+}(w),w)}\\ &=&{{|Y_h|(\alpha_{+}(w),w)}\over{|Y_h(t,w)|}}\,{{\beta(.,\varphi)}\over{H.\nu}}(X(\alpha_{+}(w),w)) \end{eqnarray*} \n Thus the result follows.
\qed \vs 0,5cm \section{Continuity of the Free Boundary}\label{3} \vs 0,5cm\n In this section, we assume that: \begin{eqnarray}\label{e3.1-3.4} && H\in C^{1,1}_{loc}( \Omega)\\ &&a\in C_{loc}^{0,\alpha}(\Omega\cup\Gamma_3),\quad \alpha\in(0,1)\\ && \exists c_0\in\mathbb{R}\quad/ \quad \forall y \in \Omega \quad :\qquad \text{div}(a(x)(x-y)) \leq c_0 \quad \mbox{ in } \mathcal{D}^{\prime} (\Omega)\\ &&\Gamma_3 \text{ is } C_{loc}^{1,\alpha} \end{eqnarray} \vs0.2cm\n Here is the main result of this paper: \begin{theorem}\label{t3.1} Let $h\in\pi_{x_2}(\Omega)$ and let $w_0\in \pi_{x_1}(\Omega\cap\{x_2=h\})$ such that $(\Phi_h(w_0),w_0)\in D_h$, $T_h(\alpha_+(w_0),w_0)\in \Gamma_3$ and \begin{equation}\label{e3.5} |Y_h|(\alpha_+(w_0),w_0)\left[\frac{\beta(.,\varphi)}{H.\nu}\right](X(\alpha_+(w_0),w_0))<|Y_h|(\Phi_h(w_0),w_0) \end{equation} Then $\Phi_h$ is continuous at $w_0$. \end{theorem} \vs0,3cm\n\emph{Proof.} Let $h$ and $w_0$ be as in the theorem. Since $T_h(\alpha_+(w),w)$ is continuous at $w_0$ and $\Gamma_3$ is relatively open in $\partial\Omega$, there exist $w_1<w_0$ and $w_2>w_0$ such that \[T_h(\alpha_+(w),w)\in \Gamma_3\quad\text{for all } w\in(w_1,w_2)\] \n From Lemma 2.2, we know that $(\widetilde{u},\widetilde{\chi})$ is a solution on the domain \[Z=\{(t,w):~w_1<w<w_2 ~\text{ and }~ h< t < \alpha_{+}(w)\}\] of a problem similar to $(P_0)$. Therefore it is enough to check that the assumptions of Theorem 4.1 of \cite{[S]} are satisfied. \n First, we deduce from Proposition 1.2 that the function $\texttt{h}$ satisfies $$\left\{\begin{array}{ll} 0< \underline{h}\leq \texttt{h}(t,w) \leq C \bar{h} & \hbox{for a.e. }(t,w)\in D_h \\ 0\leq \texttt{h}_t (t,w) \leq C \bar{h} & \hbox{for a.e. }(t,w)\in D_h .\\ \end{array}\right.$$ \n Next, since $H\in C^{1,1}_{loc}( \Omega)$, it is easy to see that $\texttt{a}\in C^{0,1}( D_h )$.
Then, arguing as in \cite{[ChaL3]}, we can show that we have, for some positive constants $c_0, C_0$, \begin{eqnarray*} &&|\texttt{a}(t,w)| \leq C_0\\ &&\texttt{a}(t,w)\xi.\xi\geq c_0 |Y_h|\,|\xi|^2 \geq c_0|\xi|^2 \qquad \forall(t,w)\in D_h~~\forall \xi \in \mathbb{R}^2 \end{eqnarray*} \vs0,3cm\n Moreover, since we have on $\widetilde{\Gamma_3}$ \begin{eqnarray*} \lambda(.,\widetilde{\varphi})-\texttt{h}\nu_2&=&{{|Y_h|(\alpha_{+}(w),w)}\over {\sqrt{1+\alpha_{+}^{\prime 2}(w)}}}\,{{\beta(., \varphi)(T_h(\alpha_{+}(w),w)) }\over {H.\nu(T_h(\alpha_{+}(w),w))}}-|Y_h|(\alpha_{+}(w),w)\nu_2\\ &=& |Y_h|(\alpha_{+}(w),w)\Big[{{\beta(., \varphi)}\over {H.\nu}}-1\Big](T_h(\alpha_{+}(w),w))\nu_2 \end{eqnarray*} \n this function is continuous on $\widetilde{\Gamma_3}$. \n Finally, arguing as in the proof of Theorem 2.1 and using (3.5), we can show that \begin{eqnarray*} {{\lambda((\alpha_+(w_0),w_0),\widetilde{\varphi}(\alpha_+(w_0),w_0))} \over{\texttt{h}(\Phi_h(w_0),w_0)\nu_2(\alpha_+(w_0),w_0)}} &=& {{|Y_h|(\alpha_{+}(w_0),w_0)\,\beta(., \varphi)(T_h(\alpha_{+}(w_0),w_0))}\over {|Y_h|(\Phi_h(w_0),w_0)\,H.\nu(T_h(\alpha_{+}(w_0),w_0))}}<1 \end{eqnarray*} \n We conclude that the function $\Phi_h$ is continuous at $w_0$. \qed
\section{Introduction} Different versions of the Burnside Problem ask what one can say about finitely generated periodic groups under additional assumptions. Kurosh-type problems ask similar questions about properties of finitely generated nil (more generally, algebraic) associative algebras. In the case of finitely generated Lie algebras, the periodicity is replaced by the condition that the adjoint mapping is nil. In particular, for Lie $p$-algebras one assumes that the $p$-mapping is nil. One of the recent important directions in these areas is to study the growth of finitely generated (periodic) groups and (nil) algebras~\cite{ErshlerZheng20,BellZel19}. The goal of this paper is to construct finitely generated nil restricted Lie algebras with extremely slow quasi-linear growth; these algebras are needed in further research~\cite{Pe20flies}. The main results are formulated in Section~\ref{Smain}, see Theorem~\ref{Tparam} and Theorem~\ref{Tparam2}. \subsection{Kurosh problem, Golod-Shafarevich algebras and groups} The General Burnside Problem asks whether a finitely generated periodic group is finite. The first negative answer was given by Golod and Shafarevich: they proved that there exist finitely generated infinite $p$-groups for each prime $p$~\cite{Golod64}. As an important instrument, they first constructed finitely generated infinite dimensional associative nil-algebras~\cite{Golod64}. Using this construction, there are also examples of infinite dimensional 3-generated Lie algebras $L$ such that $(\ad x)^{n(x,y)}(y)=0$ for all $x,y\in L$, the field being arbitrary~\cite{Golod69}. Similarly, one easily obtains infinite dimensional finitely generated restricted Lie algebras $L$ with a nil $p$-mapping. This gives a negative answer to the question of Jacobson of whether a finitely generated restricted Lie algebra $L$ is finite dimensional provided that each element $x\in L$ is algebraic, i.e. satisfies some $p$-polynomial $f_{p,x}(x)=0$ (\cite[Ch.~5, ex.~17]{JacLie}).
It is known that the construction of Golod yields associative nil-algebras of exponential growth. Using specially chosen relations, Lenagan and Smoktunowicz constructed associative nil-algebras of polynomial growth~\cite{LenSmo07}; there are more constructions, including associative nil-algebras of intermediate growth~\cite{BellYoung11,LenSmoYoung12,Smo14}. On further developments concerning Golod-Shafarevich algebras and groups see~\cite{Voden09,Ershov12}. A construction close in spirit but different was motivated by respective group-theoretic results. A restricted Lie algebra $G$ is called {\it large} if there is a subalgebra $H\subset G$ of finite codimension such that $H$ admits a surjective homomorphism onto a nonabelian free restricted Lie algebra. Let $K$ be a perfect, at most countable, field of positive characteristic. Then there exist infinite dimensional finitely generated nil restricted Lie algebras over $K$ that are residually finite dimensional and direct limits of large restricted Lie algebras~\cite{BaOl07}. \subsection{Grigorchuk and Gupta-Sidki groups} The construction of Golod is rather indirect; Grigorchuk gave a direct and elegant construction of an infinite 2-group generated by three elements of order 2~\cite{Grigorchuk80}. Originally, this group was defined as a group of transformations of the interval $[0,1]$ from which the rational points of the form $\{k/2^n\mid 0\le k\le 2^n,\ n\ge 0\}$ are removed. For each prime $p\ge 3$, Gupta and Sidki gave a direct construction of an infinite $p$-group on two generators, each of order $p$~\cite{GuptaSidki83}. This group was constructed as a subgroup of the automorphism group of an infinite regular tree of degree $p$. The Grigorchuk and Gupta-Sidki groups are counterexamples to the General Burnside Problem. Moreover, they gave answers to important problems in group theory.
Thus, the Grigorchuk group and its further generalizations were the first examples of groups of intermediate growth~\cite{Grigorchuk84}, refuting a conjecture of Milnor that groups of intermediate growth do not exist. The construction of Gupta-Sidki also yields groups of subexponential growth~\cite{FabGup85}. The Grigorchuk and Gupta-Sidki groups are {\it self-similar}. Now self-similar and so-called {\it branch} groups form a well-established area in group theory~\cite{Grigorchuk00horizons,Nekr05}. \subsection{Fibonacci Lie algebra, nil (restricted) Lie (super)algebras} There are also constructions of self-similar associative algebras~\cite{Bartholdi06,Sidki09,PeSh13ass}. Despite some efforts~\cite{Sidki09,PeSh13ass}, in the case of associative algebras an appropriate analogue of the Grigorchuk and Gupta-Sidki groups is not yet known. But in the case of restricted Lie algebras, we have natural analogues. \begin{Example} ({\it Fibonacci restricted Lie algebra}~\cite{Pe06}). Let $\ch K=p=2$ and $R=K[t_i| i\ge 0 ]/(t_i^p| i\ge 0)$, a truncated polynomial ring. Put $\dd_i=\frac {\dd}{\partial t_i}$, $i\ge 0$. Define two derivations of $R$: \begin{align*} v_1 & =\dd_1+t_0(\dd_2+t_1(\dd_3+t_2(\dd_4+t_3(\dd_5+t_4(\dd_6+\cdots )))));\\ v_2 & =\qquad\quad\;\, \dd_2+t_1(\dd_3+t_2(\dd_4+t_3(\dd_5+t_4(\dd_6+\cdots )))). \end{align*} Consider the restricted Lie algebra generated by them, $\LL=\Lie_p(v_1,v_2)\subset\Der R$, and the associative algebra $\AA=\Alg(v_1,v_2)\subset \End R$. \end{Example} The Fibonacci restricted Lie algebra has slow polynomial growth with Gelfand-Kirillov dimension $\GKdim \LL=\log_{(\sqrt 5+1)/2} 2\approx 1.44$~\cite{Pe06}. Further properties of the Fibonacci restricted Lie algebra are studied in~\cite{PeSh09,PeSh13fib}. For background and some results on Lie algebras of differential operators in infinitely many variables see~\cite{Razmyslov,Rad86,PeRaSh,FutKochSis}.
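The self-similar nature of these derivations can be seen in a one-line computation (ours, following the definitions above). Denote by $v_3=\dd_3+t_2(\dd_4+t_3(\dd_5+\cdots))$, $v_4=\dd_4+t_3(\dd_5+\cdots)$, etc., the subsequent elements of the recursion, so that $v_i=\dd_i+t_{i-1}v_{i+1}$ for $i\ge 1$. Then
\begin{equation*}
[v_1,v_2]=[\dd_1+t_0v_2,\ \dd_2+t_1v_3]=[\dd_1,\,t_1v_3]=\dd_1(t_1)\,v_3=v_3,
\end{equation*}
since $\dd_1$ commutes with $v_3$, while $v_2(t_0)=0$ and $[v_2,v_2]=0$. Similarly $[v_i,v_{i+1}]=v_{i+2}$ for all $i\ge 1$, so each next element of the recursion is recovered as a bracket of the two preceding ones; this self-reproduction underlies the Fibonacci-type combinatorics behind the growth function.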
Probably, the most interesting property of $\LL$ is that it has a nil $p$-mapping~\cite{Pe06}, which is an analogue of the periodicity of the Grigorchuk and Gupta-Sidki groups. We still do not know whether the associative hull $\AA$ is a nil-algebra. We have a weaker statement: the algebras $\LL$, $\AA$, and the augmentation ideal of the restricted enveloping algebra $\uu=\omega u(\LL)$ are direct sums of two locally nilpotent subalgebras~\cite{PeSh09}. The next step was made by Shestakov and Zelmanov: for an arbitrary prime characteristic, they constructed an example of a 2-generated restricted Lie algebra with a nil $p$-mapping~\cite{ShZe08}. An example of a $p$-generated nil restricted Lie algebra $L$, the characteristic $p$ being arbitrary, was studied in~\cite{PeShZe10}. These infinite dimensional restricted Lie algebras, as well as their restricted enveloping algebras, have various decompositions into a direct sum of two locally nilpotent subalgebras~\cite{PeShZe10}. Observe that only the original example has a clear monomial basis~\cite{Pe06,PeSh09}. In other examples, elements of a Lie algebra are linear combinations of monomials; working with such linear combinations is sometimes an essential technical difficulty, see e.g.~\cite{ShZe08,PeShZe10,Pe20flies}. A systematic approach to constructing (restricted) Lie (super)algebras having {\it good monomial bases} was developed, starting with the second Lie superalgebra introduced in~\cite{Pe16}. \begin{Example}\label{Example_Q}(second example $\QQ$ in \cite{Pe16}) Consider the Grassmann superalgebra $\Lambda=\Lambda[x_i,y_i,z_i| i\ge 0]$, the field being arbitrary. Using its partial superderivations, define recursively odd elements in the associative superalgebra $\End(\Lambda)$: \begin{equation*} \begin{split} a_i &= \partial_{x_i} + y_ix_i a_{i+1},\\ b_i &= \partial_{y_i} + z_iy_i b_{i+1},\\ c_i &= \partial_{z_i} + x_iz_i c_{i+1}, \end{split} \qquad i\ge 0.
\end{equation*} Define the Lie superalgebra $\QQ:=\Lie(a_0,b_0,c_0)\subset \Der\Lambda$. \end{Example} Using a similar approach, a family of nil restricted Lie algebras of slow polynomial growth having good monomial bases was constructed in~\cite{Pe17} (Example~\ref{E2}; actually, these algebras are closer to the first example in~\cite{Pe16}). Informally speaking, there are no ``natural analogues'' of the Grigorchuk group in the world of Lie algebras of characteristic zero~\cite{MaZe99}. On the other hand, we show that Example~\ref{Example_Q} serves as an appropriate analogue of the Grigorchuk group in the class of Lie {\it super}algebras over an arbitrary field, because it is nil finely $\Z^3$-graded, see details in~\cite{Pe16}. Next, we construct a more ``handy'' 2-generated fractal Lie superalgebra $\mathbf{R}$ over an arbitrary field~\cite{PeOtto}. This example is close to the smallest possible one, because $\mathbf{R}$ has linear growth with growth function $\gamma_\mathbf{R}(m)\approx 3m$, as $m\to\infty$. Moreover, its degree $\mathbb{N}$-grading is of finite width 4 ($\ch K\ne 2$). We also construct a just infinite fractal 3-generated Lie superalgebra ${\mathbf Q}$ over an arbitrary field, which gives rise to an associative hull, a Poisson superalgebra, and two Jordan superalgebras, supplying analogues of the Grigorchuk and Gupta-Sidki groups in the respective classes of algebras~\cite{PeSh18FracPJ}. \subsection{Narrow groups and Lie algebras} The Grigorchuk group $G$ is of finite width, namely, the ranks of its lower central series factors are uniformly bounded~\cite{Rozh96,BaGr00,Grigorchuk00horizons}. In particular, the respective Lie algebra $L=L_K(G)=\oplus_{i\ge 1} L_i$ has linear growth. Bartholdi presented $L_{K}(G)$ as a self-similar restricted Lie algebra and proved that the restricted Lie algebra $L_{\F_2}(G)$ is nil while $L_{\F_4}(G)$ is not nil~\cite{Bartholdi15}.
Also, $L_K(G)$ is {\it nil graded}, namely, for any homogeneous element $x\in L_i$, $i\ge 1$, the mapping $\ad x$ is nilpotent, because the group $G$ is periodic. Naturally $\N$-graded Lie algebras over $\R$ and $\C$ satisfying the condition $\dim L_n+\dim L_{n+1}\le 3$, $n\ge 1$, were recently classified by Millionschikov~\cite{Mil20}. Slowly growing so-called filiform Lie algebras in characteristic zero are studied in~\cite{CarMatNew97,CarNew00}. Concerning narrow Lie algebras and groups see the survey~\cite{ShaZel99}. \section{Basic notions: restricted Lie algebras and growth}\label{Sdef} As a rule, $K$ is an arbitrary field of positive characteristic $p$, and $\langle S\rangle_K$ denotes the linear span of a subset $S$ in a $K$-vector space. If $L$ is a Lie algebra, then $U(L)$ denotes its universal enveloping algebra. Long commutators are {\it right-normed}: $[x,y,z]:=[x,[y,z]]$. We use the standard notation $\ad x(y)=[x,y]$, where $x,y\in L$. Also, we use the notation $[x^k,y]:=(\ad x)^k (y)$, where $k\ge 1$, $x,y\in L$; in case $k=p^l$, we also have $[x^{p^l},y]=[x^{[p^l]},y]$, in terms of the $p$-mapping (see below). \subsection{Restricted Lie algebras} Let $L$ be a Lie algebra over a field $K$ of characteristic $p>0$. Then $L$ is called a \textit{restricted Lie algebra} (or \textit{Lie $p$-algebra}) if it is additionally supplied with a unary operation $x\mapsto x^{[p]}$, $x\in L$, that satisfies the following axioms~\cite{JacLie,Ba,Strade1,StrFar,BMPZ}: \begin{itemize} \item $(\lambda x)^{[p]}=\lambda^px^{[p]}$, for $\lambda\in K$, $x\in L$; \item $\ad(x^{[p]})=(\ad x)^p$, $x\in L$; \item $(x+y)^{[p]}=x^{[p]}+y^{[p]}+\sum_{i=1}^{p-1}s_i(x,y)$, for all $x,y\in L$, where $i s_i(x,y)$ is the coefficient of $t^{i-1}$ in the polynomial $\operatorname{ad}(tx+y)^{p-1}(x)\in L[t]$. \end{itemize} This notion is motivated by the following construction. Let $A$ be an associative algebra over a field~$K$.
If the vector space $A$ is supplied with a new product $[x,y]=xy-yx$, $x,y\in A$, one obtains a Lie algebra denoted by $A^{(-)}$. In case $\operatorname{char}K=p>0$, the mapping $x\mapsto x^p$, $x\in A^{(-)}$, satisfies the three axioms above. Suppose that $L$ is a restricted Lie algebra. Let $J$ be the ideal of the universal enveloping algebra~$U(L)$ generated by $\{x^{[p]}-x^p\mid x\in L\}$. Then $u(L)=U(L)/J$ is called a \textit{restricted enveloping algebra}. In this algebra, the formal operation $x^{[p]}$ coincides with the $p$th power~$x^p$ for any $x\in L$. One has an analogue of the Poincar\'e-Birkhoff-Witt theorem yielding a basis of the restricted enveloping algebra~\cite[p.~213]{JacLie}. We shall use the following version of the formula above: \begin{equation}\label{power_P} (x+y)^{[p]}=x^{[p]}+y^{[p]}+(\ad x)^{p-1}(y)+ \sum_{i=1}^{p-2}s_i(x,y),\qquad x,y\in L, \end{equation} where $s_i(x,y)$ consists of commutators containing $i$ letters $x$ and $p-i$ letters $y$. \subsection{Growth} Let $A$ be an associative (or Lie) algebra generated by a finite set $X$. Denote by $A^{(X,n)}$ the subspace of $A$ spanned by all monomials in $X$ of length not exceeding $n$, $n\ge 0$. If $A$ is a restricted Lie algebra, we define $A^{(X,n)}=\langle\, [x_{i_1},\dots,x_{i_s}]^{p^k}\mid x_{i_j}\in X,\, sp^k\le n\rangle_K$~\cite{Pape01}. One obtains the {\em growth function}: $$ \gamma_A(n)=\gamma_A(X,n):=\dim_KA^{(X,n)},\quad n\ge 0. $$ Clearly, the growth function depends on the choice of the generating set $X$. Let $f,g:\N\to\R^+$ be increasing functions. Write $f(n)\preccurlyeq g(n)$ if and only if there exist positive integers $N,C$ such that $f(n)\le g(Cn)$ for all $n\ge N$. Introduce the equivalence $f(n)\sim g(n)$ if and only if $f(n)\preccurlyeq g(n)$ and $g(n)\preccurlyeq f(n)$. Different generating sets of an algebra yield equivalent growth functions~\cite{KraLen}.
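As a basic illustration (standard, not specific to this paper): for the free associative algebra $A=K\langle x,y\rangle$ with $X=\{x,y\}$ there are $2^m$ monomials of length $m$, whence
\begin{equation*}
\gamma_A(n)=\sum_{m=1}^{n}2^m=2^{n+1}-2,\qquad n\ge 1
\end{equation*}
(not counting the empty monomial), while for the polynomial algebra $K[x]$ one has $\gamma_{K[x]}(n)=n$. Note also how the equivalence $\sim$ absorbs the base of an exponential: $3^n\preccurlyeq 2^n$, because $3^n=2^{n\log_2 3}\le 2^{2n}$, so all functions $a^n$ with $a>1$ are equivalent; by contrast, $n^2\not\sim n^3$, since $(Cn)^2<n^3$ for all $n>C^2$.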
The growth of free associative and free (restricted) Lie (super)algebras is exponential~\cite{Ba,BMPZ,KraLen,Pe03}. Moreover, any finitely generated linear algebra (i.e. only the existence of a bilinear product is assumed) has at most exponential growth, because it is a homomorphic image of the {\it absolutely free algebra} with a finite number of generators, whose exponential growth is well known, see e.g.~\cite{Pe05}. To describe the growth of a finitely generated algebra $A$ one defines its {\it exponent} (which depends on the generating set!): $$\EXP(A,X):=\limsup_{n\to \infty} \sqrt[n]{\gamma_A(X,n)},$$ where in the case of an associative algebra $A$ the actual limit exists~\cite{KraLen}. If $\EXP(A)>1$, then $A$ is said to be of {\it exponential growth}. Otherwise, $\EXP A=1$, and $A$ is said to be of {\it subexponential growth}. If there exists a constant $\a>0$ such that $\gamma_A(n)\preccurlyeq n^\a$, then $A$ has {\it polynomial growth}. Subexponential growth that is not polynomial is called {\it intermediate}. These types of growth do not depend on the generating set. A growth function $\gamma_A(n)$ is compared with the polynomial functions $n^\a$, $\a\in\R^+$, by computing the {\em upper and lower Gelfand-Kirillov dimensions}~\cite{KraLen}: \begin{align*} \GKdim A&:=\limsup_{n\to\infty} \frac{\ln\gamma_A(n)}{\ln n}=\inf\{\a>0\mid \gamma_A(n)\preccurlyeq n^\a\} ;\\ \LGKdim A&:=\liminf_{n\to\infty}\, \frac{\ln\gamma_A(n)}{\ln n}=\sup\{\a>0\mid \gamma_A(n)\succcurlyeq n^\a\}. \end{align*} Solvable finitely generated Lie algebras have subexponential growth, see~\cite{Licht84}. The author constructed a scale to measure the growth of such algebras~\cite{Pe96,Pe99int}. \subsection{Quasi-linear growth, its stratification} Now, assume that $A$ is an associative or Lie algebra generated by a finite set $X$. Consider the non-decreasing sequence of subspaces $\{A^{(X,n)}\mid n\ge 0\}$. There are two cases. 1) There exists $n_0\in \N$ such that $A^{(X,n_0)}=A^{(X,n_0+1)}$.
Using that all Lie monomials are expressed via the right-normed ones, we get $A^{(X,m)}=A^{(X,n_0)}$ for all $m\ge n_0$. Thus, $A$ is finite dimensional and $\GKdim A=0$. 2) All subsequent terms are different, hence their dimensions are strictly increasing. By induction, we get the lower bound $\gamma_A(X,n)=\dim A^{(X,n)}\ge n+1$ and $\GKdim A\ge 1$. Thus, one has a trivial gap: $\GKdim A\notin (0,1)$. Also, if $A$ is infinite dimensional, then the growth function is bounded from below by the linear function $n+1$. The analogue of this gap for restricted Lie algebras is studied in~\cite{Pape01}. In this paper, we construct algebras whose growth is rather close to this lowest possible linear growth function. If an algebra $A$ satisfies $\GKdim A=\LGKdim A=1$, we say that $A$ has {\it quasi-linear growth}. Quasi-linear growths are not distinguishable from the viewpoint of the Gelfand-Kirillov dimension because they merge into ``one point''. In order to blow up this point, we compare a quasi-linear growth function with two families of etalon functions. Denote $\ln^{(q)}(x):=\underbrace{\ln(\cdots\ln}_{q\text{ times}}(x)\cdots)$ and $\exp^{(q)}(x):=\underbrace{\exp(\cdots\exp}_{q\text{ times}}(x)\cdots)$ for all $q\in\N$. Consider the first family of quasi-linear functions: $m\exp \big((\ln m)^{\beta}\big)$, $\beta\in(0,1)$ being a constant. The second family of quasi-linear functions is $m (\ln^{(q)} m)^{\beta}$, where $q\in \N$, $\beta\in \R^+$ are constants. Now, we compare a growth function with these etalon functions, determining their parameters $q,\beta$.
Formally, we set: \begin{align*} \Ldim^0 A=& \inf\{\beta\in(0,1) \mid \gamma_A(m) \preccurlyeq m \exp \big((\ln m)^{\beta}\big)\}, \quad (q=0);\\ \LLdim^0 A=& \sup\{\beta\in(0,1) \mid \gamma_A(m) \succcurlyeq m \exp \big((\ln m)^{\beta}\big)\}, \quad (q=0);\\ \Ldim^q A=& \inf\{\beta\in\R^+ \mid \gamma_A(m) \preccurlyeq m (\ln^{(q)} m)^{\beta}\},\qquad\qquad\ q\in\N;\\ \LLdim^q A=& \sup\{\beta\in\R^+ \mid \gamma_A(m) \succcurlyeq m (\ln^{(q)} m)^{\beta}\},\qquad\qquad\ q\in\N; \end{align*} where the last two numbers are defined for any fixed $q\in \N$, the latter specifying the number of iterations of the logarithm in the right-hand side functions. One checks that these numbers are invariants not depending on the generating set. We refer to $q\ge 0$ as the {\it level} of the functions above. Note that this notation differs from that of~\cite{Pe17}. Define the {\it extreme values}: in case $q=0$, $\Ldim^0 A=1$ or $\LLdim^0 A=0$; in case $q\ge 1$, $\Ldim^q A=+\infty$ or $\LLdim^q A=0$. In these cases, the etalon functions of level $q$ are not suited to specify the quasi-linear growth of an algebra. Observe that the functions with larger $q$ are slower. One checks that the functions of different levels stratify the merged point of all quasi-linear growths as follows. \begin{Lemma} Assume that for some $q\ge 1$ one has $\Ldim^q A=\LLdim^q A=\beta$, where $\beta\ne 0$ and $\beta\ne +\infty$. Then $\Ldim^{q+1} A=+\infty$ and $\LLdim^{q-1} A=0$. \end{Lemma} Assume that the generators $X=\{x_1,\dots,x_k\}$ are assigned positive weights $\wt(x_i)=\lambda_i$, $i=1,\dots,k$. Define a {\it weight growth function}: $$ \tilde \gamma_A(n)=\dim_K\langle x_{i_1}\cdots x_{i_m}\mid \wt(x_{i_1})+\cdots+\wt(x_{i_m})\le n,\ x_{i_j}\in X\rangle_K,\quad n\ge 0. $$ (In the case of (restricted) Lie algebras one considers (restricted) Lie monomials, as above.)
Set $C_1=\min\{\lambda_i\mid i=1,\dots,k \}$, $C_2=\max\{\lambda_i\mid i=1,\dots,k \}$; then $\tilde\gamma_A(C_1 n) \le \gamma_A(n)\le \tilde\gamma_A(C_2 n)$ for $n\ge 1$. Thus, we obtain an equivalent growth function $\tilde \gamma_A(n)\sim\gamma_A(n)$. Therefore, we can use the weight growth function $\tilde\gamma_A(n)$ in order to compute the Gelfand-Kirillov dimensions, as well as $\Ldim^q A$ and $\LLdim^q A$. Suppose that $L$ is a Lie algebra and $X\subset L$. Denote by $\Lie(X)$ the subalgebra of $L$ generated by $X$. In case $L$ is a restricted Lie algebra, $\Lie_p(X)$ denotes the restricted subalgebra of $L$ generated by $X$. Similarly, assume that $X$ is a subset of an associative algebra $A$. Write $\Alg(X)\subset A$ to denote the associative subalgebra (without unit) generated by~$X$. \subsection{Divided power algebra and its derivations} Fix $\ch K=p>0$. Let $\Theta$ be an arbitrary non-empty set. Fix a tuple of integers $\bar S=\{S_a\in \N | a\in \Theta \}$. We consider a {\it divided power algebra} $R=R(\Theta,\bar S)$ whose $K$-basis consists of the formal symbols $$\bigg\{ \prod_{a\in \Theta}t_a^{(i_a)} \ \bigg|\ 0\le i_a< p^{S_a}, \ a\in \Theta\bigg\}, $$ where only finitely many formal powers $i_a$ are non-zero. Define the product of these elements as $$\bigg(\prod_{a\in \Theta}t_a^{(i_a)}\bigg )\cdot \bigg (\prod_{a\in \Theta}t_a^{(j_a)}\bigg)= \prod_{a\in \Theta}\binom{i_a+j_a}{i_a} t_a^{(i_a+j_a)}. $$ The product is well defined, and $R=R(\Theta,\bar S)$ is an associative commutative ring with unit, which is isomorphic to a ring of truncated polynomials~\cite{Strade1}. Fix $a\in \Theta$. Define an operator $\partial_{a}$ on the whole of $R$ that acts on the respective divided variable only: $\partial_{a}(t_a^{(i_a)}):=t_a^{(i_a-1)}$, $i_a\in\{0,\dots,p^{S_a}-1\}$, where $t_a^{(0)}=1$ and $t_a^{(l)}=0$ for $l<0$. We obtain derivations $\partial_{a}\in \Der R$, $a\in \Theta$.
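To make the multiplication rule concrete, here is a worked example of ours: take $p=2$ and a single variable $t=t_a$ with $S_a=2$, so the basis is $1,\ t^{(1)},\ t^{(2)},\ t^{(3)}$. Then
\begin{align*}
t^{(1)}t^{(1)}&=\binom{2}{1}t^{(2)}=2\,t^{(2)}=0, & t^{(1)}t^{(2)}&=\binom{3}{1}t^{(3)}=3\,t^{(3)}=t^{(3)},\\
t^{(2)}t^{(2)}&=\binom{4}{2}t^{(4)}=6\,t^{(4)}=0, & \partial_{a}\big(t^{(3)}\big)&=t^{(2)},\qquad \partial_{a}^{2}\big(t^{(3)}\big)=t^{(1)}.
\end{align*}
By Kummer's theorem, $\binom{i_a+j_a}{i_a}\equiv 0 \pmod p$ exactly when adding $i_a$ and $j_a$ in base $p$ produces a carry; in particular, the coefficient vanishes whenever $i_a+j_a\ge p^{S_a}$, so the product never leads outside the prescribed basis.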
Their $p^m$-th powers are also derivations, and $\partial_{a}^{p^m}(t_a^{(i_a)})=t_a^{(i_a-p^m)}$, $m\ge 0.$ Clearly, $\partial_{a}^{p^{m}}=0$ for $m\ge S_a$. For more properties of divided power algebras and their derivations, see~\cite{Strade1}. \section{Main results: clover restricted Lie algebras of quasi-linear growth} \label{Smain} \subsection{Clover restricted Lie algebras} Recently, the author introduced a large class of {\it drosophila Lie algebras}~\cite{Pe20flies}, which yields a uniform generalized construction including some examples of (restricted) Lie (super)algebras considered before~\cite{Pe17,Pe16}. In particular, it includes a family of 2-generated restricted Lie algebras studied in~\cite{Pe17}; we now call such algebras {\it duplex Lie algebras}. \begin{Example}[family of restricted Lie algebras $\LL(\Xi)$ in~\cite{Pe17}]\label{E2} Let $\ch K=p>0$. Let $\Theta=\{x_n,y_n|n\ge 0\}$ and consider a tuple of integers $\Xi=(S_n,R_n| n\ge 0)$. As described above, these parameters yield the divided power algebra with the basis $\Omega(\Xi):=\langle x_0^{(\xi_0)}y_0^{(\eta_0)}\!\!\!\cdots x_i^{(\xi_i)}y_i^{(\eta_i)}| 0\le\xi_i<p^{S_i}, 0\le \eta_i<p^{R_i}, i\ge 0\rangle_K$. Define {\sc pivot elements} belonging to $\Der \Omega(\Xi)$ recursively: \begin{equation}\label{aibip} \begin{split} a_i &= \partial_{x_i} + x_i^{(p^{S_i}-1)} y_i^{(p^{R_i}-1)} a_{i+1};\\ b_i &= \partial_{y_i} + x_i^{(p^{S_i}-1)} y_i^{(p^{R_i}-1)} b_{i+1}; \end{split} \qquad\quad i\ge 0. \end{equation} Define the 2-generated restricted Lie algebra $\LL(\Xi):=\Lie_p(a_0,b_0)\subset \Der \Omega(\Xi)$. \end{Example} Now we define the main object of study of the present work. \begin{Definition}\label{Def1} Now, let $\Theta=\{x_n,y_n,z_n|n\ge 0\}$.
Fix the same tuple of integers $\Xi=(S_n,R_n| n\ge 0)$ as above and consider another divided power algebra with the basis $$R=R(\Xi):=\Big\langle x_0^{(\a_0)}y_0^{(\b_0)}z_0^{(\gamma_0)}\!\!\cdots x_i^{(\a_i)}y_i^{(\b_i)}z_i^{(\gamma_i)}\ \Big |\ 0\le\a_i<p^{S_i},\ 0\le \b_i,\gamma_i<p^{R_i},\ i\ge 0\Big\rangle_K. $$ We draw attention to the fact that the pairs of divided variables $y_i$, $z_i$ have the same top indices, determined by $R_i$, for all $i\ge 0$. This trick is important for further computations to be feasible at all. We define recursively the {\sc pivot elements} belonging to $\Der R(\Xi)$: \begin{equation}\label{pivot-3} \begin{split} v_i &=\dd_{x_i}+x_{i}^{(p^{S_{i}}-1)} y_{i}^{(p^{R_{i}}-1)} v_{i+1} ;\\ w_i &=\dd_{y_i}+y_{i}^{(p^{R_{i}}-1)} x_{i}^{(p^{S_{i}}-1)} w_{i+1};\\ u_i &=\dd_{z_i}+z_{i}^{(p^{R_{i}}-1)} x_{i}^{(p^{S_{i}}-1)} u_{i+1}; \end{split}\qquad\qquad i\ge 0. \end{equation} For any $i\ge 0$, we get explicit formulas: \begin{equation}\label{aibi0} \begin{split} v_i &= \partial_{x_i} {+} x_i^{(p^{S_i}-1)}y_i^{(p^{R_i}-1)}\Big(\partial_{x_{i+1}}{+} x_{i+1}^{(p^{S_{i+1}}-1)} y_{i+1}^{(p^{R_{i+1}}-1)} \Big(\partial_{x_{i+2}} {+}x_{i+2}^{(p^{S_{i+2}}-1)} y_{i+2}^{(p^{R_{i+2}}-1)}\Big(\cdots \Big)\Big)\Big),\\ w_i &= \partial_{y_i} {+} y_i^{(p^{R_i}-1)}x_i^{(p^{S_i}-1)}\Big(\partial_{y_{i+1}}{+} y_{i+1}^{(p^{R_{i+1}}-1)}x_{i+1}^{(p^{S_{i+1}}-1)} \Big(\partial_{y_{i+2}} {+}y_{i+2}^{(p^{R_{i+2}}-1)}x_{i+2}^{(p^{S_{i+2}}-1)} \Big(\cdots \Big)\Big)\Big),\\ u_i &= \partial_{z_i} {+} z_i^{(p^{R_i}-1)}x_i^{(p^{S_i}-1)}\Big(\partial_{z_{i+1}}{+} z_{i+1}^{(p^{R_{i+1}}-1)}x_{i+1}^{(p^{S_{i+1}}-1)} \Big(\partial_{z_{i+2}} {+}z_{i+2}^{(p^{R_{i+2}}-1)}x_{i+2}^{(p^{S_{i+2}}-1)} \Big(\cdots \Big)\Big)\Big). \end{split} \end{equation} We call $i$ the {\sc length} (also {\sc generation}, following the terminology of~\cite{Pe20flies}) of the pivot elements above. Now, we define the 3-generated {\sc clover restricted Lie algebra} $\TT(\Xi):=\Lie_p(v_0,w_0,u_0)\subset \Der R(\Xi)$.
\end{Definition} \begin{Remark} Let us draw attention to the fact that there is some symmetry between $v_i$ and $w_i$, while the remaining $u_i$ stays apart: there is {\bf no $\Z_3$-cyclic symmetry}, unlike in the second example of a 3-generated Lie superalgebra in~\cite{Pe16}. Another observation is that $v_i$, $w_i$ are just renamings of $a_i$, $b_i$ in~\eqref{aibip}, for all $i\ge 0$. \end{Remark} \begin{Remark} The construction~\eqref{aibip} cannot supply the lower estimate for the Lie algebras of oscillating growth constructed in~\cite{Pe20flies}, and it was necessary to modify that example. This modification is very specific in order to make the computation feasible at all. In the terminology of~\cite{Pe20flies}, species of flies having two flies in some generation either have two flies in all subsequent generations or go extinct. The goal in introducing the clover species is to have three flies in each generation (yielding the respective three pivot elements~\eqref{pivot-3}), so that at some moment three flies can produce a wild species and the constructed Lie algebra can return to a fast intermediate growth. To this end we extend the duplex species in a specific ``skew'' way and obtain the clover species. This idea enables us to construct restricted Lie algebras with oscillating growth in~\cite{Pe20flies} using the two theorems below. \end{Remark} \subsection{Main results} As a specific case, we construct restricted Lie algebras of quasi-linear growth. The main goal of the paper is to prove the following two theorems, which are an important part of the construction of nil restricted Lie algebras of oscillating intermediate growth in~\cite{Pe20flies}; namely, the algebras constructed below are responsible for the periods of quasi-linear growth of those algebras. We stress that it was necessary to change the approach of~\cite{Pe17}, because in the further construction of nil restricted Lie algebras of oscillating growth~\cite{Pe20flies} we need three so-called ``flies'' in each generation.
The asymptotics in~\cite{Pe17} have upper and lower bounds with different constants $C_1$, $C_2$. Now we prove a stronger asymptotic with bounds $C+o(1)$, the constant being the same on both sides. Moreover, the second theorem yields even slower quasi-linear growths. \begin{Theorem}\label{Tparam} Let $K$ be a field, $\ch K=p> 0$, and fix $\kappa\in(0,1)$. There exists a tuple of integers $\Xi_\kappa$ such that the 3-generated clover restricted Lie algebra $\TT=\TT(\Xi_\kappa)=\Lie_p(v_0,w_0,u_0)$ has the following properties. \begin{enumerate} \item $\gamma_{\TT}(m)=m\exp \big((C+o(1))(\ln m)^\kappa\big)$ as $m\to\infty$, where $C:=2(\ln p)^{1-\kappa}/\kappa^\kappa$; \item $\GKdim \TT=\LGKdim \TT= 1$; \item $\Ldim^0 \TT=\LLdim^0 \TT=\kappa$; \item the growth function $\gamma_\TT(m)$ is not linear; \item algebras $\TT(\Xi_\kappa)$ for different parameters $\kappa\in(0,1)$ are not isomorphic. \end{enumerate} \end{Theorem} In comparison with~\cite{Pe17}, algebras with even slower quasi-linear growth are constructed in the next theorem. \begin{Theorem}\label{Tparam2} Let $\ch K=p> 0$, and fix parameters $q\in\N$, $\kappa\in\R^+$. There exists a tuple of integers $\Xi_{q,\kappa}$ such that the 3-generated clover restricted Lie algebra $\TT=\TT(\Xi_{q,\kappa})=\Lie_p(v_0,w_0,u_0)$ has the following properties. \begin{enumerate} \item $\gamma_{\TT}(m)= m \big(\ln^{(q)} \!m\big )^{\kappa+o(1)}$ as $m\to\infty$; \item $\GKdim \TT=\LGKdim \TT= 1$; \item $\Ldim^q \TT=\LLdim^q \TT=\kappa$; \item the growth function $\gamma_\TT(m)$ is not linear; \item algebras $\TT(\Xi_{q,\kappa})$ for different pairs $(q,\kappa)$ are not isomorphic. \end{enumerate} \end{Theorem} \begin{Remark} Similarly to~\cite{Pe17}, we can also consider the associative algebra $\AA=\Alg(v_0,w_0,u_0)\subset\End R(\Xi)$ and describe its growth as $\gamma_{\AA}(m)= m^2 \big(\ln^{(q)} \!m\big )^{\kappa+o(1)}$, as $m\to\infty$.
In particular, $\GKdim \AA=\LGKdim \AA=2$; let us call such growth {\it quasi-quadratic}. \end{Remark} \begin{Theorem}[\cite{Pe20flies}, Theorem~7.7]\label{Tnillity} Fix $\ch K=p> 0$ and a tuple of integers $\Xi=(S_n,R_n| n\ge 0)$. Consider the respective clover restricted Lie algebra $\TT=\TT(\Xi)$. Then $\TT$ has a nil $p$-mapping. \end{Theorem} \begin{proof} The nillity is established in the more general setting of so-called drosophila Lie algebras with uniform parameters in~\cite{Pe20flies}, Theorem~7.7. That proof is an essential modification of the approach in the case of the 2-generated duplex Lie algebras $\LL(\Xi)$~\cite{Pe17}, Theorem~8.6. \end{proof} Let us describe some more results and ideas of the paper. \begin{itemize} \item We describe the structure and construct a clear monomial basis for all clover restricted Lie algebras (Theorem~\ref{Tsemidirect}). \item An important instrument is the notion of a weight function, using which we prove that $\TT(\Xi)=\TT(v_0,w_0,u_0)$ is $\NO^3$-graded by multidegree in the generators (Theorem~\ref{Tgraded}). \item We prove that $1\le \GKdim\TT(\Xi)\le 3$ for any tuple $\Xi$ (Theorem~\ref{Tgrowth3}). \item If the sequence $\Xi=(S_i,R_i|i\ge 0)$ is periodic, then $\TT(\Xi)$ is a self-similar restricted Lie algebra (Lemma~\ref{Lself}), and we compute its Gelfand-Kirillov dimension explicitly (Theorem~\ref{Tperiod}). This result may be viewed as an analogue of the result on the intermediate growth of the Grigorchuk periodic groups $G_\omega$ having a periodic tuple $\omega$~\cite[Theorem~B]{ErshlerZheng20}. \item Let $\Xi$ be {\it constant}: $S_i=S$, $R_i=R$ for $i\ge 0$, where $S,R\in\N$ are fixed. Then denote $\TT(S,R):=\TT(\Xi)$. We prove that $\{\GKdim\TT(S,R)\mid S,R\in\N\}$ is dense in $[1,3]$ (Corollary~\ref{Cinterval}).
\end{itemize} \begin{Remark} We suggest that the clover restricted Lie algebras $\TT(\Xi)$ are Lie algebra analogues of the family of the Grigorchuk groups $G_\omega$ constructed and studied in~\cite{Grigorchuk84}. \end{Remark} \subsection{Nil Lie algebras of slow polynomial growth} The Gelfand-Kirillov dimension of an associative algebra cannot belong to the interval $(1,2)$~\cite[Bergman]{KraLen}. One has the same gap for finitely generated Jordan algebras~\cite[Martinez and Zelmanov]{MaZe96}. The author showed that a similar gap does not exist for Lie algebras: the Gelfand-Kirillov dimension of a finitely generated Lie algebra can be an arbitrary number in $\{0\}\cup [1,+\infty)$~\cite{Pe97}. The same fact is also established for Jordan superalgebras~\cite{PeSh18Jslow}. Also, an interesting direction of research is constructing associative nil algebras of different kinds of growth, in particular, of slow polynomial growth, see~\cite{LenSmo07,BellYoung11,LenSmoYoung12,Smo14}. Now we get a stronger version of~\cite{Pe97}: the gap $(1,2)$ can be filled with {\it nil} Lie $p$-algebras. Namely, using constant tuples, we get self-similar clover nil restricted Lie algebras whose Gelfand-Kirillov dimensions are dense in $[1,3]$ (Corollary~\ref{Cinterval}). \section{Structure of Clover Lie algebras $\TT(\Xi)$} \subsection{Basic relations} We start by establishing basic relations in clover restricted Lie algebras. In what follows, we assume that a field $K$ of characteristic $\ch K=p>0$ and a tuple of integers $\Xi=(S_n,R_n| n\ge 0)$ are fixed, and we consider the 3-generated clover restricted Lie algebra $\TT=\Lie_p(v_0,w_0,u_0)$. \begin{Lemma}\label{Lrelations} Let $i\ge 0$.
Then \begin{align*}\label{vap} v_i^{p^m}&=\dd_{x_i}^{p^m}+ x_i^{(p^{S_i}-p^m)} y_i^{(p^{R_i} -1)}v_{i+1}, \qquad\ 0\le m\le S_i;\\ w_i^{p^m}&=\dd_{y_i}^{p^m}+ y_i^{(p^{R_i}-p^m)} x_i^{(p^{S_i} -1)}w_{i+1}, \qquad\ 0\le m\le R_i;\\ u_i^{p^m}&=\dd_{z_i}^{p^m}+ z_i^{(p^{R_i}-p^m)} x_i^{(p^{S_i} -1)}u_{i+1},\ \qquad\ 0\le m\le R_i, \end{align*} where $\dd_{x_i}^{p^{S_i}}=\dd_{y_i}^{p^{R_i}}=\dd_{z_i}^{p^{R_i}}=0$ above. \end{Lemma} \begin{proof} Let us prove the first equality by induction on $m$. The base of induction $m=0$ is trivial by~\eqref{pivot-3}. Assume that the claim is valid for some $m$, $0\le m< S_i$. The summation in~\eqref{power_P} is trivial because the second term cannot be used more than once: \begin{align*} v_i^{p^{m+1}} &={(v_i^{p^{m}})}^p=\Big(\dd_{x_i}^{p^m}+ x_i^{(p^{S_i}-p^m)} y_i^{(p^{R_i} -1)}v_{i+1}\Big)^p\\ &={(\partial_{x_i}^{p^{m}})}^p+\big(\ad \partial_{x_i}^{p^m} \big)^{p-1} \Big( x_i^{(p^{S_i}-p^m)} y_i^{(p^{R_i} -1)}v_{i+1}\Big)\\ &=\partial_{x_i}^{p^{m+1}}+ x_i^{(p^{S_i}-{p^{m+1}})} y_i^{(p^{R_i} -1)}v_{i+1} ,\qquad\qquad 0\le m< S_i.\qedhere \end{align*} \end{proof} \begin{Lemma}\label{L_clover_relations} Let $\ch K=p>0$ and a tuple $\Xi$ be fixed. Consider the clover restricted Lie algebra $\TT(\Xi)=\Lie_p(v_0,w_0,u_0)$. Then \begin{enumerate} \item $v_i^{p^{S_{i}}} = y_{i}^{(p^{R_{i}}-1)} v_{i+1}$,\qquad\ $w_i^{p^{R_{i}}} = x_{i}^{(p^{S_{i}}-1)} w_{i+1}$,\qquad\ $u_i^{p^{R_{i}}} = x_{i}^{(p^{S_{i}}-1)} u_{i+1}$,\ for all $i\ge 0$. \item $ [ w_i^{p^{R_i}-1},v_i^{p^{S_i}}]=v_{i+1}$,\qquad $ [ v_i^{p^{S_i}-1},w_i^{p^{R_i}}]=w_{i+1}$,\qquad $ [ v_i^{p^{S_i}-1},u_i^{p^{R_i}}]=u_{i+1}$,\ for all $i\ge 0$. \item $v_i,w_i,u_i\in \TT(\Xi)$, $i\ge 0$. \end{enumerate} \end{Lemma} \begin{proof} This follows from the computations of~\cite{Pe20flies}, but let us check the formulas directly. The first claim is a particular case of Lemma~\ref{Lrelations}. The third claim follows from the second. Finally, let us check the second claim.
\begin{align*} &[ w_i^{p^{R_i}-1},v_i^{p^{S_i}}] =(\ad w_i)^{p^{R_i}-2}[w_i,v_i^{p^{S_i}}]\\ &\quad =(\ad w_i)^{p^{R_i}-2} \Big[\dd_{y_i}+y_{i}^{(p^{R_{i}}-1)} x_{i}^{(p^{S_{i}}-1)} w_{i+1}, y_{i}^{(p^{R_{i}}-1)} v_{i+1} \Big]=v_{i+1}. \qedhere \end{align*} \end{proof} \begin{Lemma} The subalgebra of $\TT(\Xi)$ generated by $v_0,w_0$ is isomorphic to $\LL(\Xi)$ defined by~\eqref{aibip}. \end{Lemma} \begin{proof} We observe that $v_0,w_0$ have the same recursive presentation as $a_0,b_0\in \LL(\Xi)$. \end{proof} \subsection{Head elements of two types} We construct a clear monomial basis for $\TT(\Xi)$, similar to that for its subalgebra $\LL(\Xi)\cong \Lie_p(v_0,w_0)\subset \TT(\Xi)$ found in~\cite{Pe17}. Consider products of two pivot elements~\eqref{pivot-3} of the same length: \begin{equation} \label{abp} \begin{split} h_{i+1}:=[w_i,v_i] &=x_i^{(p^{S_i}-1)} y_i^{(p^{R_i}-2)} v_{i+1} -x_i^{(p^{S_i}-2)} y_i^{(p^{R_i}-1)} w_{i+1},\\ g_{i+1}:=[v_i,u_i]&=x_i^{(p^{S_i}-2)} z_i^{(p^{R_i}-1)} u_{i+1}, \\ [w_i,u_i]&=0,\qquad i\ge 0. \end{split} \end{equation} \begin{Lemma}[{\cite{Pe17}, Lemma~4.2}]\label{Lcomm} For all $i\ge 0$ we have the following elements: \begin{enumerate} \item For all $0\le\xi<p^{S_i}$, $0\le\eta<p^{R_i}$ (except the case $\xi=p^{S_i}-1$ and $\eta=p^{R_i}-1$) we get: \begin{equation}\label{abp2} h_{i+1}^{\xi,\eta}:=[v_i^{\xi},w_i^{\eta},h_{i+1}] =x_i^{(p^{S_i}-1-\xi)} y_i^{(p^{R_i}-2-\eta)} v_{i+1} -x_i^{(p^{S_i}-2-\xi)} y_i^{(p^{R_i}-1-\eta)} w_{i+1}; \end{equation} \item The order of the multiplication above is not essential.
As particular cases, we get: \begin{align*} &h_{i+1}^{0,0}=h_{i+1}=[w_i,v_i];\\ &h_{i+1}^{p^{S_i}-1,\eta}=y_i^{(p^{R_i}-2-\eta)} v_{i+1}, \text{ for } 0\le \eta\le p^{R_i}-2; \text{ as a particular case: }\\ &h_{i+1}^{p^{S_i}-1,p^{R_i}-2}=v_{i+1};\\ &h_{i+1}^{\xi,p^{R_i}-1}=- x_i^{(p^{S_i}-2-\xi)} w_{i+1}, \text{ for } 0\le \xi\le p^{S_i}-2; \text{ as a particular case: }\\ &h_{i+1}^{p^{S_i}-2,p^{R_i}-1}=-w_{i+1}; \end{align*} \end{enumerate} \end{Lemma} Thus, for all $i\ge 0$, we obtain elements~\eqref{abp2}, called {\em heads of first type of length} $i+1$: \begin{equation}\label{heads} \Big\{h_{i+1}^{\xi,\eta}\ \Big|\ 0\le\xi<p^{S_{i}},\ 0\le\eta<p^{R_{i}}, \text{ except } (\xi=p^{S_{i}}-1 \text{ and } \eta=p^{R_{i}}-1)\Big\}. \end{equation} Consider~\eqref{heads} as a table of size $p^{S_i}\times p^{R_i}$, with rows and columns indexed by $\xi$, $\eta$; the lower right corner is empty. The table contains $v_{i+1}$ and $-w_{i+1}$ in the respective cells. Multiplying~\eqref{abp} by $v_i$, $u_i$ (the order is not essential), we get the {\em heads of second type of length} $i+1$: \begin{equation}\label{heads2} \Big\{g_{i+1}^{\xi,\zeta}:=[v_i^{\xi},u_i^{\zeta},[v_i,u_i]] =x_i^{(p^{S_i}-2-\xi)} z_i^{(p^{R_i}-1-\zeta)} u_{i+1} \Big| \ 0\le\xi\le p^{S_{i}}-2,\ 0\le\zeta\le p^{R_{i}}-1 \Big\}. \end{equation} We put~\eqref{heads2} into a table of size $(p^{S_i}-1)\times p^{R_i}$, with rows and columns indexed by $\xi$ and $\zeta$, the lower right corner containing $u_{i+1}$. \subsection{Monomial basis of clover restricted Lie algebras $\TT(\Xi)$} Define {\it tails} of first and second types: \begin{equation}\label{rmmp} \begin{split} r_n(x,y)&=x_0^{(\xi_0)}y_0^{(\eta_0)}\!\!\cdots x_n^{(\xi_n)}y_n^{(\eta_n)} \in R,\qquad\qquad\qquad 0\le\xi_i<p^{S_i},\ 0\le \eta_i<p^{R_i};\ n\ge 0;\\ r_n(x,y,z)&=x_0^{(\xi_0)}y_0^{(\eta_0)}z_0^{(\zeta_0)}\!\!\cdots x_n^{(\xi_n)}y_n^{(\eta_n)}z_n^{(\zeta_n)} \in R,\qquad 0\le\xi_i<p^{S_i},\ 0\le \eta_i,\zeta_i<p^{R_i};\ n\ge 0.
\end{split} \end{equation} For $n<0$ we assume that $r_n=1$. The notation $r_n(x,y)$ denotes a particular element of type~\eqref{rmmp}. If another element of this type appears, it will be denoted by $r_n'(x,y)$, $\tilde r_n(x,y,z)$, $r_n^{(k)}(*)$, etc., where the lower index denotes the largest index of the variables, whose types are given in parentheses. Define {\em standard monomials of first type} of length $n$, $n\ge 1$: \begin{equation} \label{rmmp3} \begin{split} &r_{n-2}(x,y)h_{n}^{\xi_{n-1},\eta_{n-1}}\\ &=r_{n-2}(x,y)\Big(x_{n-1}^{(p^{S_{n-1}}-1-\xi_{n-1})} y_{n-1}^{(p^{R_{n-1}}-2-\eta_{n-1})} v_{n} -x_{n-1}^{(p^{S_{n-1}}-2-\xi_{n-1})} y_{n-1}^{(p^{R_{n-1}}-1-\eta_{n-1})} w_{n} \Big),\\ &\quad \text{where } 0\le\xi_{n-1}<p^{S_{n-1}},\ 0\le \eta_{n-1}<p^{R_{n-1}}, \quad \text{except } (\xi_{n-1}{=}p^{S_{n-1}}{-}1\text{ and }\eta_{n-1}{=}p^{R_{n-1}}{-}1). \end{split} \end{equation} Recall that the {\em heads} $h_{n}^{\xi_{n-1},\eta_{n-1}}$ are described by~\eqref{abp2}, while the {\em tails} $r_{n-2}(x,y)$ are given by~\eqref{rmmp}. We call $x_{n-1},y_{n-1}$ {\it neck letters}. By Lemma~\ref{Lcomm}, we get the pivot elements $v_n,w_n$, for $n\ge 1$, as particular cases of such monomials. So, we also consider $v_0,w_0$ as standard monomials of first type of length $0$. Define {\em standard monomials of second type} of length $n$, $n\ge 1$: \begin{equation} \label{rmmp3B} \begin{split} r_{n-2}(x,y,z)g_{n}^{\xi_{n-1},\zeta_{n-1}}=r_{n-2}(x,y,z) x_{n-1}^{(p^{S_{n-1}}-2-\xi_{n-1})} z_{n-1}^{(p^{R_{n-1}}-1-\zeta_{n-1})} u_{n},\\ \text{ where }\quad 0 \le\xi_{n-1}\le p^{S_{n-1}}{-}2,\quad 0\le \zeta_{n-1}\le p^{R_{n-1}}{-}1. \end{split} \end{equation} Recall that the {\em heads} $g_{n}^{\xi_{n-1},\zeta_{n-1}}$ are described by~\eqref{heads2}, while the {\em tails} $r_{n-2}(x,y,z)$ are given by~\eqref{rmmp}. We call $x_{n-1},z_{n-1}$ {\it neck letters}. By definition, we consider $u_0$ as a standard monomial of second type of length $0$.
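To fix the notation, here is a small worked example (our computation, not taken from~\cite{Pe17}): take $p=2$ and the constant tuple $S_i=R_i=1$, so that $p^{S_i}=p^{R_i}=2$. Then the heads of first type of length $1$ form a $2\times 2$ table with the lower right corner empty, and the heads of second type of length $1$ form a $1\times 2$ table:

```latex
% Illustrative example: p = 2, S_i = R_i = 1 for all i.
\begin{align*}
h_1^{0,0}&=[w_0,v_0]=x_0 v_1 - y_0 w_1, &
h_1^{1,0}&=v_1, &
h_1^{0,1}&=-w_1;\\
g_1^{0,0}&=[v_0,u_0]=z_0 u_1, &
g_1^{0,1}&=u_1.
\end{align*}
```

Here the tails $r_{-1}$ are trivial, so these heads are exactly the standard monomials of length $1$ in this case.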
\begin{Theorem}\label{Tstmonom-p} A basis of the Lie algebra $L=\Lie(v_0,w_0,u_0)$ (i.e., only the Lie bracket is used) is given by \begin{enumerate} \item the standard monomials of first type of length $n\ge 0$; \item the standard monomials of second type of length $n\ge 0$. \end{enumerate} \end{Theorem} \begin{proof} The standard monomials of first type form a basis of $\Lie(v_0,w_0)$~\cite[Theorem 5.1]{Pe17}. Let us prove that the standard monomials of second type belong to $L$. We proceed by induction on the length $n$. We have $u_0\in L$. By~\eqref{heads2}, we get $g_{1}^{\xi,\zeta}=x_0^{(p^{S_0}-2-\xi)} z_0^{(p^{R_0}-1-\zeta)} u_{1}=[v_0^{\xi},u_0^{\zeta},[v_0,u_0]]\in L$, for all $0\le\xi\le p^{S_{0}}-2$, $0\le\zeta\le p^{R_{0}}-1$. Thus, we have the base of induction for $n=0,1$. Now let $n\ge 1$. By the induction hypothesis, $r_{n-2}(x,y,z)z_{n-1}^{(p^{R_{n-1}}-1)}u_{n}\in L$. We use the recurrence formula for $v_{n-1}$ and~\eqref{abp}: \begin{align*} &[v_{n-1}, r_{n-2}(x,y,z)z_{n-1}^{(p^{R_{n-1}}-1)}u_{n}] =\Big[\dd_{x_{n-1}}+x_{n-1}^{(p^{S_{n-1}}-1)}y_{n-1}^{(p^{R_{n-1}}-1)}v_{n}, r_{n-2}(x,y,z)z_{n-1}^{(p^{R_{n-1}}-1)}u_{n}\Big]\\ &\qquad\qquad=r_{n-2}(x,y,z)x_{n-1}^{(p^{S_{n-1}}-1)}y_{n-1}^{(p^{R_{n-1}}-1)}z_{n-1}^{(p^{R_{n-1}}-1)}[v_{n},u_{n}]\\ &\qquad\qquad=r_{n-2}(x,y,z)x_{n-1}^{(p^{S_{n-1}}-1)}y_{n-1}^{(p^{R_{n-1}}-1)}z_{n-1}^{(p^{R_{n-1}}-1)} \cdot x_{n}^{(p^{S_{n}}-2)} z_{n}^{(p^{R_{n}}-1)} u_{n+1}. \end{align*} By the induction hypothesis, the tail $r_{n-2}(x,y,z)$ can have arbitrary admissible powers of its variables. Multiplying by $v_n=\dd_{x_n}+x_{n}^{(p^{S_{n}}-1)} y_{n}^{(p^{R_{n}}-1)} v_{n+1} $, we can reduce the power of $x_n$ above to any desired value. The same argument applies to the remaining variables. Thus, all standard monomials of second type belong to $L$. It remains to show that products of standard monomials are expressed via standard monomials. This is true for monomials of first type because they form a basis of $\Lie(v_0,w_0)$~\cite{Pe17}.
Take two standard monomials of second type, written in short as $d_1=r_{n-1}u_n$ and $d_2=r_{m-1}u_m$; divided variables will be abbreviated as $x_i^{*}$. Assume that $n\le m$. Using the recurrence presentation, we get \begin{align*} [d_1,d_2]&=\big[r_{n-1}(\dd_{z_n}+x_{n}^{*} z_{n}^{*} (\dd_{z_{n+1}} +\cdots+ x_{m-2}^{*} z_{m-2}^{*}(\dd_{z_{m-1}}+x_{m-1}^{*}z_{m-1}^{*}u_m))),r_{m-1}u_m\big ] \\ &=r_{n-1}\dd_{z_n}(r_{m-1})u_m+r'_{n}\dd_{z_{n+1}}(r_{m-1})u_m+\cdots+r''_{m-2}\dd_{z_{m-1}}(r_{m-1})u_m. \end{align*} Observe that we get standard monomials of second type above. Consider products of monomials of different types. Write in short $d_1=r_{n-1}(x,y,z)u_n$, $d_2=r_{m-2}(x,y)h_{m}^{\xi,\eta}=r'_{m-1}(x,y)v_m-{r}''_{m-1}(x,y)w_m$. Consider the case $n\le m$. Using the recurrence presentation and~\eqref{abp}, \begin{align*} [d_1,d_2]&=\Big[r_{n-1}(x,y,z)\big(\dd_{z_n}+x_{n}^{*} z_{n}^{*} (\dd_{z_{n+1}} +\cdots \\[-3pt] &\qquad + x_{m-2}^{*} z_{m-2}^{*}(\dd_{z_{m-1}}+x_{m-1}^{*}z_{m-1}^{*}u_m))\big), r'_{m-1}(x,y)v_m-r''_{m-1}(x,y)w_m\Big ] \\ &=\bar r'_{m-1}(x,y,z)[u_m,v_m]-\bar r''_{m-1}(x,y,z)[u_m,w_m] =-\bar r'_{m-1}(x,y,z)x_m^{(p^{S_m}-2)} z_m^{(p^{R_m}-1)} u_{m+1}, \end{align*} yielding a standard monomial of second type. Consider the case $m<n$. Then \begin{align*} d_2&=\sum_{j=m}^{n-1}\Big(r^{(j)}_{j-1}(x,y)\dd_{x_j}- \bar r^{(j)}_{j-1}(x,y)\dd_{y_j}\Big) +r'_{n-1}(x,y)v_n- r''_{n-1}(x,y)w_n;\\ [d_2,d_1]&=\sum_{j=m}^{n-1}\Big(r^{(j)}_{j-1}(x,y)\dd_{x_j}- \bar r^{(j)}_{j-1}(x,y)\dd_{y_j}\Big) \Big(r_{n-1}(x,y,z)\Big)u_n +\tilde r'_{n-1}(x,y,z)[v_n,u_n], \end{align*} the last term yielding a standard monomial of second type by~\eqref{abp}. In the preceding sum, the action on variables with indices $n-1$ appears in the case $j=n-1$. Recall that $d_1=r_{n-1}(x,y,z)u_n=r_{n-2}(x,y,z)x_{n-1}^\a z_{n-1}^\g u_n$, where $0\le\a<p^{S_{n-1}}-1$, $0\le\g<p^{R_{n-1}}$. After the action in the sum above, we again get standard monomials of second type.
\end{proof} By Lemma~\ref{Lrelations}, \begin{equation}\label{powers} \begin{split} v_n^{p^i}&= \begin{cases} \dd_{x_n}^{p^i}+ x_n^{(p^{S_n}-p^i)} y_n^{(p^{R_n}-1)}v_{n+1}, &\quad 1\le i< S_n;\\ \hfill y_n^{(p^{R_n}-1)} v_{n+1}, &\hfill i=S_n, \end{cases}\\ w_n^{p^i}&= \begin{cases} \dd_{y_n}^{p^i}+ y_n^{(p^{R_n}-p^i)} x_n^{(p^{S_n}-1)}w_{n+1}, &\quad 1\le i< R_n;\\ \hfill x_n^{(p^{S_n}-1)} w_{n+1}, &\hfill i=R_n. \end{cases}\\ u_n^{p^i}&= \begin{cases} \dd_{z_n}^{p^i}+ z_n^{(p^{R_n}-p^i)} x_n^{(p^{S_n}-1)}u_{n+1}, &\quad 1\le i< R_n;\\ \hfill x_n^{(p^{S_n}-1)} u_{n+1}, &\hfill i=R_n. \end{cases} \end{split} \end{equation} We refer to nonzero powers of $v_n,w_n$ as {\it power standard monomials of first type} of length $n+1$; powers of $u_n$ are {\it power standard monomials of second type}. One checks that they are linearly independent of the standard monomials. \begin{Lemma}\label{Lnum_power} Let $n\ge 0$. There are $S_n+R_n$ power standard monomials of first type and $R_n$ power standard monomials of second type of length $n+1$. \end{Lemma} \begin{Theorem}\label{Tsemidirect} Let $\TT(\Xi)=\Lie_p(v_0,w_0,u_0)$ be the clover restricted Lie algebra. Then \begin{enumerate} \item A basis of $\TT(\Xi)$ is given by the standard and power standard monomials of first and second types. \item We have a semidirect product $$\TT(\Xi)=\Lie_p(v_0,w_0)\rightthreetimes J ,\qquad \Lie_p(v_0,w_0)\cong \LL(\Xi),$$ where the subalgebra $\Lie_p(v_0,w_0)$ is spanned by the standard and power standard monomials of first type, and the ideal $J$ is spanned by the standard and power standard monomials of second type. \end{enumerate} \end{Theorem} \begin{proof} To get a basis of the $p$-hull of a Lie algebra we need to add the $p^m$-powers, $m\ge 1$, of its basis elements~\cite{StrFar}. Observe that the standard monomials contain non-trivial tails except for the pivot elements. Thus, to get a basis of $\Lie_p(v_0,w_0,u_0)$ we add the nontrivial powers of the pivot elements, i.e., the power standard monomials.
The first claim is proved. By our specific construction, the pivot elements $\{v_i,w_i|i\ge 0\}$ (i.e., except $u_i$, $i\ge 0$) in~\eqref{pivot-3} are simply a renaming (in other letters) of the pivot elements $\{a_i,b_i|i\ge 0\}$~\eqref{aibip} of Example~\ref{E2}, the latter yielding the algebra $\LL(\Xi)$. Their commutators and $p$th powers yield exactly the (power) standard monomials of first type, which are a renaming of the basis of $\LL(\Xi)$ established in~\cite[Theorem~5.4]{Pe17}. Thus, the (power) standard monomials of first type span the subalgebra $\Lie_p(v_0,w_0)\cong \LL(\Xi)$. By the computations in the proof of Theorem~\ref{Tstmonom-p}, commutators of a standard monomial of second type with standard monomials of any type yield monomials of second type, proving that $J$ is indeed an ideal. The semidirect decomposition of the second claim follows. \end{proof} \subsection{Periodic tuple and self-similarity} The notion of self-similarity for Lie algebras was introduced by Bartholdi~\cite{Bartholdi15}. A Lie algebra $L$ is called {\it self-similar} if it affords a homomorphism $\psi : L \to \Der R \rightthreetimes R \otimes L$, where $R$ is a commutative algebra and $\Der R$ its Lie algebra of derivations. A further study of the notion of self-similarity for Lie algebras was carried out in~\cite{FutKochSis}. The notion of self-similarity in the case of Poisson superalgebras and Jordan superalgebras is considered in~\cite{PeSh18Jslow}. Both Lie superalgebras of~\cite{Pe16} are self-similar. The self-similarity of the two-generated subalgebra $\LL(\Xi)\cong \Lie_p(v_0,w_0)\subset \TT(\Xi)$ in the case of a periodic tuple $\Xi$ was observed in~\cite{Pe17}. The same phenomenon occurs in our case as well. \begin{Lemma}\label{Lself} Let a tuple $\Xi=(S_i,R_i|i\ge 0)$ be constant or, more generally, periodic. Then the clover restricted Lie algebra $\TT(\Xi)$ is self-similar.
\end{Lemma} \begin{proof} Let $N\in\N$ be the period of the tuple $\Xi$, namely $S_{i+N}=S_i$, $R_{i+N}=R_i$ for $i\ge 0$. By construction of the pivot elements~\eqref{aibi0} and periodicity, $$H:=\Lie_p(v_N,w_N,u_N)\cong \Lie_p(v_0,w_0,u_0)= \TT(\Xi) .$$ Consider the subalgebra of divided powers: $$R_{N-1}:=\big\langle x_0^{(\a_0)}y_0^{(\b_0)}z_0^{(\gamma_0)}\!\!\cdots x_{N-1}^{(\a_{N-1})}y_{N-1}^{(\b_{N-1})}z_{N-1}^{(\gamma_{N-1})} \big\rangle_K\subset R=R(\Xi), $$ where the bounds for divided powers are the same as in $R$ and determined by $\Xi$. Using~\eqref{pivot-3} and~\eqref{aibi0}, we get \begin{multline*} v_0 = \partial_{x_0} + x_0^{(p^{S_0}-1)}y_0^{(p^{R_0}-1)}\partial_{x_{1}} +x_0^{(p^{S_0}-1)}y_0^{(p^{R_0}-1)} x_1^{(p^{S_1}-1)}y_1^{(p^{R_1}-1)}\partial_{x_{2}}+\dots\\ \ldots +x_0^{(p^{S_0}-1)}y_0^{(p^{R_0}-1)}\cdots x_{N-2}^{(p^{S_{N-2}}-1)} y_{N-2}^{(p^{R_{N-2}}-1)} \partial_{x_{N-1}}\\ + x_0^{(p^{S_0}-1)}y_0^{(p^{R_0}-1)}\cdots x_{N-1}^{(p^{S_{N-1}}-1)} y_{N-1}^{(p^{R_{N-1}}-1)} v_N\\ = d_{N-1}+ p_{N-1} v_N, \qquad\text{where}\quad d_{N-1}\in\Der R_{N-1}, \quad p_{N-1}\in R_{N-1}. \end{multline*} Similar formulas are valid for $w_0$, $u_0$. These formulas for the generators yield a self-similarity embedding of the whole algebra \begin{equation*} \TT(\Xi) \hookrightarrow \Der R_{N-1} \rightthreetimes R_{N-1} \otimes H, \qquad H=\Lie(v_N,w_N,u_N)\cong\TT(\Xi). \qedhere \end{equation*} \end{proof} \section{Weight function, $\Z^3$-grading, bounds on weights} \subsection{Weights} By {\it pure monomials} we mean products of divided powers and pure derivations. In particular, if a monomial contains one pure derivation, we get a {\it pure Lie monomial}. Set $\a_n=\cwt(\dd_{x_n})=-\cwt(x_n)\in\C$, $\b_n=\cwt(\dd_{y_n})=-\cwt(y_n)\in\C$, $\g_n=\cwt(\dd_{z_n})=-\cwt(z_n)\in\C$, for all $n\ge 0$. These values are easily extended to a weight function on pure monomials, additive on their (Lie or associative) products.
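The closed formula of Lemma~\ref{Lweight_pivo} for the weights of the pivot elements can be checked numerically against the recurrence~\eqref{matrix3}. The following sketch (Python; the helper name `pivot_weights` is ours, not from the text) iterates the $3\times 3$ matrix of~\eqref{matrix3} on the initial total degrees $\wt(v_0)=\wt(w_0)=\wt(u_0)=1$ and compares the result with $\prod_{i<n}(p^{S_i}+p^{R_i}-1)$:

```python
# Numerical sanity check (ours): iterate the weight recurrence (matrix3)
# on the initial total degrees wt(v_0) = wt(w_0) = wt(u_0) = 1 and compare
# with the closed formula wt(v_n) = prod_{i<n} (p^{S_i} + p^{R_i} - 1).
from math import prod

def pivot_weights(p, S, R):
    """Weights (alpha_n, beta_n, gamma_n) of the pivot elements v_n, w_n, u_n."""
    a, b, g = 1, 1, 1
    out = [(a, b, g)]
    for s, r in zip(S, R):
        a, b, g = (p**s * a + (p**r - 1) * b,   # first row of the matrix
                   (p**s - 1) * a + p**r * b,   # second row
                   (p**s - 1) * a + p**r * g)   # third row
        out.append((a, b, g))
    return out

# A sample non-periodic tuple Xi = (S_i, R_i | i >= 0), truncated:
p, S, R = 2, [3, 1, 2, 2], [1, 2, 1, 3]
for n, triple in enumerate(pivot_weights(p, S, R)):
    closed = prod(p**S[i] + p**R[i] - 1 for i in range(n))
    assert triple == (closed, closed, closed)
```

The check reflects the point of the Remark after Lemma~\ref{Lweight_pivo}: all three row sums of the matrix are equal, so the common weight propagates.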
Next, consider weight functions such that all terms in the recurrence relation~\eqref{pivot-3} have the same weight, thus attaching the same value as the weight of the pivot element as well. We get the recurrence relation: \begin{equation} \label{matrix3} \begin{pmatrix} \a_{n+1}\\ \b_{n+1}\\ \g_{n+1} \end{pmatrix} = \begin{pmatrix} p^{S_{n }} & p^{R_{n }}-1 & 0\\ p^{S_{n }}-1 & p^{R_{n }} & 0\\ p^{S_{n }}-1 & 0 &p^{R_{n}} \end{pmatrix} \begin{pmatrix} \a_{n}\\ \b_{n}\\ \g_{n} \end{pmatrix}, \qquad n\ge 0. \end{equation} The recurrence relation~\eqref{matrix3} expresses the weights of the pivot elements of length $n+1$ via the weights of the pivot elements of length $n$. Hence, any weight function satisfying~\eqref{matrix3} is determined by its values on the pivot elements of zero length, namely, by $\cwt(v_0), \cwt(w_0),\cwt(u_0)$. Next, let $\wt_i(*)$ be the weight function which equals $1$ on the $i$-th element in the list $\{v_0,w_0,u_0\}$ and zero on the other two, for $i=1,2,3$. Compose the {\it multidegree weight function} $\Gr(v):=(\wt_1(v),\wt_2(v),\wt_3(v))$, where $v$ is a pure monomial. By definition, $\Gr(v_0)=(1,0,0)$, $\Gr(w_0)=(0,1,0)$, $\Gr(u_0)=(0,0,1)$. Thus, the space of weight functions satisfying~\eqref{matrix3} is $3$-dimensional with a basis $\wt_1(*),\wt_2(*),\wt_3(*)$. Using~\eqref{matrix3}, we see that $\Gr(v_n)\in\NO^3$ for all $n\ge 0$. Finally, define the total {\it degree weight function} $\wt(v):=\sum_{j=1}^3 \wt_j(v)$. Its initial values are $\wt(v_0)=\wt(w_0)=\wt(u_0)=1$. \begin{Lemma} \label{Lweight_pivo} $\wt(v_n)=\wt(w_n)=\wt(u_n)=\prod\limits_{i=0}^{n-1}\big(p^{S_{i}}+ p^{R_{i}}-1\big)$, $n\ge 0$, where $\wt(v_0)=\wt(w_0)=\wt(u_0)=1$. \end{Lemma} \begin{proof} The base of induction, $n=0$, is trivial. Assume that the formula holds for $n\ge 0$.
By the recurrence relation~\eqref{matrix3}, where $\a_n=\wt v_n=\b_n=\wt w_n=\gamma_n=\wt u_n=\prod\limits_{i=0}^{n-1}(p^{S_{i}}+ p^{R_{i}}-1)$, the first row yields \begin{align*} \wt v_{n+1}=\a_{n+1}=p^{S_n} \a_n+ (p^{R_n}-1)\b_n =(p^{S_n}+p^{R_n}-1)\prod\limits_{i=0}^{n-1}\big(p^{S_{i}}+ p^{R_{i}}-1\big)= \prod\limits_{i=0}^{n}\big(p^{S_{i}}+ p^{R_{i}}-1\big). \end{align*} Using~\eqref{matrix3}, the same formula holds for $\wt w_{n+1}$, $\wt u_{n+1}$. \end{proof} \begin{Remark} The proof shows the importance of our particular construction~\eqref{pivot-3}. In particular, we essentially used that the sums of the entries in each of the three rows of the matrix in~\eqref{matrix3} are equal, which originates from the fact that the top indices for the divided powers $y_i,z_i$ are the same, determined by $R_i$, for $i\ge 0$. Otherwise it would not be possible to find any formulas for the weights of the pivot elements. \end{Remark} Recall that $\wt(*)$ is the {\it total degree} function determined by $\wt(v_0)=\wt(w_0)=\wt(u_0)=1$. Let $\wt_{12}(*)$ be determined by the initial values $\wt_{12}(v_0)=\wt_{12}(w_0)=1$ and $\wt_{12}(u_0)=0$. Observe that $\wt_{12}(v)=\wt_1(v)+\wt_2(v)$ and $\wt(v)=\wt_{12}(v)+\wt_3(v)$ for any monomial $v$. Thus, $\wt_{12}(v)$ counts the multiplicity of $v$ with respect to $v_0,w_0$, and $\wt_3(v)$ counts the multiplicity with respect to $u_0$ only. \begin{Lemma} \label{Lweight_pivo2} For all $n\ge 0$ we have \begin{align*} \wt_3(u_n)&=p^{{R_0}+\cdots +R_{n-1}},\qquad \wt_3(v_n)=\wt_3(w_n)=0;\\ \wt_{12}(v_n)&=\wt_{12}(w_n)=\prod_{i=0}^{n-1}\big(p^{S_{i }}+ p^{R_{i }}-1\big);\\ \wt_{12}(u_n)&=\prod_{i=0}^{n-1}\big(p^{S_{i }}+ p^{R_{i }}-1\big)- p^{{R_0}+\cdots +R_{n-1}}. \end{align*} \end{Lemma} \begin{proof} The first three formulas are checked by induction. Using $\wt(*)=\wt_{12}(*)+\wt_3(*)$ and Lemma~\ref{Lweight_pivo}, we get the last formula.
\end{proof} \subsection{$\NO^3$-gradings} By a {\it generalized monomial} $a\in\End R$ we mean any (Lie or associative) product of pure monomials and pivot elements. By construction, actual pivot elements and their products are generalized monomials. Observe that generalized monomials are written as infinite linear combinations of pure monomials. Our construction implies that these pure monomials have the same weight; we call this value the weight of the generalized monomial. Thus, the weight functions are well-defined on generalized monomials as well. Also, $\Gr(v)\in\NO^3$ for any generalized monomial $v$. In many examples studied before~\cite{PeSh09,PeSh13fib,Pe16,Pe17,PeOtto,PeSh18FracPJ} we were able, as a rule, to compute explicit basis functions for the space of weight functions and to study the multigradings in more detail. Using those basis weight functions and multigradings, we were able to get more information about the algebras. In the general setting of the present paper this is not possible. \begin{Theorem} $\strut$ \label{Tgraded} \begin{enumerate} \item The multidegree weight function $\Gr(v)$ is additive on products of generalized monomials $v,w\in\End R$: $$ \Gr([v, w])=\Gr(v)+\Gr(w),\qquad \Gr(v\cdot w)=\Gr(v)+\Gr(w). $$ \item $\TT=\Lie_p(v_0,w_0,u_0)$, $\AA=\Alg(v_0,w_0,u_0)$ are $\NO^3$-graded by multidegree in the generators $\{v_0,w_0,u_0\}$: $$ \TT=\mathop{\oplus}\limits_{(n_1,n_2,n_3)\in\NO^3} \TT_{n_1,n_2,n_3},\qquad \AA=\mathop{\oplus}\limits_{(n_1,n_2,n_3)\in\NO^3} \AA_{n_1,n_2,n_3}. $$ \item $\wt(*)$ counts the total degree of $v\in\TT,\AA$ in the generators, yielding gradings: $$ \TT=\mathop{\oplus}\limits_{n=1}^\infty \TT_n,\qquad \AA=\mathop{\oplus}\limits_{n=1}^\infty \AA_n. $$ \end{enumerate} \end{Theorem} \begin{proof} Claim i) follows from the additivity of the weight function on products of pure monomials. Consider ii). Recall that $\Gr(v)\in\NO^3$ for any generalized monomial $v$ and $\Gr(*)$ is additive on their products.
Thus, we get $\NO^3$-gradings on $\TT$, $\AA$. Let $v$ be a monomial of multidegree $(n_1,n_2,n_3)$ in the generators $\{v_0,w_0,u_0\}$, each number $n_i$ counting the entries of $v_0,w_0,u_0$ in $v$, respectively. By additivity, $\Gr(v)=n_1\Gr(v_0)+n_2\Gr(w_0)+n_3\Gr(u_0)= (n_1,n_2,n_3)$. Hence, $\TT$, $\AA$ are $\NO^3$-graded by multidegree in the generators. Now, the last claim is evident. \end{proof} \subsection{Bounds on weights} \begin{Lemma}[\cite{Pe17}, Lemma 6.5]\label{Lbounds1} Let $w$ be a (power) standard monomial of first type of length $n\ge 0$. Then $$ \wt(v_{n-1})+1\le \wt(w)\le\wt(v_n). $$ \end{Lemma} We shall also write a standard monomial $w$ of second type (see~\eqref{rmmp3B},~\eqref{rmmp}) as: \begin{equation}\label{second_type2} \begin{split} &w=r_{n-3}(x,y,z) x_{n-2}^{(p^{S_{n-2}}-1-\a)} y_{n-2}^{(p^{R_{n-2}}-1-\b)} z_{n-2}^{(p^{R_{n-2}}-1-\g)}\cdot x_{n-1}^{(p^{S_{n-1}}-2-\xi)} z_{n-1}^{(p^{R_{n-1}}-1-\zeta)} u_{n},\\ &\text{ where } 0\le\a <p^{S_{n-2}},\ 0\le \b,\gamma<p^{R_{n-2}},\ 0\le \zeta<p^{R_{n-1}};\text { and }\ 0\le\xi \le p^{S_{n-1}}-2. \end{split} \end{equation} \begin{Lemma}\label{Lbounds2} Let $w$ be a (power) standard monomial of second type of length $n\ge 0$. \begin{enumerate} \item Let $w$ be a power standard monomial. Then $$ \wt(v_{n-1})+1\le \wt(w)\le\wt(v_n). $$ \item Let $w$ be a standard monomial of second type of length $n\ge 2$; then $$ (p^{S_{n-2}}{-}1)\wt(v_{n-2})< \wt(w)\le\wt(v_n). $$ \item Let $w$ be of second type, presented as~\eqref{second_type2}; then we get more precise bounds: \begin{align}\label{estimate} \wt(w)&> (p^{S_{n-2}}{-}1+\a+\b+\g)\wt (v_{n-2})+(\xi+\zeta)\wt(v_{n-1});\\ \wt(w)&\le\wt(v_{n-1})(\xi+\zeta+2). \label{estimate2} \end{align} \item Let $w$ be of second type~\eqref{second_type2} and assume that $\xi>0$ or $\zeta>0$; then $$ \wt(v_{n-1})+1\le \wt(w)\le\wt(v_n). $$ \end{enumerate} \end{Lemma} \begin{proof} i) Weights of power standard monomials of second type (i.e.
powers of $u_{n-1}$) are equal to weights of the respective powers of $w_{n-1}$, and we apply Lemma~\ref{Lbounds1}. ii) Let $w$ be a standard monomial of second type~\eqref{rmmp3B}. Clearly, its weight is bounded by $\wt(u_n)=\wt(v_n)$. We get a lower bound by taking the maximal allowed powers of the variables. Below we use homogeneous components of the partial recurrence expansions for $u_0$ and $v_0$, together with the fact that $\wt v_0= \wt u_0=1$. \begin{align*} &\wt(w) \ge \wt \Big(\Big(\prod_{i=0}^{n-2}x_i^{(p^{S_i}-1)}y_i^{(p^{R_i}-1)}z_i^{(p^{R_i}-1)}\Big) x_{n-1}^{(p^{S_{n-1}}-2)}z_{n-1}^{(p^{R_{n-1}}-1)} u_n\Big)\\ &\qquad =\wt \Big(\big( \prod_{i=0}^{n-1} x_i^{(p^{S_i}-1)}z_i^{(p^{R_i}-1)}\big)u_n\Big)-\wt(x_{n-1}^{(1)}) +\wt\Big(\prod_{i=0}^{n-2 }y_i^{(p^{R_i}-1)}\Big) \\ &\qquad =\wt u_0+\wt(v_{n-1}) +\wt\Big(\prod_{i=0}^{n-2 }y_i^{(p^{R_i}-1)}\Big) =1+\wt\Big(\big( \prod_{i=0}^{n-2} x_i^{(p^{S_i}-1)}y_i^{(p^{R_i}-1)}\big)v_{n-1}\Big) -\sum_{i=0}^{n-2}\wt(x_i^{(p^{S_i}-1)}) \\[-3pt] &\qquad =1+\wt(v_0)+\sum_{i=0}^{n-2} (p^{S_i}-1)\wt(v_i) \ge 2+(p^{S_{n-2}}-1)\wt (v_{n-2}). \end{align*} iii) The preceding lower bound is given by the maximal allowed powers of the divided variables. In comparison with that bound we get the additional terms $(\a+\b+\g) \wt(v_{n-2})$ and $(\xi+\zeta)\wt(v_{n-1})$. Recall that by~\eqref{heads2} the head of $w$ is $g_n^{\xi,\zeta}=[v_{n-1}^{\xi},u_{n-1}^{\zeta},[v_{n-1},u_{n-1}]]$; since the latter multiplicands all have the same weight, we get $\wt(g_n^{\xi,\zeta})=\wt(v_{n-1})(\xi+\zeta+2)$. Since tail variables only decrease the weight, we obtain~\eqref{estimate2}. iv) We use iii) and $(\xi+\zeta)\wt(v_{n-1})\ge \wt(v_{n-1})$. \end{proof} \section{Growth of general clover restricted Lie algebras $\TT(\Xi)$} \subsection{Arbitrary tuple $\Xi$} \begin{Lemma} \label{Lest} Fix numbers $p>1$ and $r,s>0$. Then $p^s+p^r-1>p^{(s+2r)/3}.$ \end{Lemma} \begin{proof} Assume that $s\ge r$; then $s\ge (s+2r)/3\ge r$ and $p^s+p^r-1\ge p^{(s+2r)/3}+(p^r-1)>p^{(s+2r)/3}$.
Consider the case $s<r$; then $s< (s+2r)/3< r$ and $p^s+p^r-1> p^{(s+2r)/3}+(p^s-1)>p^{(s+2r)/3}$. \end{proof} \begin{Theorem}\label{Tgrowth3} Let $\Xi$ be an arbitrary tuple of parameters and $\TT(\Xi)$ the respective clover restricted Lie algebra. Then $1\le \GKdim \TT(\Xi)\le 3$. \end{Theorem} \begin{proof} By Theorem~\ref{Tsemidirect}, $\TT(\Xi)$ is a semidirect product of $\LL(\Xi)$ with the ideal $J$, where bases of $\LL(\Xi)$ and $J$ consist of monomials of first and second types, respectively. By~\cite[Theorem 7.2]{Pe17}, $\GKdim \LL(\Xi)\le 2$, yielding an upper bound on the number of the (power) standard monomials of first type. Since power standard monomials of second type (i.e., powers of $u_n$, $n\ge 0$) behave like powers of $w_n$, the same estimate on the growth of $\LL(\Xi)$ applies to them. Fix a number $m>1$. It remains to derive an upper bound on the number of standard monomials of second type of weight at most $m$. Let $n=n(m)$ be such that \begin{equation}\label{wtan-1} \wt(v_{n-1})< m\le \wt(v_n). \end{equation} Put $m_0:=\wt(v_{n-1})$ and $m_1:=[m/m_0]$. By Lemma~\ref{Lweight_pivo} and Lemma~\ref{Lest}, \begin{equation}\label{m0} m_0=\wt(v_{n-1})=\prod_{i=0}^{n-2}(p^{S_i}+p^{R_i}-1)> p^{(S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2}))/3},\quad n\ge 2. \end{equation} Let $w$ be a standard monomial of second type of length $n'$ and $\wt(w)\le m$. Assume that $n'\ge n+2$. By Claim ii) of Lemma~\ref{Lbounds2}, $m\ge \wt (w)>\wt(v_{n'-2})\ge \wt(v_n)$, a contradiction with~\eqref{wtan-1}. Hence, $w$ is of length at most $n+1$. 1) We evaluate the number $f_1(m)$ of standard monomials $w$ of second type of length $n+1$ satisfying $\wt(w)\le m$. By claim iv) of Lemma~\ref{Lbounds2}, $\xi=\zeta=0$ (i.e., the neck variables reach their maximal degrees). Thus, we get monomials \begin{equation}\label{tails2} w=r_{n-2}(x,y,z) x_{n-1}^{(p^{S_{n-1}}-1-\a)} y_{n-1}^{(p^{R_{n-1}}-1-\b)} z_{n-1}^{(p^{R_{n-1}}-1-\g)}\cdot x_{n}^{(p^{S_{n}}-2)} z_{n}^{(p^{R_{n}}-1)} u_{n+1}.
\end{equation} Using~\eqref{rmmp} and~\eqref{m0}, we estimate the number of tails $r_{n-2}(x,y,z)$ in~\eqref{tails2} as: \begin{equation}\label{tails} p^{S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2})}< m_0^3. \end{equation} Using estimate~\eqref{estimate} (the indices are shifted by one!), $ m\ge \wt(w)> (1+\a+\b+\g)\wt (v_{n-1})$. We get the estimates \begin{equation}\label{xi_eta} 0\le \a+\b+\g \le \Big[\frac{m}{\wt(v_{n-1})}\Big]-1= m_1-1,\qquad \a,\b,\g\ge 0. \end{equation} The number of possibilities for the variables with indices $n-1$ in~\eqref{tails2} is bounded by the number of triples of integers $\a,\b,\g$ satisfying~\eqref{xi_eta}, which equals \begin{equation}\label{headss} \binom {m_1+2}{3}=\frac{m_1(m_1+1)(m_1+2)}6\le m_1^3, \end{equation} where the last estimate is checked directly for all $m_1\ge 1$. Using~\eqref{tails} and~\eqref{headss}, we get \begin{equation*} f_1(m)< m_0^3 m_1^3 = (m_0m_1)^3\le m^3. \end{equation*} 2) Let $f_2(m)$ be the number of standard monomials of second type~\eqref{second_type2} of length $n$ satisfying $\wt(w)\le m$. Using~\eqref{m0}, the number of possibilities for the divided powers with indices $0,\ldots,n-2$ in~\eqref{second_type2} is bounded by \begin{equation}\label{tails7} p^{S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2})}< m_0^3. \end{equation} Using estimate~\eqref{estimate}, $ m\ge \wt(w)> (\xi+\zeta)\wt (v_{n-1}).$ We get the estimates \begin{equation}\label{xizeta8} 0\le \xi+\zeta \le \Big[\frac{m}{\wt(v_{n-1})}\Big]= m_1,\qquad \xi,\zeta\ge 0. \end{equation} The number of possibilities for the neck letters $x_{n-1},z_{n-1}$ in~\eqref{second_type2} is bounded by the number of pairs of integers $\xi,\zeta$ satisfying~\eqref{xizeta8}. We get a bound \begin{equation}\label{heads9} \binom {m_1+2}{2}\le 3 m_1^3, \qquad m_1\ge 1, \end{equation} where one checks the last estimate directly. Using~\eqref{tails7} and~\eqref{heads9}, we obtain an estimate \begin{equation*} f_2(m)\le 3m_0^3 m_1^3 \le 3m^3.
\end{equation*} 3) Let $f_3(m)$ be the number of all standard monomials of second type~\eqref{second_type2} of length $n-1$. Using~\eqref{m0}, the number of possibilities for all divided powers (having indices $0,\ldots,n-2$) is bounded by \begin{equation*} f_3(m)\le p^{S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2})}< m_0^3\le m^3. \end{equation*} The analogous estimate for standard monomials of second type of length $n-2$ is at least $p^{3}$ times smaller than the estimate above. The same applies to lengths $n-3,\ldots, 0$. Let $\tilde f_3(m)$ be the number of standard monomials of second type~\eqref{second_type2} of length at most $n-1$. We get a bound $$ \tilde f_3(m)\le \sum_{i=0}^{n-1}p^{-3i}\cdot f_3(m)\le \frac {m^3}{1-p^{-3}}\le \frac 87 m^3<2m^3. $$ Finally, the obtained bounds yield the desired estimate on the number of standard monomials of second type of weight at most $m$: \begin{equation*} f_1(m)+f_2(m)+\tilde f_3(m)\le 6m^3. \qedhere \end{equation*} \end{proof} \subsection{Periodic and constant tuples $\Xi$} \begin{Theorem}\label{Tperiod} Let a tuple $\Xi=(S_i,R_i| i\ge 0)$ be periodic: $S_{i+N}=S_i$, $R_{i+N}=R_i$ for $i\ge 0$, where $N\in\N$ is fixed. Denote $$ \mu:=\prod_{i=0}^{N-1} (p^{S_i}+p^{R_i}-1),\qquad \sigma:= \sum_{i=0}^{N-1 }(S_i+2R_{i}),\qquad \lambda:=\frac{\sigma\ln p}{\ln \mu}.$$ Consider the clover restricted Lie algebra $\TT=\TT(\Xi)$. Then \begin{enumerate} \item $\GKdim \TT=\LGKdim \TT=\lambda$. \item $C_1 m^{\lambda}< \gamma_\TT (m)< C_2 m^{\lambda}$ for $m\ge 1$, where $C_1$, $C_2$ are positive constants. \item $\lambda\in [1,3]$. \end{enumerate} \end{Theorem} \begin{proof} By Lemma~\ref{Lweight_pivo} and periodicity, $\wt (v_{jN})=\mu^j$, $j\ge 0$. Fix a number $m>1$. We choose $n=n(m)$ satisfying \begin{equation}\label{wtan-A} \wt(v_{(n-1)N})=\mu ^{n-1}< m\le \wt(v_{nN})=\mu^{n}. \end{equation} Then \begin{equation}\label{nup} n< \log_{\mu}(m)+1.
\end{equation} Consider a standard monomial $w$ of second type such that $\wt(w)\le m$. Assume that $w$ has length $n'\ge nN+2$. By claim ii) of Lemma~\ref{Lbounds2}, $\wt(w)>\wt(v_{n'-2})\ge \wt v_{nN}\ge m$, a contradiction. Hence, $w$ is of length at most $nN+1$. Let $f_1(n)$ be the number of standard monomials of second type of length at most $(n+1)N$. We evaluate their number using the form of these monomials~\eqref{rmmp3B}, \eqref{rmmp}, periodicity, and~\eqref{nup}: $$ f_1(n)\le \prod_{i=0}^{(n+1)N-1} p^{S_i+2R_i} =p^{\sigma (n+1)} \le p^{\sigma (\log_{\mu}(m)+2)} \le p^{2\sigma} m^{\sigma \ln p/ \ln \mu}=p^{2\sigma} m^\lambda. $$ Let $w$ be a standard monomial of first type with $\wt(w)\le m$. As above, by the lower bound in Lemma~\ref{Lbounds1} and the upper bound in~\eqref{wtan-A}, $w$ is of length at most $nN$. Let $f_2(n)$ be the number of standard monomials of first type of length at most $nN$. Similarly, by~\eqref{rmmp3}, \eqref{rmmp}, and~\eqref{nup}, we get $ f_2(n)< p^{\sigma n}$, yielding a smaller bound than above. Let $f_3(n)$ be the number of all power standard monomials of weight at most $m$. By the lower estimates of Lemmas~\ref{Lbounds1}, \ref{Lbounds2}, and the upper bound in~\eqref{wtan-A}, they are of length at most $nN$. We apply Lemma~\ref{Lnum_power} and~\eqref{nup}: $$ f_3(n)\le \sum_{i=0}^{nN-1}(S_i+2R_i)= n\sigma \le (\log_{\mu}(m)+1)\sigma. $$ Now, the upper bound follows using that $\gamma_\TT(m)\le f_1(n)+f_2(n)+f_3(n)$. By the upper bound in~\eqref{wtan-A}, $n\ge \log_{\mu}(m)$. Consider standard monomials $w$ of second type~\eqref{rmmp3B} of length $(n-1)N$. By the lower bound in~\eqref{wtan-A}, $\wt(w)\le \wt(v_{(n-1)N}) <m$.
We count the number of distinct initial parts of their tails $r_{(n-1)N-2}(x,y,z)$ (see~\eqref{rmmp3B}, \eqref{rmmp}), yielding a lower bound: \begin{equation*} \gamma_\TT(m)\ge \prod_{i=0}^{(n-3)N-1} p^{S_i+2R_i} = p^{\sigma (n-3)}\ge p^{\sigma (\log_\mu (m)-3)} =p^{-3\sigma} m^{{\sigma \ln p}/ \ln \mu }=p^{-3\sigma} m^\lambda. \end{equation*} The second claim follows from our estimates. The last claim follows from Theorem~\ref{Tgrowth3}. \end{proof} \begin{Corollary}\label{Cconstant} Let a tuple $\Xi=(S_i,R_i| i\ge 0)$ be constant: $S_i=S$, $R_i=R$ for $i\ge 0$, where $S,R\in\N$. Denote $\displaystyle \lambda=\frac{(S+2R)\ln p}{\ln(p^S+p^R-1)}. $ Consider the clover restricted Lie algebra $\TT=\TT(S,R):=\TT(\Xi)$. Then $\GKdim \TT=\LGKdim \TT=\lambda$. \end{Corollary} \begin{Corollary}\label{Cinterval} Let $\ch K=p>0$. Consider the self-similar clover restricted Lie algebras $\TT(S,R)$ given by constant tuples $\Xi$ determined by two integers $S,R$. Then $\{\GKdim\TT(S,R)\mid S,R\in\N\}$ is dense in $[1,3]$. \end{Corollary} \begin{proof} By choosing $R=S$ sufficiently large, we can obtain $\GKdim\TT(S,R)$ arbitrarily close to $3$. By fixing $R=1$ and choosing $S$ sufficiently large, we can obtain $\GKdim\TT(S,R)$ arbitrarily close to $1$. Consider a large positive integer $S$ and $R\in\{1,\ldots,S\}$. One checks that for $R$ and $R'=R+1$ the respective Gelfand--Kirillov dimensions differ by $O(1/S)$ as $S\to+\infty$. \end{proof} \section{Proof of main results: Clover restricted Lie algebras of Quasi-linear growth} \begin{proof}[Proof of Theorem~\ref{Tparam}] All claims follow from the first one. Recall that we consider the tuple of integers $\Xi_\kappa:=(S_i:=[(i+1)^{1/\kappa-1}], R_i:=1\mid i\ge 0)$, and the clover restricted Lie algebra $\TT=\TT(\Xi_\kappa)$. By Theorem~\ref{Tsemidirect}, $\TT(\Xi)$ is a semidirect product of $\LL(\Xi)$ with the ideal $J$, where bases of $\LL(\Xi)$ and $J$ consist of monomials of first and second types, respectively.
We start with general estimates used in the proof of the next theorem as well. By Lemma~\ref{Lweight_pivo}, \begin{align}\label{boundSlower} \wt(v_n)&=\prod_{i=0}^{n-1}(p^{S_i}{+}p{-}1)>p^{S_0+\cdots+S_{n-1}}, \quad n\ge 1;\\ \wt(v_n)&=\prod_{i=0}^{n-1}(p^{S_i}{+}p{-}1)<\theta p^{S_0+\cdots+S_{n-1}}, \quad \theta:=\prod_{i=0}^\infty (1{+}p^{1-S_i}), \quad n\ge 1. \label{boundSupper} \end{align} Indeed, it is well known that convergence of the infinite product is equivalent to convergence of the sum $\sum_{i=0}^\infty p^{1-S_i}$. We have $S_i>2\log_p i$ for all sufficiently large $i$, say $i\ge N$. Thus, $\sum_{i=N}^\infty p^{-S_i}\le \sum_{i=N}^\infty 1/i^2<\infty$. We shall use the following well-known estimates: \begin{equation}\label{boundsC} (\kappa+o(1)) n^{1/\kappa} =\sum_{i=0}^{n- 2}S_i < \sum_{i=0}^{n}(i+1)^{1/\kappa-1} =(\kappa+o(1)) n^{1/\kappa},\qquad n\to\infty. \end{equation} Let us prove the desired upper bound on the number of standard monomials of second type. Fix a number $m>1$. Choose $n=n(m)$ such that \begin{equation}\label{an1an} \wt(v_{n-1})< m\le \wt(v_n). \end{equation} Put $m_0:=\wt(v_{n-1})$ and $m_1:=[m/m_0]$. By~\eqref{boundSlower} and the lower estimate in~\eqref{boundsC}, we get \begin{align}\label{m22} m_0&=\wt(v_{n-1})>p^{S_0+\cdots+S_{n-2}} \ge p^{(\kappa+o(1)) n^{1/\kappa}},\quad n\to\infty;\\ \label{llpm} n &\le \Big( (1/\kappa{+}o(1))\log_p m_0\Big)^{\kappa} \le \Big(\frac {1{+}o(1)}{\kappa\ln p}\ln m\Big)^{\kappa}, \qquad m\to \infty. \end{align} Let $w$ be a standard monomial of second type with $\wt(w)\le m$. Suppose that it has length $n'\ge n+2$. By claim ii) of Lemma~\ref{Lbounds2}, $\wt(w)> \wt(v_{n'-2})\ge \wt(v_n)\ge m\ge \wt(w)$. The contradiction proves that $w$ is of length at most $n+1$. 1) We evaluate the number $f_1(m)$ of standard monomials $w$ of second type of length $n+1$ satisfying $\wt(w)\le m$.
By claim iv) of Lemma~\ref{Lbounds2}, the head variables in $w$ have the maximal degrees and we get monomials of the form: \begin{equation}\label{tails2a} w=r_{n-2}(x,y,z) x_{n-1}^{(p^{S_{n-1}}-1-\a)} y_{n-1}^{(p^{R_{n-1}}-1-\b)} z_{n-1}^{(p^{R_{n-1}}-1-\g)}\cdot x_{n}^{(p^{S_{n}}-2)} z_{n}^{(p^{R_{n}}-1)} u_{n+1}. \end{equation} Using~\eqref{m22}, we bound the number of tails $r_{n-2}(x,y,z)$ in~\eqref{tails2a} as: \begin{equation} \label{tails3} p^{S_0+\cdots+S_{n-2}}p^{2(R_0+\cdots +R_{n-2})} < m_0 p^{2n}. \end{equation} By estimate~\eqref{estimate}, $ m\ge \wt(w)> (1+\a+\b+\g)\wt (v_{n-1})$. We get the estimates \begin{equation}\label{xi_eta2} 0\le \a+\b+\g \le \Big[\frac{m}{\wt(v_{n-1})}\Big]-1= m_1-1,\qquad\text{where} \ 0\le \b,\g<p^{R_{n-1}}=p. \end{equation} So, both $\b$ and $\g$ have at most $p$ choices. Now the number of possibilities for the variables with indices $n-1$ in~\eqref{tails2a} is equal to the number of integers $\a,\b,\g$ satisfying~\eqref{xi_eta2}, which is bounded by $p^2 m_1$. Combining with the bound on the number of tails~\eqref{tails3}, we get \begin{equation}\label{f1a} f_1(m)\le p^2m_1\cdot m_0 p^{2n}\le p^2 m p^{2n}. \end{equation} 2) We evaluate the number $f_2(m)$ of standard monomials of second type and length $n$ such that $\wt(w)\le m$. Using~\eqref{m22}, the number of possibilities for the variables with indices $0,\ldots,n{-}2$ in~\eqref{second_type2} is bounded by: \begin{equation}\label{tails8} p^{S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2})}< m_0 p^{2n}. \end{equation} Using estimate~\eqref{estimate}, we get $ m\ge \wt(w)> (\xi+\zeta)\wt (v_{n-1})$ and \begin{equation}\label{xizeta} 0\le \xi+\zeta \le \Big[\frac{m}{\wt(v_{n-1})}\Big]= m_1, \qquad\text{where} \ 0\le \zeta <p^{R_{n-1}}=p. \end{equation} Thus, the number of possibilities for the neck letters $x_{n-1},z_{n-1}$ in~\eqref{second_type2} is bounded by the number of integers $\xi,\zeta$ satisfying~\eqref{xizeta}, which is bounded by $p(m_1+1)$.
Using the bound on the number of tails~\eqref{tails8}, we get \begin{equation}\label{f2} f_2(m)\le p(m_1+1)\cdot m_0 p^{2n} \le 2p\cdot m p^{2n}. \end{equation} 3) We evaluate the number $f_3(m)$ of standard monomials of second type and length $n-1$. Using~\eqref{m22}, the number of possibilities for all the divided powers, now having indices $0,\ldots,n-2$ in~\eqref{second_type2}, is bounded by \begin{equation*} f_3(m)\le p^{S_0+\cdots+ S_{n-2}+2(R_0+\cdots+R_{n-2})}< m_0 p^{2n}\le mp^{2n}. \end{equation*} The number of standard monomials of second type of length $n-2$ is smaller than the estimate above by at least a factor of $p^{-3}$. The same applies to lengths $n-3,\ldots, 0$. Let $\tilde f_3(m)$ be the number of standard monomials of second type~\eqref{second_type2} of length at most $n-1$. Using~\eqref{llpm}, we get \begin{equation}\label{bound3} \tilde f_3(m)\le \sum_{i=0}^{n-2}p^{-3i} f_3(m)\le \frac {m p^{2n}}{1-p^{-3}} \le 2m p^{2n}. \end{equation} Let $f_4(m)$ be the number of power standard monomials $w$ of second type of weight at most $m$. By~\eqref{an1an} and claim i) of Lemma~\ref{Lbounds2}, $w$ is of length at most $n$. By~\eqref{powers}, $f_4(m)\le R_0+\cdots+R_{n-1}= n$. Combining~\eqref{f1a}, \eqref{f2}, \eqref{bound3}, and using~\eqref{llpm}, the number of all standard monomials of second type and weight at most $m$ is bounded by \begin{align}\label{upper} &f_1(m)+f_2(m)+\tilde f_3(m)+f_4(m)\le (p^2+2p+2)mp^{2n}+n \\ &\quad \le (p^2+2p+2) m \exp \bigg( 2\ln p \Big(\frac {1/\kappa{+}o(1)}{\ln p}\ln m\Big)^{\kappa}\bigg)\nonumber\\ &\quad = m\exp \bigg(\frac { 2(\ln p)^{1-\kappa}{+}o(1)}{\kappa^\kappa}(\ln m)^\kappa\bigg),\qquad m\to\infty. \nonumber \end{align} It remains to obtain an upper bound on the number of standard monomials of first type, these monomials being a basis of the subalgebra $\LL(\Xi)$, whose growth was estimated in~\cite[Theorem 9.2]{Pe17}. That result gives upper and lower bounds with different constants $C_1$, $C_2$.
Now we prove a stronger asymptotic with bounds $C+o(1)$, the constant being the same on both sides. The reader can trace and modify those computations, or mimic the ideas of the lengthier computations for monomials of second type above using~\eqref{boundsC}, \eqref{llpm}, and obtain a bound with an even smaller constant (because the $z_i$'s do not appear in the tails, there are fewer possibilities); this does not change the upper bound given by the monomials of second type established above. Finally, let us establish the lower bound. We keep the notations of~\eqref{an1an}. Similarly to~\eqref{boundSupper}, \begin{equation}\label{pCp} m\le \wt(v_{n})\le \prod_{i=0}^{n-1}(p^{(i+1)^{1/\kappa-1}}\!\!\!+p{-}1) <\theta\prod_{i=0}^{n-1} p^{(i+1)^{1/\kappa-1}}, \quad \theta:=\prod_{i=0}^\infty (1{+}p^{1-(i+1)^{1/\kappa-1}}). \end{equation} Using~\eqref{pCp} and the upper bound in~\eqref{boundsC}, we get $m\le \theta p^{(\kappa+o(1)) n^{1/\kappa}}$. Hence \begin{equation}\label{ll-k} n\ge \bigg(\frac{\log_p (m/\theta)}{\kappa+o(1)}\bigg)^{\kappa} \ge \Big(\frac {1{+}o(1)}{\kappa\ln p}\ln m\Big)^{\kappa}, \qquad m\to \infty. \end{equation} By~\eqref{boundSupper}, \begin{equation} \label{pCp2} m_0= \wt(v_{n-1})= \prod_{i=0}^{n-2}(p^{S_i}+p-1) <\theta p^{S_0+\cdots+ S_{n-2}}. \end{equation} Consider standard monomials of second type $w=r_{n-2}(x,y,z)g_{n}^{\xi_{n-1},\zeta_{n-1}}$ ~\eqref{rmmp3B} of length $n$. We evaluate the number of their tails $r_{n-2}(x,y,z)$ using~\eqref{pCp2}: \begin{equation}\label{tailsS} p^{S_0+\cdots+S_{n-2}}p^{2(R_0+\cdots+R_{n-2})} > \frac {m_0}{\theta}p^{2(n-1)}. \end{equation} By our construction and Lemma~\ref{Lweight_pivo}, \begin{equation}\label{boundszeta} m_1=\Big[\frac m {m_0}\Big ]\le \frac {\wt(v_n)}{\wt(v_{n-1})}=p^{S_{n-1}}+p-1. \end{equation} Consider standard monomials of second type $w$ whose heads satisfy $\xi_{n-1}\in\{0,\dots,m_1-p\}$, $\zeta_{n-1}=0$. Using~\eqref{boundszeta}, we have $0\le \xi_{n-1}<p^{S_{n-1}}$, so these are indeed standard monomials.
Also, using~\eqref{estimate2}, these monomials are of weight not exceeding $m$: $$\wt(w)\le \wt(v_{n-1})(\xi_{n-1}+\zeta_{n-1}+2)\le \wt(v_{n-1}) m_1=m_0m_1\le m.$$ There are $m_1-p+1$ such heads. Multiplying this number by the number of different tails~\eqref{tailsS} and using estimate~\eqref{ll-k}, we obtain the desired lower bound: \begin{align}\label{lower} &(m_1{-}p{+}1) \frac { m_0 }{\theta} p^{2(n-1)} \ge \frac m{2p^2\theta } p^{2n}\\ &\quad \ge \frac m{2p^2\theta }\cdot p^{2\big (\textstyle \frac {1+o(1)} {\kappa \ln p} \ln m\big)^\kappa } = m\exp \bigg(\frac { 2(\ln p)^{1-\kappa}{+}o(1)}{\kappa^\kappa}(\ln m)^\kappa\bigg),\qquad m\to\infty. \qedhere \nonumber \end{align} \end{proof} \begin{proof}[Proof of Theorem~\ref{Tparam2}] We use the estimates and notations of the previous proof. It is sufficient to prove the first claim. Fix the constant $\lambda:=(\ln p^2)/\kappa \in\R^+$. Now we consider the tuple $\Xi_{q,\kappa}=(S_i,R_i\mid i\ge 0)$, where $R_i:=1$ for all $i\ge 0$, and define the integers $S_i$ by induction: $S_0=1$ and \begin{equation}\label{defS} S_n:=[\exp^{(q)}(\lambda (n+2) )]+1-S_0-\cdots- S_{n-1},\qquad n\ge 1. \end{equation} Let us prove the desired upper bound on the standard monomials of second type. Fix a number $m>1$. Choose $n=n(m)$ such that \begin{equation}\label{an1an2} \wt(v_{n-1})< m\le \wt(v_n). \end{equation} Put $m_0:=\wt(v_{n-1})$ and $m_1:=[m/m_0]$. By~\eqref{boundSlower} and~\eqref{defS} we get \begin{align*} m&> m_0=\wt(v_{n-1})>p^{S_0+\cdots+S_{n-2}} \ge p^{\exp^{(q)}(\lambda n )};\\ n &< \frac 1\lambda \ln^{(q)}\log_p (m);\\ p^{2n}&< \exp\bigg(\frac {\ln p^2}\lambda \ln^{(q)}\log_p (m) \bigg) =\Big (\ln^{(q-1)}\log_p (m)\Big)^{(\ln p^2)/\lambda}\\ &= (\ln^{(q)} m)^{\kappa+o(1)},\qquad m\to\infty. \end{align*} Using estimate~\eqref{upper} on the number of all standard monomials of second type of weight at most $m$, we get the desired upper asymptotic on the number of these monomials.
Similar bounds are valid for monomials of first type. Let us check the lower bound. We use the notations of~\eqref{an1an2}. By estimate~\eqref{boundSupper} and~\eqref{defS}, \begin{align*} m&\le \wt(v_{n}) <\theta p^{S_0+\cdots+S_{n-1}}<\theta p^{\exp^{(q)}(\lambda (n+1) )+1};\\ n&> \frac 1\lambda \ln^{(q)}\Big(\log_p (m/\theta)-1\Big)-1= \frac {1+o(1)}\lambda\ln^{(q+1)}(m), \quad m\to\infty; \\ p^{2n}&> \exp\bigg(\frac {\ln p^2+o(1)}\lambda \ln^{(q+1)}(m) \bigg) = (\ln^{(q)} m)^{\kappa+o(1)},\qquad\ m\to\infty. \end{align*} Finally, using the lower bound on the number of standard monomials of second type~\eqref{lower} and the bound above, we obtain the desired lower bound on the growth of $\TT$. \end{proof}
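The closed-form value of $\lambda$ in Corollary~\ref{Cconstant} is easy to probe numerically. The following Python sketch is purely illustrative and not part of any proof (the function name \texttt{gk\_dim} is ours); it evaluates $\lambda(S,R)=(S+2R)\ln p/\ln(p^S+p^R-1)$ and checks the two limiting regimes used in the proof of Corollary~\ref{Cinterval}: $\lambda(S,S)\to 3$ and $\lambda(S,1)\to 1$.

```python
import math

def gk_dim(p: int, S: int, R: int) -> float:
    """lambda = (S + 2R) ln p / ln(p^S + p^R - 1), as in the corollary."""
    return (S + 2 * R) * math.log(p) / math.log(p**S + p**R - 1)

p = 2
# R = S: the Gelfand-Kirillov dimension approaches 3 from below.
approach_3 = [gk_dim(p, S, S) for S in (1, 5, 20, 100)]
# R = 1: the dimension approaches 1 from above.
approach_1 = [gk_dim(p, S, 1) for S in (2, 5, 20, 100)]
```

All values stay inside the interval $(1,3)$, in accordance with the density statement.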
\section{Introduction} Our objective is to study the centrality of the components of a network with respect to time-delay and the coupling graph in the presence of different sources of uncertainty. Measures of importance and influence in a network are called centrality measures and are exploited to provide a ranking of the most important components of the network \cite{freeman1977set}. Centrality is a well-studied subject in network analysis and graph theory, and several measures have been developed to address this importance in complex networks. The degree centrality can be viewed as one of the simplest and most intuitive indices for centrality; it is defined as the number of connections that a node has to other nodes \cite{freeman1978centrality}. Betweenness centrality is based on shortest paths in a graph, and its applications vary from modeling traffic flows to telecommunication, for ranking both nodes and links \cite{freeman1977set,girvan2002community}. Another popular class of indices for centrality is eigenvector centrality \cite{bonacich1972factoring,bonacich1987power}, which includes PageRank \cite{page1999pagerank}. Other long-established centrality measures include Katz centrality \cite{katz1953new} and closeness centrality \cite{bavelas1950communication,sabidussi1966centrality}. Centrality indices for a noisy dynamical network were studied in \cite{siami2018centrality}. With time-delay being intrinsic to all real-world networks, this work studies the effect of time-delay on the centrality of nodes (agents) and edges (communication links) in a dynamical network. We borrow our notion of centrality from \cite{siami2018centrality}, where the authors define a performance measure and then quantify the influence of each component of the network on the performance as its centrality index. Measures of performance for linear consensus networks have been the subject of extensive study \cite{bamieh2008effect,Bamieh:2012,siami2016fundamental,young2010robustness,Zelazo:2011,moradian2018robustness}.
The authors of \cite{bamieh2008effect} define a performance metric based on deviation from the average, while \cite{siami2016fundamental} provides a thorough study of this measure for first-order consensus networks. In \cite{yaser2016delay}, the authors study the $\HH_2$-based performance of first-order linear networks in the presence of time-delay and show how the interconnection topology can be designed to enhance performance via sparsification, adding new communication links, and feedback gain adjustment. We study a class of time-delay first-order consensus networks in the presence of noise input. Motivated by ideas from \cite{siami2018centrality}, we classify six types of uncertainty structures that appear in most real-world applications, and we derive centrality indices as a function of time-delay, the underlying graph, and the structure of the additive noise. The focus of this paper is on the effect of time-delay on the centrality of individual agents and communication links. We argue that increasing the time-delay may shuffle centrality rankings. {In addition, we address the critical role of connectivity in the presence of time-delay and compare it with the case where time-delay is absent.} This manuscript is an extension of \cite{ghaedsharaf2017eminence} that includes all the missing proofs of its conference version alongside new examples and materials in Sections \ref{sec:ranking} and \ref{sec:discussions} that are published for the first time. \section{Preliminaries and Definitions} The set of non-negative (positive) real numbers is indicated by $\mathbb{R}_{+}$ ($\mathbb{R}_{++}$). An undirected weighted graph $\mathcal{G}$ is denoted by the triple $\mathcal{G=(V,E,}w)$, where $\mathcal{V}=\{v_1,v_2, \dots, v_{\N}\}$ is the set of nodes (vertices) of the graph, $\mathcal{E}$ is the set of links (edges) of the graph, and $w: {\mathcal{E}} \rightarrow \mathbb{R}_{++}$ is the weight function that maps each link to a positive scalar.
We let $L$ be the Laplacian of the graph, defined by $$L=\Delta-A,$$ where $\Delta$ is the diagonal matrix of node degrees and $A$ is the adjacency matrix of the graph. Alternatively, we can write $L = E W E^{\T}$, where $E \in \mathbb{R}^{\N \times \mid {\mathcal{E}}\mid}$ is the signed vertex-edge incidence matrix of the graph defined by $$ [E]_{ie} = \begin{cases} +1 &\mbox{if } i \mbox{ is head of }e\\ -1 &\mbox{if } i \mbox{ is tail of }e\\ 0 &\mbox{otherwise,} \end{cases} $$ and $W \in \mathbb{R^{\mid {\mathcal{E}}\mid \times \mid {\mathcal{E}}\mid}}$ is the diagonal matrix of weights.\\ The $n \times 1$ vectors of all zeros and all ones are denoted by $0_{\N}$ and $\mathbf{1}_{\N}$, respectively, while $J_{\N}=\mathbf{1}_{\N}\mathbf{1}_{\N}^{\T}$ is the $n \times n$ matrix of all ones. The conjugate transpose of a matrix $G$ is denoted by $G^{H}$. Furthermore, the $n \times n$ centering matrix is denoted by $M_{\N}=I_{\N}-\frac{1}{\N}J_{\N}$. For an undirected connected graph with $\N$ nodes, the Laplacian eigenvalues are real and are ordered as $0=\lambda_1\leq\lambda_2 \leq\dots\leq \lambda_{\N}$. We denote the Moore--Penrose pseudo-inverse of a Laplacian matrix $L$ by $L ^{\dagger} = [l_{ij}^{\dagger}]$, which can be utilized to define the effective resistance between nodes $i$ and $j$ via the formula \[r_e(L) = l_{ii}^{\dagger}+l_{jj}^{\dagger}-2 l_{ij}^{\dagger}\] for every given link $e=\{i,j\}$. For $X \in \mathbb{R}^{\N \times \N}$, the matrix-valued functions $\cos (X)$ and $\sin(X)$ are defined as \begin{align*} \cos(X) = \sum_{k=0}^{\infty}\frac{(-1)^k X^{2k}}{(2k)!},~~ \sin(X) = \sum_{k=0}^{\infty}\frac{(-1)^k X^{2k+1}}{(2k+1)!}. \end{align*} \section{Noisy Consensus Networks with Time-Delay} In this paper, we consider a class of linear consensus networks whose dynamics are defined over graphs $\mathcal G = (\mathcal V, \mathcal E, w)$, where each node corresponds to a subsystem with a scalar state variable.
In the study of consensus networks with delay, if a node has a delay in accessing or computing its state, or has a delay in its response, we add the self-delay to the model, i.e., \begin{align}\label{agentUpdate} \dot{x}_i=\sum_{j \neq i} {a_{ij}\big({x_j(t-\tau)-x_i(t-\tau)}\big)}, \end{align} where $a_{ij}$ is the $ij^\text{th}$ component of the adjacency matrix of the coupling graph. Therefore, the network that we study is the following single-delay consensus network with $\N$ nodes and underlying graph Laplacian $L$: \begin{align} \begin{aligned} \label{eq:system} \dot{x}(t)~&= -L\,x(t-\tau)+B\,\xi(t),\\ y(t)~&=~ M_{\N}\,x(t), \end{aligned} \end{align} with $x(t)= 0$ for all $t \in [-\tau, 0)$ and $x(0)=x^0$, where $x^0 = [x^0_1, \ldots, x^0_n]^{\rm T}$ is the initial condition, $x = [x_1, \ldots, x_n]^{\rm T}$ is the state, $y = [y_1, \ldots, y_n]^{\rm T}$ is the output, and $\xi = [\xi_1, \ldots, \xi_{\M}]^{\rm T}$ is the effect of an uncertain environment on agents or links. It is assumed that $\xi(t)$ is a vector of independent Gaussian white noise processes with zero mean. The impact of an uncertain environment on each agent's dynamics is modeled by the exogenous noise input $\xi_i(t)$. Furthermore, we assume that every agent experiences a time-delay in accessing, computing, or sharing its own state information with itself and other neighboring agents. It is assumed that the time-delay for all agents is identical and equal to a nonnegative constant $\tau$. The coupling graph of the consensus network \eqref{eq:system} is a graph ${\mathcal{G}}=({\mathcal{V}},\mathcal E, w)$ with node set ${\mathcal{V}}=\{v_1,v_2,\ldots,v_{\N}\}$, edge set ${\mathcal{E}}=\Big\{ \{i,j\}~\big|~\forall~i,j \in {\mathcal{V}},~l_{ij} \neq 0\Big\}$ and weight function $w(\{i,j\})=-l_{ij}$ for all $e = \{i,j\} \in {\mathcal{E}}$. The Laplacian matrix of graph ${\mathcal{G}}$ is equal to $L=[l_{ij}]$.
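As a small implementation-level sanity check (illustrative only, not taken from the paper), the two equivalent constructions of the Laplacian from the preliminaries, $L=\Delta-A$ and $L=EWE^{\T}$, can be compared on a weighted path with three nodes; the Laplacian also annihilates the all-ones vector, consistent with the consensus dynamics above.

```python
import numpy as np

# Weighted path graph v1 - v2 - v3 with edge weights 2.0 and 0.5.
A = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
Delta = np.diag(A.sum(axis=1))   # diagonal matrix of weighted node degrees
L = Delta - A                    # first construction of the Laplacian

# Signed incidence matrix: edge e1 = {v1, v2}, edge e2 = {v2, v3}.
E = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
W = np.diag([2.0, 0.5])          # diagonal matrix of edge weights
L_alt = E @ W @ E.T              # second construction: L = E W E^T

ones = np.ones(3)
```

Both constructions agree, and $L\mathbf{1}_{\N}=0_{\N}$, so the consensus subspace is invariant.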
\vspace{0.2cm} For a consensus network, average consensus occurs if all agents converge to a common value in $\mathbb{R}$. For network \eqref{eq:system}, it is known \cite{Olfati:2004} that this condition is equivalent to connectivity of the coupling graph together with the following inequality: $$\tau \lambda_{\N} < \frac{\pi}{2}.$$ With the goal of reaching an agreement among the agents of a consensus network, the variance of the agents' states can serve as a measure of their performance. In other words, for a consensus network with underlying graph Laplacian $L$ and in the presence of time-delay $\tau$, the performance can be measured by \begin{align*} \rho_\text{ss}(L;\tau)\hspace{-0.07cm} \coloneqq\hspace{-0.07cm} \lim_{t\to \infty} \E \Big[\big(x(t) - \frac{1}{\N}J_{\N}\,x(t)\big)^{\T}\hspace{-0.07cm} \big(x(t)\hspace{-0.07cm} -\hspace{-0.07cm} \frac{1}{\N}J_{\N}\,x(t)\big)\hspace{-0.07cm}\Big], \end{align*} which can equivalently be written as \begin{align*} \rho_\text{ss}(L;\tau) = \lim_{t\to \infty} \E \big[y(t)^{\T} y(t)\big]. \end{align*} We utilize the notion of centrality introduced in \cite{siami2018centrality}, where the authors study a consensus network in the absence of time-delay, and apply it to a time-delay first-order consensus network. \begin{definition} For network \eqref{eq:system}, let ${\xi_i(t) \sim \mathcal{N}(0,\sigma_i^2)}$ be the noise that is affecting agent $i$ for all $i \in {\mathcal{V}} $. Then, the centrality of agent $i$ is defined by \begin{align}\label{def1} \eta_i \coloneqq \frac{\partial \rho_\text{ss}}{\partial \sigma_i^2}. \end{align} \end{definition} \vspace{0.2cm} Here, $\eta_i$ measures the rate of change of the performance with respect to the variance of the noise affecting agent $i$. In other words, it captures the effect of the disturbances associated with agent $i$ on the performance.
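For intuition about how $\rho_\text{ss}$ behaves, the spectral formula derived in the proof of Theorem~\ref{th:Cent12}, $\rho_\text{ss}(L;\tau)=\sum_{i\ge 2} \frac{b_i}{2\lambda_i}\frac{\cos(\lambda_i\tau)}{1-\sin(\lambda_i\tau)}$, can be evaluated numerically. The Python sketch below is illustrative only; it assumes dynamics noise ($B=I_{\N}$, so every $b_i=1$) and an unweighted path on three nodes, and shows that the measure grows with $\tau$ inside the stability region $\tau\lambda_{\N}<\pi/2$.

```python
import numpy as np

def rho_ss(L: np.ndarray, tau: float) -> float:
    """Spectral form of the performance measure for B = I (all b_i = 1)."""
    lam = np.sort(np.linalg.eigvalsh(L))
    assert tau * lam[-1] < np.pi / 2, "outside the stability region"
    lam = lam[1:]  # drop lambda_1 = 0 (the unobservable average mode)
    return float(np.sum(np.cos(lam * tau) / (2 * lam * (1 - np.sin(lam * tau)))))

# Unweighted path graph on 3 nodes: Laplacian eigenvalues are 0, 1, 3.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
values = [rho_ss(L, t) for t in (0.0, 0.2, 0.4, 0.5)]  # tau * lambda_max < pi/2
```

At $\tau=0$ the formula reduces to $\sum_{i\ge 2} 1/(2\lambda_i)$, and the values increase monotonically toward the stability boundary, consistent with Theorem~\ref{th:Cent12_tau}.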
\vspace{0.2cm} \begin{definition} For network \eqref{eq:system}, let ${\xi_e(t) \sim \mathcal{N}(0,\sigma_e^2)}$ be the noise that is affecting the coupling $e$ for all $e \in {\mathcal{E}} $. Then, the centrality of link $e$ is defined by \begin{align}\label{def2} \nu_e \coloneqq \frac{\partial \rho_\text{ss}}{\partial \sigma_e^2}. \end{align} \end{definition} \vspace{0.2cm} Thus, $\nu_e$ is the rate of change of the performance measure with respect to the variance of the disturbance on link $e$. Therefore, it represents the effect of the noise on link $e$ on the performance. \vspace{0.2cm} \begin{definition} For the consensus network \eqref{eq:system} and fixed time-delay $\tau \geq 0$ with a given structure for the input matrix $B$ and {identity} covariance for the process $\xi(t)$, the {sensitivity coefficient} of the interconnection between nodes $i$ and $j$ is defined by \begin{align}\label{def3} \kappa_{e} \coloneqq \frac{\partial \rho_\text{ss}}{\partial w(e)}. \end{align} \vspace{0.2cm}\\ In other words, $\kappa_{e}$ is the derivative of the performance with respect to a change in the weight of the interconnection. This quantity shows how much the performance measure changes with a slight increase in the weight of the interconnection. \end{definition} \begin{theorem}\label{th:Cent12_tau} Centrality indices $\eta_i$ and $\nu_e$ are increasing with respect to time-delay. \end{theorem} \begin{pf} The proof is straightforward: one shows that the derivative of the centrality indices with respect to time-delay is positive in the stability region. \end{pf} \begin{theorem}\label{th:Cent12} For the consensus network \eqref{eq:system}, the performance of the network can be written as a function of the Laplacian matrix $L$, the uncertainty matrix $B$, and the time-delay parameter $\tau$.
In addition, we have the following identity \begin{align*} \rho_\text{ss}(L,\tau) = \sum_{i \in {\mathcal{V}}} \eta_i\sigma_i^2 \end{align*} with \begin{align*} \eta_i = \frac{1}{2}\big[B^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}B\big]_{ii}. \end{align*} Moreover, we can obtain \begin{align*} \rho_\text{ss}(L,\tau) = \sum_{e \in {\mathcal{E}}} \nu_e\sigma_e^2 \end{align*} with \begin{align*} {\nu_e} = \frac{1}{2}\big[B^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}B\big]_{ee}. \end{align*} \end{theorem} \begin{pf} We utilize the idea in \cite{siami2018centrality} for the proof of this theorem. Let $\xi \in \mathbb{R}^{\M}$ be as in \eqref{eq:system}; we define $\hat{\xi}_i \coloneqq {\xi}_i/\sigma_i$ for all $i\in \{1,\dots , \M \}$. From the definition of $\hat{\xi}_i$, we have \begin{align*} B \xi = \hat{B} \hat{\xi}, \end{align*} where $\hat{B} = B \diag\big([\sigma_1,\dots,\sigma_{\M}]^{\T}\big)$. Consequently, we can write the dynamics of the network \eqref{eq:system} as \begin{align*} \dot{x}(t)~&= -L\,x(t-\tau)+\hat{B}\,\hat{\xi}(t), \end{align*} where, by construction, $\hat{\xi}$ is a vector of independent, identically distributed, unit-variance Gaussian processes. In order to find the performance of the network (\ref{eq:system}), we utilize the frequency-domain definition of the $\HH_2$-norm of the network \cite{Doyle89}, i.e., \begin{align}\label{H2normCalc} \rho_\text{ss}(L;\tau) &= \frac{1}{2\pi}\Tr\Big[\int_{-\infty}^{+\infty}{G^{\CT}(j\omega)G(j\omega) \: d\omega}\Big] \end{align} with transfer matrix \begin{align}\label{eq:Gs} G(s)~=~ M_{\N} \Big(sI_{\N}+e^{-\tau s}L\Big)^{-1}\hat{B}. \end{align} Although $G(s)$ is not exponentially stable, its single marginally stable mode is not observable in the output, which results in a bounded $\HH_2$-norm for the network.
We consider the spectral decomposition of the Laplacian matrix $L$, \begin{align*} L~=~Q \Lambda Q^{\T}, \end{align*} where $Q=[q_1,q_2, \dots , q_{\N}] \in \mathbb{R}^{n\times n}$ is the orthonormal matrix of eigenvectors and $\Lambda=\diag(\lambda_1,\ldots,\lambda_{\N})$ is the diagonal matrix of eigenvalues. We recall that $\lambda_1=0$ for the reason that the graph is undirected and has no self-loops. Therefore, \begin{align} M_{\N}~=&~I_{\N}-Q \diag([1,0, \dots,0]^{\T}) Q^{\T}\nonumber\\\label{eq:Mn} ~=&~Q \diag([0,1, \dots,1]^{\T}) Q^{\T}, \end{align} and \begin{align} L~=~Q \diag([0,\lambda_2, \dots, \lambda_{\N}]^{\T}) Q^{\T}. \label{eigen-decom} \end{align} Also, substituting \eqref{eq:Mn} and \eqref{eigen-decom} into (\ref{eq:Gs}), we obtain \begin{align*} G(s)~=~Q \diag\hspace{-0.07cm}\Big(\big[0,\frac{1}{s+\lambda_2 e^{-\tau s}},\dots,\frac{1}{s+\lambda_{\N} e^{-\tau s}}\big]^{\T}\hspace{-0.07cm}\Big)Q^{\T}\hat{B}\hspace{-0.07cm}. \end{align*} Hence, we have \begin{align}\label{eq:GHG} &\Tr\big[G^{\CT}(j \omega)G(j \omega)\big]\nonumber\\ =&\Tr\Bigg[ \hat{B}\hat{B}^{\T} Q \diag\Big(\Big[0,\frac{1}{\lambda_2 e^{j \tau \omega}-j \omega},\dots,\frac{1}{\lambda_{\N} e^{j \tau \omega}-j \omega }\Big]^{\T}\Big)\nonumber\\ \:\:&\diag\Big(\Big[0,\frac{1}{j \omega+\lambda_2 e^{-j \tau \omega}},\dots,\frac{1}{j \omega+\lambda_{\N} e^{-j \tau \omega}}\Big]^{\T}\Big) Q^{\T}\Bigg] \end{align} and by substituting (\ref{eq:GHG}) in \eqref{H2normCalc}, we obtain \begin{align*} \hspace{-0.15cm}\rho_\text{ss}(L;\tau) = \frac{1}{2\pi} \sum_{i=2}^{\N}{\int_{-\infty}^{+\infty}\hspace{-0.3cm}\frac{b_{i} \hspace{0.05cm} d\omega}{\big(j \omega+\lambda_i e^{-j \tau \omega}\big)\big(\lambda_i e^{j \tau \omega}-j \omega\big)}}, \end{align*} where $b_{i}$ is the $i$-th diagonal element of the matrix $Q^{\T}\hat{B}\hat{B}^{\T} Q$.
Simplifying the integral above, we obtain \begin{equation} \rho_\text{ss}(L;\tau)=\sum_{i=2}^{\N} \frac{b_i}{2\lambda_i}~ \frac{\cos(\lambda_i \tau)}{1-\sin(\lambda_i \tau)}.\label{perf-meas} \end{equation} Now, we can rewrite equality \eqref{perf-meas} in the following compact matrix operator form: \begin{align} \rho_\text{ss}(L;\tau) = \frac{1}{2}\Tr\Big[&\hat{B}\hat{B}^{\T} L^{\dagger} \cos(\tau L)\Big(M_{\N}-\sin(\tau L)\Big)^{\dagger}\Big]\nonumber\\ =\frac{1}{2}\Tr\Big[&\diag([\sigma_1^2,\dots,\sigma_{\M}^2]^{\T}){B}^{\T}\nonumber\\ &L^{\dagger} \cos(\tau L)\Big(M_{\N}-\sin(\tau L)\Big)^{\dagger}{B}\Big].\label{perfFormula} \end{align} From identity \eqref{perfFormula} it is clear that when ${\M} = n$, i.e., ${B \in \mathbb{R}^{\N \times \N}}$, we obtain \begin{align*} \eta_i = \frac{1}{2}\big[B^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}B\big]_{ii}, \end{align*} for all $i \in {{\mathcal{V}}}$. Consequently, it follows that \begin{align*} \rho_\text{ss}(L,\tau) = \sum_{i \in {\mathcal{V}}} \eta_i\sigma_i^2. \end{align*} Correspondingly, when the existing links are affected by noise, i.e., $B \in \mathbb{R}^{\N \times \mid {\mathcal{E}} \mid}$, we have \begin{align*} \nu_e = \frac{1}{2}\big[B^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}B\big]_{ee}, \end{align*} for all $e \in \mathcal{E}$. Thus, in this case the following identity holds: \begin{align*} \rho_\text{ss}(L,\tau) = \sum_{e \in {\mathcal{E}}} \nu_e\sigma_e^2. \end{align*} \end{pf} \section{Agent Associated Disturbances} In this section, we consider four structures of disturbance for the network \eqref{eq:system} that affect the agents, and we then find their centrality indices. The structure of the noise arises from its source. In a consensus network within a noisy environment, uncertainties will appear in the dynamics of the agents.
Each individual agent updates its state by sensing its own state, transmitting its status to its neighbors, and receiving the status of its neighbors. Since the uncertainty in each step of the update affects the network in its own way, different types of uncertainty are modeled using different structures for the input matrix $B$. \subsection{Dynamics Noise} \begin{figure} \centering \includestandalone[width=0.42\textwidth]{dynamicNoise} \caption{Block diagram of consensus network \eqref{eq:system} in presence of dynamics noise.} \label{fig:dynNoise} \end{figure} This type of noise can be considered as environmental noise that impacts the agents directly. Therefore, the dynamics of an agent under this uncertainty can be modeled as \begin{align}\label{agentUpdateDN} \dot{x}_i=\sum_{j \neq i} {a_{ij}\big({x_j(t-\tau)-x_i(t-\tau)}\big)}+\xi_i(t), \end{align} where $\xi_i \sim \mathcal{N}(0,\sigma_i^2)$ for all $i \in {\mathcal{V}}$. The performance of a network with this type of uncertainty structure was previously studied in \cite{siami2016fundamental,yaser2016delay}; the dynamics of the network are modeled by setting $B = I_{\N}$ in \eqref{eq:system}, i.e., \begin{align}\label{eq:systemDN} \begin{aligned} \dot{x}(t)~&= -L\,x(t-\tau)+\xi(t). \end{aligned} \end{align} \noindent Figure \ref{fig:dynNoise} is a representation of the dynamics of the network. \begin{theorem} For consensus network \eqref{eq:systemDN}, the centrality index of node $i$ is \begin{align}\label{etaiDynamic} \eta_i = \frac{1}{2}\big[L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii}, \end{align} for all $i \in {\mathcal{V}}$. \end{theorem} \begin{pf} Since ${B} = I_{\N}$ and the $\xi_i$'s are independent, \eqref{etaiDynamic} follows from Theorem \ref{th:Cent12}.
\end{pf} Taking the derivative of the $\eta_i$'s with respect to the delay parameter $\tau$, we obtain \begin{align*} \frac{\partial \eta_i}{\partial \tau} = \frac{1}{2}\big[\big(M_{\N}-\sin(\tau L)\big)^{\dagger}\big]_{ii}, \end{align*} and since this derivative is not determined by the value of the centrality index at $\tau = 0$, it suggests that the centrality ordering can change as the time-delay increases and the ranking might become inverted. Later, in Example \ref{ex1}, we verify that this is the case and that order inversion can happen. \begin{theorem} For consensus network \eqref{eq:systemDN}, the sensitivity coefficient of the interconnection between nodes $i$ and $j$ is \begin{align*} \kappa_{e} = \frac{1}{2}r_{e}\Big(\big(L^2\big)^{\dagger}\big(\tau L - \cos(\tau L)\big)\big(M_n - \sin(\tau L)\big)^{\dagger}\Big), \end{align*} for all $e = \{i,j\} \in {\mathcal{E}}$. \end{theorem} \begin{pf} From the definition of $\kappa_{e}$, taking the derivative of $\rho_\text{ss}(L;\tau)$ with respect to $w(e)$, we have \begin{align} \kappa_{e} = \frac{1}{2}\Tr\Bigg[\hspace{-0.07cm}\NHS-\hspace{-0.07cm} L^{\dagger}E_{e}E_{e}^{\T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\nonumber\\ -\tau L^{\dagger}E_{e}E_{e}^{\T}\sin(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\nonumber\\ +\tau L^{\dagger}\cos^2(\tau L)E_{e}E_{e}^{\T}\Big(\hspace{-0.07cm}\big(M_n\hspace{-0.07cm} - \sin(\tau L)\big)^2\Big)^{\dagger}\hspace{-0.07cm}\Bigg],\label{centrality3Dynamic} \end{align} where $E_{e}$ is the column corresponding to the link $e$ in the incidence matrix of the graph. Then, since $L$, $\cos(\tau L)$, $\sin(\tau L)$, and $(M_n-\sin(\tau L))$ commute, and the trace is invariant under cyclic permutations, the proof follows by rearranging the matrices in \eqref{centrality3Dynamic}. \end{pf} \subsection{Sensor Noise} This type of noise, as the name suggests, stems from uncertainty in the measurement of each agent's state, which is measured by the agent itself and eventually sent to the other agents.
As mentioned in \cite{siami2018centrality}, in an environment suffering from sensor noise and time-delay $\tau$, the dynamics of each individual agent $i$, for all $i \in {\mathcal{V}}$, can be modeled as follows: \begin{align}\label{agentUpdateSN} \dot{x}_i=\sum_{j \neq i} {a_{ij}\Big({\big(x_j(t-\tau)+\xi_j(t)\big)-\big(x_i(t-\tau)+\xi_i(t)\big)}\Big)}, \end{align} where $\xi_i \sim \mathcal N (0, \sigma_i^2)$ for $i \in {\mathcal{V}}$. The rationale behind this model is that each agent's state is replaced by its noisy measurement wherever it appears in the update law. Consequently, the dynamics of the network can be formulated with the input matrix ${B} = L$ as follows: \begin{align}\label{eq:systemSN} \begin{aligned} \dot{x}(t)~&= -L\,x(t-\tau)+L\xi(t). \end{aligned} \end{align} \begin{theorem} For consensus network \eqref{eq:systemSN}, the centrality index of node $i$ is \begin{align}\label{etaiSensor} \eta_i = \frac{1}{2}\big[L\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii}, \end{align} for all $i \in {\mathcal{V}}$. \end{theorem} \begin{pf} Since ${B} = L$ and the $\xi_i$'s are independent, \eqref{etaiSensor} follows from Theorem \ref{th:Cent12}. \end{pf} \begin{theorem} For consensus network \eqref{eq:systemSN}, the sensitivity coefficient of the interconnection between nodes $i$ and $j$ is \begin{align}\label{kappaiDynamic} \kappa_{e} = \frac{1}{2}r_{e}\Big(\big(\tau L + \cos(\tau L)\big)\big(M_n - \sin(\tau L)\big)^{\dagger}\Big), \end{align} for all $e = \{i,j\} \in {\mathcal{E}}$.
\end{theorem} \begin{pf} From the definition of $\kappa_{e}$, taking the derivative of $\rho_\text{ss}(L;\tau)$ with respect to $w(e)$, we have \begin{align} \kappa_{e} = \frac{1}{2}\Tr\Bigg[E_{e}E_{e}^{\T}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\nonumber\\ -\tau E_{e}E_{e}^{\T}\sin(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\nonumber\\ +\tau \cos^2(\tau L)E_{e}E_{e}^{\T}\Big(\hspace{-0.07cm}\big(M_n\hspace{-0.07cm} - \sin(\tau L)\big)^2\Big)^{\dagger}\Bigg],\label{centrality3Noise} \end{align} where $E_{e}$ is the column corresponding to the link $e$ in the incidence matrix of the graph. Then, since $L$, $\cos(\tau L)$, $\sin(\tau L)$, and $(M_n-\sin(\tau L))$ commute, and the trace is invariant under cyclic permutations, the proof follows by rearranging the matrices in \eqref{centrality3Noise}. \end{pf} \begin{figure} \centering \includestandalone[width=0.42\textwidth]{snsrNoise} \caption{Block diagram of consensus network \eqref{eq:system} in presence of sensor noise.} \label{fig:snsrNoise} \end{figure} \subsection{Receiver Noise} This type of uncertainty emerges when there exists noise at the receiving node. In other words, agent $i$ receives $x_j(t-\tau)+\xi_i(t)$ as the state of agent $j$, whereas in the absence of the disturbance it would have received $x_j(t-\tau)$. Consequently, when such receiver noise exists in the system, the update law for the dynamics is given by \begin{align}\label{agentUpdateRN} \dot{x}_i=\sum_{j \neq i} {a_{ij}\Big({\big(x_j(t-\tau)+\xi_i(t)\big)-x_i(t-\tau)}\Big)}, \end{align} for all agents $i \in {\mathcal{V}}$, where $\xi_i \sim \mathcal{N}(0,\sigma_i^2)$. These dynamics can be cast in the form of the consensus dynamics \eqref{eq:system} with $B = \Delta$, the diagonal matrix of node degrees. The following theorem discusses the centrality of the agents in such a network.
\begin{theorem} For the consensus network with update law \eqref{agentUpdateRN}, the centrality index of agent $i$ is equal to \begin{align}\label{etaiReceiver} \eta_i = \frac{1}{2}\big[\Delta^2 L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii}, \end{align} for all $i \in {\mathcal{V}}$. \end{theorem} \begin{pf} Since the consensus network \eqref{agentUpdateRN} is a special case of \eqref{eq:system} with $B = \Delta$ and the $\xi_i$'s are independent, equation \eqref{etaiReceiver} follows immediately from Theorem \ref{th:Cent12}. \end{pf} \subsection{Emitter Noise} This type of noise is generated by the signal emitter of each agent and therefore causes uncertainty in the signals received by the neighboring agents. Thus, when agent $j$ sends its state $x_j(t-\tau)$ to node $i$, what node $i$ receives is $x_j(t-\tau) +\xi_j(t)$. As a result, the dynamics of each agent $i$ can be modeled by \begin{align}\label{agentUpdateEN} \dot{x}_i=\sum_{j \neq i} {a_{ij}\Big({\big(x_j(t-\tau)+\xi_j(t)\big)-x_i(t-\tau)}\Big)}, \end{align} for all agents $i \in {\mathcal{V}}$, where $\xi_i \sim \mathcal{N}(0,\sigma_i^2)$. These dynamics can be cast in the form of the consensus dynamics \eqref{eq:system}, with the adjacency matrix of the underlying graph as the input matrix $B$. \begin{theorem} For the consensus network with update law \eqref{agentUpdateEN}, the centrality index of agent $i$ is equal to \begin{align*} \eta_i = \frac{1}{2}\big[\big(\Delta^2 L^{\dagger}-\Delta +L\big)\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii}, \end{align*} for all $i \in {\mathcal{V}}$. \end{theorem} \begin{pf} Observe that the adjacency matrix is $A = \Delta - L$ and that the consensus network \eqref{agentUpdateEN} is a special case of \eqref{eq:system} with the adjacency matrix as the input matrix, i.e., $B = A = \Delta - L$.
Also, since the $\xi_i$'s are independent, using Theorem \ref{th:Cent12} we obtain \begin{align*} \eta_i = \frac{1}{2}\big[\big(\Delta - L\big)L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big(\Delta - L\big)\big]_{ii}. \end{align*} Finally, the trace operator is invariant under cyclic permutations, which yields the result. \end{pf} \section{Link Associated Disturbances} In this section we discuss the noises that affect the links between agents and their associated centrality indices. \begin{figure} \centering \includestandalone[width=0.40\textwidth]{comNoise} \caption{Block diagram of the consensus network \eqref{eq:system} in the presence of communication noise.} \label{fig:comNoise} \end{figure} \subsection{Communication Channel Noise} This type of noise may arise because of signal distortion in a communication channel between two agents in the network. We assume that each communication channel suffers from a Gaussian noise $\xi_e \sim \mathcal {N}(0, \sigma_e^2)$ for all $e \in {\mathcal{E}}$. Under this assumption, if agents $i$ and $j$ communicate through the channel $e = \{i,j\}$, agent $i$ receives $x_j(t-\tau) +\xi_e(t)$ instead of $x_j(t-\tau)$, while agent $j$ receives $x_i(t-\tau) -\xi_e(t)$ rather than $x_i(t-\tau)$. In other words, the relative state seen at the head end of the communication channel is modified by $\xi_e(t)$ and that at the tail end by $-\xi_e(t)$. As a result, we obtain an oriented incidence matrix $E$; however, since the graph is undirected and the Gaussian distribution is symmetric, the choice of head and tail ends of a link does not affect the dynamics of the network.
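As a sanity check on the orientation convention just described, the following sketch (the helper name and the small example graph are our own) builds an arbitrarily oriented incidence matrix and verifies that the choice of head and tail ends does not affect the Laplacian $L = E W E^{\T}$:

```python
import numpy as np

# Build an oriented incidence matrix: for each link e = {i, j}, the column
# has +1 at the (arbitrarily chosen) head i and -1 at the tail j.
def incidence_matrix(n, edges):
    E = np.zeros((n, len(edges)))
    for k, (i, j) in enumerate(edges):
        E[i, k], E[j, k] = 1.0, -1.0
    return E

# Weighted 4-cycle: links {0,1}, {1,2}, {2,3}, {3,0} with weights 2, 3, 1, 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
W = np.diag([2.0, 3.0, 1.0, 1.0])
E = incidence_matrix(4, edges)

# The weighted Laplacian factors as L = E W E^T ...
L = E @ W @ E.T
assert np.allclose(np.diag(L), [3.0, 5.0, 4.0, 2.0])

# ... and flipping the head/tail orientation of any link leaves L unchanged.
E_flip = E.copy()
E_flip[:, 0] *= -1.0
assert np.allclose(E_flip @ W @ E_flip.T, L)
```

Any consistent orientation therefore yields the same Laplacian, and hence the same network dynamics, as claimed above.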
With this being the case, each agent uses the following update law \begin{align}\label{agentUpdateCN} \dot{x}_i\,=\sum_{e = \{i,j\} \in {\mathcal{E}}} {a_{ij}\big({x_j(t-\tau)-x_i(t-\tau)+\xi_e(t)}\big)}, \end{align} and since the noise on each link is independent of the others, the dynamics of the network can be formulated in the form of \eqref{eq:system} with $B = E W$. Figure \ref{fig:comNoise} illustrates the structure of a network with this type of uncertainty. \begin{theorem} For the consensus network with update law \eqref{agentUpdateCN}, the centrality index of link $e = \{i,j\}$ is \begin{align*} \nu_e =\frac{1}{2} a_{e}^2r_{e}\Big(L\cos(\tau L)^{-1}\big(M_n - \sin(\tau L)\big)\Big). \end{align*} \end{theorem} \vspace{0.2cm} { \begin{pf} Using the same technique as in the proof of Theorem \ref{th:Cent12}, the matrix $\hat{B} = E W \diag([\sigma_1^2,\dots,\sigma_{\mid {\mathcal{E}} \mid}^2]^{\T})$ and, consequently, the performance measure becomes \begin{align} \rho_\text{ss}(L;\tau) &= \frac{1}{2}\Tr\big[\hat{B}^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\hat{B}\big]\nonumber\\ &=\frac{1}{2}\Tr\big[\diag([\sigma_1^2,\dots,\sigma_{\mid {\mathcal{E}} \mid}^2]^{\T})\, W^2 E^{T} L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger} E\big]\nonumber\\ &=\frac{1}{2}\sum_{e=\{i,j\} \in {\mathcal{E}}}\sigma_e^2\, a_{ij}^2\, r_e\big(L\cos(\tau L)^{-1}\big(M_n - \sin(\tau L)\big)\big).\label{rhoocomm} \end{align} The result follows by taking the derivative of \eqref{rhoocomm} with respect to $\sigma_e^2$ for all $e \in
{\mathcal{E}}$. \end{pf}} \subsection{Measurement Noise} \begin{figure} \centering \includestandalone[width=0.35\textwidth]{measNoise \caption{Block diagram of consensus network \eqref{eq:system} in presence of measurement noise.} \label{fig:measNoise} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.27\textwidth]{graph_ex1.pdf} \caption{Graph of the example \ref{ex1} with 8 nodes and 20 links.} \label{fig_3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.82\linewidth,trim={0.5cm 0 0.5cm 0.5cm}]{fig1-eps-converted-to.pdf} \caption{{ Centrality index as a function of time-delay }(a) Agent centrality with dynamics noise in example \ref{ex1}, (b) Agent centrality with sensor noise in example \ref{ex1}.} \label{fig_1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.82\linewidth,trim={0.5cm 0 0.5cm 0.5cm}]{fig2-eps-converted-to.pdf} \caption{{ Centrality index as a function of time-delay }(a) Agent centrality with receiver noise in example \ref{ex1},(b) Agent centrality with emitter noise in example \ref{ex1}.} \label{fig_2} \end{figure} { This type of noise is used to mimic the effect of measurement noise that occurs in practice (see \cite{siami2018centrality} for details). We can use the two-port representation of linear consensus network as described in \cite{Zelazo:2011,siami2018centrality}, then it follows that \begin{eqnarray}\label{updatem1} \dot{x}(t) & = & u(t), \\ z(t) & = & W E^{\rm T} x(t) + \xi(t),\label{updatem2} \end{eqnarray} where $\xi(t)= \left[~\xi_{e_1},~ \ldots~,~\xi_{e_{\mid {\mathcal{E}} \mid}} ~\right]^{\rm T}$ is the vector of input noise, $\xi_{e}(t) \sim N(0,\sigma_{e}^2)$ for all $e \in \mathcal{E}$, $E$ is the signed vertex-edge incidence matrix of the graph, and the internal feedback control law is given by \begin{equation*} u(t) = - E z(t). 
\end{equation*} By direct calculation, one can verify that the dynamics of the network can be formulated in the form of \eqref{eq:system} with ${B = - E}$. Figure \ref{fig:measNoise} depicts a representation of this linear consensus network with measurement noise. } \begin{theorem} For the consensus network with update law \eqref{updatem1}--\eqref{updatem2}, the centrality index of link $e = \{i,j\}$ is \begin{align*} \nu_e =\frac{1}{2} r_e\Big(L\cos(\tau L)^{-1}\big(M_n - \sin(\tau L)\big)\Big). \end{align*} \end{theorem} \vspace{0.2cm} { \begin{pf} Using the same technique as in the proof of Theorem \ref{th:Cent12}, the matrix $\hat{B} = E \diag([\sigma_1^2,\dots,\sigma_{\mid {\mathcal{E}} \mid}^2]^{\T})$ and thus the performance measure becomes \begin{align} \rho_\text{ss}(L;\tau) &= \frac{1}{2}\Tr\big[\hat{B}^{T}L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\hat{B}\big]\nonumber\\ &=\frac{1}{2}\Tr\big[\diag([\sigma_1^2,\dots,\sigma_{\mid {\mathcal{E}} \mid}^2]^{\T})\, E^{T} L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}E\big]\nonumber\\ &=\frac{1}{2}\sum_{e=\{i,j\} \in {\mathcal{E}}}\sigma_e^2\, r_e\big(L\cos(\tau L)^{-1}\big(M_n - \sin(\tau L)\big)\big).\label{rhoomeas} \end{align} The result follows by taking the derivative of \eqref{rhoomeas} with respect to $\sigma_e^2$ for all $e \in {\mathcal{E}}$. \end{pf}} \section{Order of Precedence and Effect of Connectivity}\label{sec:ranking} One natural way to characterize the effect of noise structures on a dynamic network is to order the agents' or links' centrality indices \cite{siami2018centrality}.
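As an illustration of such an ordering, the sensor-noise index \eqref{etaiSensor} can be evaluated numerically. The sketch below (the helper name and the example graph are our own, and we take $M_n$ to be the centering matrix $I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^{\T}$, an assumption of this sketch) ranks the agents of a small path graph:

```python
import numpy as np
from scipy.linalg import cosm, sinm, pinv

# Sensor-noise centrality eta_i = 1/2 [L cos(tau L)(M_n - sin(tau L))^+]_ii,
# assuming M_n = I - (1/n) 11^T (the centering matrix).
def sensor_noise_centrality(L, tau):
    n = L.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    return 0.5 * np.diag(L @ cosm(tau * L) @ pinv(M - sinm(tau * L)))

# Unweighted path graph on 4 agents; tau is well below pi/(2*lambda_max).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eta = sensor_noise_centrality(L, tau=0.1)

# Order of precedence: agents sorted by decreasing centrality index.
precedence = np.argsort(-eta)

# The graph automorphism swapping the two end agents (and the two inner
# agents) forces equal indices, and the inner agents precede the outer ones.
assert np.isclose(eta[0], eta[3]) and np.isclose(eta[1], eta[2])
assert eta[1] > eta[0]
```

Symmetric agents thus share the same centrality, mirroring the automorphism argument used in Example \ref{ex1} below.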
We use the term \textit{precedence} when referring to the order of node centralities and the term \textit{ranking} when referring to the order of link centralities. \begin{definition} In a network with $n$ agents, we say agent $i$ is higher in the {order of precedence} than agent $j$ if $\eta_i > \eta_j$. Moreover, for links $e_i$ and $e_j$, we say link $e_i$ has a higher rank than link $e_j$ if $\nu_{e_i} > \nu_{e_j}$. \end{definition} The theorem below discusses the effect of uniform scaling of the connectivity across the network. \begin{theorem}\label{thm:uniformScaling} In the absence of time-delay, for all types of uncertainties, the order of precedence of the agents and the ranking of the links are invariant with respect to uniform scaling of the weights of all links. \end{theorem} { \begin{pf} In the absence of time-delay, $\cos(\tau L) = I_{\N}$ and $\sin(\tau L) = 0_{\N\times \N}$. In addition, scaling all the weights by $\alpha>0$ scales $\Delta$ and $L$ by $\alpha$, and scales $L^{\dagger}$ by $1/\alpha$. Thus, scaling the weights by $\alpha$ scales the agent centrality indices with dynamics noise and the link centrality measures by $1/\alpha$. Similarly, the agent centrality indices with receiver noise, sensor noise, and emitter noise are scaled by $\alpha$. \end{pf}} According to Theorem \ref{thm:uniformScaling}, in a synchronous consensus network, precedence and ranking remain invariant under uniform scaling of the coupling weights. This is no longer the case, however, when time-delay is present. Later, in Example \ref{ex:uniform_connectivity}, we see that in the presence of time-delay, uniform scaling of the weights can indeed change the ranking among the agents. The interplay between network connectivity, time-delay and the ordering of nodes/links in a consensus network is quite intricate. In the next result we establish an asymptotic relation between the order of precedence and the ranking of the links in the presence and absence of time-delay.
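Before turning to that result, the invariance stated in Theorem \ref{thm:uniformScaling} is easy to verify numerically. In the sketch below (the helper name and the example graph are our own) we use the delay-free dynamics-noise index, which reduces to $\eta_i = \frac{1}{2}\big[L^{\dagger}\big]_{ii}$ once $\cos(\tau L)=I_{\N}$ and $\sin(\tau L)=0_{\N\times\N}$:

```python
import numpy as np

# Delay-free dynamics-noise centrality: eta_i = 1/2 [L^+]_ii.
def delay_free_centrality(L):
    return 0.5 * np.diag(np.linalg.pinv(L))

# Weighted 4-cycle with distinct node centralities (no graph automorphisms).
A = np.array([[0, 2, 0, 1],
              [2, 0, 3, 0],
              [0, 3, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

eta = delay_free_centrality(L)
eta_scaled = delay_free_centrality(5.0 * L)

# Uniform scaling by alpha = 5 scales every index by 1/alpha ...
assert np.allclose(eta_scaled, eta / 5.0)
# ... so the order of precedence is unchanged.
assert (np.argsort(-eta) == np.argsort(-eta_scaled)).all()
```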
\begin{theorem}\label{thm:uniformScaling2} For a network with Laplacian matrix $L$ and a specific type of uncertainty, assume, without loss of generality, that the agents are labeled based on their order of precedence in the absence of time-delay, i.e., agent $i$ precedes agent $j$ in rank if and only if $i < j$. Similarly, assume that the links are labeled based on their rank in the absence of time-delay. Then, in the presence of time-delay $\tau$, there exists a positive scalar $\alpha$ such that in the network with Laplacian matrix $\alpha L$, agent $i$ achieves rank $i$ for all $i \in \{1,2,\dots,\N\}$ and link $e$ achieves rank $e$ for all $e \in \{1,2,\dots,\mid{\mathcal{E}}\mid\}$. In other words, in the presence of time-delay, scaling all links by a small enough $\alpha$ yields the same ranking as in the delay-free case with the same type of uncertainty. \end{theorem} \begin{pf} If we let $\alpha$ converge to zero, then $\cos(\tau \alpha L)$ converges to $I_{\N}$, $\sin(\tau \alpha L)$ approaches $0_{\N \times \N}$, and thus $\big(M_{\N}-\sin(\tau \alpha L)\big)^{\dagger}$ approaches $M_{\N}^{\dagger} = M_{\N}$. Hence, the centrality index converges to that of a delay-free network with Laplacian matrix $\alpha L$. The rest of the proof follows by applying Theorem \ref{thm:uniformScaling}, since the centrality ranking of the nodes in the absence of time-delay is invariant with respect to scaling. \end{pf} Theorem \ref{thm:uniformScaling2} relies on the continuous dependence of the centrality indices on the coupling weights. It shows that for a given time-delay $\tau>0$ there is a small enough uniform scaling parameter that matches the effect of noise on the network when time-delay is not present. Such a scaling parameter $\alpha$ is in fact independent of the network structure $L$.
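A minimal numerical sketch of Theorem \ref{thm:uniformScaling2} follows, using the dynamics-noise index $\eta_i = \frac{1}{2}\big[L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii}$ that appears later in \eqref{prob:opt1}; the helper name and the example graph are our own, and we take $M_n = I_{\N}-\frac{1}{\N}\mathbf{1}\mathbf{1}^{\T}$, an assumption of this sketch:

```python
import numpy as np
from scipy.linalg import cosm, sinm, pinv

# Dynamics-noise centrality eta_i = 1/2 [L^+ cos(tau L)(M_n - sin(tau L))^+]_ii,
# assuming M_n = I - (1/n) 11^T (the centering matrix).
def dyn_noise_centrality(L, tau):
    n = L.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    return 0.5 * np.diag(pinv(L) @ cosm(tau * L) @ pinv(M - sinm(tau * L)))

# Weighted 4-cycle with distinct node centralities.
A = np.array([[0, 2, 0, 1],
              [2, 0, 3, 0],
              [0, 3, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

ranking_delay_free = np.argsort(-dyn_noise_centrality(L, tau=0.0))

# For a small enough uniform scaling alpha, cos(tau*alpha*L) -> I and
# sin(tau*alpha*L) -> 0, so the delayed network alpha*L reproduces the
# delay-free order of precedence.
ranking_scaled = np.argsort(-dyn_noise_centrality(0.01 * L, tau=0.2))
assert (ranking_scaled == ranking_delay_free).all()
```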
\begin{figure}[t] \centering \includegraphics[width=0.29\textwidth]{graph_ex2.pdf} \caption{\footnotesize{Erd\H{o}s-R\'enyi graph with $ n = 10$ nodes and link probability $p = 0.3$.}} \label{Fig:exampleGraph_2} \end{figure} \section{Numerical Examples} \begin{example}\label{ex1} In this example, we consider a randomly generated network with $8$ agents and $20$ unweighted links, illustrated in Figure \ref{fig_3}. We study the centrality indices of the agents in the presence of four different uncertainty structures for different amounts of time-delay. We note that, as expected, all indices increase with the time-delay. In addition, we observe that since agents 1 and 2 share the same set of neighbors, i.e., there is an automorphism that maps them to one another, all their centrality indices are equal. Similarly, all indices of agents 4 and 5 are equal. In this example, the agent labeling is based on the value of the centrality index in a consensus network with dynamics noise in the absence of time-delay; more specifically, agents with a greater centrality index in the presence of dynamics noise receive a greater label. Interestingly, in Figures \ref{fig_1} and \ref{fig_2} we observe that the centrality rankings are not invariant with respect to time-delay. Another noteworthy observation is that, although the centrality rankings for different noise structures do not match in the absence of time-delay \cite{siami2018centrality}, they become very similar to each other as the time-delay increases. Our intuition behind this phenomenon is that as the time-delay increases, the eigenvectors associated with the larger eigenvalues of the Laplacian matrix play a major role, especially as $\tau \to \frac{\pi}{2\lambda_n}$.
\end{example} \begin{example}\label{ex:uniform_connectivity} \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{barnodelay_normal-eps-converted-to.pdf} \caption{Normalized agent centrality index with dynamics noise in the absence of time-delay; it is invariant with respect to uniform scaling of the weights.} \label{Fig:exam2_nodelay_node} \end{figure} \vspace{2cm} \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{barwdelay_normal-eps-converted-to.pdf} \caption{Normalized agent centrality index with additive noise in the presence of time-delay.} \label{Fig:exam2_wdelay_node} \end{figure} In order to study the effect of time-delay and connectivity on the centrality ranking of agents and links, consider the Erd\H{o}s-R\'enyi graph on $n = 10$ agents, where each link is included with probability $p = 0.3$, depicted in Figure \ref{Fig:exampleGraph_2}. First, we study the centrality of the agents in the absence of time-delay. We consider three different uniform weights of $5$, $7$, and $9$ across the network. From Theorem \ref{thm:uniformScaling}, we expect the ranking of the agents to be unaffected by this change in connectivity. Indeed, measuring the agent centrality in the presence of dynamics noise, the normalized (i.e., divided by the sum of all indices) centrality values in Figure \ref{Fig:exam2_nodelay_node} are identical for all three weights. This means that the ranking is not changed by increasing the connectivity. On the other hand, for time-delay $\tau = 0.268$, increasing the connectivity from $5$ to $7$ and $9$ changes the ranking of the agents. For example, in the presence of time-delay, agents $6$ and $1$ have the highest and lowest orders of precedence, respectively, when the uniform weight of the couplings is $5$.
On the contrary, when the uniform weights are increased to $9$, agent $1$, which was the lowest agent in the ranking, achieves the highest ranking among the agents, and agent $6$, which had the highest rank, is demoted to third place. Furthermore, the relative changes of the centrality indices and the rankings of the other agents can be observed in Figure \ref{Fig:exam2_wdelay_node}. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{barnodelay_edge_normal-eps-converted-to.pdf} \caption{Normalized link centrality index with dynamics noise in the absence of time-delay; it is invariant with respect to uniform scaling of the weights.} \label{Fig:exam2_nodelay_link} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{barwdelay_edge_normal-eps-converted-to.pdf} \caption{Normalized link centrality index with additive noise in the presence of time-delay.} \label{Fig:exam2_wdelay_link} \end{figure} \end{example} \begin{example}\label{ex:facebook} In this example we consider a dataset of Facebook users \cite{snapnets} and analyze the effect of time-delay on the link centrality indices and rankings. To do so, we find the top 10 ranked links in the network both in the presence and in the absence of time-delay. Our observation is that, by adding time-delay, the links close to the agent with the highest degree become the highest-ranked links. \begin{figure*}[!htb] \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=1.1\linewidth ,trim={3.205cm 1.35cm 0 0},clip ]{delay_edge_cent-eps-converted-to.pdf} \caption{The graph of Example \ref{ex:facebook}; the 10 red links are those with the highest centrality index in the presence of time-delay, and they are in the vicinity of the agent with the largest degree.
} \label{Fig:exam_facebook_delayed} \end{minipage} \hfill \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=1.1\linewidth ,trim={3.356cm 1.25cm 0 0},clip ] {delay_free_edge_cent-eps-converted-to.pdf} \caption{The graph of Example \ref{ex:facebook}; the 10 red links are those with the highest centrality index in the absence of time-delay, and they are spread far from the agents with the largest degree. } \label{Fig:exam_facebook_delayfree} \end{minipage} \end{figure*} \end{example} {\section{Discussions}\label{sec:discussions} { \subsection{Computational Complexity} The first step in computing the centrality index of an agent or a link is finding $L^{\dagger} = (L+\frac{1}{\N}J_{\N})^{-1}-\frac{1}{\N}J_{\N}$, which requires $\mathcal{O}({\N}^3)$ arithmetic operations. Then we need to find $\cos(\tau L)$ and $\sin(\tau L)$ using a Pad\'e approximation; the computational complexity of this step is $\mathcal{O}({\N}^3)$ as well \cite{hargreaves2005efficient}. Quite similarly, the computational complexity of finding $\big(M_n - \sin(\tau L)\big)^{\dagger}$ is $\mathcal{O}({\N}^3)$. Then we need up to four multiplications of dense matrices, which require $\mathcal{O}({\N}^3)$ operations. Summing up all the required operations results in an overall computational complexity of $\mathcal{O}(\N^3)$. \subsection{Networks with higher-order dynamics} The results of this manuscript can be extended to study the influence of agents in networks with higher-order dynamics or a multi-layered structure \cite{alemzadeh2018influence}. We discuss the extension of the results to a class of networks with second-order dynamics, which is a widely used model for studying platoons of cars \cite{ren2008consensus,yu2010some,bamieh2012coherence}. Even though we discuss the centrality of the agents in the presence of dynamics noise, the centrality of the agents and links can be derived for other types of uncertainty as well.
For this class of networks, the dynamics of agent $i$ can be written as \begin{align} \dot{x}_i(t) &= v_i,\\ \dot{v}_i (t)&= \sum_{j \neq i}{a_{ij}\big({x_j(t-\tau)-x_i(t-\tau)}\big)} \nonumber\\ &+b\sum_{j \neq i} {a_{ij}\big({v_j(t-\tau)-v_i(t-\tau)}\big)} + \xi_i(t),\label{agents_with_input} \end{align} where $x_i$ is the position and $v_i$ is the velocity of agent $i$. Thus, the network that we address in this section is the following second-order consensus network with $\N$ agents, uniform time-delay $\tau$ and underlying graph Laplacian $L$: \begin{align} \begin{aligned} \label{eq:system_SOC} \dot{x}(t)~&= v(t),\\ \dot{v}(t)~&= -Lx(t-\tau)-bL v(t-\tau) + \xi(t),\\ y(t)~&=~ M_{\N}x(t), \end{aligned} \end{align} where $x = [x_1, \ldots, x_{\N}]^{\T}$ and $v = [v_1, \ldots, v_{\N}]^{\T}$ are the states and $y = [y_1, \ldots, y_{\N}]^{\T}$ is the output. \begin{theorem} For the consensus network \eqref{eq:system_SOC}, the centrality index of agent $i$ is equal to \begin{align}\label{platoon_cent} \eta_i = \sum_{j=1}^{\N}Q_{ij}^2f(\lambda_j,\tau,b), \end{align} where $Q_{ij}$ is the $i^{\mathrm {th}}$ element of the normalized eigenvector corresponding to $\lambda_j$, $$f(\lambda_i,\tau,b)= \int_{-\infty}^{+\infty}\frac{1}{2\pi}\frac{ d\omega}{h(\lambda_i,\tau,b,\omega)},$$ and \begin{align*} h(\lambda_i,\tau,b,\omega) = \big(\lambda _{i}-\omega ^2\cos(\omega \tau)\big)^2+\omega^2\big(b\lambda_i-\omega\sin(\omega\tau)\big)^2. \end{align*} \end{theorem} \begin{corollary} In the absence of time-delay, in the second-order network \eqref{eq:system_SOC} with dynamics noise, the centrality of the agents simplifies to \begin{align*} \eta_i = \frac{1}{2b}\Big[\big(L^2\big)^{\dagger}\Big]_{ii}.
\end{align*} \end{corollary} \subsection{Designing robust networks} As discussed in Theorem \ref{th:Cent12}, the centrality indices studied in this manuscript are directly related to the network's $\HH_2$-norm performance measure; in fact, the performance of the network is a linear combination of the centrality indices. Thus, improving the indices can improve the performance of the network as well. However, adjusting the centrality index of one agent might adversely affect the index of another. Hence, designing a network to improve the centrality indices of more than one agent (or link) is inherently a multi-objective optimization problem. We discuss the design problem in a network with dynamics noise, but generalization to other types of uncertainty is possible. Scalarization \cite{marler2004survey} is a well-known class of approaches for solving multi-objective problems. For example, we may consider a weighted sum of the centrality indices; from Theorem \ref{th:Cent12}, minimizing this weighted sum is equivalent to optimizing the performance of the network. Another viable approach for scalarizing the multi-objective problem is to consider the $L_{\infty}$-norm of the vector of centrality indices. Since the centralities of the agents are positively correlated with the variability of each agent's state with respect to the average state of all agents, minimizing the $L_{\infty}$-norm of the vector of agent centrality indices improves the worst-case centrality in the network. If the weights of the existing links in the network are decision variables for improving the robustness of the network, then we can write the optimization problem in the following form: \begin{flalign}\label{prob:opt1} \begin{aligned} & \underset{w(e),\forall e\in {\mathcal{E}}}{\text{minimize}} & & \underset{i}{\text{maximize }} \eta_i \\ & \text{subject to:} & & L = \sum_{e \in {\mathcal{E}}} w(e)\, E_{e} E_{e}^{\T},\\ & & & \eta_i\! =\!
\frac{1}{2}\big[L^{\dagger}\cos(\tau L)\big(M_n - \sin(\tau L)\big)^{\dagger}\big]_{ii},\; \forall i\in {\mathcal{V}}. \end{aligned} \end{flalign} In the case that the disturbance on the system comes from an adversarial source with \textit{limited power}, the attacker aims at deteriorating the performance, while the goal is to design a network that is robust against the worst type of attack. Thus, we need to solve the optimization problem \begin{flalign}\label{prob:opt2} \begin{aligned} & \underset{w(e),\forall e\in {\mathcal{E}}}{\text{minimize}} & & \underset{\sigma_i ,\forall i\in {\mathcal{V}}}{\text{maximize }} \rho_\text{ss}(L;\tau) \\ & \text{subject to:} & & L = \sum_{e \in {\mathcal{E}}} w(e)\, E_{e} E_{e}^{\T},\\ & & & \sum_{i=1}^{\N} \sigma_i^2 = \N. \end{aligned}&& \end{flalign} \begin{theorem} Optimization problem \eqref{prob:opt2} is equivalent to \eqref{prob:opt1}. In other words, decreasing the worst-case centrality index yields the network with the best robustness against an adversarial attack. \end{theorem} \subsection{Centrality rankings in special graph structures}\label{structure_centrality} \begin{proposition} In a network with a vertex-transitive coupling graph, e.g., a complete graph or a ring graph with uniform weights, the agent centrality index is $\eta_i = \bar{\rho}/n$, where $\bar\rho$ is the performance of the network with $\sigma_1 = \sigma_2 = \dots = \sigma_n = 1$. Similarly, in a network with an edge-transitive coupling graph, e.g., a complete (bipartite) graph or a ring graph, the link centrality index is $\nu_e = \dbar{\rho}/\mid {\mathcal{E}}\mid$, where $\dbar\rho$ is the performance of the network with $\sigma_1 = \sigma_2 = \dots = \sigma_{\mid{\mathcal{E}}\mid} = 1$. \end{proposition} \begin{proposition} In a network with a tree coupling graph with uniform weight $\bar{w}$, in the absence of time-delay, $\nu_e = 1/\bar{w}$ for all links.
\end{proposition} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{graph_ex_path.pdf} \caption{A path graph with labeled agents and links, discussed in Subsection \ref{structure_centrality}.} \label{fig_path} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{cent_edge_path-eps-converted-to.pdf} \caption{\footnotesize{Normalized link centrality in a path graph. Link labels are provided in Fig.~\ref{fig_path}.}} \label{Fig:cent_edge_path} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{cent_node_path-eps-converted-to.pdf} \caption{\footnotesize{Normalized agent centrality in a path graph with dynamics noise.}} \label{Fig:cent_node_path} \end{figure} Figure \ref{Fig:cent_edge_path} depicts the link centrality indices for a path graph with 8 agents in the presence of time-delay. We can see that initially ($\tau = 0$) all the links have equal centrality indices; however, as the time-delay increases, the centrality indices of the inner links increase at a higher rate than those of the outer links. In Figure \ref{Fig:cent_node_path}, the centrality indices of the agents in a path graph with dynamics noise are depicted as a function of the time-delay. We can see that in the absence of time-delay the outer agents have higher centrality indices; however, as the time-delay increases, the inner agents gain higher centrality indices. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{cent_node_star-eps-converted-to.pdf} \caption{\footnotesize{Normalized agent centrality in a star graph with dynamics noise.}} \label{Fig:cent_node_star} \end{figure} } \section{Conclusion} Interpretations of centrality and sensitivity measures, with respect to the squared $\mathcal H_2$-norm, are proposed for networks with consensus dynamics subject to time-delays and structured additive noise inputs.
In such networks, the centrality/sensitivity of each agent/communication link depends on the coupling graph of the network, the time-delay and the structure of the noise input. We consider several uncertainty structures that have real-world interpretations. It is shown that the centrality and sensitivity ranks of agents or links may vary substantially across the various noise structures and time-delays. } \bibliographystyle{plain} {
\section{Introduction} \noindent Let $(S,g)$ be a compact Riemann surface and consider the problem \begin{equation}\label{mfewt} -\lab u=\lambda_1\left({V_1(x)e^{u}\over\int_{S}V_1 e^{u}dv_g}- \frac{1}{|S|}\right)-\lambda_2\tau \left({V_2(x)e^{-\tau u}\over\int_{S}V_2 e^{-\tau u}dv_g}- \frac{1}{|S|}\right) \end{equation} where $\lambda_1,\lambda_2\ge0$, $\tau>0$, $V_1$ and $V_2$ are smooth nonnegative potentials on $S$ and $|S|$ is the area of $S$. Here, $\lab$ is the Laplace-Beltrami operator and $dv_g$ is the area element of $(S,g)$. This equation has attracted a lot of attention in recent years due to its relevance in the statistical mechanics description of 2D-turbulence, as initiated by Onsager \cite{o}. Precisely, in this context, under a \emph{deterministic} assumption on the distribution of the vortex circulations, Sawada and Suzuki \cite{ss} derived the following equation: \begin{equation}\label{p12} \begin{array}{ll} \ds -\Delta_g u=\lambda \int\limits_{[-1,1]}\alpha \bigg({e^{\alpha u}\over \int_S e^{\alpha u}dv_g} - {1\over |S|}\bigg) d \mathcal P(\alpha)& \hbox{in}\ S \end{array} \end{equation} where $u$ is the stream function of a turbulent Euler flow, $\lambda>0$ is a physical constant related to the inverse temperature and $\mathcal P$ is a Borel probability measure on $[-1,1]$ describing the distribution of the point-vortex intensities. \medskip \noindent Equation \eqref{p12} includes several well-known problems, depending on the choice of $\mathcal P$. For instance, if $\mathcal P=\delta_1$ is concentrated at $1$, then \eqref{p12} is related to the classical mean field equation \begin{equation}\label{mfe} -\Delta_g u=\lambda\left(\frac{Ve^{u}}{\int_S Ve^{u}\,dv_g} - \frac{1}{|S|}\right)\quad\text{in}\quad S, \end{equation} where $V$ is a smooth nonnegative function on $S$.
The latter equation has been studied in several contexts, such as conformal geometry \cite{ChY,ChGY,KW}, statistical mechanics \cite{CLMP1,CLMP2,ChKi,K} and the relativistic Chern-Simons-Higgs model when $S$ is a flat two-torus \cite{NT,T,Tbook}. Notice that solutions of \eqref{mfe} are critical points of the functional \begin{equation*} J_\lambda(u)={1\over 2}\int_S |\nabla u|^2_g\,dv_g-\lambda \log\left(\int_S V e^{u}\, dv_g \right),\qquad u\in \bar H, \end{equation*} where $\bar H=\{u \in H^1(S): \int_S u\, dv_g=0\}$. Minimizers of $J_\lambda$ for $\lambda<8\pi$ can be found by using the Moser-Trudinger inequality. The situation in the supercritical regime $\lambda\ge 8\pi$ becomes subtler, and the existence of solutions may depend on the topology and the geometry of the surface $S$ (or of the domain). A degree argument was developed in \cite{CL0,CL} by Chen and Lin, completing a program initiated by Li \cite{Li}, and it received a variational counterpart in \cite{Dja,Mal} by means of improved forms of the Moser-Trudinger inequality. \medskip \noindent Equation \eqref{mfewt} is also related to \eqref{p12} when $\mathcal P=\sigma \delta_{1}+(1-\sigma)\delta_{-\tau }$ with $\tau \in[-1,1]$ and $\sigma\in[0,1]$. Furthermore, \eqref{mfewt} is the Euler-Lagrange equation of the functional \begin{equation}\label{energy} J_{\lambda_1,\lambda_2}(u)={1\over2}\int_S|\nabla u|_g^2\, dv_g - \lambda_1\log\left(\int_S V_1 e^{u} dv_g\right)- \lambda_2\log\left(\int_S V_2 e^{-\tau u} dv_g\right),\:\: u\in \bar H. \end{equation} If $\tau =1$ and $V_1=V_2\equiv 1$, problem \eqref{mfewt} reduces to the mean field equation of the equilibrium turbulence, see \cite{bjmr,J0,JWY,OhSu,R}, or its related sinh-Poisson version, see \cite{BaPi,BaPiWe,GP,JWY2,JWYZ}, which have received considerable interest in recent years.
Precisely, in \cite{OhSu} a Trudinger-Moser type inequality was proved: if $\lambda_1,\lambda_2\in[0,8\pi)$, which can be called the subcritical case, then solutions to \eqref{mfewt} are the minimizers of $J_{\lambda_1,\lambda_2}$, since this functional is coercive; if instead $\lambda_1,\lambda_2\in[0,8\pi]$ and either $\lambda_1=8\pi$ or $\lambda_2=8\pi$, then the functional $J_{\lambda_1,\lambda_2}$ is still bounded from below but it is no longer coercive. A minimization technique is no longer possible if $\lambda_i> 8\pi$ for some $i=1,2$, since $J_{\lambda_1,\lambda_2}$ becomes unbounded from below. In general, one needs to apply variational methods to obtain the existence of critical points (generally of saddle type) of $J_{\lambda_1,\lambda_2}$. Several results in the supercritical case can be found in \cite{R,Zhou0,Zhou}. A quantization property was derived in \cite{JWY2}: for a blow-up sequence $\{u_n\}_n$ of solutions to \eqref{mfewt} with $\tau=1$, one has \begin{equation}\label{m12} m_k(p) = \lim_{r\to0}\lim_{n\to+\infty}\frac{\lambda_{k,n}\int_{B_r(p)}V_k e^{(-1)^{k-1} u_n}\,dv_g}{\int_S V_k e^{(-1)^{k-1} u_n}\,dv_g}\in 8\pi \N,\quad k=1,2, \end{equation} extending the corresponding results for \eqref{mfe} in \cite{LSh} and for \eqref{mfewt} with $\tau=1$ and $V_1= V_2\equiv 1$ in \cite{JWYZ}. \medskip \noindent Concerning the version of problem \eqref{mfewt} on bounded domains, Pistoia and Ricciardi built in \cite{pr1} sequences of blowing-up solutions when $\tau >0$ and $\lambda_1,\lambda_2\tau^2$ are close to $8\pi$, while in \cite{pr2} the same authors built an arbitrarily large number of sign-changing blowing-up solutions when $\tau >0$ and $\lambda_1,\lambda_2\tau^2$ are close to suitable (not necessarily integer) multiples of $8\pi$. Ricciardi and Takahashi in \cite{rt} provided a complete blow-up picture for solution sequences of \eqref{mfewt}, and subsequently in \cite{rtzz} Ricciardi et al.
constructed min-max solutions when $\lambda_1 \to 8\pi^+$ and $\lambda_2 \to 0$ on a multiply connected domain (in this case the nonlinearity $e^{-\tau u}$ may be treated as a lower-order term with respect to the main term $e^u$). \medskip \noindent On a compact Riemann surface $S$, a blow-up analysis in the subcritical case $\lambda_1<8\pi$, $\lambda_2<\frac{8\pi}{\tau^2}$ and in the supercritical case $\lambda_1<16\pi$, $\lambda_2<\frac{16\pi}{\tau^2}$, characterizing the blow-up masses $m_k(p)$, $k=1,2$, defined as in \eqref{m12}, has been carried out in \cite{j2} when $0<\tau<1$; furthermore, some existence results are deduced there. The authors in \cite{rz} obtained the minimal blow-up masses and proved an existence result which generalizes the one obtained in \cite{R} for $\tau=1$. \medskip \noindent To the best of our knowledge, there are by now just a few results concerning the existence of bubbling solutions to \eqref{mfewt} and its variants in different frameworks. For instance, bubbling solutions have been constructed for a sinh-Poisson equation ($\tau=1$) on bounded domains in \cite{BaPi,BaPiWe} with Dirichlet boundary condition and recently in \cite{FIT} with Robin boundary condition. Furthermore, recently in \cite{EFP} and \cite{F2}, the authors have constructed blowing-up solutions on pierced domains with Dirichlet boundary condition for any $\tau>0$. See also \cite{pr1,pr2} for generalizations to $\tau>0$ of results obtained in \cite{BaPi,GP} for $\tau=1$, respectively. The construction of sign-changing bubble tower solutions for sinh-Poisson type equations on pierced domains has been addressed in \cite{F3}.
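\medskip \noindent Before describing our construction, let us record, for the reader's convenience, how \eqref{mfewt} arises from the functional \eqref{energy}; this is a direct computation. For $u,\phi\in\bar H$ one has \begin{equation*} J_{\lambda_1,\lambda_2}'(u)[\phi]=\int_S \langle\nabla u,\nabla \phi\rangle_g\, dv_g-\lambda_1\,\frac{\int_S V_1 e^{u}\phi\, dv_g}{\int_S V_1 e^{u}\, dv_g}+\lambda_2\tau\,\frac{\int_S V_2 e^{-\tau u}\phi\, dv_g}{\int_S V_2 e^{-\tau u}\, dv_g}, \end{equation*} and, since the test functions $\phi$ have zero average, this vanishes for all $\phi\in\bar H$ exactly when $u$ solves \begin{equation*} -\Delta_g u=\lambda_1\left(\frac{V_1 e^{u}}{\int_S V_1 e^{u}\, dv_g}-\frac{1}{|S|}\right)-\lambda_2\tau\left(\frac{V_2 e^{-\tau u}}{\int_S V_2 e^{-\tau u}\, dv_g}-\frac{1}{|S|}\right)\qquad\text{in } S, \end{equation*} that is, \eqref{mfewt}.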
\medskip \noindent By following some ideas presented in \cite{BaPi,EF}, we are interested in constructing bubbling solutions $u_{\lambda_1,\lambda_2}$ to \eqref{mfewt} with $m_1$ positive bubbles and $m_2$ negative bubbles, suitably centered at $m=m_1+m_2$ different points of $S$, as both $\lambda_1\to 8\pi m_1$ and $\lambda_2\tau^2 \to8\pi m_2$, with $m_1\in\{0,\dots,m\}$. To this aim, we introduce the Green function $G(x,p)$ with pole at $p \in S$ as the solution of \begin{equation} \label{green} \left\{ \begin{array}{ll} -\Delta_g G(\cdot,p)= \delta_{p}-\frac{1}{|S|} &\text{in $S$}\\ \int_S G(x,p)dv_g=0 & \end{array} \right. \end{equation} where $\delta_p$ denotes the Dirac mass at $p\in S$. Define for $\xi=(\xi_1,\dots,\xi_m)\in \tilde S^{m}\setminus\Delta$ the functional \begin{equation}\label{fim} \begin{split} \varphi_m^* (\xi) = &\ \frac{1}{4\pi}\sum_{j=1}^{m_1} \log V_1(\xi_j ) + \frac{1}{4\pi \tau^2}\sum_{j=m_1+1}^{m} \log V_2(\xi_j ) + \sum_{j=1}^{m_1} H(\xi_j,\xi_j) + {1\over\tau^2} \sum_{j=m_1+1}^{m} H(\xi_j,\xi_j)\\ & +\sum_{j=1}^{m_1}\sum_{i=1\atop i\not= j}^{m_1} G(\xi_i,\xi_j) - {2\over\tau} \sum_{j=1}^{m_1}\sum_{i=m_1+1}^{m} G(\xi_i,\xi_j) + {1\over\tau^2} \sum_{j=m_1+1}^{m}\sum_{i=m_1+1\atop i\not= j}^{m} G(\xi_i,\xi_j), \end{split} \end{equation} where $H(x,\xi)$ is the regular part of $G(x,\xi)$, $\tilde S=\{V_1,V_2>0\}$ and $\Delta=\{\xi \in S^m:\,\xi_i=\xi_j \hbox{ for some }i\not=j\}$ is the diagonal set in $S^m$, with $m=m_1+m_2$.
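\medskip \noindent To fix ideas, in the simplest mixed case $m_1=m_2=1$ (so that $m=2$) the functional \eqref{fim} reduces, by direct specialization, to \begin{equation*} \varphi_2^*(\xi_1,\xi_2)=\frac{1}{4\pi}\log V_1(\xi_1)+\frac{1}{4\pi\tau^2}\log V_2(\xi_2)+H(\xi_1,\xi_1)+\frac{1}{\tau^2}H(\xi_2,\xi_2)-\frac{2}{\tau}\,G(\xi_1,\xi_2), \end{equation*} the interaction term $-\frac{2}{\tau}G(\xi_1,\xi_2)$ being the only coupling between the positive and the negative bubble; this special case will appear repeatedly in the examples below.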
Setting, for $j\in \mathcal J_1:=\{1,\dots,m_1\}$, \begin{equation}\label{ro1} \rho_j(x):=V_1(x)\exp\bigg(8\pi H(x,\xi_j)+ 8\pi \sum\limits_{i=1\atop i\ne j}^{m_1} G(x,\xi_i)-{8\pi \over\tau}\sum_{i=m_1+1}^mG(x,\xi_i)\bigg), \end{equation} and, for $j\in \mathcal J_2:=\{m_1+1,\dots,m\}$, \begin{equation}\label{ro2} \rho_j(x):=V_2(x)\exp\bigg(8\pi H(x,\xi_j)- 8\pi \tau \sum_{i=1}^{m_1} G(x,\xi_i)+ 8\pi \sum_{i=m_1+1\atop i\ne j}^mG(x,\xi_i)\bigg), \end{equation} both for $\xi \in S^m \setminus \Delta$, we introduce the notation \begin{equation}\label{vk} A_k^*(\xi)=4\pi\sum_{j\in \mathcal J_k}\left[\Delta_g \rho_j(\xi_j)-2K(\xi_j)\rho_j(\xi_j)\right],\quad k=1,2 \end{equation} where $K$ is the Gaussian curvature of $(S,g)$. The sign of $A_k^*$, $k=1,2$, allows us to obtain a first existence result of bubbling solutions and several consequences, see Theorem \ref{main1} and Section \ref{secex}. Unfortunately, there are cases where the sign of $A_k^*(\xi)$, for either $k=1$ or $k=2$ or both, is not available. This happens for instance in the case $S=\mathbb T$, $V_1=V_2\equiv1$, $m_1=m_2=1$ and $\tau=1$. See also \cite{EF} for several examples in the case $\lambda_2=0$, namely $m_2=0$, which could be extended here. In all these situations, following ideas presented in \cite{EF}, a more refined analysis is necessary.
To this aim, for $k=1,2$ we introduce the quantities \begin{eqnarray}\label{Bk} B_k^*(\xi)&\hspace{-0.1cm}=&\hspace{-0.1cm} -2\pi \sum_{j\in \mathcal J_k} [\Delta_g \rho_j(\xi_j) -2 K(\xi_j) \rho_j(\xi_j)] \log \rho_j(\xi_j) -\frac{A_k^*(\xi)}{2}\\ &\hspace{-0.1cm}&\hspace{-0.1cm}+ \lim_{r\to0} \bigg[8 \int_{S \setminus \cup_{j\in \mathcal J_k} B_r(\xi_j)} V_ke^{8\pi(-\tau)^{k-1} \sum\limits_{j=1}^{m_1} G(x,\xi_j) + 8\pi (-\tau)^{k-2 }\sum\limits_{l=m_1+1}^m G(x,\xi_l) } dv_g-\frac{8\pi}{r^2} \sum\limits_{j\in \mathcal J_k} \rho_j(\xi_j)\nonumber\\ &&\qquad\qquad-A_k^*(\xi) \log \frac{1}{r}\bigg]\nonumber \end{eqnarray} where $B_r(\xi)$ denotes the pre-image of $B_r(0)$ through the isothermal coordinate system at $\xi$. Quantities of this type were first derived and used by Chang, Chen and Lin in \cite{ChChL} in the study of the mean field equation on bounded domains with Dirichlet boundary condition; for the case of the torus see \cite{CLW}. Moreover, the constants $B_k^*(\xi)$ have also been used in the construction of non-topological condensates for the relativistic abelian Chern-Simons-Higgs model as the Chern-Simons parameter tends to zero, see \cite{DEFM,EF,LinYan}. Our main result reads as follows. \begin{theo} \label{main2} Let $\mathcal{D} \subset \subset \tilde S^m \setminus \Delta$ be a stable critical set of $\varphi_m^*$. Assume that \begin{equation} \label{cond0} \hbox{either }A_1^*(\xi) >0 \:(<0 \hbox{ resp.)} \qquad \hbox{or} \qquad A_1^*(\xi)=0, \:B_1^*(\xi)>0 \: (<0 \hbox{ resp.)} \end{equation} and \begin{equation} \label{cond1} \hbox{either }A_2^*(\xi) >0 \:(<0 \hbox{ resp.)} \qquad \hbox{or} \qquad A_2^*(\xi)=0, \:B_2^*(\xi)>0 \: (<0 \hbox{ resp.)} \end{equation} do hold in a closed neighborhood $U$ of $\mathcal{D}$ in $\tilde S^m \setminus \Delta$. Then, for all $\lambda_1$ in a small right (left resp.) neighborhood of $8 \pi m_1$ and $\lambda_2\tau^2$ in a small right (left resp.)
neighborhood of $8 \pi m_2$, there is a solution $u_{\lambda_1,\lambda_2}$ of \eqref{mfewt} which concentrates (along subsequences) at $m$ points, positively at $q_1,\dots, q_{m_1}$ and negatively at $q_{m_1+1},\dots,q_m$, in the sense that \begin{equation}\label{conc} \frac{\lambda_1V_1e^{u_{\lambda_1,\lambda_2}}}{\int_S V_1e^{u_{\lambda_1,\lambda_2}} dv_g}\rightharpoonup 8\pi \sum_{j=1}^{m_1}\de_{q_j}\quad\text{ and }\quad \frac{\lambda_2\tau^{2}V_2e^{ -\tau u_{\lambda_1,\lambda_2}}}{\int_S V_2e^{-\tau u_{\lambda_1,\lambda_2}} dv_g}\rightharpoonup 8\pi \sum_{j=m_1+1}^m\de_{q_j} \end{equation} as simultaneously $\lambda_1 \to 8\pi m_1$ and $\lambda_2\tau^{2}\to 8\pi m_2$, for some $q \in \mathcal{D}$. \end{theo} \noindent Notice that along with \eqref{conc} there hold $(-\tau)^{k-1}u_{\lambda_1,\lambda_2} - \log\int_S V_ke^{(-\tau)^{k-1}u_{\lambda_1,\lambda_2} }\to -\infty$ in $C_{\text{loc}}(S\setminus\{q_1,\dots,q_m\})$ and $$\sup_{\mathcal O_j}\bigg((-\tau)^{k-1}u_{\lambda_1,\lambda_2} - \log\int_S V_ke^{(-\tau)^{k-1}u_{\lambda_1,\lambda_2} }\bigg) \to +\infty$$ as simultaneously $\lambda_1 \to 8\pi m_1$ and $\lambda_2\tau^{2}\to 8\pi m_2$, for any neighborhood $\mathcal O_j$ of $q_j$ in $S$, with $k=1$ for $j=1,\dots,m_1$ and $k=2$ for $j=m_1+1,\dots,m$. Hence, we get that $u_{\lambda_1,\lambda_2}$ concentrates positively at $q_1,\dots,q_{m_1}$ and negatively at $q_{m_1+1},\dots,q_m$ as simultaneously $\lambda_1 \to 8\pi m_1$ and $\lambda_2\tau^{2}\to 8\pi m_2$. As in \cite{EF}, the notion of stability we are using here is the one introduced in \cite{Li0}, see Definition \ref{stable} below. Conditions \eqref{cond0}-\eqref{cond1} on a neighborhood of $\ml{D}$ are required in order to deal with a stable critical set $\ml{D}$ in the sense below.
Arguing as in Remark 4.5 in \cite{EF}, the same conclusion of Theorem \ref{main2} follows under the validity of conditions \eqref{cond0}-\eqref{cond1} just on $\ml{D}=\{\xi_0\}$, where $\xi_0$ is a non-degenerate local minimum/maximum point of $\varphi_m^*$. Similarly, Theorem \ref{main2} is also valid in the special case $|A_k^*(\xi)|=O(|\nabla \varphi_m^*(\xi)|_g)$, $k=1,2$, in a neighborhood of $\ml{D}$ and $B_k^*(\xi)>0$ in $\ml{D}$. \medskip Now, we can address the case $S=\mathbb T$, $V_1=V_2\equiv1$, $m_1=m_2=1$ and $\tau=1$. When $\mathbb T$ is a rectangle, constants like $B_k^*(\xi)$, $k=1,2$, have been used by Chen, Lin and Wang \cite{CLW} in the computation of the Leray-Schauder degree. Since $H(x,x)$ is constant on $\mathbb T$, we deduce that $\varphi_2^*(\xi)=-2G(\xi_1,\xi_2)+\text{const.}$ Also, it is known that the Green function satisfies $G(\xi_1,\xi_2)=G(\xi_1-\xi_2,0)$ and that the function $G(\cdot,0)$ has exactly three non-degenerate critical points: $q_1$, $q_2$ (saddle points) and $q_3$ (minimum point). According to \eqref{Bk} we have that, for $i,k\in\{1,2\}$ with $i\ne k$, $$B_k^*(\xi)=\lim_{r\to0}\left[8\int_{\mathbb T\setminus B_r(\xi_k)} e^{8\pi G(x,\xi_k)- 8\pi G(x,\xi_i)}\, dv_g - {8\pi\over r^2}e^{8\pi H(\xi_k,\xi_k)-8\pi G(\xi_i,\xi_k)}\right].$$ Assuming that $\mathbb T=-\mathbb T$, it follows that $B_1^*(\xi)=B_2^*(\xi)$, $\xi=(\xi_1,\xi_2)$, since $G(z,0)=G(-z,0)$. Furthermore, it is known that $B_1^*(\xi)>0$ when either $\xi_1-\xi_2=q_1$ or $\xi_1-\xi_2=q_2$, and $B_1^*(\xi)<0$ when $\xi_1-\xi_2=q_3$.
By Theorem \ref{main2} we deduce the existence of \begin{itemize} \item two distinct families of solutions, for $\lambda_1,\lambda_2$ in a small right neighborhood of $8\pi$, concentrating positively at $\xi_1$ and negatively at $\xi_2$ with either $\xi_1-\xi_2=q_1$ or $\xi_1-\xi_2=q_2$ as $\lambda_1\to 8\pi$ and $\lambda_2 \to 8\pi $; \item one family of solutions, for $\lambda_1$, $\lambda_2$ in a small left neighborhood of $8\pi$, concentrating positively at $\xi_1$ and negatively at $\xi_2$ with $\xi_1-\xi_2=q_3$ as $\lambda_1\to 8\pi$ and $\lambda_2 \to 8\pi $. \end{itemize} \medskip The case $m_2=0$, namely $\lambda_2\tau^2\to 0^+$, can also be addressed by this approach. In this case \eqref{mfewt} can be seen as a perturbation of \eqref{mfe}, and the nonlinearity $e^{-\tau u}$ is treated as a lower-order term with respect to the main term $e^u$. For simplicity we write $A(\xi)$ and $B(\xi)$ instead of $A_1^*(\xi)$ and $B_1^*(\xi)$ with $m_1=m$ and $\mathcal J_2=\varnothing$, so that we have the following result. \begin{theo} \label{main3} Let $\mathcal{D} \subset \subset \tilde S^m \setminus \Delta$ be a stable critical set of $\varphi_m^*$. Assume that \begin{equation} \label{condm20} \hbox{either }A(\xi) >0 \:(<0 \hbox{ resp.)} \qquad \hbox{or} \qquad A(\xi)=0, \:B(\xi)>0 \: (<0 \hbox{ resp.)} \end{equation} does hold in a closed neighborhood $U$ of $\mathcal{D}$ in $\tilde S^m \setminus \Delta$. Then, for all $\lambda_1$ in a small right (left resp.)
neighborhood of $8 \pi m_1$ and $\lambda_2\tau^2$ in a small right neighborhood of $0$, there is a solution $u_{\lambda_1,\lambda_2}$ of \eqref{mfewt} which concentrates positively (along subsequences) at $m$ points $q_1,\dots, q_m$, in the sense that $$\frac{\lambda_1V_1e^{u_{\lambda_1,\lambda_2}}}{\int_S V_1e^{u_{\lambda_1,\lambda_2}} dv_g}\rightharpoonup 8\pi \sum_{j=1}^{m}\de_{q_j}\quad\text{in measure sense for some $q \in \mathcal{D}$}$$ $$\text{and}\quad \frac{\lambda_2\tau^{2}V_2e^{ -\tau u_{\lambda_1,\lambda_2}}}{\int_S V_2e^{-\tau u_{\lambda_1,\lambda_2}} dv_g}\to 0\quad\text{uniformly in $S$. }$$ \end{theo} \noindent Notice that a similar result can be obtained in the case $m_1=0$ and $m_2=m$, namely, as $\lambda_1\to 0^+$ and $\lambda_2\tau^2\to 8\pi m$, where $u_{\lambda_1,\lambda_2}$ concentrates negatively at $m$ different points of $S$. The same conclusion of Theorem \ref{main3} follows: on one hand, under the validity of condition \eqref{condm20} just on $\ml{D}=\{\xi_0\}$, where $\xi_0$ is a non-degenerate local minimum/maximum point of $\varphi_m^*$; and on the other hand, in the special case $|A(\xi)|=O(|\nabla \varphi_m^*(\xi)|_g)$ in a neighborhood of $\ml{D}$ and $B(\xi)>0$ in $\ml{D}$. See the proof of Theorem 3.2 and Remark 4.5 in \cite{EF} for more details. Several examples for Theorem \ref{main3} can be derived from each example provided in \cite{EF} for the case $\lambda_2=0$. \medskip\noindent The paper is organized as follows: some consequences and examples are presented in Section \ref{secex}. In Section \ref{approx}, we construct a first approximation of a solution to \eqref{mfewt} with the required properties and we estimate the size of the error of approximation in appropriate norms. In Section \ref{variat} we describe the scheme of our proofs, by stating the principal results we need, and we give the proof of Theorem \ref{main2}.
Section \ref{sec4} is devoted to the computation of the expansion of the energy functional on the first approximation constructed in Section \ref{approx}. The proof of Theorem \ref{main3} is given in Section \ref{pthm3}. Sections \ref{appeA} and \ref{appeB} are devoted to proving the intermediate results stated in Section \ref{variat}. \section{Consequences and examples}\label{secex} \noindent In this section we present several consequences of Theorem \ref{main2} and some examples that illustrate our results on the sphere $\mathbb S^2$ and on the flat two-torus $\mathbb T$. A special case of Theorem \ref{main2} is the following: \begin{theo} \label{main1} Let $\mathcal{D} \subset \subset \tilde S^m \setminus \Delta$ be a stable critical set of $\varphi_m^*$. Assume that $A_1^*(\xi) >0$ ($<0$ resp.) and $A_2^*(\xi) >0$ ($<0$ resp.) for all $\xi\in\mathcal{D}$. Then, for all $\lambda_1$ in a small right (left resp.) neighborhood of $8 \pi m_1$ and $\lambda_2$ in a small right (left resp.) neighborhood of $\dfrac{8 \pi m_2}{\tau^2}$ there is a solution $u_{\lambda_1,\lambda_2}$ of \eqref{mfewt} which concentrates (along subsequences) at $m$ points $q_1,\dots, q_m$ in the sense \eqref{conc} for some $q \in \mathcal{D}$. \end{theo} \noindent The notion of stability we are using here is the following: \begin{dfn} \label{stable} A critical set $\mathcal{D} \subset \subset \tilde S^m \setminus \Delta$ of $\varphi_m^*$ is stable if for any closed neighborhood $U$ of $\mathcal{D}$ in $\tilde S^m \setminus \Delta$ there exists $\delta>0$ such that, if $\|\Phi-\varphi_m^*\|_{C^1(U)}\leq \delta$, then $\Phi$ has at least one critical point in $U$. In particular, the minimal/maximal set of $\varphi_m^*$ is stable (if $\varphi_m^*$ is not constant), as is any isolated critical point of $\varphi_m^*$ with non-trivial local degree.
\end{dfn} \medskip \noindent Notice that from the definition of $\rho_j$ in \eqref{ro1}-\eqref{ro2} and of $A_k^*(\xi)$ in \eqref{vk}, it is readily checked that $$A_k^*(\xi)= 4\pi \sum_{j\in \mathcal J_k} \rho_j(\xi_j) \left[\Delta_g \log V_k(\xi_j)+(-\tau)^{k-1}\frac{8\pi }{|S|} \left(m_1-{m_2\over \tau}\right)-2K(\xi_j)\right],\quad k=1,2$$ for $\xi$ a critical point of $\varphi_m^*$, in view of $\nabla \rho_j(\xi_j)=0$ for all $j=1,\dots,m$. If $V_1\ge0$ and $V_2\ge0$ in $S$, then the function $\varphi_2^*$ with $m_1=m_2=1$ always attains its maximum value in $\tilde S^2\setminus\Delta$, and the maximal set is clearly stable. Let us stress that $V_1$ and $V_2$ can vanish at some points of $S$. Thus, we have deduced the following fact. \begin{cor}\label{cor1} Assume that $V_i\ge 0$ in $S$ for $i=1,2$. If either $\sup_S[2K-\lab\log V_1]<\frac{8\pi}{|S|}\big(1-{1\over \tau}\big)$ or $\inf_S[2K-\lab\log V_1]>\frac{8\pi}{|S|}\big(1-{1\over \tau}\big)$, and either $\sup_S[2K-\lab\log V_2]<\frac{8\pi}{|S|}\big(1- \tau \big)$ or $\inf_S[2K-\lab\log V_2]>\frac{8\pi}{|S|}\big(1-\tau \big)$, then there exist solutions $u_{\lambda_1,\lambda_2}$ to \eqref{mfewt} which concentrate at two points, positively at $q_1$ and negatively at $q_2$, in the sense \eqref{conc} as $\lambda_1\to 8\pi$ and $\lambda_2\tau^2\to 8\pi$, where $(q_1,q_2)$ is a maximum point of $\varphi_2^*$ in $\ti S^2\setminus\Delta$. \end{cor} When $S=\mathbb S^2$ we have that $K=\dfrac{4\pi}{|\mathbb S^2|}$, so that, for $V_1=V_2\equiv 1$ and any $\tau>0$, Corollary \ref{cor1} provides the existence of blow-up solutions $u_{\lambda_1,\lambda_2}$ concentrating at two points as $\lambda_1\to 8\pi$ and $\lambda_2\tau^2\to 8\pi$, where $\lambda_1$ and $\lambda_2\tau^2$ belong to a small \emph{left} neighborhood of $8\pi$.
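\medskip \noindent For the reader's convenience, let us sketch the computation behind the expression of $A_k^*(\xi)$ at a critical point of $\varphi_m^*$ displayed above, say for $k=1$ (the case $k=2$ is analogous, with $V_2$ in place of $V_1$ and the extra factor $-\tau$). Since $\nabla \rho_j(\xi_j)=0$, we have $\Delta_g \rho_j(\xi_j)=\rho_j(\xi_j)\,\Delta_g \log \rho_j(\xi_j)$; moreover, by \eqref{green}, $\Delta_g G(\cdot,\xi_i)\equiv \frac{1}{|S|}$ away from $\xi_i$, and correspondingly $\Delta_g H(\cdot,\xi_j)=\frac{1}{|S|}$ near $\xi_j$, the logarithmic singularity being harmonic away from its pole. Hence, taking $\Delta_g\log$ in \eqref{ro1} at $x=\xi_j$, for $j\in \mathcal J_1$, \begin{equation*} \Delta_g \log \rho_j(\xi_j)=\Delta_g \log V_1(\xi_j)+\frac{8\pi}{|S|}\Big[1+(m_1-1)-\frac{m_2}{\tau}\Big]=\Delta_g \log V_1(\xi_j)+\frac{8\pi}{|S|}\Big(m_1-\frac{m_2}{\tau}\Big), \end{equation*} and the claimed identity follows from \eqref{vk}.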
In the case of a flat two-torus $S=\mathbb T$ we have $K=0$, so that, for $V_1=V_2\equiv 1$ and any $\tau>0$ with $\tau\ne 1$, Corollary \ref{cor1} provides the existence of blow-up solutions $u_{\lambda_1,\lambda_2}$ concentrating at two points as $\lambda_1\to 8\pi$ and $\lambda_2\tau^2\to 8\pi$, where $\lambda_1$ belongs to a small \emph{right} (\emph{left} resp.) neighborhood of $8\pi $ if $\tau>1$ ($<1$ resp.) and $\lambda_2\tau^2$ belongs to a small \emph{left} (\emph{right} resp.) neighborhood of $8\pi$. However, the case $S=\mathbb T$, $V_1=V_2\equiv 1$, $m_1=m_2=1$ and $\tau=1$ is an example for which $A_1^*$ and $A_2^*$ vanish in $\mathbb T ^2\setminus\Delta$, and in particular at critical points. \medskip Let us mention some examples where $V_1$ and $V_2$ vanish at some points of $S$. Precisely, assume that $$V_1(x)=e^{-4\pi \sum\limits_{i=1}^{l_1}n_{1,i}G(x,p_{1,i})}\qquad\text{and}\qquad V_2(x)=e^{-4\pi \sum\limits_{i=1}^{l_2}n_{2,i}G(x,p_{2,i})},$$ with $n_{1,i},n_{2,j}>0$ and $p_{1,i},p_{2,j}\in S$, for $i=1,\dots,l_1$ and $j=1,\dots,l_2$ respectively. The zero sets are $\{p_{1,1},\dots,p_{1,l_1}\}$ for $V_1$ and $\{p_{2,1},\dots,p_{2,l_2}\}$ for $V_2$. So, for $m_1=m_2=1$, $m=2$, we have that $$\varphi_2^*(\xi)=- \sum\limits_{i=1}^{l_1}n_{1,i}G(\xi_1,p_{1,i}) - {1\over \tau^2} \sum\limits_{j=1}^{l_2}n_{2,j}G(\xi_2,p_{2,j})-{2\over \tau}G(\xi_1,\xi_2)+H(\xi_1,\xi_1)+{1\over\tau^2}H(\xi_2,\xi_2),$$ and if $\xi$ is a critical point of $\varphi_2^*$ then $$A_k^*(\xi)=4\pi\rho_k(\xi_k)\left[ -\frac{4\pi}{|S|} \sum\limits_{i=1}^{l_k}n_{k,i}+{8\pi\over |S|} \Big(1- \tau^{2k-3}\Big)-2K(\xi_k)\right],\quad k=1,2.$$ In particular, if $S=\mathbb S^2$ then Corollary \ref{cor1} provides the existence of blow-up solutions $u_{\lambda_1,\lambda_2}$ concentrating at two points as $\lambda_1\to 8\pi$ and $\lambda_2\tau^2\to 8\pi$ when $\sum_{i=1}^{l_1}n_{1,i}\ne 1-\frac{2}\tau$ and $\sum_{j=1}^{l_2}n_{2,j}\ne 1-2\tau$.
We deduce the same conclusion when $S=\mathbb T$, provided $\sum_{i=1}^{l_1}n_{1,i}\ne 2-\frac{2}\tau$ and $\sum_{j=1}^{l_2}n_{2,j}\ne 2-2\tau$. Let us stress that there is no restriction on the $n_{1,i}$'s and $n_{2,j}$'s if $\tau=1$. \medskip Now, consider the case $m_1=m\ge 2$ and $m_2=1$, namely, $\lambda_1$ close to $8\pi m $ and $\lambda_2\tau^2$ close to $8\pi$. Roughly speaking, if $u_{\lambda_1,\lambda_2}$ concentrates negatively at $q$ then $$\lambda_2\tau\left(\frac{V_2e^{ -\tau u_{\lambda_1,\lambda_2}}}{\int_S V_2e^{-\tau u_{\lambda_1,\lambda_2}} dv_g} - {1\over |S|}\right)\quad\text{ behaves like }\quad 4\pi\cdot {2\over\tau}\left(\de_q - {1\over |S|}\right)\quad\text{ as $\lambda_2\tau^2\to 8\pi$}$$ and equation \eqref{mfewt} resembles the singular mean field equation \begin{equation*} -\Delta_g v=\lambda\left({h e^{v}\over\int_{S}h e^{v} dv_g}- \frac{1}{|S|}\right) - 4\pi \al \left(\de_q - {1\over |S|}\right) \qquad\text{in $S$}, \end{equation*} with $\al=\frac2\tau$. According to a result of D'Aprile and Esposito \cite[Theorem 1.4]{DaE}, it follows that the functional \begin{equation*}\label{fimm1} \begin{split} \varphi_{m+1}^* (\xi) = &\ \frac{1}{4\pi}\sum_{j=1}^{m} \log V_1(\xi_j ) + \frac{1}{4\pi \tau^2} \log V_2(\xi_{m+1}) + \sum_{j=1}^{m} H(\xi_j,\xi_j) + {1\over\tau^2} H(\xi_{m+1},\xi_{m+1} )\\ & +\sum_{j=1}^{m}\sum_{i=1\atop i\not= j}^{m} G(\xi_i,\xi_j) - {2\over\tau} \sum_{j=1}^{m} G(\xi_j,\xi_{m+1}) , \end{split} \end{equation*} has a $C^1$-stable critical value, for $\xi_{m+1}\in S$ fixed, under the assumptions $S\ne \mathbb S^2,\mathbb{RP}^2$ and $\frac{2}\tau\ne 1,\dots,m-1$. Thus, we deduce the following result. \begin{cor}\label{cor2} Assume that $V_i>0$ in $S$ for $i=1,2$, $S\ne \mathbb S^2,\mathbb{RP}^2$ and $\dfrac{2}\tau\ne 1,\dots,m-1$.
If either $\sup_S[2K-\lab\log V_1]<\frac{8\pi}{|S|}\big(m-{1\over \tau}\big)$ or $\inf_S[2K-\lab\log V_1]>\frac{8\pi}{|S|}\big(m-{1\over \tau}\big)$, and either $\sup_S[2K-\lab\log V_2]<\frac{8\pi}{|S|}\big(1-m\tau \big)$ or $\inf_S[2K-\lab\log V_2]>\frac{8\pi}{|S|}\big(1-m\tau \big)$, then there exist solutions $u_{\lambda_1,\lambda_2}$ to \eqref{mfewt} which concentrate at $m+1$ points, positively at $q_1,\dots, q_{m}$ and negatively at $q_{m+1}$, in the sense \eqref{conc} as $\lambda_1\to 8\pi m$ and $\lambda_2\tau^2\to 8\pi$, where $(q_1,\dots,q_{m+1})$ is a max-min critical point of $\varphi_{m+1}^*$ in $S^{m+1}\setminus\Delta$. \end{cor} When $S=\mathbb T$ and $V_1=V_2\equiv 1$, for any $\tau>0$ with $m\tau\ne 1$ and $\tau\notin\{2,1,{2\over 3},\dots,{2\over m-1}\}$, Corollary \ref{cor2} then provides the existence of blow-up solutions $u_{\lambda_1,\lambda_2}$ concentrating at $m+1$ points as $\lambda_1\to 8\pi m$ and $\lambda_2\tau^2\to 8\pi$, where $\lambda_1$ belongs to a small right (left resp.) neighborhood of $8\pi m$ if $m\tau>1$ ($<1$ resp.) and $\lambda_2\tau^2$ belongs to a small left (right resp.) neighborhood of $8\pi$. Notice that a similar result can be obtained in the case $m_1=1$ and $m_2=m$, namely, $\lambda_1$ close to $8\pi$ and $\lambda_2\tau^2$ close to $8\pi m$. \medskip\noindent Observe that, on the one hand, we generalize the existence results of blowing-up solutions for the mean field equation \eqref{mfe} in \cite{EF} to the asymmetric problem \eqref{mfewt}; on the other hand, we perform, on a compact Riemann surface $S$, a construction similar to the one carried out for a sinh-Poisson equation on bounded domains with Dirichlet boundary conditions in \cite{BaPi}, and extended to an asymmetric setting in \cite{pr1}. The problems in \cite{BaPi,pr1} do not contain any potential $V_k$, and there the existence of $C^1$-stable critical points of the corresponding $\varphi_m^*$ implies the existence of blowing-up solutions.
However, to prove our results it is not enough to assume the existence of $C^1$-stable critical points of $\varphi_m^*$ in \eqref{fim}: admissibility conditions in terms of the quantities $A_k^*$ or $B_k^*$ have to be imposed, in the same spirit of \cite{EF}. After completion of this work, we have learned that in \cite{AhBaFi} the existence of $C^1$-stable critical points of vortex type Hamiltonians, including $\varphi_m^*$ in \eqref{fim}, has been proved for a surface $S$ which is homeomorphic neither to the sphere nor to the projective plane. \medskip Finally, we point out that the type of arguments used to obtain our results has also been developed in several previous works by various authors. Let us quote a few papers from the vast literature concerning singular perturbation problems with nonlinearities of exponential type: \cite{ChI,dmr,EMP,EW,FM}. \section{Approximation of the solution}\label{approx} \noindent The main idea to construct approximating solutions of \eqref{mfewt}, as in \cite{EF}, is to use as ``basic cells'' the functions \begin{equation*} u_{\delta,\xi}(x)=u_0 \Big(\frac{|x-\xi|}{\delta}\Big)-2\log \delta, \qquad \de>0,\: \xi\in\R^2,\end{equation*} where $\ds u_0(r)=\log\frac{8}{(1+r^2)^2}.$ These are all the solutions of \begin{equation*} \left\{ \begin{array}{ll}\Delta u+e^{u}=0 &\text{in $\R^2$}\\ \int_{\R^2} e^u <\infty, & \end{array} \right. \end{equation*} and they satisfy the following concentration property: $e^{u_{\delta,\xi}}\rightharpoonup 8\pi\delta_\xi$ in the sense of measures as $\delta \to 0$. We will now use isothermal coordinates to pull back $u_{\delta,\xi}$ to $S$. Let us recall that every Riemann surface $(S,g)$ is locally conformally flat, and the local coordinates in which $g$ is conformal to the Euclidean metric are referred to as isothermal coordinates (see for example the simple existence proof provided by Chern \cite{Chern}).
For every $\xi \in S$ this amounts to finding a local chart $y_\xi$, with $y_\xi(\xi)=0$, from a neighborhood of $\xi$ onto $B_{2r_0}(0)$ (the choice of $r_0$ is independent of $\xi$) in which $g=e^{\hat \varphi_\xi(y_\xi(x))}dx^2$, where $\hat \varphi_\xi \in C^\infty(B_{2r_0}(0),\mathbb{R})$. In particular, $\hat \varphi_\xi$ is related to the Gaussian curvature $K$ of $(S,g)$ through the relation \begin{equation} \label{equationvarphi} \Delta \hat \varphi_\xi(y) =-2K(y_\xi^{-1}(y)) e^{\hat \varphi_\xi(y)} \qquad \hbox{ for }y \in B_{2r_0}(0). \end{equation} We can also assume that $y_\xi$ and $\hat \varphi_\xi$ depend smoothly on $\xi$ and that $\hat \varphi_\xi(0)=0$, $\nabla \hat \varphi_\xi(0)=0$. We now pull back $u_{\delta,0}$ to a neighborhood of $\xi \in S$, for $\delta>0$, by simply setting $\ds U_{\delta,\xi}(x)=u_{\delta,0}(y_\xi(x))=\log \frac{8\delta^2}{(\delta^2+|y_\xi(x)|^2)^2}$ for $x \in y_\xi^{-1}(B_{2r_0}(0))$. Letting $\chi\in C_0^\infty(B_{2r_0}(0))$ be a radial cut-off function so that $0\le\chi\le 1$, $\chi\equiv 1$ in $B_{r_0}(0)$, we introduce the function $PU_{\de,\xi}$ as the unique solution of \begin{equation}\label{ePu} \left\{ \begin{array}{ll} -\Delta_g PU_{\de,\xi} (x)=\chi_\xi(x) e^{-\varphi_\xi(x)} e^{U_{\de,\xi}(x)}-\frac{1}{|S|}\int_S \chi_\xi e^{-\varphi_\xi} e^{U_{\de,\xi}} dv_g &\text{in }S\\ \int_S PU_{\de,\xi} dv_g=0, \end{array}\right. \end{equation} where $\chi_\xi(x)=\chi(|y_\xi(x)|)$ and $\varphi_\xi(x)=\hat \varphi_\xi(y_\xi(x))$. Notice that the right-hand side in \eqref{ePu} has zero average and depends smoothly on $x$; hence \eqref{ePu} is uniquely solvable and its solution $PU_{\de,\xi}$ is smooth.
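\medskip \noindent Let us record why the density $\chi_\xi e^{-\varphi_\xi}e^{U_{\de,\xi}}$ appears in \eqref{ePu}. In the isothermal chart, where $g=e^{\hat\varphi_\xi}\,dx^2$, the Laplace-Beltrami operator transforms as $\Delta_g=e^{-\varphi_\xi}\Delta$; since $\Delta u_{\delta,0}+e^{u_{\delta,0}}=0$ in $\mathbb R^2$, on $y_\xi^{-1}(B_{2r_0}(0))$ we get \begin{equation*} -\Delta_g U_{\de,\xi}(x)=-e^{-\varphi_\xi(x)}\,\big(\Delta u_{\delta,0}\big)(y_\xi(x))=e^{-\varphi_\xi(x)}\,e^{U_{\de,\xi}(x)}. \end{equation*} Thus the first term on the right-hand side of \eqref{ePu} coincides with $-\Delta_g U_{\de,\xi}$ where $\chi_\xi\equiv 1$, while the average term makes the right-hand side have zero mean, as required for solvability on the compact surface $S$.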
\medskip \noindent Let us recall the transformation law for $\Delta_g$ under conformal changes: if $\tilde g=e^{\varphi} g$, then \begin{equation} \label{laplacian} \Delta_{\tilde g}=e^{-\varphi} \Delta_g.\end{equation} Decompose now the Green function $G(x,\xi)$, $\xi \in S$, as $\ds G(x,\xi)=-\frac{1}{2\pi} \chi_\xi(x) \log |y_\xi(x)|+H(x,\xi),$ and by (\ref{green}) then deduce that \begin{equation*} \left\{ \begin{array}{ll} -\Delta_g H= - \frac{1}{2\pi} \lab \chi_\xi \,\log |y_\xi(x)| -\frac{1}{\pi}\langle \nabla \chi_\xi,\nabla \log |y_\xi(x)| \rangle_g-\frac{1}{|S|} &\text{in $S$}\\ \int_S H(\cdot,\xi)\, dv_g=\frac{1}{2\pi} \int_S \chi_\xi \log |y_\xi(\cdot)| dv_g.& \end{array} \right. \end{equation*} We have used that $\ds \Delta_g \log |y_\xi(x)|= e^{-\hat \varphi_\xi(y)} \Delta \log|y| \Big|_{y=y_\xi(x)}=2\pi \delta_\xi$ in view of (\ref{laplacian}). For $r\leq 2r_0$ define $B_r(\xi)=y_\xi^{-1}(B_r(0))$, $A_{r}(\xi)=B_{r}(\xi) \setminus B_{r/2}(\xi)$, and set $$f_\xi= {\lab\chi_\xi \over |y_\xi(x)|^2} +2\Big\langle \nabla\chi_\xi,\nabla |y_\xi(x)|^{-2} \Big\rangle_g+{2\over |S|} \int_{\mathbb{R}^2} {\chi'(|y|)\over |y|^3}\, dy.$$ Setting $\Psi_{\de,\xi}(x)= PU_{\de,\xi}(x)-\chi_\xi [U_{\delta,\xi}-\log(8\de^2)]-8\pi H(x,\xi),$ by the definition of $f_\xi$ we then have that $-\Delta_g \Psi_{\de,\xi}=-2\de^2 f_\xi+O(\de^4)$ in $S$ so that $$\int_S f_\xi dv_g=\frac{1}{2\delta^2} \int_S \Delta_g \Psi_{\delta,\xi} dv_g+O(\delta^2)=O(\delta^2)$$ for all $\delta>0$, and hence $\int_S f_\xi dv_g=0$. Therefore, $F_\xi$ is well defined as the unique solution of \begin{equation}\label{d2t} \left\{ \begin{array}{ll}-\lab F_\xi=f_\xi &\text{in }S\\ \int_S F_\xi dv_g=0.& \end{array}\right. 
\end{equation} We have the following asymptotic expansion of $PU_{\de,\xi}$ as $\delta \to 0$, as shown in \cite{EF}: \begin{lem}\label{ewfxi} The function $PU_{\delta,\xi}$ satisfies $$PU_{\delta,\xi}=\chi_\xi \left[U_{\delta,\xi}-\log(8\delta^2)\right]+ 8\pi H(x,\xi)+\alpha_{\delta,\xi}-2\delta^2 F_\xi+O(\delta^4|\log \delta|)$$ uniformly in $S$, where $F_\xi$ is given in \eqref{d2t} and $$\alpha_{\delta,\xi}=-{4\pi\over|S|} \delta^2 \log \delta +2{\delta^2\over|S|}\left(\int_{\mathbb{R}^2} \chi(|y|) \frac{e^{\hat \varphi_\xi(y)}-1}{|y|^2}dy+ \pi- \int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy \right).$$ In particular, there holds $$PU_{\delta,\xi}=8\pi G(x,\xi)-2{\de^2 \chi_\xi \over |y_\xi(x)|^2}+\alpha_{\delta,\xi}-2 \delta^2 F_\xi+O(\delta^4|\log \delta|)$$ locally uniformly in $S \setminus\{\xi\}$. \end{lem} \noindent The ansatz will be constructed as follows. Given $m\in \mathbb{N}$, let us consider distinct points $\xi_j\in S$ and $\delta_j>0$, $j=1,\dots,m$. In order to have a good approximation, we will assume that there exists $C_0>1$ such that \begin{equation}\label{repla0} \delta_j^2= \begin{cases} \mu_1^2\delta^2 \rho_j(\xi_j) &\text{ for $j\in\{1,\dots,m_1\}$} \\ \mu_2^2\delta^2 \rho_j(\xi_j) &\text{ for $j\in\{m_1+1,\dots,m\}$} \end{cases},\ \ \ \text{with } \ 0< \mu_i\le C_0,\quad i=1,2 \end{equation} \begin{equation}\label{repla1} |\lambda_1-8\pi m_1 |\le C_0 \de^2|\log \de| \quad\text{and}\quad|\lambda_2\tau^2-8\pi m_2 |\le C_0 \de^2|\log \de|, \end{equation} where $\de >0$, $m_1\in\{1,\dots,m-1\}$, $m_2=m-m_1$ and $\rho_j$ is given by \eqref{ro1}-\eqref{ro2}. Up to taking $r_0$ smaller, we assume that the points $\xi_j$ are well separated and that $V_1(\xi_j)$, $V_2(\xi_j)$ are uniformly away from zero, namely, we choose $\xi=(\xi_1,\dots,\xi_{m})\in\Xi$, where \begin{equation*} \Xi=\{(\xi_1,\dots,\xi_{m}) \in S^{m} \mid d_g(\xi_i,\xi_j)\geq 4r_0\: \text{ and }\: V_1(\xi_j),\: V_2(\xi_j)\ge r_0\:\:\forall\:i,j=1,\dots,m,\:i\not=j\}.
\end{equation*} Denote $U_j:= U_{\delta_j,\xi_j}$ and $W_j=PU_j$, $j=1,\dots,m$, where $P$ is the projection operator defined by \eqref{ePu}. Thus, our approximating solution is $\ds W(x)=\sum_{j=1}^{m_1} W_j(x)-{1\over \tau}\sum_{j=m_1+1}^{m} W_j(x)$, parametrized by $(\mu,\xi) \in \mathcal{M} \times \Xi$, with $\mu=(\mu_1,\mu_2)$ and $\mathcal{M}=(0,C_0]\times (0,C_0] $. Notice that for $r_0$ small enough we have that $\mathcal{D}\subset\Xi\subset \tilde S^{m}\setminus\Delta$. We will look for a solution $u$ of \eqref{mfewt} in the form $u=W+\phi$, for some small remainder term $\phi$. In terms of $\phi$, problem \eqref{mfewt} is equivalent to finding $\phi\in \bar H$ such that \begin{equation}\label{ephi} L(\phi)=-[R+N(\phi)] \qquad\text{ in $S$}, \end{equation} where the linear operator $L$ is defined as \begin{equation}\label{ol} L(\phi) = \Delta_g \phi + \sum_{i=1}^2 \lambda_i\tau^{2(i-1)} {V_i(x)e^{(-\tau)^{i-1} W}\over\int_S V_ie^{(-\tau)^{i-1} W}dv_g}\left(\phi - {\int_{S} V_ie^{(-\tau)^{i-1} W}\phi dv_g \over\int_S V_ie^{(-\tau)^{i-1} W}dv_g} \right), \end{equation} the nonlinear part $N$ is given by \begin{equation}\label{nlt} N(\phi)=N_1(\phi)-N_2(\phi) \end{equation} with \begin{equation}\label{ni} \begin{split} N_i(\phi)=&\,\lambda_i \tau^{i-1} \bigg({V_ie^{(-\tau)^{i-1} (W+\phi)}\over\int_S V_ie^{(-\tau)^{i-1} (W+\phi) }dv_g}-{(-\tau)^{i-1} V_ie^{(-\tau)^{i-1} W}\over\int_S V_i e^{(-\tau)^{i-1} W}dv_g}\left[\phi-\frac{\int_{S} V_ie^{(-\tau)^{i-1} W}\phi dv_g}{\int_S V_i e^{(-\tau)^{i-1} W} dv_g}\right]\\ &\,-{V_ie^{(-\tau)^{i-1} W}\over\int_S V_i e^{(-\tau)^{i-1} W}dv_g }\bigg) \end{split} \end{equation} for $i=1,2$, and the approximation rate of $W$ is encoded in \begin{equation}\label{R} R=\Delta_g W+\lambda_1 \left({V_1(x)e^{W}\over\int_S V_1e^{W}dv_g} - {1\over |S|}\right)-\lambda_2\tau \left({V_2(x)e^{-\tau W}\over\int_S V_2e^{-\tau W}dv_g} - {1\over |S|}\right).
\end{equation} Notice that for all $\phi \in \bar H$ $$\int_S L(\phi) dv_g=\int_S N(\phi)dv_g=\int_S R dv_g=0.$$ \noindent In order to get the invertibility of $L$, let us introduce the weighted norm for any $h\in L^\infty(S)$ \begin{equation*} \| h \|_*=\sup_{x\in S} \left[\sum_{j=1}^{m} \frac{\de_j^\sigma}{(\de_j^2 + \chi_{B_{r_0}(\xi_j)}(x) |y_{\xi_j}(x)|^2+r_0^2 \chi_{S\setminus B_{r_0}(\xi_j)}(x))^{1+\sigma/2}}\right]^{-1} |h(x)|, \end{equation*} where $0<\sigma<1$ is a small fixed constant and $\chi_A$ denotes the characteristic function of the set $A$. Let us evaluate the approximation rate of $W$ in $\|\cdot\|_*$ and recall that $m=m_1+ m_2 $: \begin{lem}\label{estrr0} Assume \eqref{repla0}-\eqref{repla1}. There exists a constant $C>0$, independent of $\de>0$ small, such that \begin{equation}\label{re} \|R\|_*\le C\left(\delta \,|\nabla\varphi_m^*(\xi)|_g + \de^{2-\sigma}|\log \de| \right) \end{equation} for all $\xi \in \Xi$, where $|\nabla \varphi_m^*(\xi)|_g^2$ stands for $\displaystyle \sum_{j=1}^m |\nabla_{\xi_j} \varphi_m^*(\xi)|_g^2$. \end{lem} \begin{proof}[\bf Proof:] We shall argue in the same way as in \cite[Lemma 2.1]{EF}. First, from Lemma \ref{ewfxi} we note that for any $j\in\{1,\dots,m\}$, $W_j(x)=U_j(x) - \log (8\delta_j^2) + 8\pi H(x,\xi_j)+O(\de^2 |\log \de|)$ uniformly for $x\in B_{r_0}(\xi_j)$ and $W_j(x)=8\pi G(x,\xi_j) +O(\de^2 |\log \de|)$ uniformly for $x$ on compact subsets of $S \setminus\{\xi_j\}$. 
Since by symmetry and $\hat \varphi_{\xi_j}(0)=0$ we have that $$\int_{B_{r_0}(\xi_j)} \rho_j(x) e^{U_j} dv_g=8 \pi \rho_j(\xi_j) + O(\de^2|\log\de|),$$ we then get that for $j\in\{1,\dots, m_1\}$ \begin{eqnarray}\label{iV1eW1} \int_{B_{r_0}(\xi_j)} V_1e^W dv_g &=& \frac{1}{8\delta_j^2}\int_{B_{r_0}(\xi_j)} V_1 e^{U_j + 8\pi H(x,\xi_j)+8\pi \sum\limits_{l=1,l\ne j}^{m_1} G(x,\xi_l)-{8\pi \over\tau} \sum\limits_{l=m_1+1}^m G(x,\xi_l)+O(\de^2|\log \de |)}dv_g\nonumber\\ &=& {1\over \delta_j^2}[\pi \rho_j(\xi_j) + O(\de^2|\log\de|)] = {\pi \over \mu_1^2\de^2} + O(|\log\de|) \end{eqnarray} and for $j\in\{m_1+1,\dots,m\}$ \begin{eqnarray}\label{iV1eW2} \int_{B_{r_0}(\xi_j)}\hspace{-0.1cm} V_1e^W dv_g&\hspace{-0.3cm}=& \hspace{-0.3cm} \int_{B_{r_0}(\xi_j)} V_1 e^{-{1\over\tau} [U_j -\log(8\de_j^2)+ 8\pi H(x,\xi_j)]+8\pi \sum\limits_{l=1}^{m_1} G(x,\xi_l)-{8\pi \over\tau} \sum\limits_{l=m_1+1,l\ne j}^m G(x,\xi_l)+O(\de^2|\log \de |)}dv_g \nonumber \\ &\hspace{-0.3cm}=&\hspace{-0.2cm} \int_{B_{r_0}(\xi_j)} V_1(x)\Big[{\rho_j(x)\over V_2(x)} \Big]^{-1/\tau} (\de_j^2+|y_{\xi_j}(x)|^2)^{2/\tau} (1 + O(\de^2|\log \de |))dv_g \nonumber \\ &\hspace{-0.3cm}=&\hspace{-0.2cm}O(1). \end{eqnarray} So, by using \eqref{iV1eW1}-\eqref{iV1eW2} we have that \begin{eqnarray}\label{iV1eW} \int_S V_1e^W dv_g =\sum_{j=1}^{m_1} \int_{B_{r_0}(\xi_j)} V_1 e^{W}dv_g + O(1) ={\pi m_1\over \mu_1^2 \de^2} + O(|\log\de|). \end{eqnarray} Similarly, for $j\in\{1,\dots, m_1\}$ we get that \begin{eqnarray}\label{iV2etW2} \hspace{-0.2cm}\int_{B_{r_0}(\xi_j)} V_2e^{-\tau W} dv_g &=& O(1) \end{eqnarray} and for $j\in\{m_1+1,\dots,m\}$ \begin{eqnarray}\label{iV2etW1} \int_{B_{r_0}(\xi_j)} V_2e^{-\tau W} dv_g &=& {1\over \delta_j^2}[\pi \rho_j(\xi_j) + O(\de^2|\log\de|)] = {\pi \over \mu_2^2 \de^2} + O(|\log\de|). 
\end{eqnarray} So, by using \eqref{iV2etW2}-\eqref{iV2etW1} we have that \begin{eqnarray}\label{iV2etW} \int_S V_2e^{-\tau W} dv_g &=&\sum_{j=m_1+1}^m \int_{B_{r_0}(\xi_j)} V_2 e^{-\tau W}dv_g + O(1)= {\pi m_2\over \mu_2^2 \de^2} + O(|\log\de|). \end{eqnarray} By Lemma \ref{ewfxi} and \eqref{repla0}, \eqref{iV1eW}, \eqref{iV2etW} we have that \begin{itemize} \item in $S \setminus \cup_{j=1}^m B_{r_0}(\xi_j)$ there holds $\lambda_1 \frac{ V_1 e^W}{\int_S V_1e^W dv_g}=O(\de^2)$ in view of $W(x)=O(1)$; \item in $B_{r_0}(\xi_j)$, $j\in\{1,\dots,m_1\}$, there holds \begin{eqnarray*} \frac{ V_1 e^W}{\int_S V_1e^W dv_g}&=& \frac{V_1 e^{-\log(8\delta_j^2)+8\pi H(x,\xi_j) + 8\pi\sum\limits_{l=1,l\ne j}^{m_1}G(x,\xi_l)-{8\pi\over\tau}\sum\limits_{l=m_1+1}^m G(x,\xi_l)+O(\de^2|\log \de|)}} {\pi m_1 \mu_1^{-2}\de^{-2} + O(|\log\de|)} e^{U_j}\\ &=& \frac{1}{8\pi m_1}\bigg[1+\Big\langle\frac{\nabla (\rho_j \circ y_{\xi_j}^{-1})(0)}{ \rho_j (\xi_j)},y_{\xi_j}(x)\Big\rangle+O(|y_{\xi_j}(x)|^2+\de^2 |\log \de|)\bigg] e^{U_j}; \end{eqnarray*} \item in $B_{r_0}(\xi_j)$, $j\in\{m_1+1,\dots,m\}$, there holds $$ \frac{ V_1 e^W}{\int_S V_1e^W dv_g} = \frac{V_1(x) [\rho_j(x)/V_2(x)]^{-1/\tau}+O(\de^2|\log \de|)} { \pi m_1 \mu_1^{-2} \de^{-2} + O( |\log\de|)} (\de_j^2+|y_{\xi_j}(x)|^2)^{2/\tau}= O(\de^2).$$ \end{itemize} Similarly to the above, we have that \begin{itemize} \item in $S \setminus \cup_{j=1}^m B_{r_0}(\xi_j)$ there holds $\lambda_2\tau \frac{ V_2 e^{-\tau W} }{\int_S V_2 e^{-\tau W} dv_g}=O(\de^2)$ in view of $W(x)=O(1)$; \item in $B_{r_0}(\xi_j)$, $j\in\{1,\dots,m_1\}$, there holds $$\frac{ V_2 e^{-\tau W} }{\int_S V_2e^{-\tau W} dv_g} =\frac{V_2(x) [\rho_j(x) / V_1(x) ]^{-\tau}+O(\de^2|\log \de|)}{ \pi m_2 \mu_2^{-2} \de^{-2} + O( |\log\de|)} (\de_j^2+|y_{\xi_j}(x)|^2)^{2\tau} = O(\de^2);$$ \item in $B_{r_0}(\xi_j)$, $j\in\{m_1+1,\dots,m\}$, there holds $$\frac{ V_2 e^{-\tau W} }{\int_S V_2e^{-\tau W} dv_g}=\frac{1}{8\pi m_2 }\bigg[1+\Big\langle\frac{\nabla ( \rho_j \circ 
y_{\xi_j}^{-1})(0)}{ \rho_j(\xi_j)},y_{\xi_j}(x)\Big\rangle+O(|y_{\xi_j}(x)|^2+\de^2 |\log \de|)\bigg] e^{U_j}.$$ \end{itemize} Since as before $$\int_S \chi_j e^{-\varphi_j} e^{U_j} dv_g=\int_{B_{r_0}(0)} {8 \delta_j^2\over (\delta_j^2 + |y|^2 )^2} dy+O(\delta^2)=8\pi + O(\de^2)$$ with $\varphi_j=\varphi_{\xi_j}$, for $R$ given by \eqref{R} we then have that \begin{eqnarray*} R&=&-\sum_{j=1}^{m_1} \chi_j e^{-\varphi_j}e^{U_j} + {\lambda_1V_1e^{W}\over\int_S V_1e^{W}dv_g}+{8\pi m_1-\lambda_1\over |S|} + O(\de^2)\\ &&+{1\over\tau} \sum_{j=m_1+1}^{m} \chi_j e^{-\varphi_j}e^{U_j} - {\lambda_2\tau V_2e^{-\tau W}\over\int_S V_2e^{-\tau W}dv_g}+{\lambda_2\tau^2- 8\pi m_2\over |S|\tau} + O(\de^2), \end{eqnarray*} where $\chi_j=\chi_{\xi_j}$. By previous computations we now deduce that $R(x)=O(\de^2)$ in $S \setminus \cup_{j=1}^m B_{r_0} (\xi_j)$, \begin{eqnarray*} R&=& \left[-e^{-\varphi_j}+{\lambda_1\over 8\pi m_1}+O\big( |\nabla\log ( \rho_j \circ y_{\xi_j}^{-1})(0)||y_{\xi_j}(x)|+|y_{\xi_j}(x)|^2+\de^2 |\log \de|\big)\right] e^{U_j}\\ &&+O(|\lambda_1-8\pi m_1|+|\lambda_2\tau^2- 8\pi m_2 |+\de^2)\\ &=& e^{U_j}O\left(|\nabla \log( \rho_j \circ y_{\xi_j}^{-1})(0)| |y_{\xi_j}(x)|+|y_{\xi_j}(x)|^2+|\lambda_1-8\pi m_1|+\de^2|\log \de|\right)\\ && +O(|\lambda_1-8\pi m_1| +|\lambda_2\tau^2- 8\pi m_2|+ \de^2) \end{eqnarray*} in $B_{r_0} (\xi_j)$, $j\in\{1,\dots,m_1\}$ and similarly, \begin{eqnarray*} R&=& e^{U_j}O\left(|\nabla \log( \rho_j \circ y_{\xi_j}^{-1})(0)| |y_{\xi_j}(x)|+|y_{\xi_j}(x)|^2+|\lambda_2\tau^2- 8\pi m_2|+\de^2|\log \de|\right) \\ &&+O(|\lambda_1-8\pi m_1| +|\lambda_2\tau^2- 8\pi m_2| + \de^2) \end{eqnarray*} in $B_{r_0} (\xi_j)$, $j\in\{m_1+1,\dots,m\}$, in view of $\varphi_j(\xi_j)=0$ and $\nabla \varphi_j(\xi_j)=0$. From the definition of $\|\cdot \|_*$ and \eqref{repla1} we deduce the validity of \eqref{re}. This finishes the proof. 
\end{proof} \section{Variational reduction and proof of main results}\label{variat} \noindent The solvability theory for the linear operator $L$ given in (\ref{ol}), obtained as the linearization of \eqref{mfewt} at the approximating solution $W$, is a key step in the so-called nonlinear Lyapunov-Schmidt reduction. Notice that, setting $y =y_{\xi_j}(x)/\delta_j $, the operator $L$ formally approaches as $\de \to 0$ the operator $\hat L$ defined in $\mathbb{R}^2$ by $$\hat L(\phi) = \Delta\phi+{8\over (1+|y|^2)^2}\left(\phi - {1\over\pi}\int_{\R^2}{\phi(z)\over (1+|z|^2)^2}\,dz\right).$$ Due to the intrinsic invariances, the kernel of $\hat L$ in $L^\infty(\R^2)$ is non-trivial and is spanned by $1$ and $Y_j$, $j=0,1,2$, where $\ds Y_{i}(y) = { 4 y_i \over 1+|y|^2}$, $i=1,2$, and $\ds Y_{0}(y) = 2\,{1-|y|^2\over 1+|y|^2}$. In view of \cite{DeKM,EF,EGP}, it is by now rather standard to show the invertibility of $L$ in a suitable ``orthogonal'' space, and a sketch of the proof will be given in Appendix A. However, as observed in \cite{EF}, for Dirichlet Liouville-type equations on bounded domains as in \cite{DeKM,EGP} the corresponding limiting operator $\tilde L$ takes the form $\tilde L(\phi)=\Delta\phi+{8\over (1+|y|^2)^2}\phi$ and the function $1$ does not belong to its kernel, making it possible to disregard the ``dilation parameters'' $\de_i$ in the reduction. As we will see, two additional parameters $\mu_1$ and $\mu_2$ are needed in the reduction (one associated with all ``positive bubbles'' and the other with all ``negative bubbles''), and in this respect our problem displays a new feature with respect to Dirichlet Liouville-type equations, making our situation very similar to the one arising in the study of critical problems in higher dimensions. Roughly speaking, $L$ resembles a ``direct sum'' of linear operators for mean field type equations. 
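\medskip \noindent For the reader's convenience, let us briefly recall the standard origin of the kernel functions $Y_0$, $Y_1$, $Y_2$ (this is classical and not part of the reduction itself): they arise by differentiating the family of Liouville bubbles $\ds U_{\de}(y)=\log{8\de^2\over (\de^2+|y|^2)^2}$, which solve $\Delta U_\de + e^{U_\de}=0$ in $\R^2$ for every $\de>0$, with respect to dilations and translations. Indeed, $$\de\,\partial_\de U_\de\Big|_{\de=1}=2-{4\over 1+|y|^2}=-Y_0(y),\qquad \partial_{y_i}U_1(y)=-{4y_i\over 1+|y|^2}=-Y_i(y),\quad i=1,2,$$ so that $\ds \Delta Y_i + {8\over (1+|y|^2)^2}\,Y_i=0$ in $\R^2$ for $i=0,1,2$. Moreover, $\ds\int_{\R^2}{Y_i(z)\over (1+|z|^2)^2}\,dz=0$ for $i=1,2$ by oddness, while for $Y_0$ the substitution $t=|z|^2$ gives $$\int_{\R^2}{Y_0(z)\over (1+|z|^2)^2}\,dz=2\pi\int_0^{\infty}{1-t\over (1+t)^3}\,dt=2\pi\int_0^{\infty}\Big[{2\over (1+t)^3}-{1\over (1+t)^2}\Big]dt=0,$$ so the averaging term in $\hat L$ vanishes on each $Y_i$ and $\hat L(Y_i)=0$ for $i=0,1,2$; finally, $\hat L(1)=0$ in view of $\int_{\R^2}(1+|z|^2)^{-2}dz=\pi$.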
\medskip \noindent To be more precise, for $i=0,1,2$ and $j=1,\dots,m$ introduce the functions \begin{equation*} Z_{ij}(x) = Y_i\left({y_{\xi_j}(x)\over \delta_j}\right)=\left\{\begin{array}{ll} \ds 2 {\de_j^2- |y_{\xi_j}(x)|^2\over \de_j^2+|y_{\xi_j}(x)|^2} &\hbox{for }i=0\\[0.5cm] \ds {4\de_j(y_{\xi_j}(x))_i \over \de_j^2+|y_{\xi_j}(x)|^2}&\hbox{for }i=1,2, \end{array} \right. \end{equation*} and set $Z_1=\displaystyle \sum_{l=1}^{m_1} Z_{0l}$ and $Z_2=\displaystyle \sum_{l=m_1+1}^m Z_{0l}$. For $i=1,2$ and $j=1,\dots,m$, let $PZ_i$, $PZ_{ij}$ be the projections of $Z_i$, $Z_{ij}$, defined as the solutions in $\bar H$ of \begin{equation} \label{ePZ} \begin{array}{rl} \Delta_g PZ_i&\ds=\sum_{l\in I_i}\Big[\chi_l \Delta_g Z_{0l}-\frac{1}{|S|}\int_S \chi_l \Delta_g Z_{0l} dv_g\Big]\\[0.5cm] \Delta_g PZ_{ij} &\ds=\chi_j \Delta_g Z_{ij}-\frac{1}{|S|}\int_S \chi_j \Delta_g Z_{ij} dv_g, \end{array} \end{equation} where $I_1=\{1,\dots,m_1\}$ and $I_2=\{m_1+1,\dots,m\}$. In Appendix A we prove the following result: \begin{prop} \label{p2} There exists $\delta_0>0$ so that for all $0<\delta\leq \delta_0$, $h\in C(S)$ with $\int_Sh\,dv_g=0$, $\mu\in\mathcal{M}$, $\xi \in \Xi$ there is a unique solution $\phi \in \bar H \cap W^{2,2}(S)$ and $c_{0i},c_{ij} \in \mathbb{R}$ of \begin{equation}\label{plco} \left\{ \begin{array}{ll} L(\phi) = h + \displaystyle \sum_{i=1}^{2}\Big[ c_{0i} \Delta_gPZ_i + \sum_{j=1}^m c_{ij} \Delta_g PZ_{ij}\Big]&\text{in }S\\ \ds\int_S \phi \Delta_g PZ_i dv_g=\int_S \phi \Delta_g PZ_{ij} dv_g=0 &\forall\: i=1,2,\, j=1,\dots,m. \end{array} \right. 
\end{equation} Moreover, the map $(\mu ,\xi) \mapsto (\phi,c_{0i},c_{ij})$ is twice-differentiable in $\mu$ and once-differentiable in $\xi$ with \begin{eqnarray} &&\|\phi \|_\infty \le C |\log \de| \|h\|_*\:,\qquad \displaystyle \sum_{i=1}^{2}\Big[|c_{0i}|+ \sum_{j=1}^m |c_{ij}| \Big] \le C\|h\|_* \label{estmfe1} \\ && \sum_{i=1}^2 \bigg[\|\partial_{\mu_i} \phi\|_\infty + \sum_{k=1}^2{1\over |\log \de|} \|\partial_{\mu_i\mu_k} \phi \|_\infty + \sum_{j=1}^m \de \|\partial_{(\xi_j)_i} \phi\|_\infty \bigg] \le C |\log \de|^2 \|h\|_*\label{estd} \end{eqnarray} for some $C>0$. \end{prop} \medskip \noindent Let us recall that $u=W+\phi$ solves \eqref{mfewt} if $\phi\in \bar H$ satisfies (\ref{ephi}). Since the operator $L$ is not fully invertible, in view of Proposition \ref{p2} one can solve the nonlinear problem (\ref{ephi}) just up to a linear combination of $\Delta_g PZ_1$, $\Delta_g PZ_2$ and $\Delta_g PZ_{ij}$, as explained in the following (see Appendix B for the proof): \begin{prop}\label{lpnlabis} There exists $\delta_0>0$ so that for all $0<\delta\leq \delta_0$, $\mu\in\mathcal{M}$, $\xi \in \Xi$ the problem \begin{equation}\label{pnlabis} \left\{ \begin{array}{ll} L(\phi)= -[R+N(\phi)] + \displaystyle \sum_{i=1}^{2}\Big[ c_{0i} \Delta_gPZ_i + \sum_{j=1}^m c_{ij} \Delta_g PZ_{ij}\Big] & \text{in } S\\ \ds \int_S \phi \Delta_g PZ_i dv_g=\int_S \phi \Delta_g PZ_{ij} dv_g= 0 &\forall\: i=1,2,\, j=1,\dots,m \end{array} \right. \end{equation} admits a unique solution $\phi(\mu ,\xi) \in \bar H \cap W^{2,2}(S)$ and $c_{0i} (\mu,\xi),\,c_{ij}(\mu ,\xi) \in \R$, $i=1,2$ and $j=1,\dots,m$, where $\de_j>0$ are as in \eqref{repla0} and $N$, $R$ are given by \eqref{nlt}, \eqref{R}, respectively. 
Moreover, the map $(\mu,\xi)\mapsto (\phi(\mu,\xi),c_{0i}(\mu,\xi),c_{ij}(\mu,\xi))$ is twice-differentiable in $\mu$ and once-differentiable in $\xi$ with \begin{eqnarray} \label{cotaphi1bis} \|\phi\|_\infty &\hspace{-0.2cm} \le & \hspace{-0.2cm} C\left( \delta |\log \delta |\, |\nabla \varphi_m^*(\xi)|_g+ \delta^{2-\sigma}|\log\delta|^2\right)\\ \label{cotadphi1bis} \hspace{-0.6cm}\sum_{i=1}^2\Big[ \|\partial_{\mu_i} \phi \|_\infty + \sum_{j=1}^m \de \|\partial_{(\xi_j)_i} \phi \|_\infty +\sum_{k=1}^2{\|\partial_{\mu_i\mu_k} \phi \|_\infty\over |\log \de|} \Big] &\hspace{-0.2cm} \le &\hspace{-0.2cm}C \left(\de |\log \delta |^2 |\nabla \varphi_m^*(\xi)|_g+\delta^{2-\sigma}|\log\delta|^3\right) \end{eqnarray} \end{prop} \noindent The function $[W+\phi](\mu,\xi)$ will be a true solution of \eqref{mfewt} if $\mu\in\mathcal M$ and $\xi\in \Xi$ are such that $c_{0i}(\mu,\xi)=c_{ij}(\mu, \xi)=0$ for all $i=1,2,$ and $j=1,\dots,m$. This problem is equivalent to finding critical points of the reduced energy $E_{\lambda_1,\lambda_2}(\mu, \xi)= J_{\lambda_1,\lambda_2}\big([W+\phi](\mu,\xi)\big)$, where $J_{\lambda_1,\lambda_2}$ is given by \eqref{energy}, as stated in the following lemma (whose proof we omit): \begin{lem}\label{cpfc0bis} There exists $\delta_0$ such that, if $(\mu,\xi)\in \mathcal{M}\times \Xi$ is a critical point of $E_{\lambda_1,\lambda_2}$ for $0< \de\le \de_0$, then $u=W(\mu,\xi)+\phi(\mu ,\xi)$ is a solution to \eqref{mfewt}, where $\de_j$ are given by \eqref{repla0}. \end{lem} \noindent Once equation \eqref{mfewt} has been reduced to the search of critical points of $E_{\lambda_1,\lambda_2} $, it becomes crucial to show that the main asymptotic term of $E_{\lambda_1,\lambda_2} $ is given by $J_{\lambda_1,\lambda_2} (W)$, for which an expansion has been given in Theorem \ref{expansionenergy}. More precisely, by estimates in Appendix B we have that \begin{theo} \label{fullexpansionenergy} Assume \eqref{repla0}-\eqref{repla1}. 
The following expansion holds \begin{eqnarray} \label{fullJUt} E_{\lambda_1,\lambda_2} (\mu,\xi) &=&-8\pi \Big(m_1 + {m_2\over\tau^2}\Big) - \lambda_1 \log (\pi m_1) - \lambda_2\log(\pi m_2) + 2\big(\lambda_1 -8\pi m_1 \big)\log\de \nonumber \\ &&\,\, + {2\over \tau^2} \big(\lambda_2\tau^2 -8\pi m_2 \big)\log\de -32\pi^2 \varphi_m^*(\xi)+ 2\big(\lambda_1 -8\pi m_1 \big)\log\mu_1 \\ &&\,\, + A_1^*(\xi) \mu_1^2\delta^2\log \delta + \left[A_1^*(\xi) \mu_1^2 \log\mu_1- B_1^*(\xi)\mu_1^2\right] \delta^2 + {1\over\tau^2}\Big\{ 2\big(\lambda_2\tau^2 -8\pi m_2 \big) \log\mu_2 \nonumber \\ &&\,\, + A_2^*(\xi) \mu_2^2 \de^2\log \de + \left[A_2^*(\xi) \mu_2^2\log\mu_2 - B_2^*(\xi) \mu_2^2\right] \de^2\Big\} \nonumber + o(\de^2) +r_{\lambda_1,\lambda_2}(\mu,\xi) \nonumber \end{eqnarray} in $C^2(\mathbb{R}^2)$ and $C^1(\Xi)$ as $\de\to 0^+$, where $\varphi_m^*(\xi)$, $A_k^*(\xi)$ and $B_k^*(\xi)$ are given by \eqref{fim}, \eqref{vk} and \eqref{Bk}, $k=1,2$, respectively. The term $r_{\lambda_1,\lambda_2}(\mu,\xi)$ satisfies \begin{eqnarray} \label{rlambda} |r_{\lambda_1,\lambda_2}(\mu,\xi)|&+&\frac{\de |\nabla_\xi r_{\lambda_1,\lambda_2}(\mu,\xi)|}{|\log \de|} +\frac{|\nabla_\mu r_{\lambda_1,\lambda_2}(\mu,\xi)|}{|\log \de|} \nonumber \\ &+&\frac{|D^2_{\mu} r_{\lambda_1,\lambda_2}(\mu,\xi)|}{|\log \de|^2} \leq C\left( \delta^2 |\log \delta |\, |\nabla \varphi_m^*(\xi)|_g^2 + \de^{3-\sigma}|\log\de|^2\right) \end{eqnarray} for some $C>0$ independent of $(\mu,\xi)\in \mathcal M\times\Xi$. \end{theo} \noindent We are now in a position to establish the main result stated in the Introduction. We shall argue similarly to \cite[Theorem 1.5]{EF}. \begin{proof}[{\bf Proof (of Theorem \ref{main2}):}] According to Lemma \ref{cpfc0bis}, we just need to find a critical point of $E=E_{\lambda_1,\lambda_2}(\mu,\xi)$ with $\mu=(\mu_1,\mu_2)$. Recall that $\tau>0$ is fixed. 
Assumptions \eqref{cond0} and \eqref{cond1} allow us to choose $\mu_k=\mu_k(\lambda_k,\xi)$ for $\lambda_k\tau^{2(k-1)}$ close to $8\pi m_k$, $k=1,2$, respectively. Precisely, fixing $k\in\{1,2\}$ we choose $\lambda_k\tau^{2(k-1)}-8\pi m_k=\de^2$ ($-\de^2$ resp.) if either $A_k^*(\xi)>0$ ($<0$ resp.) or $A_k^*(\xi)=0$, $B_k^*(\xi)>0$ ($<0$ resp.) in $U$. Thus, we deduce the expansions \begin{equation*} \begin{split} \frac{ \tau^{2(k-1)} \partial_{\mu_k} E(\mu,\xi)}{\lambda_k\tau^{2(k-1)}-8\pi m_k} =&\, \frac{2}{\mu_k} + 2 A_k^*(\xi) \mu_k \log\de+A_k^*(\xi)(2\mu_k\log\mu_k+\mu_k)-2B_k^*(\xi)\mu_k\\ &\,+ o(1) + O\left(|\log\de |^2\,|\nabla\varphi_m^*(\xi)|^2_g\right) \end{split} \end{equation*} and \begin{equation*} \begin{split} \frac{ \tau^{2(k-1)} \partial_{\mu_k\mu_k} E(\mu,\xi)}{\lambda_k\tau^{2(k-1)}-8\pi m_k} =&\, -\frac{2}{\mu_k^2} + 2 A_k^*(\xi)\log\de +A_k^*(\xi)(2\log\mu_k+3) -2B_k^*(\xi) \\ &\,+ o(1) + O\left(|\log \de |^3\,|\nabla\varphi_m^*(\xi)|^2_g\right), \end{split} \end{equation*} as $\de\to 0^+$. Arguing in the same way as in the proof of Theorem 3.2 in \cite{EF}, we conclude the existence of a $C^1$ map $\mu_k=\mu_k(\lambda_k,\xi)$ satisfying $\partial_{\mu_k} E(\mu(\lambda,\xi),\xi)=0$, with $\lambda=(\lambda_1,\lambda_2)$ and $\mu=(\mu_1,\mu_2)$ for all $\xi\in U$. 
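\medskip \noindent To illustrate the mechanism in the simplest situation, say $A_k^*(\xi)>0$ on $U$, neglecting all remainder terms in the expansion of $\partial_{\mu_k} E$ above, the equation $\partial_{\mu_k} E(\mu,\xi)=0$ reduces at leading order to $$\frac{2}{\mu_k}+2A_k^*(\xi)\,\mu_k\log\de=0,\qquad\text{i.e.}\qquad \mu_k^2=\frac{1}{A_k^*(\xi)\,|\log\de|},$$ in view of $\log\de<0$. At such a point the leading part of $\tau^{2(k-1)}\partial_{\mu_k\mu_k} E/(\lambda_k\tau^{2(k-1)}-8\pi m_k)$ equals $$-\frac{2}{\mu_k^2}+2A_k^*(\xi)\log\de=-4A_k^*(\xi)|\log\de|\ne 0,$$ so the critical point in $\mu_k$ is nondegenerate and the implicit function theorem produces the $C^1$ map $\mu_k=\mu_k(\lambda_k,\xi)$; the remaining sign configurations are handled analogously.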
Now, considering $\tilde E (\xi)=E_{\lambda}\big(\mu_1(\lambda_1,\xi),\mu_2(\lambda_2,\xi),\xi\big)$ and again arguing in the same way as in the proof of Theorem 3.2 in \cite{EF} it follows that $\tilde E (\xi)=-32\pi^2 \varphi_m^*(\xi)+ O(\de^2|\log\de|)$, \begin{equation*} \begin{split} \nabla_\xi \tilde E (\xi)& = \nabla_\xi E \big(\mu_1 (\lambda_1,\xi),\mu_2(\lambda_2,\xi),\xi\big)+\nabla_\mu E \big( \mu_1(\lambda_1,\xi),\mu_2(\lambda_2,\xi),\xi\big)\nabla_\xi \mu(\lambda,\xi)\\ &=-32\pi^2 \nabla \varphi_m^*(\xi)+ O(\de |\log \de|^{2}) \end{split} \end{equation*} uniformly in $\xi \in U$ and there exists a critical point $\xi_{\lambda_1,\lambda_2}=\xi_\de \in U$ of $\tilde E (\xi)$, since $\ml{D}$ is a stable critical set of $\varphi_m^*$ (see Definition \ref{stable}). Up to taking $U$ smaller so that $\nabla\varphi_m^*(\xi)\ne 0$ for all $\xi\in U\setminus\ml{D}$, it can be deduced that the pair $\big(\mu(\lambda_1,\lambda_2,\xi_\de),\xi_\de\big)$ is a critical point of $ E(\mu,\xi)$ and, along a sub-sequence, $\xi_\de \to q \in \ml{D}$ as $\de\to 0$, namely, as $\lambda_1 \to 8\pi m_1$ and $\lambda_2\tau^2\to 8\pi m_2$. By construction, the corresponding solution has the required asymptotic properties \eqref{conc}. See the proof of Theorem 1.5 in \cite{EF} for more details. This completes the proof. \end{proof} \section{The reduced energy}\label{sec4} \noindent The purpose of this section is to give an asymptotic expansion of the ``reduced energy'' $J_{\lambda_1,\lambda_2}(W)$, where $J_{\lambda_1,\lambda_2}$ is the energy functional given by \eqref{energy}. For technical reasons, we will be concerned with establishing it in a $C^2$-sense in $\mu$ and just in a $C^1$-sense in $\xi$. To this aim, the following result will be very useful; see \cite[Lemma 3.1]{EF} for a proof. 
\begin{lem}\label{ieuf} Let $f\in C^{2,\gamma}(S)$, $0<\gamma<1$, possibly depending on $\xi$, and denote by $P_2(f)$ the second-order Taylor expansion of $f(x)$ at $\xi$: $$P_2 f(x)=f(\xi)+\langle\nabla (f \circ y_\xi^{-1}) (0), y_\xi(x)\rangle+{1\over2}\langle D^2 (f\circ y_\xi^{-1})(0)y_\xi(x), y_\xi(x) \rangle.$$ The following expansions hold as $\delta \to 0$: \begin{eqnarray*} \int_S \chi_\xi e^{-\varphi_\xi} f(x) e^{U_{\de,\xi}} dv_g&=& 8\pi f(\xi)-2 \de^2 \Delta_g f (\xi) \left[ 2\pi \log \delta+ \int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy +\pi \right]\\ &&+8\de^2\int_S \chi_\xi e^{-\varphi_\xi} {f(x)-P_2(f)(x)\over |y_{\xi}(x)|^4}\,dv_g+ 4\de^2 f(\xi) \int_{\mathbb{R}^2} {\chi'(|y|) \over |y|^3 } dy+o(\delta^2), \end{eqnarray*} \begin{eqnarray*} \int_S \chi_\xi e^{-\varphi_\xi} f(x) e^{U_{\de,\xi}}\frac{dv_g}{\delta^2+|y_\xi(x)|^2} = \frac{4\pi}{\delta^2}f(\xi)+\pi \Delta_g f(\xi)+O(\delta^{\gamma}) \end{eqnarray*} and \begin{eqnarray*} \int_S \chi_\xi e^{-\varphi_\xi} f(x) e^{U_{\de,\xi}}\frac{a \delta^2-|y_\xi(x)|^2}{(\delta^2+|y_\xi(x)|^2)^2} dv_g =\frac{4 \pi}{3 \de^2}(2a-1) f(\xi)+(a-2)\frac{\pi}{3} \Delta_g f(\xi) +O(\delta^\gamma) \end{eqnarray*} for $a \in \mathbb{R}$. \end{lem} \medskip \noindent We are now ready to establish the expansion of $J_{\lambda_1,\lambda_2}(W)$: \begin{theo} \label{expansionenergy} Assume \eqref{repla0}-\eqref{repla1}. 
The following expansion holds \begin{eqnarray} \label{JUt} J_{\lambda_1,\lambda_2} (W) &=&\,\, -8\pi \Big(m_1+{m_2\over\tau^2}\Big) -\lambda_1 \log (\pi m_1)-\lambda_2\log(\pi m_2) + 2\big(\lambda_1 -8\pi m_1 \big)\log\de \nonumber\\ &&\,\, + {2\over \tau^2} \big(\lambda_2\tau^2 -8\pi m_2 \big)\log\de -32\pi^2 \varphi_m^*(\xi)+ 2\big(\lambda_1 -8\pi m_1 \big)\log\mu_1\\ &&\,\, + A_1^*(\xi) \mu_1^2\delta^2\log \delta + \left[A_1^*(\xi) \mu_1^2 \log\mu_1- B_1^*(\xi)\mu_1^2\right] \delta^2+ {1\over\tau^2}\Big\{ 2\big(\lambda_2\tau^2 -8\pi m_2 \big) \log\mu_2 \nonumber \\ &&\,\, + A_2^*(\xi) \mu_2^2 \de^2\log \de + \left[A_2^*(\xi) \mu_2^2\log\mu_2 - B_2^*(\xi) \mu_2^2\right] \de^2\Big\} + o(\de^2)\nonumber \end{eqnarray} in $C^2(\mathbb{R}^2)$ and $C^1(\Xi)$ as $\de \to 0^+$, where $\varphi_m^*(\xi)$, $A_1^*(\xi)$, $A_2^*(\xi)$, $B_{1}^*(\xi)$ and $B_{2}^*(\xi)$ are given by \eqref{fim}, \eqref{vk} and \eqref{Bk}, $k=1,2$, respectively. \end{theo} \noindent As in \cite[Theorem 3.2]{EF}, the proof will be divided into several steps. \begin{proof}[{\bf Proof (of (\ref{JUt}) in $C(\mathbb{R}^2 \times \Xi)$):}] First, let us consider the gradient term. Integrating by parts we have that \begin{equation*} \begin{split} \int_S |\nabla W|_g^2 dv_g &= \sum_{j,l=1}^{m_1} \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g -{1\over \tau} \sum_{j=1}^{m_1}\sum_{l=m_1+1}^m \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g\\ &\ \ \ -{1\over \tau} \sum_{j=m_1+1}^m\sum_{l=1}^{m_1} \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g + {1\over \tau^2} \sum_{j,l=m_1+1}^m \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g \end{split} \end{equation*} in view of $\int_S W dv_g=0$. 
Since by (\ref{green}) and (\ref{ePu}) \begin{eqnarray} \label{tricky} \int_S \chi_j e^{-\varphi_j} e^{U_j} G(x,\xi_l) dv_g= \int_S (-\Delta_g PU_j) G(x,\xi_l) dv_g= PU_j(\xi_l) \end{eqnarray} for all $j,l=1,\dots,m$, by Lemmata \ref{ewfxi}, \ref{ieuf}, (\ref{tricky}) and computations done in the proof of \cite[Theorem 3.2]{EF}, we have that for $l=j$ \begin{eqnarray*} \int_S \chi_j e^{-\varphi_j}e^{U_j}W_j dv_g=-16\pi -32 \pi \log \de_j +64\pi^2 H(\xi_j,\xi_j)+16\pi \alpha_{\de_j,\xi_j}-32\pi \delta_j^2 F_{\xi_j}(\xi_j)+O(\de^4 |\log \delta|^2). \end{eqnarray*} Similarly, by Lemmata \ref{ewfxi}, \ref{ieuf} and (\ref{tricky}) we have that for $l \not= j$ \begin{eqnarray*} \int_S \chi_j e^{-\varphi_j}e^{U_j}W_l dv_g &=& 64\pi^2 G(\xi_l,\xi_j) +8\pi (\alpha_{\de_j,\xi_j}+\alpha_{\de_l,\xi_l})-16\pi (\delta_j^2 F_{\xi_j}(\xi_l)+ \delta_l^2 F_{\xi_l}(\xi_j)) \\&&+O(\de^4 |\log \delta|^2). \end{eqnarray*} Setting $\ds\alpha_{1,\de,\xi}=\sum_{j=1}^{m_1} \alpha_{\de_j,\xi_j}$, $\ds \alpha_{2,\de,\xi}=\sum_{j=m_1+1}^{m} \alpha_{\de_j,\xi_j},$ $\ds F_{1,\de,\xi}(x)=\sum_{j=1}^{m_1} \delta_j^2 F_{\xi_j}(x)$ and $\ds F_{2,\de,\xi}(x)=\sum_{j=m_1+1}^{m} \delta_j^2 F_{\xi_j}(x)$, we find that \begin{eqnarray*} &&\sum_{j,l=1}^{m_1} \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g=-16\pi m_1 + \sum_{j=1}^{m_1}\Big[ -32\pi \log(\mu_1 \de) - 16\pi \log V_1(\xi_j) - 64\pi^2 H(\xi_j,\xi_j) \\ &&- 64\pi^2\sum_{i=1\atop i\ne j}^{m_1} G(\xi_j,\xi_i) + {128\pi^2\over \tau} \sum_{i=m_1+1}^m G(\xi_j,\xi_i)\Big] + 16\pi m_1\al_{1,\de,\xi} - 32\pi \sum_{j=1}^{m_1} F_{1,\de,\xi}(\xi_j) + O(\de^4|\log\de|^2), \end{eqnarray*} \begin{eqnarray*} \sum_{j=1}^{m_1}\sum_{l=m_1+1}^m \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g & = & 64\pi^2 \sum_{j=1}^{m_1} \sum_{l=m_1+1}^{m} G(\xi_j,\xi_l) + 8\pi m_2 \al_{1,\de,\xi} + 8\pi m_1 \al_{2,\de,\xi}\\ && -16\pi \sum_{j=m_1+1}^{m} F_{1,\de,\xi}(\xi_j) -16\pi \sum_{j=1}^{m_1} F_{2,\de,\xi}(\xi_j) + O(\de^4|\log\de|^2), \end{eqnarray*} \begin{eqnarray*} 
\sum_{j=m_1+1}^{m}\sum_{l=1}^{m_1} \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g & = & 64\pi^2 \sum_{j=m_1+1}^{m} \sum_{l=1}^{m_1} G(\xi_j,\xi_l) + 8\pi m_2 \al_{1,\de,\xi} + 8\pi m_1 \al_{2,\de,\xi}\\ & & -16\pi \sum_{j=m_1+1}^{m} F_{1,\de,\xi}(\xi_j) -16\pi \sum_{j=1}^{m_1} F_{2,\de,\xi}(\xi_j) + O(\de^4|\log\de|^2), \end{eqnarray*} and \begin{eqnarray*} &&\hspace{-0.6cm}\sum_{j,l=m_1+1}^{m} \int_S \chi_j e^{-\varphi_j} e^{U_j}W_l dv_g=-16\pi m_2 + \sum_{j=m_1+1}^{m}\Big[ -32\pi \log(\mu_2 \de) - 16\pi \log V_2(\xi_j) - 64\pi^2 H(\xi_j,\xi_j) \\ &&\hspace{-0.6cm} - 128\pi^2\tau \sum_{i=1}^{m_1} G(\xi_j,\xi_i) - 64\pi^2 \sum_{i=m_1+1\atop i\ne j}^m G(\xi_j,\xi_i)\Big] + 16\pi m_2\al_{2,\de,\xi} - 32\pi \sum_{j=m_1+1}^{m} F_{2,\de,\xi}(\xi_j) + O(\de^4|\log\de|^2) \end{eqnarray*} in view of (\ref{repla0}). Now, setting $\ds\alpha_{\de,\xi}=\alpha_{1,\de,\xi} - {1\over\tau} \al_{2,\de,\xi}$ and $\ds F_{\de,\xi}(x) = F_{1,\de,\xi}(x) - {1\over \tau} F_{2,\de,\xi}(x),$ summing up the four previous expansions, for the gradient term we get that \begin{eqnarray}\label{gtJW} {1\over 2}\int_S |\nabla W|_g^2 dv_g & = & -\, 8 \pi \left(m_1+{m_2\over \tau^2 }\right) -16 \pi \left(m_1 \log(\mu_1\de)+{m_2\over \tau^2}\log(\mu_2\de) \right) - 32 \pi^2 \varphi_m^*(\xi)\\ &&+\, 8 \pi \left(m_1 - {m_2\over \tau}\right) \alpha_{\de,\xi} -16 \pi \sum_{j=1}^{m _1}F_{\delta,\xi}(\xi_j) + {16 \pi\over \tau} \sum_{j=m_1+1}^{m}F_{\delta,\xi}(\xi_j) +o(\de^2)\nonumber \end{eqnarray} in view of \eqref{fim}. \medskip \noindent Let us now expand the potential terms in $J_{\lambda_1,\lambda_2}(W)$, similarly to the proof of \cite[Theorem 3.2]{EF}. By Lemma \ref{ewfxi} for any $j=1,\dots,m_1$ we find that \begin{eqnarray*} \int_{B_{r_0}(\xi_j)}V_1 e^W dv_g &=&{e^{\al_{\de,\xi}} \over 8\de_j^2}\left[\int_S \chi_j e^{U_j} \rho_j e^{ -2 F_{\delta,\xi}} dv_g -8\delta_j^2 \int_{A_{2r_0}(\xi_j)} \frac{\chi_j \rho_j}{|y_{\xi_j}(x)|^4} dv_g+O(\delta^4 |\log \delta|)\right]. 
\end{eqnarray*} By Lemma \ref{ieuf} (with $f(x)=e^{\varphi_j}\rho_j e^{\alpha_{\delta,\xi}-2F_{\delta,\xi}}$) we can now deduce that \begin{eqnarray*} && 8 \delta_j^2 e^{-\al_{\de,\xi}} \int_{B_{r_0}(\xi_j)}V_1 e^W dv_g = 8\pi \rho_j(\xi_j) e^{ -2F_{\delta,\xi}(\xi_j)} -4\pi \left(\Delta_g \rho_j (\xi_j)-2 K(\xi_j)\rho_j(\xi_j)\right) \delta_j^2 \log \delta_j \\ &&-2 \left(\Delta_g \rho_j (\xi_j)-2 K(\xi_j)\rho_j(\xi_j)\right) \left( \int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy +\pi \right) \de_j^2 +4 \de_j^2 \rho_j(\xi_j) \int_{\mathbb{R}^2} {\chi'(|y|) \over |y|^3 } dy\\ &&+8\de_j^2\int_{B_{r_0}(\xi_j)} \left[ V_1e^{8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j) - {8\pi\over\tau} \sum\limits_{i=m_1+1}^m G(x,\xi_i) } - e^{-\varphi_j} \frac{ P_2(e^{\varphi_j}\rho_j)}{|y_{\xi_j}(x)|^4}\right]dv_g\\ &&-8\de_j^2\int_{A_{2r_0}(\xi_j)} \chi_j e^{-\varphi_j} \frac{ P_2(e^{\varphi_j}\rho_j)}{|y_{\xi_j}(x)|^4}\,dv_g+o(\delta^2) \end{eqnarray*} in view of $\frac{\rho_j(x)}{|y_{\xi_j}(x)|^4}=V_1e^{8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j) - {8\pi\over\tau} \sum\limits_{i=m_1+1}^m G(x,\xi_i)}$ in $B_{r_0}(\xi_j)$ and by (\ref{equationvarphi}) \begin{eqnarray} \label{gaussian} \Delta_g \left[e^{\varphi_j} \rho_j \right](\xi_j)= \Delta_g \rho_j (\xi_j)-2 K(\xi_j)\rho_j(\xi_j). \end{eqnarray} Now, by Lemma \ref{ewfxi} for any $j=m_1+1,\dots,m$ we find that \begin{eqnarray*} \int_{B_{r_0}(\xi_j)}V_1 e^W dv_g &=&\int_{B_{r_0} (\xi_j)} V_1\Big[ {\rho_j\over V_2} \Big]^{-1/\tau} e^{-{1\over\tau}[U_j-\log(8\de_j^2)] +\al_{\de,\xi} + O(\de^2)} dv_g\\ &=& e^{\al_{\de,\xi}} \bigg[ \int_{B_{r_0} (\xi_j)} V_1 e^{ 8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j) - {8\pi\over\tau} \sum\limits_{i=m_1+1}^m G(x,\xi_i) } dv_g + O(\de^2 )\bigg]. 
\end{eqnarray*} On the other hand, we have that \begin{equation*} \int_{S \setminus \cup_{j=1}^m B_{r_0}(\xi_j)} V_1 e^W dv_g= e^{\al_{\de,\xi}} \bigg[\int_{S \setminus \cup_{j=1}^m B_{r_0}(\xi_j)} V_1 e^{ 8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j) - {8\pi\over\tau} \sum\limits_{i=m_1+1}^m G(x,\xi_i)}dv_g+O(\de^2 )\bigg]. \end{equation*} Since \begin{equation}\label{s1m1em2f} \sum_{j=1}^{m_1} e^{-2 F_{\delta,\xi}(\xi_j)}=m_1-2 \sum_{j=1}^{m_1} F_{\delta,\xi}(\xi_j)+O(\delta^4) \end{equation} and by (\ref{repla0}) there holds $\ds \delta_j^2 \log \delta_j=\rho_j(\xi_j) \mu_i^2 \delta^2 \log \delta + \rho_j(\xi_j)\mu_i^2\de^2\log\mu_i +\frac{1}{2}\rho_j(\xi_j) \log \rho_j(\xi_j) \mu_i^2 \delta^2,$ we then obtain that \begin{eqnarray} \frac{1}{\pi}e^{-\alpha_{\delta,\xi}}\mu_1^2\delta^2 \int_{S}V_1 e^W dv_g=m_1 - \frac{A_1^*(\xi)}{8\pi} \mu_1^2\delta^2 \log(\mu_1 \delta) +\frac{B_{1,\chi}(\xi)}{8\pi} \mu_1^2 \delta^2-2 \sum_{j=1}^{m_1} F_{\delta,\xi}(\xi_j)+o(\delta^2),\label{intV1eW} \end{eqnarray} where \begin{eqnarray*} B_{1,\chi}(\xi)&=&-2\pi \sum_{j=1}^{m_1} [\Delta_g \rho_j(\xi_j) -2 K(\xi_j) \rho_j(\xi_j)] \log \rho_j(\xi_j) \nonumber \\ &&-\frac{A_1^*(\xi) }{2 \pi} \bigg( \int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy +\pi \bigg) +4 \int_{\mathbb{R}^2} {\chi'(|y|) \over |y|^3 } dy \sum_{j=1}^{m_1} \rho_j(\xi_j)\\ && +8 \int_S \left[V_1e^{8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j)-{8\pi\over \tau}\sum\limits_{l=m_1+1}^{m}G(x,\xi_l)}-\sum_{j=1}^{m_1}\chi_j e^{-\varphi_j} \frac{ P_2(e^{\varphi_j}\rho_j)}{|y_{\xi_j }(x)|^4}\right] dv_g. \nonumber \end{eqnarray*} By integration by parts on integrals involving $\chi$ and the splitting of $S$ as the union of $\cup_{j=1}^{m_1} B_r(\xi_j)$ and $S \setminus \cup_{j=1}^{m_1} B_r(\xi_j)$, $r\leq r_0$, we easily deduce that \begin{eqnarray*} B_{1,\chi}(\xi)&=& -2\pi \sum_{j=1}^{m_1} [\Delta_g \rho_j(\xi_j) -2 K(\xi_j) \rho_j(\xi_j)] \log \rho_j(\xi_j) -\frac{A_1^*(\xi)}{2}\\ &&+ 8 \int_{S \setminus \cup_{j=1}^{m_1} 
B_r(\xi_j)} V_1e^{8\pi \sum\limits_{j=1}^{m_1} G(x,\xi_j) -{8\pi\over\tau }\sum\limits_{l=m_1+1}^m G(x,\xi_l) } dv_g-\frac{8\pi}{r^2} \sum_{j=1}^{m_1} \rho_j(\xi_j)-A_1^*(\xi) \log \frac{1}{r} \\ &&+8\sum_{j=1}^{m_1} \int_{B_r(\xi_j)}\frac{ e^{\varphi_j(x)}\rho_j(x)-P_2(e^{\varphi_j}\rho_j)(x)}{|y_{\xi_j}(x)|^4}\,e^{-\varphi_j(x)} dv_g \end{eqnarray*} in view of \eqref{gaussian} and the definitions of $A_1^*(\xi)$, $P_2(e^{\varphi_j}\rho_j)$. As a by-product we have that $B_{1,\chi}(\xi)$ does not depend on $\chi$ or on $r \leq r_0$. Since $$\lim_{r \to 0} \int_{B_r(\xi_j)}\frac{ e^{\varphi_j(x)}\rho_j(x)-P_2(e^{\varphi_j}\rho_j)(x)}{|y_{\xi_j}(x)|^4}\,e^{-\varphi_j(x)} dv_g=0$$ in view of $e^{\varphi_j(x)}\rho_j(x)-P_2(e^{\varphi_j}\rho_j)(x)=o(|y_{\xi_j}(x)|^2)$ as $x \to \xi_j$, we have that $B_{1,\chi}(\xi)$ coincides with $B_1^*(\xi)$ as defined in \eqref{Bk} with $k=1$. \medskip\noindent Similarly to the above, by Lemmata \ref{ewfxi}, \ref{ieuf} (with $f(x)=e^{\varphi_j}\rho_j e^{-\tau \alpha_{\delta,\xi}+2\tau F_{\delta,\xi}}$), \eqref{gaussian}, \begin{equation}\label{sm11me2tf} \sum_{j=m_1+1}^{m} e^{2\tau F_{\delta,\xi}(\xi_j)}=m_2 + 2\tau \sum_{j=m_1+1}^{m} F_{\delta,\xi}(\xi_j)+O(\delta^4) \end{equation} and by (\ref{repla0}), we then obtain that \begin{eqnarray}\label{intV2etW} \frac{1}{\pi}e^{\tau \alpha_{\delta,\xi}}\mu_2^2\delta^2 \int_{S}V_2 e^{-\tau W} dv_g &= &m_2 - \frac{A_2^*(\xi)}{8\pi} \mu_2^2\delta^2 \log(\mu_2 \delta) +\frac{B_{2,\chi}(\xi)}{8\pi} \mu_2^2 \delta^2 \\ &&+ 2\tau \sum_{j=m_1+1}^{m} F_{\delta,\xi}(\xi_j)+o(\delta^2)\nonumber, \end{eqnarray} where \begin{eqnarray*} B_{2,\chi}(\xi)&=& -2\pi \sum_{j=m_1+1}^m [\Delta_g \rho_j(\xi_j) -2 K(\xi_j) \rho_j(\xi_j)] \log \rho_j(\xi_j) -\frac{A_2^*(\xi)}{2}\\ &&+ 8 \int_{S \setminus \cup_{j=m_1+1}^m B_r(\xi_j)} V_2e^{-8\pi\tau \sum\limits_{j=1}^{m_1} G(x,\xi_j) + 8\pi\sum\limits_{l=m_1+1}^m G(x,\xi_l)} dv_g-\frac{8\pi}{r^2} \sum_{j=m_1+1}^m \rho_j(\xi_j)\\ &&-A_2^*(\xi) \log \frac{1}{r} +8\sum_{j=m_1+1}^m 
\int_{B_r(\xi_j)}\frac{ e^{\varphi_j(x)}\rho_j(x)-P_2(e^{\varphi_j}\rho_j)(x)}{|y_{\xi_j}(x)|^4}\,e^{-\varphi_j(x)} dv_g, \end{eqnarray*} $B_{2,\chi}(\xi)$ does not depend on $\chi$ or on $r \leq r_0$, and coincides with $B_2^*(\xi)$ as defined in (\ref{Bk}) with $k=2$. \medskip \noindent Finally, from \eqref{repla1}, expansions \eqref{gtJW}, \eqref{intV1eW} and \eqref{intV2etW} and Taylor's expansion for $a\ge 1$, $\log(a+t)=\log a + {t\over a} + O(t^2)$ as $t\to 0$, we get the expansion \eqref{JUt} as $\delta \to 0$ and the proof is complete. \end{proof} \medskip \noindent We now establish expansion \eqref{JUt} in a $C^1$-sense in $\xi$, where the derivatives in $\xi$ are with respect to a given coordinate system. Recall that we use ideas from \cite[Theorem 3.2]{EF}. \begin{proof}[{\bf Proof (of (\ref{JUt}) in $C^1(\Xi)$):}] We just need to expand the derivatives of $J_{\lambda_1,\lambda_2}(W)$ in $\xi$. Let us fix $i\in\{1,2\}$ and $j\in\{1,\dots,m\}$. We have that $$\partial_{(\xi_j)_i}[J_{\lambda_1,\lambda_2}(W)]=-\int_S\left[\lab W+{\lambda_1 V_1e^{W}\over \int_S V_1e^W dv_g} -{\lambda_2\tau V_2e^{-\tau W}\over \int_S V_2e^{-\tau W} dv_g}\right]\partial_{(\xi_j)_i}W dv_g.$$ Notice that, as in Lemma \ref{ewfxi}, it follows that \begin{eqnarray}\label{dxiw} \partial_{(\xi_j)_i}W_q&=&-2{\chi_q \over \de_q^2+|y_{\xi_q}(x)|^2}\left[\partial_{(\xi_j)_i} |y_{\xi_q}(x)|^2+\delta_q^2 \partial_{(\xi_j)_i} (\log \rho_q(\xi_q)) \right]\\ &&-4 \log |y_{\xi_q}(x)|\partial_{(\xi_j)_i}\chi_q +8\pi \partial_{(\xi_j)_i} H(x,\xi_q)+O(\de^2|\log\de|)\nonumber \end{eqnarray} holds uniformly in $S$. 
Hence, by using \eqref{dxiw} and expansions in the proof of (35) in $C^1(\Xi)$ in \cite[Theorem 3.2]{EF}, we deduce that \begin{eqnarray} -\int_S \lab W\partial_{(\xi_j)_i}W dv_g &=& \sum_{l=1}^{m_1} \int_S \chi_l e^{-\varphi_l} e^{U_l}\partial_{(\xi_j)_i}W dv_g - {1\over \tau} \sum_{l=m_1+1}^m \int_S \chi_l e^{-\varphi_l} e^{U_l}\partial_{(\xi_j)_i}W dv_g \nonumber \\ &=& -32 \pi^2 \partial_{(\xi_j)_i} \varphi^*_m(\xi) +O(\de^2|\log\de|) \label{1term} \end{eqnarray} for $j\in\{1,\dots,m_1\}$. Similarly, for $j\in\{m_1+1,\dots,m\}$ we compute $$-\int_S \lab W\partial_{(\xi_j)_i}W dv_g=-32 \pi^2 \partial_{(\xi_j)_i} \varphi_m(\xi) +O(\de^2|\log\de|). $$ In order to give an expansion of the second term in $\partial_{(\xi_j)_i}[J_{\lambda_1,\lambda_2}(W)]$, first observe that, by Lemma \ref{ewfxi}, there hold \begin{equation}\label{v1ewxij} V_1e^W={e^{\alpha_{\de,\xi}-2 F_{\delta,\xi}(x)}\over 8\de_j^2} \rho_je^{U_j}[1+O(\de^4|\log \delta|)]\quad\text{uniformly in $B_{r_0}(\xi_j)$, $j=1,\dots,m_1$} \end{equation} \begin{equation}\label{v1ewsmxij} \text{$V_1e^W=O(1)$ uniformly in $S \setminus \cup_{j=1}^{m_1} B_{r_0}(\xi_j)$,} \end{equation} \begin{equation}\label{v2emtwxij} V_2e^{-\tau W}={e^{-\tau \alpha_{\de,\xi} + 2\tau F_{\delta,\xi}(x)}\over 8\de_j^2} \rho_je^{U_j}[1+O(\de^4|\log \delta|)]\quad\text{uniformly in $B_{r_0}(\xi_j)$, $j=m_1+1,\dots,m$} \end{equation} \begin{equation}\label{v2emtwsmxij} \text{ and $V_2e^{-\tau W}=O(1)$ uniformly in $S \setminus \cup_{j=m_1+1}^{m} B_{r_0}(\xi_j)$}.
\end{equation} So, arguing in the same way as in the proof of (35) in $C^1(\Xi)$ in \cite[Theorem 3.2]{EF} and taking into account that for $k=1,2$ $$\int_S V_ke^{(-\tau)^{k-1} W} \partial_{(\xi_j)_i}W dv_g= \sum_{l = 1}^{m_1} \int_S V_ke^{(-\tau)^{k-1} W}\partial_{(\xi_j)_i}W_l\, dv_g-{1\over \tau}\sum_{l=m_1+1}^m\int_S V_ke^{(-\tau)^{k-1} W} \partial_{(\xi_j)_i}W_l\, dv_g$$ we have that \begin{equation} \int_S {V_ke^{(-\tau)^{k-1} W}\over \int_S V_ke^{(-\tau)^{k-1} W} dv_g} \partial_{(\xi_j)_i}W dv_g=O(\de^2|\log\de|),\quad k=1,2.\label{2term} \end{equation} In conclusion, by (\ref{1term})-(\ref{2term}) we can write \begin{eqnarray*} \partial_{(\xi_j)_i}[J_{\lambda_1,\lambda_2}(W)]= -32 \pi^2 \partial_{(\xi_j)_i} \varphi^*_m(\xi)+O(\delta^2 |\log \delta|), \end{eqnarray*} and the proof is complete. \end{proof} \medskip \noindent Finally, we address the expansions for the derivatives of $J_{\lambda_1,\lambda_2}(W)$ in $\mu$. Recall that we argue similarly to the proof of (35) in $C^2(\mathbb{R})$ in \cite[Theorem 3.2]{EF}. \begin{proof}[{\bf Proof (of (\ref{JUt}) in $C^2(\mathbb{R}^2)$):}] We focus on the first and second derivatives of $J_{\lambda_1,\lambda_2}(W)$ in $\mu_i$, $i=1,2$.
Since $\partial_{\mu_i}= \de \rho_l^{\frac{1}{2}}(\xi_l) \partial_{\de_l} $, $i=1$ for $l\in\{1,\dots,m_1\}$ and $i=2$ for $l\in\{m_1+1,\dots,m\}$, in view of \eqref{repla0}, arguing as in Lemma \ref{ewfxi}, it is easy to show that \begin{eqnarray}\label{ddw} &&\de^{-1}\rho_l^{-\frac{1}{2}}(\xi_l) \partial_{\mu_i} W_l=- \chi_l \frac{4 \delta_l}{\delta_l^2+|y_{\xi_l}(x)|^2}+ \beta_{\delta_l,\xi_l}-4 \delta_l F_{\xi_l}+O(\delta^3 |\log \delta|)\\ &&\de^{-2}\rho_l^{-1}(\xi_l) \partial_{\mu_i\mu_i} W_l=4\chi_l \frac{\delta_l^2-|y_{\xi_l}(x)|^2}{(\delta_l^2+|y_{\xi_l}(x)|^2)^2}+ \gamma_{\delta_l,\xi_l}-4 F_{\xi_l}+O(\delta^2 |\log \delta|)\label{dddw} \end{eqnarray} hold uniformly in $S$, where $$\beta_{\delta_l,\xi_l}=-{8\pi\over|S|} \delta_l \log \delta_l+{4\delta_l \over|S|}\left(\int_{\mathbb{R}^2} \chi(|y|) \frac{e^{\hat \varphi_\xi(y)}-1}{|y|^2}dy- \int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy \right)$$ and $$\gamma_{\delta_l,\xi_l}=-{8\pi\over|S|} \log \delta_l+{4 \over|S|}\left(\int_{\mathbb{R}^2} \chi(|y|) \frac{e^{\hat \varphi_\xi(y)}-1}{|y|^2}dy-2\pi -\int_{\mathbb{R}^2} {\chi'(|y|) \log |y|\over |y| } dy \right).$$ Note that $\partial_{\mu_i}W_l=0$ either if $i=1$ and $l\in\{m_1+1,\dots,m\}$ or if $i=2$ and $l\in \{1,\dots,m_1\}$. Let us stress that $\partial_{\mu_i\mu_k}W_l=0$ for all $l=1,\dots,m$ and $i\ne k$, so that $\partial_{\mu_i\mu_k}W=0$ for $i\ne k$.
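\medskip \noindent Let us briefly justify the relation $\partial_{\mu_i}= \de \rho_l^{\frac{1}{2}}(\xi_l) \partial_{\de_l}$ used above; this is a sketch under the assumption, as in \eqref{repla0}, that $\de_l=\mu_i\de\,\rho_l^{\frac{1}{2}}(\xi_l)$ with $\de$ and $\xi_l$ independent of $\mu_i$. By the chain rule,
$$\partial_{\mu_i}=\frac{\partial \de_l}{\partial \mu_i}\,\partial_{\de_l}=\de\,\rho_l^{\frac{1}{2}}(\xi_l)\,\partial_{\de_l},\qquad \partial_{\mu_i\mu_i}=\de^2\rho_l(\xi_l)\,\partial_{\de_l\de_l},$$
which explains the normalizing factors $\de^{-1}\rho_l^{-\frac{1}{2}}(\xi_l)$ and $\de^{-2}\rho_l^{-1}(\xi_l)$ appearing in \eqref{ddw} and \eqref{dddw}.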
By Lemma \ref{ieuf} we then have that either for $i=1$, $l\in\{1,\dots,m_1\}$ or $i=2$, $l\in\{m_1+1,\dots,m\}$ $$\de^{-1}\rho_l^{-\frac{1}{2}}(\xi_l) \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_i} W_l dv_g= - \frac{16 \pi}{\delta_j} \delta_{jl} +8\pi \beta_{\delta_l,\xi_l}-32 \pi \delta_l F_{\xi_l}(\xi_j)+O(\delta^3|\log \delta|^2),$$ \begin{eqnarray}\label{ieuddmuw} \de^{-2}\rho_l^{-1}(\xi_l)\int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_i\mu_i} W_l dv_g = \frac{16 \pi}{3\delta_j^2} \delta_{jl} +8\pi \gamma_{\delta_l,\xi_l}-32 \pi F_{\xi_l}(\xi_j)+O(\delta^2 |\log \delta|^2) \end{eqnarray} and either for $k=1$, $j\in\{1,\dots,m_1\}$ or $k=2$, $j\in\{m_1+1,\dots,m\}$ \begin{eqnarray}\label{ieududw} &&\de^{-1} \rho_l^{-\frac{1}{2}}(\xi_l) \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_k} U_j \partial_{\mu_i} W_l dv_g= \frac{2}{\mu_k}\de^{-1} \rho_l^{-\frac{1}{2}}(\xi_l) \int_S \chi_j e^{-\varphi_j}e^{U_j} \frac{|y_{\xi_j}(x)|^2-\de_j^2}{\de_j^2+|y_{\xi_j}(x)|^2} \partial_{\mu_i} W_l dv_g\nonumber\\ &&=\frac{32 \pi}{3 \de_j^2} \de\rho_j(\xi_j)^{\frac{1}{2}} \de_{jl}+O(\de^\gamma) \end{eqnarray} in view of $\int_{\mathbb{R}^2} \frac{|y|^2-1}{(1+|y|^2)^3}dy=0$, where $\delta_{jl}$ denotes the Kronecker symbol. Note that $\partial_{\mu_k}U_j=0$ either for $k=1$ and $j\in\{m_1+1,\dots,m\}$ or $k=2$ and $j\in\{1,\dots,m_1\}$.
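\medskip \noindent For the reader's convenience, the vanishing of the integral invoked in \eqref{ieududw} can be checked directly: passing to polar coordinates and substituting $t=r^2$,
$$\int_{\mathbb{R}^2} \frac{|y|^2-1}{(1+|y|^2)^3}\,dy=2\pi\int_0^\infty \frac{(r^2-1)\,r}{(1+r^2)^3}\,dr=\pi\int_0^\infty \frac{t-1}{(1+t)^3}\,dt=-\pi\left[\frac{t}{(1+t)^2}\right]_0^\infty=0,$$
since $\frac{t-1}{(1+t)^3}=-\frac{d}{dt}\big[\frac{t}{(1+t)^2}\big]$ and the boundary values vanish at $t=0$ and $t=\infty$.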
Since $\int_S \partial_{\mu_i} W dv_g=\int_S \partial_{\mu_i\mu_k} W dv_g=0$, we then deduce the following expansions: \begin{eqnarray} \int_S (-\Delta_g W) \partial_{\mu_1} W dv_g&=& \sum_{j,l=1}^{m_1} \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_1} W_l dv_g -{1\over\tau} \sum_{j=m_1+1}^m\sum_{l=1}^{m_1} \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_1} W_l dv_g \label{deW} \nonumber\\ &=&- \frac{16 \pi m_1}{\mu_1} +8\pi m_1\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l}-32 \pi \mu_1\de^2 \sum_{j,l=1}^{m_1} \rho_l(\xi_l) F_{\xi_l}(\xi_j)\\ && - {8\pi m_2\over \tau}\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} + {32 \pi\over \tau} \mu_1\de^2 \sum_{j=m_1+1}^{m}\sum_{l=1}^{m_1} \rho_l(\xi_l) F_{\xi_l}(\xi_j)+O(\delta^4|\log \de|^2), \nonumber \end{eqnarray} and \begin{eqnarray} \int_S (-\Delta_g W) \partial_{\mu_2} W dv_g&=& -{1\over \tau} \sum_{j=1}^{m_1}\sum_{l=m_1+1}^m \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_2} W_l dv_g + {1\over\tau^2} \sum_{j,l=m_1+1}^m \int_S \chi_j e^{-\varphi_j}e^{U_j} \partial_{\mu_2} W_l dv_g \label{de2W} \nonumber\\ &=&- \frac{16 \pi m_2}{\mu_2\tau^2} - {8\pi m_1\over \tau}\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} + {32 \pi \over \tau} \mu_2\de^2 \sum_{j=1}^{m_1}\sum_{l=m_1+1}^{m}\rho_l(\xi_l) F_{\xi_l}(\xi_j)\\ && + {8\pi m_2\over \tau^2}\de \sum_{l=m_1+1}^{m} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} - {32 \pi\over \tau^2} \mu_2\de^2 \sum_{j,l=m_1+1}^{m} \rho_l(\xi_l) F_{\xi_l}(\xi_j)+O(\delta^4|\log \de|^2), \nonumber \end{eqnarray} as $\de \to 0$. 
Since by Lemma \ref{ewfxi} there hold \eqref{v1ewxij}, \eqref{v1ewsmxij} and $\partial_{\mu_1} W= O(\delta^2 |\log \delta|)$ uniformly in $S \setminus \cup_{j=1}^{m_1} B_{r_0}(\xi_j)$, by Lemma \ref{ieuf} we can write that \begin{eqnarray*} &\hspace{-0.2cm}\ds\int_S &\hspace{-0.3cm}V_1e^W \partial_{\mu_1} W dv_g = \sum_{j,l=1}^{m_1} \int_{B_{r_0}(\xi_j)} V_1e^W \partial_{\mu_1} W_l dv_g +O(\delta^2 |\log \delta|) \\ &=&- \sum_{j=1}^{m_1} {e^{\alpha_{\de,\xi}}\over 2 \mu_1} \int_{B_{r_0}(\xi_j)} e^{-2 F_{\delta,\xi}(x)} \frac{ \rho_je^{U_j}}{\delta_j^2+|y_{\xi_j}(x)|^2} dv_g\\ &&+\;\pi {e^{\alpha_{\de,\xi}}\over \mu_1^2\de} \left(m_1 \sum_{l=1}^m \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l}-4 \sum_{j,l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \delta_l F_{\xi_l}(\xi_j) \right)+\,O(\de|\log \delta|)\\ &=&\pi {e^{\alpha_{\de,\xi}}\over \mu_1^2\de^2} \left(-\frac{2m_1}{\mu_1}+m_1\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l}-{\mu_1\de^2 \over 8\pi} A_1^*(\xi)- {4\over \mu_1\tau}\sum_{j=1}^{m_1} F_{2,\de,\xi}(\xi_j) +O(\delta^{2+\gamma})\right) \end{eqnarray*} in view of (\ref{gaussian}) and from \eqref{s1m1em2f} \begin{eqnarray*} \sum_{j=1}^{m_1} e^{-2F_{\de,\xi}(\xi_j)} = m_1 -2 \sum_{j,l=1}^{m_1} \de_l^2 F_{\xi_l}(\xi_j)+{2\over \tau}\sum_{j=1}^{m_1}F_{2,\de,\xi}(\xi_j)+O(\de^4). 
\end{eqnarray*} Combining with (\ref{intV1eW}) we then get that \begin{eqnarray} \label{Merry1} \frac{\int_S V_1e^W \partial_{\mu_1} W dv_g}{\int_S V_1e^W dv_g} &=& -\frac{2}{\mu_1}+\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l}-{\de^2A_1^*(\xi) \over 8\pi m_1} [\mu_1+2\mu_1\log\mu_1]\\ &&-\frac{A_1^*(\xi)}{4 \pi m_1} \mu_1\delta^2 \log \delta+\frac{B_1^*(\xi)}{4\pi m_1} \mu_1\delta^2-\frac{4}{m_1 \mu_1}\sum_{j=1}^{m_1} F_{1,\delta,\xi}(\xi_j)+o(\de^2).\nonumber \end{eqnarray} Similarly to the above, there hold \eqref{v2emtwxij}, \eqref{v2emtwsmxij} and $\partial_{\mu_1} W= O(\delta^2 |\log \delta|)$ uniformly in $S\setminus\cup_{j=1}^{m_1}B_{r_0}(\xi_j)$, so that \begin{eqnarray*} &&\int_S V_2e^{-\tau W} \partial_{\mu_1} W dv_g = \sum_{j,l=1}^{m_1} \int_{B_{r_0}(\xi_j)} V_2e^{-\tau W} \partial_{\mu_1} W_l dv_g+ \sum_{j=m_1+1}^{m}\sum_{l=1}^{m_1} \int_{B_{r_0}(\xi_j)} V_2e^{-\tau W} \partial_{\mu_1} W_l dv_g\\ && \ \ + \: O(\delta^2 |\log \delta|)\\ &&=\pi {e^{-\tau\alpha_{\de,\xi}}\over \mu_2^2\de^2} \left(m_2\de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} - {4\over \mu_1}\sum_{j=m_1+1}^{m} F_{1,\de,\xi}(\xi_j) +O(\delta^{4}|\log\de|)\right) \end{eqnarray*} in view of $\tau>0$, \eqref{sm11me2tf} and \begin{eqnarray*} \int_{B_{r_0}(\xi_j)} \frac{ V_2e^{-\tau W}}{\delta_j^2+|y_{\xi_j}(x)|^2} dv_g=O\left(\int_{B_{r_0}(\xi_j)} (\de_j^2+|y_{\xi_j}(x)|^2)^{\tau-1}dv_g\right)=O(1).
\end{eqnarray*} Combining with (\ref{intV2etW}) we then get that \begin{eqnarray} \label{Merry0} \frac{\int_S V_2e^{-\tau W} \partial_{\mu_1} W dv_g}{\int_S V_2e^{-\tau W} dv_g} &=& \de \sum_{l=1}^{m_1} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} - \frac{4}{m_2 \mu_1}\sum_{j=1}^{m_1} F_{1,\delta,\xi}(\xi_j)+o(\de^2), \end{eqnarray} which yields \begin{eqnarray} \label{derivmu1} \partial_{\mu_1}[J_{\lambda_1,\lambda_2}(W)]&=& \int_S (-\Delta_g W) \partial_{\mu_1} W dv_g-\lambda_1 \frac{\int_S V_1e^W \partial_{\mu_1} W dv_g}{\int_S V_1e^W dv_g} +\lambda_2\tau \frac{\int_S V_2e^{-\tau W} \partial_{\mu_1} W dv_g}{\int_S V_2e^{-\tau W} dv_g} \nonumber \\ &=& \frac{2(\lambda_1-8\pi m_1)}{\mu_1}+2 A_1^*(\xi) \mu_1 \delta^2 \log \delta+[A_1^*(\xi)\{\mu_1+2\mu_1\log\mu_1\}-2 B_1^*(\xi)\mu_1] \delta^2\nonumber \\ &&+o(\de^2) \end{eqnarray} in view of \eqref{deW}, so that we deduce the validity of (\ref{JUt}) for the first derivative in $\mu_1$. Now, for the first derivative in $\mu_2$, similarly to the above we have that \begin{eqnarray} \label{Merry} \frac{\int_S V_1e^{W} \partial_{\mu_2} W dv_g}{\int_S V_1e^{ W} dv_g} &=& -{\de\over\tau} \sum_{l=m_1+1}^{m} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l} + \frac{4 }{m_1 \mu_2\tau}\sum_{j=1}^{m_1} F_{2,\delta,\xi}(\xi_j)+o(\de^2) \end{eqnarray} in view of \eqref{intV1eW}, and \begin{eqnarray} \label{Merrymu2} \frac{\int_S V_2e^{-\tau W} \partial_{\mu_2} W dv_g}{\int_S V_2e^{-\tau W} dv_g} &=& \frac{2}{\mu_2\tau} - {\de\over\tau} \sum_{l=m_1+1}^{m} \rho_l^{\frac{1}{2}}(\xi_l) \beta_{\delta_l,\xi_l}+{\de^2A_2^*(\xi) \over 8\pi m_2\tau} [\mu_2+2\mu_2\log\mu_2]\\ &&+\frac{A_2^*(\xi)}{4 \pi m_2\tau} \mu_2\delta^2 \log \delta - \frac{B_2^*(\xi)}{4\pi m_2\tau} \mu_2\delta^2 + \frac{4}{m_2 \mu_2\tau}\sum_{j=m_1+1}^{m} F_{2,\delta,\xi}(\xi_j)+o(\de^2)\nonumber \end{eqnarray} by using \eqref{sm11me2tf} and combining with (\ref{intV2etW}).
Thus, by using \eqref{de2W} we conclude the validity of (\ref{JUt}) for the first derivative in $\mu_2$: \begin{eqnarray} \label{derivmu2} \partial_{\mu_2}[J_{\lambda_1,\lambda_2}(W)] &=& \frac{2(\lambda_2\tau^2-8\pi m_2)}{\mu_2\tau^2}+\frac{2 A_2^*(\xi)}{\tau^2} \mu_2 \delta^2 \log \delta+[A_2^*(\xi)\{\mu_2+2\mu_2\log\mu_2\}-2 B_2^*(\xi)\mu_2] \frac{\delta^2}{\tau^2} \nonumber \\ &&+o(\de^2). \end{eqnarray} \medskip \noindent Towards the expansion of the second derivatives in $\mu$, we proceed in a way similar to that used to obtain \eqref{derivmu1} and \eqref{derivmu2}, with the aid of the expansions \eqref{ddw} for $\partial_{\mu_i}W$ and \eqref{dddw} for $\partial_{\mu_i\mu_i} W_l$, together with \eqref{ieuddmuw} and \eqref{ieududw} (see also the proof of expansion (35) in $C^2(\mathbb{R})$ in \cite[Theorem 3.2]{EF}). We omit the details and conclude the validity of (\ref{JUt}) also for the second derivatives in $\mu$; the proof is complete. \end{proof} \section{Proof of Theorem \ref{main3}}\label{pthm3} In this section, we shall study the existence of blowing-up solutions as $\lambda_1\to 8\pi m_1$ and $\lambda_2\tau^2\to 0$, a regime in which \eqref{mfewt} resembles equation \eqref{mfe}. For simplicity, we shall denote $m_1=m$, so that our approximating solution is $\ds W(x)=\sum_{j=1}^m W_j(x)$, and we look for solutions to \eqref{mfewt} in the form $u=W+\phi$. Assumptions \eqref{repla0}-\eqref{repla1} are replaced by \begin{equation}\label{replam20} \begin{split} \de_j^2=\mu^2\de^2\rho_j(\xi_j),\ \ j=1,\dots,m\qquad &\text{with}\qquad 0<\mu\le C_0 \\[0.1cm] |\lambda_1-8\pi m |\le C \de^2|\log \de| \qquad&\text{and}\qquad 0<\lambda_2\tau^2 \le C \de^2|\log \de|. \end{split} \end{equation} Notice that, from computations similar to those leading to \eqref{intV2etW}, we have that $$\int_S V_2e^{-\tau W} dv_g= e^{-\tau \al_{\de,\xi}}\left[\int_S V_2 e^{-8\pi \tau \sum\limits_{j=1}^m G(x,\xi_j)}dv_g +O(\de^2)\right]\ge \eta_0$$ for some $\eta_0>0$.
By conditions \eqref{replam20} we get that \begin{equation}\label{la2v2} {\lambda_2\tau V_2 e^{-\tau W}\over \int_S V_2e^{-\tau W}} =O(\de^2|\log\de|)\quad \text{uniformly in $S$}. \end{equation} Hence, estimate \eqref{re} follows. Now, denote $\ds Z=\sum_{l=1}^m Z_{0l}$ and by $PZ$ its projection according to \eqref{ePZ}. By using \eqref{la2v2} and arguments similar to those used in the proofs of \cite[Proposition 4.1]{EF} and Proposition \ref{p2}, the invertibility of $L$ in \eqref{ol} follows in this case (as $\lambda_1\to 8\pi m $ and $\lambda_2\tau^2\to 0$), and we deduce the following fact. \begin{prop}\label{lpnlabis} There exists $\delta_0>0$ so that for all $0<\delta\leq \delta_0$, $\mu\in (0,C_0]$, $\xi \in \Xi$ problem \begin{equation*} \left\{ \begin{array}{ll} L(\phi)= -[R+N(\phi)] +c_0\Delta_g PZ +\displaystyle \sum_{i=1}^{2}\sum_{j=1}^m c_{ij} \Delta_g PZ_{ij}& \text{in } S\\ \int_S \phi \Delta_g PZ dv_g=\int_S \phi \Delta_g PZ_{ij} dv_g= 0 &\forall\: i=1,2,\, j=1,\dots,m \end{array} \right. \end{equation*} admits a unique solution $\phi(\mu,\xi) \in \bar H \cap W^{2,2}(S)$ and $c_0(\mu,\xi),\,c_{ij}(\mu,\xi) \in \R$, $i=1,2$ and $j=1,\dots,m$, where $\de_j>0$ are as in \eqref{replam20} and $N$, $R$ are given by \eqref{nlt}, \eqref{R}, respectively. Moreover, the map $(\mu,\xi)\mapsto (\phi(\mu,\xi),c_0(\mu,\xi),c_{ij}(\mu,\xi))$ is twice-differentiable in $\mu$ and once-differentiable in $\xi$ with $$\|\phi\|_\infty+{\|\partial_\mu \phi \|_\infty\over |\log\de|}+\sum_{i=1}^2 \sum_{j=1}^m{\de \|\partial_{(\xi_j)_i} \phi \|_\infty\over |\log\de|} + {\|\partial_{\mu\mu}\phi\|_\infty\over |\log\de|^2} \le C\left( \delta |\log \delta | |\nabla \varphi_m^*(\xi)|_g+ \delta^{2-\sigma}|\log\delta|^2\right).$$ \end{prop} \noindent As in the case $m_2\ge 1$, the function $[W+\phi](\mu,\xi)$ will be a true solution of \eqref{ephi} if $\mu\in[C_0^{-1},C_0]$ and $\xi\in \Xi$ are such that $c_{0}(\mu,\xi)=c_{ij}(\mu, \xi)=0$ for all $i=1,2,$ and $j=1,\dots,m$.
Similarly to Lemma \ref{cpfc0bis}, this problem is equivalent to finding critical points of the reduced energy $E_{\lambda_1,\lambda_2}(\mu, \xi)= J_{\lambda_1,\lambda_2}\big([W+\phi](\mu,\xi)\big)$, where $J_{\lambda_1,\lambda_2}$ is given by \eqref{energy}. Notice that $$\lambda_2\log\left(\int_S V_2e^{-\tau W} dv_g\right)=-\lambda_2\tau \al_{\de,\xi} + \lambda_2\log\left(\int_S V_2 e^{-8\pi \tau \sum\limits_{j=1}^m G(x,\xi_j)}dv_g\right) + O(\de^4|\log\de|).$$ Let us stress that $\ds\lambda_2\log\bigg(\int_S V_2 e^{-8\pi \tau \sum\limits_{j=1}^m G(x,\xi_j)}dv_g\bigg) $ is independent of $\mu$. Taking into account computations in the proof of \cite[Theorem 3.2]{EF} and similar ones in the proof of Theorem \ref{expansionenergy}, we have that \begin{equation*} \begin{split} J_{\lambda_1,\lambda_2}(W)=&\,-8\pi m -\lambda_1\log(\pi m) +2(\lambda_1-8\pi m)\log(\mu\de) -32\pi^2\varphi_m^*(\xi) +A(\xi)\mu^2\de^2\log\de\\ & + [A(\xi)\mu^2\log\mu - B(\xi) \mu^2]\de^2-\lambda_2\log\left(\int_S V_2 e^{-8\pi \tau \sum\limits_{j=1}^m G(x,\xi_j)}dv_g\right) + o(\de^2). \end{split} \end{equation*} Consequently, from the estimates in Appendix B we obtain the following. \begin{theo} \label{fullexpansionenergym20} Assume \eqref{replam20}. The following expansion holds: \begin{eqnarray*} E_{\lambda_1,\lambda_2} (\mu,\xi) &=&-8\pi m- \lambda_1 \log (\pi m) - 2\big(\lambda_1 -8\pi m \big)\log\de -32\pi^2 \varphi_m^*(\xi)\\ &&\,\,+ 2\big(\lambda_1 -8\pi m \big)\log\mu + A(\xi) \mu^2\delta^2\log \delta + \left[A(\xi) \mu^2 \log\mu- B(\xi)\mu^2\right] \delta^2 \nonumber \\ &&\,\, -\lambda_2\log\left(\int_S V_2 e^{-8\pi \tau \sum\limits_{j=1}^m G(x,\xi_j)}dv_g\right) + o(\de^2) +r_{\lambda_1,\lambda_2}(\mu,\xi) \nonumber \end{eqnarray*} in $C^2(\mathbb{R})$ and $C^1(\Xi)$ as $\de\to 0^+$, where $\varphi_m^*(\xi)$, $A(\xi)$ and $B(\xi)$ are given by \eqref{fim}, \eqref{vk} and \eqref{Bk} with $k=1$, respectively.
The term $r_{\lambda_1,\lambda_2}(\mu,\xi)$ satisfies \eqref{rlambda} for some $C>0$ independent of $(\mu,\xi)\in (0,C_0]\times\Xi$. \end{theo} \begin{proof}[{\bf Proof (of Theorem \ref{main3}):}] We argue in the same way as in the proof of Theorem \ref{main2} with $k=1$. \end{proof} \section{Appendix A}\label{appeA} \noindent We shall argue in the same way as in Appendix A in \cite{EF}. We first address a-priori estimates for the operator $L$ when all the $c_{ij}$'s vanish: \begin{prop} \label{p1} There exists $\delta_0>0$ and $C>0$ so that, for all $0<\delta\leq \delta_0$, $h\in C(S)$ with $\int_S h dv_g=0$, $\xi \in \Xi$ and $\phi \in H_0^1(S) \cap W^{2,2}(S)$ a solution of \eqref{plco} with $c_{0i}=c_{ij}=0$, $i=1,2$ and $j=1,\dots,m$, one has \begin{equation*} \|\phi \|_\infty \le C | \log \de | \|h\|_*. \end{equation*} \end{prop} \begin{proof}[\bf Proof:] By contradiction, assume the existence of sequences $\de \to 0$, $\mu=(\mu_1,\mu_2)$ with $\mu\to\mu^*$, points $\xi \in \Xi$ with $\xi \to \xi^*$, functions $h$ with $|\log \de| \|h\|_*=o(1)$ and solutions $\phi$ with $\|\phi\|_\infty=1$. Recall that $\de_j^2=\mu_i^2\de^2 \rho_j(\xi_j)$. Setting $\mathcal{K}_i=\frac{\lambda_i\tau^{2(i-1)} V_ie^{(-\tau )^{i-1} W}}{\int_S V_ie^{(-\tau)^{i-1} W}dv_g}$, $\psi_i=\phi + \tilde c_i(\phi)$, $\tilde c_i(\phi)=-\frac{\int_S V_ie^{(-\tau)^{i-1} W}\phi dv_g}{\int_S V_ie^{(-\tau)^{i-1} W}dv_g}$ for $i=1,2$, we have that $\psi_1-\tilde c_1(\phi) = \psi_2 - \tilde c_2(\phi)$, $\Delta_g \psi_1+\mathcal{K}_1 \psi_1 + \mathcal{K}_2[\psi_1-\tilde c_1(\phi)+\tilde c_2(\phi)]=h$ and $\Delta_g \psi_2+\mathcal{K}_1 [\psi_2-\tilde c_2(\phi)+\tilde c_1(\phi)] + \mathcal{K}_2\psi_2=h$ in $S$, and each $\psi_i$, $i=1,2$, satisfies the same orthogonality conditions as $\phi$.
\medskip \noindent Since $\|\psi_{i,n}\|_\infty\le 2\|\phi_n\|_\infty \le 2$ and $\Delta_g\psi_i =o(1)$ in $C_{\hbox{loc}}(S \setminus \{\xi_1^*,\dots, \xi_m^*\})$, we can assume that $\psi_{i,n} \to \psi_{i,\infty}$ in $C^1_{\hbox{loc}}(S \setminus\{\xi_1^*,\dots,\xi_m^*\})$. Since $\psi_{i,\infty}$ is bounded, it extends to a harmonic function in $S$, and then $\psi_{i,\infty}=\tilde c_{i,0}:= -\lim \frac{\int_S V_i e^{(-\tau)^{i-1} W} \phi dv_g}{\int_S V_i e^{(-\tau)^{i-1} W} dv_g}$ in view of ${1\over |S|}\int_S \psi_{i,n} dv_g=\tilde c_{i,n}(\phi)$. \medskip \noindent The functions $\Psi_{i,j} =\psi_i(y_{\xi_j}^{-1}(\delta_j y))$, with $i=1$ for $j=1,\dots,m_1$ and $i=2$ for $j=m_1+1,\dots,m$, satisfy $\lap \Psi_{1,j} + \mathcal{\tilde K}_{1,j} \Psi_{1,j} + \mathcal{\tilde K}_{2,j}[\Psi_{1,j}-\tilde c_1+\tilde c_2]= \tilde h_j$ and $\lap \Psi_{2,j} + \mathcal{\tilde K}_{1,j}[\Psi_{2,j}-\tilde c_2+\tilde c_1] + \mathcal{\tilde K}_{2,j} \Psi_{2,j} = \tilde h_j$ in $B_{2r_0 \over \de_j}(0)$, where $\mathcal{\tilde K}_{i,j}=\de_j^2 \mathcal{K}_i(y_{\xi_j}^{-1}(\de_j y))$ and $\tilde h_j(y)=\de_j^2 h (y_{\xi_j}^{-1}(\de_j y))$.
Since $|\ti h_j| \le C\|h \|_*$, $$\mathcal{\tilde K}_{1,j}= \begin{cases} {8\over (1+|y|^2)^2}(1+O(\delta^2|\log \delta|))&\text{ for $j=1,\dots,m_1$}\\ O(\de_j^2)&\text{ for $j=m_1+1,\dots,m$ } \end{cases}$$ and $$\mathcal{\tilde K}_{2,j}= \begin{cases} O(\de_j^2)&\text{ for $j=1,\dots,m_1$}\\ {8\over (1+|y|^2)^2}(1+O(\delta^2|\log \delta|))&\text{ for $j=m_1+1,\dots,m$ } \end{cases}$$ uniformly in $B_{\frac{2r_0}{\delta}}(0)$, in view of Lemma \ref{ewfxi}, \eqref{intV1eW} and (\ref{intV2etW}), up to a sub-sequence, by elliptic estimates $\Psi_{i,j} \to \Psi_{j,\infty}$ with $i=1$ if $j=1,\dots,m_1$ and $i=2$ if $j=m_1+1,\dots,m$ in $C^1_{\hbox{loc}}(\mathbb{R}^2)$, where $\Psi_{j,\infty}$ is a bounded solution of $\Delta \Psi_{j,\infty} + {8\over(1+|y|^2)^2}\Psi_{j,\infty}= 0$ of the form $\Psi_{j,\infty}=\displaystyle \sum_{i=0}^2 a_{ij}Y_i$ (see for example \cite{bp}). Since $-\Delta_g PZ_{ij} =\chi_j e^{-\varphi_j} e^{U_j} Z_{ij}-\frac{1}{|S|}\int_S \chi_j e^{-\varphi_j} e^{U_j} Z_{ij} dv_g$ in view of (\ref{ePZ}) and $\Delta_g=e^{-\varphi_j} \Delta$ in $B_{2r_0}(\xi_j)$ through $y_{\xi_j}$, we have that $$0=-\int_S \psi_l \Delta_g PZ_{ij}=32 \int_{\mathbb{R}^2} \Psi_{l,j} \frac{y_i}{(1+|y|^2)^3} dy -\frac{32}{|S|} \int_{\mathbb{R}^2} \frac{y_i}{(1+|y|^2)^3} dy \int_S \psi_{l,n} +O(\de^3),$$ with $l=1$ if $j=1,\dots,m_1$ and $l=2$ if $j=m_1+1,\dots,m$. Since then $\int_{\mathbb{R}^2} \Psi_{j,\infty} \frac{y_i}{(1+|y|^2)^3} dy=0$, we deduce that $a_{1j}=a_{2j}=0$. By the orthogonality condition $\int_S \phi \Delta_g PZ_1=0$, similarly we deduce that \begin{eqnarray*} 0&=&-\sum_{j=1}^{m_1}\int_S \psi_1 \Delta_g PZ_{0j}dv_g \\ &=&16 \sum_{j=1}^{m_1}\int_{\mathbb{R}^2} \Psi_j \frac{1-|y|^2}{(1+|y|^2)^3} dy -\frac{16}{|S|} m_1 \int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3} dy \int_S \psi_{1,n} +O(\de^2), \end{eqnarray*} which implies $\displaystyle \sum_{j=1}^{m_1} a_{0j}=0$ in view of $\int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3} dy=0$. 
By using the same argument, the orthogonality condition $\int_S \phi \Delta_g PZ_2=0$ implies that $\displaystyle \sum_{j=m_1+1}^{m} a_{0j}=0$. By dominated convergence we have that \begin{eqnarray*} &&\int_S G(y,\xi_j) \mathcal{K}_1 \psi_1 dv_g= -{1\over 2\pi } \log \de \int_{B_{r_0}(\xi_j)} \mathcal{K}_1 \psi_1 dv_g+\int_{\mathbb{R}^2} \Big[-{1\over 2\pi }\log |y|+H(\xi_j,\xi_j) \Big] \frac{8}{(1+|y|^2)^2} \Psi_{j,\infty} dy\\ &&+ \sum_{i=1 \atop i\not=j}^{m_1} G(\xi_i,\xi_j) \int_{\mathbb{R}^2} \frac{8}{(1+|y|^2)^2} \Psi_{i,\infty} dy+o(1)= -{1\over 2\pi } \log \de \int_{B_{r_0}(\xi_j)} \mathcal{K}_1 \psi_1 dv_g+4 a_{0j}+o(1) \end{eqnarray*} in view of $\int_{\mathbb{R}^2} \log |y| \frac{1-|y|^2}{(1+|y|^2)^3} dy=-\frac{\pi}{2}$ and \begin{equation*} \begin{split} \int_S G(y,\xi_j) \mathcal{K}_2 \psi_2 dv_g =&\,\sum_{i=m_1+1}^{m}G(\xi_i,\xi_j) \int_{\mathbb{R}^2} \frac{8}{(1+|y|^2)^2} \Psi_{i,\infty}(y) \, dy \\ & + O\bigg(\de^2 \int_{B_{r_0}(\xi_j)} |G(y,\xi_j)|dv_g\bigg) +o(1) = o(1) \end{split} \end{equation*} for $j=1,\dots,m_1$. In view of $\int_S \mathcal{K}_l \psi_l=0$, $l=1,2$ and $$\bigg|\int_S G(y,\xi_j) h dv_g \bigg| \le C |\log\de | \int_S |h| dv_g+\frac{\|h\|_*}{\delta^2}\bigg|\int_{B_\de(\xi_j)} G(y,\xi_j) dv_g\bigg|\leq C' |\log \delta|\|h\|_*=o(1),$$ by the Green's representation formula \begin{eqnarray*} \sum_{j=1}^{m_1} \Psi_j(0)&=& \sum_{j=1}^{m_1} \psi_1 (\xi_j) ={m_1 \over |S|}\int_S \psi_1 dv_g + \sum_{j=1}^{m_1} \int_S G(y,\xi_j) [ \mathcal{K}_1 \psi_1 + \mathcal{K}_2\psi_2- h ] dv_g\\ &=&m_1 \tilde c_{1,0} +4 \sum_{j=1}^{m_1} a_{0j}+o(1) \end{eqnarray*} which gives $\displaystyle 2\sum_{j=1}^{m_1} a_{0j}= m_1\tilde c_{1,0} +4 \displaystyle \sum_{j=1}^{m_1} a_{0j}$ as $n \to +\infty$. Since $\displaystyle \sum_{j=1}^{m_1} a_{0j}=0$, we get that $\tilde c_{1,0}=0$. 
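\medskip \noindent For the reader's convenience, the value of the logarithmic integral used above can be verified directly: in polar coordinates, with $t=r^2$ and $\frac{1-t}{(1+t)^3}=\frac{d}{dt}\big[\frac{t}{(1+t)^2}\big]$, an integration by parts gives
$$\int_{\mathbb{R}^2} \log |y|\, \frac{1-|y|^2}{(1+|y|^2)^3}\,dy=\frac{\pi}{2}\int_0^\infty \log t\,\frac{1-t}{(1+t)^3}\,dt=-\frac{\pi}{2}\int_0^\infty \frac{dt}{(1+t)^2}=-\frac{\pi}{2},$$
since the boundary terms $\log t\cdot \frac{t}{(1+t)^2}$ vanish at $t=0$ and $t=\infty$.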
By using a similar argument, we obtain that $$\int_S G(y,\xi_j) \mathcal{K}_1 \psi_1 dv_g = o(1)\quad\text{for $j=1,\dots,m_1$ and}$$ $$\int_S G(y,\xi_j) \mathcal{K}_2 \psi_2 dv_g = -{1\over 2\pi } \log \de \int_{B_{r_0}(\xi_j)} \mathcal{K}_2 \psi_2 dv_g+4 a_{0j}+o(1)$$ for $j=m_1+1,\dots,m$, so that, from Green's representation formula for $\Psi_j(0)$, $j=m_1+1,\dots,m$, we get that $\tilde c_{2,0}=0$.\\ Following \cite{EGP}, let $P\hat Z_j \in H_0^1(S)$ be such that $\Delta_g P \hat Z_j =\chi_j \Delta_g \hat Z_j -\frac{1}{|S|}\int_S \chi_j \Delta_g \hat Z_j dv_g$ in $S$, where $$\hat Z_j(x)=\beta_j\Big(\frac{y_{\xi_j}(x)}{\delta_j}\Big)\,,\qquad \beta_j(y)={4\over 3}[2\log \delta_j+\log (1 + |y|^2 )]\frac{1 - |y|^2}{1 + |y|^2} + {8\over 3} \frac{1}{1+ |y|^2}$$ satisfies $e^{\varphi_j}\Delta_g \hat Z_j+e^{U_j} \hat Z_j=e^{U_j}Z_{0j}$ in $B_{2r_0}(\xi_j)$. Since it is easily seen that $ P\hat Z_j= \chi_j \hat Z_j +{16 \pi \over 3}H(\cdot, \xi_j) +O(\de^2 |\log \de |^2)$ uniformly in $S$, we test the equation of $\psi_1$ against $P\hat Z_j$, $j=1,\dots,m_1$ to get: \begin{eqnarray*} &&\hspace{-0.7cm}o(1)=\int_S h P\hat Z_j=\int_S \psi_1 \bigg[\chi_j \Delta_g \hat Z_j -\frac{1}{|S|}\int_S \chi_j \Delta_g \hat Z_j dv_g \bigg]dv_g + \int_S \big[\mathcal{K}_1 \psi_1 + \mathcal{K}_2(\psi_1-\tilde c_1+\tilde c_2)\big] P\hat Z_j dv_g\\ &&= \int_S \chi_j \psi_1 [ \Delta_g \hat Z_j + \mathcal{K}_1 \hat Z_j]dv_g+o(1)=\int_S \chi_j \psi_1 e^{U_j}Z_{0j} dv_g+o(1) =16 \int_{\mathbb{R}^2} \Psi_j \frac{1-|y|^2}{(1+|y|^2)^3}dy+o(1) \end{eqnarray*} in view of $\int_S \mathcal{K}_1 \psi_1 dv_g=0$, $\int_S \mathcal{K}_2 [\psi_1-\tilde c_1+\tilde c_2]P\hat Z_j dv_g=o(1)$, $\int_S \psi_1 dv_g=o(1)$, $\int_S \chi_j \Delta_g \hat Z_j dv_g=O(1)$, $\int_S \chi_j \psi_1 [\mathcal{K}_1-e^{U_j}]\hat Z_j dv_g=O(\delta^2 |\log \delta|^2)$ and $\int_S h P\hat Z_j=O(|\log \delta|\|h\|_*)=o(1)$, $j=1,\dots,m_1$.
Since $\int_{\mathbb{R}^2} \Psi_j \frac{1-|y|^2}{(1+|y|^2)^3}dy=0$, we have that $a_{0j}=0$, $j=1,\dots,m_1$. Now, testing the equation of $\psi_2$ against $P\hat Z_j$, $j=m_1+1,\dots,m$, leads us to deduce that $a_{0j}=0$, $j=m_1+1,\dots,m$. So far, we have shown that $\psi_i \to 0$ in $C_{\hbox{loc}}(S\setminus \{\xi_1^*,\dots,\xi_m^*\})$ and uniformly in $\cup_{j=1}^{m} B_{R \delta_j}(\xi_j)$, for all $R>0$, for both $i=1,2$, in view of $\psi_1-\tilde c_1=\psi_2-\tilde c_2$. \medskip \noindent Setting $\hat \psi_{i,j} (y)=\psi_i (y_{\xi_j}^{-1}(y))$, $\mathcal{\hat K}_{j}(y)=[\mathcal{K}_1 +\mathcal{K}_2] (y_{\xi_j}^{-1}(y))$ and $\hat h_j(y)=h(y_{\xi_j}^{-1}(y))$ for $y \in B_{2r_0}(0)$, we have that $ e^{\hat \varphi_j} \Delta \hat \psi_{1,j} + \mathcal{\hat K}_j \hat \psi_{1,j}=\hat h_j + \mathcal{K}_2(y_{\xi_j}^{-1} (y)) [\tilde c_1 - \tilde c_2] $. By now it is rather standard to show that the operator $\hat L_j=e^{\varphi_j} \Delta + \mathcal{\hat K}_j$ satisfies the maximum principle in $ B_r(0) \setminus B_{R\delta_j}(0)$ for $R$ large and $r>0$ small enough; see for example \cite{DeKM}. As a consequence, we get that $\psi_1 \to 0$ in $L^\infty(S)$. Similarly, we also get that $\psi_2 \to 0$ in $L^\infty(S)$. Since $\psi_i=\phi + \tilde c_i(\phi)$ and $\tilde c_i(\phi) \to \tilde c_{i,0}=0$ along a sub-sequence, $\|\psi_i \|_\infty \to 0$ implies $\phi \to 0$ in $L^\infty(S)$, in contradiction with $\|\phi\|_\infty=1$. This completes the proof.
\end{proof} \noindent We are now ready for \begin{proof}[{\bf Proof (of Proposition \ref{p2}):}] Since $\|\lap_g PZ_{ij}\|_*\le C$ for all $i=0,1,2$, $j=1,\dots,m$, by Proposition \ref{p1} any solution of \eqref{plco} satisfies $$\|\phi\|_\infty\le C |\log \de| \bigg[\|h\|_*+ \sum_{i=1}^2\Big(|c_{0i}|+\sum_{j=1}^m |c_{ij}|\Big) \bigg].$$ To estimate the values of the $c_{ij}$'s, test equation \eqref{plco} against $PZ_{ij}$, $i=1,2$ and $j=1,\dots,m$: $$\int_S \phi L(PZ_{ij})dv_g =\int_S h PZ_{ij}dv_g + \sum_{k=1}^2\Big[c_{0k}\sum_{l=0}^m \int_S \lap_g PZ_{0l} PZ_{ij}dv_g + \sum_{l=1}^m c_{kl} \int_S \lap_g PZ_{kl} PZ_{ij}dv_g\Big].$$ Since for $j=1,\dots,m$ we have the following estimates in $C(S)$ \begin{equation}\label{pzij} PZ_{ij}=\chi_jZ_{ij}+O(\de)\,,\:\:\:i=1,2\,, \qquad PZ_{0j}=\chi_j(Z_{0j}+2)+O(\de^2|\log\de|), \end{equation} it readily follows that $\int_S \lap_g PZ_{kl} PZ_{ij}dv_g=-{32\pi\over 3}\delta_{ki}\delta_{lj}+O(\de)$, where the $\delta_{ij}$'s are Kronecker symbols. By Lemma \ref{ewfxi}, (\ref{repla0}), (\ref{intV1eW}), \eqref{intV2etW} and \eqref{pzij} we have that for $i=1,2$ $$L(PZ_{ij})=\chi_j \Delta_g Z_{ij}+e^{U_j} PZ_{ij}+O\Big(\delta^2 +\delta \sum_{k=1}^m e^{U_k}\Big)= e^{U_j} [PZ_{ij}-e^{-\varphi_j }\chi_j Z_{ij}]+O\Big(\delta^2+\delta \sum_{k=1}^m e^{U_k}\Big)$$ in view of $\frac{\int_S V_1 e^W PZ_{ij}dv_g}{\int_S V_1 e^W dv_g}=O(\delta)$ and $\frac{\int_S V_2 e^{-\tau W} PZ_{ij}dv_g}{\int_S V_2 e^{-\tau W} dv_g}=O(\delta)$ for all $j=1,\dots,m$, leading to $\|L(PZ_{ij})\|_*=O(\delta)$.
Similarly, we have that \begin{eqnarray*} L(PZ_1) &=&\sum_{j=1}^{m_1} e^{U_j} [PZ_{0j}-\chi_j e^{-\varphi_j } Z_{0j}- 2\chi_j ]+O(\delta^2)+O\bigg(\delta \sum_{k=1}^m e^{U_k}\bigg) \end{eqnarray*} in view of $\frac{\int_S V_1 e^{W} PZ_{0j}dv_g}{\int_S V_1 e^{W} dv_g}=\frac{2}{m_1}+O(\delta^2|\log \delta|)$ and $\frac{\int_S V_2 e^{-\tau W} PZ_{0j}dv_g}{\int_S V_2 e^{-\tau W} dv_g}=O(\delta^2|\log \delta|)$ for $j=1,\dots,m_1$, leading to $\|L(PZ_1)\|_*=O(\delta)$. Also, by using a similar argument for $j=m_1+1,\dots,m$, we find that $\|L(PZ_2)\|_*=O(\delta)$. Hence, we get that \begin{equation*} \begin{split} \sum_{i=1}^2\Big[|c_{0i}|+ \sum_{j=1}^m |c_{ij}| \Big] &\leq C' \|h\|_*+\delta |\log \delta|O\Big(\sum_{i=1}^2 \Big[|c_{0i}|+ \sum_{j=1}^m |c_{ij}| \Big] \Big), \end{split} \end{equation*} yielding the desired estimates $\|\phi\|_\infty=O(|\log \delta| \|h\|_*)$ and $\displaystyle \sum_{i=1}^2 \Big[|c_{0i} |+ \sum_{j=1}^m |c_{ij}|\Big] =O(\|h\|_*)$. To prove the solvability assertion, problem \eqref{plco} is equivalent to finding $\phi\in \mathcal H$ such that \begin{equation*} \begin{split} \int_S \langle\nabla \phi, \nabla \psi\rangle_g dv_g=\int_S &\left[{\lambda_1 V_1e^W\over \int_S V_1e^W dv_g}\left(\phi-{\int_S V_1e^W \phi dv_g \over \int_S V_1e^W dv_g}\right) \right. \\ & \left.\; +\, {\lambda_2\tau^2 V_2e^{-\tau W}\over \int_S V_2e^{-\tau W} dv_g}\left(\phi-{\int_S V_2e^{-\tau W} \phi dv_g \over \int_S V_2e^{-\tau W} dv_g}\right) -h\right]\psi dv_g\qquad \forall \, \psi \in \mathcal H, \end{split} \end{equation*} where $\mathcal{H}=\{\phi \in H_0^1(S) \,:\: \int_S \phi \lap_g PZ_{ij} dv_g=\int_S \phi \lap_g PZ_i dv_g=0,\, i=1,2,\, j=1,\dots,m \}$. With the aid of the Riesz representation theorem, Fredholm's alternative guarantees unique solvability for any $h$ provided that the homogeneous equation has only the trivial solution: for \eqref{plco} with $h=0$, the a-priori estimate (\ref{estmfe1}) gives that $\phi=0$.
\medskip \noindent So far, we have seen that, if $T(h)$ denotes the unique solution $\phi$ of \eqref{plco}, the operator $T$ is a continuous linear map from $\{h \in L^\infty(S):\, \int_S h dv_g =0 \}$, endowed with the $\|\cdot\|_*$-norm, into $\{\phi \in L^\infty(S):\, \int_S \phi dv_g =0 \}$, endowed with the $\|\cdot\|_\infty$-norm. The argument below is heuristic but can be made completely rigorous. The operator $T$ and the coefficients $c_{0i},\,c_{ij}$ are differentiable w.r.t. $\xi_{l}$, $l=1,\dots,m$, or $\mu_k$, $ k=1,2$. We shall argue in the same way as to obtain (57) in \cite[Appendix A]{EF}: differentiating equation \eqref{plco}, we formally get that $X=\partial_\beta \phi$, where $\beta=\xi_{l}$ with $l=1,\dots,m$ or $\beta=\mu_k$, $k=1,2$, satisfies $L(X)=\ti h(\phi)+\sum_{i}d_{0i}\lap_g PZ_i+\sum_{i,j} d_{ij}\lap_g PZ_{ij}$, for a suitable choice of $\ti h(\phi)$, $d_{0i}=\partial_\beta c_{0i}$, $d_{ij}=\partial_\beta c_{ij}$, and the orthogonality conditions become $$\int_S X \lap_g PZ_{ij}dv_g =-\int_S \phi \partial_\beta(\lap_g PZ_{ij}) dv_g\,, \qquad \int_S X \lap_g PZ_i dv_g=-\int_S \phi \partial_\beta(\lap_g PZ_i)dv_g.$$ Now, finding and estimating suitable coefficients $b_{0i}$, $b_{ij}$ so that $Y=X+\sum_kb_{0k} PZ_k+ \sum_{k,l} b_{kl}PZ_{kl}$ satisfies the orthogonality conditions $\int_S Y \lap_g PZ_i dv_g=\int_S Y \lap_g PZ_{ij}dv_g=0$, the function $X$ can be uniquely expressed as $X=T(f)-\sum_i b_{0i}PZ_i- \sum_{i,j} b_{ij}PZ_{ij}$, where $f=\ti h(\phi)+\sum_ib_{0i} L(PZ_i)+\sum_{i,j} b_{ij}L(PZ_{ij})$. Moreover, we find that $\|f\|_* \le C {|\log \de| \over \de} \|h\|_*$ for $\beta=\xi_l$ and $\|f\|_*\le C|\log\de| \|h\|_*$ for $\beta=\mu_k$. By (\ref{estmfe1}) we deduce that for any first derivative $$\|\partial_{\xi_l} \phi\|_\infty \le C \Big[|\log \de|\|f\|_*+{\|\phi\|_\infty \over \de}\Big] \le C' {|\log \de|^2 \over\de} \|h\|_*$$ and $\|\partial_{\mu_k} \phi\|_\infty \le C|\log\de|^2 \|h\|_*$.
Differentiating once more in $\mu_j$ the equation satisfied by $\partial_{\mu_i} \phi$ and arguing as above, we finally obtain that $\|\partial_{\mu_i \mu_j} \phi \|_\infty \le C |\log \de|^3 \|h\|_*$, and the proof is complete. \end{proof} \section{Appendix B}\label{appeB} \noindent We shall argue in the same way as in \cite[Proposition 4.2]{EF}, so that by Proposition \ref{p2} we can now deduce the following. \begin{proof}[{\bf Proof (of Proposition \ref{lpnlabis}):}] In terms of the operator $T$, problem \eqref{pnlabis} takes the form $\mathcal{A}(\phi)=\phi$, where $\mathcal{A}(\phi):=-T(R+N(\phi))$. Following \cite{DeKM,EF,EFP,EGP,F}, a standard fixed-point argument can be used to show that $\mathcal{A}$ is a contraction mapping of $\mathcal{F}_\nu$ into itself, where $$\mathcal{F}_\nu=\left \{\phi\in C(S)\,:\: \|\phi\|_\infty \le \nu \bigg[\delta |\log \delta| \sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{2-\sigma}|\log \de|^2 \bigg] \right \}.$$ Therefore $\mathcal{A}$ has a unique fixed point $\phi \in \mathcal{F}_\nu$. \medskip \noindent By the Implicit Function Theorem it follows that the map $(\mu,\xi) \to (\phi(\mu,\xi), c_{0i}(\mu,\xi), c_{ij}(\mu,\xi))$ is (at least) twice differentiable in $\mu$ and once differentiable in $\xi$. Differentiating $\phi=-T(R+N(\phi))$ w.r.t. $\beta=\xi_{l}$, $l=1,\dots,m$, or $\beta=\mu$, we get that $\partial_\beta\phi=-\partial_\beta T(R+N(\phi))- T(\partial_\beta R+\partial_\beta N(\phi))$.
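\noindent The fixed-point step invoked in this proof is the standard Banach/Picard iteration. Purely as an illustration (a toy scalar map standing in for the operator $\mathcal{A}$ on $\mathcal{F}_\nu$, not the actual problem), the sketch below shows the geometric convergence of the iteration for a contraction:

```python
import math

def picard(A, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = A(x_n) until successive iterates differ by < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = A(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# Toy contraction on [0, 1]: A(x) = cos(x) has Lipschitz constant
# sin(1) < 1 there, hence a unique fixed point (the Dottie number).
fp = picard(math.cos, 0.5)
```

The contraction constant controls the rate: each iteration shrinks the distance to the fixed point by at least that factor, which is exactly why the error estimates above survive the fixed-point construction.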
By Lemma \ref{estrr0} and (\ref{estd}) we have that $$ \|\partial_{\xi_l} T(R+N(\phi))\|_\infty \le C {|\log \de|^2 \over\de} (\|R\|_*+\|N(\phi)\|_*)=O\bigg( |\log \de|^2 \sum_{j=1}^m |\nabla\log (\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{1-\sigma}|\log \de|^3 \bigg),$$ for $l=1,\dots,m$, in view of $\|\partial_{\xi_l} W\|_\infty \leq \frac{C}{\de}$ and $$ \|\partial_{\mu} T(R+N(\phi))\|_\infty \le C |\log \de|^2 (\|R\|_*+\|N(\phi)\|_*)=O\bigg( \de|\log \de|^2 \sum_{j=1}^m |\nabla\log (\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{2-\sigma}|\log \de|^3 \bigg),$$ in view of $\|\partial_\mu W\|_\infty \leq C$. So, computing $\partial_\beta N_i(\phi)$ as in \cite[Appendix A]{EF}, with $N_i(\phi)$ given in \eqref{ni}, we find that \begin{equation}\label{derivN} \|\partial_\beta N(\phi) \|_* \le C\left[\|\partial_\beta W\|_\infty\|\phi\|_\infty^2+\|\phi\|_\infty\|\partial_\beta\phi\|_\infty\right] \end{equation} and \begin{eqnarray*} \|\partial_{\xi_l} N(\phi) \|_* &=& O\bigg(\de |\log \delta|^2 \sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|^2+\de^{3-2\sigma}|\log \de|^4\bigg) +o\left(\frac{\|\partial_{\xi_l}\phi\|_\infty}{|\log \de|}\right) \\ \|\partial_{\mu} N(\phi) \|_*&=& O\bigg(\de^2 |\log \delta|^2 \sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|^2+\de^{4-2\sigma}|\log \de|^4\bigg) +o\left(\frac{\|\partial_\mu\phi\|_\infty}{|\log \de|}\right).
\end{eqnarray*} Since $\int_S \chi_j e^{-\varphi_j} e^{U_j}dv_g=\int_{\mathbb{R}^2} \chi(|y|)\frac{8\mu_k^2\de^2 \rho_j(\xi_j)}{(\mu_k^2\de^2 \rho_j(\xi_j)+|y|^2)^2}dy$, where $k=1$ for $j=1,\dots,m_1$ and $k=2$ for $j=m_1+1,\dots,m$, we have that $$\partial_{\xi_l}\bigg(\int_S \chi_j e^{-\varphi_j} e^{U_j}dv_g\bigg)= 8 \partial_{\xi_l} \log \rho_j(\xi_j) \int_{\mathbb{R}^2} \frac{1-|y|^2}{(1+|y|^2)^3}dy+O(\de^2)=O(\de^2)$$ and similarly, \begin{equation*} \begin{split} \partial_{\mu_k} \bigg(\int_S \chi_j e^{-\varphi_j} e^{U_j} dv_g\bigg) & = \int_{\mathbb{R}^2} \chi(|y|)\frac{16 \mu_k\de^2 \rho_j(\xi_j) (|y|^2-\mu_k^2\de^2 \rho_j(\xi_j))}{(\mu_k^2\de^2 \rho_j(\xi_j)+|y|^2)^3} dy =O(\de^2). \end{split} \end{equation*} Since $\varphi_j(\xi_j)=0$ and $\nabla \varphi_j(\xi_j)=0$, we have that $e^{-\varphi_j}=1+O(|y_{\xi_j}(x)|^2)$ and $\partial_\beta(\chi_j e^{-\varphi_j}(x))=O(|y_{\xi_j}(x)|)$, and then $\ds \lab \partial_{\xi_l} W=-\sum_{j=1}^{m_1} \chi_j e^{U_j}\partial_{\xi_l} U_j + {1\over \tau}\sum_{j=m_1+1}^m\chi_j e^{U_j}\partial_{\xi_l} U_j+O(\de^{1-\sigma})$, in view of $|\partial_{\xi_l} U_j|=O(\frac{1}{\de})$, $l=1,\dots,m$, and $\ds \lab \partial_\mu W=-\sum_{j=1}^{m_1} \chi_j e^{U_j}\partial_\mu U_j+ {1\over \tau}\sum_{j=m_1+1}^m\chi_j e^{U_j}\partial_\mu U_j+O(\de^{2-\sigma})$, in view of $|\partial_\mu U_j|=O(1)$, where the big $O$ is estimated in the $\|\cdot \|_*$-norm.
Note that in $B_{r_0}(\xi_j)$ $$\partial_{\xi_l} W= \begin{cases} \partial_{\xi_l} U_j+O(\de^2 |\log \de|+|y_{\xi_j}(x)|+ |\nabla\log(\rho_j \circ y_{\xi_j}^{-1})(0)|),&\text{ for } j\in\{1,\dots,m_1\},\\[0.4cm] -\dfrac{1}\tau \big[\partial_{\xi_l} U_j+O(\de^2 |\log \de|+|y_{\xi_j}(x)|+ |\nabla\log(\rho_j \circ y_{\xi_j}^{-1})(0)|)\big],&\text{ for } j\in\{m_1+1,\dots,m\}, \end{cases}$$ and \begin{eqnarray*} \partial_{\mu_k} W=\begin{cases} \partial_{\mu_k} U_j-\dfrac{2}{\mu_k}+O(\de^2 |\log \de|),&\text{ for } j\in\{1,\dots,m_1\} \\[0.4cm] -\dfrac1\tau\Big[\partial_{\mu_k} U_j-\dfrac{2}{\mu_k}+O(\de^2 |\log \de|)\Big],&\text{ for } j\in\{m_1+1,\dots,m\} \end{cases}. \end{eqnarray*} Furthermore, $\partial_{\xi_l} W=O(1)$ and $\partial_{\mu_k} W=O(\de^2|\log\de|)$ in $S\setminus \cup_{j=1}^m B_{r_0}(\xi_j)$. From computations in the proof of Lemma \ref{ewfxi} we find that \begin{eqnarray} \label{important} &&\hspace{-1cm}\dfrac{\lambda_1 V_1 e^W}{\int_S V_1e^W dv_g} = \frac{\lambda_1}{8\pi m_1} \sum_{j=1}^{m_1} \chi_j \bigg[1+\Big\langle\frac{\nabla ( \rho_j \circ y_{\xi_j}^{-1})(0)}{ \rho_j(\xi_j)},y_{\xi_j}(x)\Big\rangle +O(|y_{\xi_j}(x)|^2+\de^2 |\log \de|)\bigg] e^{U_j} \\ &&\hspace{1.4cm} +\ O(\de^2) \chi_{S \setminus \cup_{j=1}^{m_1} B_{r_0}(\xi_j)},\nonumber \end{eqnarray} and \begin{eqnarray} \label{important2} &&\hspace{-1.5cm}\dfrac{\lambda_2\tau V_2 e^{-\tau W} }{\int_S V_2 e^{-\tau W} dv_g} = \frac{\lambda_2\tau }{8\pi m_2} \sum_{j=m_1+1}^{m} \chi_j \bigg[1+\Big\langle\frac{\nabla ( \rho_j \circ y_{\xi_j}^{-1})(0)}{ \rho_j (\xi_j)},y_{\xi_j}(x)\Big\rangle +O(|y_{\xi_j}(x)|^2+\de^2 |\log \de|)\bigg] e^{U_j}\\ &&\hspace{1.3cm} +\ O(\de^2) \chi_{S \setminus \cup_{j=m_1+1}^{m} B_{r_0}(\xi_j)}.\nonumber \end{eqnarray} By (\ref{2term}), \eqref{Merry1}, \eqref{Merry0}, \eqref{Merry}, (\ref{Merrymu2}), \eqref{important} and \eqref{important2} we deduce for $\partial_\beta R$ the estimate $$\|\partial_{\xi_l} R\|_*+{1\over \de}\|\partial_{\mu_k} 
R\|_*=O\bigg(\sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{1-\sigma} |\log \delta|\bigg), \quad l=1,\dots,m,\ k=1,2.$$ Combining all the estimates, we then get that $$\|\partial_{\xi_l} \phi\|_\infty=O\bigg(|\log \delta|^2 \sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{1-\sigma}|\log \de|^3\bigg) +o\big(\|\partial_{\xi_l}\phi\|_\infty\big)$$ and $$\|\partial_{\mu_k}\phi\|_\infty=O\bigg(\de|\log \delta|^2 \sum_{j=1}^m |\nabla \log(\rho_j \circ y_{\xi_j}^{-1})(0)|+\de^{2-\sigma}|\log \de|^3\bigg) +o\big(\|\partial_{\mu_k} \phi\|_\infty\big),$$ which in turn provides the validity of (\ref{cotadphi1bis}). We proceed in the same way to obtain the estimate (\ref{cotadphi1bis}) on $\partial_{\mu_i\mu_j}\phi$, and the proof is complete. \end{proof} \noindent Lemma \ref{cpfc0bis} is rather standard and we will omit its proof. Since the problem has been reduced to finding critical points of the reduced energy $E_{\lambda_1,\lambda_2}(\mu, \xi)= J_{\lambda_1,\lambda_2} (W+\phi(\mu,\xi))$, where $J_{\lambda_1,\lambda_2}$ is given by \eqref{energy}, the last key step is to show that the main asymptotic term of $E_{\lambda_1,\lambda_2}$ is given by $J_{\lambda_1,\lambda_2}(W)$. \begin{proof}[{\bf Proof (of Theorem \ref{fullexpansionenergy}):}] We argue in the same way as in the proof of \cite[Theorem 4.4]{EF}. For simplicity we write $J$ instead of $J_{\lambda_1,\lambda_2}$. Thus, we get that \begin{eqnarray*} J(W+\phi)-J(W) =-{1\over 2}\int_S \left[R\phi- N(\phi)\phi\right]dv_g+\int_0^1\!\!\!
\int_0^1 [D^2 J(W+ts \phi)-D^2J(W)][\phi,\phi]\,t\, dsdt, \end{eqnarray*} so that it is straightforward to deduce that $$|J(W+\phi)-J(W)|=O(\|R\|_*\|\phi\|_\infty + \|\phi\|_\infty^3) =O\left(\delta^2 |\log \delta |\, |\nabla \varphi_m^*(\xi)|^2+ \delta^{3-\sigma}|\log\delta|^2 \right)$$ in view of (\ref{cotaphi1bis}), $4\pi\nabla_{\xi_j}\varphi_m^*(\xi)=\nabla\log(\rho_j\circ y_{\xi_j}^{-1})(0)$ for $j=1,\dots,m_1$, and $4\pi\tau^2 \nabla_{\xi_j}\varphi_m^*(\xi)=\nabla\log(\rho_j\circ y_{\xi_j}^{-1})(0)$ for $j=m_1+1,\dots,m$. Now, differentiating w.r.t. $\be=\xi_{l}$, $l=1,\dots,m$, or $\be=\mu_k$, $k=1,2$, we get that \begin{eqnarray*} |\partial_\beta[J(W+\phi)-J(W)]|= O(\|\partial_\beta R\|_* \|\phi\|_\infty + \|R\|_* \|\partial_\beta \phi\|_\infty+ \|\phi\|_\infty^2 \|\partial_\beta \phi\|_\infty+\|\phi\|_\infty^3 \|\partial_\beta W\|_\infty) \end{eqnarray*} by using \eqref{derivN}, so that \begin{eqnarray*} |\partial_{\xi_l}[J(W+\phi)-J(W)]|= O\left(\big[\delta^2 |\log \delta|\, | \nabla \varphi_m^*(\xi)|^2+\de^{3-\sigma}|\log \de|^2\big]{|\log\de|\over \de}\right) \end{eqnarray*} and $|\partial_{\mu_k}[J(W+\phi)-J(W)]|=O\left(\big[\delta^2 |\log \delta|\, | \nabla \varphi_m^*(\xi)|^2+\de^{3-\sigma}|\log \de|^2\big] |\log\de|\right)$ in view of (\ref{cotaphi1bis})-(\ref{cotadphi1bis}), $\|\partial_{\xi_l} W\|_\infty=O(\frac{1}{\de})$ and $\|\partial_{\mu_k} W\|_\infty=O(1)$. Arguing similarly for the second derivative in $\mu$, we get that $\left|\partial_{\mu_i \mu_k}[J(W+\phi)-J(W)]\right|=O\left(\big[\delta^2 |\log \delta|\, | \nabla \varphi_m^*(\xi)|^2+\de^{3-\sigma}|\log \de|^2\big]|\log\de|^2 \right)$.
Combining the previous estimates on the difference $J(W+\phi)-J(W)$ with the expansion of $J(W)=J_{\lambda_1,\lambda_2}(W)$ contained in Theorem \ref{expansionenergy}, we deduce the validity of the expansion (\ref{fullJUt}) with an error term which can be estimated (in $C^2(\mathbb{R}^2)$ and $C^1(\Xi)$) as $o(\de^2)+r_{\lambda_1,\lambda_2}(\mu,\xi)$ as $\de \to 0$, where $r_{\lambda_1,\lambda_2}(\mu,\xi)$ satisfies (\ref{rlambda}). \end{proof} \small
\section{Introduction} The quest for the experimental realization of a chiral $p_x+ip_y$ superconductor in two dimensions (2D) is gathering increasing attention because this phase exhibits Majorana modes, which are relevant for constructing fault-tolerant topological quantum computers \cite{kitaev,zoller}. Although a chiral $p$-wave superfluid has been shown to occur in the A-phase of $^3$He at high pressure \cite{volovik} and experiments have revealed that strontium ruthenate (Sr$_2$RuO$_4$) is a $p$-wave superconductor \cite{kallin}, the manipulation of the Majorana modes in these systems remains difficult. Therefore, the prospect of creating a $p$-wave superfluid using ultracold atoms is very appealing because these systems allow for great control over the degrees of freedom. Several possibilities to generate chiral superfluids have been proposed in the context of ultracold atoms in optical lattices: by using orbital degrees of freedom \cite{sengstock,hemmerich}, spin-orbit coupling \cite{spielman,sarma} or dipolar interactions \cite{lewenstein,shlyapnikov}. However, these methods either bring new problems to the experimental implementation, such as heating and ultracold chemical reactions, or require a sophisticated optical-lattice setup and further manipulations to populate the $p$-orbitals. Here, we adopt a completely different, but feasible, route to produce $p$-wave superfluids, which consists of inducing the pairing among the 2D polarized fermionic atoms through a 3D bath of bosonic excitations. The dimensional mismatch between the fermions and the excitations that mediate their interaction leads to a huge increase of the superconducting gap, and consequently of the critical temperature for the observation of the chiral superfluid. The main advantage of our proposal is that it avoids three-body losses and dynamical instabilities (phase separation), which constitute major problems in a strongly interacting Fermi-Bose mixture.
Mixed-dimension mixtures of two-species fermions with weak interaction were investigated previously \cite{nishida,Nishida2010poa}, with the coupling between polarized fermions in 2D mediated by the particle-hole excitations of a 3D Fermi-sea background. Although the Fermi-Fermi mixture is highly stable, the Fermi-Bose mixture, with phonon excitations, provides a much larger $p$-wave coupling between the fermions. Recently, a 2D-3D mixture of fermions and bosons was considered, and the Berezinskii-Kosterlitz-Thouless (BKT) critical temperature was determined accounting for effects of retardation \cite{bruun}. However, many-body effects were neglected. We argue here that the proximity between the Fermi and sound velocities requires the inclusion of many-body corrections, namely the vertex ladder diagrams and the RPA dressing of the phonon propagator \cite{schrieffer,Roy2014mta}. We calculate these higher-order contributions, which are usually disregarded in the BCS treatment of conventional superconductors, and show that they significantly increase the magnitude of the anomalous $p$-wave gap in the Fermi-Bose mixture in mixed dimensions. In this calculation, however, we do not consider the renormalization of the pole of the Green's function, nor do we take into account retardation effects (the frequency dependence of the irreducible vertex). The fermion self-energy due to the scattering of the background excitations can be neglected due to the small value of the Fermi-Bose coupling $g_{FB}$, and retardation effects should not provide a relevant contribution to the vertex \cite{kagan} because the singularity for pair formation must come from scattering at the Fermi surface (Cooper instability \cite{schrieffer,abrikosov}). The simultaneous analysis of both these effects, i.e., retardation and the higher-order vertex correction, is a tremendous task.
Since our calculations are performed in the small-momentum limit, including retardation should enhance the positive region of the vertex, because correlations between the fermions lead to an even higher prediction for the critical temperature of $p$-wave superfluid formation ($T_c^p$) \cite{Grimaldi}. Hence, the very high value of $T_c^p$ that we found due to the vertex correction is actually a lower bound, given the approximations performed. This paper is structured as follows: Sec.~II presents the system Hamiltonian for the bosonic and fermionic species, whereas in Sec.~III the interaction between the fermions, mediated by the bosonic excitations, is characterized. In Secs.~IV and V, we build the BCS Hamiltonian for the 2D system and solve the associated gap equation, respectively. Higher-order corrections to the gap magnitude are evaluated in Sec.~VI, and the experimental feasibility, conclusions and implications of this work are discussed in Secs.~VII and VIII, respectively.
\section{System Hamiltonian} We start by defining the Hamiltonian $\hat{H} = \hat{H}_B+\hat{H}_F+\hat{H}_{FB}$, where the boson-field operators $\hat{\phi}$ live in 3D, whereas the polarized fermions ${\hat{\psi}}$ live in 2D (we set $\hbar =1$), \begin{eqnarray} \nonumber && \hat{H}_B=\int dz \int d^2x \hat{\phi}^{\dag}(t,\mathbf{x},z)\Big[-\frac{\nabla^2}{2m_B} \\ && \qquad \qquad \quad + \frac{g_{B}}{2} \hat{\phi}^{\dag}(t,\mathbf{x},z)\hat{\phi}(t,\mathbf{x},z)-\mu_B\Big]\hat{\phi}(t,\mathbf{x},z), \\ \label{system} && \hat{H}_F=\int d^2x \hspace{0.15cm} \hat{{\psi}}^{\dag}(t,\mathbf{x})\Big[-\frac{\nabla^2}{2m_F}-\mu_F\Big]\hat{\psi}(t,\mathbf{x}), \\ \nonumber &&\hat{H}_{FB} \hspace{-0.1cm} =\hspace{-0.1cm}g_{FB}\hspace{-0.15cm} \int \hspace{-0.1cm} dz \hspace{-0.15cm}\int\hspace{-0.1cm} d^2x \delta(z) \hat{\psi}^{\dag}(t,\mathbf{x})\hat{\phi}^{\dag}(t,\mathbf{x},z)\hat{\phi}(t,\mathbf{x},z)\hat{\psi}(t,\mathbf{x}), \\ \end{eqnarray} with the mass of the bosonic and fermionic species given by $m_B$ and $m_F$, and their chemical potentials by $\mu_B$ and $\mu_F$, respectively. The intra- and interspecies contact repulsive interactions are characterized by the coupling constants $g_{B}$ and $g_{FB}$, respectively. We can express the boson-field operators in terms of a discrete set of bosonic modes $\hat{b}_{\mathbf{q}}$, with $V$ the volume of the 3D space and $\mathbf{r}=(\mathbf{x},z)$, \begin{equation} \hat{\phi}(t,\mathbf{x},z) = \frac{1}{\sqrt{V}}\sum_{\mathbf{q}}e^{i \mathbf{q}\cdot\mathbf{r}} \hat{b}_{\mathbf{q}}(t), \end{equation} which allows us to rewrite the bosonic part of the Hamiltonian in momentum space, \begin{eqnarray} \nonumber &&\hat{H}_B(t) = \sum_{\mathbf{q}}\left(\frac{q^2}{2m_B}-\mu_B\right) \hat{b}^{\dagger}_{\mathbf{q}}(t) \hat{b}_{\mathbf{q}}(t) \\ && + \frac{g_{B}}{2V}\sum_{\mathbf{q},\mathbf{q'},\mathbf{q''}}\hat{b}^{\dagger}_{\mathbf{q}+\mathbf{q''}}(t) \hat{b}^{\dagger}_{\mathbf{q'}-\mathbf{q''}}(t) \hat{b}_{\mathbf{q}}(t) \hat{b}_{\mathbf{q'}}(t).
\end{eqnarray} To characterize the Bose-Einstein condensate, we now use Bogoliubov theory to deal with the macroscopic occupation of the zero-momentum state, that is $\hat{b}_0 = \hat{b}_0^{\dagger} = \sqrt{N_0}$. Neglecting higher-order fluctuations, we obtain \begin{eqnarray} \nonumber &&\hat{H}_B(t) = \frac{g_{B}N_0^2}{2V}+ \sum_{\mathbf{q}}\Big(\frac{q^2}{2m_B} + n_Bg_{B}\Big) \hat{b}^{\dagger}_{\mathbf{q}}(t) \hat{b}_{\mathbf{q}}(t) \\ && + \frac{g_{B}n_B}{2}\sum_{\mathbf{q}} \left[\hat{b}^{\dagger}_{\mathbf{q}}(t) \hat{b}^{\dagger}_{-\mathbf{q}}(t)+\hat{b}_{\mathbf{q}}(t) \hat{b}_{-\mathbf{q}}(t)\right]. \end{eqnarray} After symmetrizing the above expression, with a sum covering half of the momentum space, and performing a Bogoliubov canonical transformation $\hat{b}_{\mathbf{q}}= u_q\hat{\beta}_{\mathbf{q}}-v_q\hat{\beta}^{\dagger}_{-\mathbf{q}}$ and $\hat{b}_{-\mathbf{q}}= u_q\hat{\beta}_{-\mathbf{q}}-v_q\hat{\beta}^{\dagger}_{\mathbf{q}}$, where we select the real parameters $u_q, v_q$ so that the operators $(\hat{\beta},\hat{\beta}^{\dagger})$ diagonalize $\hat{H}_B$, we find \begin{equation} \hat{H}_B(t) = \frac{g_{B}n_BN_0}{2}+ \sum_{\mathbf{q} (\mathbf{q}\neq0)} \omega_q \hat{\beta}^{\dagger}_{\mathbf{q}}(t) \hat{\beta}_{\mathbf{q}}(t) -\frac{1}{2} \sum_{\mathbf{q}(\mathbf{q}\neq0)} (\xi_q-\omega_q), \end{equation} with the energy spectrum of the free Bogoliubov excitations $\omega_q=\sqrt{\xi_q^2-(g_{B}n_B)^2}$, where \begin{equation}\xi_q=\frac{q^2}{2m_B}+g_{B}n_B.\end{equation} Applying the same set of transformations to the interspecies-interaction Hamiltonian ($\hat{H}_{FB}$), and considering $u_q = \sqrt{{\xi_q}/{\omega_q}+1}/{\sqrt{2}}$ and $v_q=\sqrt{{\xi_q}/{\omega_q}-1}/{\sqrt{2}}$, with $\hat{\psi}(t,\mathbf{x}) = ({1}/{\sqrt{S}})\sum_{\mathbf{p}} e^{i\mathbf{p}\cdot\mathbf{x}}\hat{a}_{\mathbf{p}}(t)$, where $S$ denotes the 2D surface, we get \begin{eqnarray} \nonumber \label{eq9} &&\hat{H}_{FB}(t) =g_{FB}n_BN_F \\ \nonumber &&
+\frac{g_{FB}\sqrt{N_0}}{V}\sum{}^{'}_{\mathbf{p},\mathbf{q_{\bot}},q_z} V_q \hat{a}^{\dag}_{\mathbf{p}}(t)\left[\hat{\beta}_{\mathbf{q}}(t)+\hat{\beta}^{\dagger}_{-\mathbf{q}}(t)\right]\hat{a}_{\mathbf{p}-\mathbf{q_{\bot}}}(t), \\ \end{eqnarray} with \vspace{-0.25cm}\begin{equation} \label{eqC} V_q=\left(\frac{{q^2}}{{q^2}+4m_Bg_{B}n_B}\right)^{1/4}. \end{equation} In Eq.~(\ref{eq9}), the prime symbol in the sum indicates that $\mathbf{q}\neq0$, and we separate the components of $\mathbf{q}=(\mathbf{q}_{\bot},q_z)$, to account for momentum conservation in the plane. \section{Effective Interaction} As expressed in Eq.~(\ref{system}), there is no direct interaction between the polarized fermions in $H_F$, due to the Pauli exclusion principle. We show here, however, how an indirect interaction between fermions arises from $H_{FB}$. For that, we define the effective coupling constant $\lambda_{\textrm{eff}}$ from the four-point function $\Gamma = \Gamma(\mathbf{p},\mathbf{p'},\mathbf{k},\mathbf{k'}; \varepsilon,\varepsilon',\nu,\nu')$ as follows \begin{eqnarray}\nonumber \label{eq1} \Gamma && = \hspace{-0.6cm}\prod\limits_{\substack{i=1..4 \\ \varepsilon_i=\varepsilon,\varepsilon',\nu,\nu'}} \hspace{-0.4cm} \int dt_i e^{i\varepsilon_i t_i}\Big\langle \hat{a}^{\dag}_{\mathbf{p}}(t_1) \hat{a}^{\dag}_{\mathbf{k}}(t_2)\hat{a}_{{\mathbf{p'}}}(t_3)\hat{a}_{{\mathbf{k'}}}(t_4) e^{-i \int d{t} \hat{H}_{FB}({t})}\Big\rangle \\ \nonumber && = \frac{1}{S} i \lambda_{\textrm{eff}} \delta_{\mathbf{p}+\mathbf{k},\mathbf{p'}+\mathbf{k'}}\delta(\varepsilon+\nu-\varepsilon'-\nu') \\ && \times G_0(\mathbf{p},\varepsilon) G_0(\mathbf{p'},\varepsilon-\omega) G_0(\mathbf{k},\nu) G_0(\mathbf{k'},\nu+\omega), \end{eqnarray} with $G_0$ corresponding to the free-fermion propagator and $\omega = \varepsilon - \varepsilon' = \nu'-\nu$. 
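As a quick numerical sanity check of the Bogoliubov transformation used in the previous section (the parameter values below are illustrative placeholders, not taken from the text): the coefficients satisfy the canonical condition $u_q^2-v_q^2=1$, and the spectrum is phonon-like at small momentum, $\omega_q\simeq c_s q$ with $c_s=\sqrt{g_Bn_B/m_B}$.

```python
import math

m_B = 1.0   # boson mass (illustrative units)
gn = 0.5    # g_B * n_B (illustrative)

def xi_q(q):
    return q**2 / (2.0 * m_B) + gn

def omega_q(q):
    # Bogoliubov spectrum: omega_q = sqrt(xi_q^2 - (g_B n_B)^2)
    return math.sqrt(xi_q(q)**2 - gn**2)

def uv(q):
    u = math.sqrt(xi_q(q) / omega_q(q) + 1.0) / math.sqrt(2.0)
    v = math.sqrt(xi_q(q) / omega_q(q) - 1.0) / math.sqrt(2.0)
    return u, v

c_s = math.sqrt(gn / m_B)   # Bogoliubov sound velocity

for q in (0.3, 1.0, 3.0):
    u, v = uv(q)
    # a canonical bosonic transformation requires u_q^2 - v_q^2 = 1
    assert abs(u**2 - v**2 - 1.0) < 1e-12

# small-q limit: linear phonon dispersion omega_q ~ c_s q
q_small = 1e-3
ratio = omega_q(q_small) / (c_s * q_small)
```

At large $q$ the same $\omega_q$ crosses over to the free-particle form $q^2/2m_B$, which is why the induced interaction below is cut off at momenta beyond the inverse healing length.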
\begin{figure}[!ht] \centering \includegraphics[width=0.25\textwidth]{fig1.jpg} \caption{Second-order Feynman diagram for the interaction between two fermions in 2D induced by the Bogoliubov modes of the 3D BEC.} \label{fig1} \end{figure} Considering the weak-coupling regime, to second order in the interaction (see Fig.~\ref{fig1}), we obtain \begin{eqnarray} \label{eq2} \nonumber \Gamma^{(2)} && = i \frac{g^2_{FB}n_B}{V}\delta_{\mathbf{p}+\mathbf{k},\mathbf{p'}+\mathbf{k'}}\delta(\varepsilon+\nu-\varepsilon'-\nu') \sum_{q_z} V_q^2 D_0(\mathbf{q},\omega) \\ && \times G_0(\mathbf{p},\varepsilon) G_0(\mathbf{p'},\varepsilon-\omega) G_0(\mathbf{k},\nu) G_0(\mathbf{k'},\nu+\omega), \end{eqnarray} where $D_0(\mathbf{q},\omega)$ denotes the free-phonon propagator and $\mathbf{q}_{\bot}=\mathbf{p}-\mathbf{p'}=\mathbf{k'}-\mathbf{k}$. Comparing Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}), we find \begin{eqnarray}\label{eq3} \nonumber \lambda_{\textrm{eff}} = g_{FB}^2 n_B \int_{-\infty}^{\infty} \frac{dq_z}{2\pi} \left(\frac{\frac{q^2}{2m_B}}{\frac{q^2}{2m_B}+2g_{B}n_B}\right)^{1/2} \hspace{-0.25cm} \frac{2\omega_q}{\omega^2-\omega_q^2+i\delta} . \\ \end{eqnarray} For low-energy processes, where the scattered fermions are kept around the 2D Fermi surface, we can assume $\omega \sim 0$, and Eq.~(\ref{eq3}) can be simplified as \begin{eqnarray}\nonumber \label{eq0} \lambda_{\textrm{eff}} \; &&= - \frac{2}{\pi} m_B g_{FB}^2 n_B \int_{-\infty}^{\infty} dq_z \frac{1}{q_z^2+{q_{\bot}}^2+4m_Bg_{B}n_B} \\ &&= - 2 m_B g_{FB}^2 n_B \frac{1}{\sqrt{{q_{\bot}}^2+4m_Bg_{B}n_B}}. \end{eqnarray} Hence, an effective potential $\lambda_{\textrm{eff}} = V_{\textrm{eff}}(q_{\bot}= |\mathbf{p'}-\mathbf{p}|)$ is generated between the fermions, as a function of the momentum exchange $\mathbf{q}_{\bot}$ between the scattered particles.
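The $q_z$ integration in Eq.~(\ref{eq0}) is elementary, $\int_{-\infty}^{\infty} dq_z\,(q_z^2+a^2)^{-1}=\pi/a$; the sketch below checks the closed form against direct quadrature, with all couplings set to placeholder values:

```python
import math

def lam_eff_numeric(q_perp, mB, gFB, nB, gB, half_width=1000.0, n=200_000):
    """Midpoint-rule evaluation of
    -(2/pi) mB gFB^2 nB * \int dq_z / (q_z^2 + q_perp^2 + 4 mB gB nB),
    truncated to |q_z| < half_width."""
    a2 = q_perp**2 + 4.0 * mB * gB * nB
    h = 2.0 * half_width / n
    s = 0.0
    for i in range(n):
        qz = -half_width + (i + 0.5) * h
        s += h / (qz**2 + a2)
    return -(2.0 / math.pi) * mB * gFB**2 * nB * s

def lam_eff_closed(q_perp, mB, gFB, nB, gB):
    # closed form of Eq. (eq0): -2 mB gFB^2 nB / sqrt(q_perp^2 + 4 mB gB nB)
    return -2.0 * mB * gFB**2 * nB / math.sqrt(q_perp**2 + 4.0 * mB * gB * nB)

params = dict(mB=1.0, gFB=0.3, nB=1.0, gB=0.5)   # illustrative values
num = lam_eff_numeric(1.0, **params)
exact = lam_eff_closed(1.0, **params)
```

The small residual discrepancy comes purely from truncating the $q_z$ range; the $1/q_z^2$ tail contributes at the $10^{-3}$ level for the window above.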
In 2D real space, with coordinate $\mathbf{R}$, this yields an attractive Yukawa potential between the fermionic particles in the plane, \begin{eqnarray}\nonumber V_{\textrm{eff}}(\mathrm{R}) = \int d^2Q e^{i \mathbf{Q}\cdot\mathbf{R}} V_{\textrm{eff}} (Q) \\ = - 2\pi \frac{g_{FB}^2}{g_{B}} \frac{1}{\xi^2} \frac{1}{\mathrm{R}} e^{- \frac{\sqrt{2}}{\xi} \mathrm{R}}, \end{eqnarray} with range given by the healing length $\xi = 1/\sqrt{2m_B g_{B} n_B}$ of the BEC. \section{BCS Hamiltonian} We consider the generalized BCS-type Hamiltonian in momentum space for the fermions in the plane, \begin{eqnarray} \nonumber \label{eq4} &&\hat{H}^{\prime}_F = \hspace{-0.2cm}\int\hspace{-0.2cm} \frac{d^2p}{(2\pi)^2}\hspace{-0.05cm} \bigg\{\hspace{-0.1cm} \left(\hspace{-0.05cm} \frac{p^2}{2m_F}-\mu\hspace{-0.05cm}\right)\hspace{-0.1cm} \hat{a}^{\dag}(\mathbf{p})\hat{a}(\mathbf{p}) +\frac{1}{2}\hspace{-0.1cm} \int \hspace{-0.1cm} \frac{d^2k d^2k'}{(2\pi)^4} V_{\textrm{eff}}(\mathbf{p},\mathbf{k}) \\ &&\times \hat{a}^{\dag}\hspace{-0.05cm} \left({\mathbf{k'}}/{2}+\mathbf{k}\right)\hspace{-0.05cm}\hat{a}^{\dag}\hspace{-0.05cm} \left({\mathbf{k'}}/{2}-\mathbf{k}\right)\hspace{-0.05cm} \hat{a}\hspace{-0.05cm} \left({\mathbf{k'}}/{2}-\mathbf{p}\right)\hspace{-0.05cm} \hat{a}\hspace{-0.05cm} \left({\mathbf{k'}}/{2}+\mathbf{p}\right) \bigg\}, \end{eqnarray} with a momentum-dependent mediated interaction $V_{\textrm{eff}}(\mathbf{p},\mathbf{k})$ and $\mu = \mu_F-n_Bg_{FB}$. According to Eq.~(\ref{eq0}), we consider the interaction potential \begin{eqnarray} \label{veff0} V_{\textrm{eff}}(\mathbf{p},\mathbf{k}) = - V_0\frac{1}{\sqrt{|\mathbf{p}-\mathbf{k}|^2+2\xi^{-2}}}, \end{eqnarray} with $V_0 = 2 g_{FB}^2n_Bm_B$. 
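The momentum-space form in Eq.~(\ref{veff0}) and the real-space Yukawa expression above are a 2D Fourier pair. After the angular integration, the pair reduces to the radial (Hankel-type) identity $\int_0^\infty e^{-\kappa R}J_0(QR)\,dR=1/\sqrt{Q^2+\kappa^2}$ with $\kappa=\sqrt{2}/\xi$, which the following pure-Python sketch verifies in illustrative units ($\xi=1$):

```python
import math

# Simpson nodes/weights for J0(x) = (1/pi) \int_0^pi cos(x sin(theta)) d(theta)
_N = 200
_H = math.pi / _N
_SIN = [math.sin(i * _H) for i in range(_N + 1)]
_W = [1 if i in (0, _N) else (4 if i % 2 else 2) for i in range(_N + 1)]

def j0(x):
    s = sum(w * math.cos(x * sn) for w, sn in zip(_W, _SIN))
    return s * _H / (3.0 * math.pi)

def ft2d_yukawa(Q, kappa, Rmax=30.0, n=6000):
    """\int_0^infty e^{-kappa R} J0(Q R) dR (midpoint rule): the radial form
    of the 2D Fourier transform of the Yukawa potential e^{-kappa R}/R."""
    h = Rmax / n
    return sum(h * math.exp(-kappa * (i + 0.5) * h) * j0(Q * (i + 0.5) * h)
               for i in range(n))

kappa = math.sqrt(2.0)   # sqrt(2)/xi with xi = 1 (illustrative units)
Q = 1.3
num = ft2d_yukawa(Q, kappa)
exact = 1.0 / math.sqrt(Q**2 + kappa**2)   # momentum-space 1/sqrt(Q^2 + 2/xi^2)
```

The exponential damping makes the $R$-integral converge rapidly, so a modest cutoff $R_{\max}\sim 20\xi$ already suffices.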
After symmetrizing the BCS Hamiltonian properly, we apply the Bogoliubov transformation and find a new basis of operators (see App.~\ref{apA} for details) to build the diagonal form \begin{eqnarray} && \hat{H}_F^{BCS} \nonumber = {\sum_{\mathbf{p}}} E_{p} \hat{\alpha}^{\dag}_{\mathbf{p}}\hat{\alpha}_{\mathbf{p}} + \\ &&+ \frac{1}{2}{\sum_{\mathbf{p}}}\bigg\{\frac{|{\triangle}_{\mathbf{p}}|^2}{E_{p}}\Big[1-2n_F(E_{p})\Big]+ \left(\epsilon_{p}-E_{p}\right)\bigg\},\end{eqnarray} with the energy dispersion $E_{p}=\sqrt{\epsilon_{p}^2+|{\triangle}_{\mathbf{p}}|^2}$ and the occupation function $n_{F}(E_{p}) = [\exp(\beta E_{p}) +1]^{-1}$ of the Bogoliubov modes, where $\beta=(k_BT)^{-1}$. As shown in App.~\ref{apA}, we can now also write the gap in terms of its mean value in this new basis, obtaining \begin{eqnarray}\label{eq5} \triangle_{\mathbf{p}} = - \int \frac{d^2k}{(2\pi)^2} V_{\textrm{eff}}(\mathbf{p},\mathbf{k}) \frac{{\triangle}_{\mathbf{k}}}{2E_{k}}\Big[1-2n_F(E_{k})\Big].\quad \end{eqnarray} \section{Gap Equation} To solve the integral equation for a momentum-dependent pairing gap in Eq.~(\ref{eq5}), it is convenient to use the 2D partial-wave expansion of the effective potential \cite{anderson,chubukov}, \begin{eqnarray} \label{eq6} V_{\textrm{eff}}(\mathbf{p},\mathbf{k}) = \sum_{\ell} V_{\textrm{eff}}^{(\ell)}(p,k) \cos[\ell (\theta -\varphi)], \end{eqnarray} with $\ell$ integer, $p = |\mathbf{p}|$, $k = |\mathbf{k}|$, and where we associated the angles ${\theta}_{\mathbf{\hat{p}}}=\theta$ and ${\theta}_{\mathbf{\hat{k}}}=\varphi$. Because we are assuming low-energy processes, with the scattered momentum close to the Fermi surface, it is reasonable to consider $p\sim k = k_{F}$ in the coefficients of Eq.~(\ref{eq6}).
For $\ell=1$, considering the even parity of the potential, we have \begin{eqnarray}\nonumber \label{l1} V_{\textrm{eff}}^{(1)}(k_{F}) &&= \frac{1}{\pi^2}\int \hspace{-0.2cm}\int_{-\pi}^{\pi} \frac{-V_0 \cos\varphi \cos\theta }{\sqrt{2\xi^{-2}+2k_F^2\left[1-\cos(\theta-\varphi)\right]}} d\theta d\varphi \\ &&= \frac{2\sqrt{2}}{\pi}\; V_0 \xi \; \mathcal{F}(k_F\xi), \end{eqnarray} where \begin{eqnarray} \label{eqF} &&\mathcal{F}(X)= \frac{E[-2 X^2] - (1 + X^2) \; K[-2 X^2]}{X^2}, \end{eqnarray} \\ with $E[X]$ the complete elliptic integral of the second kind, $K[X]$ the complete elliptic integral of the first kind, and $X= k_F\xi$ (see the inset of Fig.~\ref{fig2}). \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{fig2.pdf} \caption{Profile of the function $\mathcal{F}(X)/X$ used to estimate the maximum gap in Eq.~(\ref{GAPmax}). Inset: harmonic $\ell=1$ of the effective potential, i.e. $\mathcal{F}(X)$ in Eq.~(\ref{l1}), as a function of $X=k_F\xi$.} \label{fig2} \end{figure} Since in the weak-coupling limit one expects that the mixing of different angular momenta $\ell$ will be small, we are in a position to solve the gap equation by applying the pure $\ell$-type ansatz $\triangle_{\mathbf{p}} = \triangle^{(\ell)} e^{i\ell\theta_{\hat{\mathbf{p}}}}$ in Eq.~(\ref{eq5}). That gives \begin{eqnarray} \nonumber \label{eq7} \nonumber \triangle^{(\ell)} e^{i\ell\theta_{\hat{\mathbf{p}}}} &=& - \int \frac{d^2k}{(2\pi)^2} V_{\textrm{eff}}(\mathbf{p},\mathbf{k})\frac{\triangle^{(\ell)} e^{i\ell\theta_{\hat{\mathbf{k}}}}}{2E_{k}}\left[1-2n_F(E_{k})\right] \\ \nonumber 1 &=& - \int \frac{k dk d\varphi}{(2\pi)^2} \sum_{\ell^{\prime}} V_{\textrm{eff}}^{(\ell^{\prime})}(k_F) \cos[\ell^{\prime} (\theta-\varphi)] \\ &\times& \frac{e^{i\ell(\varphi-\theta)}}{2E_{k}}\left[1-2n_F(E_{k})\right].
\end{eqnarray} Analytical solutions for $\triangle^{\textrm{Max}}$ and $T_c$ can be obtained in two limiting cases: 1)$\;T \rightarrow 0$, where we have the maximum gap value, and 2) $\;T \rightarrow T_c$, where the gap goes to zero. For the first limit, we find $E_{k} = \sqrt{\epsilon_{k}^2+|\triangle^{(\ell)}|^2}$ and $n_F(E_{k}) \rightarrow 0$. Then, applying the orthogonality condition given by the angular integral of Eq.~(\ref{eq7}), we eliminate the sum in $\ell'$ to obtain \begin{eqnarray} \nonumber \label{eq8} && 1 = - \frac{1}{(2\pi)^2} \frac{\pi}{4}V_{\textrm{eff}}^{(\ell)}(k_F) \int k dk \frac{1}{\sqrt{\epsilon_{k}^2+|\triangle^{(\ell)}|^2}} \\ && 1 = - \frac{1}{2\pi} V_{\textrm{eff}}^{(\ell)}(k_F) \frac{\pi}{4} \frac{m_F}{2\pi}\int_{0}^{\Lambda_{\varepsilon}} d\varepsilon \frac{1}{\sqrt{\varepsilon^2+|\triangle^{(\ell)}|^2}}, \end{eqnarray} where we can identify the density of states at the Fermi surface $\rho_{2D} = m_F/2\pi$ and the cut-off energy scale given by the Fermi energy of the 2D system $\Lambda_{\varepsilon}\sim k_F^2/2m_F$. Since we consider the small-momentum regime, the fermions are scattered to states around the Fermi level. As can be seen from Table~\ref{TabLiYbParameters} in the experimental section, $k_F$ is very close to the inverse of the healing length $\xi$, which characterizes the range of the interaction potential. One can show that the induced attraction of Eq.~(\ref{veff0}) is strongest in the $p$-wave channel. That means that the dominant pairing instability is in the channel with orbital angular momentum $\ell=1$, and the most stable low-temperature phase, i.e., the one with the highest critical temperature, has $p_{x} + ip_{y}$ symmetry \cite{anderson, nishida}.
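Since $\int_{0}^{\Lambda_{\varepsilon}} d\varepsilon/\sqrt{\varepsilon^2+|\triangle|^2}=\mathrm{arcsinh}(\Lambda_{\varepsilon}/|\triangle|)$, the condition in Eq.~(\ref{eq8}) can also be solved numerically. The sketch below uses an illustrative dimensionless coupling $g=\rho_{2D}|\tilde{V}_{\textrm{eff}}^{(1)}(k_F)|$ (not a value from the text) and recovers the BCS-type exponential form $\triangle = 2\Lambda_{\varepsilon}e^{-1/g}$:

```python
import math

Lambda = 1.0   # cutoff Lambda_eps (sets the unit of energy)
g = 0.15       # rho_2D * |V_eff^(1)| / 8, illustrative coupling

def gap_condition(delta):
    # 1 = g * \int_0^Lambda d(eps) / sqrt(eps^2 + delta^2) = g * asinh(Lambda/delta)
    return g * math.asinh(Lambda / delta) - 1.0

# gap_condition decreases monotonically in delta: geometric bisection
lo, hi = 1e-12, Lambda
for _ in range(200):
    mid = math.sqrt(lo * hi)   # delta spans many decades, bisect in log scale
    if gap_condition(mid) > 0:
        lo = mid
    else:
        hi = mid
delta_num = math.sqrt(lo * hi)

# weak-coupling closed form, cf. Eq. (GAP) with rho_2D * V~ = -g
delta_bcs = 2.0 * Lambda * math.exp(-1.0 / g)
```

The two agree to better than one part in $10^4$ at this coupling, the residual difference being the $O((\triangle/\Lambda_\varepsilon)^2)$ correction between $\mathrm{arcsinh}(\Lambda_\varepsilon/\triangle)$ and $\ln(2\Lambda_\varepsilon/\triangle)$.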
We can then solve Eq.~(\ref{eq8}) for the maximum gap \begin{eqnarray} \label{GAP} \triangle^{\textrm{Max}} = \triangle^{(1)}= 2 \Lambda_{\varepsilon} \; \exp\bigg(\frac{1}{\rho_{2D}\tilde{V}_{\textrm{eff}}^{(1)}(k_F)}\bigg), \end{eqnarray} \\ with $\;\tilde{V}_{\textrm{eff}}^{(1)}(k_F) = {V}_{\textrm{eff}}^{(1)}(k_F)/8$. The vertex renormalization for two particles in vacuum allows us to express the bare coupling parameter as $g_{FB} \rightarrow -2\pi a_{\textrm{eff}} /\sqrt{m_Bm_{FB}}$ \cite{nishida2}, with the reduced mass $m_{FB}=m_Bm_F/(m_B+m_F)$ and the effective two-body scattering length $a_{\textrm{eff}}$ for a 2D-3D scattering. The latter will be a function of the original 3D scattering length $a_{FB}$ and of the axial confinement. That gives \begin{eqnarray} &&\tilde{V}_{\textrm{eff}}^{(1)}(k_F) = 2\sqrt{2}\pi \frac{n_B a_{\textrm{eff}}^2\xi}{m_{FB}} \mathcal{F}(k_F\xi). \end{eqnarray} Considering $k_F=\sqrt{4\pi n_F}$ and $\xi = 1/\sqrt{8\pi n_Ba_B}$, we obtain \begin{eqnarray} \xi k_F= \frac{1}{\sqrt{2}} \sqrt{\frac{n_F}{a_Bn_B}}. \end{eqnarray} Thus, we estimate the gap in Eq.~(\ref{GAP}) using \begin{eqnarray} \label{maxgap}\rho_{2D} \tilde{V}_{\textrm{eff}}^{(1)}(k_F) = \frac{\sqrt{2}}{8\pi} \frac{m_F}{m_{FB}}\frac{a_{\textrm{eff}}^2k_F}{a_B}\frac{\mathcal{F}(k_F\xi)}{k_F\xi}. \end{eqnarray} For $a_B n_B^{1/3} \sim 0.01$ and $a_{\textrm{eff}}k_F\sim 0.1$, we consider the maximum value for $\rho_{2D} |\tilde{V}_{\textrm{eff}}^{(1)}(k_F)|$ with $\mathcal{F}(X)/X \sim -0.15$, restricting $X$ to the interval $[0.5,1.5]$ (see Fig.~\ref{fig2}), to determine \cite{calculo} \begin{eqnarray} \label{GAPmax} \triangle^{\textrm{Max}} \sim 0.01 \Lambda_{\varepsilon}. \end{eqnarray} \section{Higher-Order Correction to the Effective 2D-3D Interaction} The previous section shows how to optimize the gap value by manipulating the condensate density, which controls the magnitude and range of the induced potential.
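The elliptic-integral closed form $\mathcal{F}(X)$ of Eq.~(\ref{eqF}) and the extreme value of $\mathcal{F}(X)/X$ quoted above can be cross-checked with pure-Python quadrature. Here $V_0=\xi=1$ are illustrative, and the double angular integral of Eq.~(\ref{l1}) is first reduced to the relative angle $u=\theta-\varphi$:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def K(m):  # complete elliptic integral of the first kind, parameter m < 1
    return simpson(lambda t: 1.0 / math.sqrt(1.0 - m * math.sin(t)**2),
                   0.0, math.pi / 2)

def E(m):  # complete elliptic integral of the second kind
    return simpson(lambda t: math.sqrt(1.0 - m * math.sin(t)**2),
                   0.0, math.pi / 2)

def F(X):
    # F(X) = (E[-2X^2] - (1+X^2) K[-2X^2]) / X^2, Eq. (eqF)
    m = -2.0 * X**2
    return (E(m) - (1.0 + X**2) * K(m)) / X**2

def V1_direct(X):
    """l = 1 harmonic from the angular integral of Eq. (l1), reduced to the
    relative angle u = theta - phi (V0 = xi = 1)."""
    f = lambda u: -math.cos(u) / math.sqrt(2.0 + 2.0 * X**2 * (1.0 - math.cos(u)))
    return simpson(f, -math.pi, math.pi) / math.pi

X = 1.0
closed = 2.0 * math.sqrt(2.0) / math.pi * F(X)   # closed form of Eq. (l1)
direct = V1_direct(X)

# extreme of F(X)/X on the interval [0.5, 1.5] quoted in the text
grid = [0.5 + 0.01 * i for i in range(101)]
fx_min = min(F(x) / x for x in grid)
```

The minimum of $\mathcal{F}(X)/X$ on this interval sits near $X\approx 1$ and is of order $-0.15$, consistent with the value used in the estimate above.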
In addition, it became clear that choosing an appropriate combination of the fermionic and bosonic atomic masses (a lighter bosonic species) is important to maximize the gap. This issue will be further explored in Sec.~VII. By choosing the Fermi wavelength and the healing length such that $\xi k_F\sim1$, the Bogoliubov sound velocity ($c_s$) and the Fermi velocity ($v_F$) will also have close values. That requires the inclusion of higher-order diagrammatic terms in our ultracold-atoms model, which are usually disregarded in BCS studies. In the following, we calculate the four-point function to fourth order in the interaction constant $g_{FB}$ \small \begin{eqnarray} \nonumber \Gamma(\{\mathbf{k}_i,\tau_i\})&&=- \left\langle T_{\tau} \hat{a}_{\mathbf{k}_1}(\tau_1)\hat{a}_{\mathbf{k}_2}(\tau_2)\hat{a}^{\dagger}_{\mathbf{k}_3}(\tau_3)\hat{a}^{\dagger}_{\mathbf{k}_4}(\tau_4) e^{-\int_0^{\beta}\hspace{-0.1cm}d\tau \hat{H}_{int}(\tau)}\right\rangle. \\ \end{eqnarray} \normalsize We start with the interaction between the fermions in 2D and the ``phonons'' of the BEC in 3D as given by Eq.~(\ref{eq9}) and Eq.~(\ref{eqC}). Using the finite temperature formalism with the Matsubara Green's functions, the effective interaction between the fermions in 2D is given by \small \begin{eqnarray} \nonumber \Gamma_{\textrm{eff}}(\{\mathbf{k}_i,\nu_i\})=\lambda_{\textrm{eff}}\frac{\beta}{S}\delta_{\mathbf{k}_1+\mathbf{k}_2,\mathbf{k}_3+\mathbf{k}_4}\delta_{\nu_1+\nu_2,\nu_3+\nu_4} \hspace{-0.2cm} \prod_{i=1...4} \hspace{-0.2cm}\mathcal{G}_0(\mathbf{k}_i,\nu_i), \\ \end{eqnarray} \normalsize with the free-fermion propagator $\mathcal{G}_0$.
As seen before, the second-order expansion in the coupling $g_{FB}$ provides \small \begin{eqnarray} \nonumber && \Gamma^{(2)}(\{\mathbf{k}_i,\nu_i\})= \frac{\beta}{V} g_{FB}^2 n_B \delta_{\mathbf{k}_1+\mathbf{k}_2,\mathbf{k}_3+\mathbf{k}_4} \delta_{\nu_1+\nu_2,\nu_3+\nu_4} \\ \nonumber && \times \sum_{q_z} V^2_{\mathbf{q}} \mathcal{D}_0(\mathbf{q},\nu_1-\nu_4) \prod_{i=1...4} \mathcal{G}_0(\mathbf{k}_i,\nu_i) \\ \nonumber && = \hspace{-0.1cm} \frac{-2 g_{FB}^2 n_B m_B}{\sqrt{|\mathbf{k}_1-\mathbf{k}_4|^2+2\xi^{-2}}} \frac{\beta}{S} \delta_{\mathbf{k}_1+\mathbf{k}_2,\mathbf{k}_3+\mathbf{k}_4}\delta_{\nu_1+\nu_2,\nu_3+\nu_4} \hspace{-0.2cm} \prod_{i=1...4} \hspace{-0.2cm}\mathcal{G}_0(\mathbf{k}_i,\nu_i), \\ \end{eqnarray} \normalsize where $\mathbf{q}\equiv(\mathbf{k}_1-\mathbf{k}_4,q_z)$ and we applied the static limit to the Bogoliubov-mode propagator $\mathcal{D}_0$. Within a higher-order expansion, we obtain the self-energy bubble diagram (see the details of the calculation in App.~\ref{apB}) \small \begin{eqnarray} \nonumber &&\Gamma_{RPA}^{(4)}(\{\mathbf{k}_i,\nu_i\})= \hspace{-0.1cm} \frac{4 g_{FB}^4 n_B^2 m_B^2}{|\mathbf{k}_1-\mathbf{k}_4|^2+2\xi^{-2}}\hspace{-0.1cm} \sum_{\mathbf{p}} \hspace{-0.1cm} \frac{n_F(\epsilon_{\mathbf{p}})-n_F(\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1})}{\nu_4-\nu_1+\epsilon_{\mathbf{p}}-\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1}} \\ && \times \frac{\beta}{S^2} \delta_{\mathbf{k}_1+\mathbf{k}_2,\mathbf{k}_3+\mathbf{k}_4}\delta_{\nu_1+\nu_2,\nu_3+\nu_4} \prod_{i=1...4} \mathcal{G}_0(\mathbf{k}_i,\nu_i), \end{eqnarray} \normalsize where we identify the static polarization-bubble diagram in 2D \begin{eqnarray} &&P_0(\mathbf{k}_1,\mathbf{k}_4) = \frac{1}{S} \sum_{\mathbf{p}} \frac{n_F(\epsilon_{\mathbf{p}})-n_F(\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1})}{\epsilon_{\mathbf{p}}-\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1}}. 
\end{eqnarray} For $|\mathbf{k}_1-\mathbf{k}_4| < 2 k_F $, i.e., for external momenta on the Fermi surface, we can easily calculate the RPA series, which yields \begin{eqnarray} \nonumber \lambda_{\textrm{eff}}^{RPA} && = \lambda_0+\lambda_0^2 P_0+\lambda_0^3 P_0^2+ ... \\ && = \lambda_0 [ 1+ \lambda_0 P_0+\lambda_0^2 P_0^2+ ...], \end{eqnarray} \\ where we defined $\lambda_0 = - {V_0 }/{\sqrt{|\mathbf{k}_1-\mathbf{k}_4|^2+2\xi^{-2}}}$ and $P_0=-{m_F}/{2\pi}=-\rho_{2D}$. For $ \lambda_0 P_0<1$, we find \begin{eqnarray} \label{RPA}\nonumber \lambda_{\textrm{eff}}^{RPA} = \frac{\lambda_0}{1-\lambda_0P_0} = \frac{-V_0}{\sqrt{|\mathbf{k}_1-\mathbf{k}_4|^2+2\xi^{-2}} - {V_0 \rho_{2D}}}. \\ \end{eqnarray} Replacing Eq.~(\ref{veff0}) by the effective potential coming from the RPA correction in Eq.~(\ref{RPA}), we obtain an increase in the gap magnitude, as predicted by Eq.~(\ref{GAP}) (see also App.~\ref{apB} and Fig.~\ref{fig3}). \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{fig3.pdf} \caption{RPA correction to the $\ell=1$ component of the effective potential, according to Eq.~(\ref{l1}) and Eq.~(\ref{eqD}).}\label{fig3} \end{figure} Since we consider $\lambda_0 P_0$ smaller than one, we do not expect any phase instability driven by a divergence of $\lambda_{\textrm{eff}}^{RPA}$ caused by the vanishing of the denominator of Eq.~(\ref{RPA}). The critical condition given by Eq.~(\ref{GAP}) can be obtained alternatively through the singularity in the effective interaction, which appears when the total vertex function is calculated on the Fermi surface for small total momentum of the colliding particles \cite{abrikosov,pitaevskii,chubukov}. In this case, the $\ell$-th harmonic in the exponent of Eq.~(\ref{GAP}) will be associated with the irreducible part of the vertex. Here, we determine its $\ell=1$ projection by solving the Bethe-Salpeter integral equation for the ladder-series contribution.
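Both ingredients of the resummation can be checked numerically. The sketch below (plain Python; the grid spacing, the small temperature, and the momentum transfer $|\mathbf{k}_1-\mathbf{k}_4|=1.3\,k_F$ are arbitrary illustrative choices) evaluates the static polarization bubble on a momentum grid, recovering $P_0 \simeq -m_F/2\pi$ for momentum transfer below $2k_F$, and then compares the truncated geometric series with the closed form of Eq.~(\ref{RPA}):

```python
import math

# Units: hbar = m_F = k_F = 1, so E_F = 1/2 and rho_2D = m_F/(2 pi).
beta = 1.0 / 0.03            # small temperature smooths the Fermi steps
qx = 1.3                     # momentum transfer, |k1 - k4| < 2 k_F

def fermi(e):
    x = beta * (e - 0.5)
    return 0.0 if x > 50 else (1.0 if x < -50 else 1.0 / (math.exp(x) + 1.0))

dp, pmax = 0.02, 4.0
n = int(round(2 * pmax / dp))
P0 = 0.0
for i in range(n):
    px = -pmax + (i + 0.5) * dp
    for j in range(n):
        py = -pmax + (j + 0.5) * dp
        ea = 0.5 * (px * px + py * py)
        eb = 0.5 * ((px + qx) ** 2 + py * py)
        fa, fb = fermi(ea), fermi(eb)
        if abs(ea - eb) > 1e-12:
            P0 += (fa - fb) / (ea - eb)
        else:                # degenerate point: the limit is dn_F/d(epsilon)
            P0 += -beta * fa * (1.0 - fa)
P0 *= dp * dp / (2 * math.pi) ** 2
print(P0)                    # close to -1/(2 pi) ~ -0.159

# RPA resummation: truncated geometric series vs closed form, |lambda0 P0| < 1
lam0 = -0.6                  # an illustrative (hypothetical) value
series = sum(lam0 * (lam0 * P0) ** k for k in range(200))
closed = lam0 / (1.0 - lam0 * P0)
print(series, closed)
```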
To build the series, we start with the fourth-order vertex correction, which reads \small \begin{eqnarray} \nonumber \label{vertx1} && \Gamma_{V}^{(4)}(\{\mathbf{k}_i,\nu_i\}) = \frac{2 g_{FB}^4 n_B^2 m_B}{\sqrt{|\mathbf{k}_1-\mathbf{k}_4|^2+2\xi^{-2}}} \frac{1}{V} \sum_{\mathbf{p},q_z} \frac{q}{\sqrt{q^2+2\xi^{-2}}} \\ \nonumber &&\times\bigg[\frac{1}{(\omega_{\mathbf{q}}+\epsilon_{\mathbf{p}})(\omega_{\mathbf{q}}+\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1})} +\frac{4n_F(\epsilon_{\mathbf{p}})\omega_{\mathbf{q}}}{(\epsilon_{\mathbf{p}}-\epsilon_{\mathbf{p}+\mathbf{k}_4-\mathbf{k}_1})(\omega^2_{\mathbf{q}}-\epsilon^2_{\mathbf{p}})} \bigg] \\ && \times \frac{\beta}{S} \delta_{\mathbf{k}_1+\mathbf{k}_2,\mathbf{k}_3+\mathbf{k}_4} \delta_{\nu_1+\nu_2,\nu_3+\nu_4} \prod_{i=1...4} \mathcal{G}_0(\mathbf{k}_i,\nu_i), \end{eqnarray} \normalsize with $\omega_{\mathbf{q}}=\frac{q}{2m_B}\sqrt{q^2+2\xi^{-2}}$ and $\mathbf{q}\equiv(\mathbf{k}_3-\mathbf{p},q_z)$. The first term of Eq.~(\ref{vertx1}) is related to single-particle behavior, i.e., the scattering of real phonons, whereas the second term corresponds to virtual-phonon processes. Only the latter is relevant in our calculation, which deals with many-body effects, with the 2D momentum integration performed near the Fermi surface. To evaluate the irreducible-vertex part around the Fermi surface, perturbation theory turns out to be insufficient and we must sum the whole ladder series of diagrams, with terms proportional to the ratio $c_s/v_F$.
The resulting self-consistent vertex equation is presented and solved in App.~\ref{apB}, after performing a partial expansion of the effective interaction $\lambda_{\textrm{eff}}^{V}$ in terms of the angular components $\lambda(|\mathbf{k}_4-\mathbf{k}_1|) = \sum_{\ell} \lambda^{(\ell)}(k_F) \cos[\ell (\theta_4 -\theta_1)]$ \cite{chubukov,pitaevskii}, which breaks the integral equation for the total pairing vertex into a set of decoupled algebraic equations for its partial components. Finally, we obtain the vertex correction for the component $\ell=1$ \begin{eqnarray}\label{Vc} {\lambda_{\textrm{eff}}^V}^{\hspace{-0.05cm}(1)}(k_F) = \frac{V_{\textrm{eff}}^{(1)}(k_{F})}{1+ \frac{1}{4}V_{\textrm{eff}}^{(1)}(k_{F})\rho_{2D} \frac{\mathcal{J}[X]}{\mathcal{F}[X]X^2 \sqrt{1+2X^2}}}, \end{eqnarray} \\ where we defined $\mathcal{J}[X] = (1+2X^2) E\left[1 - \frac{1}{1+2X^2}\right] - (1+X^2) K\left[1 - \frac{1}{1+2X^2}\right]$. Remarkably, this ratio is identically equal to one, $\frac{\mathcal{J}[X]}{\mathcal{F}[X]X^2 \sqrt{1+2X^2}} = 1$. Including the correction given by Eq.~(\ref{Vc}) into the gap equation, according to Eq.~(\ref{GAP}), we get \begin{eqnarray} \label{GAPF} \nonumber \triangle_V^{\textrm{Max}} \nonumber &&= 2 \Lambda_{\varepsilon} \; \exp\bigg(\frac{8}{\rho_{2D}V_{\textrm{eff}}^{(1)}}+ 2\bigg) \\ && \sim 7.4 \; \triangle^{\textrm{Max}}. \end{eqnarray} This is the main result of this paper: the inclusion of higher-order diagrams, usually neglected due to their complexity, actually increases the $p$-wave gap by almost one order of magnitude and brings it within reach of experiments. \section{Experimental implementation} We now discuss the experimental feasibility of our proposal. We first examine which quantum gas mixtures are suitable to implement it, then present a scheme for a mixed-dimensional trap, and finally we summarize the experimental proposals to detect a $p$-wave superfluid.
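Since $\tilde{V}_{\textrm{eff}}^{(1)}=V_{\textrm{eff}}^{(1)}/8$, the uncorrected gap of Eq.~(\ref{GAP}) equals $2\Lambda_{\varepsilon}\exp(8/\rho_{2D}V_{\textrm{eff}}^{(1)})$, so the vertex correction shifts the exponent by exactly $+2$ and the enhancement factor is simply $e^2$:

```python
import math

# Delta_V^Max / Delta^Max = exp(8/(rho V) + 2) / exp(8/(rho V)) = e^2
enhancement = math.exp(2)
print(enhancement)   # 7.389..., the quoted factor of ~7.4
```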
\subsection{Mixture choice} The most important criterion for choosing the mixture is that the critical temperature for $p$-wave superfluidity, $T_c^p$, has to be experimentally reachable \cite{McKay2011cis}. As guidance, we note that BECs have been evaporatively cooled to $T=0.02 T_c^{\rm BEC} = 1\,$nK \cite{Olf2015tac} and Fermi gases with $T/T_F \leq 0.05$ have been reached \cite{Navon2010teo}. We maximize $T_c^p/T_F=\gamma \triangle_V^{\rm Max}/T_F$, with the BCS prefactor $\gamma \simeq 0.57$ \cite{schrieffer}, under constraints imposed by the validity of our theory and by the experiment. The static approximation requires that $\alpha=v_F/c_s \lesssim 1$ \cite{Migdal,Roy2014mta}. In addition, since the effective potential has been obtained within a perturbative treatment, it is necessary that $\gamma_{\rm eff}^2<(8\pi\gamma_{BEC})^{1/2}$. Hence, the validity of our theoretical treatment requires $\gamma_{BEC}=a_B n_B^{1/3} \gtrsim 10^{-3}$ \cite{bruun} and $\gamma_{\rm eff}=a_{\rm eff} n_B^{1/3} \lesssim (8\pi\gamma_{BEC})^{1/4} \approx 0.4$. To be in the superfluid regime, we finally require $T^p_c<T_{\rm KT}$, where $T_{\rm KT}$ is the Kosterlitz-Thouless transition temperature \cite{Fisher,Dalibard}. Since $T_c^p/T_F=8.42 \exp(-1/|\rho_{2D}\tilde{V}_{\textrm{eff}}^{(1)}|)$ increases monotonically with $Y=|\rho_{2D}\tilde{V}_{\textrm{eff}}^{(1)}|$, it is sufficient to maximize $Y$, which can be expressed as \begin{eqnarray}\label{rho2DVefftilde} Y=\frac{1}{\sqrt{4\pi}}\left(1 + \frac{m_F}{m_B}\right)\frac{\gamma_{\rm eff}^2}{\sqrt{\gamma_{BEC}}} \left|\mathcal{F}(X)\right|, \end{eqnarray} with $X=\alpha(m_F/m_B)/\sqrt{2}$. For large $Y$, a high mass ratio $m_F/m_B$ should be selected, provided that $\alpha$ is chosen close to $\alpha_{\rm max}=3.56\,m_B/m_F$, which maximizes $|\mathcal{F}(X)|$.
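The maximization can be sketched numerically. In the snippet below, the value $|\mathcal{F}(3.8)| \approx 0.18$ is an assumed input (read off a curve like Fig.~\ref{fig2}), not a computed one; the mixture parameters are those of the $^{171}$Yb-$^7$Li example of Table~\ref{TabLiYbParameters}:

```python
import math

def Y(mass_ratio, gamma_eff, gamma_bec, absF):
    # Eq. (rho2DVefftilde): Y = (1 + m_F/m_B) gamma_eff^2 |F(X)| / sqrt(4 pi gamma_BEC)
    return (1.0 + mass_ratio) * gamma_eff**2 * absF / (
        math.sqrt(4.0 * math.pi) * math.sqrt(gamma_bec))

def tc_over_tf(y):
    return 8.42 * math.exp(-1.0 / y)

# 171Yb in 7Li; |F(3.8)| ~ 0.18 is an assumption read from the figure
y = Y(171.0 / 7.0, 0.1, 0.004, 0.18)
print(y, tc_over_tf(y))   # Y ~ 0.2 and T_c^p/T_F of several percent
```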
Since $T_F=(2\pi \hbar^2/k_B)(m_F/m^2_B)n_B^{2/3}\alpha^2\gamma_{BEC} \propto \alpha^2$, we choose in the following a slightly higher value, $\alpha = 1.5\, \alpha_{\rm max}$, which barely decreases $|\mathcal{F}(X)|$, but more than doubles $T_F$. Furthermore, a low value of $\gamma_{BEC}$ is desired and we choose a value close to its minimum. Finally, a high value of $\gamma_{\textrm{eff}}$ has to be achieved. In order to increase $\gamma_{\rm eff}$, we opt for the rather high value of $n_B=6\times 10^{14}\,$atoms/cm$^3$ and the relatively low value of $a_{\rm eff}=204\,a_0$, where $a_0$ is the Bohr radius. The motivation for choosing a large density is that $T_F$ increases with $n_B$. On the other hand, low values of $a_{\rm eff}$ are more likely to be available in experiments than large values, and they can be reached without Feshbach or confinement-induced resonances. Far from the resonances, the scattering length is given approximately by $a_{\rm eff} \sim \sqrt{m_B/m_{FB}}\,a_{\rm FB}$ \cite{Massignan2006tds,nishida2,Lamporesi2010sim}. Further limitations arise from experimental constraints. In our scheme, a few thousand fermions will be sympathetically cooled by a much larger bath of evaporatively cooled bosons. To effectively implement evaporative and sympathetic cooling, a sufficient rate of elastic collisions and low rates of heating and loss are required. These conditions limit the range of suitable interaction properties, the gas densities, and the trap designs. An upper limit on $n_B$ is imposed by the requirement to keep the BEC in the 3D regime for the finite number of bosons available. A lower limit on $a_B$ is imposed by the requirement of a sufficient elastic collision rate between bosons $\Gamma_{\rm el,B} \propto n_B a_B^2$. Together, these requirements lead to an additional, experimental, lower limit on $\gamma_{BEC}$.
Attention must also be given to the rate of 3-body losses involving one fermion and two bosons ($\Gamma_{\rm FBB}\propto n_B^2 a_{\rm FB}^4$ \cite{Esry2008,Laurent2017}), even considering the important role played by the mixed dimensionality in inhibiting the interspecies molecular formation \cite{nishida}. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{fig1exp.pdf} \caption{Maximum $p$-wave superfluid critical temperature $T_c^p/T_F$ (upper panels, solid lines) and $T_c^p$ (lower panels, solid lines) for fermions immersed in bosonic $^7$Li, as well as $T_{\rm KT}/T_F$ (upper panels, dotted lines) and $T_{\rm KT}$ (lower panels, dotted lines). a) Dependence on the mass of the fermions $m_F$. Here $n_B=d \times 10^{14}$\,atoms/cm$^3$, $a_B=8\,a_0$, $a_{\rm eff}=200\,a_0$ (corresponding to $\gamma_{BEC}=0.002\,d^{1/3}$ and $\gamma_{\rm eff}=0.05\,d^{1/3}$), and $\alpha=1.5\,\alpha_{\rm max}$. Fermionic isotopes of elements that have been cooled to quantum degeneracy are marked by vertical lines. b) Dependence on $a_{\rm eff}$ for the fermion $^{171}$Yb, with all other parameters as before. The dashed lines in the upper panels mark the experimentally achieved $T/T_F$. The stars mark the example detailed in Table~\ref{TabLiYbParameters}. } \label{fig1exp} \end{figure*} \begin{table}[!ht] \centering \caption{Parameters of $^{171,173}$Yb-$^7$Li mixture. The elastic scattering rate $\Gamma_{\rm el, B}$ is given for thermal atoms at a temperature of $T=T_c^p$ colliding with a BEC at density $n_B$.
$\Gamma_{\rm 3-body, B}=-\dot{N}_B/N_B$ is the initial 3-body loss rate of the BEC \cite{Gross2009oou,Pollack2009uit}.} \label{TabLiYbParameters} \begin{tabular}{@{\extracolsep{\fill}}ll} \hline\hline \noalign{\smallskip} $n_B$ & $6\times 10^{14}$\,atoms/cm$^3$ \\ $a_B$ & $8\,a_0$ \\ $a_{\rm FB}$ & $200\,a_0$ \\ $a_{\rm eff}$ & $\sqrt{m_B/m_{FB}}\,a_{\rm FB}=204\,a_0$ \\ $\alpha$ & $v_F/c_s=1.5\,\alpha_{\rm max}=0.22$ \\ $\gamma_{BEC}$ & $a_B n_B^{1/3}=0.004$ \\ $\gamma_{\rm eff}$ & $a_{\rm eff} n_B^{1/3}=0.1$ \\ $\xi$ & $1/\sqrt{8\pi n_B a_B}=0.4\,\mu$m\\ $X$ & $\xi k_F = \xi \sqrt{4\pi n_F} = 3.8$ \\ $v_F$ & $\hbar k_F/m_F=0.4\,$cm/s\\ $c_s$ & $\sqrt{n_B g_B/m_B}=1.6\,$cm/s\\ $\Gamma_{\rm el, B}$ & $21\,$s$^{-1}$ \\ $\Gamma_{\rm 3-body, B}$ & $0.002\,$s$^{-1}$ \\ $\mu_{BEC}$ & $g_B n_B = k_B\times 221\,$nK$=h\times 4.6\,$kHz \\ $T_c^{\rm BEC}$ & 16.4\,$\mu$K \\ $n_F$ & $720\,$atoms/(10\,$\mu$m)$^2$ \\ $E_F$ & $k_B\times 130\,$nK$=h\times 2.7\,$kHz$=0.6\,\mu_{BEC}$ \\ $T_c^p$ & $0.07\,T_F=5\times 10^{-4}\,T_c^{\rm BEC}=9.5\,$nK \\ $T_{\rm KT}$ & $0.09\,T_F=12\,$nK \\ \hline \hline \end{tabular} \end{table} We now discuss possible choices of elements for the mixture. Since $m_F/m_B$ should be large, we limit our choice of bosons to the lightweight isotopes that have been Bose condensed, $^4$He$^*$, $^7$Li, and $^{23}$Na. Among those, $^7$Li has the great advantage of possessing a broad Feshbach resonance, with which $a_B$ can be tuned \cite{Chin2010fri,Pollack2009eto,Gross2009oou,Pollack2009uit}. Feshbach resonances in $^4$He$^*$ and $^{23}$Na are expected or known to be accompanied by strong losses \cite{Vassen2012cat,Goosen2010fri,Borbely2012mfd,Inouye1998oof,Stenger1999sei}. In the following, we use the triplet-scattering length for $^4$He$^*$ and $^{23}$Na \cite{Tiesinga1996asd,Moal2006ado}. 
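Several entries of Table~\ref{TabLiYbParameters} follow directly from the relations used in the text ($\xi=1/\sqrt{8\pi n_Ba_B}$, $c_s=\sqrt{n_Bg_B/m_B}$, $k_F=\sqrt{4\pi n_F}$, $\mu_{BEC}=g_Bn_B$) and can be reproduced in SI units; a sketch:

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
u = 1.66053907e-27       # kg (atomic mass unit)
a0 = 5.29177e-11         # m (Bohr radius)

mB, mF = 7 * u, 171 * u          # 7Li and 171Yb
nB = 6e20                        # m^-3 (= 6e14 atoms/cm^3)
aB = 8 * a0
nF = 720 / (10e-6) ** 2          # m^-2 (720 atoms per (10 um)^2)
gB = 4 * math.pi * hbar**2 * aB / mB

xi = 1 / math.sqrt(8 * math.pi * nB * aB)   # healing length
cs = math.sqrt(gB * nB / mB)                # Bogoliubov sound speed
kF = math.sqrt(4 * math.pi * nF)
vF = hbar * kF / mF
mu_BEC = gB * nB
EF = hbar**2 * kF**2 / (2 * mF)

print(xi * 1e6)                          # ~0.4 um
print(cs * 1e2, vF * 1e2)                # ~1.6 and ~0.4 cm/s
print(xi * kF)                           # ~3.8
print(mu_BEC / kB * 1e9, EF / kB * 1e9)  # ~221 and ~130 nK
```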
Considering BEC densities for which inelastic collisions limit the BEC lifetime to 10\,s \cite{Robert2001abe,PereiraDosSantos2001bec,StamperKurn1998oco}, fermion masses up to the mass of the heaviest naturally occurring fermionic isotope ($^{235}$U) and $a_{\rm eff}=600\,a_0$, we find that $T_c^p/T_F<10^{-2}$ for these bosons. Only larger values of $a_{\rm eff}$ might make them suitable for our purposes. We therefore limit our considerations to $^7$Li. This choice makes it possible to decrease $a_B$ and thereby increase $T_c^p/T_F$. To choose the fermionic element, we plot in Fig.\,\ref{fig1exp}a) $T_c^p/T_F$ and $T_c^p$ as a function of $m_F$. Fermionic isotopes that have been cooled to quantum degeneracy and for which the experimentally relevant regime $T_c^p/T_F>0.05$ can be reached are $^{171,173}$Yb, $^{161}$Dy, and $^{167}$Er \cite{DeMarco1999oof,Naylor2015cdf,DeSalvo2010dfg,Taie2010roa,Lu2012qdd,Aikawa2014rfd}. A drawback of having to choose such heavy elements could be that they are not efficiently sympathetically cooled by the lightweight Li, because during each elastic collision the energy transfer from the fermion to the boson is suppressed by $4 m_F m_B/(m_F+m_B)^2 \sim 0.15$ \cite{Mudrich2002scw}. A benefit of Dy and Er compared to Yb is that several interspecies Feshbach resonances will likely be available across the broad $^7$Li Feshbach resonance, making it possible to tune $a_{B}$ and $a_{\rm FB}$ somewhat independently and to access large values of $a_{\rm FB}$, which would also make tuning of $a_{\rm eff}$ by confinement-induced resonances possible. Nevertheless, since $^{173,174}$Yb-$^6$Li mixtures are already available in the lab \cite{Hara2011qdm,Hansen2013poq}, we concentrate our discussion now on $^{171,173}$Yb-$^7$Li. Adapting the existing machines to operate with $^7$Li instead of $^6$Li should be straightforward.
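The quoted suppression factor is the standard two-body kinematic energy-transfer fraction for a head-on elastic collision; for $^{171}$Yb and $^7$Li:

```python
mF, mB = 171, 7          # mass numbers of 171Yb and 7Li
transfer = 4 * mF * mB / (mF + mB) ** 2
print(transfer)          # ~0.15
```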
There are two fermionic Yb isotopes readily available, each providing a chance of possessing suitable interspecies interaction properties with $^7$Li. Figure\,\ref{fig1exp}b) shows the dependence of $T_c^p/T_F$ and $T_c^p$ on $n_B$ and $a_{\rm eff}$. Choosing $a_B=8\,a_0$ leads to the system parameters given in Table~\ref{TabLiYbParameters}. The dotted lines in Fig.\,\ref{fig1exp} are an estimate of the Kosterlitz-Thouless transition temperature, which is given by \cite{Fisher,Prokofev2001tdw} \begin{equation}\label{TKT2} T_{\rm KT} = 4 \pi \frac{\hbar^2}{2 m} n \ln^{-1}\Big[\ln\Big(\frac{1}{na^2}\Big)\Big], \end{equation} where $m$ and $n$ are the mass and density of the superfluid species, while $a$ characterizes the range of the interaction. In particular, for our case of fermionic-pair formation, the interaction between fermions that will form the Cooper pairs is proportional to $a_{FB}^2$, with $m=2m_F$ and $n\sim n_F/2$. Eq.~(\ref{TKT2}) is valid for small interaction parameters $a_B$ and $a_{FB}$: the first makes the range of the potential long enough that the superfluid fraction achieves its maximum value \cite{Nishida2010poa,bruun}. The critical temperature $T_c^p = 0.07\,T_F = 9.5\,$nK is in the regime of temperatures that have already been achieved experimentally, albeit in systems with larger elastic scattering length. However, $T_c^p/T_c^{\rm BEC} = 5 \times 10^{-4} $ is more than one order of magnitude lower than what has been reached so far. To enhance evaporative cooling, it might be useful to first evaporate at a scattering length above 100\,$a_0$ and to tune the scattering length to a lower value only when approaching the required low temperature, while compressing the gas at the same time. In doing so, one could even profit from a Li 3-body recombination minimum at $a_B=119\,a_0$ \cite{Pollack2009uit}. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{fig2exp.pdf} \caption{Mixed-dimensional optical dipole trap.
a) Beam configuration. Ytterbium is confined in a 2D plane of an optical lattice formed by two standing waves created by laser beam pairs L1a,b and L2a,b. Both standing waves have the same intensity profile near the trap centre and are attractive for Yb, but generate opposite potentials for Li. Lithium is confined vertically by an elliptical Gaussian beam (Lv), elongated in the out-of-plane direction. Both elements are horizontally confined by four repulsive dipole-trap walls (Lha,b,c,d), forming a rectangular box. The inset shows the region around the trap centre, with Lha,b in cross section and the lattice intensity profile. b) Dipole potential and scattering rate for Li and Yb, as a function of the wavelength \cite{Grimm2000odt,NISTDatabase}. The arrows above the graph indicate the wavelengths of the dipole-trap beams. Two choices are possible for Lh.} \label{fig2exp} \end{figure*} \subsection{Trap configuration} Next, we consider suitable trap configurations for the mixture. Whereas the bosons explore a 3D trap, the fermions have to be effectively confined in 2D by a harmonic trap of frequency $\nu_{\perp, F}$, which requires $h \nu_{\perp, F}-E_F \gg k_B T$. The sample should be as homogeneous as possible to avoid inhomogeneous broadening of $p$-wave superfluidity signals, especially because the number of fermions will be low. Efficient evaporative cooling of the bosons should be possible in order to reach low temperatures. We now take these requirements into account to design an optical dipole trap for the mixture, where we orient the 2D plane of the fermions in the horizontal direction, see Fig.~\ref{fig2exp}a. The bosonic lithium surrounds the fermions and can be confined by a Gauss-beam dipole trap using a wavelength of 1064\,nm. 
To reach a temperature $T$ by evaporation, the trap depth in the vertical direction $U_{\perp, B}$ should be $\mu_{\rm BEC} + \eta k_B T$, where $\mu_{\rm BEC}$ is the chemical potential of the BEC, and $\eta \sim 5$ {\cite{Ketterle1996eco}}. In order to provide a homogeneous vertical trap frequency across the cloud, the horizontal waist should be much larger than the cloud and the vertical Rayleigh length $z_R$ much longer than the horizontal sample size. The latter requirement and the additional requirement $h \nu_{\perp, B} \ll \mu_{\rm BEC}$ are only fulfilled if the vertical waist is larger than a minimum size. At the same time, the vertical waist should not be too large in order to limit the size of the $^7$Li sample in the vertical direction, thereby reducing the required number of $^7$Li atoms. Gravitational sag of the bosonic cloud is compensated by placing the focus of the Gauss beam slightly above the plane of the fermions. The Gaussian-beam trap creates a nearly constant potential on the fermions, since they explore only a small region in the centre of the trap. A constant potential offset is irrelevant and we can therefore ignore the influence of the Gauss-beam dipole trap on the fermions. \begin{figure}[!ht] \centering \includegraphics[width=0.48 \textwidth]{fig3exp.pdf} \caption{Optical dipole trap potential. A lattice confines Yb in 2D, whereas Li is levitated against gravity by a Gaussian beam. The potential experienced by thermal atoms $U_{\rm Li, thermal}$ consists of the dipole potential and twice the BEC mean-field potential \cite{PethickBook}. A phase fluctuation of a lattice beam by 0.1\,rad leads to the modulated Li potential shown around the ideal potential.} \label{fig3exp} \end{figure} To provide homogeneous confinement for bosons and fermions in the horizontal plane, repulsive dipole trap walls can be erected around the sample using vertically propagating Gauss beams \cite{Meyrath2005bec,Gaunt2013bec}. 
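With the numbers of Table~\ref{TabLiYbParameters}, the trap-depth prescription $U_{\perp, B}=\mu_{\rm BEC}+\eta k_B T$ stated above reproduces the vertical trap depth quoted in Table~\ref{TabDipoleTrapConfig}; a simple consistency check:

```python
mu_BEC_nK = 221          # chemical potential of the BEC (table value)
eta = 5                  # evaporation truncation parameter from the text
T_nK = 9.5               # target temperature, T_c^p
U_perp_B_nK = mu_BEC_nK + eta * T_nK
print(U_perp_B_nK)       # 268.5 nK, i.e. ~0.27 uK as in the trap table
```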
Four such beams can form a rectangular box with a size of $\sim 10\,\mu$m around the sample, if the waist of the beams is elongated along the sides of the rectangle ($w_{\rm Lh,\parallel}$ of a few 10\,$\mu$m) and is narrow orthogonal to that direction ($w_{\rm Lh,\perp}\sim 2\,\mu$m). This rectangular potential box also serves to select the most homogeneous central region of the traps that are used to confine bosons and fermions vertically. The sample density can easily be changed by moving the vertical walls towards each other, which is useful to do while $a_B$ is reduced to a low value. If in further studies a cylindrically symmetric system is required, for example to enable the creation of vortices \cite{Madison2000vfi}, a Laguerre-Gaussian beam can be used to confine the atoms horizontally \cite{Kaplan2002osb,Jaouadi2010bec,Gaunt2013bec}. The confinement of the fermions in quasi-2D is most conveniently done using optical lattices. In comparison to other trap configurations, such as a Hermite-Gaussian beam \cite{Meyrath2005ahf,Meyrath2005bec}, it is easier to create a more homogeneous confinement in the 2D plane by increasing the diameter of the lattice beams. In order to populate only a single plane of the lattice with fermions, one can use the techniques of Refs. \cite{Gemelke2009iso, Sherson2010sar, Yamamoto2016ayq, Ville2016lac}. The deep dipole potential used to confine the fermions in 2D must have only a negligible effect on the bosons. The parasitic potential on the bosons $U_{\rm lattice, B}$ must be much smaller than $\mu_{\rm BEC}$. This challenge has been met by species-specific dipole traps using a "tune-out" wavelength, for which the AC polarizability of one species is zero \cite{Massignan2006tds,LeBlanc2007sso,Catani2009eei,Lamporesi2010sim}.
Unfortunately, this technique does not work for $^7$Li because its "tune-out" wavelength is too close to an atomic transition, leading to detrimental off-resonant scattering for the required trap depths \cite{LeBlanc2007sso}. Another option is to use a "tune-in" wavelength, close to an Yb transition and far detuned from any Li transition \cite{LeBlanc2007sso}. In this situation, the potential on Yb $U_{\rm lattice, F}$ can exceed the potential on Li many times. This technique is suitable for our situation, but will limit the lifetime of the fermionic cloud to a few seconds by off-resonant scattering. Whether this limit is significant depends on the other factors limiting the lifetime of the system, especially the unknown 3-body loss rate $\Gamma_{\rm FBB}$. \begin{table}[!ht] \centering \caption{Optical dipole trap configuration. $\lambda_{{\rm L}i}$ is the wavelength of dipole-trap beam ${\rm L}i$, with $i=1,2$. $w$ are the $1/e$ beam radii. The vertical trap depth for $^7$Li, $U_{\perp, B}$, takes the effect of gravity into account. $\alpha_{{\rm L}i}$ is the angle between lattice beams ${\rm L}i{\rm a}$ and ${\rm L}i{\rm b}$. $\Delta z$ is the lattice spacing. $n_{\rm 2D, B}$ is the density of bosons integrated over the vertical direction. $\tau_{\rm B,F}=1/\sum_i \Gamma_{i, {\rm B,F}}$ are limits to the lifetimes of bosons and fermions, where $\Gamma_{i, {\rm B,F}}$ is the off-resonant scattering rate of photons calculated at peak intensity of dipole trap beam L$i$, with $i$ running over all beams \cite{Grimm2000odt,McKay2011cis,Gordon1980moa}.
} \label{TabDipoleTrapConfig} \begin{tabularx}{0.45 \textwidth}{@{\extracolsep{\fill}}lXll} \hline\hline \noalign{\smallskip} $\lambda_{\rm Lv}$ & 1064\,nm & $w_{\rm Lv}$ & 6\,$\mu$m \\ $z_R$ & 100\,$\mu$m & & \\ $U_{\perp, B}$ & $k_B\times 0.27\,\mu$K & $\nu_{\perp, B}$ & 1.1\,kHz \\ $\lambda_{\rm Lh}$ & 300\,nm or 554\,nm \\ $w_{\rm Lh,\perp}$ & 2\,$\mu$m & $w_{\rm Lh,\parallel}$ & 200\,$\mu$m \\ $\lambda_{\rm L1}$ & 1064\,nm & $\alpha_{\rm L1}$ & 60\degree \\ $\lambda_{\rm L2}$ & 470\,nm & $\alpha_{\rm L2}$ & 25.5\degree \\ $\Delta z$ & 1064\,nm \\ $U_{\perp, F}$ & $h \times 16\,$kHz & $\nu_{\perp, F}$ & 4.1\,kHz=1.5\,$E_F$ \\ $\tau_{\rm B}$ & 296\,s & $\tau_{\rm F}$ & 79\,s \\ $n_{\rm 2D, B}$ & \multicolumn{2}{l}{$3\times 10^5\,$atoms/(10\,$\mu$m)$^2$} &\\ \hline \hline \end{tabularx} \end{table} If the lifetime limit imposed by a "tune-in" lattice is too severe, a bichromatic dipole trap can be used, consisting of two optical lattices that both confine Yb, but compensate each other for Li. This technique overcomes the possibly excessive off-resonant scattering and replaces it by the technical challenge of creating two lattices with very well controlled intensity profiles. We will explore this scheme in the following. We chose optical lattices with wavelengths of 470\,nm and 1064\,nm, which are both attractive for Yb. In contrast, for Li only the 1064-nm lattice is attractive, the other is repulsive, see Fig.\,\ref{fig2exp}b. In order for the lattice potentials to add up for Yb and cancel sufficiently for Li, the intensity profile of both lattices need to be nearly identical in the region of the atomic clouds. The lattice-well spacing must be the same, and the intensity maxima need to overlap. The lattice spacing can be adjusted by the angle between the two lattice beams of each wavelength. Using an angle of 60$^\circ$ between the two beams forming the 1064-nm lattice leads to a lattice spacing of 1064\,nm. 
The same spacing is reached for the 470-nm lattice if the two corresponding beams intersect at an angle of 25.5$^\circ$, see Fig.\,\ref{fig2exp}a. The position of the intensity maxima along the lattice direction (the vertical direction) depends on the phase difference between the two beams forming a lattice. This phase difference has to be stabilized interferometrically for each lattice to a common reference, combining methods from Refs.\,\cite{Foelling2007doo,Wirth2010efo}. In order for the two lattice potentials to cancel for the bosons, the intensity of the 470-nm lattice beams has to be 1.8 times the intensity of the 1064-nm lattice beams. For Yb the two lattice potentials add up, giving a total potential that is 1.2 times larger than the potential of the 470-nm lattice alone. This total potential needs to confine Yb in quasi-2D and also be deep enough to suppress tunneling of Yb to neighboring lattice planes, see Fig.\,\ref{fig3exp}. The cancellation of the lattice potential for the bosons will not be perfect because of intensity and phase fluctuations leading to deviations from the ideal configuration. Phase fluctuations of 90\,mrad or intensity imbalances of 9\% lead to a residual potential on the order of 10\% of $\mu_{\rm BEC}$. This parasitic potential would be tolerable if the timescale of fluctuations is long enough to avoid heating of the sample. In principle, we could have chosen a wavelength for L2 that is further away from the Yb transition, e.g. 532\,nm, which would reduce off-resonant scattering and simplify phase locking of the laser sources used for L1 and L2. All the same, we chose 470\,nm because at that wavelength the parasitic potential of L2 on Li is smaller, reducing the amount of compensation needed from L1. As a result, the overall parasitic potential created for a given intensity or phase mismatch between L1 and L2 is reduced.
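The two quoted intersection angles indeed give matching lattice spacings, via the standard two-beam interference relation $\Delta z = \lambda/[2\sin(\alpha/2)]$:

```python
import math

def lattice_spacing(wavelength_nm, angle_deg):
    # spacing of the interference pattern of two beams crossing at angle_deg
    return wavelength_nm / (2.0 * math.sin(math.radians(angle_deg) / 2.0))

print(lattice_spacing(1064, 60.0))   # 1064 nm
print(lattice_spacing(470, 25.5))    # ~1065 nm, matching within rounding
```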
Example parameters for the bichromatic dipole trap and important results of using this trap for the Li-Yb mixture are given in Table~\ref{TabDipoleTrapConfig}. The $^7$Li atom number available in current experiments ($3\times 10^5$ atoms \cite{Pollack2009eto}) is sufficient for a square sample of 10\,$\mu$m size. A sample of this size contains about 700 fermions. Whether this proposal is realizable depends to a large extent on the unknown elastic and inelastic scattering properties of Li-Yb. Similar schemes can be applied to other mixtures, such as Li-Dy or Li-Er, for which some interspecies interaction tuning should be possible. \subsection{Detection of $p$-wave superfluidity} Several signatures have been predicted for the experimental detection of the $p_{x} + ip_{y}$ superfluid phase. In particular, the density of states (rf absorption spectrum) of a rotating weak-pairing $p_{x} + ip_{y}$ phase is expected to exhibit a set of gapless modes \cite{Grosfeld2007pso}, which are a direct consequence of the zero-energy Majorana modes on the vortices. Rf spectroscopy can also be applied to detect Majorana edge states of the topological superfluid in a 2D square lattice \cite{Midtgaard2016tso}. On the other hand, the broken time-reversal symmetry of the chiral $p_{x} + ip_{y}$ fermionic superfluid can be detected in a time-of-flight image of the atomic density distribution: an external effective electric field (i.e., dipole interaction between the neutral atoms in the superfluid and the laser field) induces a nonzero antisymmetric transverse mass current in the velocity distribution of the atoms \cite{Zhang2008pxp}. \section{Conclusion} In the present work, we explored the feasibility of a $p$-wave superfluid by using a Fermi-Bose mixture in a mixed-dimension configuration, where the $p$-wave interaction between spin-polarized degenerate fermions in 2D is induced indirectly, through the scattering of the Bogoliubov modes of condensed bosons moving in 3D.
We have shown that, even in the weak-coupling regime, the appropriate renormalization of the phonon propagator (BEC modes) with particle-hole fluctuations and the vertex correction significantly increase the gap and the predicted critical temperature for the fermion-pair formation. It is important to remark that we adopt a minimum value for $\gamma_{BEC}\sim a_B n_B^{1/3}$, which yields $\upsilon_F/c_s \leq 1$, thus allowing us to disregard retardation effects. According to Wu and Bruun \cite{bruun}, who performed calculations including retardation but no vertex correction to determine $T_{MF}$, in the limit $\upsilon_F/c_s \leq 1$ one has $T_{MF} \sim T_{BCS}$ (see Fig.~2 of that reference), which confirms the validity of our approximation. We neglected the decay of the BEC phonons, such as Beliaev damping and the finite lifetime due to the scattering of particle-hole pairs of the degenerate fermionic sample. The Beliaev damping is governed by the boson-boson scattering potential, resulting in a phonon lifetime proportional to $g_B$ \cite{Davidson, Shlyapnikov2}. In the small-momentum regime, however, the Beliaev decay mechanism is strongly suppressed \cite{Davidson}. On the other hand, if we consider the phonon dressed by particle-hole fluctuations of the Fermi sea, it will have a lifetime proportional to $g_{FB}^2$. In the static limit considered in this paper, however, the lifetime is infinite (see App.~\ref{apB} for details). Hence, we conclude that there is no damping mechanism that could hamper the stability of the BEC in the chosen regime of parameters. Exploiting the difference in polarizability and mass of the atomic species, and by optimizing the density $n_B$ and the scattering length $a_B$ of the bosons, our work sets the boundary for the experimental realization of a $p$-wave superfluid within the reachable limit of $T^p_c = 0.05 T_F$.
It identifies a realistic route toward, and provides the details for, the accomplishment and manipulation of this long-sought chiral-superfluid phase in the realm of ultracold atoms in optical lattices. \section*{Acknowledgments} We thank Rodrigo G. Pereira, Servaas Kokkelmans, Fr\'{e}d\'{e}ric Chevy, and Subhadeep Gupta for discussions and insightful comments. This work was supported by CNPq (Brazil) through the Brazilian government project Science Without Borders. The work of C.M.S. is part of the DITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). {F.S. gratefully acknowledges funding from the European Research Council (ERC) under Project No. 615117 QuantStro and by NWO through Vici grant No. 680-47-619.}\\ \bibliographystyle{apsrev4-1}
\subsection{Main Results} Here we shall state the main results proven in this paper. Recall that a symmetric pair $(G,H,\theta)$ is called stable if every closed double coset of $H$ in $G$ is stabilized by the anti-involution $g \mapsto \theta(g)^{-1}$. In section 4 we construct, for each semi-simple $r \in G$ of the form $g \theta(g)^{-1}$, a cohomological obstruction $[r] \in H^1(\theta,G)$. We then prove: \begin{theorem} $[r]$ is trivial if and only if $HgH$ is stabilized by the anti-involution $\sigma(g) := \theta(g)^{-1}$. \end{theorem} Thus, we reduce the verification of the stability of a pair to a cohomology computation. In section 5 we prove: \begin{theorem} If $(G,H,\theta)$ is stable then $H$ has a unique open orbit in every parabolic quotient of $G$. \end{theorem} The converse is not true in general, as we show in section 5 for the pair $(G,H,Ad_i)$ where $G$ is the group of elements of norm $1$ in a non-split quaternion algebra over $\QQ_p$ and $i$ is an imaginary quaternion. We call a symmetric pair $(G,H,\theta)$ p-stable if $H$ has a unique open orbit in every parabolic quotient of $G$. In section 7 we prove: \begin{theorem} If the pair $(G,H)$ is a Gelfand pair then the symmetric pair $(G,H,\theta)$ is p-stable. \end{theorem} We also use the results of section 6 regarding stability and the tools developed in \cite{dima} to verify the Gelfand property for several pairs. In the following results $F$ is either $\RR$ or a finite extension of $\QQ_p$. Let $S(GL_k(F) \times GL_{n-k}(F))$ be the Levi subgroup of $SL_n(F)$ consisting of block matrices with two blocks of sizes $k$ and $n-k$. \begin{theorem} The pair $(SL_n(F), S(GL_k(F) \times GL_{n-k}(F)))$ is a Gelfand pair if and only if $k \ne \frac{n}{2}.$ \end{theorem} \begin{theorem} Let $E/F$ be a quadratic extension. The pair $(SL_n(E), SL_n(F))$ is a Gelfand pair if and only if $n$ is odd.
\end{theorem} The pairs $(GL_n(E),GL_n(F))$ and $(GL_{n+ k}(F), GL_n(F) \times GL_k(F))$ are proven to be Gelfand pairs in \cite{Fli,JR,dima}. For a quadratic extension $E/F$, embed $GL_{n}(E)$ (resp. $SL_n(E)$) in $GL_{2n}(F)$ (resp. $SL_{2n}(F)$) via restriction of scalars. \begin{theorem} The pair $(GL_{2n}(F),GL_n(E))$ is a Gelfand pair, while $(SL_{2n}(F),SL_n(E))$ is a Gelfand pair if and only if $F$ is Archimedean. \end{theorem} Let $B = B^+ \oplus B^-$ be a non-degenerate quadratic form represented as a direct sum of two quadratic forms $B^+,B^-$. The classification of the pairs $(O(B), O(B^+) \times O(B^-))$ is complicated and given in section 7. This classification gives several new small-dimensional examples of Gelfand pairs, such as $(O(B^+ \oplus B^-),O(B^+) \times O(B^-))$ for quadratic forms $B^+$ and $B^-$ of rank 2 over a non-Archimedean field of odd residual characteristic such that $\det(B^+)\ne \pm \det(B^-)$. If $F$ is non-Archimedean and $rank(B) \ge 6$ we show that the pair is a Gelfand pair if and only if one of the forms $B^+,B^-$ is of rank at most 1. In the Archimedean case, we have: \begin{theorem} The pair $(O_{p_{1}+p_{2},q_{1}+q_{2}}(\RR), O_{p_1,q_1}(\RR) \times O_{p_2,q_2}(\RR))$ is a Gelfand pair if and only if one of the numbers $p_1,p_2,q_1,q_2$ vanishes. \end{theorem} Similarly, for a Hermitian form $B = B^+ \oplus B^-$ we have: \begin{theorem} The pair $(U_{p_{1}+p_{2},q_{1}+q_{2}}(\RR), U_{p_1,q_1}(\RR) \times U_{p_2,q_2}(\RR))$ is a Gelfand pair if and only if one of the numbers $p_1,p_2,q_1,q_2$ vanishes. If $F$ is non-Archimedean the pair $(U(B),U(B^+) \times U(B^-))$ is a Gelfand pair if and only if one of the non-degenerate quadratic forms $B^+,B^-$ is of rank 1. \end{theorem} The pairs $(O(B),O(B^+) \times O(B^-))$ and $(U(B),U(B^+) \times U(B^-))$ with $B^+$ or $B^-$ of rank 1 over all local fields are proven to be strong Gelfand pairs in \cite{dima3,SZ}.
A pair $(G,H,\theta)$ is a strong Gelfand pair if $(G\times H, \Delta_H)$ is a Gelfand pair. All these results depend on the verification of stability and p-stability, which is summarized in a table in subsection 7.12. \subsection {Structure of the Paper} In section 2 we introduce notation, partially standard and partially specific to this article, to be used in the theory of symmetric pairs. In section 3 we recall the theory of non-abelian group cohomology, which is the main tool we use in our treatment of symmetric pairs. In section 4 we show how, using only calculations of cohomology groups, one can determine the stability of a symmetric pair. In section 5 we consider other types of stability, in particular p-stability, and write down cohomological invariants for them. We also state and prove several relations between these stability properties. In section 6 we illustrate the theory by verifying stability for many interesting pairs, some of them known and some new. Finally, in section 7 we link the geometric properties we considered to the representation theory of the pair, and conclude by proving the Gelfand property for several pairs. \subsection {Acknowledgments} First, I want to deeply thank my advisor Dmitry Gourevitch for teaching me a great deal of interesting mathematics, for exposing me to the importance of the stability of symmetric pairs, for his help during the entire research process, and for being my teacher in the last few years. I would also like to thank Eugenii Shustin and Joseph Bernstein for teaching me a lot of interesting and important mathematics and for many valuable discussions. I also want to thank the other people whose valuable comments and references pushed the research forward at many critical points. Among them are Aloysius G. Helminck, Avraham Aizenbud, Gal Dor, Michael Borovoi and many others.
I would also like to thank Lev Radzivilovsky for his help in organizing the paper, for fixing some mistakes he found in it, and for his major part in my mathematical education. Finally, I want to thank my parents, family and friends for their support and inspiration. \section {Definitions and Notation} We shall use the following standard notation in this paper: \begin {itemize} \item For a group $G$ and a subset $X \subseteq G$ we let $Z\low{G}(X) = \{g \in G |: gx = xg \quad \forall x \in X\}$ be the centralizer of $X$. \item If $G$ acts on $X$ and $x \in X$, we let $Stb\low{G}(x) = \{g \in G |: g(x) = x\}.$ \item $F$ is a local field of characteristic 0. \item $\bar{F}$ is the algebraic closure of $F$. \item $\textbf{H}, \textbf{G}$, etc., will denote reductive algebraic groups defined over $F$ (which will always be clear from the context). \item $\Gamma_F$ will be the absolute Galois group of $F$. \item For a torus $\textbf{A}$ we let $X^*(\textbf{A}) = Hom(\textbf{A}, \textbf{G}_m)$ be the group of characters and $X_*(\textbf{A}) = Hom(\textbf{G}_m, \textbf{A})$ the group of one-parameter subgroups. \item For a character $\psi: \textbf{A} \to \textbf{G}_m$ and a one-parameter subgroup $\alpha : \textbf{G}_m \to \textbf{A}$, let $<\psi, \alpha>$ be the unique integer such that $t^{<\psi, \alpha>} = \psi(\alpha(t))$. If $\textbf{A}$ is defined over $F$, this pairing turns $X^*(\textbf{A})$ and $X_*(\textbf{A})$ into dual $\Gamma_F$-modules. \item For a linear algebraic group $\textbf{P}$ and a torus $\textbf{A} \subseteq \textbf{P}$, let $\Phi(\textbf{A}, \textbf{P}) \subseteq X^*(\textbf{A})$ be the set of roots of $\textbf{A}$ in $\textbf{P}$. \item If $\Phi \subseteq V$ is a symmetric finite set of vectors in a real linear space then by a \textbf{collection of positive roots} in $\Phi$ we mean an intersection of $\Phi$ with a half-space whose boundary contains no element of $\Phi$.
This is compatible with the notion of positive roots from the theory of root systems. \end{itemize} Let $G = \textbf{G}(F)$ be a reductive algebraic group over a local field $F$, and let $\theta$ be an involution of $G$ defined over $F$. We consider $G$ and all its subspaces and quotients with their usual topology arising from the topology of $F$, sometimes called \textbf{the $t$-topology}. We call $(G, H, \theta)$ a \textbf{symmetric pair}, where $H = G^\theta$ is the group of fixed points of $\theta$. We will always assume that $\textbf{G}/ \textbf{H}$ is connected in the Zariski topology. We shall use the following convention: for an algebraic group $\textbf{G}$ defined over $F$, the group $G$ will be the group of $F$-rational points of $\textbf{G}$, where the local field $F$ will always be assumed fixed. For an extension $K/F$, we will denote the $K$-rational points of $\textbf{G}$ by the standard notation $\textbf{G}(K)$. The following notions will be frequently used: \begin {itemize} \item $\sigma(g) = \theta(g^{-1})$ \item $H = G^\theta = \{g \in G |: \theta(g) = g\}$ will be referred to as the \textbf {orthogonal part} of $G$. \item $S = G^\sigma$ will be referred to as the \textbf{symmetric part} of $G$. \item $s (g) = g \sigma (g)$ will be called \textbf{symmetrization}. \item $\bar{s} (g) =\sigma (g) g$ will be called \textbf{anti-symmetrization}. \end{itemize} These notions extend the analogy with the example $(GL_n(\RR), O_n(\RR), (g \mapsto (g^{t})^{-1}))$, which will be discussed soon. Let $\frk{g}$ be the Lie algebra of $G$. The differential $d\theta$ acts on $\frk{g}$ as an involution. Since the minimal polynomial of the linear operator $d\theta$ divides $p(x) = x^2-1=(x-1)(x+1)$, the linear space $\frk{g}$ decomposes naturally into a direct sum $\frk{g} = \frk{h} \oplus \frk{s}$, where $\frk{h}$ and $\frk{s}$ are respectively the $+1$ and $-1$ eigenspaces of $d\theta$. The spaces $\frk{h}$ and $\frk{s}$ can also be seen as the tangent spaces of $H$ and $S$.
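In the prototypical example $(\GL_n(\RR), O_n(\RR), g \mapsto (g^{t})^{-1})$ mentioned above, this decomposition is completely explicit: the differential is $d\theta(X) = -X^t$, and every matrix splits into its anti-symmetric and symmetric parts, \[X = \underbrace{\tfrac{1}{2}\left(X - X^t\right)}_{\in\, \frk{h} = \frk{so}_n} + \underbrace{\tfrac{1}{2}\left(X + X^t\right)}_{\in\, \frk{s}}.\]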
Since $H$ is a Lie subgroup, $\frk{h}$ is a Lie subalgebra of $\frk{g}$. We are interested in deciding which symmetric pairs are \textbf{stable}, in the following sense. \begin {definition} A symmetric pair $(G,H,\theta)$ is called \textbf{stable} if every closed double coset of $H$ in $G$ is stable under $\sigma$. In other words, for each $g \in G$ such that $HgH$ is closed in $G$, \[\sigma (HgH) = HgH.\] \end {definition} \begin {remark} This notion is referred to as a \textbf{good} pair in \cite{dima}. We chose the word stable to indicate that the notion is related to the stability of the double cosets. \end {remark} Similarly, an element $g \in G$ will be called stable if $HgH$ is stable under $\sigma$. So, the pair $(G,H,\theta)$ is stable if and only if every $g \in G$ with closed $H\times H$-orbit is stable. Here are several examples of symmetric pairs: \begin {example} Let $G=\GL_n(\RR)$, $\theta\left(g\right) = (g^{-1})^t$. In this case $\sigma\left(g\right) = g^t$, $ H = O_n(\RR)$, $S$ is the set of invertible symmetric matrices, $\frk{h}= \frk{so}_n$, and $\frk{s}$ is the set of symmetric matrices in $\frk{gl}_n$. The symmetrization map is given by $s(g) = gg^t$ and the anti-symmetrization by $\bar{s}(g) = g^t g$. \end {example} \begin {example} \label {quadratic pairs} Let $\textbf{G}$ be a reductive group and let $K/F$ be a quadratic extension. Let $c$ denote the unique non-trivial element of $Gal_{K/F}$. Then $(\textbf{G}(K),G,c)$, when considered as a pair over $F$ (we consider $\textbf{G}(K)$ as an $F$-group!), is a symmetric pair. \end {example} Recall that the adjoint group of $G$ is defined as $Ad_G := G/Z(G)$ and for each $x \in G$ the corresponding $Ad_x \in Ad_G$ acts by conjugation on $G$, as well as on the Lie algebra $\frk{g}$. \begin {example} \label {inner pairs} Let $G$ be a reductive group defined over $F$ and let $h \in G$ be an element of order 2. Then $(G, Z_G(h), Ad_h)$ is a symmetric pair. The example also works if $Ad_h$ is of order 2 in $Ad_G$.
\end {example} \begin {example} \label {diagonal pairs} Let $G$ be a reductive group, and let $\Delta(G)= \{(g,g) |: g \in G\} \subseteq G \times G$ be the diagonal subgroup of $G \times G$. Then $(G \times G, \Delta(G), (x,y) \mapsto (y,x))$ is a symmetric pair. \end {example} \begin{example} \label{riemanian pairs} Let $G$ be a reductive Lie group and $\theta$ a Cartan involution of $G$. Then $H$ is a maximal compact subgroup, and $(G,H,\theta)$ is called a \textbf{Riemannian symmetric pair}. \end{example} \section {Preliminaries on Group Cohomology} The language of non-abelian first group cohomology is extremely useful for verifying whether a given pair is stable. In this section we briefly recall some basic facts of the theory which are essential for our approach. We skip the proofs and the details; a complete treatment of the subject can be found in \cite[\S5]{ser}. \subsection {Definition of Group Cohomology} Let $G$ be an abstract group, and let $L$ be a group acting on $G$ by automorphisms. That is, for every $l \in L$ we have an automorphism of $G$ denoted by $g \mapsto l(g)$, such that: \begin {itemize} \item If $e_L \in L$ is the identity of $L$, then for every $g \in G$, $e_L(g) = g$. \item For all $l,t \in L$ and all $g \in G$, $lt(g) = l(t(g))$. \end {itemize} A group $G$ together with an $L$-action is called an \textbf{$L$-group}. For every $L$-group $G$ we may consider $G^L$, the group of $L$-fixed points in $G$. The first cohomology of $L$ with coefficients in $G$ measures the failure of right-exactness of the fixed-points functor. It is a pointed set denoted $H^1(L,G)$. \begin {definition} Let $G$ be an $L$-group. A \textbf{1-cocycle} of $L$ with coefficients in $G$ is a function $\theta \mapsto a_\theta$ from $L$ to $G$, such that \[a_{\theta\tau} = a_{\theta}\cdot\theta(a_\tau).\] The set of all 1-cocycles of $L$ with coefficients in $G$ is denoted $Z^1(L, G)$.
\end {definition} The formula $a\low{\theta\tau} = a\low{\theta}\cdot\theta(a\low{\tau})$ is called the \textbf{cocycle condition}. The group $G$ acts on $Z^1(L,G)$ from the right via \[(\delta_g(a))_l := g^{-1} a_l l(g).\] We call this action the \textbf{coboundary action}. Two cocycles in the same $G$-orbit are called \textbf{cohomologous}. In $Z^1(L,G)$ there is a distinguished point, namely the cocycle for which $a_\theta = e_G$ for all $\theta \in L$. We refer to this cocycle as the \textbf{trivial cocycle}. Every element in the $G$-orbit of the distinguished cocycle is called a \textbf{coboundary}. The set of coboundaries is denoted by $B^1(L,G)$. \begin {definition} Let $G$ be an $L$-group. The first cohomology of $L$ with coefficients in $G$ is the set of orbits $H^1(L,G) = Z^1(L,G)/\delta_G$. \end {definition} An $L$-equivariant homomorphism of $L$-groups $f: K \to G$ induces a map of pointed sets $f_*: H^1(L,K) \to H^1(L,G)$ given by \[f_*(a)_\theta :=f(a_\theta).\] If $K \subseteq G$ we let $i_K^G$ denote the inclusion. \begin{definition} Let $K \subseteq G$ be an $L$-subgroup of an $L$ group $G$. We will denote \begin{eqnarray*} &KH^1(L,K,G) := Ker((i_K^G)_*), \\ &IH^1(L,K,G) := Im((i_K^G)_*). \end{eqnarray*} \end{definition} Recall that the kernel of a map $f: (X,x_0) \to (Y,y_0)$ of pointed sets is by definition $f^{-1}(y_0)$. First cohomologies are not groups, but they still have the structure of pointed sets. The base-point of $H^1(L,G)$ is the equivalence class of the trivial cocycle $a\low{\theta} \equiv e\low{G}$. \subsection {Some properties of $H^1(L,G)$} The following facts will be useful in our treatment of the stability question. \begin {theorem} \label {long exact sequence} Let \[\SES{H}{G}{K}\] be an exact sequence of $L$-groups. There is a coboundary map \[\delta: K^L \to H^1(L,H) \\ \] such that the sequence \[\LES{H}{G}{K}{L}\] is an exact sequence of pointed sets. 
\end {theorem} For $k \in K^L$, $\delta(k)$ is computed as follows: \begin{itemize} \item Find $g \in G$ such that $g$ maps to $k$ under the map $G \to K$. \item Compute the coboundary $b_\sigma = \delta_{g}(e)_\sigma = g^{-1}\sigma(g) \in B^1(L,G)$. \item $\delta(k)$ is defined as a pre-image $a_\sigma \in Z^1(L,H)$ of $b_\sigma$. \end{itemize} That is, we chase $k$ along the following path: \[\begin{CD} \delta(k) \in Z^1(L,H) @>>> \delta_g(e)_\sigma \in B^1(L,G) \\ @. @AAA \\ @. g \in G @>>> k \in K^L \\ \\ \end{CD}\] It is not hard to see that $\delta$ is well defined and that the long sequence above is indeed exact. Exactness for sequences of pointed sets is weaker than the corresponding property for sequences of groups. The statement of Theorem \ref{long exact sequence} can be strengthened in two ways. Firstly, one can replace the group $K$ in the sequence with an $L$-set on which $G$ acts transitively. Secondly, for an injection $f: K \to G$ of $L$-groups one can describe all the fibers of the map $f_*: H^1(L,K) \to H^1(L,G)$, and not only the pre-image of the trivial cocycle. The description of the fibers is given by the twisting operation, to be described in the next subsection. Let $G$ be an $L$-group acting transitively on an $L$-set $X$ having a base point $x_0 \in X$ which is stabilized by $L$. Let $K$ be the stabilizer of $x_0$ in $G$. Then, as in Theorem \ref{long exact sequence}, we can define a coboundary operator $\delta: X^L \rightarrow H^1(L,K)$. The definition is similar. Given $x \in X^L$, as $G$ acts transitively on $X$, we can find $g \in G$ such that $x = g(x_0)$. By the $L$-invariance of both $x$ and $x_0$, we get $x = g(x_0) = l(g)(x_0)$ for every $l \in L$. Therefore, $\delta_g(e)_l = g^{-1}l(g) \in K$. We define $\delta(x)$ to be the cocycle $l \mapsto \delta_g(e)_l = g^{-1}l(g)$. \begin {theorem} \label {descent lemma} Let $G$ be an $L$-group acting transitively on an $L$-set $X$ possessing an $L$-fixed point $x_0$.
Let $K = Stb_G(x_0)$. Then $\delta$ defines a bijection \[X^L/G^L \cong \KerH{L}{K}{G}.\] \end {theorem} For the proof see e.g. \cite{ber}. \subsection {Twisting} An important feature of non-abelian group cohomology is the twisting operation. Let $G$ be an $L$-group acting on an $L$-set $X$. Let $a$ be a 1-cocycle of $L$ with coefficients in $G$. Then, using $a$, it is possible to twist the action of $L$ on $X$, as well as on $G$, to obtain a new pair $(G,X)$ of an $L$-group acting on an $L$-set. This is done in the following way. Define a new action of $L$ on $X$, denoted $x \mapsto l*x$, by \[l*(x) := a\low{l}(l(x)).\] Similarly, define a new action of $L$ on $G$ by \[l*(g) := a\low{l}\cdot l(g) \cdot a\low{l}^{-1}.\] The pair $(G, X)$ with the new actions is the twisting of $X$ and $G$ by $a$. In order to distinguish it from the original pair, we denote it by $(\tau_a(G), \tau_a(X))$. Some routine verifications have to be performed, for example: \begin {align*} &lt*(x) = a\low{{lt}}(lt(x)) = &\text{(by the cocycle condition)} \\ &=a\low{l}l(a\low{t})(l(t(x))) = a\low{l}l(a\low{t} (t(x))) = l*(a\low{t} t(x)) = l*(t*(x)) \end {align*} which shows that the twisted action is indeed a group action, and \begin{align*} &l*(g(x)) = a\low{l} l(g(x)) = a\low{l} l(g)(l(x)) = a\low{l} l(g) (a\low{l}^{-1}a\low{l})(l(x)) = \\ &=(a\low{l} l(g) a\low{l}^{-1})(a\low{l}(l(x))) = (l*g)(l*x) \end{align*} which shows that the two twisted actions are compatible. The twisting operation is important for two reasons. Firstly, it allows one to compute the cohomology of a twisted pair by means of the cohomology of the original pair. Secondly, using twisting we can completely describe the fibers of $(i_K^G)_*$ for $K \subseteq G$. Since from now on the $\delta$ operation can be defined using $G$ as well as $\tau_a(G)$, we will indicate which $\delta$ we use by writing the group on top of it: $\delta = \delta^G$. \begin {theorem} \label {twisting isomorphism} Let $G$ be an $L$-group, and let $a \in Z^1(L,G)$.
Then there is a natural bijection $\tau_a : H^1(L,\tau_a G) \stackrel{\approx}{\to} H^1(L,G)$ given by $\tau_a(b)_l = b_l a_l$. \end {theorem} \begin {proof} We shall show that the map is well defined; once this is done, it is clearly bijective, its inverse being the twisting by $a\low{l}^{-1}$ of $\tau_a(G)$. Let $b$ be a cocycle of $\tau_a(G)$. We have to check the cocycle condition for $b\low{l}a\low{l}$ as a cocycle for $G$. By the cocycle condition in $\tau_a(G)$, we have $b\low{lt} = b\low{l}l*(b\low{t})$. Substituting the definition of $l*(\cdot)$ into this formula, we find that $b\low{lt} = b\low{l} \cdot a\low{l} \cdot l(b\low{t}) \cdot a\low{l}^{-1}$. Multiplying both sides by $a\low{{lt}} = a\low{l} l(a\low{t})$, we get \begin {align*} b\low{lt} \cdot a\low{lt} = b\low{l}\cdot (a\low{l} \cdot l(b\low{t}) \cdot a\low{l}^{-1}) \cdot a\low{lt} = b\low{l} \cdot a\low{l} \cdot l(b\low{t}) \cdot a\low{l}^{-1} \cdot a\low{l} \cdot l(a\low{t}) = \\ =b\low{l} \cdot a\low{l} \cdot l(b\low{t}) \cdot l(a\low{t}) = b\low{l} \cdot a\low{l} \cdot l (b\low{t} \cdot a\low{t}) \end {align*} This is exactly the cocycle condition for $b\low{l}a\low{l}$ in $G$, so the map sends cocycles to cocycles. It is no harder to show that the map is equivariant with respect to the $\delta$ actions in $\tau_a(G)$ and $G$. By definition $\delta^{\tau_a(G)}_g(b)\low{l} = g^{-1} \cdot b\low{l} \cdot l*(g) = g^{-1} \cdot (b\low{l} \cdot a\low{l} \cdot l(g) \cdot a\low{l}^{-1}) = \delta^G_g(ba)\low{l} \cdot a\low{l}^{-1}$. Multiplying both sides of this equation by $a\low{l}$ from the right, we get $ \delta^{\tau_a(G)}_g(b)\low{l} \cdot a\low{l} = \delta^G_g(ba)\low{l}. $ \end {proof} This theorem is useful because it allows us to use any twist of $G$ in order to compute its cohomology. \subsection {Galois Cohomology} Suppose that $L$ is the Galois group of a separable extension $K/F$, and that $\textbf{G}$ is an algebraic group defined over $F$.
Then $H^1(L, \textbf{G}(K))$ is called the \textbf{Galois cohomology} of the extension $K/F$ with coefficients in $G$, and is denoted $\COH{1}{K/F}{G}$. If $K = F^{sep}$ is the separable closure of $F$, we write it simply as $\COH{1}{F}{G}$. The following computation is basic for many other computations of cohomologies. Note that in general Galois cohomology is defined as a continuous cohomology group, but we consider only finite extensions here, so this issue is irrelevant for us. \begin {theorem} {(Hilbert's 90 Theorem)} \label {hilbert 90} Let $L/K$ be a Galois extension. Then \[\COH{1}{L/K}{GL_n(L)} = 1.\] \end {theorem} \begin{remark} This is not the original formulation of Hilbert's 90 Theorem. Originally, the theorem states that the elements of norm $1$ in a cyclic Galois extension $L$ of $F$ are exactly the elements of the form $\frac{c(x)}{x}$, where $c$ is a generator of $\Gamma_{L/F}$. However, this is the form in which the theorem is most useful for us, and in fact Theorem \ref{hilbert 90} is a generalization of the original statement. \end{remark} The following consequence of Hilbert's 90 Theorem is useful for us as well. \begin {theorem} \label {vanishing theorem for centralizers} Let $L/F$ be a separable extension, and let $A \in M_{n \times n}(F)$. Then \[H^1(F, Z_{\GL_n}(A)) = 1.\] \end {theorem} \begin {proof} Let $\textbf{X}$ denote the conjugacy class of $A$ in $M_{n \times n}(L)$. Then Theorem \ref{descent lemma} shows that \[\KerH{L/F}{Z_{GL_n}(A)}{GL_n} \cong \textbf{X}(F) / GL_n(F)\] where $GL_n(F)$ acts on $\textbf{X}(F)$ by conjugation. By Hilbert's 90 Theorem, \[\COH{1}{L/F}{GL_n} = 1\] and therefore \[H^1(L/F, Z_{GL_n}(A)) \cong \textbf{X}(F) / GL_n(F).\] But it is a classical theorem in linear algebra that if two $F$-matrices are conjugate over $L$, then they are also conjugate over $F$. We deduce that $GL_n(F)$ acts transitively on $\textbf{X}(F)$.
\end {proof} \section {Cohomology and Stability} Given a symmetric pair $(G, H, \theta)$, we would like to describe an efficient method that, after finitely many steps, allows us to decide whether or not $(G,H,\theta)$ is stable. In this section we describe such a method, and prove that it actually gives the correct answer. \begin {definition} Let $G$ be a topological group acting continuously on a topological space $X$. An element $x \in X$ is called $G$-semi-simple if the orbit $Gx \subset X$ is closed. \end {definition} When a reductive group $G$ over an algebraically closed field acts on itself by conjugation, this definition is consistent with the usual definition of semi-simplicity. We restate the condition of stability using this definition: the pair $(G,H,\theta)$ is stable if and only if every $H\times H$-semi-simple element of $G$ is stable. \subsection {Interpretation of $H^1(\theta, G)$} \label{cohomology of the pair} We would like to give an explicit description of the cohomology set $H^1(\ZZ/2\ZZ, G)$, where $\ZZ/2\ZZ$ acts via the identification $\ZZ/2\ZZ = \{Id_G,\theta\}$. This treatment of involutions using cohomology can be found in \cite[\S II.5]{bj}. In this case, we shall denote the cohomology by $H^1(\theta,G)$. Let $a$ be a cocycle. There are 4 cocycle conditions on $a$: \begin{align} &a_{Id} = a_{Id^2} = a_{Id} \cdot Id(a_{Id}) = a_{Id} \cdot a_{Id} \\ &a_{_\theta} = a_{_{Id\cdot \theta}} = a\low{{Id}} \cdot Id(a\low{\theta}) = a\low{{Id}} \cdot a\low{{\theta}} \\ &a\low{\theta} = a\low{{ \theta \cdot Id}} = a\low{{\theta}} \cdot \theta(a\low{{Id}}) \\ &a\low{{Id}} = a\low{{ \theta \cdot \theta}} = a\low{{\theta}} \cdot \theta(a\low{{\theta}}) \end{align} Condition (1) is equivalent to $a\low{Id} = e\low{G}$. Substituting this into (2) and (3), we see that they are consequences of (1). Finally, condition (4) is equivalent to $a\low{\theta} \in S$. Next, we describe the coboundary action on these cocycles.
This is given by \[\delta_g(a)_{Id} = g^{-1} \cdot a\low{Id} \cdot g = g^{-1} \cdot g = e\low{G}\] \[\delta_g(a)_{\theta} = g^{-1} \cdot a\low{\theta} \cdot \theta(g)\] We deduce the following: \begin {lemma} \label {interpretation of cohomology} Let $(G,H,\theta)$ be a symmetric pair. Let $G$ act on $S$ by $\delta_g(r) = g^{-1} \cdot r \cdot \theta(g)$. Then there is a natural bijection $H^1(\theta,G) \equiv S/\delta_{G}$, given by $a \mapsto a\low{\theta}$. \end{lemma} \begin {corollary} Let $r \in S$. Then $r$ is a symmetrization of an element in $G$ if and only if the corresponding cohomology class $[r] \in H^1(\theta,G)$ is trivial. \end {corollary} According to Lemma \ref {interpretation of cohomology}, $H^1(\theta, G)$ has a natural topology, coming from the topology of $S$ as a closed subvariety of $G$. We shall prove that the induced topology on $H^1(\theta, G)$ is discrete. Equivalently, the $G$-orbits in $S$ are all open. This is a well-known result, and it will play a central role in our approach to stability. In order to prove this, we use the open mapping theorem for the action. \begin {definition} \label{def of submersive} Let $G$ be an analytic $F$-group acting on an analytic manifold $X$. Then $G$ acts \textbf{submersively} if for every $x \in X$, the action map \[\phi_x : G \rightarrow X\] given by $\phi_x(g) := g(x)$ is submersive. Equivalently, the action is submersive if the global vector fields on $X$ induced by $\frk{g}$ span the tangent space of $X$ at every point. \end {definition} \begin{remark} \label{submersive transitive action} It is clear that for a transitive action, submersivity at one point is equivalent to submersivity. \end{remark} Since submersions are always open, if $G$ acts submersively on $X$ then every orbit of $G$ in $X$ is open. Therefore, in order to prove that $H^1(\theta, G)$ is discrete, it will be sufficient to prove that $G$ acts submersively on $Z^1(\theta, G) \cong S$. This is what we shall prove now.
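Before giving the proof, we illustrate Lemma \ref{interpretation of cohomology} and the discreteness claim in a concrete case. \begin{example} Let $G = \GL_n(\RR)$ and $\theta(g) = (g^{-1})^t$, so that $S$ is the set of invertible symmetric matrices. The action of Lemma \ref{interpretation of cohomology} reads \[\delta_g(r) = g^{-1} \cdot r \cdot \theta(g) = g^{-1} r (g^{-1})^{t},\] which is precisely congruence of symmetric bilinear forms. By Sylvester's law of inertia, the orbits are classified by the signature, so $H^1(\theta,\GL_n(\RR))$ consists of $n+1$ classes, represented by the diagonal matrices with entries $\pm 1$. Each orbit is indeed open in $S$, since the signature of an invertible symmetric matrix is locally constant. \end{example}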
\begin {theorem} \label {submersive action} Let $(G,H,\theta)$ be a symmetric pair. Then the action of $G$ on $S$ given by $\delta$ is submersive. \end {theorem} \begin{proof} We have to prove that, at any point $r \in S$, the action map $\phi_r(g) = g^{-1} r \theta(g)$ is submersive. Consider first the special case $r = e\low{G}$. At this point, we get \[\phi_{e\low{G}}(g) = s(g^{-1}).\] We have a direct decomposition $\frk{g} = \frk{h} \oplus \frk{s}$. The map $ds$ is given by \[ds(X + Y) = 2Y,\] where $X \in \frk{h}, Y \in \frk{s}$. This map is clearly onto $T_{e\low{G}}S \cong \frk{s}$. This, together with Remark \ref{submersive transitive action}, shows that the $\delta$ action is submersive at every point of $S_0$. For a point $r \in S - S_0$, we use twisting. Let $r \in S$, and let $G' = \tau\low{r}(G)$. We add $'$ to everything related to $G'$, so that $S'$ denotes the cocycles of the twisted action and $\phi '$ the action map of the twisted action. Let $Id: G' \to G$ be the identity map sending each element to itself. Consider the square \[ \begin {CD} S' @>\cdot r >> S \\ @A\phi'_eAA @A\phi_rAA \\ G' @>Id>> G \\ \end{CD} \] This square is commutative (this was shown in the proof of Theorem \ref{twisting isomorphism}). As the two horizontal maps are isomorphisms, the submersivity of $\phi_r$ follows from that of $\phi'_e$, which is the special case $r = e$ for $G'$. \end{proof} \begin {corollary} The orbits of $G$ in $S(G) = Z^1(\theta, G)$ are open. \end{corollary} \begin {corollary} $\COH{1}{\theta}{G}$ is discrete for every symmetric pair $(G,H,\theta)$. \end{corollary} \begin{remark} It is worth mentioning that in the proof of Theorem \ref{submersive action} we used the case $r = e$ for all the twistings of $G$ at the same time; we could not have deduced the theorem using only the special case $r=e$ for $G$ itself. It is therefore \textbf{not} a direct consequence of the equivariance of the action map.
\end{remark} \subsection {Equivalent Condition to Stability} Let $(G, H, \theta)$ be a symmetric pair. We would like to describe the stable elements of $G$ in terms of the conjugacy classes of their symmetrizations. This is done as follows. Let $s:G \rightarrow S$ be the symmetrization map, and let $S_0$ be its image in $S$. As in \S \ref{cohomology of the pair}, the map $s$ is submersive everywhere, hence open. We can view it as the action map of the transitive left action of $G$ on $S_0$ given by $g \mapsto \delta_{g^{-1}}$. The map $s$ is invariant under multiplication from the right by $H$: if $\theta(h) = h$, then $s(gh) = ghh^{-1}\sigma(g) = s(g)$. Therefore, $s$ induces a submersive $G$-equivariant map $s:G/H \rightarrow S_0$. \begin {lemma} \label {injectivity of symmetrization on orbits} The symmetrization map defines an isomorphism of analytic manifolds \[s:G/H \stackrel{~}{\longrightarrow}S_0.\] \end{lemma} \begin {proof} As $S_0$ is a homogeneous space for $G$, it is sufficient to prove that $H$ is the stabilizer of $e$ in $S_0$. But \[s(g) = e \Leftrightarrow g\sigma(g) = e \Leftrightarrow g = \theta(g) \Leftrightarrow g \in H\] \end{proof} The action of $H$ on $G/H$ by multiplication from the left is transformed by $s$ into conjugation. It follows from Lemma \ref{injectivity of symmetrization on orbits} that $H\backslash G/H \cong S_0/Ad_H$. The action of $\sigma$ on the double-coset space passes via $s$ to the action $Ad_H(s(g)) \rightarrow Ad_H(\bar{s}(g))$, where $\bar{s}(g) = \sigma(g)g$. The next proposition follows: \begin{proposition} An element $g \in G$ is stable if and only if $s(g)$ and $\bar{s}(g)$ are conjugate to each other by an element of $H$. \end{proposition} This proposition means that we can check the stability of an element by looking at the $H$-conjugacy class of its symmetrization. Note that in particular, the stability of $g$ depends only on the $H$-conjugacy class of $s(g)$.
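We illustrate this criterion in the prototypical example. \begin{example} Let $G = \GL_n(\RR)$, $\theta(g) = (g^{-1})^t$, $H = O_n(\RR)$. Every $g \in G$ admits a polar decomposition $g = pk$ with $p$ a positive-definite symmetric matrix and $k \in O_n(\RR)$. Then \[s(g) = g g^t = p k k^t p = p^2, \qquad \bar{s}(g) = g^t g = k^t p^2 k = k^{-1}\, s(g)\, k,\] so $s(g)$ and $\bar{s}(g)$ are conjugate by the element $k \in H$. Hence every element of $G$ is stable, and the pair $(\GL_n(\RR), O_n(\RR), \theta)$ is stable; this is a special case of Theorem \ref{riemannian pairs} below. \end{example}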
As the map $s$ is a homeomorphism from $G/H$ to $S_0$, $g$ is $H\times H$ semi-simple if and only if $s(g)$ is $Ad_H$ semi-simple. In \cite[Theorem 7.2.1]{dima}, it was proved that the $Ad\low{H}$ semi-simplicity of $r \in S$ implies its $Ad\low{G}$ semi-simplicity. We complete the picture by proving that if $r \in S$ is $Ad\low{G}$ semi-simple, it is also $Ad\low{H}$ semi-simple. Let $r \in S$ be $Ad\low{G}$ semi-simple. Then the orbit $Ad\low{G}(r)$ is closed in $G$. Clearly $Ad_H(r) \subset Ad\low{G}(r) \cap S$, and the set $Ad\low{G}(r) \cap S$ is closed in $G$. Thus, it will be sufficient to prove that $Ad\low{H}(r)$ is closed in $Ad\low{G}(r) \cap S$. We shall actually prove a stronger result. \begin{lemma} \label{orbits of Ad_H} The orbits of $Ad_H$ in $Ad\low{G}(r) \cap S$ are open and closed. In other words, the quotient $(Ad\low{G}(r) \cap S)/Ad_{H}$ is discrete. \end{lemma} \begin{proof} We define a continuous injection \[(Ad\low{G}(r) \cap S)/Ad_{H} \hookrightarrow H^1(\theta, Z\low{G}(r)).\] Let $g \in G$, and assume that $grg^{-1} \in S$. Then $\theta(g)\theta(r) \theta(g)^{-1} = g r^{-1} g^{-1}$. On the other hand, \[\theta(g)\theta(r) \theta(g)^{-1} = \theta(g) r^{-1} \theta(g)^{-1}.\] Inverting the identity above, we get that \[grg^{-1} = \theta(g) r \theta(g^{-1}).\] Therefore, the coboundary $s(g^{-1}) = g^{-1}\theta(g)$ belongs to $Z_G(r)$. Changing the choice of $g$ changes the cocycle to a cohomologous one in $Z^1(\theta,Z\low{G}(r))$. Indeed, if $grg^{-1} = g'rg'^{-1}$, then $g'^{-1}g \in Z\low{G}(r)$, so \[s(g^{-1}) = \delta_{g'^{-1}g}(s(g'^{-1}))\] which is cohomologous to $s(g'^{-1})$ as a cocycle of $Z\low{G}(r)$. The injectivity is also not hard to check. If $s(g^{-1}) = s(g'^{-1})$, then the elements $g$ and $g'$ differ by multiplication \textbf{from the left} by an element of $H$, by Lemma \ref{injectivity of symmetrization on orbits}. Writing $g = hg'$ with $h \in H$, we get that $grg^{-1} =h(g'rg'^{-1})h^{-1}$.
This does not change the $Ad_H$-orbit, so the map is injective. \end{proof} Lemma \ref{orbits of Ad_H} does not follow directly from Theorem \ref{descent lemma}, as we take anti-fixed points instead of fixed points. But the proofs of the two are identical. In particular, the morphism we constructed in the proof is actually an isomorphism of topological spaces \[(Ad\low{G}(r) \cap S)/Ad_{H} \cong \KerH{\theta}{Z\low{G}(r)}{G}.\] As the cohomology $H^1(\theta, Z\low{G}(r))$ is discrete, we deduce that $(Ad\low{G}(r) \cap S)/Ad_{H}$ is also discrete, so the $Ad_H$ orbits in $Ad\low{G}(r) \cap S$ are open and closed. We have therefore proved that $Ad_G$ semi-simplicity implies $Ad_H$ semi-simplicity for elements of $S$. Combining all the results above, we deduce the following: \begin{theorem} \label{the three are equivalent} Let $(G,H,\theta)$ be a symmetric pair, and let $g\in G$. The following three conditions are equivalent: \begin {enumerate} \item $g$ is $H\times H$ semi-simple. \item $s(g)$ is $Ad\low{H}$ semi-simple. \item $s(g)$ is $Ad\low{G}$ semi-simple. \end{enumerate} \end{theorem} \begin {proof} $(1)\Leftrightarrow(2)$ is a consequence of Lemma \ref{interpretation of cohomology}. The implication $(2)\Rightarrow(3)$ is already proved in \cite[Proposition 7.2.1]{dima}, and $(3)\Rightarrow(2)$ is a consequence of Lemma \ref{orbits of Ad_H}. \end {proof} We conclude with an application to Riemannian pairs, whose stability is simple and well known. \begin{theorem} \label{riemannian pairs} Let $(G,H,\theta)$ be a Riemannian symmetric pair. Then $(G,H,\theta)$ is stable. \end{theorem} \begin{proof} It will be sufficient to prove that $s(g)$ and $\bar{s}(g)$ are $H$-conjugate for every $g$ for which $s(g)$ is semi-simple. But $S_0 = s(G)$ is a Riemannian symmetric space and hence a complete Riemannian manifold. It follows that $\exp: \frk{s} \to S_0$ is onto, so $s(g) = \exp(X) = [\exp(X/2)]^2$ for some $X \in \frk{s}$.
So $s(g) = s(\exp(X/2))$, and therefore $Ad_H(\bar{s}(g)) = Ad_H(\exp(X)) = Ad_H(s(g))$. \end{proof} \subsection{Stable Elements, $H^1(\theta, Z\low{G}(r))$ and $H^1(F, Z\low{H}(r))$} In the previous subsection we reduced the problem of the stability of $g \in G$ to the problem of the $H$-conjugacy of $s(g)$ and $\bar{s}(g)$. In this section we construct cohomological obstructions to the conjugacy of $s(g)$ and $\bar{s}(g)$ under $H$. \begin {definition} Let $(G,H,\theta)$ be a symmetric pair, and $r$ a semi-simple element of $S$. The pair $(Z\low{G}(r),Z\low{H}(r), \theta)$ is a \textbf{descendant} of the pair $(G,H,\theta)$. \end{definition} Understanding the descendants of a pair is an essential part of checking its stability. \subsubsection{First Invariant, $H^1(F, Z\low{H}(r))$} The first invariant of an element $r \in S_0$ lives in the cohomology $H^1(F,Z\low{H}(r))$, and it vanishes exactly when $r$ and $\bar{r}$ are $H$-conjugate, where $\bar{r}$ denotes $\bar{s}(g)$ for some $g \in G$ with $s(g) = r$. In practice, we find the second invariant much more suitable for computation. We include the construction of the first invariant for theoretical completeness and because it is a direct generalization of the original method used by Aizenbud and Gourevitch to prove the stability of certain pairs. For the construction of the first invariant, we need the notion of a sub-pair. \begin {definition} A symmetric pair $(G,H,\theta)$ is a \textbf{sub-pair} of the pair $(G',H',\theta')$ if $G \subseteq G'$ and $\theta'|_{G} = \theta.$ If there is a group $L$ acting on $G'$ in a way compatible with the action of $\theta'$, and such that $G = G'^L$, we say that $(G,H,\theta)$ is the \textbf{fixed pair} of $L$. \end {definition} Suppose that the symmetric pair $(G,H,\theta)$ can be embedded in a larger stable pair $(G',H',\theta')$ as the fixed pair of some group $L$. As we have two pairs to deal with, we will add $'$ to everything related to $G'$.
For example, the symmetric part of $G'$ will be denoted by $S'$, and so on. Let $g \in G$; we would like to check whether $s(g) = r \in S_0$ and $\bar{s}(g) = \bar{r}$ are $H$-conjugate. As the pair $(G',H',\theta')$ is stable, $\bar{s}(g) \in Ad\low{H'}(s(g))$. We are now in a situation similar to that of Theorem \ref{descent lemma}. Namely, let $X' = Ad_{H'}(s(g))$. As $H'$ acts transitively on $X'$, which has an $L$-fixed point with stabilizer $Z_{H'}(s(g))$, according to Theorem \ref{descent lemma} we find that \[(X'^L)/Ad\low{H} \cong \KerH{L}{Z\low{H'}(r)}{G'}.\] This isomorphism allows us to associate with $\bar{r}$ a cohomology class in $H^1(L,Z\low{{H'}}(r))$, which we denote by $[\bar{r}]$. A priori, $[\bar{r}]$ depends on the choice of $g$. But by Lemma \ref{injectivity of symmetrization on orbits}, two different choices of $g$ differ by an element of $H$, so changing the choice of $g$ has no effect on the cohomology class. The injectivity of the association $\bar{r} \mapsto [\bar{r}]$ implies the following: \begin {proposition} Let $(G',H',\theta')$ be a stable symmetric pair, and let $(G,H,\theta)$ be a sub-pair of $(G',H',\theta')$ which is the fixed pair of the group $L$. An element $r \in S_0$ is stable, in the sense that $\bar{r} \in Ad\low{H}(r)$, if and only if $[\bar{r}]$ is trivial in $H^1(L, Z\low{{H'}}(r))$. \end {proposition} There is a general way to embed a given pair in a stable one: one can simply take the base change to the algebraic closure. It is true in general that if $\textbf{G}(\bar{F})/\textbf{H}(\bar{F})$ is connected, the pair $(\textbf{G}(\bar{F}), \textbf{H}(\bar{F}), \theta)$ is stable. For a proof, see e.g. \cite[Corollary 7.1.7]{dima}. As a consequence, we get: \begin {proposition} \label{Galois invariant} Let $(G,H,\theta)$ be a symmetric pair, and assume that $\textbf{G}(\bar{F})/\textbf{H}(\bar{F})$ is connected.
Then for every $r \in S_0$ there is a well-defined cohomology class $[\bar{r}] \in H^1(F, Z\low{H}(r))$ that vanishes if and only if $r$ is conjugate by $H$ to $\bar{r}$. \end {proposition} \subsubsection{Second Invariant, $H^1(\theta, Z\low{G}(r))$} There is an intrinsic invariant of semi-simple elements in $S_0$, which is far more useful in our treatment of stability. Let $r \in S_0$, and choose $g \in G$ such that $r = s(g)$. Then, by definition, \[g^{-1} r g = g^{-1} s(g) g = \bar{s}(g) = \bar{r}.\] This identity, together with Theorem \ref{descent lemma}, suggests that we should look at $r = s(g) = \delta_{g^{-1}}(e)$ as a cohomology class in $\KerH{\theta}{Z\low{G}(r)}{G}$. We claim that this is indeed an obstruction to stability. This is a good place to recall what a normal element is. \begin {definition} Let $(G,H,\theta)$ be a symmetric pair, and let $g \in G$. Call $g$ \textbf{normal} if and only if $g$ commutes with $\theta(g)$. \end{definition} \begin {proposition} \label{symmetrization in centralizer} Let $(G,H,\theta)$ be a symmetric pair, and let $s(g) = r \in S_0$. Then $g$ is stable if and only if $r$ is a symmetrization of an element of $Z_G(r)$. \end{proposition} \begin{proof} Assume that $g$ is stable. Then there is $h \in H$ such that $hrh^{-1} = \bar{r}$. It follows that $gh r (gh)^{-1} = g\bar{r} g^{-1} = r$, so $r = s(g) = s(gh)$ and $gh \in Z\low{G}(r)$. This shows that $r$ is a symmetrization of an element of $Z\low{G}(r)$. Conversely, assume that $r = s(g_0)$ for some $g_0 \in Z\low{G}(r)$. Then $\bar{s}(g_0) = g_0^{-1}s(g_0)g_0 = r$, so $g_0$ is stable, and hence so is $g$, as stability depends only on the symmetrization. \end{proof} Using cohomological notation, we can reformulate this result as follows: \begin{proposition} \label{the centralizer criterion} Let $(G,H,\theta)$ be a symmetric pair, and let $s(g) = r \in S_0$. Let $[r]$ be the corresponding cohomology class in $\KerH{\theta}{Z\low{G}(r)}{G}$.
Then $g$ is stable if and only if $[r]$ is trivial in $H^1(\theta, Z_G(r)).$ \end{proposition} Using a small modification of the result above, we can give a cohomological criterion which makes no reference to symmetric elements but only to subgroups of $G$. However, this will give only a necessary, not a sufficient, condition. In order to do this, we need a definition: \begin{definition} A torus $\textbf{A} \subseteq \textbf{G}$ is called \textbf{$\theta$-split} if and only if $\theta(x) = x^{-1}$ for every $x \in \textbf{A}$. \end{definition} As usual, if $\textbf{A}$ is defined over $F$ we denote its $F$-points by $A$, and refer to $A$ as a $\theta$-split $F$-torus. \begin{theorem} \label{comological criterion for stability} If a symmetric pair $(G,H,\theta)$ is stable and $A$ is a maximal $\theta$-split $F$-torus of $G$, then \[\ImH{\theta}{A}{Z_G(A)} \cap \KerH{\theta}{Z_{G}(A)}{G} = 1.\] \end{theorem} \begin{proof} Assume that $(G,H,\theta)$ is stable, and let \[\alpha \in \ImH{\theta}{A}{Z_G(A)} \cap \KerH{\theta}{Z_{G}(A)}{G}.\] Let $r$ be a representative of $\alpha$ in $A$. Then $\alpha = [r] = [s^2r]$ for every $s \in A$, as $\theta|_A(x) = x^{-1}$, so $\delta_x(r) = x^2r$ in $A$. We can choose $s \in A$ general enough such that $Z\low{G}(s^2r) = Z\low{G}(A)$. But then $[r] = [s^2r] \in H^1(\theta, Z\low{G}(A)) = H^1(\theta, Z\low{G}(s^2r))$. By Proposition \ref{the centralizer criterion} and the stability of $(G,H,\theta)$, we deduce that $\alpha$ is trivial. \end{proof} \section {Open Orbits in a Parabolic Quotient} In this section we analyze the relation between the stability of a pair $(G,H,\theta)$ and another geometric property, which we call p-stability. \begin {definition} Let $(G,H,\theta)$ be a symmetric pair. We say that $(G,H,\theta)$ is \textbf{p-stable} if $H$ has a single open orbit in each parabolic quotient $G/P$. \end{definition} It is clearly sufficient to check p-stability on minimal parabolic subgroups.
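In the group case, p-stability can be seen directly; we record this standard observation only as an illustration.

\begin{remark} Consider a pair of the form $(G_1 \times G_1, \Delta G_1, \theta)$, where $G_1$ is a reductive group over $F$, $\theta(x,y) = (y,x)$, and $\Delta G_1$ is the diagonal subgroup. A minimal parabolic subgroup of $G_1 \times G_1$ has the form $P_1 \times P_2$ with $P_1, P_2$ minimal parabolic subgroups of $G_1$, and the $\Delta G_1$-orbits on $(G_1 \times G_1)/(P_1 \times P_2)$ correspond to the double cosets in $P_1 \backslash G_1 / P_2$. By the relative Bruhat decomposition there is a unique open double coset, so these pairs are p-stable. \end{remark}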
This property is of interest for us since the decomposition of $G/P$ into $H$-orbits carries information on the relative representation theory of $H$ in $G$. We shall return to the problem of linking p-stability and stability after recalling some machinery from the theory of involutions of reductive groups. \subsection {Preliminaries on Split Parabolic Subgroups and Split Tori} We recall some facts and notation regarding parabolic subgroups and tori in symmetric pairs, most of which can be found in \cite{hel}. For clarity, we shall refer to a subgroup $P \subseteq G$ as a parabolic subgroup if it is the group of $F$-rational points of a parabolic subgroup of $\textbf{G}$ which is defined over $F$. Sometimes we also refer to such subgroups as parabolic $F$-subgroups. \begin {definition} Let $\textbf{A} \subseteq \textbf{G}$ be a torus defined over $F$. $\textbf{A}$ is $F$-split if and only if $X^*(\textbf{A})$ is a trivial $\Gamma_F$-module. \end {definition} The group $\textbf{G}$ is called \textbf{$F$-split} if it has a maximal torus which is $F$-split. All the maximal $F$-split tori in $\textbf{G}$ are conjugate by $G$. Moreover, there exist maximal $F$-split tori which are $\theta$-stable, and such tori exist in any parabolic subgroup defined over $F$. Let $\textbf{A} \subseteq \textbf{G}$ be a $\theta$-stable torus defined over $F$. We denote by $\textbf{A}^+, \textbf{A}^{-}$ its orthogonal and symmetric parts respectively. Similarly, we denote by $A^+, A^-$ the orthogonal and symmetric parts of its $F$-rational points. \begin{definition} A torus $\textbf{A}$ is called $\theta$-split if $\textbf{A} = \textbf{A}^{-}$. \end{definition} Let $g \in G$. We would like to define what it means for $g$ to be ``diagonalizable over $F$''. \begin {definition} A semi-simple element $g \in G$ is called \textbf{$F$-split} if $g$ is contained in a maximal $F$-split torus. \end{definition} \begin {lemma} Let $g \in G$ be semi-simple.
Then $g$ is $F$-split if and only if $Z_G(g)$ contains a maximal $F$-split torus of $G$. \end{lemma} \begin{proof} There is a bijection between the maximal $F$-split tori of $G$ containing $g$ and the maximal $F$-split tori of $Z_G(g)$. The result follows immediately. \end{proof} For us, the most important split elements are those contained in a maximal $(\theta, F)$-split torus. \begin {definition} A semi-simple element $g \in G$ is called $(\theta,F)$-split if $g$ is contained in a maximal $(\theta,F)$-split torus. \end{definition} There is a detailed description of the $H$-orbits in $G/P$ for a minimal parabolic subgroup $P$, which is given in \cite{hel}. Since we are interested only in the open orbits, we recall only the part of the theory related to them. \begin{definition} Let $P$ be a parabolic subgroup of $G$. Then $P$ is called a $\theta$-\textbf{split} parabolic subgroup if $P$ and $\theta(P)$ are opposite parabolic subgroups. \end{definition} \begin {lemma} Let $P$ be a $\theta$-split parabolic subgroup of $G$. Then $HP$ is open in $G$. \end {lemma} \begin {proof} It is sufficient to prove that $H \times P \to G$ is submersive at the identity, or that $\frk{g} = \frk{h} + \frk{p}$. But as $P$ is $\theta$-split we have \[\frk{g} = \frk{p} + d\theta(\frk{p}).\] The result follows from the simple observation that $d\theta(\frk{p}) \subseteq \frk{p} + \frk{h}$. \end{proof} \begin{remark} In fact, this is true also with $P$ a parabolic subgroup of a minimal $\theta$-split parabolic subgroup of $G$. This is essentially proved in \cite[Proposition 4.10]{hel}. \end{remark} We wish to classify minimal parabolic subgroups $P \subseteq G$ such that $HP$ is open. Recall that every minimal parabolic subgroup of $G$ contains a $\theta$-stable maximal $F$-split torus, unique up to conjugation by the unipotent radical $U_P$. Fix such a torus $A$ and let $A^- := A \cap S$. Then $\Phi(A,G)$ is a root system in $X^*(A)$ and $\Phi(A^-,G)$ is a root system in $X^*(A^-)$.
We have a surjection $\pi^- : \Phi(A,G) \to \Phi(A^-,G)$ given by restriction. Every minimal parabolic subgroup $P \subseteq G$ containing $A$ defines a collection of positive roots $\Phi(A,P) \subseteq \Phi(A,G)$. We can now state the classification of minimal parabolic subgroups $P$ for which $HP$ is open: \begin {theorem} \label{classification of generic parabolics} Let $P$ be a minimal parabolic subgroup of $G$, and let $A$ be a $\theta$-stable maximal $F$-split torus of $G$ contained in $P$. Then $PH$ is open in $G$ if and only if $A^- := A \cap S$ is a maximal $(\theta,F)$-split torus of $G$ and $\pi^-(\Phi(A,P)) \backslash \{0\}$ is a choice of positive roots in $\Phi(A^-,G)$. \end{theorem} \begin{proof} First, note that if $V$ is a linear space over $\RR$, $C \subseteq V$ is a convex polyhedral cone and $\pi : V \to U$ is a linear projection, then $\pi(C)$ is again a convex polyhedral cone. Let $C \subseteq X^*(A) \otimes \RR$ be the cone spanned by $\Phi(A,P)$. Then since $\Phi(A,G) \subseteq C \cup -C$, we have $\Phi(A^-,G) \subseteq \pi^-(C) \cup \pi^-(-C)$. Thus, since $\pi^-(C)$ is a polyhedral cone, it defines a collection of positive roots in $\Phi(A^-,G)$ if and only if it does not contain antipodal roots. This is equivalent to the condition that for every $\alpha \in \Phi(A,P)$, either $\theta(\alpha) = \alpha$ or $\theta(\alpha) \notin \Phi(A,P)$. The condition that $HP$ is open in $G$ is equivalent to the infinitesimal condition \[\mathfrak{h} + \mathfrak{p} = \mathfrak{g}.\] Thus, we have to prove that $\mathfrak{h} + \mathfrak{p} = \mathfrak{g}$ if and only if $A^-$ is a maximal $(\theta,F)$-split torus and for every $\alpha \in \Phi(A,P)$ either $\alpha = \theta(\alpha)$ or $\theta(\alpha)$ is a negative root. Let $P = MU^+$ be the Levi decomposition of $P$ corresponding to $A$, i.e. $M = Z_G(A)$ and $U^+$ is the unipotent radical of $P$. Let $U^-$ be the unipotent radical of the opposite parabolic subgroup of $G$ containing $A$.
Let $K$ denote the Killing form of $\mathfrak{g}$. Then, with respect to $K$, we have \begin{align*} &\mathfrak{h}^\bot = \mathfrak{s} \\ &\mathfrak{p}^\bot = \mathfrak{u} \end{align*} and therefore \[(\mathfrak{h} + \mathfrak{p})^\bot = \mathfrak{u} \cap \mathfrak{s}.\] It follows that $HP$ is open in $G$ if and only if $\mathfrak{u} \cap \mathfrak{s} = 0$. Assume first that $\mathfrak{u} \cap \mathfrak{s} = 0$. We have to prove that $A \cap S$ is a maximal $(\theta,F)$-split torus and that the condition on positive roots above is satisfied. Let $Y \in \mathfrak{s}$ be an element which commutes with $\mathfrak{a} \cap \mathfrak{s}$. Let $Y = Y_0 + \sum_{\alpha \in \Phi(A,G)} Y_\alpha$ be a decomposition of $Y$ into a sum of root vectors. Since $Y$ commutes with $\mathfrak{a} \cap \mathfrak{s}$, $Y_\alpha = 0$ unless $\theta(\alpha) = \alpha$. It follows that \[Y = Y_0 + \sum_{\alpha \in \Phi(A,G)^\theta} Y_\alpha.\] Since $Y \in \mathfrak{s}$ we get that \[Y_\alpha \in \mathfrak{g}_\alpha \cap \mathfrak{s} \subseteq \mathfrak{u} \cap \mathfrak{s} = 0.\] It follows that $Y \in \mathfrak{m}$, but then by the maximality of $\mathfrak{a}$ we must have $Y \in \mathfrak{a} \cap \mathfrak{s}$. This proves the maximality of $A \cap S$. We next show that for every $\alpha \in \Phi(A,P)$ either $\theta(\alpha) = \alpha$ or $\theta(\alpha) < 0$. This is straightforward: if $\alpha \in \Phi(A,P)$ satisfies $\theta(\alpha) \ne \alpha$ and $\theta(\alpha) > 0$, then \[ 0 \ne (\mathfrak{g}_\alpha + \mathfrak{g}_{\theta(\alpha)})\cap \mathfrak{s} \subseteq \mathfrak{u} \cap \mathfrak{s} = 0\] and we get a contradiction. Conversely, assume that $A^-$ is maximal and that there is no $\alpha \in \Phi(A,P)$ with $\theta(\alpha) \ne \alpha$ and $\theta(\alpha) > 0$. We have to prove that $\mathfrak{u} \cap \mathfrak{s} = 0$. Clearly, $\mathfrak{u} \cap \mathfrak{s} \subseteq \mathfrak{u} \cap \theta(\mathfrak{u})$.
Since $\mathfrak{u} \cap \theta(\mathfrak{u})$ is spanned by the $\mathfrak{g}_\alpha$ for which $\alpha$ and $\theta(\alpha)$ are both positive, by the assumption on the roots we deduce that \[\mathfrak{u} \cap \mathfrak{s} = \sum_{\alpha \in \Phi(A,P)^\theta} (\mathfrak{g}_\alpha \cap \mathfrak{s}).\] Thus, it is sufficient to prove that $\mathfrak{g}_\alpha \cap \mathfrak{s} = 0$ for every $\alpha \in \Phi(A,P)^\theta$. If $0 \ne Y \in \mathfrak{g}_\alpha \cap \mathfrak{s}$ then as in \cite[Lemma 7.1.11]{dima} we can extend $Y$ to an $\mathfrak{sl}_2$-triple $(X,F,Y)$ such that $X \in \mathfrak{g}_{-\alpha} \cap \mathfrak{s}$ and $F \in \mathfrak{h}$. Since $\alpha = \theta(\alpha)$, $X$ and $Y$ commute with $A^-$. But then $\mathfrak{a}^- + \mathrm{span}\{X + Y\}$ is the Lie algebra of a $(\theta,F)$-split torus properly containing $\mathfrak{a}^-$, contrary to the maximality of $\mathfrak{a}^-$. \end{proof} The following result is useful for the classification of $H$-conjugacy classes of minimal $\theta$-split parabolic subgroups. \begin{proposition}[{\cite[Proposition 4.9]{hel}}] \label{conjugating parabolics} Let $P,P'$ be minimal parabolic subgroups of $G$ defined over $F$, and let $Q,Q'$ be minimal $\theta$-split parabolic subgroups containing $P,P'$ respectively. If $g \in G$ satisfies $gPg^{-1} = P'$ then $gQg^{-1} = Q'$. \end{proposition} \subsection {Different Types of Stability} There are some other notions, related to stability, that we shall consider. \begin {definition} Let $(G,H, \theta)$ be a symmetric pair. The pair is: \begin {itemize} \item \textbf{s-stable} if every $g \in G$ such that $s(g)$ is $(\theta,F)$-split is stable. \item \textbf{p-stable} if $H$ has a unique open orbit in $G/P$ for every parabolic subgroup $P \subseteq G$. \item \textbf{t-stable} if all the $\theta$-stable maximal $F$-split tori in $G$ which contain a maximal $(\theta,F)$-split torus are conjugate by $H$. \end{itemize} \end{definition} All these properties are related.
Even though not all of them are equivalent, we are able to prove the following scheme of implications: \fbox{ \label{implications} \xymatrix{ \fbox{stable} \ar@{=>}^{(1)}[rd] &\fbox{s-stable} \\ \fbox{t-stable} &\fbox{p-stable} \ar@{=>}^{(2)}[u] \ar@{=>}^{(3)}[l] & \ar@{=>}^{(4)}[l] \fbox{Gelfand prop.} \\ } } \vspace{0.5 cm} Implication (1) will be proved in the next section. Implication (4) will be proved in Section 7. Implications (2) and (3) will be proved here after some preparation. \begin {proposition}[{\cite [4.21] {bt}}] \label{BT tori} Let $G$ be a reductive group over a local field $F$. Then all the maximal $F$-split tori in $G$ are conjugate over $F$. \end {proposition} \begin {proposition}[{\cite [4.13.b] {bt}}] \label{BT minimal parabolics} Let $G$ be a reductive group over a local field $F$. Then all its minimal parabolic $F$-subgroups are conjugate over $F$. \end {proposition} \begin{proposition} \label{t-stable iff weakly t-stable} A pair $(G,H,\theta)$ is t-stable if and only if all the maximal $(\theta,F)$-split tori in $G$ are $H$-conjugate. \end{proposition} \begin {proof} We first prove that t-stability implies the conjugacy of maximal $(\theta,F)$-split tori. Let $(G,H,\theta)$ be a t-stable pair. Let $A, A'$ be two maximal $(\theta, F)$-split tori in $G$; we have to show that they are $H$-conjugate. Let $T, T'$ be $\theta$-stable maximal $F$-split tori containing $A,A'$ respectively. By the assumption, there is $h \in H$ such that $h T h^{-1} = T'.$ But then, as $h \in H$, we have \[h A h^{-1} = h T^{-} h^{-1} = (h T h^{-1})^{-} = T'^{-} = A'\] and therefore $A,A'$ are $H$-conjugate. Conversely, assume that all the maximal $(\theta,F)$-split tori are $H$-conjugate. We shall show that the pair is t-stable. Let $T, T'$ be $\theta$-stable maximal $F$-split tori of $G$, with $T^-, T'^-$ maximal $(\theta, F)$-split. Let $A := T^-$ and $A' := T'^-$. By the assumption, there is $h \in H$ such that $h A h^{-1} = A'$.
Therefore, replacing $T$ by $hTh^{-1}$, we may assume that $A = A'$. Let $K := Z\low{H}(A)$. Clearly, $K$ is a reductive subgroup of $H$. Since $T^+$ and $T'^+$ are both maximal $F$-split tori in $K$, by Proposition \ref{BT tori} there is $k \in K$ such that $k T^+ k^{-1} = T'^+$. But then $kTk^{-1} = T'$ and $T,T'$ are $H$-conjugate. \end {proof} Next we want to give an equivalent condition for p-stability. This requires some preliminaries on $\theta$-split parabolic subgroups. \begin {lemma} \label{conjugacy of minimal split parabolics} Let $(G,H,\theta)$ be a symmetric pair. The minimal $\theta$-split parabolic $F$-subgroups of $G$ are all conjugate by $G$. \end{lemma} \begin {proof} Let $Q, Q'$ be minimal $\theta$-split parabolic $F$-subgroups of $G$, and let $P,P'$ be minimal parabolic $F$-subgroups of $G$ contained in $Q,Q'$ respectively. By Proposition \ref{BT minimal parabolics} there is $g \in G$ such that $gPg^{-1} = P'$. But then, by Proposition \ref{conjugating parabolics} we get that $gQg^{-1} = Q'$. \end {proof} \begin {lemma} \label {uniqueness of theta-split torus} Let $(G,H,\theta)$ be a symmetric pair and let $Q$ be a minimal $\theta$-split parabolic $F$-subgroup of $G$. Then $Q$ contains a unique maximal $(\theta, F)$-split torus. \end{lemma} \begin{proof} First note that, by \cite[Proposition 4.7]{hel}, $Q$ contains a maximal $(\theta,F)$-split torus. Let $A \subseteq Q$ be a maximal $(\theta, F)$-split torus. Then $Q \cap \theta(Q) = Z\low{G}(A)$, and since $Q \cap \theta(Q)$ is reductive, its center is a torus. By the maximality of $A$ we then have $A = Z(Q \cap \theta(Q))^-$, which depends only on $Q$. \end {proof} \begin{proposition} \label{p-stable iff weakly p-stable} A pair $(G,H,\theta)$ is p-stable if and only if all the minimal $\theta$-split parabolic subgroups of $G$ are conjugate by $H$. \end{proposition} \begin{proof} First assume that the pair is p-stable.
Since all the minimal $\theta$-split parabolic subgroups of $G$ are conjugate by $G$, the variety of minimal $\theta$-split parabolic subgroups is isomorphic as a $G$-variety to $G/Q$ for a minimal $\theta$-split parabolic subgroup $Q \subseteq G$. Let $P$ be a minimal parabolic subgroup of $G$ contained in $Q$. Then we have a natural $H$-equivariant submersion $\pi : G/P \to G/Q$. By the assumption, $H$ has a unique open orbit in $G/P$, and therefore it has an open dense orbit $\mathcal{O} \subseteq G/P$. But then $\pi(\mathcal{O})$ is an open dense orbit of $H$ in $G/Q$. As $HQ'$ is open in $G$ for every minimal $\theta$-split parabolic subgroup $Q'$, each point of $G/Q$ has an open $H$-orbit, which must therefore meet, and hence coincide with, the dense orbit $\pi(\mathcal{O})$. It follows that all the minimal $\theta$-split parabolic subgroups are $H$-conjugate. We turn to the converse. Note that p-stability need only be checked for the minimal parabolic $F$-subgroups, as the projections $G/P \to G/P'$ are submersions for $P \subseteq P'$. Furthermore, as parabolic subgroups are self-normalizing, the variety $G/P$ can be identified with the space of all conjugates of $P$. Assume that all minimal $\theta$-split parabolic subgroups of $G$ are $H$-conjugate. Let $P,P'$ be two minimal parabolic subgroups of $G$ such that $HP$ and $HP'$ are open in $G$. By \cite[Proposition 9.2]{hel}, there are minimal $\theta$-split parabolic $F$-subgroups $Q,Q'$ containing $P,P'$ respectively. By Lemma \ref{conjugacy of minimal split parabolics}, $Q,Q'$ are $G$-conjugate, hence both correspond to points of $Ad_G(Q) \cong G/Q$. By the assumption, there is $h \in H$ such that $hQh^{-1} = Q'$. Replacing $P$ by $hPh^{-1}$, we may assume that $Q = Q'$. Let $A, A'$ be $\theta$-stable maximal $F$-split tori of $G$ contained in $P, P'$ respectively. By Lemma \ref{uniqueness of theta-split torus}, $A^- = A'^-$. By the conjugacy of maximal split tori in $Z_{H}(A^-)$, there is $k \in Z\low{H}(A^-)$ such that $kAk^{-1} = A'$. Replacing $P$ by $kPk^{-1}$, we may assume $A = A'$. The problem is now reduced to a problem on the root system $\Phi(A, G)$. Let $\pi^-: \Phi(A, G) \to \Phi(A^-, G)$ denote the natural projection.
Let $\Sigma, \Sigma'$ be the positive roots in $X^*(A)$ corresponding to $P,P'$, respectively. Since both $HP$ and $HP'$ are open in $G$, we have by Theorem \ref{classification of generic parabolics} that $A^-$ is a maximal $(\theta,F)$-split torus in $G$ and that $\pi^- (\Sigma) \backslash \{0\}$ and $\pi^- (\Sigma') \backslash \{0\}$ are collections of positive roots in $\Phi(A^-,G)$. Clearly, both are the positive roots defined by $\Phi(A^-,Q)$, and in particular they coincide. We claim that if $\alpha \in \Sigma$ and $\pi^-(\alpha) \ne 0$ then $\alpha \in \Sigma'$. Indeed, let $\alpha \in \Sigma$. If $\alpha \ne \theta(\alpha)$ then $\pi^-(\alpha) \in \Phi(A^-,Q)$. But if $\alpha \notin \Sigma'$ then $\alpha \in - \Sigma'$, which implies that $\pi^-(\alpha) \in - \Phi(A^-,Q)$, a contradiction to the fact that $\Phi(A^-,Q)$ is strictly contained in a half-space. By symmetry we have \[\Sigma \cap (\pi^-)^{-1}(\Phi(A^-,Q)) = \Sigma' \cap (\pi^-)^{-1}(\Phi(A^-,Q)).\] Let $\Sigma_H = \Sigma \cap Ker(\pi^-)$ and $\Sigma'_H = \Sigma' \cap Ker(\pi^-)$. We wish to prove that there is $h \in N_H(A) \cap Z_H(A^-)$ such that $h(\Sigma_H) = \Sigma'_{H}$. Then we would get $h(\Sigma) = \Sigma'$, since $h$ preserves the fibers of $\pi^-$. Consider the projection $\pi^+ : X^*(A) \otimes \RR \to X^*(A^+) \otimes \RR$. Then $\pi^+|_{Ker(\pi^-)}$ is an isomorphism, so it will be sufficient to prove that we can find $h \in N_H(A)$ such that $h(\pi^+(\Sigma_H)) = \pi^+(\Sigma'_H)$. We first claim that $\pi^+(\Sigma_H) \subseteq \Phi(A^+,Z_H(A^-))$. Indeed, if $\alpha \in \Sigma_H$ then $\mathfrak{g}_\alpha \subseteq Z_{\mathfrak{h}}(A^-)$ since $\mathfrak{u} \cap \mathfrak{s} = 0$. Next, we claim that $\pi^+$ maps $\Phi(A,Z_G(A^-))$ bijectively onto $\Phi(A^+,Z_H(A^-))$. Indeed, $\Phi(A,Z_G(A^-)) = \Phi(A,G) \cap Ker(\pi^-)$, so $\pi^+$ is injective on $\Phi(A,Z_G(A^-))$.
Furthermore, if $\alpha \in \Phi(A^+,Z_H(A^-))$ and $Y$ is a root vector for $\alpha$, then $Y$ is automatically a root vector for $A$ in $Z_G(A^-)$ corresponding to a root which lifts $\alpha$, so $\pi^+$ is a surjection onto $\Phi(A^+,Z_H(A^-))$. Finally, we claim that $\Sigma_H$ is a collection of positive roots in $\Phi(A,Z_G(A^-))$, and therefore $\pi^+(\Sigma_H)$ is a collection of positive roots in $\Phi(A^+,Z_H(A^-))$. To see this, note that $\Sigma_H$ is a subset of $\Phi(A,Z_G(A^-))$ which is strictly contained in a half-space, since $\Sigma$ is. Moreover, \[\Phi(A,Z_G(A^-)) \subseteq (\Sigma \cup -\Sigma) \cap Ker(\pi^-) = \Sigma_H \cup -\Sigma_H.\] This shows that $\Sigma_H$ is a collection of positive roots. By a similar argument, $\pi^+(\Sigma'_H)$ is a collection of positive roots in $\Phi(A^+,Z_H(A^-))$. Since $A^+$ is a maximal $F$-split torus in $Z_H(A^-)$, $\Phi(A^+,Z_H(A^-))$ is a (possibly non-reduced) root system, and its Weyl group acts transitively on the Weyl chambers. Since both $\pi^+(\Sigma_H)$ and $\pi^+(\Sigma'_H)$ are collections of positive roots in $\Phi(A^+,Z_H(A^-))$, there is a Weyl group element which maps one to the other. In particular, there exists an $h \in N_H(A^+) \cap Z_H(A^-)$ such that $h(\Sigma_H) = \Sigma'_H$. But then $h(\Sigma) = \Sigma'$, so $hPh^{-1} = P'$. It follows that the pair is p-stable. \end {proof} We now turn to the proof of implications (2) and (3). \begin{theorem} [Implication (3)] Every p-stable pair is t-stable. \end{theorem} \begin{proof} Let $(G,H,\theta)$ be a p-stable pair. Let $A, A'$ be two maximal $(\theta, F)$-split tori in $G$. Let $Q,Q'$ be minimal $\theta$-split parabolic $F$-subgroups of $G$ containing $A,A'$ respectively. As $(G, H, \theta)$ is p-stable, by Proposition \ref{p-stable iff weakly p-stable} there is $h \in H$ such that $hQh^{-1} = Q'$. By Lemma \ref{uniqueness of theta-split torus}, $hAh^{-1} = A'$.
Since by Proposition \ref{t-stable iff weakly t-stable} the $H$-conjugacy of all the maximal $(\theta,F)$-split tori is equivalent to t-stability, the pair is t-stable. \end{proof} \begin{theorem} [Implication (2)] Every p-stable pair is s-stable. \end{theorem} \begin{proof} Assume that $(G,H,\theta)$ is p-stable. We have to prove that it is s-stable. Let $r \in S_0$ be a $(\theta, F)$-split element which is the symmetrization of an element $g$. Let $A^{-}$ be the unique $(\theta, F)$-split torus such that $Z_G(r) = Z_G(A^{-})$. Let $P$ be a parabolic subgroup of $G$ corresponding to a choice of positive roots in $\Phi(A^{-}, G)$. As $P$ is self-normalizing, $G/P$ is isomorphic as a $G$-variety to the variety of all conjugates of $P$. Let $P' = g^{-1} P g$. We claim that both $P$ and $P'$ have open $H$-orbits in $G/P$. Indeed, by construction $P$ is $\theta$-split, as $P \cap \theta(P) = Z_G(A^{-})$ is a Levi subgroup. But \begin{align*} \theta(P') = \sigma(g) \theta(P) \theta(g) &= g^{-1} r \theta(P) r^{-1} g && (g \sigma(g) = r) \\ &= g^{-1} \theta(P) g && (r \in P \cap \theta(P)). \end{align*} Therefore, $\theta(P') \cap P' = g^{-1} (P \cap \theta(P)) g$ is a Levi subgroup of $G$, so $P'$ and $\theta(P')$ are opposite. We deduce that $HP'$ is open in $G$, so $P'$ has an open $H$-conjugacy class, and is in fact a $\theta$-split parabolic subgroup of $G$. Since $(G,H,\theta)$ is p-stable, by Proposition \ref{p-stable iff weakly p-stable} there is $h \in H$ such that $h P h^{-1} = P' = g^{-1} P g.$ It follows that $gh$ normalizes $P$. Since $P$ is self-normalizing, $gh \in P$. Since $r = s(g) = s(gh)$, we may assume that $g \in P$. Consider the semi-direct decomposition $P = M_P \cdot U_P$, where $M_P = Z_G(A^{-})$ is the Levi part of $P$ and $U_P$ is the unipotent radical. We can decompose $g$ as $g = u m$ where $u \in U_P$ and $m \in M_P$ (as the product is semi-direct, the order of the factors may be interchanged).
We get \[r = s(g) = g \sigma(g) = u(m \sigma(m)) \sigma(u).\] Both the left-hand side and the right-hand side are Bruhat decompositions of $r$, so by the uniqueness of the Bruhat decomposition we must have $m \sigma (m) = r$. But $m \in Z_G(A^{-}) = Z_G(r)$, so $r$ is a symmetrization of an element of its centralizer. By Proposition \ref{symmetrization in centralizer}, $g$ is stable. \end{proof} In some cases, we can prove that t-stability and s-stability together imply the p-stability of a pair. Yet we can prove this only in the case that $G$ is connected in the Zariski topology, split reductive and contains a $\theta$-split Borel subgroup. \begin {theorem} Let $(G,H,\theta)$ be a symmetric pair, and assume that $\textbf{G}$ is a split connected reductive algebraic group and that $G$ contains a $\theta$-split Borel subgroup. If $(G,H,\theta)$ is t-stable and s-stable, then it is also p-stable. \end {theorem} \begin{proof} First note that in this case the minimal $\theta$-split parabolic subgroups are precisely the $\theta$-split Borel subgroups of $G$. Thus, by Proposition \ref{p-stable iff weakly p-stable} it will be sufficient to prove that every two $\theta$-split Borel subgroups of $G$ are $H$-conjugate. Let $B, B'$ be $\theta$-split Borel subgroups. Let $A,A'$ be $\theta$-stable maximal $F$-split tori of $G$ contained in $B,B'$ respectively. Since the pair is t-stable we may assume that $A = A'$. By the transitivity of the action of the Weyl group on the Weyl chambers, there exists $w \in W_G(A)$ such that $w(\Phi(A,B)) = \Phi(A,B')$. But then, since both $B$ and $B'$ are $\theta$-split, \[-\theta(w)(\Phi(A,B)) = \theta(w)(\theta(\Phi(A,B))) =\theta(w(\Phi(A,B))) = \theta(\Phi(A,B')) = -\Phi(A,B')\] so \[\theta(w)(\Phi(A,B)) = \Phi(A,B') = w(\Phi(A,B))\] and since the action of the Weyl group on the Weyl chambers is free, we deduce that \[\theta(w) = w.\] Let $g \in N_G(A)$ be a representative of $w$.
Then $r := s(g) \in Z_G(A) = A$ since $G$ is split and connected. It follows that $s(g) \in A \cap S = A^-$. For every $y \in A^-$ we have $s(yg) = yg\theta(g)^{-1}y = yry = ry^2.$ We can choose $y \in A^-$ for which $Z_G(ry^2) = Z_G(A^-)$, since the set of $y$-s for which the last equality fails is a union of countably many closed sets with empty interior, which cannot cover $A^-$ by the Baire category theorem. Replacing $g$ with $yg$ we may assume that $Z_G(r) = Z_G(A^-)$. Since the pair is s-stable and $r$ is a $\theta$-split symmetrization, we have $r = s(z)$ for some $z \in Z_G(r) = Z_G(A^-)$. The equality $s(g) = s(z)$ implies that $z = gh$ for some $h \in H$, so replacing $B$ with $h^{-1}Bh$ we may assume that $g = z$, so $zBz^{-1} = B'$. It follows that $\Phi(A^-,B) = \Phi(A^-,B')$, since $B$ and $B'$ are conjugate by an element of $Z_G(A^-)$, which preserves the fibers of the map $\pi^-:\Phi(A,G) \to \Phi(A^-,G)$. Since $B$ is $\theta$-split we have $\Phi(A,B) = (\pi^-)^{-1}(\Phi(A^-,B))$, and similarly $\Phi(A,B') = (\pi^-)^{-1}(\Phi(A^-,B'))$. Since $\Phi(A^-,B) = \Phi(A^-,B')$, we deduce that $\Phi(A,B) = \Phi(A,B')$, so $B = B'$ and the pair is p-stable. \end{proof}
\subsection {Verification Methods for Stability} In the next section we will check the different kinds of stability for many interesting symmetric pairs. Therefore, we would like to describe the methods we use to verify each of these properties. We have the first and second invariants, which in principle allow us to check stability. In general this is done by considering the various subgroups (possibly non-split ones) which arise as the stabilizers of semi-simple symmetric elements and computing their cohomologies. For s-stability we have the following cohomological criterion:
\begin{theorem} \label{criterion for s-stability} A symmetric pair $(G,H,\theta)$ is s-stable if and only if for every maximal $(\theta,F)$-split torus $A\subseteq G$, \begin{equation} \ImH{\theta}{A}{Z\low{G}(A)} \cap \KerH{\theta}{Z\low{G}(A)}{G} = 1.
\end{equation} \end{theorem}
\begin{proof} In one direction, suppose that the pair is s-stable. If \[r \in \ImH{\theta}{A}{Z\low{G}(A)} \cap \KerH{\theta}{Z\low{G}(A)}{G},\] then, since $[r]$ is trivial in $H^1(\theta,G)$, $r$ is a symmetrization in $G$, and the s-stability of the pair implies that $[r]$ is the trivial cocycle of $Z_G(r)$. Replacing $r$ by $rs^2$ for $s \in A$ has no effect on the cohomology class of $r$ in $Z_G(A)$, but we can choose $s$ such that $Z\low{G}(s^2r) = Z\low{G}(A)$. Therefore, $[r]$ is the trivial class even in $H^1(\theta, Z_G(A))$, proving that \[\ImH{\theta}{A}{Z\low{G}(A)} \cap \KerH{\theta}{Z\low{G}(A)}{G} = 1.\] The other direction is easier. Let $r$ be a $(\theta,F)$-split element and let $A$ be a maximal $(\theta,F)$-split torus containing $r$. By the assumption $[r]$ is trivial in $H^1(\theta,Z_G(A))$, so as $Z_G(A) \subseteq Z_G(r)$, $[r]$ is trivial in $H^1(\theta, Z_G(r))$. \end{proof}
Next, consider the t-stability property. In order to give a cohomological criterion for it, we need a fact about the conjugacy of $(\theta, F)$-split tori.
\begin{proposition}[{\cite[Lemma 10.3]{hel}}] Let $(G, H, \theta)$ be a symmetric pair. All the $(\theta, F)$-split tori in $G$ are conjugate by elements of $G$. \end{proposition}
\begin{theorem} \label{conjugacy of split tori} Let $(G,H, \theta)$ be a symmetric pair, and let $A$ be a maximal $(\theta, F)$-split torus in $G$. The maximal $(\theta, F)$-split tori in $G$ are classified up to $H$-conjugacy by the set \[\ImH{\theta}{Z_G(A)}{N_G(A)} \cap \KerH{\theta}{N\low{G}(A)}{G}.\] In particular, the pair is t-stable if and only if \begin{equation} \label{eqn criterion for t-stability} \ImH{\theta}{Z_G(A)}{N_G(A)} \cap \KerH{\theta}{N\low{G}(A)}{G} = 1. \end{equation} \end{theorem}
\begin{proof} Set $X = \ImH{\theta}{Z_G(A)}{N_G(A)} \cap \KerH{\theta}{N\low{G}(A)}{G}$. Let $\mathcal{T}$ denote the set of tori in $G$ which are conjugate to a maximal $(\theta,F)$-split torus in $G$.
Let $A$ be any maximal $(\theta,F)$-split torus in $G$ and take it to be the base point of $\mathcal{T}$. By the preceding proposition, $G$ acts transitively on $\mathcal{T}$ by conjugation. We have a natural action of $\theta$ on $\mathcal{T}$ via $A \mapsto \theta(A)$. By Theorem \ref{descent lemma} we have \[\mathcal{T}^\theta / Ad_H \cong KH^1(\theta, N_G(A),G).\] Under this identification, the torus $A' = gAg^{-1}$ corresponds to the cocycle $g^{-1}\theta(g) \in Z^1(\theta,N_G(A))$. We wish to determine those $g$ for which $gAg^{-1}$ is maximal $(\theta,F)$-split. This is the case exactly when $\theta(gag^{-1}) = ga^{-1}g^{-1}$ for every $a \in A$. But $\theta(a) = a^{-1}$ for $a \in A$, so in this case $\theta(g) a^{-1} \theta(g)^{-1} = g a^{-1} g^{-1}$, which means that $g^{-1} \theta(g)$ commutes with $a$ for every $a \in A$. Thus, the cocycles corresponding to maximal $(\theta,F)$-split tori are exactly those which lie in $IH^1(\theta,Z_G(A),N_G(A))$. \end{proof}
Finally, we treat p-stability.
\begin{theorem} \label{criterion for p-stability} The $H$-conjugacy classes of minimal $\theta$-split parabolic subgroups of $G$ are in 1-1 correspondence with the set \[KH^1(\theta,Z_G(A),G)\] for a maximal $(\theta,F)$-split torus $A$. In particular, the pair $(G,H,\theta)$ is p-stable if and only if \[KH^1(\theta,Z_G(A),G) = 1.\] \end{theorem}
\begin{proof} Let $\mathcal{P}$ denote the set of minimal $\theta$-split parabolic subgroups of $G$. Let $P$ be a minimal $\theta$-split parabolic subgroup of $G$. Consider $G/P$ as the variety of conjugates of $P$. Let $\tilde{\theta}$ act on $G/P \times G/P$ by $\tilde{\theta}(P',P'') = (\theta(P''),\theta(P'))$. Let $\Delta_\theta = (G/P \times G/P)^{\tilde{\theta}}$. Let $\mathcal{O} \subseteq G/P \times G/P$ denote the unique open $G$-orbit, which consists of the pairs $(P',P'')$ of opposite parabolic subgroups.
By Proposition \ref{conjugacy of minimal split parabolics}, the minimal $\theta$-split parabolic subgroups are all conjugate to $P$. Let $\pi_1 : G/P \times G/P \to G/P$ be the projection onto the first factor. We deduce that $\mathcal{P} \cong \pi_1(\Delta_\theta \cap \mathcal{O})$. Since $\pi_1|_{\Delta_\theta}$ is injective and $H$-equivariant, we have \[\mathcal{P}/Ad_H \cong (\Delta_\theta \cap \mathcal{O}) /H \cong \mathcal{O}^{\tilde{\theta}}/H.\] By Theorem \ref{descent lemma} we get \[\mathcal{P}/Ad_H \cong \mathcal{O}^{\tilde{\theta}}/H \cong KH^1(\theta,Stb_{G}((P,\theta(P))),G)\] since we can choose the base-point of $\mathcal{O}$ to be $(P,\theta(P))$. But \[Stb_G((P,\theta(P))) = N_G(P) \cap N_G(\theta(P)) = P \cap \theta(P) = Z_G(A^-)\] where $A^-$ is a maximal $(\theta,F)$-split torus contained in $P$, by \cite[Lemma 4.6]{hel}. We deduce that \[\mathcal{P}/Ad_H \cong KH^1(\theta,Z_G(A^-),G).\] \end{proof}
\begin{remark} In fact, all the relations between the different stability types can easily be deduced from the cohomological criteria. We decided to give direct proofs of implications (2) and (3) because they are not much longer than the cohomological proofs but are much more explicit. \end{remark}
At this point, we are ready to prove the remaining implication between stabilities, namely that stability implies p-stability. Before we prove it, we need a lemma:
\begin{lemma} \label{semi-simple rep} Let $(G,H,\theta)$ be a symmetric pair, and let $x \in H^1(\theta,G)$. Then $x$ can be represented by a semi-simple element. \end{lemma}
\begin{proof} Let $r$ be a representative of $x$, and let $r = r_sr_u$ be a Jordan decomposition of it. Let $U$ be a unipotent $F$-group containing $r_u$ which centralizes $r_s$, e.g. the Zariski closure of the subgroup generated by $r_u$. Then, by \cite[Lemma 0.6]{hel} we have $r_u = y^{-1}\theta(y)$ for some $y \in U$. But then, as $y$ commutes with $r_s$, we get $x = [r] = [r_s r_u] = [y^{-1}r_s\theta(y)] = [r_s]$ in $H^1(\theta,G)$.
\end{proof}
\begin{theorem}[Implication (1)] \label{stable then p-stable} A stable symmetric pair is p-stable. \end{theorem}
\begin{proof} Let $(G,H,\theta)$ be a stable pair. In view of the cohomological criterion for p-stability (Theorem \ref{criterion for p-stability}), it will be sufficient to prove that, for a maximal $(\theta,F)$-split torus $A$, the natural mapping $H^1(\theta,Z\low{G}(A)) \to H^1(\theta, G)$ has trivial kernel. Let $[r] \in \KerH{\theta}{Z_G(A)}{G}$. By Lemma \ref{semi-simple rep}, applied with $Z_G(A)$ in place of $G$, we may assume that $r$ is semi-simple. We shall prove that there exists $s \in A$ such that \[Z_G(r s^{2}) \subseteq Z_G(A).\] Let $\textbf{T}$ be the torus topologically generated in the Zariski topology by $r$ and $A$. Then $\textbf{T}$ is defined over $F$. There is a finite field extension $E / F$ over which $\textbf{T}$ splits. Let $X^*(\textbf{T})(E)$ denote the set of $E$-rational characters of $\textbf{T}$. For every $s \in A$ consider the torus $\overline{\{r^l s^{2l}\}_{l \in \ZZ}}$. Since $A$ is $F$-split we have $X^*(\textbf{A})(E) \cong X^*(A)$ via restriction. The condition that $\textbf{A}(E) \not\subseteq \overline{\{r^{l} s^{2l}\}_{l \in \ZZ}}$ is equivalent to the condition that there exists $\xi \in X^*(\textbf{T})(E)$ such that $\xi(r s^2) = 1$ but $\xi|_A \ne 1$. The first equation can be written as \[\xi(s)^2 = \xi(r)^{-1}.\] For each rational character $\xi$ with $\xi|_A \ne 1$, the last equation cuts out a closed subset of $A$ (in the $t$-topology) with empty interior. As $A$ is a complete metric space, it follows by the Baire category theorem that there is $s \in A$ which avoids all these sets simultaneously. Then $[r] = [\delta_{s^{-1}}(r)] = [rs^2]$ is in $\KerH{\theta}{Z_G(A)}{G}$ and is semi-simple. Furthermore, the stability of the pair implies that $[rs^2]$ is trivial in $H^1(\theta,Z_G(rs^2))$, so it also vanishes in $H^1(\theta,Z_G(A))$. Since $[r] = [rs^2]$, we deduce that $[r]$ is trivial in $H^1(\theta,Z_G(A))$.
\end{proof}
Empirically, it seems that in the Archimedean case, at least if $G$ is connected, the converse also holds.
\begin{conjecture} If $(G,H,\theta)$ is a p-stable symmetric pair over $\RR$ and $G$ is connected in the real topology, then $(G,H,\theta)$ is stable. \end{conjecture}
\begin{remark} We hope that some special features of the cohomology of real pairs might help, such as the results in \cite{adams}. \end{remark}
We now illustrate the usage of the cohomological method by an example.
\begin{example} \label{example 1} Not every p-stable pair is stable. For example, let $p \equiv 3 \mod 4$ and consider the pair $(G,H,\theta)$ where $G$ is the group of quaternions of norm 1 in the quaternion algebra $\HH[p, -1] := \QQ_p[i,j]/(i^2 = p,\ j^2 = -1,\ ij = -ji)$, and $\theta(x) = ixi^{-1}= \frac{ixi}{p}.$ This pair is clearly p-, s- and t-stable, because there are no proper parabolic subgroups, non-trivial split tori, or non-trivial split elements at all. However, the pair is not stable, as we shall prove in the next theorem. \end{example}
\begin{theorem} The pair $(G,H,\theta)$ in Example \ref{example 1} is unstable for every $p \equiv 3 \mod 4$. \end{theorem}
\begin{proof} Consider the maximal $\theta$-split torus $A = \{x \in G : x = a + bk\}$ where $k := ij$. We claim that $\KerH{\theta}{A}{G} \ne 1$, and therefore by Theorem \ref{comological criterion for stability} the pair is not stable, since it is straightforward to check that $Z_G(A) = A$. Let $\HH = \HH[p, -1]$ denote the quaternion algebra; then we have an exact sequence \[\SES{G}{\HH^\times}{\QQ_p^\times},\] where the right arrow is the reduced norm $N_{\HH/\QQ_p}(x + iy + jz + kw) := x^2 + z^2 - p(y^2 + w^2)$. Moreover, we have an exact sequence \[\SES{A}{\QQ_p[k]^\times}{B}\] where $B$ is the group of norms from $\QQ_p[k]$, i.e. the elements of the form $a^2 - pb^2$; the units among these are exactly the units whose least significant digit is a square mod $p$.
The two short exact sequences fit into a commutative diagram of sequences: \[\xymatrix{ 1 \ar[r]& A \ar[r] \ar[d] & \QQ_p[k]^\times \ar[r] \ar[d] & B \ar[d] \ar[r] & 1 \\ 1\ar[r] & G \ar[r] & \HH^\times \ar[r] & \QQ_p^\times \ar[r] & 1 \\ }\] Note that $(\HH^\times)^\theta = \QQ_p[i]^\times$. Using the isomorphism in Theorem \ref{descent lemma} for both horizontal lines, where we let the middle groups act on the groups on the right, we obtain a commutative square with injective horizontal arrows: \[\xymatrix{ B/(\QQ_p^\times)^2 \ar@{^{(}->}[r] \ar[d] & H^1(\theta, A) \ar[d]\\ \QQ_p^\times / N_{\HH/\QQ_p}(\QQ_p[i]^\times) \ar@{^{(}->}[r] & H^1(\theta, G) \\ }\] Namely, here the active groups are $\QQ_p[k]^\times$ and $\HH^\times$ and they act on $B$ and $\QQ_p^\times$ respectively. The upper horizontal map is an isomorphism, since by Hilbert's Theorem 90 \[H^1(\theta, \QQ_p[k]^\times) = 1.\] It is therefore sufficient to show that $B/(\QQ_p^\times)^2$ is non-trivial and that the map \[B/(\QQ_p^\times)^2 \to \QQ_p^\times / N(\QQ_p[i]^\times)\] is trivial. Indeed, by the diagram above and the fact that the upper arrow is an isomorphism, $KH^1(\theta,A,G)$ is isomorphic to the kernel of the map $B/(\QQ_p^\times)^2 \to \QQ_p^\times / N(\QQ_p[i]^\times)$ which appears in the diagram. But it is immediate that \[B/(\QQ_p^\times)^2 \cong \ZZ / 2\ZZ\] and that it is generated by $-p$. Since $-p = N(i) \in N(\QQ_p[i]^\times)$, the mapping \[B/(\QQ_p^\times)^2 \to \QQ_p^\times / N(\QQ_p[i]^\times)\] is trivial, and the pair is unstable. \end{proof}
\section {Calculations} In this section we always assume that $F = \RR$ or that $F$ is non-Archimedean. Since all the complex pairs are stable, these are the only interesting cases. We apply the methods developed in the article to check the stability of many interesting pairs. For the pairs we consider, we check s-stability, p-stability and stability.
We do not mention t-stability as it is unrelated to the Gelfand property, as far as we know. Note, however, that p-stable pairs are automatically t-stable. During the calculations, we will make extensive use of the different cohomological criteria for the various kinds of stability, sometimes implicitly. Before we treat particular pairs, we shall present some notation and results which we shall use in many of the examples. \subsection {Preliminaries} \subsubsection {Linear Algebra and Eigenvalues} To each closed point $\lambda \in spec(F[x])$ there is a unique monic irreducible polynomial $m_\lambda(x)$ generating the maximal ideal associated with the point $\lambda$. Let $V$ be a linear space over a field $F$. Let $r$ be a semi-simple automorphism of $V$. There is a decomposition \[V = \oplus_{\lambda \in spec(F[x])} V_\lambda(r)\] where $V_\lambda(r) = \{v \in V : m_\lambda(r)v = 0\}$. We think of $\lambda$ as a point of the algebraic closure $\bar{F}$, well defined only up to the action of $\Gamma_{\bar{F}/F}$. The space $V_\lambda(r)$ will be referred to as the primary space with primary value $\lambda$. The decomposition $V = \oplus_{\lambda \in spec(F[x])} V_\lambda(r)$ will be referred to as the primary decomposition of $V$ with respect to $r$. For $\lambda \in spec(F[x])$, let $F_\lambda \cong F[x] / (m_{\lambda}(x))$ denote the residue field of $spec(F[x])$ at $\lambda$. If we consider $\lambda$ as an element of $\bar{F}$ then this field is just $F[\lambda]$. The spaces $V_\lambda(r)$ come endowed with a canonical structure of an $F_\lambda$-vector space, given by $p(x) v := p(r) v$ for $p$ a polynomial over $F$, considered as an element of $F_\lambda$. Under this identification, $r$ acts on $V_\lambda(r)$ as multiplication by $\lambda$ when we think of $\lambda$ as an element of $\bar{F}$. If $h \in GL(V)$ is an element such that $hr = r^{-1} h$, then $h|_{V_\lambda(r)}$ gives rise to an isomorphism $V_\lambda(r) \to V_{\lambda^{-1}}(r)$.
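To fix ideas, here is a small elementary example (included only as an illustration).
\begin{example}
Take $F = \QQ$, $V = \QQ^2$, and let $r$ act by the matrix $\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$, whose minimal polynomial $x^2 + 1$ is irreducible over $\QQ$. There is a single primary value $\lambda = i$ (well defined up to the action of $\Gamma_{\bar{\QQ}/\QQ}$, which interchanges $\pm i$), and $V = V_\lambda(r)$ is a one-dimensional vector space over $F_\lambda \cong \QQ[x]/(x^2 + 1) \cong \QQ[i]$, on which $r$ acts as multiplication by $i$. The element $h = \left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$ satisfies $hr = r^{-1}h$; here $\lambda^{-1} = -i$ corresponds to the same point of $spec(\QQ[x])$ as $\lambda$, and $h$ restricts to a semi-linear automorphism of $V_\lambda(r)$ over $F_\lambda$.
\end{example}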
There are three possibilities for the interaction of $h$ with the primary subspaces: \begin{itemize} \item $\lambda$ and $\lambda^{-1}$ are different points of $spec(F[x])$. In this case $h|_{V_\lambda(r)} : V_\lambda(r) \stackrel{\approx}{\rightarrow} V_{\lambda^{-1}}(r)$. The set of primary values of this type will be called primary values of type $A$. \item $\lambda$ and $\lambda^{-1}$ are different as elements of $\bar{F}$, but correspond to the same point of $spec(F[x])$. In this case $V_\lambda(r) = V_{\lambda^{-1}}(r)$ and $h$ is an $F$-linear automorphism of it. We can consider $Ad_h$ as an automorphism of $F_\lambda \cong F[r]$ (acting by conjugating $r$), and then $h$ is a semi-linear automorphism of $V_\lambda(r)$ over $F_\lambda$, corresponding to the non-trivial automorphism of $F[\lambda]$ over $F[\lambda + \lambda^{-1}]$. The set of primary values of this type will be called primary values of type $B$. \item $\lambda = \pm 1$. In this case $h|_{V_\lambda(r)}$ commutes with $r|_{V_\lambda(r)}$. The set of primary values of this type will be called primary values of type $C$. \end{itemize} We shall denote by $\mathcal{A}(r), \mathcal{B}(r), \mathcal{C}(r)$ the sets of primary values of type $A,B,C$ of $r$ respectively. The set $\mathcal{A}(r)$ consists of pairs of the form $(\lambda,\lambda^{-1})$. We shall denote by $\mathcal{A}^+(r)$ any choice of an element from each pair, and by $U_\lambda(r)$ the linear space $V_\lambda(r) + V_{\lambda^{-1}}(r)$. \begin{lemma} \label{dimension formula} Let $r \in GL(V)$ and let $h \in GL(V)$ such that $h^2 = Id_V$ and $hrh^{-1} = r^{-1}$.
Then \[ dim(V_1(h)) =\frac{1}{2} \left(\sum_{\lambda \in \mathcal{A}(r) \cup \mathcal{B}(r)} dim(V_\lambda(r)) \right) + dim(V_1(r) \cap V_1(h)) + dim(V_{-1}(r) \cap V_{1}(h)) \] and \[ dim(V_1(rh)) =\frac{1}{2} \left(\sum_{\lambda \in \mathcal{A}(r) \cup \mathcal{B}(r)}dim(V_\lambda(r)) \right) + dim(V_1(r) \cap V_1(h)) + dim(V_{-1}(r) \cap V_{-1}(h)) \] \end{lemma} \begin{proof} Since for each primary value $\lambda$ of $r$ the space $V_\lambda(r) + V_{\lambda^{-1}}(r)$ is $rh$-invariant and $h$-invariant, it is enough to prove the statement of the lemma for each such subspace separately. If $\lambda$ is of type $A$, then $h$ and $rh$ map $V_\lambda(r)$ isomorphically into $V_{\lambda^{-1}}(r)$ and vice versa. Therefore, the maps $v \mapsto v + h(v)$ and $v \mapsto v + rh(v)$ give linear isomorphisms of $V_\lambda(r)$ with $(V_\lambda(r) + V_{\lambda^{-1}}(r)) \cap V_1(h)$ and $(V_\lambda(r) + V_{\lambda^{-1}}(r)) \cap V_1(rh)$ respectively. It follows that \[dim((V_\lambda(r) + V_{\lambda^{-1}}(r)) \cap V_1(h)) = dim(V_\lambda(r)) = \frac{1}{2}dim(V_\lambda(r) + V_{\lambda^{-1}}(r)),\] so the formula is correct in that case. Next assume that $\lambda$ is of type $B$. Then $V_\lambda(r) = V_{\lambda^{-1}}(r)$ and $V_\lambda(r)$ has a structure of an $F_\lambda$-vector space, with $h$ being a semi-linear automorphism, and the same is true for $rh$. It follows from Hilbert's Theorem 90 that $(V_1(h) \cap V_\lambda(r)) \otimes_{F[r + r^{-1}]} F[r] \cong V_\lambda(r)$, and since $F[r] / F[r + r^{-1}]$ is a quadratic extension it follows that \[dim(V_1(h) \cap V_\lambda(r)) = \frac{1}{2}dim(V_\lambda(r)).\] The case where $\lambda$ is of type $C$ is immediate: if $\lambda = 1$ then $rh = h$ on $V_\lambda(r)$, and if $\lambda = -1$ then $rh = -h$ on $V_\lambda(r)$. \end{proof} \subsubsection {Quadratic Forms} Here we list some results about quadratic forms, necessary for the analysis of the stability of symmetric pairs of orthogonal and unitary groups.
Let $V$ be a linear space over $F$. Let $Quad(V)$ denote the set of symmetric bilinear forms $B : V \times V \to F$. We identify $B$ with its associated quadratic form $v \mapsto B(v,v)$ and write it simply as $B(v)$. If $B' \in Quad(U)$ we say that the pair $(V,B)$ is equivalent to $(U,B')$ if there is a linear isomorphism $A: V \to U$ such that $B(u,v) = B'(Au,Av)$. Let $\mathcal{QF}_n(F)$ denote the set of equivalence classes of pairs $(V,B)$ of a space with a quadratic form on it such that $dim(V) = n$. For every quadratic form $B \in Quad(V)$, we let $[B]$ denote the class of $(V,B)$ in $\mathcal{QF}_{dim(V)}(F)$. There is a natural direct sum map $\oplus : \mathcal{QF}_n(F) \times \mathcal{QF}_m(F) \to \mathcal{QF}_{n + m}(F)$ given by orthogonal direct sum of forms. Let $\mathcal{QF}(F)$ be the set of equivalence classes of quadratic forms of any rank, i.e. the union of all the $\mathcal{QF}_n(F)$-s. If $B$ is a quadratic form on $V$ and $g: V \to V$ is a linear map such that $B(gv,u) = B(v,gu)$ for every $v,u \in V$, then we denote by $B_g$ the quadratic form $B_g(u,v) = B(gu,v)$. There is a natural map \[Q_n : (F^\times / (F^\times)^2)^n \to \mathcal{QF}_n(F)\] given by $Q_n([a_1,...,a_n]) = (F^n, \sum_{i = 1}^n a_i x_i^2)$. Diagonalization of quadratic forms shows that this map is onto. We let $Q : \coprod_n (F^\times / (F^\times)^2)^n \to \mathcal{QF}(F)$ be the union of these maps. Note that $Q$ is not injective, as clearly several different sequences might correspond to the same form. Let $\{a,b\}$ denote the Hilbert symbol of $a$ and $b$, defined by \[\{a,b\} = \begin{cases} 1, & \exists x,y \in F : ax^2 + by^2 = 1 \\ -1, & \text{otherwise} \end{cases}.\] We have several invariants of quadratic forms. For $B \in Quad(V)$ let: \begin{itemize} \item $det(B) \in F^\times / (F^\times)^2$ be the determinant of $B$, represented as a matrix in some basis of $V$. \item $rank(B) = dim(V)$. \item $H(B)$ be the Hasse invariant of $B$, i.e.
the product of all the Hilbert symbols of pairs of diagonal elements of $B$ in some orthogonal basis. \item $\mu(B)$ the maximal dimension of a subspace $U \subseteq V$ such that $B|_U = 0$. \end{itemize} One can show that all these are preserved by equivalence, and thus induce maps \[rank : \mathcal{QF}(F) \to \NN\] \[det : \mathcal{QF}(F) \to F^\times / (F^\times)^2\] \[H : \mathcal{QF}(F) \to \{\pm 1\}\] \[\mu : \mathcal{QF}(F) \to \NN\] Define similarly on sequences of elements of $F^\times / (F^\times)^2$ \[rank([a_1,...,a_n]) = n\] \[det([a_1,...,a_n]) = \prod_{i = 1}^n a_i\] \[H([a_1,...,a_n]) = \prod_{1 \le i < j \le n}\{a_i,a_j\}\] so that for every sequence $\ell$ of elements of $F^\times / (F^\times)^2$ we have $rank(\ell) = rank(Q(\ell))$, $det(\ell) = det(Q(\ell))$ and $H(\ell) = H(Q(\ell))$. For non-Archimedean fields, these invariants turn out to form a complete set. \begin{theorem}[{\cite[Theorem 2.3.7]{ser2}}] \label{classification of quadratic forms over local fields} A quadratic form over a local non-Archimedean field of characteristic 0 is completely determined up to equivalence by its rank, determinant and Hasse invariant. \end{theorem} In the case $F = \RR$ this fails, but Sylvester's theorem states that the rank and signature form a complete set of invariants for quadratic forms. The following classical result will be useful in some cohomology computations: \begin{theorem}[Witt Cancellation Theorem] \label{Witt cancellation theorem} Let $B,B',C$ be three quadratic forms over $F$. If $B \oplus C \equiv B' \oplus C$ then $B \equiv B'$. In other words, $\cdot \oplus C : \mathcal{QF}(F) \to \mathcal{QF}(F)$ is injective. \end{theorem} We shall say that $B \le B'$ if there is a quadratic form $C$ such that $B' \equiv B \oplus C$. In this case, by the Witt Cancellation Theorem we can unambiguously define the difference $[B'] - [B]$ to be $[C]$. Let $V$ be a linear space over $F$ and $V^*$ the dual space.
We associate with $V$ the \textbf{hyperbolic form} $\mathcal{H}_V$ on $V \oplus V^*$ given by $\mathcal{H}_V(v + \phi) = \phi(v)$. Its class depends only on $n = dim(V)$ and we denote $\mathcal{H}_n = [\mathcal{H}_{F^n}]$. Clearly $\mathcal{H}_n = \oplus_{i = 1}^n \mathcal{H}_1$. We say that a scalar $x \in F$ is \textbf{represented by $B$} if there is $v \in V$ such that $B(v) = x$. As $B(bv) = b^2 B(v)$, the set of scalars represented by $B$ is a union of $(F^\times)^2$-orbits. Let $Rep(B)$ denote the set of $(F^\times)^2$-orbits of elements $x \in F$ represented by $B$. As $Rep(B)$ depends only on the equivalence class of $B$, it can be regarded as a function \[Rep: \mathcal{QF}(F) \to SubSets(F / (F^\times)^2).\] Clearly $Rep(\mathcal{H}_V) = F / (F^\times)^2$. The following result follows from the classical Witt Decomposition Theorem. We give the proof nevertheless. \begin{proposition} \label{the hyperbolic lemma} Let $B$ be a non-degenerate quadratic form and $k$ the maximal integer such that $\mathcal{H}_k \le B$. Then $k = \mu(B)$. \end{proposition} \begin{proof} On one hand $\mu(\mathcal{H}_k) = k$, so if $\mathcal{H}_k \le B$ then \[k = \mu(\mathcal{H}_k) \le \mu(B).\] On the other hand, if $\mu(B) = k$ there are linearly independent vectors $v_1,...,v_k \in V$ such that $B(v_i,v_j) = 0$ for all $1 \le i,j \le k$. Since $B$ is non-degenerate, there are vectors $u_i \in V$ such that $B(u_i,v_j) = \delta_{i,j}$. Let $u_i' = u_i - \sum_j\frac{B(u_i,u_j)}{2} v_j$. Then $B(u_i',u_j') = 0$ and $B(u_i',v_j) = \delta_{i,j}$, so $B|_{span\{u_i',v_j\}_{i,j = 1}^k} \equiv \mathcal{H}_k$. This shows that $\mathcal{H}_k \le B$. \end{proof} Let $E/F$ be a finite extension. Let $V$ be a linear space over $F$ and $V(E)$ the extension of scalars from $F$ to $E$ of $V$. We have a map $Quad(V) \to Quad(V(E))$ given by extension of scalars. Namely, $B_E$ is the unique symmetric $E$-bilinear form on $V(E)$ which coincides with $B$ on $V$. The image is clearly the set of forms $B'$ on $V(E)$ for which $B'(V,V) \subseteq F$.
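As a small illustration of how the invariants above determine a form over a non-Archimedean field (an elementary computation, included only as an illustration):
\begin{example}
For any $a \in F^\times$, the form $Q([a,-a])$, i.e. $ax^2 - ay^2$ on $F^2$, has rank $2$, determinant $-a^2 \equiv -1$ in $F^\times / (F^\times)^2$, and Hasse invariant $\{a,-a\} = 1$. These are the same as the invariants of $Q([1,-1]) \equiv \mathcal{H}_1$, so over a non-Archimedean local field Theorem \ref{classification of quadratic forms over local fields} gives $Q([a,-a]) = \mathcal{H}_1$ for every $a$. In particular, this illustrates the non-injectivity of $Q$.
\end{example}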
Now let $U$ be a linear space over $E$ and let $U|_F$ denote the restriction of scalars to $F$. There is a natural map $Tr_{E/F}: Quad(U) \to Quad(U|_F)$ given by \[Tr_{E/F}(B)(u,v) := Tr_{E/F}(B(u,v)).\] \begin{proposition} \label{Im(Tr)} The mapping $Tr_{E/F}$ is injective. Its image is the set of those forms $B: U \times U \to F$ for which $B(\lambda u,v) = B(u,\lambda v)$ for each $\lambda \in E$. \end{proposition} \begin{proof} If $B \ne 0$ then there exist $u, v \in U$ such that $B(u,v) = c \ne 0$. But then $Tr_{E/F}(B)(u,c^{-1}v) = Tr_{E/F}(1) = [E:F] \ne 0$ since $char(F) = 0$, so $Tr_{E/F}(B) \ne 0$. Thus, $Ker(Tr_{E/F}) = 0$ and therefore $Tr_{E/F}$ is injective. For the description of the image, it is clear that $Tr_{E/F}(B)$ has the indicated property. Conversely, let $B$ be an $F$-quadratic form on $U$ for which $B(\lambda u,v) = B(u,\lambda v)$ for every $\lambda \in E$. Let $x_1,...,x_k$ be a basis of $E$ over $F$ and let $x_1^*,...,x_k^*$ be a dual basis with respect to the trace form, i.e. $Tr_{E/F}(x_i x_j^*) = \delta_{i,j}$. Define the (a priori only $F$-bilinear) form \[\tilde{B}(u,v) = \sum_i x_i^* B(u,x_i v).\] Then $\tilde{B}$ is in fact symmetric, $E$-bilinear and $Tr_{E/F}(\tilde{B}) = B$. The symmetry of $\tilde{B}$ follows easily from the assumption that $B(\lambda u,v) = B(u,\lambda v)$ for $\lambda \in E$.
To see that $\tilde{B}$ is $E$-bilinear, we use the identity \[y = \sum_i x_i^* Tr_{E/F}(y x_i)\] and compute:
\begin{align*}
\tilde{B}(u,\lambda v) &= \sum_i x_i^* B(u,\lambda x_i v) = \sum_i x_i^* B\Big(u, \sum_j Tr_{E/F}(\lambda x_i x_j^*) x_j v\Big) \\
&= \sum_{i,j} x_i^* Tr_{E/F}(\lambda x_i x_j^*) B(u, x_j v) = \sum_i x_i^* Tr_{E/F}\Big(\Big(\sum_j \lambda x_j^* B(u, x_j v)\Big) \cdot x_i\Big) \\
&= \lambda \sum_j x_j^* B(u,x_j v) = \lambda \tilde{B}(u,v).
\end{align*}
Finally, the fact that $Tr_{E/F}(\tilde{B}) = B$ follows similarly:
\begin{align*}
Tr_{E/F}(\tilde{B}(u, v)) &= Tr_{E/F}\Big(\sum_i x_i^* B(u, x_i v)\Big) = \sum_i B(u,x_i v) Tr_{E/F}(x_i^*) \\
&= B\Big(u,\Big(\sum_i Tr_{E/F}(x_i^* \cdot 1)x_i\Big) \cdot v\Big) = B(u,v).
\end{align*}
This shows that $B$ is in the image of $Tr_{E/F}$. \end{proof} \subsubsection{Hermitian Forms} Let $E/F$ be a quadratic extension. Let $c : E \to E$ be the unique non-trivial automorphism over $F$. Let $U$ be a vector space over $E$. An $F$-bilinear form $U \times U \to E$ is called \textbf{Hermitian} if $B(av,bu) = ac(b)B(v,u)$ and $B(u,v) = c(B(v,u))$ for all $u,v \in U$ and $a,b \in E$. The space of all such forms will be denoted by $Her_F(U)$, to indicate the dependence on both $F$ and $E$. $GL_E(U)$ acts on $Her_F(U)$ via $g^*B(u,v) = B(g(u),g(v))$. Two Hermitian forms $B,B'$ are called \textbf{equivalent} if there is $g \in GL_E(U)$ such that $g^*B = B'$. As in the case of quadratic forms, we denote by $rank(B) \in \NN$ the dimension of $U$ over $E$ and by $det(B)$ the determinant of a matrix representing $B$. $det(B)$ is well defined as an element of $F^\times / N_{E/F}(E^\times)$. Let $\mathcal{HF}(E/F)$ denote the set of equivalence classes of $E$-Hermitian forms over $F$. The classification of Hermitian forms for $\CC / \RR$ is identical to that of quadratic forms: the equivalence class of a Hermitian form over $\CC$ is completely determined by its rank and signature.
Over a non-Archimedean local field we have: \begin{theorem}[{\cite[Theorem 3.1]{jac}}] \label{classification of Hermitian forms} Hermitian forms over a local non-Archimedean field of characteristic $\ne 2$ are classified up to equivalence by their rank and determinant. That is, $\mathcal{HF}(E/F) \stackrel{rank,det}{\rightarrow} \NN \times F^\times / N_{E/F}(E^\times)$ is a bijection. \end{theorem} If $V$ is a vector space over $F$ and $B$ is a quadratic form on $V$, let $B_{E,c}(x,y) := B_E(x,c(y))$. It is a Hermitian form on $V(E)$. This gives a map \[(\cdot)_{E,c}:Quad(V) \to Her_F(V(E)).\] The image consists of the forms for which $c(B(c(x),c(y))) = B(x,y)$. Indeed, it is clear that every Hermitian form with this property is of the form $B'_{E,c}$ for some quadratic form $B'$. If $U$ is a vector space over $E$ we have a map $Tr_{E/F} : Her_F(U) \to Quad(U|_F)$ defined by \[Tr_{E/F}(B)(u,v) := Tr_{E/F}(B(u,v)).\] As in the case of quadratic forms, this map is injective and the image consists of those $B$ for which $B(\lambda u,v) = B(u,c(\lambda)v)$. Indeed, the injectivity of $Tr_{E/F}$ can be deduced as in the proof of Proposition \ref{Im(Tr)}. Moreover, if $B = Tr_{E/F}(\tilde{B})$ then clearly for each $\lambda \in E$ we have $B(\lambda u,v) = B(u,c(\lambda)v)$. Conversely, if $B(\lambda u,v) = B(u,c(\lambda)v)$ and $E = F[\sqrt{D}]$, then the form $\tilde{B}(u,v) = \frac{1}{2}\left(B(u,v) - \frac{1}{\sqrt{D}}B(u,\sqrt{D}v)\right)$ is Hermitian and $Tr_{E/F}(\tilde{B}(u,v)) = B(u,v)$. \subsection {Some Cohomology Computations} \label{some cohomology computations} Here we shall calculate the different cohomology sets that we need for the stability calculations. \begin{proposition} \label{cohomology of flip} Let $G$ be a group and let $\theta$ be the involution of $G \times G$ given by $\theta(g,h) = (h,g)$. Then $H^1(\theta, G \times G) = 1$. \end{proposition} \begin{proof} Elements of $Z^1(\theta, G \times G)$ are of the form $(g,g^{-1})$ for $g \in G$.
But \[(g,g^{-1}) = (g^{-1},e_G)^{-1} \theta((g^{-1},e_G)) \in B^1(\theta,G \times G).\] \end{proof} \begin{proposition} \label{cohomology of conjugacy} Let $G$ be a group, and $h \in G$ an element such that $h^2 \in Z(G)$. Then the mapping $r \mapsto rh$ gives a bijection \[H^1(Ad_h,G) \cong \{h' \in G : h'^2 = h^2\} / \{G\text{-conjugacy}\}.\] The neutral element in the cohomology set corresponds under this 1-1 correspondence to the conjugacy class of $h$. \end{proposition} \begin{proof} This mapping is clearly $G$-equivariant. Indeed, if $r$ represents a cocycle then \[\delta_g(r) h = g^{-1}r(hgh^{-1}) h = g^{-1} (rh) g = Ad_{g^{-1}}(rh).\] Moreover, if $hrh^{-1} = r^{-1}$ then $(rh)^2 = rr^{-1} h^2 = h^2$, so cocycles are mapped to elements of $\{h' \in G : h'^2 = h^2\}$. This map is 1-1 and onto as it has an inverse given by $h' \mapsto h' h^{-1}$. \end{proof} \begin{proposition} Let $G$ be a commutative group and $\theta(x) = x^{-1}$. Then $H^1(\theta,G) \cong G / G^2$. In particular, if $A \subseteq G$ is a $\theta$-split torus then $H^1(\theta,A) \cong A / A^2$. \end{proposition} \begin{proof} By definition, for the pair $(G,H,\theta)$ we have $S = G$ while $\delta_g(x) = g^{-2} x$, so \[S / \delta_G \cong G / G^2.\] \end{proof} \begin{proposition} Let $h \in GL(V)$ be an element of order 2. Then, as pointed sets, \[H^1(Ad_h,GL(V)) \cong \left(\{0,...,dim(V)\}, dim(V_1(h)) \right).\] \end{proposition} \begin{proof} By Proposition \ref{cohomology of conjugacy}, $H^1(Ad_h,GL(V))$ is the set of linear involutions up to conjugacy. These are classified by the dimension of their $1$-eigenspace. The base point of the cohomology set corresponds to $h$, which has a $dim(V_1(h))$-dimensional $1$-eigenspace. \end{proof} \begin{proposition} Let $h \in GL(V)$ be an element of order 2. Let $A = \{x \in \{0,...,dim(V)\} : x \equiv dim(V_1(h)) \mod{2}\}$.
Then: \[H^1(Ad_h,SL(V)) \cong (A, dim(V_1(h))).\] \end{proposition} \begin{proof} Consider the exact sequence \[\SES{SL(V)}{GL(V)}{F^\times}.\] Clearly \[IH^1(Ad_h,SL(V),GL(V)) \cong \{h' \in GL(V) : det(h') = det(h)\} / Ad_{GL(V)} \cong A.\] On the other hand, \[KH^1(Ad_h,SL(V),GL(V)) \cong F^\times / det(Z_{GL(V)}(h)).\] By a twisting argument we see that the fiber over $h' \in \{h' \in GL(V) : det(h') = det(h)\}$ is exactly $F^\times / det(Z_{GL(V)}(h'))$. But $det(Z_{GL(V)}(h')) = F^\times$; for example, $GL(V_1(h'))$ and $GL(V_{-1}(h'))$ are subgroups of $Z_{GL(V)}(h')$ and at least one of them is $GL_k(F)$ for $k > 0$, hence contains elements of arbitrary determinant. \end{proof} \begin{proposition} \label{H^1(Ad_h,O(B))} Let $B$ be a quadratic form on a linear space $V$ and let $h \in O(B)$ be an element of order $2$. Then \[H^1(Ad_h,O(B)) \cong (\{B' \in \mathcal{QF}(F) : \exists C, B' \oplus C \equiv B\}, B|_{V_1(h)}).\] \end{proposition} \begin{proof} As usual, $H^1(Ad_h,O(B))$ can be identified with the set of conjugacy classes of elements of order 2 in $O(B)$. Every such element $h'$ corresponds uniquely to a decomposition \[B = B|_{V_1(h')} \oplus B|_{V_{-1}(h')}.\] By the Witt Cancellation Theorem such a decomposition is determined by $B|_{V_1(h')}$. Thus, the map $[r] \mapsto B|_{V_1(rh)}$ gives the desired bijection. In particular, the base point of $H^1(Ad_h,O(B))$ is mapped to $B|_{V_1(h)}$. \end{proof} \begin{proposition} \label{H^1(Ad_h,U(B))} Let $B$ be a Hermitian form for $E/F$. Let $U(B)$ be the group of unitary transformations with respect to $B$ and $h \in U(B)$ an element of order 2. Then \[H^1(Ad_h,U(B)) \cong (\{B' \in \mathcal{HF}(E/F) : \exists C, B' \oplus C \equiv B\}, B|_{V_1(h)}).\] \end{proposition} \begin{proof} The proof is identical to the case of quadratic forms. We have to check that the statement of the Witt Cancellation Theorem holds also for Hermitian forms, but for the fields under consideration this follows directly from the classification of Hermitian forms.
\end{proof} \begin{proposition} \label{H^1(Ad_h,Sp(w))} Let $\omega$ be a symplectic form on $V$, and $h \in Sp(\omega)$ be a symplectic automorphism of order 2 of $V$. Then $H^1(Ad_h,Sp(\omega)) \cong (\{0,2,4,...,dim(V)\}, dim(V_1(h)))$. \end{proposition} \begin{proof} The image of the map $H^1(Ad_h,Sp(\omega)) \to H^1(Ad_h,GL(V)) \cong \{0,1,...,dim(V)\}$ consists of the dimensions $dim(V_1(h'))$ for symplectic involutions $h'$. If $h' \in Sp(\omega)$ is an involution then $\omega|_{V_1(h')}$ is non-degenerate, so $dim(V_1(h')) = 2k$. Conversely, for every even $0 \le k \le dim(V)$ there is a subspace $U$ of $V$ of dimension $k$ such that $\omega|_{U}$ is non-degenerate, and then $h' = Id_U \oplus (-Id)|_{U^\bot}$ maps to $k$ by this map. Thus, the image of the map $H^1(Ad_h,Sp(\omega)) \to \{0,1,...,dim(V)\}$ is the set of even numbers. The fiber of each such number is, by Theorem \ref{descent lemma} applied to the action of $h$ on symplectic forms, the set of orbits of $GL(V_1(h)) \times GL(V_{-1}(h))$ in the space of $h$-invariant symplectic forms on $V$. It is straightforward to see that there is only one such orbit, hence the proposition. \end{proof} \begin{proposition} \label{H^1(c,G(E))} Let $E/F$ be a quadratic extension. Let $c : E \to E$ denote the conjugation of $E$ over $F$. Then the following holds: \begin{eqnarray} &H^1(c,GL_n(E)) = 1 \\ &H^1(c,SL_n(E)) = 1 \\ &H^1(c,Sp_{2n}(E)) = 1 \\ &H^1(c,O_E(B_E)) \cong (\{B' \in \mathcal{QF}(F) : B'_E \equiv B_E\}, B) \\ &H^1(c,U(B_{E,c})) = (\{B' \in \mathcal{QF}(F) : B'_{E,c} \equiv B_{E,c}\}, B) \end{eqnarray} \end{proposition} \begin{proof} The first equality is Hilbert's Theorem 90. All the others follow from the first and Theorem \ref{descent lemma}, by considering the action of $GL(V(E))$ on $E^\times$ via $det$ and on the spaces of anti-symmetric, symmetric and Hermitian bilinear forms respectively. Note that up to equivalence there is a unique non-degenerate anti-symmetric form.
\end{proof} \subsection{The Pair $(GL(V),GL(V_1(h)) \times GL(V_{-1}(h)), Ad_h)$} Let $V$ be a linear space over $F$ and $h$ a linear automorphism of $V$ of order 2. Then $V = V_1(h) \oplus V_{-1}(h)$. We let $G = GL(V)$ and $\theta = Ad_h$, and then $H = GL(V_1(h)) \times GL(V_{-1}(h))$. The stability of this pair has been verified in \cite[Proposition 7.7.4]{dima}. The descendants, $Z_{GL(V)}(r)$ for $r$ symmetric, were computed as part of the proof of its stability, and they are products of the following pairs: \begin{itemize} \item $(GL(U) \times GL(U), \Delta GL(U) , (x,y) \mapsto (y,x))$, one for each primary value of type $A$ of the symmetric element $r$. \item $(GL(U(L)), GL(U(K)), c)$ for a quadratic extension $L/K$ with conjugation $c$, one for each primary value of type $B$ of $r$. \item $(GL(U), GL(U_1(h)) \times GL(U_{-1}(h)) , Ad_h)$ for a vector subspace $U \subseteq V$, one for each primary value of type $C$. \end{itemize} We deduce from Theorem \ref{stable then p-stable} and the stability of the pair that the pair is p-stable, s-stable and t-stable. \subsection{The Pair $(SL(V),(GL(V_1(h)) \times GL(V_{-1}(h))) \cap SL(V), Ad_h)$} Consider, in the same setting as above, the pair given by restricting $Ad_h$ to $SL(V)$. Here $h : V \to V$ is a linear involution but we do not assume $det(h) = 1$. We can identify the maximal $(\theta,F)$-split tori using the following notion: \begin{definition} Let $U \subseteq V$ be a linear subspace. We call $U$ $h$-\textbf{split} if $U \cap h(U) = 0$. \end{definition} It is easy to see that a subspace $U \subseteq V$ is $h$-split if and only if $U \cap V_1(h) = U \cap V_{-1}(h) = 0$.
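To illustrate the definition and the criterion above, consider the smallest non-trivial case (a toy example, not used in the sequel):

```latex
% Let V = F^2 with basis e_1, e_2 and h = diag(1,-1), so that
% V_1(h) = span{e_1} and V_{-1}(h) = span{e_2}. The line
\[
U = span\{e_1 + e_2\}, \qquad h(U) = span\{e_1 - e_2\},
\]
% is h-split: since char(F) \ne 2, the vectors e_1 + e_2 and
% e_1 - e_2 are linearly independent, so U \cap h(U) = 0.
% Consistently with the criterion, U meets neither eigenspace:
\[
U \cap V_1(h) = U \cap V_{-1}(h) = 0.
\]
```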
To every $h$-split subspace $U \subseteq V$, an $F$-split torus $T \subseteq GL(U)$ and an $h$-invariant direct complement $W$ of $U + h(U)$ we can associate a $(\theta,F)$-split torus $A(U,T,W)$ as follows: \[A(U,T,W) = \{g \in SL(V) : g|_U \in T, \quad g|_{h(U)} = h (g|_U)^{-1} h^{-1}, \quad g|_W = Id_W\}.\] \begin{proposition} \label{maximal h-split subspaces} Every maximal $(\theta,F)$-split torus of $SL(V)$ is of the form $A(U,T,W)$ for a maximal $h$-split subspace $U$ of $V$, an $F$-split maximal torus $T \subseteq GL(U)$ and some direct complement $W$ of $U + h(U)$. \end{proposition} \begin{proof} Let $A$ be any maximal $(\theta,F)$-split torus of $SL(V)$. Let $Y \subseteq X^*(A)(F)$ be the set of eigen-characters of $A$ in $V$. Then $\displaystyle V = \oplus_{\xi \in Y} V_\xi(A)$ where $V_\xi(A) = \{v \in V : av = \xi (a)v \quad \forall a \in A\}$. Since $A$ is $Ad_h$-split, $h$ maps $V_\xi$ isomorphically onto $V_{\xi^{-1}}$. Choose a linear order of $X^*(A)$ and let $Y^+$ (resp. $Y^-$) be the collection of positive (resp. negative) characters in $Y$. Then $U := \oplus_{\xi \in Y^+} V_\xi(A)$ is $h$-split, $h(U) = \oplus_{\xi \in Y^-}V_\xi(A)$ and $W = V_1(A)$ is a direct complement of $U + h(U)$, where $1$ denotes the trivial character. Let $T$ be the torus in $GL(U)$ generated by the restrictions of elements of $A$ to $GL(U)$. Then $A \subseteq A(U,T,W)$. By the maximality of $A$ we get that $A = A(U,T,W)$. \end{proof} \subsubsection{Stability of the Pair $(SL(V),(GL(V_1(h)) \times GL(V_{-1}(h))) \cap SL(V), Ad_h)$} In order to decide the stability of the pair, we need to consider cocycles of $Ad_h$ in $G$ of the form $[r]$ for $r$ a semi-simple symmetric element. \begin{lemma} Let $r \in S$ be semi-simple. Then \[dim(V_{-1}(rh)) - dim(V_{-1}(h)) = dim(V_1(h) \cap V_{-1}(r)) - dim(V_{-1}(h) \cap V_{-1}(r)).\] \end{lemma} \begin{proof} This follows from Lemma \ref{dimension formula} by subtracting the formula for $dim(V_1(rh))$ from that of $dim(V_1(h))$.
\end{proof} \begin{theorem} The pair $(SL(V),(GL(V_1(h)) \times GL(V_{-1}(h))) \cap SL(V), Ad_h)$ is stable if and only if there is no $h$-split subspace $U \subseteq V$ such that $V = U + h(U)$. If there is such a subspace then the pair is not s-stable and hence also not p-stable. \end{theorem} \begin{proof} Assume first that there exists such a subspace $U$. Let $T \subseteq GL(U)$ be a maximal torus, and consider the torus $A = A(U,T,\{0\}) \subseteq SL(V)$. We have $H^1(Ad_h,A) \cong A / A^2$. Since for every $a \in A$, $U$ is $ah$-split, we see that $dim(V_{-1}(ha)) = dim(U) = dim(V_{-1}(h))$, so $[a] \in H^1(Ad_h,SL(V))$ is trivial. We have \[Z_{GL(V)}(A) = T \times Ad_h(T) \cong T \times T\] where $Ad_h$ acts by flipping the two factors. By Proposition \ref{cohomology of flip}, $H^1(Ad_h,Z_{GL(V)}(A)) = 1$. Therefore, by the exact sequence \[\SES{Z_{SL(V)}(A)}{Z_{GL(V)}(A)}{F^\times}\] and by Theorem \ref{descent lemma}, \[H^1(Ad_h,Z_{SL(V)}(A)) \cong F^\times / det(Z_{GL(V)}(A,h)) = F^\times / (F^\times)^2.\] Under the last isomorphism, an element $a \in A / A^2$ is mapped to $det(a|_U) \mod{(F^\times)^2}$. Indeed, the composition \[A/A^2 \stackrel{\approx}{\rightarrow} H^1(Ad_h, A) \to H^1(Ad_h,Z_{SL(V)}(A)) \cong F^\times / (F^\times)^2\] is defined as follows: given an element $a \in A$, write it as $g^{-1} Ad_h(g)$ for $g \in Z_{GL(V)}(A)$; the image of $a$ in $F^\times / (F^\times)^2$ is then $det(g)$. In $Z_{GL(V)}(A)$ we have $a = g^{-1} Ad_h(g)$ for $g = a|_U^{-1} \oplus Id_{h(U)}$, and the determinant of the latter is $det(a|_U)^{-1}$, which equals $det(a|_U)$ modulo squares. It follows that every $a \in A$ such that $det(a|_U)$ is not a square gives a non-trivial element of $H^1(Ad_h,Z_{SL(V)}(A))$ which is trivialized in $H^1(Ad_h,SL(V))$. By the cohomological criterion for s-stability (Theorem \ref{criterion for s-stability}) the pair is not s-stable in this case. Conversely, assume that no such $U$ exists, and let $r \in S_0$ be a semi-simple symmetrization in $SL(V)$.
We need to calculate $[r] \in H^1(Ad_h,Z_{SL(V)}(r))$ and show that it vanishes. Since the pair $(GL(V), GL(V_1(h)) \times GL(V_{-1}(h)), Ad_h)$ is stable, $[r]$ is trivial as an element of $H^1(Ad_h,Z_{GL(V)}(r))$, so $[r] \in \KerH{Ad_h}{Z_{SL(V)}(r)}{Z_{GL(V)}(r)}.$ It is therefore sufficient to prove that $\KerH{Ad_h}{Z_{SL(V)}(r)}{Z_{GL(V)}(r)} = 1$. By Theorem \ref{descent lemma}, this set is isomorphic to $F^\times / det(Z_{GL(V)}(r,h))$. Let $\displaystyle V = \oplus_\lambda V_\lambda(r)$. We shall construct an $h$-split subspace of $V$ as follows. For each pair $\lambda, \lambda^{-1}$ of primary values of type $A$, let $U_\lambda \subseteq V_\lambda + V_{\lambda^{-1}}$ be one of $V_\lambda$ or $V_{\lambda^{-1}}$. For each $\lambda$ of type $B$, let $\tau \in F[r]$ be an element such that $\tau$ and $Ad_h(\tau)$ are linearly independent over $F[r + r^{-1}]$, and let $U_\lambda = \tau (V_\lambda(r) \cap V_1(h))$. Finally, because $[r]$ is trivial in $H^1(Ad_h,SL(V))$ we have $dim(V_1(h) \cap V_{-1}(r)) = dim(V_{-1}(h) \cap V_{-1}(r))$ and we let $U_{-1} \subseteq V_{-1}(r)$ be any subspace of half the dimension of $V_{-1}(r)$ which is transversal to $V_1(h)$ and $V_{-1}(h)$. Let $\displaystyle U = \oplus_{\lambda \ne 1} U_\lambda$. It is easy to see that $U$ is $h$-split. Since $V = U + h(U) + V_1(r)$ and, by assumption, $V \ne U + h(U)$, we get $V_1(r) \ne 0$. But then $Z_{GL(V_1(r))}(h) \subseteq Z_{GL(V)}(r,h)$ contains elements of any determinant, so $F^\times / det(Z_{GL(V)}(r,h)) = 1$. \end{proof} The geometric condition of the existence of a half-dimensional $h$-split subspace has a simple interpretation. \begin{lemma} The maximal dimension of an $h$-split subspace of $V$ is $min(dim(V_1(h)), dim(V_{-1}(h)))$. In particular, there is an $h$-split subspace of $V$ of dimension $\frac{dim(V)}{2}$ if and only if \[dim(V_1(h)) = dim(V_{-1}(h)).\] \end{lemma} \begin{proof} Let $U \subseteq V$ be $h$-split.
Then $U \cap V_1(h) = U \cap V_{-1}(h) = 0$ so \[dim(U) + dim(V_1(h)) \le dim(V)\] and \[dim(U) + dim(V_{-1}(h)) \le dim(V)\] and therefore \[dim(U) \le min(dim(V) - dim(V_1(h)), dim(V) - dim(V_{-1}(h))) = min(dim(V_1(h)), dim(V_{-1}(h))).\] Conversely, let $v_1,...,v_l$ be a basis of $V_1(h)$ and $u_1,...,u_k$ be a basis of $V_{-1}(h)$. Then the space $span\{v_i + u_i\}_{i = 1}^{min(l,k)}$ is $h$-split of dimension $min(dim(V_1(h)), dim(V_{-1}(h)))$. \end{proof} \begin{corollary} The pair $(SL(V),(GL(V_1(h)) \times GL(V_{-1}(h))) \cap SL(V), Ad_h)$ is stable if and only if $dim(V_1(h)) \ne dim(V_{-1}(h))$. The same is true for s-stability and p-stability. \end{corollary} \subsection{The Pair $(GL(V(E)), GL(V(F)), c)$} Let $E/F$ be a quadratic extension. Let $V(F)$ be a linear space over $F$ and let $\displaystyle V(E) = V(F) \otimes_F E$. Let $c$ be the conjugation of $E$ over $F$. The stability of this pair is already known. \begin{theorem} [{\cite[Proposition 2.8.5]{dima}}] The pair $(GL(V(E)), GL(V(F)), c)$ is stable. \end{theorem} \begin{corollary} The pair $(GL(V(E)), GL(V(F)), c)$ is p-stable and s-stable. \end{corollary} \subsection{The Pair $(SL(V(E)), SL(V(F)), c)$} In order to check p-stability and s-stability of this pair, we need to identify the $(\theta,F)$-split tori. \begin{definition} A linear subspace $U \subseteq V(E)$ is called $c$-split if $U$ is an $E$-subspace of $V(E)$ which is $c$-split as an $F$-subspace of $V(E)$, where $c$ is considered as an $F$-linear involution of $V(E)$. \end{definition} For each $c$-split subspace $U \subseteq V(E)$, an $F$-split torus $T \subseteq GL_E(U)$ and a $c$-invariant $E$-linear direct complement $W$ of $U + c(U)$, we can form the $(\theta,F)$-split torus \[A(U,T,W) = \{g \in SL(V(E)) : g|_U \in T,\quad g|_{c(U)} = c(g|_U)^{-1}, \quad g|_W = Id_W\}.\] As in the last example, every maximal $(\theta,F)$-split torus of $SL(V(E))$ is of the form $A(U,T,W)$.
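To make the definition concrete, here is a minimal example of a $c$-split subspace; it is a toy case (with $dim(V) = 2$ and $E = F[\sqrt{D}]$) of the construction used in the proof of Theorem \ref{p-stability of (SL(V(E)),SL(V(F)),c)} below.

```latex
% A toy c-split subspace, assuming E = F[\sqrt{D}] and dim(V) = 2.
% Let v_1, v_2 be a basis of V(F), so V(E) = E v_1 + E v_2 with c
% acting coordinatewise. Put tau = \sqrt{D}, so c(\tau) = -\tau, and set
\[
U = span\{\tau v_1 + v_2\}, \qquad c(U) = span\{-\tau v_1 + v_2\}
\]
% (E-spans). The two spanning vectors are linearly independent over E,
% hence U \cap c(U) = 0 and V(E) = U \oplus c(U); thus U is a c-split
% E-subspace with V(E) = U + c(U).
```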
It is easy to see that \[Z_{GL(V(E))}(A(U,T,W)) \cong Z_{GL_E(U)}(T) \times Z_{GL_E(c(U))}(c(T)) \times GL_E(W).\] \subsubsection{Stability of the Pair $(SL(V(E)), SL(V(F)), c)$} By Hilbert's Theorem 90, $H^1(c,SL(V(E))) = 1$. Moreover, for every $(\theta,F)$-split torus $A \subseteq SL(V(E))$ we have $H^1(c,Z_{GL(V(E))}(A)) = H^1(c, Z_{GL(V(E))}(a))$ for every generic $a \in A$. By Theorem \ref{vanishing theorem for centralizers} this implies that $H^1(c,Z_{GL(V(E))}(A)) = 1$. It follows from Theorem \ref{descent lemma} and the exact sequence \[\SES{Z_{SL(V(E))}(A)}{Z_{GL(V(E))}(A)}{E^\times}\] that \[H^1(c,Z_{SL(V(E))}(A)) \cong (E^\times)^c / det(Z_{GL(V(F))}(A)) = F^\times / det(Z_{GL(V(F))}(A)).\] \begin{theorem} \label{p-stability of (SL(V(E)),SL(V(F)),c)} The following conditions are equivalent: \begin{enumerate} \item [1] The pair $(SL(V(E)), SL(V(F)), c)$ is p-stable. \item [2] The pair $(SL(V(E)), SL(V(F)), c)$ is s-stable. \item [3] $V(E)$ does not contain a $c$-split subspace $U$ such that $V(E) = U + c(U)$. \item [4] $dim(V)$ is odd. \end{enumerate} \end{theorem} \begin{proof} $(1) \Rightarrow (2)$ follows from implication (2) in diagram \ref{implications}. \newline $(2) \Rightarrow (3)$: Assume on the contrary that such a $U$ exists. Let $T \subseteq GL_E(U)$ be any maximal $F$-split torus, and $A = A(U,T,0)$. Then \[Z_{GL(V(E))}(A) = Z_{GL_E(U)}(T) \times Z_{GL_E(c(U))}(c(T)).\] It follows that \[Z_{GL(V(F))}(A) = \{(g,c(g)) : g \in Z_{GL_E(U)}(T)\} \subseteq Z_{GL_E(U)}(T) \times Z_{GL_E(c(U))}(c(T)).\] But $det(g \oplus c(g)) = N_{E/F}(det(g))$, so \[H^1(c,Z_{SL(V(E))}(A)) \cong F^\times / N_{E/F}(E^\times) \ne 1.\] As $H^1(c,SL(V(E))) = 1$, the pair is not s-stable by Theorem \ref{criterion for s-stability} (note that the cocycle we use can be chosen to come from $A$). \newline $(3) \Rightarrow (1)$: If there is no such $U$, then a maximal $(\theta,F)$-split torus of $SL(V(E))$ is of the form $A(U,T,W)$ for $W \ne 0$.
As $det: GL(W(F)) \to F^\times$ is onto, we get that $H^1(c,Z_{SL(V(E))}(A)) = 1$, so the pair is p-stable. \newline $(3) \Leftrightarrow (4)$: If $dim(V)$ is odd, then clearly $V(E)$ is not of the form $U \oplus c(U)$. Conversely, if $dim(V) = 2n$ is even, choose a basis $\{v_1,...,v_{2n}\}$ of $V(F)$ and let $\tau \in E$ be such that $\tau = -c(\tau)$. Then $U = span\{\tau v_1 + v_{n + 1},...,\tau v_n + v_{2n}\}$ is easily seen to be $c$-split and to satisfy $V(E) = U + c(U)$. \end{proof} We turn to investigate the stability of the pair. If $dim(V)$ is even the pair is not stable as it is not p-stable. It remains to check the case where $dim(V)$ is odd. Before we do that, we shall calculate the descendants of the pair. We do this by comparing to the descendants of $GL(V(E))$. \begin{proposition} \label{descendants of GL(V(E))} Let $r \in GL(V(E))$ be a semi-simple symmetric element. Then the descendant $(Z_{GL(V(E))}(r), Z_{GL(V(F))}(r), c)$ is a product of the following pairs: \begin{itemize} \item $(GL_{E[\lambda]}(V_\lambda(r)) \times GL_{E[\lambda]}(V_\lambda(r)), GL_{E[\lambda]}(V_\lambda(r)), (x,y) \mapsto (c(y),c(x)))$, one for each pair of the form $(\lambda, c(\lambda)^{-1})$ of primary values such that $\lambda \ne c(\lambda)^{-1}$. \item $(GL_{E[\lambda]}(V_\lambda(r)), GL_{F[\lambda]}(V_\lambda(r)), c)$, one for each primary value $\lambda$ such that $\lambda = c(\lambda)^{-1}$. \end{itemize} \end{proposition} \begin{proof} An element of $GL(V(E))$ commutes with $r$ if and only if it preserves the primary value decomposition. Since $r$ is symmetric, $c$ intertwines between $V_\lambda(r)$ and $V_{c(\lambda)^{-1}}(r)$. Thus, $Z_{GL(V(E))}(r)$ is a product of blocks as claimed. The description of each block is immediate. \end{proof} The descendants of $SL(V(E))$ are just the intersections of the descendants of $GL(V(E))$ with $SL(V(E))$. \begin{proposition} Suppose that $dim(V)$ is odd. Let $r \in SL(V(E))$ be a semi-simple symmetric element. Then $H^1(c,Z_{SL(V(E))}(r)) = 1$.
\end{proposition} \begin{proof} We have, by Theorem \ref{vanishing theorem for centralizers}, $H^1(c,Z_{GL(V(E))}(r)) = 1$. Consider the exact sequence \[\SES{Z_{SL(V(E))}(r)}{Z_{GL(V(E))}(r)}{det(Z_{GL(V(E))}(r))}.\] By Theorem \ref{descent lemma} we deduce that \[H^1(c,Z_{SL(V(E))}(r)) \cong det(Z_{GL(V(E))}(r))^c / det(Z_{GL(V(F))}(r)).\] Let $\bar{\mathcal{A}}(r)$ denote the set of primary values of $r$ such that $\lambda \ne c(\lambda)^{-1}$. Let $\bar{\mathcal{B}}(r)$ denote the rest of the primary values. We have, by Proposition \ref{descendants of GL(V(E))}, that \[det(Z_{GL(V(E))}(r)) = \prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/E}(E[\lambda]^\times))\] while \[det(Z_{GL(V(F))}(r)) = \prod_{\lambda \in \bar{\mathcal{A}}(r)}(N_{E[\lambda]/F}(E[\lambda]^\times)) \cdot \prod_{\lambda \in \bar{\mathcal{B}}(r)}(N_{F[\lambda]/F}(F[\lambda]^\times)).\] Thus, \[det(Z_{GL(V(E))}(r))^c / det(Z_{GL(V(F))}(r)) \cong \frac{\prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/E}(E[\lambda]^\times)) \cap F^\times} {\prod_{\lambda \in \bar{\mathcal{A}}(r)}(N_{E[\lambda]/F}(E[\lambda]^\times)) \cdot \prod_{\lambda \in \bar{\mathcal{B}}(r)}(N_{F[\lambda]/F}(F[\lambda]^\times))} .\] Since $V$ is odd dimensional there must be $\lambda \in \bar{\mathcal{B}}(r)$ such that $[F[\lambda]:F]$ is odd. Let $\lambda_0$ denote this primary value, and let $d = [F[\lambda_0]:F]$. Since every $d$-th power in $F^\times$ is a norm from $F[\lambda_0]$, we deduce from the equation above that every element of $det(Z_{GL(V(E))}(r))^c / det(Z_{GL(V(F))}(r))$ is of order dividing $d$, and in particular this group is of odd exponent. On the other hand, we claim that this group has exponent at most 2. Indeed, let $a \in det(Z_{GL(V(E))}(r))^c / det(Z_{GL(V(F))}(r))$.
Then \[a = \prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/E}(a_\lambda))\] and therefore, since $a \in F^\times$ and hence $a^2 = a \cdot c(a) = N_{E/F}(a)$, \begin{align*} &a^2 = (\prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/E}(a_\lambda)))^2 = N_{E/F}(\prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/E}(a_\lambda))) = \\ &=\prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E/F}(N_{E[\lambda]/E}(a_\lambda))) = \prod_{\lambda \in \bar{\mathcal{A}}(r) \cup \bar{\mathcal{B}}(r)}(N_{E[\lambda]/F}(a_\lambda)) \end{align*} and the last expression is in $\prod_{\lambda \in \bar{\mathcal{A}}(r)}(N_{E[\lambda]/F}(E[\lambda]^\times)) \cdot \prod_{\lambda \in \bar{\mathcal{B}}(r)}(N_{F[\lambda]/F}(F[\lambda]^\times))$. It follows that $det(Z_{GL(V(E))}(r))^c / det(Z_{GL(V(F))}(r))$ is trivial, since its exponent divides both $d$ and $2$, and $d$ is odd. \end{proof} We deduce from Theorem \ref{comological criterion for stability} and Theorem \ref{stable then p-stable} the following: \begin{theorem} \label{stability of (SL(V(E)),SL(V(F)),c)} The pair $(SL(V(E)),SL(V(F)),c)$ is stable if and only if $dim(V)$ is odd. \end{theorem} \subsection{The Pair $(GL_F(V), GL_E(V), Ad_J)$} Let $E = F[\sqrt{a}]$, let $V$ be a linear space over $F$ and let $GL_F(V)$ denote the group of $F$-linear transformations of $V$. The subscript is used to distinguish it from linear transformations with respect to other base fields. Let $J \in GL_F(V)$ be an element such that $J^2 = a$. Then $J$ gives $V$ the structure of an $F[J] \cong E$ vector space. We also have $Z_{GL_F(V)}(J) \cong GL_E(V)$. Thus we have a symmetric pair $(GL_F(V), GL_E(V), Ad_J)$. The cohomology of the pair is computed similarly to that of inner pairs. Right multiplication by $J$ gives an isomorphism between $H^1(Ad_J,GL_F(V))$ and the set of conjugacy classes of elements $J'$ such that $J'^2 = a$.
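A minimal concrete instance of such a $J$, with $dim(V) = 2$ (a toy example):

```latex
% In a basis (v_1, v_2) of V define J by J v_1 = v_2, J v_2 = a v_1:
\[
J = \begin{pmatrix} 0 & a \\ 1 & 0 \end{pmatrix}, \qquad
J^2 = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} = a \cdot Id_V.
\]
% As a is a non-square in F, F[J] \cong F[\sqrt{a}] = E, and V becomes
% a one-dimensional E-vector space with basis v_1. In this case
% Z_{GL_F(V)}(J) = F[J]^\times \cong E^\times = GL_E(V).
```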
\begin{proposition} \label{cohomology of GL_F(V)} $H^1(Ad_J,GL_F(V)) = 1.$ \end{proposition} \begin{proof} Let $J'^2 = a$. We wish to prove that $J'$ is conjugate to $J$. Over $E$, $J'$ is conjugate to $diag(\sqrt{a},-\sqrt{a},\sqrt{a},-\sqrt{a},...,\sqrt{a},-\sqrt{a})$; indeed, $J'$ is diagonalizable with eigenvalues $\pm\sqrt{a}$, and the two eigenvalues have equal multiplicities since $Tr(J') \in F$. In particular, $J'$ and $J$ are conjugate over $E$. By a well-known result in linear algebra, this means that $J'$ and $J$ are conjugate over $F$. \end{proof} It is not hard to identify the descendants of this pair, and the proof is identical to the one given in the case $J^2 = Id$. \begin{proposition} The descendants of the pair $(GL_F(V), GL_E(V), Ad_J)$ are products of pairs of the form: \begin{itemize} \item $(G \times G, \Delta G, (x,y) \mapsto (y,x))$, one for each primary value of type $A$. \item $(GL(V(E_\lambda)), GL(V(F_\lambda)), c)$, one for each primary value of type $B$. \item $(GL_F(V), GL_E(V), Ad_J)$, one for each primary value of type $C$. \end{itemize} \end{proposition} Since the cohomologies of all these pairs are trivial, we deduce: \begin{theorem} \label{stability of (GL_F(V),GL_E(V),Ad_J)} The pair $(GL_F(V), GL_E(V), Ad_J)$ is stable. In particular it is p-stable and s-stable. \end{theorem} \subsection{The Pair $(SL_F(V), SL_E(V), Ad_J)$} We consider the same pair as above, with $GL$ replaced by $SL$. \begin{proposition} $H^1(Ad_J,SL_F(V)) \cong F^\times / N_{E/F}(E^\times)$ \end{proposition} \begin{proof} By Theorem \ref{descent lemma} and the vanishing of $H^1(Ad_J,GL_F(V))$, we have \[H^1(Ad_J,SL_F(V)) \cong det(GL_F(V))^J / det(GL_E(V)) = (F^\times)^J / N_{E/F}(E^\times) = F^\times / N_{E/F}(E^\times).\] \end{proof} \begin{theorem} Let $F$ be non-Archimedean. Then the pair $(SL_F(V), SL_E(V), Ad_J)$ is not s-stable. Thus, it is not p-stable and not stable.
\end{theorem} \begin{proof} Choose a basis $\mathcal{B} = \{x_i\}_{i = 1}^n \cup \{y_i\}_{i = 1}^n$ such that \[J x_i = y_i, \quad J y_i = a x_i.\] Then \[A = \{g \in SL_F(V) : gx_i = \lambda_i x_i, \quad gy_i = \lambda_i^{-1} y_i, \quad \lambda_i \in F^\times\}\] is a maximal $(Ad_J,F)$-split torus in $SL_F(V)$. Let $T$ be the torus of all diagonal matrices in the basis $\mathcal{B}$. Then $Z_{SL_F(V)}(A) = T$. To compute $H^1(Ad_J,T)$, first note that since $A = T^-$ the map $H^1(Ad_J,A) \to H^1(Ad_J,T)$ induced by the inclusion is onto. Thus, the pair is not s-stable if the map $H^1(Ad_J,T) \to H^1(Ad_J,SL_F(V)) \cong F^\times/N_{E/F}(E^\times)$ has non-trivial kernel. In order to compute $H^1(Ad_J,T)$, let $\tilde{T} = Z_{GL_F(V)}(A)$. Then by the p-stability of $(GL_F(V), GL_E(V), Ad_J)$ and the vanishing of its cohomology we have \[H^1(Ad_J,\tilde{T}) = 1.\] We have an exact sequence of abelian $\ZZ/ 2\ZZ$-modules: \[\SES{T}{\tilde{T}}{F^\times}\] and as a result we get an exact sequence \[\begin{CD} det(\tilde{T}^+) @>>> F^\times @>>> H^1(Ad_J,T) @>>> 1 \end{CD}.\] But $det(\tilde{T}^+) = (F^\times)^2$ so \[H^1(Ad_J,T) \cong F^\times / (F^\times)^2.\] The induced map \[F^\times / (F^\times)^2 \cong H^1(Ad_J,T) \to H^1(Ad_J,SL_F(V)) \cong F^\times / N_{E/F}(E^\times)\] is the natural quotient map, which has non-trivial kernel. \end{proof} In the real case the situation is completely different. \begin{theorem} \label{stability of (SL_F(V),SL_E(V),Ad_J)} Let $F = \RR$. The pair $(SL_\RR(V), SL_\CC(V), Ad_J)$ is stable. \end{theorem} \begin{proof} We have $H^1(Ad_J,SL_\RR(V)) \cong \RR^\times / (\RR^\times)^2$. If $r$ is a semi-simple symmetrization, then we have \[H^1(Ad_J,Z_{GL_\RR(V)}(r)) = 1\] as in the proof of the stability of the pair $(GL_F(V),GL_E(V), Ad_J)$. By Theorem \ref{descent lemma} we deduce that $H^1(Ad_J,Z_{SL_\RR(V)}(r)) \cong det(Z_{GL_\RR(V)}(r)) / det(Z_{GL_\CC(V)}(r))$.
The last group is a quotient of $\RR^\times$ by an open subgroup, so it is either trivial or equal to $\RR^\times / (\RR^\times)^2$. In any case, the map \[H^1(Ad_J,Z_{SL_\RR(V)}(r)) \to H^1(Ad_J,SL_\RR(V))\] is injective, since it is either a map from the trivial group or isomorphic to the identity map $\RR^\times / (\RR^\times)^2 \to \RR^\times / (\RR^\times)^2$. \end{proof} \subsection{The Pair $(GL(V), O(B), \theta)$} Let $B$ be a non-degenerate quadratic form on $V$, and let $O(B)$ be the group of $B$-orthogonal transformations of $V$. Let $\theta(g) = (g^{t})^{-1}$ where $g^t$ is the transpose of $g$ with respect to $B$. The following result follows directly from the definition of cohomology and equivalence of quadratic forms: \begin{proposition} $H^1(\theta,GL(V)) \cong (\mathcal{QF}_{rank(B)}(F),(V,B))$ as pointed sets. The identification is given by assigning to the cocycle $[r]$ the equivalence class of the quadratic form $B_r(u,v) := B(r(u),v)$ on $V$. \end{proposition} The group $GL(V)$ contains maximal $(\theta,F)$-split tori which are also maximal $F$-split tori. In fact, given an orthogonal basis $\mathcal{B}$ of $V$, the torus $A(\mathcal{B})$ of all linear automorphisms of $V$ which are represented by a diagonal matrix in the basis $\mathcal{B}$ is such a torus, and every maximal $F$-split torus is of this form. In particular, since $Z_{GL(V)}(A(\mathcal{B})) = A(\mathcal{B})$ we see that the pair is s-stable if and only if it is p-stable. \subsubsection{Stability of the Pair $(GL(V), O(B), \theta)$} To determine whether the pair is s-stable, we shall calculate the map \[(i_A)_* : H^1(\theta,A(\mathcal{B})) \to H^1(\theta,GL(V))\] induced by the inclusion. We use the basis $\mathcal{B}$ to represent linear maps and forms as matrices. As $\mathcal{B}$ is an orthogonal basis, $B = diag(b_1,...,b_n)$ in this basis and therefore $B \equiv Q([b_1,...,b_n])$.
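Before computing $(i_A)_*$, we spell out why the identification $[r] \mapsto [B_r]$ in the proposition above is well defined; this is a routine verification, using the convention $\delta_g(r) = g^{-1} r \theta(g)$ for coboundaries as in the proof of Proposition \ref{cohomology of conjugacy}.

```latex
% Cocycles: for \theta(g) = (g^t)^{-1} the condition r\theta(r) = e
% reads r = r^t, and then
\[
B_r(u,v) = B(r(u),v) = B(u,r^t(v)) = B(u,r(v)) = B(r(v),u) = B_r(v,u),
\]
% so B_r is a symmetric bilinear form of the same rank as B.
% Coboundaries: replacing r by the cohomologous cocycle
% \delta_g(r) = g^{-1} r (g^t)^{-1} gives
\[
B_{\delta_g(r)}(u,v) = B(g^{-1} r (g^t)^{-1} u, v)
= B_r((g^t)^{-1} u, (g^t)^{-1} v),
\]
% i.e., B_{\delta_g(r)} is obtained from B_r by the change of variable
% (g^t)^{-1}, so cohomologous cocycles give equivalent forms.
```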
If $r = diag(\lambda_1,...,\lambda_n) \in A(\mathcal{B})$ then $B_r \equiv Q([\lambda_1 b_1,...,\lambda_n b_n])$. Let $\phi([r]) = [B_r] \in \mathcal{QF}_{rank(B)}(F)$, a bijection by the computation of the cohomology above. We have \[H^1(\theta,A(\mathcal{B})) \cong (F^\times / (F^\times)^2)^{dim(V)}.\] Since $\phi([diag(\lambda_1,...,\lambda_n)]) = [Q([\lambda_1 b_1,...,\lambda_n b_n])]$ we deduce: \[ \KerH{\theta}{A(\mathcal{B})}{GL(V)} \cong \{(\lambda_1,...,\lambda_n) \in (F^\times / (F^\times)^2)^{dim(V)} : Q([\lambda_1 b_1,...,\lambda_n b_n]) \equiv Q([b_1,...,b_n])\}. \] \begin{theorem} The pair $(GL(V), O(B), \theta)$ is s-stable if and only if there is a unique sequence $[b_1,...,b_n]$ such that $B \equiv Q([b_1,...,b_n])$. \end{theorem} \begin{proof} In one direction, if there is a unique such sequence $[b_1,...,b_n]$ then $\KerH{\theta}{A(\mathcal{B})}{GL(V)} = 1$. Indeed, if $Q([\lambda_1 b_1,...,\lambda_n b_n]) \equiv Q([b_1,...,b_n])$ then by the uniqueness assumption we have $b_i \equiv \lambda_i b_i \mod{(F^\times)^2}$, so $[diag(\lambda_1,...,\lambda_n)]$ is trivial in $H^1(\theta,A(\mathcal{B}))$. On the other hand, if $B \equiv Q([b_1,...,b_n]) \equiv Q([c_1,...,c_n])$ for different tuples $[b_1,...,b_n]$ and $[c_1,...,c_n]$ then, choosing a basis $\mathcal{B}$ in which $B = diag(b_1,...,b_n)$, the element $[diag(b_1c_1^{-1},...,b_n c_n^{-1})]$ is a non-trivial element of $\KerH{\theta}{A(\mathcal{B})}{GL(V)}$. \end{proof} We can translate this criterion into a simple condition on the form $B$. \begin{proposition} A quadratic form $B$ is equivalent to $Q([b_1,...,b_n])$ for a unique sequence $[b_1,...,b_n]$ of classes in $F^\times / (F^\times)^2$ if and only if $F = \RR$ and $B$ is definite or $F$ is non-Archimedean and $dim(V) = 1$. \end{proposition} \begin{proof} If $F = \RR$, the only invariant of the form $Q([b_1,...,b_n])$ is the number of positive $b_i$-s.
In particular, the only sequences of $\pm 1$-s which are the unique representatives of the form they represent are $[1,...,1]$ and $[-1,...,-1]$, which represent the classes of definite forms. If $F$ is non-Archimedean and $dim(V) = 1$ then clearly $Q([b_1])$ is the only representative of its class, for example because of the invariance of the determinant. On the other hand, if $dim(V) > 1$ then $B \equiv Q([b_1,b_2,...,b_n])$. If $b_1 \ne b_2$ then $B \equiv Q([b_1,b_2,...,b_n]) \equiv Q([b_2,b_1,...,b_n])$ are two different representations, while if $b_1 = b_2$ then $Q([b_1,b_2,...,b_n]) \equiv Q([b_1 \lambda, b_2 \lambda,...,b_n])$ for $\lambda$ a non-square of valuation 0 in $F$ such that $\{\lambda,\lambda\} = 1$. Indeed, \[det([b_1 \lambda, b_2 \lambda,...,b_n]) = \lambda^2 det(B) \equiv det(B) \mod{(F^\times)^2}\] and \[H([b_1 \lambda, b_2 \lambda,...,b_n]) = \{\lambda,\lambda\}\{\lambda,det(B)\}^2 H(B) =\{\lambda,\lambda\} H(B) = H(B)\] so the equivalence follows from Theorem \ref{classification of quadratic forms over local fields}. \end{proof} \begin{remark} Note that in the cases where the pair is s-stable it is also stable for trivial reasons: if $dim(V) = 1$ the pair is commutative and if $B$ is real definite the pair is Riemannian. \end{remark} \begin{corollary} The pair $(GL(V),O(B),\theta)$ is stable, p-stable or s-stable if and only if either $F = \RR$ and $B$ is definite or $F$ is non-Archimedean and $dim(V) = 1$. \end{corollary} \subsection{The Pair $(GL_E(V),U(B),\theta)$} Let $E/F$ be a quadratic extension and let $V$ be a vector space over $E$. Let $B: V \times V \to E$ be a Hermitian form and $U(B)$ the group of linear transformations preserving $B$. Let $\theta(g)$ be defined via the equation $B(u,v) = B(gu,\theta(g)v)$. Let $\mathcal{B}$ be an orthogonal basis of $V$ with respect to $B$. Let $A(\mathcal{B})$ be the torus of all invertible transformations expressed as diagonal matrices with entries in $F$ in the basis $\mathcal{B}$.
$A(\mathcal{B})$ is a maximal $F$-split torus of $GL_E(V)$, and it is $\theta$-split, hence it is a maximal $(\theta,F)$-split torus. A direct calculation shows that $H^1(\theta,GL_E(V))$ is the set of equivalence classes of Hermitian forms on $V$. \subsubsection{Stability for the Pair $(GL_E(V),U(B),\theta)$} \begin{theorem} \label{stability of the pair (GL_E(V),U(B),theta)} The pair $(GL_E(V),U(B),\theta)$ is s-stable if and only if $F = \RR$ and $B$ is definite or $F$ is non-Archimedean and $dim(V) = 1$. In these cases the pair is also p-stable and stable. \end{theorem} \begin{proof} If $\mathcal{B}$ is an orthogonal basis, and $B$ is represented by the diagonal matrix $diag(a_1,...,a_n)$, then a cocycle $[diag(\lambda_1,...,\lambda_n)]$ in $H^1(\theta,A(\mathcal{B}))$ corresponds to the Hermitian form represented by the matrix $diag(\lambda_1 a_1,...,\lambda_n a_n)$. Thus, if $F = \RR$ and $B$ is represented by $diag(1,-1,...)$ then the cohomology class of $diag(-1,-1,1,...,1)$ in $H^1(\theta,A(\mathcal{B}))$ is a non-trivial element of \[IH^1(\theta,A(\mathcal{B}),Z_{GL_E(V)}(A(\mathcal{B}))) \cap KH^1(\theta,Z_{GL_E(V)}(A(\mathcal{B})), GL_E(V))\] so the pair is not s-stable. In the p-adic case, regardless of the form $B$, any cocycle of the form $diag(b,b,1,...,1)$, $b \in N_{E/F}(E^\times) \backslash (F^\times)^2$, will do, by the classification of Hermitian forms over a non-Archimedean field. If $B$ is definite or of rank 1 the pair is stable for trivial reasons: in the first case it is Riemannian and in the second it is commutative. \end{proof} \subsection{The Pair $(O(B), O(B^+)\times O(B^-), Ad_h)$} \label{(O(B),O(B^+)timesO(B^-),Ad_h)} Let $V$ be a finite dimensional vector space over $F$ and $B$ a non-degenerate quadratic form on $V$. Let $O(B)$ be the group of all linear transformations of $V$ preserving $B$, and $h \in O(B)$ an element of order 2. Let $B^+ = B|_{V_1(h)}$, $B^- = B|_{V_{-1}(h)}$, and denote $V^+ = V_1(h)$ and $V^- = V_{-1}(h)$.
Consider the symmetric pair $(O(B), O(B^+) \times O(B^-), Ad_h)$. The cohomology of the pair has been determined in Proposition \ref{H^1(Ad_h,O(B))}. By this computation we get: \begin{lemma} \label{consequence of Witt Cancellation} Let $W \subseteq V$ be an $h$-invariant subspace of $V$ such that $B|_W$ is non-degenerate. Then the inclusion $i: O(B|_W) \to O(B)$ given by $i(g) = g \oplus Id_{W^\bot}$ induces an injection \[i_* : H^1(Ad_h,O(B|_W)) \to H^1(Ad_h,O(B)).\] \end{lemma} \begin{proof} The cohomology of $O(B)$ is naturally identified with the set of forms $B' \le B$ up to equivalence. The cohomology of $O(B|_W)$ is naturally isomorphic to the set of forms $B' \le B|_W$, up to equivalence. It is straightforward to check that \[i_*(B') = B' \oplus B^+|_{W^\bot}.\] The injectivity of $i_*$ now follows from the Witt Cancellation Theorem. \end{proof} The following description of the descendants of the pair is given in \cite[Theorem 6.5.1]{dima2}. We rewrite it here in a more detailed form, since we need to compute the map between cohomology sets. \begin{proposition} \label{descendants of O(B)} Let $r \in O(B)$ be a semi-simple symmetric element of $O(B)$. Choose a collection $\mathcal{A}(r)^+ \subseteq \mathcal{A}(r)$ of primary values such that for each primary value $\lambda$ of type $A$ exactly one of $\lambda,\lambda^{-1}$ is in $\mathcal{A}(r)^+$. Then the descendant $Z_{O(B)}(r)$ is a product of pairs: \begin{enumerate} \item $(GL_{F[\lambda]}(V_\lambda(r)), O(B^\lambda), \theta)$, one for each $\lambda \in \mathcal{A}(r)^+$. Here $B^\lambda$ is the unique $F[\lambda]$-bilinear form on $V_\lambda(r)$ for which \[Tr_{F[\lambda]/F}(B^\lambda)(x,y) = B_h(x,y) := B(x,h(y)).\] \item $(U(B^\lambda_{F[\lambda],c}), O(B^\lambda), c)$, one for each $\lambda \in \mathcal{B}(r)$. Here $B^\lambda$ is the unique $F[\lambda + \lambda^{-1}]$-quadratic form on $V_\lambda(r)$ such that $Tr_{F[\lambda + \lambda^{-1}]/F}(B^\lambda) = B|_{V_\lambda(r)}$.
\item $(O(B|_{V_\lambda(r)}), O(B|_{V_\lambda(r) \cap V_1(h)}) \times O(B|_{V_\lambda(r) \cap V_{-1}(h)}), Ad_h)$, one for each $\lambda \in \mathcal{C}(r)$. \end{enumerate} \end{proposition} \begin{proof} The spaces $V_\lambda(r)$ and $V_\tau(r)$ are $B$-orthogonal unless $\lambda$ and $\tau^{-1}$ represent the same point of $spec(\bar{F})$. Indeed, we have \[V_\lambda(r) \otimes_F \bar{F} \subseteq \oplus_{g \in Gal_{\bar{F}/F}} (V \otimes \bar{F})_{g(\lambda)}(r),\] and similarly \[V_\tau(r) \otimes_F \bar{F} \subseteq \oplus_{g \in Gal_{\bar{F}/F}} (V \otimes \bar{F})_{g(\tau)}(r),\] so it will be sufficient to show that the direct summands are mutually orthogonal. But if we denote by $\bar{B}$ the base change of $B$ to $\bar{F}$ we have \[\bar{B}(u,v) = \bar{B}(ru,rv) = g(\lambda) g'(\tau)\bar{B}(u,v)\] for every $u \in (V \otimes \bar{F})_{g(\lambda)}(r)$ and $v \in (V \otimes \bar{F})_{g'(\tau)}(r)$. It follows from the last equation that $v \bot u$ unless $\lambda$ and $\tau^{-1}$ are conjugate. For each primary value $\lambda$ of $r$ let $U_\lambda(r) = V_\lambda(r) + V_{\lambda^{-1}}(r)$. It follows that \[Z_{O(B)}(r) \cong \prod_\lambda Z_{O(B|_{U_\lambda(r)})}(r|_{U_\lambda(r)})\] where the product is arranged so that for each pair $(\lambda,\lambda^{-1})$ the space $U_\lambda(r)$ appears once. It therefore suffices to identify each factor in the product. If $\lambda \in \mathcal{A}(r)$, then $U_\lambda(r) = V_\lambda(r) \oplus V_{\lambda^{-1}}(r)$. An automorphism of $U_\lambda(r)$ commutes with $r$ if and only if it stabilizes $V_\lambda(r)$ and $V_{\lambda^{-1}}(r)$ and it is $F[\lambda]$-linear on each of them. In matrix notation with respect to some bases of $V_\lambda(r)$ and $V_{\lambda^{-1}}(r)$ the linear transformations commuting with $r|_{U_{\lambda}(r)}$ are of the form $\begin{pmatrix}X & 0 \\ 0 & Y \end{pmatrix}$.
Orthogonality with respect to $B$ then gives $Y = (X^{t})^{-1}$ where transposition is taken with respect to the quadratic form $B_h$. Note that since $rh = hr^{-1}$ and $r$ preserves $B$, \[B_h(r(x),y) = B(r(x),h(y)) = B(x,r^{-1}h(y)) = B(x,h(r(y))) = B_h(x,ry).\] By linearity the same holds for every element of $F[r] \cong F[\lambda]$. It follows from Proposition \ref{Im(Tr)} that there is an $F[\lambda]$-quadratic form $B^\lambda$ on $V_\lambda(r)$ such that \[Tr_{F[\lambda]/F}(B^\lambda) = B_h.\] The uniqueness of $B^\lambda$ shows that transposition with respect to $B_h$ and $B^\lambda$ agree. Finally, since \[h \begin{pmatrix}X & 0 \\ 0 & (X^{t})^{-1} \end{pmatrix} h^{-1} = \begin{pmatrix}(X^{t})^{-1} & 0 \\ 0 & X \end{pmatrix}\] restriction to $V_\lambda(r)$ then gives an isomorphism between the block of $Z_{O(B)}(r)$ corresponding to $U_\lambda(r)$ and the symmetric pair \[(GL_{F[\lambda]}(V_\lambda(r)), O(B^\lambda), \theta).\] Next assume that $\lambda \in \mathcal{B}(r)$. Then commuting with $r$ is equivalent to being $F[r]$-linear. By the relation $B(rx,y) = B(x,r^{-1}y) = B(x,c(r)y)$ and by the fact that $r$ is a generator of $F[r]$ over $F[r + r^{-1}]$ we deduce that $B(\tau x, y) = B(x,c(\tau) y)$ for each $\tau \in F[r]$. In particular every element of $F[r + r^{-1}]$ is symmetric with respect to $B$, so there is $B^\lambda \in Quad(V_\lambda(r))$ over $F[r + r^{-1}]$ such that $Tr_{F[r + r^{-1}]/F}(B^\lambda) = B|_{V_\lambda(r)}$. The fixed points of $Ad_h$ in this factor are precisely the elements of $O(B^\lambda)$. Finally, since $Tr$ is injective and \[Tr_{F[\lambda]/F}(B^\lambda_{F[r],Ad_h}) = 2Tr_{F[r + r^{-1}]/F}(B^\lambda) = 2B|_{V_\lambda(r)},\] we deduce that $Z_{O(B|_{V_\lambda(r)})}(r) \cong U(B^\lambda_{F[r],c})$, where $c = Ad_h$. The case $\lambda \in \mathcal{C}(r)$ is much easier and we leave it to the reader. \end{proof} Since $Z_{O(B)}(r)$ is a product of subgroups stabilized by $Ad_h$, its cohomology is a product of the corresponding cohomology sets.
By the computations in Propositions \ref{H^1(c,G(E))}, \ref{H^1(Ad_h,O(B))} and the well known cohomology of $\theta(g) = (g^{t})^{-1}$ with coefficients in $GL(V)$ we get: \begin{proposition} \label{cohomology of descendant of O} \begin{align*} &H^1(Ad_h,Z_{O(B)}(r)) \cong \\ &\prod_{\lambda \in \mathcal{A}(r)^+} QF_{dim_{F[\lambda]}(V_\lambda(r))}(F[\lambda]) \times \\ &\prod_{\lambda \in \mathcal{B}(r)} \{B' \in QF_{dim_{F[\lambda]}(V_\lambda(r))}(F[\lambda + \lambda^{-1}]) : B'_{F[\lambda],c} \equiv B^\lambda_{F[\lambda],c} \} \times \\ &\prod_{\lambda \in \mathcal{C}(r)} \{B' \in QF(F) : B' \le B^+|_{V_\lambda(r) \cap V_1(h)}\} \\ \end{align*} \end{proposition} The inclusion $i_r : Z_{O(B)}(r) \to O(B)$ induces a map \[(i_r)_* : H^1(Ad_h,Z_{O(B)}(r)) \to H^1(Ad_h,O(B)).\] We shall identify $H^1(Ad_h,O(B))$ with a subset of $\mathcal{QF}(F)$ and $H^1(Ad_h,Z_{O(B)}(r))$ with the product above. Then $(i_r)_*$ is additive: if we let $i_\lambda$ denote the natural inclusion of $Z_{O(B|_{U_\lambda(r)})}(r)$ into $O(B|_{U_\lambda(r)})$ then we have \[ (i_r)_* (\prod_\lambda B'^\lambda) = \oplus_{\lambda} (i_\lambda)_*(B'^\lambda) \] where $B'^\lambda$ is the form representing the $\lambda$-component of the cohomology class, with respect to the decomposition $H^1(Ad_h,Z_{O(B)}(r)) \cong \prod_\lambda H^1(Ad_h,Z_{O(B|_{U_\lambda(r)})}(r|_{U_\lambda(r)}))$. Thus, in order to determine $(i_r)_*$ we only need to calculate $(i_\lambda)_*$ for each primary value $\lambda$ of $r$. \begin{proposition} \label{computation of i_*} Let $B'^\lambda$ be a quadratic form representing a class in $H^1(Ad_h,Z_{O(B|_{U_\lambda(r)})}(r))$.
Then \[(i_\lambda)_*(B'^\lambda) = \begin{cases} \frac{1}{2} Tr_{F[\lambda]/F}(B'^\lambda) & \text{ if } \lambda \in \mathcal{A}(r) \\ Tr_{F[r + r^{-1}]/F}(B'^\lambda) & \text{ if } \lambda \in \mathcal{B}(r) \\ B'^\lambda & \text{ if } \lambda \in \mathcal{C}(r) \end{cases} \] \end{proposition} \begin{proof} The case $\lambda \in \mathcal{C}(r)$ is immediate: $i_\lambda$ is the identity in that case. Consider the case $\lambda \in \mathcal{A}(r)$. Let $g \in GL_{F[\lambda]}(V_\lambda(r))$ represent the cohomology class of $B'^\lambda$ in $H^1(Ad_h,Z_{O(B|_{U_\lambda(r)})}(r|_{U_\lambda(r)}))$, so that $g^t = g$ and \[B'^\lambda(u,v) = B^\lambda_g(u,v) := B^\lambda(g(u),v).\] Note that \[i_\lambda(g) = g \oplus hg^{-1}h : V_\lambda(r) \oplus V_{\lambda^{-1}}(r) \to V_\lambda(r) \oplus V_{\lambda^{-1}}(r)\] and hence $i_\lambda(g) h = gh \oplus g^{-1} h.$ It follows that $V_1(i_\lambda(g) h) \cap U_\lambda(r)$ is isomorphic to $V_\lambda(r)$ via the linear isomorphism $\phi(x) = x + gh(x)$. But then $B|_{V_1(i_\lambda(g) h)} \equiv \phi^*(B|_{V_1(i_\lambda(g) h)})$. The left hand side is the quadratic form representing $(i_\lambda)_*([g])$ and the right hand side is \begin{align*} &\phi^*(B)(v,v) = B(v + gh(v),v+gh(v)) = 2B(v,gh(v)) = \\ &= 2B_h(v,g(v)) = 2Tr_{F[\lambda]/F}(B^\lambda_g)(v,v) = 2Tr_{F[\lambda]/F}(B'^\lambda)(v,v). \end{align*} It follows that $(i_\lambda)_*([g])$ is represented by the quadratic form $2Tr_{F[\lambda]/F}(B'^\lambda) \equiv \frac{1}{2} Tr_{F[\lambda]/F}(B'^\lambda)$ (the two forms differ by the square factor $4$), as we had to prove. Consider now the case $\lambda \in \mathcal{B}(r)$. Let $g \in U(B^\lambda_{F[\lambda],c})$ be a representative of a cohomology class. Then we associate with $[g]$ a quadratic form as follows: Let $g = x^{-1} c(x)$ for $x \in GL_{F[\lambda]}(V_\lambda(r))$ (such an $x$ exists by Hilbert's Theorem 90).
Then the form $B'^\lambda := (x^{-1})^*(B^\lambda)$ is a quadratic form over $F[\lambda + \lambda^{-1}]$ for which $B'^\lambda_{F[\lambda],c} \equiv B^\lambda_{F[\lambda],c}$, and the equivalence class of $B'^\lambda$ is the class of quadratic forms which corresponds to $[g]$ (note that we pull back by $x^{-1}$ in order to get a left action on quadratic forms). On the other hand, $i_\lambda(g)$ is the restriction of scalars of $g$ from $F[\lambda]$ to $F$. In order to compute the quadratic form corresponding to $i_\lambda(g)$ we need to restrict $B$ to $V_1(gh) \cap V_\lambda(r)$. But we can identify $h$ with $c$, and $g = x^{-1}c(x)$ so \[gh(v) = x^{-1}c(x)c(v) = x^{-1} c(x(v)).\] It follows that \[gh(v) = v \Leftrightarrow c(x(v)) = x(v)\] and therefore \[V_1(gh) \cap V_\lambda(r) = x^{-1}(V_1(h) \cap V_\lambda(r)).\] By pulling back along $x^{-1}$ we get an equivalence \[B|_{V_1(gh) \cap V_\lambda(r)} \equiv (x^{-1})^*(B|_{V_1(h) \cap V_\lambda(r)}) = Tr_{F[\lambda + \lambda^{-1}]/F}(B'^\lambda).\] It follows that the quadratic form representing $(i_\lambda)_*([g])$ is $Tr_{F[\lambda + \lambda^{-1}]/F}(B'^\lambda)$. \end{proof} We turn to identifying the maximal $(Ad_h,F)$-split tori. \begin{definition} We call a subspace $U \subseteq V$ $(h,B)$-split if: \begin{enumerate} \item $U \cap h(U) = 0$. \item $B|_U = 0$. \item $B|_{U + h(U)}$ is non-degenerate. \end{enumerate} \end{definition} \begin{definition} Let $U$ be an $(h,B)$-split subspace, and let $T$ be a maximal $F$-split torus of $GL(U)$. Define \[A(U,T) = \{g \in O(B) : g|_U \in T,\quad g|_{h(U)} = h((g|_U)^t)^{-1}h,\quad g|_{(U + h(U))^\bot} = Id\}.\] Here transposition is taken with respect to the quadratic form $B_h$. \end{definition} $A(U,T)$ is easily seen to be an $(Ad_h,F)$-split torus of $O(B)$. It turns out that all maximal $(Ad_h,F)$-split tori arise from this construction.
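To illustrate the construction (the specific vectors below are our own illustrative choice), suppose $u \in V^+$ and $u' \in V^-$ satisfy $B(u,u) = 1$ and $B(u',u') = -1$. Then $U = Span\{u + u'\}$ is $(h,B)$-split: $h(u + u') = u - u'$ is not proportional to $u + u'$, \[B(u + u', u + u') = B(u,u) + B(u',u') = 0,\] and $B(u + u', h(u + u')) = B(u,u) - B(u',u') = 2$, so $B|_{U + h(U)}$ is a non-degenerate hyperbolic plane. Taking $T = GL(U)$, the torus $A(U,T)$ consists of the maps $u + u' \mapsto t(u + u')$, $u - u' \mapsto t^{-1}(u - u')$, $t \in F^\times$, extended by the identity on $(U + h(U))^\bot$; a direct check gives $Ad_h(g) = g^{-1}$ for $g \in A(U,T)$, and $(B_h)|_U \equiv Q([2])$.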
\begin{proposition} Every maximal $(Ad_h,F)$-split torus in $O(B)$ is of the form $A(U,T)$ for some maximal $(h,B)$-split subspace $U \subseteq V$ and $T$ as above. \end{proposition} \begin{proof} If $A$ is a maximal $(Ad_h,F)$-split torus, then let $Y$ be the set of weights of $A$ in $V$ and let $Y^+$ be a set of positive weights and $Y^-$ the corresponding negative weights. Then $U = \oplus_{\xi \in Y^+} V_\xi(A)$ is an $h$-split subspace. This was shown in the proof of Proposition \ref{maximal h-split subspaces}. It remains to check that $B|_U = 0$ and that $B$ defines a non-degenerate pairing between $U$ and $h(U)$. But it is straightforward to check that $V_\xi(A) \bot V_{\psi}(A)$ unless $\xi = \psi^{-1}$, and that $B$ defines a non-degenerate pairing between $V_\xi(A)$ and $V_{\xi^{-1}}(A)$. Thus, $B|_U = 0$ and $U$ is in perfect duality with $h(U) = \oplus_{\xi \in Y^-} V_\xi(A).$ \end{proof} Let $A = A(U,T)$ be a maximal $(Ad_h,F)$-split torus and let $j_A : A \to Z_{O(B)}(A)$ and $i_A : Z_{O(B)}(A) \to O(B)$ denote the inclusions. We wish to compute $(i_A)_*$ and $(j_A)_*$. We identify cohomology classes of $O(B)$ with quadratic forms. Let $W = (U + h(U))^\bot$. As $Z_{O(B)}(A) \cong A \times O(B|_W)$ we have \[H^1(Ad_h,Z_{O(B)}(A)) \cong (A / A^2) \times H^1(Ad_h,O(B|_W)).\] We identify the second factor with a set of quadratic forms. Then: \begin{proposition} \label{i_A_*} \[(j_A)_*(x) = (x,B|_{W \cap V^+})\] and \[(i_A)_*(x,B') = 2(B_{xh}|_{U}) \oplus B'\] \end{proposition} \begin{proof} The first equation follows from the fact that the cohomology of a product is the product of cohomologies. The second follows from Proposition \ref{computation of i_*}, since the quadratic form corresponding to $x \in A$ is $B_{xh}(u,v) = B(xh(u),v)$, and all elements of $A$ have only primary values of type $C$ on $W$.
\end{proof} In coordinates, if we let $T$ be the torus of diagonal matrices in some orthogonal basis of $U$ then $B_h \equiv Q([b_1,...,b_k])$ for some $b_i \in F^\times / (F^\times)^2$ and $x = diag(x_1,...,x_k)$. Then $B_{xh}$ is given in this basis of $U$ by $Q([x_1b_1,...,x_k b_k])$. Thus, we can write \begin{eqnarray*} \label{i_A and j_A} &(j_A)_*(diag(x_1,...,x_k)) = (diag(x_1,...,x_k), B|_{W \cap V^+})\\ & (i_A)_*(diag(x_1,...,x_k),B') = Q([b_1 x_1,...,b_k x_k]) \oplus B' \end{eqnarray*} \subsubsection{Stability of the Pair $(O(B),O(B^+) \times O(B^-) , Ad_h)$} Let $U$ be a maximal $(h,B)$-split subspace of $V$ and let $A = A(U,T)$ for some maximal $F$-split torus $T \subseteq GL(U)$. Let $W = (U + h(U))^\bot$. Let $k$ be the rank of the pair and fix a basis $v_1,...,v_k$ of $U$, orthogonal with respect to $B_h$. Let $u_1,...,u_m$ be a basis of $W$. Then $X = \{v_1,...,v_k,h(v_1),...,h(v_k),u_1,...,u_m\}$ is a basis of $V$. We shall use this basis to represent linear endomorphisms of $V$ as matrices. We also assume that $T$ consists of diagonal matrices with respect to $\{v_1,...,v_k\}$. \begin{theorem} \label{O archimedean} Let $F = \RR$. The following are equivalent: \begin{enumerate} \item The pair $(O(B),O(B^+) \times O(B^-) , Ad_h)$ is s-stable. \item The pair $(O(B),O(B^+) \times O(B^-) , Ad_h)$ is p-stable. \item The pair $(O(B),O(B^+) \times O(B^-) , Ad_h)$ is stable. \item Either $B^+$ or $B^-$ is definite. \end{enumerate} \end{theorem} \begin{proof} Since stability implies p-stability, which implies s-stability, it will be sufficient to prove that $(1) \Rightarrow (4)$ and $(4) \Rightarrow (3)$. $(1) \Rightarrow (4)$: Assume on the contrary that $B^+$ and $B^-$ are both non-definite. We claim that in this case $(B_h)|_U$ is not definite. Indeed, let $u,v \in V^+$ be orthogonal vectors such that $B(u,u) = 1$ and $B(v,v) = -1$. Choose orthogonal $u',v' \in V^-$ such that $B(u',u') = 1$ and $B(v',v') = -1$.
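Explicitly (a direct verification of the next claim): since $h$ fixes $u,v$ and negates $u',v'$, \begin{align*} &B_h(u + v', u + v') = B(u + v', u - v') = B(u,u) - B(v',v') = 2, \\ &B_h(u' + v, u' + v) = B(u' + v, -u' + v) = -B(u',u') + B(v,v) = -2, \end{align*} while $B_h(u + v', u' + v) = 0$ since the vectors $u,v,u',v'$ are pairwise $B$-orthogonal ($V^+ \bot V^-$ as $h \in O(B)$); thus $B_h$ restricts to $Q([2,-2])$ on $Span\{u + v', u' + v\}$.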
Then $B_h|_{Span\{u + v', u' + v\}}$ is not definite. Extending $Span\{u + v', u' + v\}$ to a maximal $(h,B)$-split subspace, we see that there is a choice of $U$ for which $(B_h)|_U$ is not definite. It can be shown that this property is independent of the choice of $U$, but we shall just assume that we chose $U$ to have this property, since we are free to perform the computation with any maximal $(h,B)$-split subspace. Then, rearranging the basis of $U$ if necessary, we have $(B_h)|_U \equiv Q([1,-1,b_3,...,b_k])$. But then \[x = [diag(-1,-1,1,...,1)] \in KH^1(Ad_h,Z_{O(B)}(A),O(B)) \cap IH^1(Ad_h,A,Z_{O(B)}(A)).\] Indeed, by Proposition \ref{i_A_*} we have \[(i_A)_*([x]) = 2(B_{xh})|_U \oplus B|_{W \cap V^+}\] but \[B_{xh} \equiv Q([-1,1,b_3,...,b_k]) \equiv Q([1,-1,b_3,...,b_k]) = B_h\] so \[(i_A)_*([x]) \equiv 2(B_h)|_U \oplus B|_{W \cap V^+} \equiv B^+.\] Since $x$ is non-trivial, the pair is not s-stable. $(4) \Rightarrow (3):$ Assume without loss of generality that $B^+$ is positive definite. We should prove that the pair is stable. Let $r$ be a semi-simple symmetrization. Let $i_r : Z_{O(B)}(r) \to O(B)$ denote the inclusion. Then by Theorem \ref{the centralizer criterion} we have to show that $[r]$ is trivial in $H^1(Ad_h,Z_{O(B)}(r))$. Let $V = \oplus_\lambda V_\lambda(r)$ be the corresponding primary decomposition of $V$. We have to show that $\oplus_{\lambda} (i_{\lambda})_*(B'^\lambda) \equiv B$ where $B'^\lambda$ is the quadratic form representing the cocycle $r|_{U_\lambda(r)}$. Recall that we have the forms $B^\lambda$ representing the trivial cocycle of $Z_{O(B|_{U_\lambda(r)})}(r)$. If $\lambda \in \CC - [-\infty,0)$ then $r|_{U_\lambda}$ has a square root which is symmetric with respect to $Ad_h$, so it represents the trivial cocycle in $O(B|_{U_\lambda(r)})$. Thus, in this case $(i_\lambda)_*(B'^\lambda) = B|_{U_\lambda(r) \cap V^+}$. Assume that $\lambda < 0$.
Then by Proposition \ref{computation of i_*}, $B'^\lambda = -B^\lambda$ so \[(i_\lambda)_*(B'^\lambda) = -(i_\lambda)_*(B^\lambda) = -B|_{U_\lambda(r) \cap V^+}.\] As $(i_r)_*([r])$ is trivial we have \[(i_r)_*([r]) = \oplus_\lambda (i_\lambda)_*(B'^\lambda) \equiv B|_{V^+}.\] It follows that $(i_\lambda)_*(B^\lambda) \le B|_{V^+}$ so it is positive definite, but on the other hand $(i_\lambda)_*(B'^\lambda) = - (i_\lambda)_*(B^\lambda) \le B|_{V^+}$, so it must also be positive definite. This is absurd, unless there are no $\lambda \in [-\infty,0)$ at all. But then we already saw that $[r]$ is trivial in $H^1(Ad_h,Z_{O(B)}(r))$. \end{proof} We turn to the non-Archimedean case. \begin{theorem} \label{s-stability for non-Archimedean orthogonal} Let $F$ be non-Archimedean. The pair $(O(B),O(B^+) \times O(B^-) , Ad_h)$ is s-stable if and only if it is of rank at most 1. \end{theorem} \begin{proof} Assume that the rank of the pair is at most 1. If the rank is 0 there is nothing to prove, so we may assume that the rank of the pair is exactly 1. Let $A = A(U,T)$ be a (one-dimensional) maximal $(Ad_h,F)$-split torus of $O(B)$. Let $i_A : Z_{O(B)}(A) \to O(B)$ and $j_A : A \to Z_{O(B)}(A)$ denote the inclusions. Then by Proposition \ref{i_A_*}, $(j_A)_*$ is injective. The obstruction to s-stability is $Im((j_A)_*) \cap Ker((i_A)_*)$, which is isomorphic to $Ker((i_A)_* \circ (j_A)_*)$ since $(j_A)_*$ is injective. But \[(i_A)_* \circ (j_A)_*(x) = 2(B_{xh})|_{U} \oplus B|_{W \cap V^+} = 2x(B_h)|_U \oplus B|_{W \cap V^+}.\] Since $B^+ \equiv 2(B_h)|_U \oplus B|_{W \cap V^+}$ (the image of the trivial class), by the Witt Cancellation Theorem this form is equivalent to $B^+$ only if $2x(B_h)|_U \equiv 2(B_h)|_U$, that is, \[x B_h|_U \equiv B_h|_U,\] and since $B_h|_U$ is of rank 1 it follows that $x \in (F^\times)^2$, so it represents the trivial cocycle in $H^1(Ad_h,A)$. It follows that $Ker((i_A)_* \circ (j_A)_*)$ is trivial so the pair is s-stable. Conversely, if $dim(U) \ge 2$ then $B_h \equiv Q([b_1,b_2,...])$ for some $b_1,b_2 \in F^\times / (F^\times)^2$.
But then if $b_1 \ne b_2$ the element $x = diag(b_1 b_2,b_1b_2,1,...,1)$ represents a non-trivial cocycle in $Ker((i_A)_* \circ (j_A)_*)$, since \begin{align*} &(i_A)_* \circ (j_A)_*(x) = 2B_{xh}|_U \oplus B|_{W \cap V^+} \equiv \\ &\equiv Q([2b_2,2b_1,2b_3,...]) \oplus B|_{W \cap V^+} \equiv Q([2b_1,2b_2,2b_3,...]) \oplus B|_{W \cap V^+} \equiv B^+ \end{align*} If $b_1 = b_2$, choose $x = diag(a,a,1,...,1)$ for $a \in F^\times - (F^\times)^2$ such that $\{a,a\} = 1$. Then by a similar computation $x \in Ker((i_A)_* \circ (j_A)_*)$. \end{proof} In fact, most of the non-Archimedean pairs of this type are of rank $\ge 2$, hence not s-stable. Recall that the rank of the pair is the dimension of a maximal $(Ad_h,F)$-split torus, which is the dimension of a maximal $(h,B)$-split subspace of $V$ since $rank(A(U,T)) = dim(U)$. \begin{proposition} \label{rank estimate} Let $r$ denote the rank of the pair $(O(B), O(B^+) \times O(B^-), Ad_h)$. Then \[r = min\{dim(V^+),dim(V^-), \mu(B)\}.\] In particular, if $F$ is non-Archimedean, $dim(V) \ge 7$, $dim(V^+) \ge 2$ and $dim(V^-) \ge 2$, then the pair is not s-stable. \end{proposition} \begin{proof} Let $m = min\{dim(V^+),dim(V^-), \mu(B)\}$; we prove the proposition by induction on $m$. If $m = 0$ it is clear. Assume that $m > 0$. We claim that in this case there is a non-trivial $(h,B)$-split subspace in $V$. Indeed, since $\mu(B) > 0$ there is $0 \ne v \in V$ such that $B(v) = 0$. Let $v = v^+ + v^-$ be the decomposition of $v$ into a sum of eigenvectors of $h$. If $B(v^+,v^+) \ne 0$ and $B(v^-,v^-) \ne 0$ then $U = Span\{v\}$ is $(h,B)$-split. If, on the other hand, one of them vanishes, say $B(v^+,v^+)$, then $\mu(B^+) \ge 1$. Thus, by Proposition \ref{the hyperbolic lemma} $\mathcal{H}_1 \le B^+$ and therefore $Rep(B^+) = F^\times/(F^\times)^2$. Let $u \in V^-$ be any vector such that $B(u) \ne 0$. We can choose $w \in V^+$ such that $B(w) = -B(u)$. But then the space $Span\{u + w\}$ is $(h,B)$-split.
Take any one-dimensional $(h,B)$-split subspace $U = Span\{u\}$ of $V$ and consider the subspace $W = (U + h(U))^\bot$ of $V$. We claim that $r = rank(O(B|_W), O(B|_{W \cap V^+}) \times O(B|_{W \cap V^-}), Ad_h) + 1.$ Indeed, the rank is the dimension of any maximal $(h,B)$-split subspace. Let $X \subseteq W$ be a maximal $(h,B|_W)$-split subspace. We claim that $U \oplus X$ is a maximal $(h,B)$-split subspace, so \[r = dim(U \oplus X) = 1 + dim(X) = 1 + rank(O(B|_W), O(B|_{W \cap V^+}) \times O(B|_{W \cap V^-}), Ad_h).\] Indeed, $X + U$ is $(h,B)$-split, so we have to show that it cannot be extended to a larger $(h,B)$-split subspace. Suppose that $X + U \subset Y$ where $Y$ is $(h,B)$-split, and let $y \in Y \backslash (X \oplus U)$ be a vector in $Y$ such that $y \bot h(X + U)$ and $B(y,h(y)) \ne 0$. We can write $y = a u + b h(u) + w$ for $a,b \in F$, $w \in W$. Then the condition $B(y,h(u)) = 0$ implies $a = 0$. Since $Y$ is $(h,B)$-split, \[B(y,u) = 0 \Rightarrow b = 0.\] But then $y \in W$ and $Span\{y\} + X$ is an $(h,B)$-split subspace of $W$. Indeed, $B|_{X + Span\{y\}} = 0$ and $h(Span\{y\} + X) \cap (Span\{y\} + X) = \{0\}$ since $Span\{y\} + X \subseteq Y$. Also $B_h|_{X + Span\{y\}}$ is non-degenerate since it is a direct sum of $B_h|_X$ and $B_h|_{Span\{y\}}$, which are both non-degenerate by construction. By the maximality of $X$ we must have $y \in X$, contradicting the choice of $y$; hence $Y = X + U$ and $X + U$ is a maximal $(h,B)$-split subspace of $V$. We have seen, then, that restricting to $W$ decreases the rank by exactly $1$. If we can show that \[min\{dim(W \cap V^+), dim(W \cap V^-), \mu(B|_W)\} = min\{dim(V^+), dim(V^-), \mu(B)\} - 1\] then we are done by the inductive hypothesis.
We have \begin{eqnarray*} &dim(V^+) = dim(V^+ \cap (U + h(U))) + dim(V^+ \cap W) = 1 + dim(V^+ \cap W)\\ &dim(V^-) = dim(V^- \cap (U + h(U))) + dim(V^- \cap W) = 1 + dim(V^- \cap W)\\ &\mu(B) = \mu(B|_W \oplus B|_{U + h(U)}) = \mu(B|_W \oplus \mathcal{H}_1) = \mu(B|_W) + 1, \\ \end{eqnarray*} so $m$ also decreases by $1$ when restricting to $W$. For the ``in particular'' part, since every quadratic form $B$ of rank $\ge 5$ over a non-Archimedean field represents $0$, if $dim(V) \ge 7$ then $\mu(B) \ge 2$. If also $dim(V^+) \ge 2$ and $dim(V^-) \ge 2$ then $m \ge 2$, so by the formula above $r \ge 2$ and the pair is not s-stable. \end{proof} We next consider the p-stability of the pair in the non-Archimedean case. As p-stability implies s-stability, which by Theorem \ref{s-stability for non-Archimedean orthogonal} holds only if the pair is of rank at most 1, the pair cannot be p-stable unless it is of rank $\le 1$. By the ``in particular'' part of Proposition \ref{rank estimate} we have to check only the cases where either $min(dim(V^+),dim(V^-)) \le 1$ or $dim(V) \le 6$. We start with the first case. \begin{proposition} If $min\{dim(V^+),dim(V^-)\} = 1$ then $(O(B),O(B^+) \times O(B^-) , Ad_h)$ is stable, and in particular it is p-stable. \end{proposition} \begin{proof} As stability implies p-stability, it is sufficient to prove the stability of the pair. Assume, without loss of generality, that $dim(V^+) = 1$. Let $r$ be a semi-simple symmetrization. We wish to prove that $[r]$ is trivial in $H^1(Ad_h,Z_{O(B)}(r))$. Let $V = \oplus_{\lambda} V_\lambda(r)$ be a primary decomposition of $V$ with respect to $r$. By Lemma \ref{dimension formula}, the condition $dim(V^+) = 1$ implies that \[|\mathcal{A}(r) \cup \mathcal{B}(r)| \le 2.\] Since $|\mathcal{A}(r)|$ is always even, $r$ cannot have primary values of type $A$ and $B$ simultaneously.
Furthermore, since $dim(V_\lambda(r)) \ge 2$ for every $\lambda \in \mathcal{B}(r)$, we can have either one pair of primary values of type $A$ which are in $F$, or a single primary value of type $B$ in a quadratic extension of $F$, or only primary values of type $C$. In the third case we have $Z_{O(B)}(r) = O(B|_{V_1(r)}) \times O(B|_{V_{-1}(r)})$, and $r$ represents a cocycle which is trivial on the first factor. By Lemma \ref{consequence of Witt Cancellation} we see that the map $(i_r)_*$ is injective on the set of cocycles which come from the second factor. This shows that $[r]$ is trivial in $H^1(Ad_h,O(B))$. Assume that $r$ has a single pair of primary values of type $A$ or a single primary value of type $B$. Then we have \[V = U_\lambda(r) \oplus V_1(r) \oplus V_{-1}(r)\] where as usual $U_\lambda(r) := V_\lambda(r) + V_{\lambda^{-1}}(r)$. Since $[r]$ is trivial in $H^1(Ad_h,GL(V))$ we have $dim(V_1(h)) = dim(V_1(rh)) = 1$. It follows that $V_1(rh) + V_1(h) \subseteq U_\lambda(r)$, since both are one-dimensional and intersect $U_\lambda(r)$ non-trivially. But then $V_{-1}(r) \cap V_{-1}(h) \subseteq V_{1}(rh) \subseteq U_\lambda(r)$ so $V_{-1}(r) \cap V_{-1}(h) = 0$. Furthermore, $V_{-1}(r) \cap V_1(h) \subseteq V_1(h) \subseteq U_\lambda(r)$ so $V_{-1}(r) \cap V_1(h) = 0$. Since $V_{-1}(r)$ is the sum of its intersections with the eigenspaces of $h$, we deduce that $V_{-1}(r) = 0.$ But then $Z_{O(B)}(r) = Z_{O(B|_{U_\lambda(r)})}(r) \times O(B|_{V_1(r)})$. In both cases the map $(i_\lambda)_*$ is injective. Indeed, if $\lambda$ is of type $A$ then $(i_\lambda)_* (B'^\lambda) = 2B'^\lambda$, and multiplication by $2$ is injective on $\mathcal{QF}(F)$. A similar argument works also in the case where $\lambda$ is of type $B$. The injectivity of $(i_r)_*$ now follows from the Witt Cancellation Theorem.
\end{proof} \subsubsection{Small Dimensions} So far we have investigated the stability of all Archimedean pairs of this type and all the non-Archimedean pairs except the pairs with the following dimensions of $V^+$ and $V^-$, up to symmetry: \begin{itemize} \item $dim(V_1(h)) = 4, dim(V_{-1}(h)) = 2$. \item $dim(V_1(h)) = 3, dim(V_{-1}(h)) = 3$. \item $dim(V_1(h)) = 3, dim(V_{-1}(h)) = 2$. \item $dim(V_1(h)) = 2, dim(V_{-1}(h)) = 2$. \end{itemize} Before we treat them one by one, we want to present some general results concerning p-stability and stability of the pair. \begin{proposition} \label{criterion for p-stability - orthogonal} Let $A(U,T)$ be a maximal $(Ad_h,F)$-split torus of rank 1 in $V$. Let $W = (U + h(U))^\bot$ and $(B_h)|_U \equiv Q([a])$. Then the pair $(O(B),O(B^+) \times O(B^-), Ad_h)$ is p-stable if and only if there do not exist a scalar $1 \ne \lambda \in F^\times / (F^\times)^2$ and a form $B' \le B|_W$ such that \[ Q([2a\lambda]) \oplus B' \equiv B^+.\] \end{proposition} \begin{proof} We have $Z_{O(B)}(A(U,T)) \cong A(U,T) \times O(B|_W)$, so \[H^1(Ad_h,Z_{O(B)}(A(U,T))) \cong F^\times / (F^\times)^2 \times \{B' \in QF(F) : B' \le B|_W\}.\] By the calculation of $i_*$ we know that, under the identification above, $i_*((\lambda, B')) = Q([2a\lambda]) \oplus B'$, and it is in the kernel of $i_*$ if and only if $Q([2a\lambda]) \oplus B' \equiv B^+$. Note that if $\lambda = 1$ and $Q([2a]) \oplus B' \equiv B^+$ then $B' \equiv B|_{W \cap V^+}$ by Witt cancellation, so the corresponding class is trivial. Thus, the result follows from the cohomological criterion for p-stability. \end{proof} The following lemma is an immediate consequence of the classification of quadratic forms. \begin{lemma} \label{quadratic forms inclusion criterion} Let $F$ be a non-Archimedean local field. Let $B \in QF_n(F)$ and $C \in QF_m(F)$, where $m \le n$. Then $C \le B$ if and only if there exists a form $D \in QF_{n - m}(F)$ such that \[det(D) = det(B)det(C)\] and \[H(D) = \{det(B),-det(C)\}H(B)H(C).\] \end{lemma} \begin{proposition} Let $F$ be non-Archimedean.
If either $dim(V^+) = 2$ and $dim(V^-) = 4$ or $dim(V^+) = dim(V^-) = 3$, then the pair $(O(B), O(B^+) \times O(B^-), Ad_h)$ is not p-stable. \end{proposition} \begin{proof} Since in this case $dim(V) = 6 \ge 5$, it follows that $\mu(B) \ge 1$. If $\mu(B) \ge 2$ then by Proposition \ref{rank estimate} the rank of the pair is at least $2$ and therefore by Theorem \ref{s-stability for non-Archimedean orthogonal} it is not s-stable, hence not p-stable. Thus, we may assume that the pair is of rank 1. Let $A(U,T)$ be a maximal $(Ad_h,F)$-split torus and $W = (U + h(U))^\bot$. Assume first that $dim(V^+) = 2$. Then, by diagonalizing $B$ on $U + h(U)$ and then on $W$ we can find representations $B^+ \equiv Q([a,b])$ and $B^- \equiv Q([-a,c,d,e])$. By Proposition \ref{criterion for p-stability - orthogonal}, we only have to find $1 \neq \lambda \in F^\times / (F^\times)^2$ and $\tau \in Rep(Q([b,c,d,e]))$ such that $Q([\lambda a, \tau]) \equiv Q([a,b])$. Since every non-degenerate quadratic form of rank $4$ represents all the non-zero quadratic residues, $\tau$ in fact can be any element of $F^\times / (F^\times)^2$. If $a \ne b$ we can choose $\lambda = ab$ and $\tau = a$; otherwise choose any $\lambda \ne 1$ with $\{\lambda,\lambda\} = 1$ and take $\tau = \lambda b$, to get a non-trivial element of the kernel. Consider now the case $dim(V^+) = 3$. We can find representations $B^+ \equiv Q([a,b,c])$ and $B^- \equiv Q([-a,d,e])$. To prove that the pair is not p-stable, we need to find $\lambda \ne 1$ and a quadratic form $C$ of rank 2 such that $C \le Q([b,c,d,e])$ and $Q([\lambda a]) \oplus C \equiv Q([a,b,c])$. Set $D = Q([b,c,d,e])$. Then by Lemma \ref{quadratic forms inclusion criterion}, $C \le D$ if and only if there exists a form $E$ of rank 2 such that \[det(E) = det(D)det(C)\] and \[H(E) = \{-det(C), det(D)\}H(C)H(D).\] The only pair $(H(E), det(E))$ which is not realizable as the invariants of a rank $2$ form $E$ is $(-1,-1)$.
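To justify this claim (a short verification): if $E = Q([x,y])$ has $det(E) = xy \equiv -1$ modulo squares then $y \equiv -x$, so $E \equiv Q([x,-x])$ and \[H(E) = \{x,-x\} = 1\] by the standard identity for the Hilbert symbol; hence the pair $(H(E),det(E)) = (-1,-1)$ cannot occur, while every other pair of invariants is realized by a suitable binary form.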
Thus, the only form which can fail to satisfy $C \le D$ is $\mathcal{H}_1$, and this can happen only if $D$ does not represent $0$. On the other hand, $|Rep(B^+)| \ge 3$ since it is a form of rank $3$. Thus, there are at least two options for $C$, namely $B^+ - Q([\alpha])$ and $B^+ - Q([\beta])$, where $\alpha, \beta$ are two different quadratic residues represented by $B^+$ which are different from $a$. Since at least one of these two forms is different from $\mathcal{H}_1$, at least one of the pairs $(\alpha a,B^+ - Q([\alpha]))$ or $(\beta a, B^+ - Q([\beta]))$ corresponds to a non-trivial cocycle in $\KerH{Ad_h}{Z_{O(B)}(A(U,T))}{O(B)}$, so the pair is not p-stable. \end{proof} Next we consider the case $dim(V^+) = dim(V^-) = 2$. We consider this case first because we will use it for the analysis of the case $dim(V^+) = 2, dim(V^-) = 3$. \begin{proposition} \label{case 2,2} Assume that $dim(V^+) = dim(V^-) = 2$, and assume that the pair is of rank $1$. Let $U$ be a maximal $(h,B)$-split subspace of $V$ and $W = (U + h(U))^\bot$. Then the pair is p-stable if and only if $F$ is of odd residual characteristic and none of $B^+,B^-$ or $B|_W$ is equivalent to $\mathcal{H}_1$. \end{proposition} \begin{proof} Write $B^+ = Q([a,b])$ and $B^- = Q([-a,c])$. Then by Proposition \ref{criterion for p-stability - orthogonal}, the pair is p-stable if and only if there are no $x,y \in F^\times / (F^\times)^2$ such that $x \ne a$, $y \in Rep(Q([b,c]))$ and $Q([a,b]) \equiv Q([x,y])$. By comparing determinants we see that if $Q([a,b]) \equiv Q([x,y])$ then there exists $z$ such that $x = za$, $y = zb$. An element $\gamma \in F^\times$ is represented by $Q([\alpha,\beta])$ if and only if $\{\gamma, -\alpha \beta\} = \{\alpha,\beta\}$. Thus, $z$ must be a solution of the system of two linear equations over $\ZZ/ 2 \ZZ$: \[\begin{cases} &\{bz,-bc\} = \{b,c\} \\ &\{az,-ab\} = \{a,b\} \\ \end{cases}\] or equivalently \[\begin{cases} &\{z,-bc\} = 1 \\ &\{z,-ab\} = 1.
\\ \end{cases}\] If $F$ is of even residual characteristic, then $dim_{\ZZ / 2\ZZ}(F^\times / (F^\times)^2) \ge 3$, so since these equations are homogeneous there is a non-trivial solution, which gives a counterexample to p-stability. If $F$ is of odd residual characteristic then $dim_{\ZZ / 2\ZZ}(F^\times / (F^\times)^2) = 2$, so there is a non-trivial solution if and only if the equations are linearly dependent, that is, when they coincide or one of them is trivial. This happens exactly when $-bc = -ab$, when $b = -c$, or when $a = -b$. In the first case $B^-$ is hyperbolic, in the second $B|_W$ is hyperbolic and in the third $B^+$ is hyperbolic. \end{proof} Next we treat the only remaining case: $dim(V^+) = 2$, $dim(V^-) = 3$. \begin{lemma} \label{case 2,3 prep} If $dim(V^-) = 3$ and $dim(V^+) = 2$ then the pair is p-stable (resp. stable) if and only if there does not exist a one-dimensional subspace $W \subseteq V^-$ for which the pair $(O(B|_{W^\bot}),O(B|_{W^\bot \cap V^+}) \times O(B|_{W^\bot \cap V^-}), Ad_h)$ is not p-stable (resp. not stable). \end{lemma} \begin{proof} We prove this for stability. The case of p-stability is similar and we leave it to the reader. In one direction, assume that $r$ is an unstable semi-simple symmetrization. Then we claim that \[dim(V_1(r) \cap V^-) \ge 1.\] Indeed, we have \begin{align*} &-1 = dim(V_1(rh)) - dim(V_{-1}(rh)) = \\ &=dim(V_1(r) \cap V_1(h)) + dim(V_{-1}(r) \cap V_{-1}(h)) - dim(V_1(r) \cap V_{-1}(h)) - dim(V_{-1}(r) \cap V_1(h)) \end{align*} while \begin{align*} &-1 = dim(V_1(h)) - dim(V_{-1}(h)) = \\ &=dim(V_1(r) \cap V_1(h)) + dim(V_{-1}(r) \cap V_{1}(h)) - dim(V_1(r) \cap V_{-1}(h)) - dim(V_{-1}(r) \cap V_{-1}(h)). \end{align*} Subtracting these two equations we get \[0 = 2(dim(V_{-1}(r) \cap V_{-1}(h)) - dim(V_{-1}(r) \cap V_1(h)))\] so $V_{-1}(r)$ intersects $V^+$ and $V^-$ in subspaces of the same dimension. But then \[dim(V_1(r) \cap V_1(h)) - dim(V_1(r) \cap V_{-1}(h)) = -1\] so $dim(V_1(r) \cap V_{-1}(h)) > dim(V_1(r) \cap V_1(h)) \ge 0$.
Let $W \subseteq V^- \cap V_1(r)$ be a one-dimensional subspace. Then $r|_{W^\bot}$ is an unstable symmetrization in $O(B|_{W^\bot})$: it is clearly unstable, and it is a symmetrization by Lemma \ref{consequence of Witt Cancellation}. Conversely, suppose that there is such a line $W$. Then, given an element $r \in O(B|_{W^\bot})$ which is an unstable symmetrization, the element $r \oplus Id_W$ is an unstable symmetrization in $O(B)$. The fact that it is unstable follows from Lemma \ref{consequence of Witt Cancellation}. \end{proof} \begin{proposition} \label{case 2,2 implies case 2,3} Assume that $dim(V^+) = 2$ and $dim(V^-) = 3$. The pair is p-stable if and only if it is of rank at most $1$, $F$ is of odd residual characteristic and none of $B^+,B^-$ or $B|_{W}$ represents 0. \end{proposition} \begin{proof} The result follows directly from Lemma \ref{case 2,3 prep} and Proposition \ref{case 2,2}. \end{proof} Finally, we consider the stability of the pair in the cases $(dim(V^+),dim(V^-)) =(2,2) \text{ or }(2,3)$. \begin{proposition} \label{case 2,2 stable} Let $dim(V^+) = dim(V^-) = 2$ and $F$ non-Archimedean. The pair is stable if and only if it is p-stable, $F$ is of odd residual characteristic, $B^+ \not\equiv B^-$, and there are no $a,c \in F^\times$ such that $B^+ \equiv Q([a,c])$ and $B^- \equiv Q([a,-c])$. \end{proposition} \begin{proof} We start by showing that, assuming the conditions of the proposition, the pair is stable. Let $r$ be a semi-simple symmetrization. Let $X = \mathcal{A}(r) \cup \mathcal{B}(r) \cup \mathcal{C}(r)$ be the set of primary values of $r$. We shall consider all possible options for $X$ and show that in each case either $r$ is stable or one of the conditions listed in the proposition is violated. This will show that if all the conditions hold then the pair is stable.
We first list the options for $X$: since primary values of type $A$ come in pairs, \[|\mathcal{A}(r)| \in \{0,2,4\}.\] Since every primary subspace of type $B$ is even dimensional we have \[|\mathcal{B}(r)| \in\{0,1,2\}.\] Finally, since $r$ is a symmetrization in $GL(V)$ we have $dim(V_{-1}(r) \cap V^+) = dim(V_{-1}(r) \cap V^-)$. Then, since the same equality holds for all primary values of type $A$ and $B$, we deduce from the equation $dim(V^+) = dim(V^-)$ that also $dim(V_{1}(r) \cap V^+) = dim(V_{1}(r) \cap V^-)$. We deduce that both $V_1(r)$ and $V_{-1}(r)$ are even dimensional. These facts combined show that one of the following holds: \begin{enumerate} \item $r$ has only primary values of type $A$. \item $r$ has one pair of primary values of type $A$ and one primary value of type $B$ or $C$. \item $X = \{1, -1\}$. \item $X = \{-1\}$. \item $X = \{1\}$. \item $X = \{\lambda, 1\}$ for $\lambda \in \mathcal{B}(r)$ coming from a quadratic extension of $F$. \item $X = \{\lambda, -1\}$ for $\lambda \in \mathcal{B}(r)$ coming from a quadratic extension of $F$. \item $X = \{\lambda, \tau\}$ for $\lambda, \tau \in \mathcal{B}(r)$ coming from quadratic extensions of $F$. \item $X = \{\lambda\}$ for $\lambda \in \mathcal{B}(r)$ in a degree-4 extension of $F$. \end{enumerate} In case 1, since $B|_{U_\lambda(r)}$ is hyperbolic for each $\lambda$ of type $A$, we have $\mu(B) \ge 2$, so by Proposition \ref{rank estimate} the pair is of rank $2$, contrary to the assumptions. In case 2 let $\{\lambda, \lambda^{-1}\} = \mathcal{A}(r)$. Then $r \in A(V_\lambda(r),Z_{GL(V_\lambda(r))}(r))$, so it lies in a maximal $(Ad_h,F)$-split torus. As the pair is s-stable we deduce that $r$ is stable. Cases 3 and 6 follow from Lemma \ref{consequence of Witt Cancellation}. In cases 4,5 we have $r \in Z(O(B))$ so it is clearly stable ($r = \bar{r}$ in this case). Consider case 8. Let $F[\lambda] = E$ and $F[\tau] = K$.
Then $V_\lambda(r) \cong E$ as an $E$-vector space and similarly $V_\tau(r) \cong K$. We have, by Proposition \ref{cohomology of descendant of O} and the fact that rank-1 Hermitian forms are classified by their determinant: \[H^1(Ad_h,Z_{O(B)}(r)) \cong N_{E/F}(E^\times) / (F^\times)^2 \times N_{K/F}(K^\times) / (F^\times)^2 \cong \ZZ/ 2\ZZ \oplus \ZZ / 2\ZZ,\] where the second isomorphism is due to the fact that for a local field of odd residual characteristic, $|F^\times / (F^\times)^2| = 4$ while $|F^\times / N_{E/F}(E^\times)| = 2$ for every quadratic extension $E/F$. We have $V^+ \cong F \oplus F$ where the sum is orthogonal with respect to $B$. Choose a generator of each factor, and represent $B^+$ as $Q([a,b])$. Then the quadratic form corresponding to $[r]$ is $Q([a \alpha,b \beta])$, which must be equivalent to $B^+$ since $r$ is a symmetrization. This forces $\alpha = \beta$ and in particular \[B^+ \equiv \alpha B^+.\] If $\alpha = \beta = 1$ then $r$ is stable. Otherwise, since a quadratic extension is characterized by its norm group in $F$, $E \cong K$ and $\alpha = \beta$ is the unique non-square norm from $E$ to $F$. We shall treat this case. Let $x \in E$ be an element of trace 0. Then $V^- = x V^+$, and therefore $B^- \equiv N_{E/F}(x)B^+$. If $N_{E/F}(x)$ is a square in $F$ then $B^+ \equiv B^-$. Otherwise, since there is a unique non-trivial element in $N_{E/F}(E^\times) / (F^\times)^2$, we have \[\alpha = N_{E/F}(x).\] But then the equations $B^+ \equiv \alpha B^+$ and $B^- \equiv N_{E/F}(x)B^+$ together give \[B^+ \equiv \alpha B^+ \equiv N_{E/F}(x)B^+ \equiv B^-.\] We turn to case 7: Let $E = F[\lambda]$.
We have by Proposition \ref{cohomology of descendant of O}, \[H^1(Ad_h,Z_{O(B)}(r)) \cong N_{E/F}(E^\times) / (F^\times)^2 \times \{C \in \mathcal{QF}(F) : C \le B|_{V_{-1}(r)}\}.\] The cocycle $[r] \in H^1(Ad_h,Z_{O(B)}(r))$ is represented by a pair of the form $(\alpha,B|_{V_{-1}(r) \cap V_{-1}(h)})$ where $\alpha \in N_{E/F}(E^\times) / (F^\times)^2$. The quadratic form corresponding to $[r]$ is then \[\alpha B|_{V_{\lambda}(r) \cap V_1(h)} \oplus B|_{V_{-1}(r) \cap V_{-1}(h)}.\] Let $B|_{V_{\lambda}(r) \cap V_1(h)} \equiv Q([a])$, $B|_{V_{\lambda}(r) \cap V_{-1}(h)} \equiv Q([b])$, $B|_{V_{-1}(r) \cap V_{1}(h)} \equiv Q([c])$ and $B|_{V_{-1}(r) \cap V_{-1}(h)} \equiv Q([d])$. Then we have $a = b N_{E/F}(x)$ for $x$ an element of trace 0 in $E$. The fact that $r$ is a symmetrization implies \[Q([a,c]) \equiv Q([a \alpha, d]),\] so $\alpha = cd$. If $\alpha = 1$ then $[r]$ is trivial in $H^1(Ad_h,Z_{O(B)}(r))$, hence $r$ is stable. Thus, we may assume that $\alpha$ is the unique non-trivial element of $N_{E/F}(E^\times) / (F^\times)^2$. We now consider two possibilities: either $N_{E/F}(x) \in (F^\times)^2$ or $N_{E/F}(x) = \alpha$. If $N_{E/F}(x) = \alpha$ then we have \[B^+ \equiv Q([a,c]) \equiv Q([a \alpha, d]) = Q([a N_{E/F}(x),d]) \equiv B^-.\] If $N_{E/F}(x) \in (F^\times)^2$ then necessarily $E = F[\sqrt{-1}]$ and $\alpha$ can be taken to be $-1$. In this case we get $c = -d$ and therefore \begin{align*} B^+ &\equiv Q([a,c]) \\ B^- &\equiv Q([a,-c]). \end{align*} Finally, we treat case 9. Let $K = F[\lambda]$ and $E = F[\lambda + \lambda^{-1}]$. Then $V^+ \cong E$ and $V^- \cong x E$ for some $x$ of trace $0$. If $N_{K/E}(x) \in (E^\times)^2$ then $B^+ \equiv B^-$. Otherwise $N_{K/E}(x)$ is the unique non-trivial element of $N_{K/E}(K^\times) / (E^\times)^2$ and thus either $[r] \in H^1(Ad_h,Z_{O(B)}(r)) \cong N_{K/E}(K^\times) / (E^\times)^2$ is trivial and then $r$ is stable, or $[r]$ corresponds to $N_{K/E}(x)$ and then $B^+ \equiv B^-$.
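Before turning to the converse direction, the outcome of the nine cases can be summarized as follows (this table merely restates the analysis above):

```latex
% Summary of the case analysis for the set X of primary values of r:
\[
\begin{array}{l|l}
\text{Case} & \text{Conclusion} \\ \hline
1 & \mu(B) \ge 2, \text{ so the pair is of rank } 2 \\
2 & r \text{ is stable, by s-stability} \\
3,\,6 & r \text{ is stable, by Witt cancellation} \\
4,\,5 & r \in Z(O(B)), \text{ hence stable} \\
7 & r \text{ is stable, or } B^+ \equiv B^-, \text{ or } (B^+,B^-) \equiv (Q([a,c]),Q([a,-c])) \\
8,\,9 & r \text{ is stable, or } B^+ \equiv B^- \\
\end{array}
\]
```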
Conversely, we have to show that if the pair is not p-stable, or $B^+ \equiv B^-$, or $F$ is of even residual characteristic, or $B^+ \equiv Q([a,c])$ and $B^- \equiv Q([a,-c])$ for some $a,c$, then the pair is not stable. In the first case it follows from Theorem \ref{stable then p-stable}. Assume that $B^+ \equiv B^-$ and write $B^+ = B^- = Q([a,b])$. We may assume that $a \ne -b$, since otherwise $\mu(B) = 2$, so by Proposition \ref{rank estimate} the pair is of rank $2$ and hence is not p-stable. Let $E = F[\sqrt{-ab}]$. Let $x$ be the unique non-trivial quadratic residue of $F$ which is a norm from $E$. Consider the Hermitian form $Q([a,b])_c(u,v) = Q([a,b])(u,c(v))$. Then $B \equiv \frac{1}{2} Tr_{E/F}(Q([a,b])_c)$. Choose $\xi \in E$ such that $\xi^{-1}c(\xi) \notin F$ and $N_{E/F}(\xi) = x$. Then $r = \xi^{-1}c(\xi)$ is unstable. Indeed, under the isomorphism $H^1(Ad_h,Z_{O(B)}(r)) \cong N_{E/F}(E^\times) / (F^\times)^2$, $[r]$ corresponds to $x$, which was chosen to be non-trivial. On the other hand, the form corresponding to $[r]$ is $Q([xa,xb])$. If $x = ab$ then it is clear that $Q([xa,xb]) \equiv Q([a,b])$. If $x \ne ab$, then $ab = 1$, since $ab = N_{E/F}(\sqrt{-ab})$ is a norm and $N_{E/F}(E^\times)/(F^\times)^2 = \{1,x\}$. But then $E = F[\sqrt{-1}]$, and $x$ is a lift to $F$ of a non-square in the residue field. But then $\{x,x\} = 1$, so $Q([xa,xb]) \equiv Q([a,b])$ since $a = b$. If $B^+ \equiv Q([a,c])$ and $B^- \equiv Q([a,-c])$ then we can get a counterexample to stability as in the analysis of case 7: If $\sqrt{-1} \in F$ then the pair is of rank 2 in this case, hence not s-stable. Otherwise, we let $E = F[\sqrt{-1}]$; rescaling $B$ by $a^{-1}$, we may assume $a = 1$. We can write $V = E \oplus F \oplus F$ where \[B|_E(x) = N_{E/F}(x) = (Re(x))^2 + (Im(x))^2\] and $B|_{F \oplus F} = Q([c,-c])$, in a basis of the form $\{(u,0), (0,v)\}$. We also choose $h$ so that $h|_E$ is conjugation and $h|_{F \oplus F} = diag(1,-1)$.
Now let $y \in E$ be an element whose norm is non-trivial modulo squares in $F$; the element $r$ defined by $r|_E = \frac{y}{c(y)}$ and $r|_{F \oplus F} = -1$ is easily seen to be unstable. Finally, assume that $F$ is of residual characteristic 2. Then the pair is unstable unless $B$ is anisotropic, by Proposition \ref{rank estimate} and Proposition \ref{case 2,2}. But then $det(B) = 1$, so $B^+ \equiv Q([a,b])$ and $B^- \equiv Q([xa,xb])$ for some $a,b,x \in F^\times / (F^\times)^2$. Since we assume that $B^+$ and $B^-$ are not equivalent, we have $\{x,-ab\} = -1$. If $x = -1$ then $\mu(B) = 2$ and the pair is not p-stable. Assume that $x \ne -1$ and $\{x,-ab\} = -1$. Let $E = F[\sqrt{-x}]$; we can write $B$ as $\frac{1}{2} Tr_{E/F}(Q([a,b])_c)$. Since \[dim_{\ZZ / 2\ZZ}(N_{E/F}(E^\times) / (F^\times)^2) = dim_{\ZZ/ 2\ZZ}(F^\times / (F^\times)^2) - 1 \ge 2,\] there is a non-trivial $y \in N_{E/F}(E^\times) / (F^\times)^2$ such that $\{y,-ab\} = 1$. Let $\xi$ be any element such that $r = \xi^{-1} c(\xi) \notin F$ and $N(\xi) = y$. Then, exactly as in the case $B^+ \equiv B^-$, $r$ is an unstable symmetrization, so the pair is unstable. \end{proof} This completes the verification of stability for the pair $(O(B),O(B^+) \times O(B^-), Ad_h)$. Let us summarize the results: \begin{theorem} The pair $(O(B), O(B^+) \times O(B^-), Ad_h)$ is s-stable exactly in the following cases: \begin{itemize} \item $F = \RR$ and either $B^+$ is definite or $B^-$ is definite. \item $F$ is non-Archimedean and \[min\{dim(V^+),dim(V^-), \mu(B)\} \le 1.\] \end{itemize} \end{theorem} \begin{theorem} \label{p-stability for O} The pair $(O(B), O(B^+) \times O(B^-), Ad_h)$ is p-stable exactly in the following cases: \begin{itemize} \item $F = \RR$ and either $B^+$ is definite or $B^-$ is definite. \item $F$ is non-Archimedean and $\mu(B) = 0$. \item $F$ is non-Archimedean and $V^+$ or $V^-$ is one dimensional.
\item $F$ is non-Archimedean of odd residual characteristic, $dim(V) \le 5$, $\mu(B) = 1$, $\mu(B^+) = 0$ and $\mu(B^-) = 0$. \end{itemize} \end{theorem} \begin{theorem} \label{stability for O} The pair $(O(B), O(B^+) \times O(B^-), Ad_h)$ is stable exactly when it is p-stable and one of the following holds: \begin{itemize} \item $F = \RR$ and either $B^+$ is definite or $B^-$ is definite. \item $F$ is non-Archimedean and $V^+$ or $V^-$ is one dimensional. \item $F$ is non-Archimedean of odd residual characteristic, $dim(V) \le 5$ and $(B^+, B^-)$ cannot be represented as $(Q([\pm a, \pm b ,...]), Q([\pm a, \pm b,...]))$ for some (not necessarily equal) signs $\pm$. \end{itemize} \end{theorem} \begin{corollary} While the stability of these pairs was known for pairs which satisfy the conditions in the first two bullets, there are examples of pairs $(B^+,B^-)$ for which the conditions in the first two bullets do not hold but the condition in the third does. For example, every pair of forms $(B^+,B^-)$ for which $rank(B^+) = rank(B^-) = 2$ and $det(B^+)det(B^-) \ne \pm 1$ in $F^\times / (F^\times)^2$ gives a new example of a stable pair. \end{corollary} Everything needed for this complete classification was established above; we add only a few comments. In the stability part, the conditions on the signs ensure that the rank of the pair is $\le 1$, that none of the forms is a direct factor of the other, and also exclude the case $B^+ = Q([a,c])$ and $B^- = Q([a,-c])$; they are just a way to encode these three conditions in one statement. In the p-stability part we omitted the condition that $B|_W$ does not represent 0, since if it does then we anyway have $\mu(B) \ge 2$. \subsection{The Pair $(U(B), U(B^+) \times U(B^-), Ad_h)$} Let $E/F$ be a quadratic extension with conjugation $c$ and let $V$ be a linear space over $E$. Let $B: V \times V \to E$ be a Hermitian form, and let $h \in U(B)$ be an element of order $2$. Let $V^+$ and $V^-$ be the eigenspaces of $h$ in $V$, and $B^+,B^-$ be the restrictions of $B$ to $V^+,V^-$ respectively.
The maximal $(Ad_h,F)$-split tori can again be described via split subspaces. \begin{definition} An $E$-linear subspace $W \subseteq V$ is $(B,h)$-split if it is $(B|_F,h)$-split as an $F$-linear subspace. \end{definition} Let $W$ be a $(B,h)$-split subspace of $V$. Let $T \subseteq GL_E(W)$ be a maximal $F$-split torus. Then we define \[A(W,T) = U(B) \cap A(W|_F,T|_F).\] This is by definition the set of unitary linear transformations $g$ such that $g|_W \in T$, $g|_{h(W)} = h(g|_W^{-1})h$ and $g|_{(W + h(W))^\bot} = Id_{(W + h(W))^\bot}$. An argument similar to the one given for $O(B)$ shows that all maximal $(Ad_h,F)$-split tori in $U(B)$ are of this form. Let $R = Z_{GL_E(W)}(T)$. This is a maximal $E$-split torus of $GL_E(W)$ and \[Z_{U(B)}(A(W,T)) \cong A(W,R) \times U(B|_{(W + h(W))^\bot}) \cong (E^\times)^k \times U(B|_{(W + h(W))^\bot}) \] where $k$ is the rank of the pair and $Ad_h$ acts by $Ad_h(a_1,...,a_k) = (c(a_1)^{-1},...,c(a_k)^{-1})$ on the first factor. By Proposition \ref{H^1(Ad_h,U(B))}, one can identify $H^1(Ad_h,U(B))$ with a subset of $Her(E/F)$, and we shall make this identification from now on. Let $A = A(W,T)$ be a maximal $(Ad_h,F)$-split torus, and $i_A, j_A$ be the inclusions of $Z_{U(B)}(A)$ in $U(B)$ and of $A$ in $Z_{U(B)}(A)$ respectively. The computation of the induced mappings on cohomology is similar to the computation in \ref{(O(B),O(B^+)timesO(B^-),Ad_h)}. We get: \begin{align*} &H^1(Ad_h,Z_{U(B)}(A)) \cong (F^\times / N_{E/F}(E^\times))^k \times \{C \in Her(E/F) : C \le B|_{(W + h(W))^\bot}\} \\ &H^1(Ad_h,A) \cong (F^\times / (F^\times)^2)^k \end{align*} and under these identifications \begin{align*} &(j_A)_* (a_1,...,a_k) = ((a_1,...,a_k), B|_{(W + h(W))^\bot \cap V^+}) \\ &(i_A)_*(x, C) = 2 (B_{xh})|_W \oplus C. \end{align*} \subsubsection{Stability of the Pair $(U(B), U(B^+) \times U(B^-), Ad_h)$} \begin{proposition} Let $F$ be non-Archimedean.
The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is s-stable if and only if it is of rank at most 1. \end{proposition} \begin{proof} Let $A(W,T) = A \subseteq U(B)$ be a maximal $(Ad_h,F)$-split torus. We have to compute $im((j_A)_*) \cap ker((i_A)_*)$ and check whether it is trivial. If the pair is of rank 1 then $H^1(Ad_h,A) \cong F^\times / (F^\times)^2$ and $im((j_A)_*) \cong F^\times / N_{E/F}(E^\times)$. But then $W$ is one dimensional, and if $det((B_h)|_W) = d$ then $det((B_{xh})|_W) = xd$, so if $x \notin N_{E/F}(E^\times)$ then the two forms are not equivalent. But then by the Hermitian version of the Witt Cancellation Theorem, $(i_A)_*([x])$ is non-trivial, so $im((j_A)_*) \cap ker((i_A)_*)$ is trivial. On the other hand, if the pair is of rank $\ge 2$ then we can find a non-square $x \in T$ such that $det(x) = 1$, and then $[x]$ is a non-trivial element of $im((j_A)_*) \cap ker((i_A)_*)$. \end{proof} As in the orthogonal case, we can easily calculate the rank of the pair. \begin{proposition} The rank of the pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is $min\{dim(V^+), dim(V^-), \mu(B)\}$. In particular the pair is s-stable if and only if \[min\{dim(V^+), dim(V^-), \mu(B)\} \le 1.\] \end{proposition} \begin{proposition} Let $F$ be non-Archimedean. If the pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is p-stable then $V^+$ or $V^-$ is one dimensional. In particular, if the pair is stable then $V^+$ or $V^-$ is one dimensional. \end{proposition} \begin{proof} Assume that $dim(V^+) \ge 2$ and $dim(V^-) \ge 2$. Assume without loss of generality that $dim(V^-) \ge dim(V^+)$. Since both $B^+$ and $B^-$ are of rank at least $2$, we can find $v^+ \in V^+$ and $v^- \in V^-$ such that $B^+(v^+) = -B^-(v^-) = 1$, by Theorem \ref{classification of Hermitian forms}. Then $W' := Span\{v^+ + v^-\}$ is easily seen to be $(B,h)$-split, so the pair is of rank at least one. Let $W \supseteq W'$ be a maximal $(B,h)$-split subspace and let $A = A(W,T)$ be a maximal $(Ad_h,F)$-split torus.
Then \[H^1(Ad_h,Z_{U(B)}(A)) \cong (F^\times / N_{E/F}(E^\times))^k \times \{C \in Her(E/F) : C \le B|_{(W + h(W))^\bot}\}\] for $k > 0$. If $k > 1$ then the pair is not s-stable, hence also not p-stable. As $B^+$ is of rank at least $2$, by Theorem \ref{classification of Hermitian forms} we have $B^+ \equiv Q([x])_{E,c} \oplus C$ for some Hermitian form $C$. As \[dim((W + h(W))^\bot) = dim(V) - 2 \ge dim(V^+) > rank(C),\] we have $C \le B|_{(W + h(W))^\bot}$. It follows that $(x,C)$ represents a non-trivial element of $ker((i_A)_*)$. \end{proof} In fact, this completes the stability verification in the non-Archimedean case, due to the following theorem of Aizenbud, Gourevitch, Rallis, Schiffmann, Sun and Zhu: \begin{theorem}[{\cite[\S 5]{dima3} and \cite[\S 7]{SZ}}] If $V^+$ or $V^-$ is one dimensional, the pair $(U(B),U(B^+) \times U(B^-), Ad_h)$ is stable. \end{theorem} We turn to the Archimedean case. \begin{proposition} Let $F = \RR$. The following are equivalent: \begin{itemize} \item The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is s-stable. \item The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is p-stable. \item The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is stable. \item Either $B^+$ or $B^-$ is definite. \end{itemize} \end{proposition} The proof is identical to the proof of Proposition \ref{O archimedean}. Just note that in the case of a primary value in $\RR$ the image of $[r_\lambda]$ is $\sign(\lambda) B_h|_{V_\lambda(r)}$, exactly as in the case of $O(B)$. We conclude: \begin{theorem} The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is s-stable exactly in the following cases: \begin{itemize} \item $F = \RR$ and either $B^+$ or $B^-$ is definite. \item $F$ is non-Archimedean and the pair is of rank at most 1. \end{itemize} \end{theorem} \begin{theorem} \label{stability U(B)} The pair $(U(B), U(B^+) \times U(B^-), Ad_h)$ is p-stable if and only if it is stable.
This is the case exactly when one of the following holds: \begin{itemize} \item $F = \RR$ and either $B^+$ or $B^-$ is definite. \item $F$ is non-Archimedean and either $B^+$ or $B^-$ is one-dimensional. \end{itemize} \end{theorem} \subsection {Summary} The following table summarizes the computations carried out in this section. In the columns Stable (resp. P-st., S-st.) we list necessary and sufficient conditions for stability (resp. p-stability, s-stability). The columns labeled $\RR$ are for the real pairs while the columns labeled ``non-Archimedean'' are for the non-Archimedean pairs. The entries (A), (B), (C) and (D) are described below the table due to their length. \newline \newline \begin{tabular}{ |l|l l l|l|l|l| } \hline &\multicolumn{3}{|c|}{$\mathbb{R}$}&\multicolumn{3}{|c|}{non-Archimedean}\\ \hline Pair & Stable \vline& S-st. \vline & P-st. & Stable & S-st. & P-st. \\ \hline {\footnotesize $GL(V),GL(V^+)\times GL(V^-)$} & \multicolumn{6}{|c|}{Always} \\ \hline {\scriptsize $SL(V),(GL(V^+)\times GL(V^-))\cap SL(V)$} & \multicolumn{6}{|c|}{$dim(V^+) \neq dim(V^-)$} \\ \hline {\footnotesize $GL_F(V),GL_E(V)$} & \multicolumn{6}{|c|}{Always} \\ \hline {\footnotesize $SL_F(V),SL_E(V)$} & \multicolumn{3}{|c|}{Always} & \multicolumn{3}{|c|}{dim V = 1} \\ \hline {\footnotesize $GL(V(E)),GL(V(F))$} & \multicolumn{6}{|c|}{Always} \\ \hline {\footnotesize $SL(V(E)),SL(V(F))$} & \multicolumn{6}{|c|}{dim(V)=2k+1} \\ \hline {\footnotesize $O(B),O(B^+)\times O(B^-)$} & \multicolumn{3}{|c|}{$B^+$ or $B^-$ is definite} & (A) & (C) & (B) \\ \hline {\footnotesize $U(B),U(B^+)\times U(B^-)$} & \multicolumn{3}{|c|}{$B^+$ or $B^-$ is definite} & (D) & (C) & (D) \\ \hline {\footnotesize $GL(V),O(B)$} & \multicolumn{3}{|c|}{$B$ is definite} & \multicolumn{3}{|c|}{dim(V)=1} \\ \hline {\footnotesize $GL(V),U(B)$} & \multicolumn{3}{|c|}{$B$ is definite} & \multicolumn{3}{|c|}{dim(V)=1} \\ \hline \end{tabular} \newline \newline Recall that $\mu(B)$ is the maximal dimension of a subspace
on which $B$ vanishes, and that $Q([a_1,...,a_k])$ is the quadratic form with diagonal entries $a_i$. \underline{(A)} : At least one of the following holds: \begin{itemize} \item $dim(V^+) = 1$ or $dim(V^-) = 1$. \item $F$ is of odd residual characteristic, $dim(V) \le 5$, $\mu(B) \le 1$, $\mu(B^+) = \mu(B^-) = 0$ and $(B^+,B^-)$ cannot be represented as $(Q([a,b,...]), Q([\epsilon_1 a, \epsilon_2 b,...]))$ for $\epsilon_i \in \{\pm 1\}$. \end{itemize} \underline{(B)} : At least one of the following holds: \begin{itemize} \item $dim(V^+) = 1$ or $dim(V^-) = 1$. \item $F$ is of odd residual characteristic, $dim(V) \le 5$, $\mu(B) \le 1$ and $\mu(B^+) = \mu(B^-) = 0$. \item $\mu(B) = 0$. \end{itemize} \underline{(C)} : $min\{dim(V^+), dim(V^-), \mu(B)\} \le 1.$ \underline{(D)} : $min\{dim(V^+), dim(V^-)\} \le 1$. \section{Applications to Representation Theory} In this section we link our geometric results to the representation theory of the symmetric pair. \subsection{p-stability and Multiplicity One for Principal Series Representations} Let $(G,H,\theta)$ be a symmetric non-Archimedean pair which is not p-stable. In this case, generically, one can associate a functional on the generalized principal series representation of $G$ to each open orbit of $H$ in $G/P$, where $P$ is a $\theta$-split parabolic subgroup of $G$. In particular, a generic generalized principal series has multiplicity greater than 1. This result is due to Blanc and Delorme in the non-Archimedean case and to van den Ban in the real case for unitary principal series, and we shall quote these results just after specifying some terminology. From now on, when talking about induced representations, we consider only representations induced from unramified characters of a parabolic subgroup in the non-Archimedean case, and representations induced from unitary characters of parabolic subgroups in the real case.
For a parabolic subgroup $P \subseteq G$, a Levi subgroup $M \subseteq P$ and a character $\psi : M \to \CC^\times$ we let $I_P(\psi)$ denote the normalized parabolic induction of $\psi$ to $G$. Let $P$ be a $\theta$-split parabolic subgroup, and $A$ a maximal $(\theta, F)$-split torus in $P$. Let $M = Z_G(A) = P \cap \theta(P)$, and let $X$ be the variety of complex unramified characters of $M$ which are defined over $F$ and preserved by $\sigma$. Since there is a homomorphism $A \to M/[M,M]$ with finite kernel and cokernel, $X$ can be identified with the space of complex unramified characters of $A$. Let $J_P(\psi)$ be the $H$-subrepresentation of $Res_H^G I_P(\psi)$ corresponding to the open orbits of $H$ in $G/P$ in the natural filtration of $Res_H^G I_P(\psi)$. \begin{theorem} \label{analytic continuation} Let $F$ be non-Archimedean. For $\psi$ outside of a union of complex hypersurfaces in $X$, there is an isomorphism $Hom_H(I_P(\psi), \CC) \cong Hom_H(J_P(\psi), \CC)$. \end{theorem} This follows from the main theorem of \cite{hel}, by dropping the unnecessary simplifying assumption of a single open orbit. \begin{lemma} For $\psi$ outside a union of complex hypersurfaces, $dim(Hom_H(I_P(\psi), \CC))$ equals the number of open $H$-orbits in $G/P$. In the real case, by a complex hypersurface in $X$ we mean an intersection of such a hypersurface with the real subspace of unitary characters. Such an intersection is easily seen to be a proper subset of the space of unitary characters. \end{lemma} \begin{proof} In the real case the lemma follows directly from \cite[Theorem 5.10]{ban} and \cite[Corollary 5.11 (ii)]{ban} by substituting $\xi = 1$, in the notation of van den Ban. We consider the non-Archimedean case, which, using the result of Blanc and Delorme above, goes along the same lines as the proof of the real case. Assume on the contrary that $(G,H,\theta)$ is not p-stable.
It will be sufficient to prove that, in the setting of Theorem \ref{analytic continuation}, $dim(Hom_H(J_P(\psi), \CC)) > 1$. Indeed, since the representation $I_P(\psi)$ is irreducible for a generic $\psi$ (\cite{}), $I_P(\psi)$ serves as a counterexample to the Gelfand property for $\psi$ outside some non-trivial Zariski-closed subset of $X^*(A)$. In fact, we can prove that outside a proper Zariski-closed subset, $dim(Hom_H(J_P(\psi),\CC))$ is exactly the number of open orbits of $H$ in $G/P$. Let $J_O(\psi)$ be the sub-space of $J_P(\psi)$ of functions supported on $\pi^{-1}(O)$, where $O$ is an open orbit of $H$ in $G/P$ and $\pi: G \to G/P$ is the natural projection. Then \[J_P(\psi) \cong \bigoplus_{O \text{ open orbit}} J_O(\psi),\] so it will be sufficient to prove that $Hom_H(J_O(\psi), \CC)$ is one-dimensional. By Frobenius reciprocity, if $x_0 \in O$, then \[Hom_H(J_O(\psi), \CC) \cong Hom_{Stab_H(x_0)}(\psi \otimes (\Delta_{O,x_0})^*, \CC),\] where $\Delta_{O,x_0}$ is the fiber of the sheaf of densities on $O$. We claim that $\Delta_{O,x_0}$ is trivial as a representation of $Stab_H(x_0)$. Indeed, as $Stab_H(x_0) = H \cap P'$ where $P'$ is some $\theta$-split parabolic subgroup, we have \[Stab_H(x_0) = H \cap P' = H \cap P' \cap \theta(P') = H \cap Z_G(A') = (Z_G(A'))^\theta.\] As $Z_G(A')$ is reductive, and the fixed-point subgroup of an involution is always reductive, $Stab_H(x_0)$ is reductive. It follows that the modular character of $Stab_H(x_0)$ is trivial, as is that of $H$. Thus, as $\Delta_{O,x_0}$ is the quotient of these, it is trivial. Finally, because $\psi \in X$, it is trivial on $H$. It follows that \[Hom_H(J_O(\psi), \CC) \cong Hom_{Stab_H(x_0)}(\psi \otimes (\Delta_{O,x_0})^*, \CC) \cong Hom_{Stab_H(x_0)} (\CC,\CC) \cong \CC. \] \end{proof} \begin{theorem} \label{gelfand then p-stable} Let $(G,H,\theta)$ be a symmetric pair. If $(G,H)$ is a Gelfand pair, then $(G,H,\theta)$ is p-stable. \end{theorem} \begin{proof} We prove the contrapositive.
Since for $\psi$ outside a union of finitely many complex hypersurfaces the representation $I_P(\psi)$ has two independent $H$-invariant functionals, it will be sufficient to prove that we can choose $\psi$ so that $I_P(\psi)$ is irreducible outside this union of complex hypersurfaces. In the real case the irreducibility follows from \cite[Proposition 3.7]{ban}. Since the proof uses only unitarity and the filtration of $r_P(I_P(\psi))$, where $r_P$ is the Jacquet functor, the same proof works in the non-Archimedean case. \end{proof} \subsection{Stability and the Gelfand Property} Throughout this section we assume that $F \ne \CC$. In the case $F = \CC$, stability holds for every connected pair, so it is irrelevant to the verification of the Gelfand property. As we already mentioned, for many pairs stability is proved to imply the Gelfand property. A list of such pairs can be found in \cite{dima2, Rami}. Since we proved stability for many pairs, we obtain as a result several new Gelfand pairs. First, let us present without proofs the main ingredients of the method used to show, for a given pair, that stable $\Rightarrow$ Gelfand. Let $X$ be the $F$-points of an affine algebraic variety over $F$. Then $S^*(X)$ denotes the space of Schwartz distributions on $X$. If $G$ acts on $X$, we denote by $X/G$ the set $\textbf{X}(F)/\textbf{G}(F)$. \begin{definition} Let $(G,H,\theta)$ be a symmetric pair. An element $g \in G$ is called admissible if $Ad_g$ commutes with $\theta$ and $Ad_g|_{\mathfrak{s}}$ stabilizes all the closed $H$-orbits in $\mathfrak{s}$. \end{definition} Recall that $\mathfrak{s}$ is the space of symmetric elements in the Lie algebra $\mathfrak{g}$. Let $\textbf{K}$ be a reductive group defined over $F$ and let $(\pi,V)$ be a finite dimensional algebraic representation of $K = \textbf{K}(F)$.
Then we let $Q(V)$ be the direct complement of $V^K$, $\Gamma(V) \subseteq Q(V)$ the set of elements $v \in Q(V)$ such that $0 \in \overline{\pi(K)v}$, and $R(V) = Q(V) - \Gamma(V)$. \begin{definition}A symmetric pair $(G,H,\theta)$ is called \textbf{regular} if for every admissible $g \in G$ for which \[S^*(R(\mathfrak{s}))^{H \times H} \subseteq S^*(R(\mathfrak{s}))^{Ad_g}\] we have \[S^*(Q(\mathfrak{s}))^{H \times H} \subseteq S^*(Q(\mathfrak{s}))^{Ad_g}.\] \end{definition} \begin{definition} A pair $(G,H)$ is \textbf{GP2} if for every irreducible admissible representation $\pi$ of $G$, we have \[dim(Hom_H(\pi,\CC)) \cdot dim(Hom_H(\tilde{\pi},\CC)) \le 1.\] \end{definition} \begin{theorem} [{\cite[Theorem 7.4.5]{dima}}] Let $(G, H, \theta)$ be a stable symmetric pair such that all its descendants are regular. Then $(G,H,\theta)$ is GP2. \end{theorem} With just a little more work, one can actually deduce the Gelfand property from stability and the regularity of the descendants of the pair. \begin{theorem}[{\cite[Corollary 8.2.3]{dima}}] If $(G,H)$ is GP2 and there is an $Ad_G$-tame anti-involution $\tau : G \to G$ with $\tau(H) = H$ then the pair $(G,H)$ is a Gelfand pair. \end{theorem} \begin{corollary} \label{criterion for gelfand} Let $(G,H,\theta)$ be a symmetric pair. If $(G,H,\theta)$ is stable, all its descendants are regular, and there is an $Ad_G$-tame anti-involution $\tau : G \to G$ with $\tau(H) = H$, then the pair $(G,H)$ is a Gelfand pair. \end{corollary} This method applies to the pair $(GL(V), GL(V^+) \times GL(V^-), Ad_h)$, as by \cite[\S 7.7.2]{dima} all the descendants of the pair are regular, and we can choose $\tau(x) = x^t$. We will show how to use this result to deduce the same for the pair $(SL(V), (GL(V^+) \times GL(V^-)) \cap SL(V), Ad_h)$. \begin{theorem} All the descendants of the pair $(SL(V), (GL(V^+) \times GL(V^-)) \cap SL(V), Ad_h)$ are regular. \end{theorem} \begin{proof} Let $K = Z_{SL(V)}(r)$ be a descendant, and $\frk{k}$ its Lie algebra.
Then \[K = Z_{GL(V)}(r) \cap SL(V).\] Let $g \in K$ be admissible. We claim that it is automatically admissible as an element of $Z_{GL(V)}(r)$. Indeed, it is true in general that the closed orbits of $G$ on $Lie(G)^\sigma$ are the same as those of $Ad_G$. Since \[Ad_{Z_{SL(V)}(r)} \cong Ad_{Z_{GL(V)}(r)}\] while \[Lie(K)^\sigma \cong Lie(Z_{GL(V)}(r))^\sigma\] since symmetric elements are automatically of trace 0, the criterion for admissibility is the same for $K$ and for $Z_{GL(V)}(r)$. As the action of $G$ on its Lie algebra factors through $Ad_G$ and the symmetric part of the Lie algebra of $K$ and $Z_{GL(V)}(r)$ is the same, the condition \[S^*(R(\frk{k}^\sigma))^{H \cap K} \subseteq S^*(R(\frk{k}^\sigma))^{Ad_g}\] is equivalent to \[S^*(R(Z_{\frk{gl}(V)}(r)^\sigma))^{Z_{GL(V)}(r) \cap Z_{GL(V)}(h)} \subseteq S^*(R(Z_{\frk{gl}(V)}(r)^\sigma))^{Ad_g}\] and similarly with $R$ replaced by $Q$. Thus, the regularity of $(GL(V), GL(V^+) \times GL(V^-), Ad_h)$ implies the regularity of $(SL(V), (GL(V^+) \times GL(V^-)) \cap SL(V), Ad_h)$. \end{proof} As a consequence of the last theorem and the computation carried out in Section 6, we get: \begin{theorem} The pair $(SL(V), (GL(V^+) \times GL(V^-)) \cap SL(V), Ad_h)$ is a Gelfand pair if and only if $dim(V^+) \ne dim(V^-)$. \end{theorem} \begin{proof} If $dim(V^+) = dim(V^-)$ then the pair is not p-stable, hence by Theorem \ref{gelfand then p-stable} it is not a Gelfand pair. If $dim(V^+) \ne dim(V^-)$ then the pair is stable and all its descendants are regular, so by Corollary \ref{criterion for gelfand} it is left to find an $Ad_G$-tame anti-involution $\tau : G \to G$ stabilizing $H$. We can choose $\tau(x) = x^t$, in a basis in which $h$ is diagonal. Then $\tau$ is an anti-involution since $\tau^2 = Id$, it is tame since every diagonalizable matrix in $SL_n(F)$ is $SL_n(F)$-conjugate to its transpose, and it preserves $H$ since $h$ is diagonal with entries $\pm 1$.
\end{proof} \begin{theorem} Let $B = B^+ \oplus B^-$ be a non-degenerate quadratic form over $\RR$. The pair $(O(B),O(B^+) \times O(B^-))$ is a Gelfand pair if and only if either $B^+$ or $B^-$ is definite. \end{theorem} \begin{proof} Let $\tau: O(B) \to O(B)$ be the anti-involution $\tau(x) = x^{-1}$. Then $\tau$ is tame. Indeed, let $x \in O(B)$ be semi-simple. Let $V = \oplus_\lambda V_\lambda(x)$ be a primary decomposition of $V$, and let $U_\lambda = V_\lambda(x) + V_{\lambda^{-1}}(x)$. Then the $U_\lambda$-s are mutually orthogonal and span $V$. Let $\lambda \in spec(F[x])$ be a primary value of $x$. If $\lambda \ne \lambda^{-1}$, then any orthogonal involution that intertwines $V_\lambda(x)$ and $V_{\lambda^{-1}}(x)$ conjugates $x|_{U_\lambda}$ to its inverse. If $\lambda = \lambda^{-1}$, then the unique non-trivial element of $Gal(F[\lambda] / F[\lambda + \lambda^{-1}])$ conjugates $x|_{U_\lambda}$ to its inverse, and it represents an orthogonal $F$-automorphism of $U_\lambda$. In any case, $x|_{U_\lambda}$ is conjugate to its inverse, hence $x$ and $x^{-1}$ are conjugate. Since $\tau(H) = H$, by Corollary \ref{criterion for gelfand} the pair is a Gelfand pair if it is stable and all its descendants are regular. Since by \cite[Theorem 3.0.5]{dima2} all the descendants of this pair are regular, it is a Gelfand pair if it is stable, so one direction follows from Theorem \ref{stability for O}. For the other direction, if neither $B^+$ nor $B^-$ is definite, then the pair is not p-stable by Theorem \ref{p-stability for O}, and thus by Theorem \ref{gelfand then p-stable} the pair is not a Gelfand pair. \end{proof} Similarly, we get: \begin{theorem} Let $B = B^+ \oplus B^-$ be a non-degenerate quadratic form over a non-Archimedean local field $F$ of characteristic 0. The pair $(O(B), O(B^+) \times O(B^-))$ is a Gelfand pair in the following situations: \begin{itemize} \item $rank(B^+) = 1$ or $rank(B^-) = 1$.
\item if $rank(B) \le 5$, $\mu(B) \le 1$, $\mu(B^+) = \mu(B^-) = 0$, $F$ is of odd residual characteristic and there is no representation of $(B^+,B^-)$ as $(Q([a,b,...]), Q([\epsilon_1 a, \epsilon_1 b,...]))$ for some $\epsilon_i \in \{\pm 1\}$. \end{itemize} Moreover, if $rank(B^+) > 1$ and $rank(B^-) > 1$, and either $F$ is of even residual characteristic or $\mu(B^+) > 0$ or $\mu(B^-) > 0$ or $rank(B) \ge 6$, the pair is not a Gelfand pair. \end{theorem} There are still a few interesting cases where this pair is not stable but is p-stable. They probably have to be treated using other methods. \begin{theorem} Let $B = B^+ \oplus B^-$ be a Hermitian form over $\CC$. The pair $(U(B), U(B^+) \times U(B^-))$ is a Gelfand pair if and only if $B^+$ or $B^-$ is definite. \end{theorem} \begin{proof} Let $\tau(x) = \bar{x}^{-1}$. By an argument similar to the one given for $(O(B),O(B^+) \times O(B^-))$, this anti-involution is $Ad_{U(B)}$-tame. Since it stabilizes $U(B^+) \times U(B^-)$, we deduce from Corollary \ref{criterion for gelfand} that the pair is a Gelfand pair if it is stable and all of its descendants are regular. By \cite[Theorem 3.0.7]{dima2} all the descendants of this pair are regular. Since by Theorem \ref{stability U(B)} this pair is stable exactly when $B^+$ or $B^-$ is definite, we see that the pair is a Gelfand pair in this case. Conversely, by the same theorem, if neither $B^+$ nor $B^-$ is definite then the pair is not p-stable, hence it is not a Gelfand pair. \end{proof} Similarly, we get: \begin{theorem} Let $B = B^+ \oplus B^-$ be a Hermitian form for a quadratic extension $E/F$ of non-Archimedean local fields of characteristic 0. The pair $(U(B), U(B^+) \times U(B^-))$ is a Gelfand pair if and only if $B^+$ or $B^-$ is of rank 1. \end{theorem} Note that, since the Gelfand property for pairs where $rank(B^+) = 1$ is already proven in \cite{dima3}, the last theorem should be considered a negative result.
\begin{proof} Similarly to the Archimedean case, we deduce that the pair is a Gelfand pair if it is stable and all its descendants are regular. By \cite[Theorem 3.0.7]{dima2} all the descendants of this pair are regular. Since by Theorem \ref{stability U(B)} this pair is stable exactly when $B^+$ or $B^-$ is of rank 1, we see that the pair is a Gelfand pair in this case. Conversely, by the same theorem, if neither $B^+$ nor $B^-$ is of rank 1 then the pair is not p-stable, hence it is not a Gelfand pair. \end{proof} \begin{theorem} Let $E/F$ be a quadratic extension of local fields of characteristic 0. The pair $(SL_n(E),SL_n(F))$ is a Gelfand pair if and only if $n$ is odd. \end{theorem} \begin{proof} If $n$ is even then by Theorem \ref{p-stability of (SL(V(E)),SL(V(F)),c)} the pair is not p-stable, hence not a Gelfand pair. Assume that $n$ is odd. Since the anti-involution $\tau(x) = x^t$ is $Ad_{SL_n(E)}$-tame and stabilizes $SL_n(F)$, this pair is a Gelfand pair if it is stable and has regular descendants. By \cite[Theorem 7.6.2]{dima} along with \cite[Remark 7.3.2]{dima}, if the pair is stable then it is a Gelfand pair. But this pair is stable for odd $n$ by Theorem \ref{stability of (SL(V(E)),SL(V(F)),c)}. \end{proof} We turn to prove the Gelfand property for the pairs $(GL_F(V),GL_E(V),Ad_J)$ and $(SL_F(V),SL_E(V),Ad_J)$ in the cases where they are stable. \begin{proposition} All the descendants of the pairs $(GL_F(V),GL_E(V),Ad_J)$ and $(SL_F(V),SL_E(V),Ad_J)$ are regular. In particular, if a pair of one of these types is stable then it is a Gelfand pair. \end{proposition} \begin{proof} We give the proof for $(GL_F(V),GL_E(V),Ad_J)$; the case of $(SL_F(V),SL_E(V),Ad_J)$ is similar. Let $(G,H,\theta)$ be a pair of this type, and let $K \subseteq G$ be a descendant. We wish to prove that it is regular. If every admissible element of $K$ lies in $H$ then it is regular for trivial reasons.
As a consequence of \cite[Proposition 7.3.7]{dima} and the implication ``special'' $\Rightarrow$ ``regular'' in the diagram in Appendix E of the same paper, the descendant is also regular if for every $\mathfrak{sl}_2$-triple $(X,Y,Z)$ such that $X \in \mathfrak{k}^\sigma$ and $Y \in \mathfrak{k}^\theta$ we have $Tr(ad(Y)|_{Z_{\mathfrak{h}}(X)}) \ne dim(Q(\mathfrak{k}^\sigma))$ when $F$ is non-Archimedean, or $Tr(ad(Y)|_{Z_{\mathfrak{h}}(X)}) < dim(Q(\mathfrak{k}^\sigma))$ when $F = \RR$. Thus, it is sufficient to prove that one of these possibilities holds for $(K,K \cap H, \theta)$. Note that the above statement is true for the pairs of type $(GL(V), GL(V^+) \times GL(V^-), Ad_h)$, a fact which is proven, even though it is not stated in this way, in \cite[\S 7.7.2]{dima}. But $(\textbf{G}(E),\textbf{H}(E),\theta(E))$ is a pair of type $(GL(V), GL(V^+) \times GL(V^-), Ad_h)$, so every descendant of it satisfies one of the two required properties. In particular, either $(\textbf{K}(E),\textbf{K}(E) \cap \textbf{H}(E), \theta(E))$ has no admissible elements outside $\textbf{H}(E)$, or every $Y$ in a suitable $\mathfrak{sl}_2$-triple satisfies the requirement above on the trace. In the first case, $K$ has no admissible elements outside $H$, since otherwise the same element would be a non-trivial admissible element of $\textbf{K}(E)$. In the second case, if $Y$ is the middle element of an $\mathfrak{sl}_2$-triple with the above properties in $\mathfrak{k}$, let $Y_E$ denote the corresponding element of $\mathfrak{k} \otimes_F E$. Then \[Tr(ad(Y_E)|_{Z_{\mathfrak{h}}(X) \otimes_F E}) = Tr(ad(Y)|_{Z_{\mathfrak{h}}(X)})\] since the trace is stable under base-change. Moreover, \[dim(Q(\mathfrak{k}^\sigma)) = dim(Q((\mathfrak{k} \otimes_F E)^\sigma))\] since dimension is stable under base-change.
As $Y_E$ is the middle element of an $\mathfrak{sl}_2$-triple $(X_E,Y_E,Z_E)$ in $\mathfrak{k}\otimes_F E$ satisfying the conditions $Y_E \in (\mathfrak{k} \otimes_F E)^\theta$ and $X_E \in (\mathfrak{k} \otimes_F E)^\sigma$, we get \[ Tr(ad(Y)|_{Z_{\mathfrak{h}}(X)}) = Tr(ad(Y_E)|_{Z_{\mathfrak{h}}(X) \otimes_F E}) \ne dim(Q((\mathfrak{k} \otimes_F E)^\sigma)) = dim(Q(\mathfrak{k}^\sigma))\] in the non-Archimedean case, or \[ Tr(ad(Y)|_{Z_{\mathfrak{h}}(X)}) = Tr(ad(Y_E)|_{Z_{\mathfrak{h}}(X) \otimes_F E}) < dim(Q((\mathfrak{k} \otimes_F E)^\sigma)) = dim(Q(\mathfrak{k}^\sigma))\] in the real case. This proves the regularity of the pair $(K,K \cap H, \theta)$. \end{proof} \begin{remark} Informally, we can explain the above result as follows. Since all the ingredients in the method used to prove the regularity of many pairs are stable under base-change, in many cases the regularity of a pair after base-change implies its regularity. \end{remark} As a result, and using the determination of the stability and p-stability of these pairs in Theorems \ref{stability of (GL_F(V),GL_E(V),Ad_J)} and \ref{stability of (SL_F(V),SL_E(V),Ad_J)}, we get: \begin{theorem} The pair $(GL_F(V),GL_E(V),Ad_J)$ is a Gelfand pair. \end{theorem} \begin{proof} In view of Corollary \ref{criterion for gelfand}, we only have to find a tame anti-involution for this pair. By choosing $J$ to be orthogonal, we can take $\tau(x) = x^t$. \end{proof} Similarly, we get: \begin{theorem} The pair $(SL_F(V),SL_E(V),Ad_J)$ is a Gelfand pair if and only if $dim(V) = 1$ or $F$ is Archimedean. \end{theorem}
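As an aside, the trace-zero fact invoked above when identifying the symmetric parts of the Lie algebras of $K$ and $Z_{GL(V)}(r)$ admits a one-line verification (a sketch for $\theta = Ad_h$):

```latex
% For X in the -1 eigenspace of \theta = Ad_h on \mathfrak{gl}(V),
% i.e. hXh^{-1} = -X, the trace vanishes automatically:
\[
  Tr(X) \;=\; Tr(hXh^{-1}) \;=\; Tr(-X) \;=\; -Tr(X)
  \quad \Longrightarrow \quad Tr(X) = 0,
\]
% hence \mathfrak{gl}(V)^\sigma \subseteq \mathfrak{sl}(V), so passing
% from GL(V) to SL(V) does not change the symmetric part of the Lie algebra.
```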
\section{Introduction} \vspace*{-0.4cm} In supernova remnants (SNRs), the interaction between shock waves and the surrounding interstellar gas is a key element for understanding SNR evolution, cosmic-ray acceleration, and the origin of gamma-ray radiation. Recently, \cite{2015ApJ...799..175S} revealed that the shock-cloud interaction in a young Galactic SNR generates turbulence and a strong magnetic field, which enhance the non-thermal X-rays and enable efficient acceleration of cosmic-ray electrons around the interacting gas clumps. Therefore, it is important to search for more evidence in order to examine the universality of this process.\vspace*{0.2cm} The SNR N132D is a bright X-ray emitter in the Large Magellanic Cloud (LMC; Figure \ref{fig1}a), identified as a young SNR ($\sim$3150 yr). It is also known as an oxygen-rich SNR \citep[e.g.,][]{2007ApJ...671L..45B} and is hence considered to be a remnant of a core-collapse supernova of a massive star. Furthermore, the SNR is thought to be interacting with molecular gas, as suggested by CO observations with SEST \citep{1997ApJ...480..607B}. Therefore, it is an excellent target for studying the shock-cloud interaction in external galaxies. In the present paper, we report the large-scale distribution of the molecular gas associated with N132D and its physical properties. \articlefigure[width=0.9\textwidth]{fig1}{fig1}{(a) The $Chandra$ X-ray tricolor image of SNR N132D \citep{2007ApJ...671L..45B}. (b) Integrated intensity map of $^{12}$CO($J$=1--0) \citep{2011ApJS..197...16W} in a velocity range of $V_{\mathrm{LSR}}$ = 256.5--268.6 km s$^{-1}$ is shown in color (unit is K km s$^{-1}$).
The white contours correspond to the $Chandra$ X-ray flux in the energy band 0.5--7.0 keV.} \vspace*{-0.35cm} \section{Results} \vspace*{-0.4cm} Figure \ref{fig1}a shows the $Chandra$ X-ray tricolor image of N132D \citep{2007ApJ...671L..45B}, which has shell-like and filamentary structures especially in the southwest, while a blowout structure appears in the northeast. Thermal X-rays (corresponding to an energy range of 0.5--1.2 keV) are bright over the whole SNR, whereas non-thermal X-rays (2.0--7.0 keV) are emitted only in the southern half.\vspace*{0.2cm} Figure \ref{fig1}b shows the $^{12}$CO($J$=1--0) integrated intensity map taken by the Magellanic Mopra Assessment \citep[MAGMA;][]{2011ApJS..197...16W} with the Mopra 22-m telescope. The existence of the giant molecular cloud (GMC) in the southern part of Figure \ref{fig1}b was known from a previous study \citep{1997ApJ...480..607B}. We newly found three molecular clouds interacting with the SNR. Two of the clouds, located in the west and the east, form a cavity-like CO structure along the X-ray shell. Another CO cloud is located in the northeast. The total molecular mass of the three newly found clouds is $\sim$$10^4 M_{\odot}$ using the $X$-factor 7 $\times$ 10$^{20}$ [$W(^{12}$CO)/(K km s$^{-1}$)] (cm$^{-2}$) \citep{2008ApJS..178...56F}, which is an order of magnitude smaller than the value of the previous study \citep{1997ApJ...480..607B}. This is because most of the GMC is not interacting with the SNR. \vspace*{-0.35cm} \section{Discussion and Summary} \vspace*{-0.4cm} We argued that the cavity-like CO structure was created by the stellar wind and UV photons from the massive progenitor star prior to the supernova explosion and is now interacting with the SNR shocks. Furthermore, the enhancement of non-thermal X-rays in the southern part can be understood as a result of shock interaction with clumpy CO structures \citep[e.g.,][]{2015ApJ...799..175S}.
Therefore, we predict that the surroundings of N132D contain clumpy CO structures corresponding to the non-thermal X-ray filaments. We are continuing follow-up observations with ALMA, ASTE, and Mopra, which allow us for the first time to study the interaction of an SNR with CO gas in detail outside our own Galaxy. \vspace*{-0.35cm}
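As an aside, the $X$-factor mass estimate used in the Results section follows the standard conversion $N(\mathrm{H}_2) = X \cdot W(^{12}\mathrm{CO})$. A minimal sketch of the arithmetic is given below; the integrated intensity and projected cloud area are placeholder values for illustration, not the MAGMA measurements:

```python
# Sketch of an X-factor molecular mass estimate. W_CO and the cloud
# area are assumed placeholder values, not the actual N132D data.
X_CO = 7.0e20                    # cm^-2 (K km/s)^-1, Fukui et al. (2008)
W_CO = 2.0                       # K km/s, assumed mean integrated intensity
AREA_PC2 = 30.0                  # pc^2, assumed projected cloud area

PC_CM = 3.086e18                 # cm per parsec
M_H2 = 2.0 * 1.6726e-24 * 1.36   # H2 mass in g, incl. 36% He correction
M_SUN = 1.989e33                 # g

n_h2 = X_CO * W_CO                                    # column density, cm^-2
mass_msun = n_h2 * AREA_PC2 * PC_CM**2 * M_H2 / M_SUN  # solar masses
```

With these placeholder inputs the estimate is of order $10^3\,M_{\odot}$ per cloud; summing over the clouds with the observed intensities and areas is what yields the $\sim$$10^4 M_{\odot}$ total quoted above.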
\section*{Introduction} \indent \indent The \emph{EPIC-pn camera} aboard \emph{XMM-Newton} is designed to read out data in different modes with respect to time and spatial resolution. For the timing analysis we use the so-called Small Window, Timing and Burst modes, with time resolutions of 6 ms, 0.03 ms and 7 $\mu$s, respectively. Using these modes and a number of isolated pulsars as calibration targets, we investigate the relative timing capabilities of the \emph{EPIC-pn camera}. \par Isolated pulsars are characterized by stable pulse periods and pulse profiles, making them useful calibration targets (even in the presence of irregularities such as glitches and timing noise). For our timing analysis we use photons from 0.2 keV to 15 keV. \par Here we present an analysis of the relative timing accuracy of the EPIC-pn camera onboard \emph{XMM-Newton} using six different pulsars with periods between 15 and 200 ms. We have selected pulsars with a variety of different pulse profiles in order to see whether the timing results depend on pulse shape. \section*{Observations and data analysis} \indent \indent We have analyzed different pulsars observed by \emph{XMM-Newton} between 2000 and 2007. As the main calibration source for timing, the Crab pulsar is regularly observed twice per year (spring and autumn) to make sure that the measurements do not depend on the Earth's orbital phase. The exposure times of the observations analyzed are between 3 and 50 ks. The observations were mainly done in Timing or Burst mode, except those of the Vela pulsar, which were done in Small Window mode. In Figure~1 we show the pulse profiles of all pulsars studied. We define the relative timing accuracy with reference to the highly accurate measurements by radio telescopes through expression (1), where $P_{X}$ is the X-ray period calculated by us and $P_{R}$ is the radio period extrapolated or interpolated from data found in radio pulsar databases (e.g.
the Jodrell Bank Observatory \footnote{http://www.jb.man.ac.uk/~pulsar/crab.html}, ATNF, European Pulsar Network, or others). \begin{equation} \centering \dfrac{\Delta{P}}{P}=\dfrac{P_{X}-P_{R}}{P_{R}} \end{equation} \par For the Crab pulsar, the Jodrell Bank pulsar group provides a monthly ephemeris of the pulsar to the scientific community. Through linear interpolation we can estimate the radio period for the time at which the X-ray observation was made. Assuming a negligible uncertainty in the radio period, the $\Delta$P obtained is the error of our analysis. For the other pulsars we have less complete information, such that in general only an extrapolation to the time of the X-ray observation is possible, leading to a less accurate period estimate. Glitches between the radio observation and the X-ray observation can dramatically change the ephemeris and ruin our analysis. Therefore we have taken great care to find the closest ephemeris. For pulsars like the Crab pulsar or the Vela pulsar an ephemeris less than one month old needs to be taken, whereas for other more stable pulsars an ephemeris from even years ago may be used. The extrapolated or interpolated radio period is then used as a first trial in the search for the X-ray period. This search is made using the \textit{Xronos} routine called \textit{efsearch}. \textit{efsearch} performs an epoch folding with a range of trial periods, calculating the $\chi^{2}$ sum which describes the deviation of the resulting profile from a flat distribution. The period at which $\chi^{2}$ is maximal is considered to be the correct pulse period. In practice we do not use the maximum, but rather the weighted mean of the $\chi^{2}$ distribution from a fit with a triangular or a Gaussian profile. In Figure~2 we show $\chi^{2}$ distributions of the period search for one observation of each pulsar.
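The epoch-folding search performed by \textit{efsearch} can be sketched as follows; this is a simplified illustration with simulated arrival times and an assumed sinusoidal pulse profile, not the actual \textit{Xronos} implementation:

```python
import numpy as np

def epoch_fold_chi2(event_times, trial_period, n_bins=10):
    """Fold arrival times at a trial period and return the chi^2 of the
    binned profile against a flat (unpulsed) distribution."""
    phases = np.mod(event_times / trial_period, 1.0)
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    expected = len(event_times) / n_bins
    return np.sum((counts - expected) ** 2 / expected)

# Simulated pulsar: P = 33.5 ms, sinusoidally modulated photon rate.
rng = np.random.default_rng(0)
true_p, t_span = 0.0335, 1000.0
t = np.sort(rng.uniform(0.0, t_span, 200_000))
keep = rng.uniform(0.0, 1.0, t.size) < 0.5 * (1.0 + np.sin(2 * np.pi * t / true_p))
events = t[keep]

# Scan trial periods around the (radio) first guess; the chi^2 peak
# marks the pulse period.
trials = np.linspace(true_p * (1 - 1e-4), true_p * (1 + 1e-4), 201)
chi2 = np.array([epoch_fold_chi2(events, p) for p in trials])
best_period = trials[int(np.argmax(chi2))]
```

The curve traced out by `chi2` over the trial periods is the analogue of the distributions shown in Figure~2.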
The width of the distribution is different for each pulsar, depending on the pulse period and the elapsed time of observation (as discussed below). \par As mentioned earlier, we consider that the value of the radio period calculated for the epoch of the X-ray observation has no error, so the difference between the radio period and our X-ray period $\Delta{P}$ is taken as the error of our measurement. We can approximate the $\chi^{2}$ distribution by a triangle where the maximum corresponds to the true period P$_{0}$ and the points P$_{1}$ and P$_{2}$ where the legs of the triangle meet the level of constant $\chi^{2}$ define the total width of the $\chi^{2}$ distribution. For a pulse profile with a narrow single peak, P$_{1}$ and P$_{2}$ can be calculated by expression (2), where T$_{obs}$ is the elapsed observation time and N$_{per}$ is the number of pulse periods in this time. \begin{equation} \centering P_{1}=\dfrac{T_{obs}}{N_{per}+1}; \;\; P_{2}=\dfrac{T_{obs}}{N_{per}-1} \;\;\; \mathrm{where} \;\; N_{per}=\dfrac{T_{obs}}{P} \end{equation} For a triangular function the Full Width at Half Maximum (FWHM) is equal to $(P_{2} - P_{1})/2$ and can be expressed as in expression (3) as a function of the period and the elapsed observation time. \begin{equation} \centering FWHM=\dfrac{P_{2}-P_{1}}{2} \Rightarrow FWHM=\dfrac{P^{2}}{T_{obs}} \end{equation} We use this expression to predict the FWHM of the $\chi^{2}$ distribution, which also serves as an upper limit on the uncertainty of the measured period. Empirically we have found that a rough estimate of the uncertainty of the measured period can be obtained by dividing the FWHM by the number of phase bins used to construct the pulse profile. We have generally used 10 bins for the pulse profiles, except in the case of the Crab pulsar, where we used 100 bins. In Fig.~2 (right) one line is shown for each pulsar giving the predicted FWHM as calculated by expression (3) normalized to the period P.
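Expression (3) together with the empirical bin-division rule gives a quick accuracy estimate. The numbers below are illustrative (a 33.5 ms Crab-like period and a 30 ks exposure are assumptions, not values from the observation log):

```python
def predicted_fwhm(period, t_obs):
    # Expression (3): FWHM of the chi^2 peak, P^2 / T_obs
    return period ** 2 / t_obs

def period_uncertainty(period, t_obs, n_bins):
    # Empirical rule: divide the FWHM by the number of phase bins
    return predicted_fwhm(period, t_obs) / n_bins

p = 33.5e-3       # s, Crab-like pulse period (assumed)
t_obs = 30.0e3    # s, assumed exposure time
dp = period_uncertainty(p, t_obs, n_bins=100)
rel_accuracy = dp / p     # corresponds to Delta P / P
```

With these inputs $\Delta$P/P is of order $10^{-8}$, the order of magnitude found for the real observations.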
All the values of FWHM/P measured are close to the predicted ones. \section*{Results} \indent \indent We have measured the relative timing accuracy of the \emph{EPIC-pn camera} for six pulsars (see Fig.~1), most frequently for the Crab pulsar. As mentioned earlier, \emph{XMM-Newton} performs two observations of the Crab every year, such that we were able to analyze 25 observations (of duration between $2\times 10^{3}$ and $4\times 10^{4}$ sec each). For the Vela pulsar we have four observations, for PSR B1509-58 two observations, and for the remaining two pulsars only one observation each. \par In Fig.~3 (top/left) we compare the estimated timing accuracy with the actually observed one. There is one line for each pulsar representing the estimated relative accuracy as a function of observing time (using the FWHM as calculated by expression (3) and divided by the number of bins used to construct the pulse profiles). The observed accuracy, represented by the data symbols, is obtained from the difference between the periods determined in X-rays from observations by \emph{XMM-Newton} and the radio periods (taking the absolute value). We find that all observed data points are below the lines of the estimated accuracies, except two: one corresponding to the Vela pulsar, the other to PSR B1509-58. We find that in both cases the radio periods used appear unreliable, since they were determined far from the time of the X-ray observations (more than one year for the Vela pulsar and about five years for PSR B1509-58). So, we exclude those two points from the following discussion. Fig.~3 (top/right) shows the absolute $\Delta$P/P as a function of observing date. There is no obvious change of the timing accuracy of the \emph{EPIC-pn camera} over its lifetime. \par In the lower two panels of Fig.~3 only results from observations of the Crab pulsar are shown.
Again, there is no obvious dependence on date, but there is a tendency towards a smaller uncertainty for longer observations, as would be expected. \par For a quantitative measure of the timing accuracy we use the standard deviation of the distributions of $\Delta$P/P values (shown in Fig.~3). Fitting the distributions with a Gaussian normal distribution, we find a standard deviation of $7\times 10^{-9}$ for all pulsars (including the Crab pulsar) and $5\times 10^{-9}$ for the Crab pulsar alone. While the distribution for the Crab pulsar values is centered at zero (within uncertainty), the mean value of the distribution for all data is slightly offset, in the sense that the X-ray period is on average slightly larger than the radio period (this is due to data from pulsars other than the Crab pulsar). \section*{Conclusions} \indent \indent We have determined X-ray pulse periods by epoch folding for six different pulsars (with periods between 15 and 200 ms), including the Crab pulsar, from observations by the \emph{EPIC-pn camera} of \emph{XMM-Newton}. By comparing the X-ray periods with inter-(extra-)polated radio periods from public archives we find generally very good agreement (except in three cases where the radio periods were taken far from the time of the X-ray observations). Under the assumption that the radio periods have no uncertainty, the difference between the X-ray and radio periods gives an estimate of the accuracy of the X-ray measurements. We conclude that (for integration times of a few ks) the relative timing accuracy of the \emph{EPIC-pn camera} is generally better than $1\times 10^{-8}$. \par Further analysis of the timing accuracy of \emph{XMM-Newton's} \emph{EPIC-pn camera} will be presented in M. Kirsch et al. 2007 (in preparation). \section*{Acknowledgments} \indent \indent The \emph{XMM-Newton} project is an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). A.
Martin-Carrillo would like to thank the ESAC Faculty group for their financial support.
\section{Introduction} There has been much recent work on time dependence in gravitational backgrounds \cite{branonium, time_dependence}. The basic idea has been to introduce a probe BPS $Dp$-brane into the non-trivial geometry of a large number of background branes and study its corresponding dynamics. This has also been extended to include non-BPS branes and supertube probes. The introduction of a probe brane tends to break all the supersymmetry associated with the background configuration, and therefore the probe will experience a gravitational force due to the source branes. Of course, by selecting specific probes in specific backgrounds we can preserve supersymmetry, in which case there will be no net force. However, we generally see that probes placed in non-trivial backgrounds are unstable, and share many similarities with the condensation of the open string tachyon \cite{sen, non_bps_dynamics}. In particular, it can be seen that the energy-momentum tensor localised on the probe brane has vanishing pressure at late times, which is similar to the fluid at the tachyonic vacuum \footnote{Recall that the open string degrees of freedom at the tachyon vacuum vanish and only closed string modes remain. This is due in part to the reduction of the metric to a Carrollian form \cite{tachyon_stuff}.}. It has also been suggested that the open string tachyon may have a geometrical interpretation in terms of one-dimensional brane motion in a confined, bounded non-trivial background with $\mathbb{Z}_2$ symmetry \cite{geometrical_tachyons, kinks}. Specifically, we see that the radion field, parameterising the distance of a probe brane from the source branes, becomes tachyonic when placed at the unstable point in the background. In addition, we have seen that these geometrical tachyons exhibit kink solutions similar to those of the open string tachyon, and we would also expect there to be stable vortex solutions.
Although this has proven to be tantalising, it still remains to be seen whether it is possible to determine the true relationship between the open string tachyon and its geometrical cousin. Most of this work, however, has focused on a solitary probe brane; thus it seems logical that this program should be extended to include multiple branes. As is well known, the presence of $N$ coincident $Dp$-branes implies that there is a unitary $U(N)$ gauge theory, due to the open string degrees of freedom \cite{myers}. This means that the effective DBI action is no longer applicable and we must resort to using its non-Abelian extension \cite{myers, tseytlin}. One of the major differences between the two is that the scalar fields must now transform under a representation of a gauge group. Therefore they no longer commute with one another, leading us to introduce the notion of non-commutative coordinates and hence many of the ideas associated with non-commutative geometry. Although this approach has been useful, it is known that the non-Abelian action agrees with exact open string calculations only up to terms of order $F^6$ \cite{myers}. Furthermore, there has been no satisfactory resolution to the problem of the finite $N$ expansion of the action. Despite this, there has been an incredible amount of work done in this field with regard to intersecting brane configurations, leading to the construction of fuzzy funnels. One of the byproducts of this has been the large-small dualities between funnel solutions and collapsing spheres sourced by $D0$-branes \cite{costis}. Again it seems only logical to look at non-trivial backgrounds to see if these dualities still hold. It has also been suggested that the event horizon of black holes should be described by fuzzy spheres. If this is the case, then our analysis would hopefully yield some insight into the classical stability of such a system.
This paper will attempt to analyse the dynamics of several probe branes in the curved backgrounds of coincident $D$-branes and $NS$5-branes using the irreducible representation of $SU(2)$, which corresponds to a fuzzy sphere geometry. We will only consider flat static branes, all localised at the same point in the bulk space-time. More complicated backgrounds such as the ring configuration will not be analysed \cite{sfetsos, israel}, although they should be tackled at some point in the future. One of the most important things to note is that there are exact conformal field theories associated with coincident background solutions, and so any results obtained here will correspond to operators in the CFT. We begin by constructing the low energy action for coincident $Dp'$-branes in a $Dp$-brane background, and examine the solutions. \section{Background solution and brane action.} We consider the standard type II supergravity background solution for $M$ coincident $Dp$-branes. These source branes are all assumed to be parallel, in the sense that their world volumes are oriented in the same directions, and to be static. This will ensure that our solutions are as simple as possible. The 10-dimensional bulk spacetime is assumed to be infinite in extent, and there are no gravitational moduli in the problem. The solutions for the metric, dilaton and R-R field are given by \cite{branonium, stelle} \begin{eqnarray} ds^2&=&H^{-1/2}\eta_{\mu \nu} dx^{\mu} dx^{\nu} + H^{1/2} \delta_{mn} dx^m dx^n \nonumber \\ e^{\phi} &=& H^{(3-p)/4} \nonumber \\ C_{0 \ldots p} &=& 1-H^{-1}, \end{eqnarray} where $\mu, \nu$ represent directions parallel to the background branes, whilst $m, n$ are transverse directions. The harmonic function $H$ satisfies the Laplace equation in the transverse Euclidean space.
In general it can be written as a multi-centred function of the transverse coordinates: \begin{equation} H= 1 + \sum_{i=1}^M \frac{\tilde{k}_p}{|\textbf x-\textbf x_{i}|^{7-p}} \end{equation} which for coincident $D$-branes reduces to \begin{equation} H = 1 + \frac{k_p}{r^{7-p}}, \end{equation} where $r=\sqrt{x_m x^m}$ and $k_p=(2 \sqrt{\pi})^{5-p} M \Gamma(\frac{7-p}{2}) g_s l_s^{7-p}$. As usual $l_s$ is the string length and $g_s$ is the string coupling at infinity. Into this background we wish to insert $N$ probe $Dp'$-branes, where we must ensure that $N<M$ and also that $p \ge p'$ in order to satisfy the supergravity constraints (note that we will neglect the case of $p'=-1$ in IIB, which corresponds to the D-instanton). Because there is more than a single probe brane we can no longer use the Abelian DBI action, as the extra massless string modes enhance the gauge symmetry on the world-volume. In order to proceed we must first introduce the non-Abelian action for the bosonic fields. The first part is the non-Abelian Born-Infeld contribution, \begin{equation}\label{eq:actiondef} S_{BI} = - \tau_p \int d^{p'+1}\zeta STr e^{-\phi} \sqrt{-det({\mathcal P}[E_{ab}+E_{ai}(Q^{-1}-\delta)^{ij} E_{jb}]+\lambda F_{ab})}\sqrt{det(Q^i \ _j)}. \end{equation} where we have the usual definitions \begin{equation} \lambda = 2 \pi l_s^2, \hspace{1cm} E_{\mu \nu} = G_{\mu \nu}+B_{\mu \nu}, \hspace{0.5cm} \rm{and} \hspace{0.5cm} Q^i \ _j \equiv \delta^i \ _j + i \lambda[ \phi^i, \phi^k] E_{kj}. \end{equation} The second part of the action is the Chern-Simons term, coupling the background R-R fields to the probe branes' world volumes. \begin{equation} S_{CS}=\mu_{p} \int STr(\mathcal P[e^{i\lambda i_{\phi} i_{\phi}}\sum C^{(n)}e^B]e^{\lambda F}). \end{equation} As usual $\mathcal P[\ldots]$ represents the pullback of the spacetime tensors to the brane world-volume. The action contains $\phi_i$ terms, where $i=p+1 \ldots 9$ run over the transverse coordinates.
In fact these are the transverse scalars in the action, which are actually $N \times N$ matrix representations of the $U(N)$ world-volume symmetry. The $STr(\ldots)$ denotes the symmetrized trace operation, the prescription for which is to take a symmetrized average over all the possible orderings of the $F_{ab}, D_a \phi^i, i[\phi^i, \phi^k]$, and all the possible orderings of the individual scalars prior to taking the trace.\footnote{In \cite{bordalo}, two loop corrections to the DBI action resulting from the curved background were computed. These lead to modifications of the symmetrized trace prescription and it would be of interest to see if this results in modifications to our fuzzy solutions.} In the Chern-Simons action, $i_\phi$ denotes the interior product with $\phi^i$ regarded as a vector in the transverse space. For a general $p$-form, the interior derivatives act as \begin{equation} i_\phi i_\phi C^{(p)} = \frac{1}{2} [\phi^i, \phi^j] C_{ji}^{(p)}. \end{equation} It is well known that a $Dp$-brane is electrically charged under the $(p+1)$-form RR potential, with a charge $\mu_p$. Supersymmetry constraints impose the additional condition that $\mu_p= \pm \tau_p$. The non-Abelian Chern-Simons action shows that a $Dp$-brane can couple to R-R potentials of higher dimensionality, and thus permits the possibility of a brane dielectric effect. For example, if we expand the Chern-Simons action to leading order with no gauge field or $B$ field, we have \begin{equation} S_{CS}=\mu_p \int Tr( \mathcal{P} [C^{(p+1)}+i \lambda i_\phi i_\phi C^{(p+3)} - \frac{\lambda^2}{2} (i_\phi i_\phi)^2C^{(p+5)}]).
\end{equation} In this note we are assuming that all the probe branes are parallel to the source branes, and therefore the leading order contribution to the Chern-Simons coupling reduces to \begin{equation} S_{CS} = \mu_p \int Tr (\mathcal{P}[C^{(p+1)}]) \end{equation} which, upon insertion of the background solutions, becomes \begin{equation} S_{CS} = -q\int dt N H^{-1} \end{equation} up to an arbitrary constant, where $q=+1$ corresponds to a $D$-brane probe and $q=-1$ corresponds to an antibrane. Now, in the Abelian case we know that there is only a coupling if $p=p'$ or if $p=6, p'=0$. Since we are neglecting higher order corrections to the Chern-Simons action, we effectively have the same situation, and so we must remember to include these couplings in our effective theory. To simplify the analysis as much as possible we will only consider time-dependent solutions for the transverse scalars. This will ensure that no caustics form in the action. We will also set $F_{ab}$ to zero, and allow only those fields which are not in the angular directions to be excited on the branes. This will also ensure that the $B$ field drops out of the action. To ensure that the action is dimensionally consistent, we must be aware that the $x_i$ ($i=p+1, \dots, 9$) coordinates transform as \begin{equation} x_i = \lambda \phi_i, \end{equation} and the physical distance between background branes and probe branes in the harmonic function becomes \begin{equation} r^2 = \frac{\lambda^2}{N}Tr( \phi^i \phi^j \delta_{ij}). \end{equation} Now that we have set the stage, we can use our $Dp$-brane solutions to determine the dynamics of a collapsing fuzzy sphere in this background, which we assume can be regarded as a probe of the geometry. We are therefore neglecting any back reaction and $1/N$ corrections in what follows.
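For orientation, the harmonic function and its coefficient are straightforward to evaluate numerically. The following sketch (Python; the parameter values $l_s=1$, $g_s=0.1$, $M=1000$ echo those used for the numerical solutions later in this note, but any choice would do for illustration) implements $k_p$ and $H(r)$ as defined above.

```python
import math

# Illustrative parameter choices (matching the later numerical section).
l_s, g_s, M = 1.0, 0.1, 1000

def k_p(p):
    """k_p = (2 sqrt(pi))^(5-p) M Gamma((7-p)/2) g_s l_s^(7-p)."""
    return (2 * math.sqrt(math.pi))**(5 - p) * M * math.gamma((7 - p) / 2) \
        * g_s * l_s**(7 - p)

def H(r, p):
    """Harmonic function sourced by M coincident Dp-branes at r = 0."""
    return 1 + k_p(p) / r**(7 - p)

# Far from the sources the metric is asymptotically flat: H -> 1.
print(H(1e6, 3))
```

Note that for $p=3$ the coefficient reduces to $k_3 = 4\pi M g_s l_s^4$, since $(2\sqrt\pi)^2\Gamma(2)=4\pi$.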
\section{Radial Collapse.} In this section, we will consider the purely radial motion of the $N$ $Dp'$-branes in the background of the $M$ $Dp$-branes, where we must ensure that $M>N$ for the supergravity solutions to hold. To simplify the problem even further, we set all the coordinates to zero with the exception of $x_7 \ldots x_9$. For simplicity we will only examine the $p=p'$ case in detail, as there are difficulties associated with the solutions when $p\ne p'$. This should not be surprising, as the same thing happens in the Abelian case, where we must look for world-volume symmetry transformations in order to solve the equations of motion. We expect this to hold in the non-Abelian case, which poses questions about the relationship between non-Abelian brane solutions and the space-time uncertainty principle. Although we will not discuss the implications in this note, it would certainly be interesting for future investigations. \subsection{Dynamics in the $p=p'$ case.} In this particular instance, the background solution allows us to write the action as follows: \begin{equation}\label{eq:action2} S=-\tau_{p'} \int d^{p'+1} \zeta STr\left( H^{-1} \sqrt{\left(1-H \lambda^2 \dot{\phi^i} \dot{\phi^j} \delta_{ij}\right)\left(1-\frac{1}{2} \lambda^2 H [\phi^i, \phi^j][\phi^j,\phi^i]\right)} \right) \end{equation} \begin{equation} S_{CS}=-\tau_{p'} \int d^{p'+1} \zeta \frac{qN}{H}, \end{equation} where we have made the approximation $Q^{ij} \sim \delta^{ij}$, and only expanded the second square root term to leading order. Our approximation that the inverse matrix $Q^{ij}$ is treated as unity to leading order in $\lambda$ is consistent as long as our solution only probes distances greater than the string length. As the fuzzy sphere radius starts approaching $l_s$ we anticipate that higher order terms in $Q^{ij}$ (and in $\sqrt{det(Q)}$) would need to be kept for consistency.
This approximation has been used by other authors who have investigated fuzzy spheres in the non-Abelian DBI theory, see for example the second paper in \cite{myers}. In order to simplify the expression to something more useful we need to expand the commutator terms. The simplest possible ansatz is to make the transverse scalars all commuting; however, it has been shown that the system will then be unstable, since it will not be at its minimal energy. This can easily be verified by expanding out the last term in the action \cite{myers}. Instead we opt for the more familiar $SU(2)$ ansatz, which parameterises a non-commutative object known as a fuzzy 2-sphere. It is defined via \begin{equation}\label{eq:fuzzyansatz} \phi^i= R(t) T^i, \hspace{0.5cm} i=1, 2, 3 \end{equation} where the $T^i$ are an $N \times N$ matrix representation of the generators of the $SU(2)$ algebra, \begin{equation} [T^i, T^j] = 2i \varepsilon_{ijk} T^k. \end{equation} The remaining fields $\phi^i$, $i= 4,5,\ldots$ are set to zero, or more generally to constant matrices that commute with the $SU(2)$ generators. Let us make some comments concerning the generality and validity of this `round' fuzzy sphere ansatz in (\ref{eq:fuzzyansatz}). Our ansatz sets the non-Abelian transverse fields $\phi^i$ either to be $SU(2)$ valued fields (the fuzzy sphere ansatz) or to constant commuting matrices. The latter are taken to commute with both the $SU(2)$ generators and themselves. These latter fields have no potential because they commute with everything, so the assumption that they are constant is consistent with their equations of motion; they simply parameterise flat directions of the theory. There is a related issue of what the most general time-dependent configuration is, which is a very interesting question.
For example, one could imagine that there will be non-spherical fluctuations, because there are tidal effects in the direction of motion in the curved backgrounds which should alter the geometry of the fuzzy sphere, perhaps leading to a fuzzy `egg'. But these are deformations of the spherical solution, so we would argue that in the first instance one should study the spherical solution and then investigate fluctuations about it. There are other known fuzzy geometries with different topology, such as fuzzy cylinders, which one could also investigate in the context of curved backgrounds, but again this is outside the remit of our paper, which focusses on spherical solutions. To check that our spherical ansatz is at least a consistent one, we consider the equations of motion for the non-Abelian fields $\phi^i$ in a general curved background. Let us consider a background metric of the form \begin{equation}\label{eq:metric} ds^2 = -g_{00}dt^2 + g_{xx}dx^a dx^b \delta_{ab} + g_{zz} dz^i dz^j \delta_{ij} \end{equation} where $a, b$ run over the spatial worldvolume directions and $i, j$ are transverse directions to the source. This background could obviously be generated by a stack of coincident branes, or something more exotic. The non-Abelian action then takes the form \begin{equation}\label{eq:nonabaction} S =-\tau_{p'} \int d^{p'+1} \zeta STr \left(e^{-\phi}\sqrt{g_{xx}^{p'} g_{00} (1-\lambda^2 g_{zz}g_{00}^{-1}\dot{\phi^i}\dot{\phi^j}\delta_{ij})} \sqrt{1-\frac{1}{2} \lambda^2 g_{zz}^2 [\phi^i, \phi^j][\phi^j,\phi^i] } \right) \end{equation} Note that upon restricting the metric components to $g_{00}=g_{xx}=g_{zz}^{-1} = H^{-1/2}$, together with the dilaton factor $e^{-\phi}=H^{(p-3)/4}$, the above action reproduces that in (\ref{eq:action2}) above.
Now working to leading order in $\lambda$, the equations of motion for $\phi^i$ are \begin{equation}\label{eq:eqmotion} \frac{d}{dt} (e^{-2\phi}g_{xx}^{p/2}g_{00}^{-1/2}g_{zz} \dot{\phi^i} ) = g_{zz}^2 [\phi^j,[\phi^j,\phi^i]]. \end{equation} Now consider the more general ansatz for $\phi^i$ \begin{equation}\label{eq:genansatz} \phi^i= R(t) T^i + \beta(t) Y^i, \hspace{0.5cm} i=1, 2, 3 \end{equation} where the matrices $Y^i$ represent some non-spherical directions orthogonal to the $SU(2)$ generators $T^i$. Without loss of generality we can assume that $Tr(T^i Y^i) =0$. Using this property one can easily obtain equations of motion for $R(t)$ and $\beta(t)$ by substituting the above ansatz into (\ref{eq:eqmotion}). In the limit $\beta(t) \to 0$ (i.e.\ our spherical fuzzy sphere ansatz), the equation of motion for $\beta(t)$ becomes \begin{equation}\label{eq:eqmotion1} \frac{d}{dt} (e^{-2\phi}g_{xx}^{p/2}g_{00}^{-1/2}g_{zz} \dot{\beta} ) = \frac{1}{Tr(Y^jY^j)}g_{zz}^2Tr( [T^j,[T^j,T^i]] Y^i). \end{equation} Due to the orthogonality of $T^i$ and $Y^j$, the second trace factor in (\ref{eq:eqmotion1}) vanishes, so $e^{-2\phi}g_{xx}^{p/2}g_{00}^{-1/2}g_{zz} \dot{\beta}$ is a constant. We can choose this constant to be zero and hence $\dot{\beta}$ also vanishes. It is therefore consistent to set $\beta(t)=0$ at the outset, as in our spherical fuzzy sphere ansatz (\ref{eq:fuzzyansatz}). Returning then to our spherical fuzzy sphere ansatz for $\phi^i$, as argued in \cite{myers}, we can choose the generators to be in the irreducible representation of the algebra, since this will correspond to the minimum energy configuration. The physical radius of the fuzzy sphere is given by \begin{equation} r^2 = \frac{\lambda^2}{N} Tr(\phi^i \phi^j \delta_{ij}) = \lambda^2 R(t)^2 C, \end{equation} where $C$ is the quadratic Casimir of the representation, defined by \begin{equation} \sum_{i=1}^3 (T^i)^2 = C 1_N, \end{equation} and $1_N$ is the $N \times N$ identity matrix.
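The algebra and Casimir above are easy to verify explicitly. The following sketch (our own illustration, using the standard spin-$j$ construction with $T^i = 2J^i$ and $j=(N-1)/2$) builds the irreducible $N\times N$ generators and checks both $[T^i, T^j] = 2i\varepsilon_{ijk}T^k$ and $\sum_i (T^i)^2 = (N^2-1)\,1_N$.

```python
import numpy as np

def su2_generators(N):
    """Irreducible N x N generators with [T^i, T^j] = 2i eps_ijk T^k,
    built as T^i = 2 J^i in the spin j = (N-1)/2 representation."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                    # m = j, j-1, ..., -j
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))  # <m+1| J+ |m>
    Jp = np.diag(c, 1).astype(complex)              # raising operator
    Jx = (Jp + Jp.T) / 2
    Jy = (Jp - Jp.T) / (2 * 1j)
    Jz = np.diag(m).astype(complex)
    return [2 * Jx, 2 * Jy, 2 * Jz]

N = 10
T = su2_generators(N)
# Quadratic Casimir: sum_i (T^i)^2 = C * 1_N with C = N^2 - 1
C = sum(Ti @ Ti for Ti in T)
```

This also confirms the large-$N$ approximation $C \simeq N^2$ used below.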
We also note that for the irreducible representation, $C = N^2 -1$, which can be approximated by $N^2$ in the large $N$ limit. In our analysis we will only be interested in this limit, as the case of finite $N$ has additional complications due to the prescription of the symmetrized trace. Combining all this information allows us to write the final form of the action as \begin{equation}\label{eq:p=p'_action} S= -\tau_{p'} \int d^{p'+1} \zeta N H^{-1} \sqrt{(1-H \lambda^2 \dot{R}^2 C) (1+4\lambda^2 H C R^4)}- \tau_{p'} \int d^{p'} \zeta dt \frac{qN}{H}. \end{equation} Now, from the definition of the harmonic function, we see that the large $r$ limit corresponds to Minkowski space, and the non-Abelian action reduces to the usual form for flat space \cite{myers, ramgoolam, costis}. We can now calculate the associated canonical momentum and energy density from the action, which are defined as follows \begin{equation} \tilde \Pi = \frac{\Pi}{\tau_{p'}} = N \lambda^2 \dot{R} C \sqrt{\frac{(1+4\lambda^2HCR^4)}{(1- H \lambda^2 \dot{R}^2 C)}} \end{equation} \begin{equation} \tilde E = \frac{E}{\tau_{p'}} = \frac{N}{H} \sqrt{\frac{(1+4\lambda^2 C H R^4)} {(1-H\lambda^2 \dot{R}^2 C)}}-\frac{qN}{H}, \end{equation} where the momentum is the derivative of the Lagrangian with respect to $\dot{R}$, and the energy is constructed via Legendre transform. In addition we have divided out by a factor of $\int d^{p'} \zeta$, which loosely corresponds to the `volume' of each $Dp'$-brane. To construct the potential energy we will find it useful to switch to the Hamiltonian formalism, where we write the energy in terms of the conjugate variables, \begin{equation} \tilde E = \sqrt{N^2 H^{-2} (1+4\lambda^2 C H R^4) + \frac{\tilde \Pi^2}{H\lambda^2 C}}-\frac{qN}{H}, \end{equation} which allows us to define the non-Abelian static potential via $V_{\rm eff} =\tilde E(\tilde \Pi = 0)$.
\begin{equation}\label{eq:potential} V_{\rm eff} = NH^{-1} \left(\sqrt{1+4 \lambda^2 C H R^4} - q \right). \end{equation} In order to consider the collapse of the fuzzy sphere, it will be more convenient to work in terms of the physical radius $r$ rather than $R$, in which case the potential can be written as \begin{equation} V_{\rm eff} = N H^{-1} \left( \sqrt{1+\frac{4 H r^4}{\lambda^2 C}} -q \right), \end{equation} which is the gravitational potential generated by the background branes located at $r=0$. It is useful to compare this result with that from the Abelian case, which was determined to be \cite{branonium} \begin{equation} V^{abelian} = N\frac{(1-q)}{H}, \end{equation} when we have $N$ probe branes separated from the sources by a distance larger than the string length. Clearly we see that there is an additional term present, arising from the non-Abelian nature of the effective action. Naively one might have assumed that the potential for $N$ branes would be just $N$ times that for a single brane at lowest order. However, as we can see, there is an extra term corresponding to the additional energy of the fuzzy sphere (or the vacuum energy of the non-commutative spacetime). It is instructive to consider the behaviour of the potential in the different regions of spacetime, but first we must ensure that there are no limiting constraints to be imposed on the configuration. Solving the energy equation for $\dot{r}$, we obtain the following equation of motion, which in turn will yield a constraint on the dynamics, \begin{equation}\label{eq:radial_eom} \dot{r}^2 = \frac{1}{H} \left( 1 - \frac{N^2}{(\tilde EH+qN)^2} \left \lbrace 1 + \frac{4 H r^4}{\lambda^2 C} \right \rbrace \right). \end{equation} Since the left-hand side of this equation is non-negative, we see that the following constraint must be satisfied when we set the Chern-Simons part to zero, \begin{displaymath} 1 \ge \frac{N^2}{\tilde E^2 H^2} \left \lbrace 1 + \frac{4Hr^4}{\lambda^2C} \right \rbrace.
\end{displaymath} Consider what happens in the near horizon geometry, where the constraint reduces to the following expression, \begin{equation}\label{eq:throatconstraint} 1 \ge \frac{N^2}{\tilde E^2} \left(\frac{r^{7-p}}{k_p} \right)^2 \left \lbrace 1 + \frac{4k_pr^{p-3}}{\lambda^2 C} \right \rbrace. \end{equation} For $p \ge 3$ the leading term in the expression is dominant and so we are effectively left with the following constraint \begin{equation} 1 \ge \frac{N^2}{\tilde E^2} \left(\frac{r^{7-p}}{k_p}\right)^2. \end{equation} The supergravity solution implies that the term in parentheses is already vanishingly small, which in turn implies that the ratio $N/\tilde E$ can take a wide range of values and still satisfy this constraint. We must emphasise at this point that the classical analysis may break down as the fuzzy sphere collapses toward zero size, since the back reaction upon the source branes will no longer be negligible, and there will doubtless be correction terms to the energy in this case which will invalidate this constraint. Furthermore there will also be the problem of open string tachyon modes, which will arise as the branes approach distances comparable to the string length. If we now consider the limiting case where $p<3$, the constraint equation becomes \begin{equation} 1 \ge \frac{4}{\tilde E^2}\left(\frac{r^{7-p}}{k_p}\right)\frac{r^4}{\lambda^2}, \end{equation} when we take the large $N$ limit. This expression has explicit dependence upon the ratio of the radius to the string length, which we would expect to be larger than unity in order for us to have any faith in the effective field theory description. This implies that the energy density can again be reasonably arbitrary, as the supergravity constraint implies that the other term is already vanishingly small. To be safe we will assume that $\tilde E \gg N$ in what follows, as there is no ambiguity in the constraints if this is fulfilled.
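As a numerical sanity check on the near-horizon constraint, the sketch below (Python; the parameter values are illustrative choices of ours, echoing those used later for the numerical solutions) evaluates its right-hand side for $p \ge 3$ and confirms that it is vanishingly small deep in the throat, so the constraint is easily satisfied there.

```python
import math

# Illustrative parameters: l_s = 1, g_s = 0.1, M = 1000, N = 100, E = 200,
# with lam = 2*pi*l_s^2 and C ~ N^2 in the large-N limit.
l_s, g_s, M, N, E = 1.0, 0.1, 1000, 100, 200.0
lam = 2 * math.pi * l_s**2
C = N**2

def k_p(p):
    return (2 * math.sqrt(math.pi))**(5 - p) * M * math.gamma((7 - p) / 2) \
        * g_s * l_s**(7 - p)

def constraint_rhs(r, p):
    """RHS of the near-horizon constraint:
       (N/E)^2 (r^(7-p)/k_p)^2 {1 + 4 k_p r^(p-3)/(lam^2 C)}."""
    kp = k_p(p)
    return (N / E)**2 * (r**(7 - p) / kp)**2 \
        * (1 + 4 * kp * r**(p - 3) / (lam**2 * C))

for p in (3, 4, 5, 6):
    print(p, constraint_rhs(0.1, p))   # all far below 1
```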
Interestingly, if we reinstate the Chern-Simons contribution we find, to leading order, that the same constraints apply. We now turn our attention to the large $r$ region, i.e.\ flat space. In the Abelian case there are no constraints to be imposed, and so the probe branes can move to an infinitely large distance from the sources. In the non-Abelian case, however, we can obtain an equation for the maximum radius of the fuzzy sphere, which can be written \begin{equation}\label{eq:max_radius} r^4_{max} = \frac{\lambda^2 C \tilde E^2}{4N^2} \left(1+\frac{2qN}{\tilde E}\right), \end{equation} from which we deduce that the orientation of the $Dp'$-branes plays the role of a small correction term, provided we take our $\tilde E \gg N$ approximation. This maximal distance represents the limit of our effective action, and it is likely that higher order correction terms will allow us to consider limits such as $r_{max} \to \infty$. We note, however, that this maximal distance is dependent upon the energy of the probe branes, and that by tuning the energy we can effectively consider an unbounded solution in Minkowski space. If we take the large $N$ limit, this equation simplifies to \begin{equation}\label{eq:r_max_approx} r_{max} = \sqrt{ \frac{ \tilde E \lambda}{2}}\left(1+\frac{qN}{2\tilde E}+\ldots \right) \hspace{0.5cm} = \sqrt{\tilde E \pi l_s^2}\left(1+\frac{qN}{2\tilde E} +\ldots\right), \end{equation} which shows that the size of the fuzzy sphere is only dependent upon the energy of the solution. This is what we expect from our knowledge of dielectric branes \cite{myers, hyakutake} and Giant Gravitons \cite{superstars, giant_gravitons}, which are expanding brane solutions sourced by non-trivial background fields. Even though we are only looking at a relatively simple example, we would expect to find some similarities between these problems.
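The expansion of the maximal radius can be checked against the exact expression numerically; in the sketch below (Python, with illustrative values of our own choosing and $C \simeq N^2$), the two agree to well under a percent once $\tilde E \gg N$.

```python
import math

# Illustrative values (ours): N = 100, E = 2000 >> N, q = +1, l_s = 1.
l_s, N, E, q = 1.0, 100, 2000.0, 1
lam = 2 * math.pi * l_s**2
C = N**2                       # large-N Casimir

# Exact maximal radius: r_max^4 = lam^2 C E^2/(4 N^2) (1 + 2qN/E)
r_max_exact = (lam**2 * C * E**2 / (4 * N**2) * (1 + 2 * q * N / E))**0.25

# Leading expansion: r_max ~ sqrt(E lam/2) (1 + qN/(2E))
r_max_approx = math.sqrt(E * lam / 2) * (1 + q * N / (2 * E))
```

Note that $\sqrt{\tilde E\lambda/2} = \sqrt{\tilde E\pi l_s^2}$ follows directly from $\lambda = 2\pi l_s^2$.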
Armed with this knowledge from the constraints, we may proceed to investigate the behaviour of the effective potential. A quick calculation shows that the potential has no turning point, therefore we should not expect any stable bound states between the fuzzy sphere and the background branes. It will be easier to analyse the behaviour of the solution in the two regions of spacetime, to learn more about the dynamics. For vanishing $r$ we find the potential becomes \begin{equation} V_{\rm eff} \sim \frac{N r^{7-p}}{k_p} \left( \sqrt{1+\frac{4k_pr^{p-3}}{\lambda^2C}}-q \right). \end{equation} Now for $p \ge 3$, and sufficiently small radial distance, we may again ignore the radial contribution in the square root, provided that \begin{equation} r^{p-3} \ll \frac{\lambda^2 C}{4k_p} \nonumber, \end{equation} and we find this reduces to \begin{equation} V_{\rm eff} \sim \frac{N r^{7-p}}{k_p} (1-q), \end{equation} which we can see is identically zero if $q=1$, and is attractive if $q=-1$. This is the same behaviour as seen for arbitrary $p$ in the Abelian case, and implies that the configuration can become BPS at sufficiently small distances. However the size of this stabilisation radius is likely to be smaller than the string length, where our effective action is not valid. Now if we consider $p <3$ we find the potential is given by \begin{equation} V_{\rm eff} \sim \frac{N r^{7-p}}{k_p}\left( \sqrt{ \frac{4k_p}{\lambda^2 C r^{3-p}}} - q \right), \end{equation} which is attractive for all valid $p$ in this region. Therefore we see that to leading order, the probe branes are always gravitationally attracted toward the source branes. In the large $r$ limit, remembering that there is a maximum radius for the fuzzy sphere solution to hold, the potential becomes \begin{equation} V_{\rm eff} \sim N \left( \sqrt{\frac{4r^4}{\lambda^2 C}}-q \right), \end{equation} which we see will tend to a positive constant depending upon the exact size of the maximum radius.
If we substitute our solution (\ref{eq:max_radius}) into the potential, we find \begin{equation} V_{\rm eff} \sim \tilde E \sqrt{1+\frac{2Nq}{\tilde E}}-qN \sim \tilde E, \end{equation} where we have explicitly expanded out the square root term using our energy constraints. Thus the potential energy is effectively the energy density at large $r$. Before proceeding to solve (\ref{eq:radial_eom}), it is worth mentioning that the `velocity' of the collapse is a decreasing function of time. This is in stark contrast to the fuzzy sphere in a flat Minkowski background, where we find that at small $r$ the velocity is a substantial fraction of the speed of light. The curved geometry of spacetime in the near horizon limit acts in such a way as to slow the rate of collapse; in fact, for an observer on the background branes it would take an infinite amount of time for the sphere to reach zero size. Only if we switch to conformal time will we see a finite time solution. This is an example of the usual red shift problem in General Relativity. In the large $r$ region, we see that the harmonic function becomes unity and thus we would expect to find the usual equations of motion for collapsing fuzzy spheres in flat space. Using the fact that the energy is conserved in time, we can integrate the equation of motion to obtain the general form of the radial collapse in terms of Jacobi elliptic functions. By carefully selecting our initial value of $r_0$ to be \begin{equation} r_0^4 = \frac{\lambda^2 C \tilde E}{4N} \left(\frac{\tilde E}{N} + 2q \right), \end{equation} we find that the solution of the equation of motion is given by \begin{equation} r(t) = \pm r_0 {\rm JacobiCN} \left \lbrack 2\sqrt{\frac{2}{C}}\frac{r_0 t}{\lambda\sqrt{1+\frac{4r_0^4}{\lambda^2 C}}} ,\frac{1}{\sqrt{2}} \right \rbrack. \end{equation} The form of this solution has been extensively discussed in \cite{ramgoolam, costis}, and so we will not say much about it here.
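The flat-space Jacobi elliptic solution can be verified numerically. In the sketch below (Python/SciPy; the parameter values are illustrative, and the frequency $\omega = 2\sqrt{2}\,r_0/\sqrt{\lambda^2 C + 4r_0^4}$ is our reading of the argument of the elliptic cosine, fixed by the flat-space conservation law $\dot r^2 = 4(r_0^4 - r^4)/(\lambda^2 C + 4 r_0^4)$) we check that $r(t) = r_0\,{\rm cn}(\omega t, 1/\sqrt 2)$ satisfies that conservation law identically.

```python
import numpy as np
from scipy.special import ellipj

lam, Cas, r0 = 2 * np.pi, 1.0e4, 5.0   # illustrative: l_s = 1, C = N^2, N = 100
w = 2 * np.sqrt(2) * r0 / np.sqrt(lam**2 * Cas + 4 * r0**4)

t = np.linspace(0.0, 50.0, 2001)
sn, cn, dn, _ = ellipj(w * t, 0.5)     # scipy parameter m = (modulus)^2 = 1/2
r = r0 * cn                            # radius r(t)
rdot = -r0 * w * sn * dn               # since d/du cn(u, m) = -sn(u, m) dn(u, m)
```

The sphere starts at rest at its maximal radius $r_0$ and oscillates, as expected for the flat-space solution.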
In this instance we know that the regime of validity for the solution is $r^{7-p} \gg k_p$, and so we find a simple monotonically expanding/contracting solution without collapse toward zero size. Thus the effective action should remain a valid description of the dynamics, and we do not have to worry about the physical nature of the coordinate system being employed \cite{costis}. Interestingly this solution appears to be valid for arbitrary values of $p$, since all the $p$ dependence arises in the form of the harmonic function, and gives rise to another example of the so-called $p$-brane democracy. The form of the equation of motion makes it difficult to obtain smooth analytic solutions interpolating between flat space and the near horizon geometry. As a result we must regard the two regions as being distinct and choose boundary conditions such that it is possible to match the solutions by hand. Turning our attention to the throat solutions, we see that the complicated form of the equation of motion makes analytic solutions difficult to obtain. One case where we can make some progress is the $p=3$ background, as the `fuzzy' term loses all radial dependence in this instance. The solution is given in terms of a hypergeometric function, and is thus difficult to invert, \begin{equation} t-t_0 \sim \pm \frac{\sqrt{k_3}}{r} \ _2 F_1 \left(\frac{1}{2}, \frac{-1}{8}, \frac{7}{8}, \frac{N^2 r^8}{\tilde E^2 k_3^2} \left\lbrace1+\frac{4k_3}{\lambda^2C} \right\rbrace \right). \end{equation} In the limit that the sphere collapses toward zero size, we can expand the hypergeometric function using the well-known series expansion \begin{equation} t-t_0 \sim \pm \frac{\sqrt{k_3}}{r} \left(1-\frac{N^2 r^8}{14 \tilde E^2 k_3^2}\left\lbrace1+\frac{4k_3}{\lambda^2 C} \right\rbrace \right), \end{equation} which implies that at very late times the solution behaves as \begin{equation} r \sim \pm \frac{\sqrt{k_3}}{t-t_0}.
\end{equation} The collapse of the sphere is described by the positive branch of the above solution, and is in fact an example of a simple power law solution. This power law behaviour can be explicitly seen at late times by assuming that the dominant contribution to the denominator of (\ref{eq:radial_eom}) is unity. The resulting integral is trivial to perform and we obtain the general late time solution (dropping constants of integration) \begin{equation}\label{eq:late_time_soln} r \sim \pm \left(\frac{(p-5)(t-t_0)}{2\sqrt{k_p}} \right)^{2/(p-5)}. \end{equation} The solution for $p=5$ must be calculated separately, but is simply proportional to an exponential, \begin{equation} r \sim \exp\left(\pm\frac{t}{\sqrt{k_5}}\right). \end{equation} Thus we have shown that the solutions obey simple power law equations of motion as $r \to 0$. Of course, we must be careful in our interpretation of these results, as we expect correction terms to affect the validity of our effective action as the fuzzy sphere collapses. We can solve the equations of motion numerically, which gives us some indication of the late time dynamics as measured by observers on the source branes. For example, Figure 1 shows the numerical solution for $D0$ and $\bar{D}0$ branes. In order to generate this solution we took $l_s =1$, $g_s=0.1$, $N=100$, $\tilde E=200$ and $M=1000$, whilst retaining the full form of the harmonic function but taking the large $N$ limit. Although the parameter space of solutions is large, we expect the numerical solutions to be representative of more general behaviour. In fact we investigated the dynamics for various ranges of energy, and found approximately the same solutions, with all the solution curves collapsing onto one another at very small distances. The numerical solution clearly shows that the radius collapses rapidly when the metric is approximately flat, but decelerates as the sphere enters the near horizon geometry.
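The $p=3$ late-time behaviour $r \sim \sqrt{k_3}/(t-t_0)$ can be reproduced by integrating the near-horizon approximation $\dot r \simeq -r^{(7-p)/2}/\sqrt{k_p}$ directly. The sketch below (Python/SciPy; the parameters match the $l_s=1$, $g_s=0.1$, $M=1000$ values quoted above, while the initial radius is our own choice) checks that the product $r\,t$ approaches $\sqrt{k_3}$ at late times.

```python
import math
from scipy.integrate import solve_ivp

l_s, g_s, M, p = 1.0, 0.1, 1000, 3
k3 = (2 * math.sqrt(math.pi))**(5 - p) * M * math.gamma((7 - p) / 2) \
    * g_s * l_s**(7 - p)

# Near-horizon collapse: rdot = -r^((7-p)/2)/sqrt(k_p); for p = 3, -r^2/sqrt(k_3)
def rhs(t, y):
    return [-y[0]**((7 - p) / 2) / math.sqrt(k3)]

sol = solve_ivp(rhs, (0.0, 5000.0), [1.0], dense_output=True,
                rtol=1e-10, atol=1e-13)
r_late = sol.sol(5000.0)[0]
# At late times r * t -> sqrt(k_3), i.e. r ~ sqrt(k_3)/(t - t_0)
```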
We expect that our solutions will break down as the probes near the source branes, although it is useful to recall that $D0$-branes can probe distances smaller than the string length, and so the solution may be valid for some time. The plot shows that the brane and anti-brane follow similar trajectories as they cross into the near horizon region, and are thus indistinguishable. Our analysis of the potential suggests that it should vanish for the $D0$-brane solution as $r\to 0$. Clearly our plot shows that this must happen at a distance smaller than the string scale. Figure 2 shows the solutions for the $D4$ and $D5$-brane backgrounds using the same parameters, but ignoring the Chern-Simons term. The five brane solution indeed tends toward an exponential at late times, as expected from our simplified analytic solution. Figure 3 shows the solution for the $D3$ and $\bar{D}3$-branes. In this instance we can clearly see that the fuzzy sphere associated with the $D3$ solution collapses faster than the $\bar{D}3$ solution when they are in flat space. This is because the $D3$-branes are more strongly attracted to the sources than the $\bar{D}3$-branes. However, as they cross into the near horizon geometry, both spheres tend to the same radius as the Chern-Simons term becomes negligible, which accounts for the similarity in their dynamics.
\begin{figure} \begin{center} \epsfig{file=fuzzy4.eps, width=8cm,height=8cm} \caption{Numerical solution to the equations of motion for the $D0$-brane background.} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{file=fuzzy2.eps, width=8cm,height=8cm} \caption{Solutions for $D4$ and $D5$ brane backgrounds, ignoring the Chern-Simons coupling.} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{file=fuzzy3.eps, width=8cm,height=8cm} \caption{Solutions for the fuzzy sphere sourced by $D3$ and anti $D3$-branes.} \end{center} \end{figure} The difficulty in analytically solving the integral equation of motion is related to the fact that it describes curves on hyper-elliptic Riemann surfaces, with the infinitesimal time playing the role of a holomorphic differential. The velocity and the radius can be regarded as two complex variables related by a single constraint. We can use the simplified Riemann-Hurwitz formula to calculate the genus, $g$, of the underlying surface, \begin{equation} g = \frac{1}{2}(B-2), \end{equation} where $B$ refers to the number of branch points of our solution. It is fairly straightforward to see that the $p=6$ and $p=5$ cases correspond to genus 2 surfaces, $p=3, 4$ give rise to genus 3 surfaces, $p=2, 1$ are genus 5 surfaces and $p=0$ defines a genus 7 surface. Thus as we decrease the dimensionality of the background branes, we find surfaces of higher and higher genus. Obviously this leads to the difficulty in obtaining an analytic solution to the equation of motion. Even if we include the Chern-Simons term in the equation of motion, this does not modify the number of branch points. As in \cite{costis} it may be possible to reduce the integral for the genus 3 and 5 surfaces into integrals over products of genus 1 surfaces using the special symmetries present. The solution in flat space corresponds to a genus 1 surface, which is why we find an explicit solution to the equation of motion.
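The genus counting just described can be tabulated. In the sketch below (our own illustration) the branch-point counts $B$ are our inference from the genus values quoted in the text, combined through the simplified Riemann-Hurwitz formula $g = (B-2)/2$.

```python
# Simplified Riemann-Hurwitz formula g = (B - 2)/2 for a hyperelliptic curve
# with B branch points.  The B values below are our inference from the
# genera stated in the text for the p = p' collapse.
branch_points = {6: 6, 5: 6, 4: 8, 3: 8, 2: 12, 1: 12, 0: 16}

def genus(B):
    return (B - 2) // 2

stated = {6: 2, 5: 2, 4: 3, 3: 3, 2: 5, 1: 5, 0: 7}
```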
This suggests that the Riemann surface describing the curved backgrounds is actually of high genus, with the branch points on the complex plane being totally unresolvable when the cycles are large. \subsection{Dynamics in the $p \ne p'$ case.} We now turn our attention to the more general case where $p \ne p'$. As we are only looking at the leading order terms in the action, we find that there is no Chern-Simons term except for the $p=6, p'=0$ case; for the purpose of this note, we will neglect this contribution. The action in this instance is a simple extension of (\ref{eq:p=p'_action}) and can be written as \begin{equation} S=-\tau_{p'} \int d^{p'+1}\zeta N H^{(p-p'-4)/4} \sqrt{(1+4H\lambda^2CR^4)(1-H\lambda^2C\dot{R}^2)}, \end{equation} which clearly reduces to the expression in the previous section when taking the $p=p'$ limit. We will again divide out by the `mass' of the brane to find a closed expression for the canonical momentum, which turns out to be \begin{equation} \tilde \Pi = NH^{(p-p')/4} \lambda^2 C \dot{R} \sqrt{\frac{1+4H\lambda^2CR^4}{1-H\lambda^2C\dot{R}^2}}, \end{equation} and the corresponding energy is obtained via Legendre transformation in the usual manner, \begin{eqnarray} \tilde E &=& NH^{(p-p'-4)/4} \sqrt{\frac{1+4H\lambda^2CR^4}{1-H\lambda^2C\dot{R}^2}} \\ &=& \sqrt{N^2H^{(p-p'-4)/2}(1+4H\lambda^2CR^4)+\frac{\tilde \Pi^2 }{H\lambda^2C}}. \nonumber \end{eqnarray} Following results from the previous section, we define the effective potential to be \begin{equation} V_{\rm eff} = N H^{(p-p'-4)/4}\sqrt{1+\frac{4Hr^4}{\lambda^2C}}, \end{equation} which is clearly the general extension of (\ref{eq:potential}) when there is no Chern-Simons coupling term. Interestingly, the extra energy due to the fuzzy sphere actually breaks the supersymmetry in this case. Using the conservation of energy we also have a modified constraint condition \begin{equation} 1 \ge \frac{N^2 H^{(p-p'-4)/2}}{\tilde E^2}\left(1+\frac{4Hr^4}{\lambda^2C} \right).
\end{equation} In the near horizon geometry we see that the RHS blows up as the radius tends to zero when $p-p'>4$, which, because of the dimensionality of the branes, implies that for the $p=6, p'=0$ case the energy must go to infinity as the fuzzy sphere collapses in order to satisfy the constraint. All of the other solutions are satisfied for arbitrary energy in this limit. This tells us that the $D6-D0$ solution will not collapse to zero size; instead it will be energetically favourable for the fuzzy sphere to expand in the near horizon geometry. In the large $r$ limit we again expect there to be a maximum size for the fuzzy sphere solution, which is given by (\ref{eq:r_max_approx}) when we take the large $N$ limit. By analysing the behaviour of the effective potential, we should get a general understanding of the dynamics of the fuzzy sphere as the probe branes are attracted to the source branes. In general we see that the potential is always attractive, implying that the fuzzy sphere will eventually collapse down toward zero size. The case where this is not true is $p=6, p'=0$, which has a repulsive potential at small radius \cite{susskind}, exactly as we would expect from energy considerations. We will have more to say about the $D6-D0$ configuration in a later section, as we expect it to be related to the non-Abelian extension of the Quantum Hall soliton. The other case where the potential does not vanish is when $p-p'=4$, corresponding to the cases $p=6,p'=2$; $p=5,p'=1$ and $p=4,p'=0$. In these cases we see that the potential tends to $N$ with vanishing radius. Again this should be expected, as the branes are all parallel and this is precisely the supersymmetry preserving condition in the Abelian theory \cite{branonium}; however, this may well occur at distances beyond the regime of validity of our effective theory. Solving the equations of motion in the general case is far from trivial, as the integral equation describes surfaces of varying genus.
For completeness we have tabulated below the genus associated with all the possible values of $p, p'$ in our analysis. Note that as the factor $p-p'$ increases, the genus of the surface associated with the solution decreases. For example in the $p-p'=4$ case (not including $p=6$), we see that the Riemann surface becomes a simple two-sphere. This is interesting as we know that this is exactly the supersymmetry preserving condition in the Abelian theory \cite{branonium}, and a quick calculation verifies that the Abelian equation also yields a genus 0 surface even in the $p=6, p'=2$ case. This poses the question of whether there is some deeper connection between the preservation of supersymmetry and the underlying Riemannian geometry. An example solution can be found in the $p=4, p'=0$ case, which will be valid when $r$ satisfies the constraint $\lambda^2 \tilde E^2 \gg 4k_4 r$. Upon integration we find \begin{equation} r \sim r_0 \pm \frac{4 \tilde E k_4}{(\tilde E^2 -N^2)t^2}, \end{equation} where we must take the negative branch of the solution to approximate the collapsing fuzzy sphere. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $p$& 6& & & &5& & &4& & &3& &2& &1&0\\ \hline $p'$& 6&4&2&0&5&3&1&4&2&0&3&1&2&0&1&0\\ \hline genus & 2&2&1&1&2&1&0&3&2&0&3&1&5&2&5&7\\ \hline \end{tabular} \end{center} \subsection{Corrections from the symmetrized trace.} In our work so far we have only considered the leading order Lagrangian, and neglected any $1/N$ corrections. However, these terms can be calculated, allowing corrections to the effective potential to be found. We remind the reader that to lowest order, we have calculated the energy density to be \begin{displaymath} \tilde E = \frac{\delta \mathcal{L}}{\delta \dot{R}}\dot{R}-\mathcal{L}.
\end{displaymath} Based upon arguments in \cite{ramgoolam} we know that the corrections to next order are given by \begin{equation} \tilde E_1 = \left( 1-\frac{2}{3}C \frac{\partial^2}{\partial C^2} \right) \tilde E, \end{equation} where we have dropped all the Chern-Simons terms to make things clearer. We differentiate our expression for $\tilde E$ in order to find the next order corrections to the effective potential. Note that for static BPS configurations such as the $D1-D3$ intersection, all the symmetrized trace correction terms are zero. We don't anticipate the same situation occurring here, because the Chern-Simons coupling is independent of $C$ and will drop out when we differentiate the Lagrangian. Since it is this coupling which (in the Abelian case at least) preserves the bulk supersymmetries, we expect that higher order corrections will not be BPS configurations, and so we will find non-zero correction terms to all orders. Our calculation for the general case gives us the first order correction to the potential \begin{equation} \Delta V_{\rm eff} = \frac{8NH^{(p-p'+4)/4}r^8}{3 \lambda^4 C^3 \left( 1+\frac{4Hr^4}{\lambda^2 C} \right)^{3/2}}, \end{equation} where we have made use of the near horizon approximation. Once more we find that the solution depends heavily upon the dimensionality of the branes involved. Firstly, we consider the case when $p \ge 3$. In this instance the correction term becomes \begin{equation} \Delta V \sim \frac{8Nr^8b}{3 \lambda^4 C^3}\left(\frac{1}{r^{7-p}}\right)^{(p-p'+4)/4}, \end{equation} where we have introduced $b = k^{(p-p'+4)/4}$ for simplicity. In general the factor of $p-p'$ can only take the integer values $6, 4, 2$ or $0$, and so it is easily noted that the potential tends to zero as $r\to 0$ for all values of $p$ and $p'$ in this particular range. If we move on to consider the case where $p<3$ then $p-p'$ is limited to be either $2$ or $0$.
The correction term in this instance reduces to \begin{equation} \Delta V \sim \frac{8Nr^8b}{3 \lambda^4 C^3}\left( \frac{1}{r^{7-p}}\right)^{(p-p'+4)/4} \left( \frac{\lambda^2C}{4k_pr^{p-3}}\right)^{3/2}. \end{equation} This potential again tends to zero with $r$ for all values of $p$ and $p'$, which is in agreement with our general expectations from the behaviour of the leading order term. Thus the correction doesn't alter the overall dynamics of the fuzzy sphere, and we don't find any bounce solutions. However it should be noted that if we relax our throat approximation and look at large $r$, we would expect to find differing behaviour. For example, \cite{ramgoolam} showed that there are bounce solutions for the $D0$-solution in flat Minkowski space when the $1/N$ sub-leading order terms are taken into account. It is well known that $D0$-branes may probe distances much smaller than the string length \cite{douglas}; however, the curved backgrounds we have been studying in this section appear to impose constraints upon this behaviour. \subsection{Remarks on the $D6$-$D0$ solution.} In this section we will briefly comment on the $p=6, p'=0$ solution, as there is a similarity with the Quantum Hall Soliton (QHS), which we will briefly introduce below. The stringy QHS was introduced in \cite{susskind} as a way of establishing the link between condensed matter physics and string theory. To construct the QHS, we imagine a background of $k$ coincident $D6$-branes with $k$ strings emerging from them. The transverse space can be parameterised simply by $\mathbb{R} \times \mathbb{S}^2$, and we wrap a $D2$-brane over the $\mathbb{S}^2$. However it is known that this configuration is unstable, and so we are forced to introduce $N$ $D0$-branes, which are dissolved into the $D2$-brane world volume. Since it is well known that $D6$ and $D0$-branes repel each other (due to the energy becoming infinite at small distances), this stabilises the QHS.
The world volume of the spherical $D2$-brane, in this instance, becomes the surface where the quantum hall fluid lives. This is a purely Abelian theory in terms of the $D2$ picture; however, our non-Abelian construction can provide information on the dual picture. This is because we can consider $N$ $D0$-branes in the supergravity background of $M$ coincident $D6$-branes. We expect that the fuzzy sphere ansatz will play the role of the $D2$-brane with flux on the Abelian side; furthermore we anticipate that the $D0$-branes can be regarded as being the endpoints of fundamental strings which start on the background $D6$-branes. The only difference is that we are neglecting the open string contributions from the background branes to the probe branes. We have already seen that the effective potential for this (bosonic) configuration can be written as \begin{equation} V = N \sqrt{H}\sqrt{1+\frac{4Hr^4}{\lambda^2 C}}, \end{equation} where the harmonic function, $H$, can be approximated in the near horizon limit by \begin{equation} H \approx \frac{Mg_sl_s}{2r}. \nonumber \end{equation} We now determine, by differentiating the potential, that there is a minimum at the distance \begin{equation} r_{min} = \left( \frac{\pi^2 l_s^3 N^2}{Mg_s} \right) ^{1/3}, \end{equation} where we have explicitly employed the large $N$ limit. This is exactly the same result that was obtained for the stability of the spherical $D2$-brane with flux in terms of the gravitational Myers effect \cite{gravitational_dielectric}. We wish to compare this result to the one calculated in \cite{susskind}. In that paper a coordinate rescaling was used to simplify the initial background metric. The scaling is given by \begin{displaymath} r= \rho \left(\frac{Mg_s}{2} \right)^{-1/3}, \end{displaymath} and consequently the equation for the stabilisation radius is given by \begin{equation} \rho_* = \frac{ (N\pi)^{2/3} l_s}{2}.
\end{equation} Performing the same rescaling in our non-Abelian dual picture gives the result \begin{equation} \rho_* = \frac{(N\pi)^{2/3}l_s}{2^{1/3}}, \end{equation} which is almost identical to the Abelian theory. The discrepancy between the two radii is due to the contribution from the $k$ strings on the Abelian side, which has been neglected in our analysis; the string contribution alters the stabilisation radius by a factor of $2^{-2/3}$. If we reconstruct the QHS, but neglect the stringy contribution and allow for time dependent radial solutions, we obtain the following action \begin{equation} S = -\tau_2 \int d^3 \zeta Hr^2 \sin(\theta) \sqrt{(1-H\dot{r}^2)\left(1+\frac{\lambda^2N^2}{4Hr^4}\right)}, \end{equation} where we use the usual spherical coordinates on the $D2$-brane worldvolume and the flux on the brane is given by \begin{equation} F_{\theta \phi} = \frac{N\sin(\theta)}{2}, \end{equation} which satisfies the usual quantization conditions. For a more rigorous explanation of the derivation we refer the reader to \cite{susskind}. We can integrate out the angular dependence to find an exact expression for the Lagrangian \begin{equation} \mathcal{L}=-\tau_2 4\pi r^2 H \sqrt{(1-H\dot{r}^2)\left(1+\frac{\lambda^2 N^2}{4Hr^4} \right)}. \end{equation} Using this we can easily construct the static potential for the Abelian theory in the near horizon geometry, which we find to be \begin{equation} V = \frac{kr}{\lambda}\sqrt{1+\frac{\lambda^2 N^2}{2kg_sl_sr^3}}. \end{equation} Although this appears to be different from the non-Abelian potential, they are in fact identical, as can be verified with a simple expansion. Thus the theories are dual to one another, which we can further exhibit by analysing the equations of motion for the radion fields.
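The stabilisation radius quoted above can be recovered symbolically. A sympy sketch, assuming the near-horizon harmonic function $H = Mg_sl_s/2r$, $\lambda = 2\pi l_s^2$ and $C \to N^2$ at large $N$:

```python
import sympy as sp

# Recover the D6-D0 stabilisation radius by minimising
# V^2 = N^2 H (1 + 4 H r^4/(lambda^2 C)), assuming the near-horizon harmonic
# function H = M g_s l_s/(2 r), lambda = 2 pi l_s^2 and C -> N^2 at large N.
r, N, M, gs, ls = sp.symbols('r N M g_s l_s', positive=True)
lam = 2 * sp.pi * ls**2
H = M * gs * ls / (2 * r)
V2 = N**2 * H * (1 + 4 * H * r**4 / (lam**2 * N**2))

r_min = sp.solve(sp.diff(V2, r), r)[0]

# r_min^3 = pi^2 l_s^3 N^2 / (M g_s), as quoted in the text
assert sp.simplify(r_min**3 - sp.pi**2 * ls**3 * N**2 / (M * gs)) == 0
```

Minimising $V^2$ rather than $V$ avoids the square root and gives the same stationary point, since $V>0$.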
Using subscripts $A$ and $N$ to represent the two theories, we find the result \begin{eqnarray} \dot{r}_A^2 &=& \frac{1}{H}\left(1-\frac{16 \pi^2 \tau_2^2 H^2 r^4}{E^2}\left\lbrace 1+\frac{\lambda^2 N^2}{4Hr^4} \right\rbrace \right), \\ \dot{r}_N^2 &=& \frac{1}{H}\left(1-\frac{\tau_0^2 N^2 H}{E^2}\left\lbrace1+\frac{4Hr^4}{\lambda^2 C} \right\rbrace \right) \nonumber. \end{eqnarray} If we take the large $N$ limit and carefully expand these equations we see that they are identical. This was noted in \cite{ramgoolam} for the case of a fuzzy sphere in flat space, and as expected this duality continues to hold in a curved geometry. On the Abelian side we find an explicit example of the gravitational dielectric effect, whilst on the non-Abelian side we have the gravitational Myers effect. It would be useful to include the terms coming from the strings in our work, as this would be the dual of the QHS; however, this is expected to be complicated as the strings are charged under $U(M)$ on one end and $U(N)$ on the other. The corresponding trace over the Chan-Paton factors will be expected to yield an extra term in the DBI, forcing the fuzzy sphere to stabilise at a smaller radius due to the tension of the strings. As a further remark, we should note that this duality only holds for the $p=6, p'=0$ case. We could consider a different background source such as $D4$, $D2$ or $D0$-branes, with the $D2$-brane wrapped over a transverse $\mathbb{S}^2$ whilst the remaining transverse coordinates are set to zero. Unfortunately the corresponding solutions do not map across to the non-Abelian construction where we would have $D0$-branes probing each of these background solutions. This is because we are losing information about the theory by setting some of the Abelian degrees of freedom to zero. It is interesting to examine the stability of our solution with regard to $D0$-brane emission.
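Before turning to emission, the claimed large-$N$ equality of the two radial equations above can be verified symbolically. A sympy sketch, assuming the standard tension relation $\tau_0 = (2\pi l_s)^2\tau_2$, $\lambda = 2\pi l_s^2$, and replacing $C$ by $N^2$:

```python
import sympy as sp

# Verify that the Abelian (D2 with flux) and non-Abelian (N D0-branes) radial
# equations coincide at large N.  We assume the standard tension relation
# tau_0 = (2 pi l_s)^2 tau_2, lambda = 2 pi l_s^2, and replace C by N^2.
H, r, E, N, ls, tau2 = sp.symbols('H r E N l_s tau_2', positive=True)
lam = 2 * sp.pi * ls**2
tau0 = (2 * sp.pi * ls)**2 * tau2

rdot2_A = (1 - 16 * sp.pi**2 * tau2**2 * H**2 * r**4 / E**2
               * (1 + lam**2 * N**2 / (4 * H * r**4))) / H
rdot2_N = (1 - tau0**2 * N**2 * H / E**2
               * (1 + 4 * H * r**4 / (lam**2 * N**2))) / H

# the two equations agree term by term, not just approximately
assert sp.simplify(rdot2_A - rdot2_N) == 0
```

With these substitutions the agreement is exact, term by term; the only large-$N$ step is $C = N^2 - 1 \to N^2$.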
It was argued for the QHS that there is an energy barrier, growing with $N$, preventing the tunnelling of $D0$-branes out of the $D2$-brane. In fact it requires energy to be put into the system to remove the $D0$-brane. Therefore the QHS appears to be stable with respect to particle emission \footnote{ \cite{susskind} also noted that there could be possible nucleation of the $D2$-brane, causing another $D2$-brane to appear inside the original one. Although we can consider multiple fuzzy spheres by selecting an ansatz which is a reducible representation, this does not correspond to the picture on the Abelian side. It would certainly be interesting to consider a non-Abelian description of this.}. The potential at the stable radius in our dual picture can be written explicitly as \begin{equation} V = N\sqrt{\frac{(Mg_s)^{4/3}}{2(N\pi)^{2/3}}}\sqrt{1+\frac{N^2}{2C}}, \end{equation} where we are using the dimensionless potential obtained from $\tilde E$. We now revert to proper time as measured by an observer on the fuzzy sphere, which allows us to re-write the minimised potential with respect to proper time \begin{equation} V_T(N) = \sqrt{\frac{N}{\pi}}\frac{Mg_s}{2^{3/4}}\sqrt{1+\frac{N^2}{2C}}. \end{equation} Now imagine that the soliton emits a single $D0$-brane into the bulk; the change in the potential, to leading order in $1/N$ and taking the large $N$ limit, can be approximated by \begin{equation} V_T(N) - V_T(N-1) \sim \sqrt{\frac{3}{N\pi}} Mg_s. \end{equation} We now need to compare this with the potential energy of a single $D0$-brane attached to a fuzzy sphere located at the stabilisation radius. Although our effective action is valid as a large $N$ expansion, we can use it to determine the potential for a single brane provided that we neglect the back reaction terms between brane and fuzzy sphere.
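The $N^{-1/2}$ falloff of the emission cost quoted above can be checked numerically. A short sketch with the illustrative choice $Mg_s = 1$ (the $O(1)$ prefactor is not tracked here):

```python
import math

# Numerical check that the cost of emitting a single D0-brane,
# V_T(N) - V_T(N-1), falls off as N^{-1/2} at large N, as the
# sqrt(3/(N pi)) M g_s estimate suggests.  M g_s = 1 is an illustrative
# choice and the O(1) prefactor is not tracked.
def V_T(N, Mgs=1.0):
    C = N**2 - 1.0
    return math.sqrt(N / math.pi) * Mgs / 2**0.75 * math.sqrt(1.0 + N**2 / (2.0 * C))

def dV(N):
    return V_T(N) - V_T(N - 1)

# quadrupling N should halve the emission cost if dV ~ N^{-1/2}
assert abs(dV(4 * 10**6) / dV(10**6) - 0.5) < 1e-3
```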
By adding this contribution to the one calculated above we see that \begin{equation} V_{tot} \sim \frac{Mg_s}{\sqrt{\pi}}\left(\sqrt{\frac{3}{N}}+\frac{1}{\sqrt{2}} \right), \end{equation} which is larger than the potential of the stable fuzzy sphere. Thus we conclude that the solution appears to be stable with regard to emission. This gives us an estimate of the binding energy of the $D0$-branes in the near horizon region, which we interpret as the energy barrier needed for quantum tunnelling \begin{equation} E_{\rm binding} \sim \nu g_s \sqrt{N}, \end{equation} where we have made use of the ratio $\nu =M/N$ to simplify the result. In the QHS picture this corresponds to the definition of the filling ratio. Clearly the barrier grows with $N$ at fixed filling ratio, thus in the large $N$ limit we would expect the fuzzy sphere to be stable. The supergravity picture of this case is then the following. If the fuzzy sphere is initially large, then the metric is approximately Minkowski and we have our usual collapsing solution \cite{ramgoolam} with velocity approaching that of light. As the $D0$-branes enter the near horizon geometry they decelerate (from the $D6$ viewpoint) until they oscillate around the minimum of the potential, eventually forming a bound state at $r_{min}$. If on the other hand the fuzzy sphere is initially small, then the gravitational dielectric effect forces the configuration to expand until it reaches the stabilisation radius, at which point it settles into its bound state after oscillation. \section{Inclusion of Angular Momentum.} In the Abelian case, the inclusion of angular momentum terms in the action is trivial since all the coordinates commute. This will clearly not be the case in the non-Abelian version, and so we must choose a specific ansatz. A fuzzy cylinder ansatz was introduced in \cite{fuzzy_cylinder}, which was able to rotate about three independent axes.
However, this ansatz proves to be restrictive on the dimensionality of the background brane solutions, limiting them to $p \le 3$, although it may be useful in describing dual versions of supertubes \cite{supertubes} and we will have a closer look at it in the next section. Instead we choose a different ansatz corresponding to rotation in the $\phi^6-\phi^7$ plane, \begin{eqnarray} \phi^6 &=& R(t) \cos(\theta) T_3, \nonumber \\ \phi^7 &=& R(t) \sin(\theta) T_3, \nonumber \\ \phi^8 &=& R(t) T_1, \nonumber \\ \phi^9 &=& R(t) T_2. \end{eqnarray} This means that the resulting action will only be valid for $p<6$, and so we will not be able to consider rotation in the gravitational Myers effect picture. The action for this particular ansatz can be calculated, and we find \begin{equation} S=-\tau_{p'} \int d^{p'+1} \zeta \sum_{j=1}^N NH^{(p-p'-4)/4} \sqrt{(1+4H\lambda^2CR^4)(1-H\lambda^2C \dot{R}^2-H\lambda^2R^2\dot{\theta}^2 \lambda_j^2)}, \end{equation} where $\lambda_j$ is the $j$th eigenvalue of the diagonal generator $T_3$ (so that $\lambda_j^2$ is an eigenvalue of $(T_3)^2$ in a matrix representation). If we expand the action out to leading order, this enables us to isolate the $\lambda_j$ dependence and we can perform the sum to obtain \begin{equation} \sum_{j=1}^N \lambda_j^2 = \frac{N}{12}(N^2-1) = \frac{CN}{12}. \end{equation} In general, the inclusion of angular momentum for the fuzzy sphere is non-trivial. If we employ a convention where the subscript on the $\lambda$ implies summation over that variable, then we find the exact expression for the static potential in physical radius is given by \begin{equation} V_{eff} = \frac{NH^{(p-p'-4)/4}}{\sqrt{1-H\lambda^2 R^2\dot{\theta}^2 \lambda_{j}^2}}\sqrt{1+\frac{4Hr^4}{\lambda^2C}}\left(\frac{HNr^2\dot{\theta}^2}{12}+\sqrt{1-H\lambda^2R^2\dot{\theta}^2 \lambda_{k}^2} \sqrt{1-H\lambda^2 R^2\dot{\theta}^2 \lambda_{j}^2} \right), \end{equation} where $\dot{\theta}$ corresponds to the angular velocity of the fuzzy sphere.
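The numerical factor in the eigenvalue sum above is easy to verify. A short sketch, assuming the diagonal generator has the equally spaced eigenvalues $m = -(N-1)/2, \ldots, (N-1)/2$ of the spin-$(N-1)/2$ representation (other generator normalisations rescale the sum by a constant):

```python
from fractions import Fraction

# Check the eigenvalue sum used above.  We assume the diagonal generator has
# the equally spaced eigenvalues m = -(N-1)/2, ..., (N-1)/2 of the spin
# (N-1)/2 representation; other normalisations rescale the sum by a constant.
def eigenvalue_sum(N):
    ms = [Fraction(2 * j - (N - 1), 2) for j in range(N)]
    return sum(m * m for m in ms)

# sum_j lambda_j^2 = N (N^2 - 1)/12 = C N / 12 with C = N^2 - 1
for N in range(2, 30):
    assert eigenvalue_sum(N) == Fraction(N * (N**2 - 1), 12)
```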
By setting this term to zero we recover the result for the purely radial collapse, as we would anticipate. Even though we cannot find a closed form solution for the potential, we can still make some comments about the dynamics of the fuzzy sphere. Interestingly, we expect that the potential will vanish in the $r \to 0$ limit, as the only case where there is the possibility of a bound state is when $p-p' > 4$, corresponding to the $p=6, p'=0$ case we investigated in the previous section. Unfortunately our choice of ansatz doesn't allow for this to be investigated here. This tells us that the angular momentum term cannot counteract the gravitational force exerted by the source branes, and the fuzzy sphere will always collapse. \subsection{Alternative ansatz.} Thus far our analysis has been exact but not concise, so it is useful to consider an alternative ansatz which allows us to incorporate angular momentum in a clear manner. Since we require two transverse scalars to define a plane in the transverse space, and at most each plane is parameterised by one of the generators of the representation, we are led to the conclusion that we need six transverse scalars to introduce angular momentum. This will place severe restrictions upon the dimensionality of the branes that we can consider in our solution. In fact we find that at most we can consider a $D3$-brane background. We choose to parameterise the six transverse scalars as follows: \begin{eqnarray} \phi^1 &=& R(t) \cos(\theta)\, T_1, \hspace{0.5cm} \phi^2 = R(t) \sin(\theta)\, T_1, \nonumber \\ \phi^3 &=& R(t) \cos(\theta)\, T_2, \hspace{0.5cm} \phi^4 = R(t) \sin(\theta)\, T_2, \nonumber \\ \phi^5 &=& R(t) \cos(\theta)\, T_3, \hspace{0.5cm} \phi^6 = R(t) \sin(\theta)\, T_3, \end{eqnarray} where each plane carries one of the generators of the representation. Thus we are breaking the $SO(6)$ symmetry of the transverse space to $SO(2) \times SO(2) \times SO(2)$, and choosing the same angle $\theta$ to parameterise the three planes.
This may seem a rather restrictive ansatz, but it will actually allow us to make some progress. The action in this case becomes \begin{equation} S=-\tau_{p'} \int d^{p'+1} \zeta STr \left( H^{(p-p'-4)/4}\sqrt{(1-H\lambda^2 C (\dot{R}^2+R^2 \dot{\theta}^2))(1+4\lambda^2 H R^4C)}\right), \end{equation} with a possible Chern-Simons term, defined up to a constant factor \begin{equation} S_{CS}= -\tau_{p'}\delta_{p'}^p \int dt \frac{q}{H}. \end{equation} Since both terms in the Born-Infeld part of the action are proportional to the identity matrix, we find that the $STr$ reduces to $Tr$ to leading order in large $N$. Finally we obtain \begin{equation} S=-\tau_{p'} \int d^{p'+1} \zeta NH^{(p-p'-4)/4}\sqrt{(1-H\lambda^2 C (\dot{R}^2+R^2 \dot{\theta}^2))(1+4\lambda^2 H R^4C)}. \end{equation} We can now proceed as usual by switching to the Hamiltonian formalism and writing the canonical energy density as \begin{equation} \tilde E = \sqrt{ N^2 H^{(p-p'-4)/2} (1+4\lambda^2 CHR^4) +\frac{1}{H\lambda^2 C} \left(\tilde \Pi^2 + \frac{\tilde L^2}{R^2}\right)}. \end{equation} Switching to the physical radius $r$, we find that the effective potential becomes \begin{equation} V_{\rm eff} = \sqrt{N^2 H^{(p-p'-4)/2} \left( 1+\frac{4Hr^4}{\lambda^2 C} \right) + \frac{\tilde L^2}{Hr^2}}, \end{equation} where we must remember that this equation is only valid for $p \le 3$, and that the energy density and the angular momentum are the conserved charges. If we set the angular momentum term to zero we recover the potential for a radially collapsing solution, as we would expect. For ease of calculation we choose to rescale the potential by a factor of $N$. This is possible because there is an $N^2$ term in the angular momentum density.
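A hedged numerical check that the potential rescaled by $N$ admits no interior minimum in the throat region, so the angular momentum cannot support a bound orbit. The values $k=\lambda=1$, $C\sim 10^4$ and $\tilde L=1$ are illustrative assumptions:

```python
import math

# Numerical sketch: the potential rescaled by N,
# Vbar^2 = H^{(p-p'-4)/2}(1 + 4 H r^4/(lambda^2 C)) + L^2/(H r^2),
# with H = k/r^{7-p}, has no interior minimum in the throat region, so
# angular momentum produces no bound orbit.  The values k = lambda = 1,
# C ~ 10^4 and L = 1 are illustrative assumptions.
def Vbar(p, pp, r, L=1.0, k=1.0, C=1e4 - 1.0):
    H = k / r**(7 - p)
    return math.sqrt(H**((p - pp - 4) / 2.0) * (1.0 + 4.0 * H * r**4 / C)
                     + L**2 / (H * r**2))

for (p, pp) in [(3, 1), (2, 0), (3, 3), (2, 2)]:
    rs = [10**(-3 + 0.05 * i) for i in range(60)]
    vals = [Vbar(p, pp, r) for r in rs]
    # strictly monotonic in r: no local minimum, hence no bound orbit
    assert all(a < b for a, b in zip(vals, vals[1:]))
```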
The resulting non-Abelian and Abelian potentials are written below for comparative purposes \begin{eqnarray} \bar{V}_{\rm eff} &=& \sqrt{H^{(p-p'-4)/2} \left( 1+ \frac{4Hr^4}{\lambda^2C} \right) + \frac{\tilde L^2}{Hr^2}}, \\ V^{\rm abelian} &=& \sqrt{H^{(p-p'-4)/2} + \frac{\tilde L^2}{H r^2}}. \nonumber \end{eqnarray} Simple analysis of the potential in the non-Abelian case shows that it is monotonic, with no local minimum, for all valid $p$ and $p'$ in this region. Therefore there is no possibility of the formation of bound states, in the same way that there are no bound orbits in the Abelian theory \cite{branonium}. Once again it is useful to look at the equations of motion to determine if there are any constraints to be imposed on the solution. We wish to consider a case where the energy density and the angular momentum density are constant. Thus, we find the following expression \begin{equation} \dot{r}^2 = \frac{1}{H} \left( 1 - \frac{1}{\tilde E^2}\left\lbrack N^2 H^{(p-p'-4)/2}\left\lbrace1+\frac{4Hr^4}{\lambda^2 C} \right\rbrace + \frac{\tilde L^2}{Hr^2} \right\rbrack \right). \end{equation} If we assume that the angular momentum takes some fixed, non-zero value, then we can consider how the constraint equation is modified in the asymptotic limit of $r \to 0$ \begin{equation} 1 \ge \frac{1}{\tilde E^2} \left( \frac{4N^2 k_p^{(p-p'-4)/4} k_p}{\lambda^2 C r^{((7-p)(p-p'-4)+6-2p)/2}} + \frac{\tilde L^2r^{5-p}}{k_p} \right). \end{equation} This appears to have a complicated dependence upon $r$; however, because of the restrictions from the ansatz we know that there are only two possible cases we can consider: $i)$ $p-p'=2$ and $ii)$ $p-p'=0$. The first case reduces the constraint to the following \begin{equation} 1 \ge \frac{1}{\tilde E^2} \left( r^4 + \tilde L^2 r^{5-p} \right).
\end{equation} It is clear that as $r$ vanishes the contribution from the angular momentum term also vanishes and the energy density can be relatively arbitrary, as already discussed. The second condition implies a similar result; however, the dimensionalities of the branes involved play a role in determining how quickly the leading term vanishes. \section{Non-BPS branes.} It is well known that BPS branes arise as soliton solutions of non-BPS branes, so it is natural to enquire about the dynamics of these branes in various backgrounds. In this section we will look at the action for $N$ non-BPS branes in the $Dp$-brane background and try to study the dynamical evolution of the fuzzy sphere in this instance. This will not be as straightforward to analyse as the BPS case \cite{non_bps_dynamics}, as there is the additional complication of open string tachyon modes condensing on the world volume. We start with the generalised non-Abelian action for the probes, which can be expanded to lowest order \cite{non_abelian_non_bps}, \begin{equation} S=-\tau_{p'} \int d^{p'+1} x Str V(T) H^{(p-p'-4)/4} \sqrt{(1-\lambda^2 H \dot{\phi}^2 - H^{1/2}\lambda \dot{T}^2)(1-\frac{\lambda^2 H }{2}[\phi^i,\phi^j][\phi^j,\phi^i])}. \end{equation} The tachyon field is dimensionless, and we are assuming, like the transverse scalars, that it is purely time dependent. This also ensures that the Chern-Simons term vanishes to lowest order when we use the static gauge. $V(T)$ is the potential for the tachyon field, which describes the changing tension of each of the branes. Note that in this section we will be using the standard form of the tachyon potential \cite{non_bps_action, tachyon_transform, sen, time_dependence, non_bps_dynamics} where $V(T) \propto 1/\cosh(T)$. It would certainly be interesting to study the case of spatially dependent tachyon fields, as their classical solutions give rise to kink-antikink solutions on the world volume \cite{sen}.
We now make use of the $SU(2)$ ansatz, $\phi^i = R(t) T^i$, as usual and find that the action reduces to the form \begin{equation} S=-\tau_{p'} \int d^{p'+1} x N V(T) H^{(p-p'-4)/4} \sqrt{(1-\lambda^2 CH \dot{R}^2-\lambda H^{1/2} \dot{T}^2)(1+4H\lambda^2 CR^4)}, \end{equation} where we have performed the symmetrized trace to bring the Casimir into the action. As it stands, this action is perfectly adequate for analysing the dynamics. However the presence of the tachyon makes things difficult, since it will not decouple from the equation of motion for the radion field. It is more useful to modify this action to another equivalent form, and investigate the dynamics by finding another conserved charge. In order to do this, we choose to rescale the tachyon field \cite{tachyon_transform, non_bps_dynamics} \begin{equation} \frac{\tilde T}{\sqrt{2}}= \sinh \left(\frac{T}{\sqrt{2}}\right), \end{equation} which transforms the action into \begin{equation} S=-\tau_{p'} \int d^{p'+1}x \frac{N H^{(p-p'-4)/4}}{\sqrt{F}}\sqrt{1+\frac{4Hr^4}{\lambda^2C}}\sqrt{1-H \dot{r}^2-\frac{H^{1/2}\lambda \dot{T}^2}{F}}, \end{equation} where $F$ now controls the behaviour of the tachyon and the changing tension of the probe branes, which is simply \begin{equation} F(T)=1+\frac{T^2}{2}, \end{equation} and we have also chosen to write the new tachyon field in terms of $T$ for ease of notation. This form of the action allows us to investigate the dynamics of the non-BPS brane when the tachyon field is large \cite{non_bps_dynamics}. At this juncture we must point out that there may be objections to using this form of rescaling, as we are assuming that it will hold true in a gravitating background. It is well known that there are many effective descriptions for the tachyon field, with each one defined on a specific section of tachyon moduli space.
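The field redefinition above can be checked against the tachyon potential. A sympy sketch, assuming the normalisation $V(T) = 1/\cosh(T/\sqrt{2})$, for which the map is exact:

```python
import sympy as sp

# Check the tachyon field redefinition against the potential, assuming the
# normalisation V(T) = 1/cosh(T/sqrt(2)) for which the map is exact.  With
# Ttilde/sqrt(2) = sinh(T/sqrt(2)) one finds F = 1 + Ttilde^2/2 = cosh^2(T/sqrt(2)),
# so the 1/sqrt(F) prefactor of the transformed action is precisely V(T).
T = sp.symbols('T', real=True)
Ttilde = sp.sqrt(2) * sp.sinh(T / sp.sqrt(2))
F = 1 + Ttilde**2 / 2

# F = cosh^2, and cosh > 0, hence V(T) = 1/sqrt(F)
assert sp.simplify(F - sp.cosh(T / sp.sqrt(2))**2) == 0
```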
However, as there has been little progress in constructing non-Abelian versions of these effective actions, we must use the DBI and hope that it provides an adequate description of the physics at late time. It turns out that making the field redefinition will still not be enough to simplify the problem, and so we are also forced to consider the throat geometry around the source branes. In terms of field space definitions we are probing the large $T$, small $r$ region of the theory. We can now use the Noether method to find the charge associated with a scaling symmetry on the brane world volume. We postulate that the fields and the time scale as follows: \begin{equation} t'= \Gamma^{\alpha}t, \hspace{0.5cm} r'=\Gamma^{\beta}r, \hspace{0.5cm} T'=\Gamma^{\gamma}T. \end{equation} Inserting these transformations into the action yields the following constraints, \begin{equation} \beta(p-3)= 0, \hspace{0.5cm} \alpha = -\beta, \hspace{0.5cm} \gamma = -\alpha p'. \end{equation} The first of these is the most important, since we have two possible solution branches. Firstly we can have $\beta=0$, which in turn leads to $\alpha = \gamma =0$, and so there are no field symmetries. However the second solution gives $p=3$, which implies that the scaling variables are arbitrary. What we have found is that the symmetry on the world-volumes of the probe branes imposes a constraint on the allowed dimensionality of the background. If we were to allow extended transformations, for example a rescaling of the string coupling, we find that the background constraint becomes $p=5$. Only in the case where we rescale all the fields, the string coupling and the string length can we eliminate this background constraint. For simplicity, we will only look at the basic case in this note. The extension to more general scaling symmetries is left for future endeavour.
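The constraints above can be reproduced by demanding that every $\Gamma$-weight in the throat action vanishes. A sympy sketch, assuming $H \propto r^{-(7-p)}$ in the throat and $F \approx T^2/2$ at large $T$:

```python
import sympy as sp

# Reproduce the scaling constraints by demanding that every Gamma-weight in
# the throat action vanishes.  Assumptions: H ~ r^{-(7-p)} in the throat and
# F ~ T^2/2 at large T, with t -> Gamma^alpha t, r -> Gamma^beta r,
# T -> Gamma^gamma T.
a, b, c, p, pp = sp.symbols('alpha beta gamma p pprime')

w_Hr4   = -b * (7 - p) + 4 * b                       # weight of 4 H r^4/(lambda^2 C)
w_Hrdot = -b * (7 - p) + 2 * (b - a)                 # weight of H rdot^2
w_kin   = -b * (7 - p) / 2 + 2 * (c - a) - 2 * c     # weight of H^{1/2} Tdot^2 / F
w_meas  = a - b * (7 - p) * (p - pp - 4) / 4 - c     # weight of dt H^{(p-p'-4)/4}/sqrt(F)

# first constraint: beta (p - 3) = 0
assert sp.expand(w_Hr4 - b * (p - 3)) == 0

# on the p = 3 branch the remaining weights force alpha = -beta, gamma = -alpha p'
sol = sp.solve([w.subs(p, 3) for w in (w_Hrdot, w_kin, w_meas)], [a, c], dict=True)[0]
assert sol[a] == -b
assert sp.expand(sol[c] - b * pp) == 0   # gamma = beta p' = -alpha p'
```

The first weight is identically $\beta(p-3)$, and on the $p=3$ branch the remaining weights fix $\alpha=-\beta$ and $\gamma=-\alpha p'$, matching the constraints quoted in the text.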
As the scaling variables are arbitrary, we find it convenient to choose $\alpha = -1$, thus the scalings become \begin{equation} t'=\Gamma^{-1}t, \hspace{0.5cm} r'=\Gamma r, \hspace{0.5cm} T'=\Gamma^{p'} T, \end{equation} and we find a representation of the conserved charge generating these transformations, which is \begin{equation}\label{concharge} D= t \tilde E + r \tilde \Pi + p'T P_T, \end{equation} where $\tilde E, \tilde \Pi$ and $P_T$ are the canonical energy density, radial momentum and tachyon momentum respectively. Now it is useful to write the energy density in canonical form \begin{equation}\label{hamilton} \tilde E = \sqrt{\frac{2N^2}{T^2}\left(\frac{k_3}{r^4}\right)^{-(1+p')/2}\left \lbrace 1+ \frac{4k_3}{\lambda^2C}\right \rbrace+\frac{\tilde \Pi^2r^{4}}{k_3}+\frac{T^2P_T^2r^2}{2\lambda \sqrt{k_3}}}, \end{equation} where we have written $k_3$ to denote the constant charge of the $D3$-brane background. Using this expression, we find the equations of motion for the radion and tachyon fields reduce to \begin{equation}\label{eqm} \dot{r}=\frac{\tilde \Pi r^4}{\tilde E k_3}, \hspace{1cm} \dot{T}=\frac{T^2P_T r^2}{2\tilde E \lambda \sqrt{k_3}}. \end{equation} Note that in this instance neither $\tilde \Pi$ nor $P_T$ is a conserved charge, which makes it difficult to solve the equations of motion. However, due to our world-volume transformations we have discovered a charge, $D$, that is conserved, and so we can use this to simplify the equations of motion. In order to do this we will have to consider specific decompositions of the symmetry charge, as the general expression does not lead to simple analytic solutions. \subsection{Decomposition of charge.} Even the existence of the conserved charge (\ref{concharge}) does not allow an easy split between the variables $r$ and $T$ which would let us solve (\ref{eqm}).\footnote{The only exception is the case $p'=0$, which we shall discuss later.}
In order to try and find analytic solutions (even approximate ones) we need to impose further conditions on the canonical variables in a manner consistent with the equations of motion. Let us write the conserved scaling charge $D$ in (\ref{concharge}) as the condition \begin{equation} \Phi = (t \tilde{E} + r \tilde{\Pi} + p'T P_T -D) = 0. \end{equation} This constraint is preserved under Hamiltonian flow, since it can be verified that $\dot{\Phi} = \partial \Phi/\partial t +\{\Phi,H \} = 0 $, where $\{, \} $ defines the usual Poisson bracket and $H$ is the Hamiltonian defined in (\ref{hamilton}). Now decompose $\Phi = \Phi_1 +\Phi_2$ where \begin{eqnarray} \Phi_1 &= & \tilde{E}_1 t +r \tilde{\Pi} -D_1\nonumber\\ \Phi_2 &=& \tilde{E}_2 t +p' T P_T - D_2 \end{eqnarray} with $ \tilde{E}_1 + \tilde{E}_2 = \tilde{E} $ and $ D_1 +D_2 = D $. If we now impose, for example, the additional constraint $\Phi_1 = 0$ (and hence $\Phi_2 = 0 $ as a consequence), then this would allow us to solve for $r(t) $ and $T(t)$. However we must check that this additional constraint is preserved under Hamiltonian flow, i.e.\ that \begin{equation} \dot{\Phi_1} = \partial \Phi_1/\partial t +\{\Phi_1, H \} = 0. \end{equation} This leads to the following algebraic constraint between $r $ and $T$: \begin{equation}\label{con2} \tilde{E}_1 - \tilde{E} - \frac{2 N^2 p'}{\tilde{E} T^2} \left( \frac{k_3}{r^4} \right)^{-(1+p')/2} \left \lbrace 1+ \frac{4 k_3}{\lambda^2 C} \right\rbrace= 0. \end{equation} The case $p'=0 $ is special in that the original constraint, $\Phi = 0$, can be used to solve the $r, T$ system completely (see later). For now we will assume that $p' \ne 0$. Since we are considering $p' < p = 3 $, we need only consider the case $p'=2$. It is clear from (\ref{con2}) that $\tilde{E}_1 > \tilde{E} $ if this constraint is to be solved exactly. But one can then show that an inconsistency appears when this algebraic constraint is applied to (\ref{eqm}). Thus at best we can only solve (\ref{con2}) approximately.
One such approximate solution is to take $\tilde{E}_1 \approx \tilde{E} $ and assume $T$ is large. We remind the reader that we already assumed that $T$ is large in order to obtain the scaling symmetries earlier. We can now go ahead and solve the $r,T$ system of equations. Solving the radial equation of motion we find \begin{equation} \frac{1}{r^2}= \frac{1}{r_0^2}-\frac{t}{\tilde E k_3}(2D_2-\tilde E t). \end{equation} Now for small values of $D_2$ the dynamics of the probe obeys a $ 1/t$ relationship. The exact description of the dynamics will depend on the relative sizes of $D_2$ and $\tilde E$. If $\tilde E \gg D_2$, then the quadratic term will be dominant. This ensures that the solution starts at some maximal distance and tends to zero. Conversely, if $D_2$ is much larger than $\tilde E$, then the linear term is dominant and this describes an expanding solution, which will break down when the supergravity constraint is no longer satisfied. However, when the two charges are of the same order of magnitude we find a turning solution. The sphere initially expands from $t=0$ until it reaches a stationary point at $t=D_2/\tilde E$, before collapsing toward zero size. Using the second constraint to solve for the tachyon momentum yields the solution to the tachyon equation of motion \begin{equation} T \sim T_0 \exp\left(\frac{\sqrt{k_3}r_0^2}{4\lambda} f(t) \right), \end{equation} where the function $f(t)$ is proportional to $\mathrm{arctanh}(t\tilde E - D_2)$. Thus the general behaviour of the tachyon solution is that it is an exponential function of time. The results obtained so far have all been for the case $p'=2$. In order to determine the dynamics of the $p'=0$ case, corresponding to $N$ coincident D-particles, we note that the tachyon dependence drops out of the conserved charge.
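The turning behaviour described above can be checked directly: $1/r^2$ is a convex quadratic in $t$, so $r$ is maximised exactly at the stationary point $t=D_2/\tilde E$. A short sketch (with arbitrary illustrative parameter values):

```python
# Turning point of 1/r^2 = 1/r0^2 - t (2 D2 - E t)/(E k3):
# the sphere is largest where 1/r^2 is smallest, i.e. at t = D2/E.
# Parameter values are arbitrary illustrative choices.
E, D2, k3, r0 = 1.0, 0.5, 2.0, 0.3

def inv_r2(t):
    return 1 / r0**2 - t * (2 * D2 - E * t) / (E * k3)

t_star = D2 / E
# sample symmetrically around t_star: inv_r2 is minimised there
ts = [t_star + dt for dt in (-0.1, -0.05, 0.0, 0.05, 0.1)]
vals = [inv_r2(t) for t in ts]
```

Since the coefficient of $t^2$ is $+1/k_3 > 0$, the quadratic has a single minimum and the radius a single maximum, consistent with the turning solution in the text.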
First, solving the radial equation of motion, we find the solution \begin{equation} \frac{1}{r^2}=\frac{1}{r_0^2}-\frac{t}{\tilde E k_3}(2D-t\tilde E), \end{equation} which is of the same form as the solution obtained in the charge decomposition above for $p'=2$. Therefore we also expect to find a similar turning solution for the fuzzy sphere, parameterised by the time $t=D/\tilde E$. We have no other constraint to impose on the equation of motion for the tachyon field, but we can write the tachyon momentum in terms of the other canonical variables \begin{equation} P_T^2 = \frac{2\sqrt{k_3}\lambda}{T^2 r^2} \left(\tilde E^2 - \frac{2N^2r^2}{T^2 \sqrt{k_3}}\left \lbrace 1 + \frac{4k_3}{\lambda^2C}\right \rbrace-\frac{\tilde \Pi^2 r^4}{k_3} \right). \end{equation} In principle we could use this expression to solve exactly for the tachyon field; however, this is extremely difficult and we will find it much more useful to find an approximate solution. From the above equation, we see that the supergravity solution implies $r^4/k_3 \ll 1$, and so we can effectively neglect the contribution from the final two terms. Inserting this into the equation of motion yields the solution \begin{equation} T \sim T_0 \exp\left( \left( \frac{\sqrt{k_3}}{2\lambda}\right)^{1/2} \ln \left[\sqrt{\tilde E k_3- r_0^2t(2D-t\tilde E)}+\frac{r_0(t\tilde E-D)}{\sqrt{\tilde E}} \right] \right), \end{equation} which we expect to provide a reasonable approximation as $r\to0$, and which once again shows the increasing exponential dependence of the tachyon field. Again the contribution from the two charges can change the dynamics of the field, as described earlier. The general solution for the tachyon field is expected to be background dependent \cite{non_bps_dynamics}; however, we see that in the $D3$-case it is roughly exponential in all cases. The fuzzy sphere appears to always collapse, but there is an intricate relationship between the tachyon condensation and the radial modes which depends upon the conserved charges.
When both terms appear in the radial equation of motion we see that there can be turning solutions, describing an initial expansion which eventually contracts within finite time. This is a result of the tachyon condensation, which decreases the tension of the branes so that they feel a weaker gravitational attraction. However, the combination of the charges in the tachyon solution also implies a turning point for the tachyon field, and so the tension eventually increases and the fuzzy sphere collapses, provided that the tachyon solution still remains valid. \section{NS5-brane background.} The work in the previous sections has only been concerned with coincident $Dp$-brane backgrounds, but we wish to extend this to the $NS$5-brane background. This particular background is important for several reasons. Firstly, in many cases there is an exact conformal field theory description, allowing BCFT calculations. Secondly, there is an interesting duality which relates six-dimensional string theory on the $NS$5-brane world-volume (LST) \cite{lst} to supergravity in the bulk, permitting an understanding of the dynamics in terms of defects of the LST. Importantly for our purposes, there has been recent work on probe dynamics in this background which has provided insights into the nature of the rolling tachyon, and perhaps even a geometrical origin for the open string tachyon in Abelian theories. Much of the construction of the non-Abelian theory follows a similar line to that of the $D$-brane backgrounds. We begin with the background solution for $k$ coincident $NS$5-branes, given by the usual CHS solutions \cite{chs} \begin{eqnarray} ds^2 &=& \eta_{\mu \nu} dx^{\mu} dx^{\nu} + H(x^n) dx^m dx^n \nonumber \\ e^{2(\phi-\phi_0)} &=& H(x^n) \nonumber \\ H_{mnp} &=& -\epsilon_{mnp}^q \partial_q \phi. \end{eqnarray} The coincident fivebranes form an infinite throat, which can be seen from the dilaton term. We will refer to the throat geometry as the near-horizon part of the bulk space-time.
The usual definitions apply as in the $Dp$-brane solution, with the addition of the 3-form field strength for the Kalb-Ramond field. The harmonic function describing this background is simply \begin{equation} H(x^n) = 1 + \frac{kl_s^2}{r^2}, \end{equation} where $r$ is the physical radius given in terms of the transverse scalars, $r=\sqrt{x_i^2}$. Note that there is no WZ term in this solution, since the $NS$5-branes are not sources of Ramond-Ramond charge. This is because the fivebrane is the magnetic dual of the fundamental string, and as such we expect that no open strings will end on any of the $k$ $NS$5-branes. The probe branes themselves will carry R-R charge, which we anticipate will be radiated as the probes move in the background. This has important consequences, as we know that the classical Abelian theory is only valid for $3 \le p < 5$ \cite{time_dependence} due to the emission of closed string modes. This tells us that the DBI only describes the motion of the open string degrees of freedom, and radiative correction terms due to the closed strings must be studied separately. It would be a useful exercise to check if this relation also holds in the non-Abelian theory. We also know that the probe branes and the background preserve different halves of the supersymmetry algebra; supersymmetry is therefore explicitly broken, and we will find a gravitational force acting on the fuzzy sphere causing it to collapse. This is also seen in the Abelian theory of a spherical $D2$-brane with magnetic flux \cite{hyakutake}, which should be equivalent to our construction of $D0$-branes on a fuzzy sphere. We now insert these background solutions into our non-Abelian action. Once again, we expand the terms to leading order and assume that the transverse scalars are time dependent, which will ensure that our solutions are homogeneous and thus there will be no formation of caustics.
Hence we arrive at the following form of the action \begin{equation} S=-\tau_{p} \int d^{p+1} \zeta STr \left( H^{-1/2} \sqrt{1-H \lambda^2 \dot{\phi}^i \dot{\phi}^j \delta_{ij}} \sqrt{1-1/2 \lambda^2 H^2 [\phi^i, \phi^j][\phi^j,\phi^i]} \right). \end{equation} Note that the $NS$5-branes have a tension that goes as $1/g_s^2$, whilst the $Dp$-branes each have tensions proportional to $1/g_s$; thus the fivebranes are heavier in the large $k$ limit. However, as we will be interested in the large $N$ limit, we may find there is considerable back reaction upon the throat in the target space, which may deform it substantially. For the purpose of this note we will ignore this effect, and simply assume that we can fine tune the parameters such that the back reaction is negligible. The action is given by \footnote{For simplicity we do not include angular momentum, though this can be done as in section 4.} \begin{equation} S=-\tau_p \int d^{p+1} \zeta \frac{N}{\sqrt{H}}\sqrt{(1-H\lambda^2 \dot{R}^2C)(1+4\lambda^2 CH^2 R^4)}, \end{equation} with $C$ being the usual quadratic Casimir of the $N$-dimensional representation. Switching now to physical distances, we arrive at the final form of the action \begin{equation} S=-\tau_p \int d^{p+1} \zeta \frac{N}{\sqrt{H}} \sqrt{\left(1-H \dot{r}^2\right)\left(1+\frac{4H^2r^4}{\lambda^2C}\right)}, \end{equation} from which we can derive the usual canonical momenta and energy densities, where we have explicitly divided out the 'mass' of each brane: \begin{eqnarray} \tilde \Pi &=& \frac{NH\dot{r}}{\sqrt{H}}\sqrt{1+\frac{4H^2r^4}{\lambda^2C}} \frac{1}{\sqrt{1-H \dot{r}^2}} \nonumber \\ \tilde E &=& \frac{N}{\sqrt{H}}\sqrt{1+\frac{4H^2r^4}{\lambda^2C}} \frac{1}{\sqrt{1-H \dot{r}^2 }}.
\end{eqnarray} We solve the equation for the energy, which is conserved, to obtain the following constraint on the dynamics of the probe branes, assuming a fixed energy density: \begin{equation} 1 \ge \frac{N^2}{ \tilde E^2 H} \left(1+\frac{4H^2r^4}{\lambda^2C}\right). \end{equation} We are going to be interested in the near-horizon geometry of the fivebrane background, and so can make the usual approximation with regards to the harmonic function. Again we will also anticipate that there is a maximum size for the fuzzy sphere in the large $r$ region, since in this region the metric reduces to the metric for the $Dp$-brane background, namely Minkowski space. In the throat, we find that the constraint becomes \begin{equation} 1 \ge \frac{N^2 r^2}{\tilde E^2 k l_s^2} \left(1+\frac{4k^2l_s^4}{\lambda^2C}\right), \end{equation} which is automatically satisfied for the radial part since we know that $H \gg 1$ in this region. This allows us to place the following constraint on the energy density \begin{equation} \frac{\tilde E^2}{N^2} \ge \frac{r^2}{kl_s^2}\left(1+\frac{k^2}{\pi^2C}\right). \end{equation} The supergravity solution tells us that $r^2/kl_s^2$ must be extremely small, and we can select $k/N$ to be small even when $k$ and $N$ are individually large; thus the last term is simply $\mathcal O(1)$. This implies that we should take $\tilde E$ to be larger than $N$ to ensure that the constraint is satisfied. Thus, like the majority of the $Dp$-brane solutions, we find that the fuzzy sphere can collapse down toward zero size. In this background, though, we expect that the moving $D$-branes will shed their energy, which will appear as modes living on the fivebranes, and eventually form a $(k, N)$ bound state in analogy to the $(k,1)$ state in the Abelian case.
As we have already seen, one of the main differences between the usual fuzzy sphere solutions in flat space and those in a curved background is that the velocity term decreases with the radius. In flat space we find that the collapsing configuration approaches the speed of light at late times, and thus the $1/N$ corrections due to the symmetrized trace become important. Clearly we do not see the same behaviour in this case; in fact, a six-dimensional observer on the $NS$5-brane world-volume will record that it takes an infinite amount of time for the fuzzy sphere to collapse to zero size.\footnote{Of course if we switch to 'proper' co-moving time coordinates $\tau$, then the collapse will occur in finite time \cite{time_dependence}, and an observer on the probe branes will record the velocity as tending to the speed of light.} This is interesting, as it appears that the energy of a collapsing fuzzy sphere in flat space is the same as that of an essentially static sphere in a space-time throat, and is related to the formation of a bound state of $(p,q)$ fivebranes \cite{time_dependence}. In the large $r$ region we find \begin{equation} 1 \ge \frac{4N^2r^4}{\tilde E^2 \lambda^2 C}, \end{equation} which translates into the condition that the fuzzy sphere has a maximum radius given by \begin{equation} r_{max} = \sqrt{ \frac{ \tilde E \lambda C^{1/2}}{2N}}. \end{equation} This is, as anticipated, the same result as derived for the $D$-brane background. We now look at the static potential associated with the fivebrane background.
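Saturating the large-$r$ inequality $1 \ge 4N^2r^4/\tilde E^2 \lambda^2 C$ reproduces the quoted $r_{max}$; a one-line numerical check (parameter values arbitrary):

```python
import math

# Saturate 1 = 4 N^2 r^4 / (E^2 lam^2 C) and compare with the closed form
# r_max = sqrt(E lam sqrt(C) / (2 N)).  Values are illustrative only.
N, E, lam, C = 3.0, 10.0, 0.5, 8.0
r_max = (E**2 * lam**2 * C / (4 * N**2)) ** 0.25          # solve the saturated bound
r_max_closed = math.sqrt(E * lam * math.sqrt(C) / (2 * N))  # quoted maximum radius
```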
Following the convention employed in the Abelian cases \cite{time_dependence}, we easily find that the potential can be written \begin{equation} V_{eff} = \frac{N^2}{\tilde E^2 H^2} \left(1+\frac{4H^2r^4}{\lambda^2C}\right) - \frac{1}{H}. \end{equation} The interesting question is what happens in the throat, since we know that in the large $r$ region the potential will be a simple monotonically increasing function, which goes as $r^4$ \begin{equation} V_{eff} \sim \frac{4r^4}{\lambda^2 \tilde E^2}. \end{equation} Dropping the factor of unity as before, we find that as $r \to 0$ the potential becomes \begin{equation} V_{eff} \approx \frac{N^2 r^4}{\tilde E^2 k^2 l_s^4} \left(1+\frac{4k^2l_s^4}{\lambda^2 C}\right) - \frac{r^2}{kl_s^2}, \end{equation} which indeed tends to zero with $r$ for fixed $\tilde E$, as expected. In any case, we wish to solve the equation of motion for the probe branes in the throat. Because the energy is conserved, the solution, up to constants of integration, is simply \begin{equation} \frac{1}{r} = \sqrt{ \frac{N^2}{\tilde E^2 kl_s^2}\left(1+\frac{4k^2l_s^4}{\lambda^2C}\right)}\cosh\left(\frac{t}{\sqrt{kl_s^2}}\right), \end{equation} which is actually an extension of the solution for a single probe $Dp$-brane in the Abelian theory, shown in \cite{time_dependence} to be \begin{equation} \frac{1}{r}=\frac{1}{\sqrt{\tilde E^2 kl_s^2 - \tilde L^2}}\cosh\left(\frac{t}{\sqrt{kl_s^2}}\sqrt{1-\frac{\tilde L^2}{kl_s^2\tilde E^2}} \right), \end{equation} where the effect of the angular momentum is to act as a 'braking' term. If we consider performing a Wick rotation of the time coordinate for the collapsing solution we find a periodic solution in terms of a cosine function. This can be interpreted as the collapse and subsequent bounce of the fuzzy sphere in imaginary time. Although the physical interpretation of this solution is not clear, we expect it to approximate the time-dependent solution for Euclidean branes.
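One can verify numerically that the $\cosh$ profile keeps the near-horizon energy density constant: with $1/r = A\cosh(t/\sqrt{kl_s^2})$ and $A = (N/\tilde E)W/\sqrt{kl_s^2}$, the combination $H\dot r^2$ equals $\tanh^2$, so $\tilde E$ is time independent. A sketch with arbitrary illustrative values:

```python
import math

# Check that 1/r = A cosh(t/sqrt(k l_s^2)) conserves the near-horizon energy
# E = (N/sqrt(H)) W / sqrt(1 - H rdot^2), with H = k l_s^2 / r^2 in the throat.
# All parameter values are arbitrary illustrative choices.
N, k, ls, lam, C, E = 2.0, 20.0, 1.0, 0.5, 12.0, 5.0
W = math.sqrt(1 + 4 * k**2 * ls**4 / (lam**2 * C))
A = N * W / (E * math.sqrt(k * ls**2))

def r_of_t(t):
    return 1.0 / (A * math.cosh(t / math.sqrt(k * ls**2)))

def energy(t, h=1e-6):
    r = r_of_t(t)
    rdot = (r_of_t(t + h) - r_of_t(t - h)) / (2 * h)   # central difference
    H = k * ls**2 / r**2                               # near-horizon harmonic fn.
    return (N / math.sqrt(H)) * W / math.sqrt(1 - H * rdot**2)

energies = [energy(t) for t in (0.5, 1.0, 2.0)]
```

The energies evaluated at different times all coincide with the input $\tilde E$, confirming the solution of the conservation equation.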
This sinusoidal behaviour can also be seen if we switch to conformal (or 'proper') time, where an observer sees that the collapse occurs in finite time. In this case we would expect $1/r$ to be proportional to $\sin(t)$ \cite{time_dependence}, which again is suggestive of a periodic collapse and expansion. However, this solution would probe the non-perturbative region of the theory, and it is not clear if the corrections (e.g.\ quantum, $1/k$ and back-reaction) would admit such a solution. One further thing to note is that using S-duality we may map this solution to that of the coincident $D5$-brane background being probed by coincident $D3$-branes, as their actions are identical. This agrees with our expectation that the $D5$-brane background yields exponential solutions at late times. We may enquire about the validity of the classical solution in the throat region. Using our time-dependent ansatz we see that the dilaton is also a time-dependent function; in fact, for a purely radially collapsing solution we find that the dilaton behaves as \begin{equation} e^{\phi} = \frac{Ng_s}{\tilde E}\sqrt{1+\frac{4k^2l_s^4}{\lambda^2 C}}\cosh \left(\frac{t}{\sqrt{kl_s^2}} \right). \end{equation} Note that quantum effects can be neglected provided that $e^{\phi} \ll 1$; since we know that $\tilde E \gg N$ from our constraint equation, we expect that the classical analysis will provide an accurate description of the solution, at least for early times. This can be 'fine-tuned' for specific values of $k$ and $N$ so that the classical solution continues to hold at late times. \subsection{Correction from symmetrized trace.} Thus far we have investigated the dynamics of the action at leading order, and seen that the fuzzy sphere will generally collapse down to small size. It is expected that the effective action will break down at distances comparable with the string length, and thus $1/N$ corrections will become important.
In order to deal with this situation we look at the next order terms due to the symmetrized trace corrections. As we have already seen, we can write the first order correction to the energy as \begin{equation} \tilde E_1 = \left(1-\frac{2}{3}C\frac{\partial^2}{\partial C^2}\right) \tilde E_0, \end{equation} which yields \begin{equation} \tilde E_1 = \frac{N}{\sqrt{H}\sqrt{1-H\dot{r}^2}} \left(W(k,C) + \frac{k^4}{6W(k,C)^{3}\pi^4 C^3} - \frac{2k^2}{3W(k,C) \pi^2 C^2} \right), \end{equation} where, for simplicity, we have re-introduced the notation \begin{equation} W(k,C) = \sqrt{1+\frac{k^2}{\pi^2 C}}. \end{equation} This term can be thought of as a mass term, given how it arises in the context of the energy. In the $Dp$-brane case (and in flat space) this term will be position dependent, and we have the notion of a position-dependent mass. However, the near-horizon limit of the $NS$5-brane background removes the radial dependence, leaving us with a constant. Because we are using the supergravity approximations in our analysis, we are taking $k$ and $N$ to be large, and so this 'mass' term is positive, but may be small if we demand that the ratio $k/N$ be small. If we now employ the canonical formulation of the energy, we can set the $\tilde \Pi$ terms to zero to find the corrected potential for the probe branes up to leading order in $1/C$ \begin{equation} V_1 = \frac{N}{\sqrt{H}}\left( W(k,C) -\frac{2k^2}{3W(k,C) \pi^2 C^2} \right). \end{equation} The potential does not vanish with this correction because we have the supergravity condition $k \gg N$, where both $k$ and $N$ are integers. In fact, even taking into account higher order corrections \cite{ramgoolam}, the potential is nowhere vanishing. Thus the symmetrized trace correction does not yield a bounce solution.
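The $C$-dependence of $\tilde E_0$ sits entirely in $W(k,C)=\sqrt{1+k^2/\pi^2 C}$, and direct differentiation gives $(1-\tfrac{2}{3}C\partial_C^2)W = W - 2k^2/(3\pi^2C^2W) + k^4/(6\pi^4C^3W^3)$; dropping the $1/C^3$ piece leaves the $1/C^2$ correction appearing in the potential. A finite-difference sketch of this (illustrative values of $k$ and $C$):

```python
import math

# Apply the symmetrized-trace operator (1 - (2/3) C d^2/dC^2) numerically to
# W(k, C) = sqrt(1 + k^2 / (pi^2 C)) and compare with the closed form.
# k and C are arbitrary illustrative values.
k, C = 30.0, 100.0

def W(c):
    return math.sqrt(1 + k**2 / (math.pi**2 * c))

h = 1e-3
W2 = (W(C + h) - 2 * W(C) + W(C - h)) / h**2            # d^2 W / dC^2
corr_num = W(C) - (2.0 / 3.0) * C * W2
# closed form: W - 2k^2/(3 pi^2 C^2 W) + k^4/(6 pi^4 C^3 W^3)
corr_closed = (W(C) - 2 * k**2 / (3 * math.pi**2 * C**2 * W(C))
               + k**4 / (6 * math.pi**4 * C**3 * W(C)**3))
```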
\subsection{Non-Abelian tachyon map.} It has been shown, in the case of a single probe brane, that the unstable dynamics in the $NS$5-brane background is more easily understood in terms of the rolling tachyon, since the energy momentum tensors have similar behaviour at late times. We may ask what the implications are when we have multiple coincident branes with a $U(N)$ symmetry on their worldvolumes. This relationship can be explicitly demonstrated by mapping the probe brane action into that of the tachyon action in flat Minkowski space. This is particularly simple in the Abelian case, but we wish to show that it is also possible in our non-Abelian construction. The corresponding non-Abelian action for tachyons in a flat background, to leading order \cite{non_abelian_non_bps}, can be written \begin{equation} S=-\tau_p V_p \int dt N V(T)\sqrt{1-\dot{T}^2}. \end{equation} Because the tachyon field does not take values in the $SU(2)$ algebra, we find that the action is simply $N$ times that of a single non-BPS brane. In fact this corresponds to a configuration of branes each separated by distances larger than the string length, as found in constructions of Assisted Inflation \cite{assisted_inflation}. In this scenario each of the tachyons is assumed to follow a similar trajectory toward the late-time attractor point, namely $T_1 \sim T_2 \ldots \sim T_N \equiv T$. Here $V_p$ is the effective 'volume' of each brane, whilst $V(T)$ is the tachyon potential, which we will assume to be of the form \begin{equation} V(T)=\frac{1}{\cosh(T/T_0)}, \end{equation} where the tachyon is a scalar field with dimensions of length. We remind the reader of the action for the probe brane in the $NS$5-background, which we have already shown to be \begin{equation} S=-\tau_p V_p \int dt \frac{N W(r)}{\sqrt{H}} \sqrt{1-H \dot{r}^2}, \end{equation} where, for simplicity, we have absorbed the potential term into our definition of $W(r)$.
Clearly we can map this action to that of the non-Abelian tachyon by making the identification \begin{equation} d\tilde T = \sqrt{H} dr, \hspace{0.5cm} V(\tilde T) = \frac{W(r)}{\sqrt{H}}. \end{equation} Using the near-horizon approximation we can solve for the geometrical tachyon in terms of the physical radius of the fuzzy sphere. The result, up to arbitrary constants of integration, is that the radius is an exponential of the tachyon, as expected from the Abelian case, which allows us to write the tachyon field as \begin{equation} \tilde T \sim \sqrt{kl_s^2}\ln(r). \end{equation} The solution tells us that as $r \to 0$, $\tilde T \to -\infty$ as expected, whilst as $r \to r_{max}$ we find $\tilde T \to \tilde T_{max}$. Clearly this is not the general behaviour associated with the open string tachyon solution, which we should have expected from the Abelian theory, but we may anticipate that the decay of the fuzzy sphere will also be describable in terms of this rolling tachyon solution. Using our field redefinition, we write the expression for the tachyon potential as \begin{equation} V(\tilde T) = \sqrt{\frac{1}{kl_s^2} \left( 1 + \frac{k^2}{\pi^2 C} \right)} \exp \left( \frac{\tilde T}{\sqrt{kl_s^2}}\right). \end{equation} The form of the potential shows that it has its maximum at $\tilde T=0$, and tends to zero for $\tilde T \to -\infty$. The exact maximum will be defined by the number of source branes, as expected from the Abelian case. However, note that there is a correction term present here due to the fuzzy sphere, which does not occur in the leading order tachyon action, as we know that the tachyonic scalar field is a commuting variable. Therefore although we can capture the general behaviour of the tachyon action, we must go beyond leading order to find closer agreement.
If we construct the energy-momentum tensor associated with this rolling tachyon solution, omitting the delta functions which localise the tensor on the brane world-volumes, we obtain \begin{eqnarray} T_{00} &=& \frac{NV(\tilde T)}{\sqrt{1-(\partial_t \tilde T)^2}} \nonumber \\ T_{ij} &=& -NV(\tilde T) \sqrt{1-(\partial_t \tilde T)^2}, \end{eqnarray} which shows that the pressure tends to zero as the potential tends to zero, i.e.\ when the probe branes approach the fivebranes at late times. This is because the probe brane will emit energy in closed string modes as the fuzzy sphere collapses, and the resulting matter will be a non-Abelian pressureless fluid. One must also bear in mind that because the fuzzy sphere collapses in the near-throat region of the fivebranes, becoming pointlike at distances approaching the string length, the harmonic function approximation may fail, and there will certainly be quantum corrections to take into account. This is due in part to the back reaction of the probes on the source branes and the throat; therefore, in order to determine the physics of this non-Abelian fluid it will be necessary to calculate this back reaction term and incorporate it into the action. In any case, it would be useful to compute the dynamics of this configuration using the exact CFT on the world-volume, which would help shed further light on the validity of the classical solution. \section{Non-BPS branes in fivebrane backgrounds.} We now wish to extend our discussion to include non-BPS branes in this coincident fivebrane background. As is well known, the BPS $Dp$-brane is a soliton solution of the non-BPS $D(p+1)$-brane, where the soliton is associated with the condensation of the open string tachyon on the world-volume. We begin by introducing the natural extension of the Abelian action for $N$ non-BPS branes \cite{non_abelian_non_bps}.
\begin{equation} S=-\int d^{p+1} \zeta \, STr\, V(T) e^{-(\phi-\phi_0)} \sqrt{-\det (\mathcal{P}[ E_{ab}+E_{ai}(Q^{-1}-\delta)^{ij}E_{jb}]+\lambda F_{ab}+T_{ab})} \sqrt{\det Q_j^i}, \end{equation} where $T_{ab}$ is the tensor containing all the open string tachyon terms, and is given by \begin{equation} T_{ab}=\lambda D_a T D_b T -D_a T[x^i,T](Q^{-1})_{ij}[x^j,T]D_b T+ \ldots. \end{equation} Note that we are now taking the tachyon to be a dimensionless scalar field on the world-volumes of the $N$ $Dp$-branes by reinstating the factors of $\alpha'$. We now expand the action to lowest order, and we will drop the gauge field term so that covariant derivatives reduce to ordinary derivatives. The resulting action can be written \begin{equation} S=-\int d^{p+1}\zeta STr \frac{V(T)}{\sqrt{H}}\sqrt{\left(1+H\lambda^2 \partial_0 \phi^i \partial^0 \phi^j \delta_{ij}+\lambda \partial_0 T \partial^0 T\right) \left(1-\frac{1}{2} H^2 [\phi^i, \phi^j][\phi^j,\phi^i]\right)}. \end{equation} We again use the $SU(2)$ ansatz for the radially dependent transverse scalars, which reduces the action to a more tractable form \begin{equation} S=-\int d^{p+1}\zeta N\frac{V(T)}{\sqrt{H}}\sqrt{(1-H\lambda^2C \dot{R}^2 -\lambda \dot{T}^2) (1+4H^2\lambda^2 C R^4)}. \end{equation} The presence of the open string tachyon will generally prohibit exact solutions to the equations of motion for the radion field unless we take various asymptotic limits. This is clear from the form of the action, which shows that the conjugate momenta associated with the radion and tachyon fields will not be conserved. Fortunately there is a way to resolve this problem by using symmetry transformations of the various fields, which allows us to construct a new conserved charge and hence solve the equations of motion in specific regions of field space.
We will start by considering the usual form of the tachyon potential for the superstring, given by \begin{equation} V(T)=\frac{1}{\cosh(T/ \sqrt{2})}, \end{equation} which tends to an exponential for large $T$, in agreement with calculations from string field theory and BCFT \cite{sen}. We insert this into the action, and once again switch to using physical distance. We note that the current form of the potential will make it difficult to find symmetries of the action as it stands; thus it will be more useful to make the following field redefinition \cite{tachyon_transform, non_bps_dynamics}, as we did for the coincident $D$-brane background, \begin{equation} \frac{\tilde T}{\sqrt{2}} = \sinh \left(\frac{T}{\sqrt{2}}\right), \end{equation} and for ease of calculation we re-label $\tilde T$ as $T$, although we will always imply that this is the redefinition of the original tachyon field. As mentioned previously, there may be objections to performing this kind of field redefinition using the non-Abelian action in this gravitational background. Assuming that this will not be too problematic, we can now proceed to analyse the resulting action, \begin{equation} S=-\tau_p \int d^{p+1} \zeta \frac{N}{\sqrt{HF}} \sqrt{\left(1-H\dot{r}^2-\frac{\lambda \dot{T}^2}{F}\right)\left(1+\frac{4H^2r^4}{\lambda^2C}\right)}, \end{equation} where we have introduced the following definitions \begin{equation} F=\left(1+\frac{T^2}{2} \right), \hspace{1cm} H=1+\frac{kl_s^2}{r^2}. \end{equation} We can now try to find the conserved charge associated with transformations of this action, and use that in conjunction with the energy density to solve the equations of motion. Unfortunately we see that this is still non-trivial unless we make further approximations; thus we will look at the theory in the large $T$ and small $r$ limit.
Since the large tachyon field gives rise to a gas of closed strings arising through tachyon condensation, we expect to find that the radial field on the probe branes will describe the late-time dynamics of this gas. The action in this instance reduces to \begin{equation} S=-\tau_p \int d^{p+1} \zeta \frac{\sqrt{2}Nr}{\sqrt{kl_s^2}T}\sqrt{1-\frac{kl_s^2 \dot{r}^2}{r^2}-\frac{2\lambda\dot{T}^2}{T^2}} \sqrt{1+\frac{k^2}{\pi^2C}}. \end{equation} At this juncture we will reintroduce the $W(k,C)$ notation to simplify things, and furthermore we note that the action is invariant under the following transformations \begin{equation} \delta T = \epsilon T, \hspace{1cm} \delta r= \epsilon r, \end{equation} for some parameter $\epsilon$. Note that this is a transformation involving the open strings on the world-volume and also the transverse scalars, and can be thought of as an example of the stringy space-time uncertainty principle \cite{uncertainty_principle} \begin{equation} \Delta t \Delta X \ge \alpha', \end{equation} where distances on the world-sheet are inversely related to distances in the bulk. Since the $NS$5-brane world-volume theory is related to a Little String Theory (LST), it would be interesting to find out the implications of these transformations on the LST side. By variation of the action, we determine that the charge associated with this symmetry is given by \begin{equation} D = \frac{ N r \sqrt{2}}{T\sqrt{kl_s^2}} \left(\frac{kl_s^2 \dot{r}}{r}+\frac{2\lambda \dot{T}}{T}\right) \frac{W(k,C)}{\sqrt{1-\frac{kl_s^2 \dot{r}^2}{r^2}-\frac{2\lambda \dot{T}^2}{T^2}}}, \end{equation} which can be seen to have dimensions of length. We can also derive the canonical energy density associated with the action, using the canonical momenta of the radion and the tachyon fields.
For brevity we will simply state the resultant dimensionless energy density and not the individual momenta \begin{equation} \tilde E= \frac{Nr\sqrt{2}W(k,C)}{T\sqrt{kl_s^2}} \frac{1}{\sqrt{1-\frac{kl_s^2 \dot{r}^2}{r^2}-\frac{2\lambda \dot{T}^2}{T^2}}}. \end{equation} It can be seen that both $\tilde E$ and $D$ are conserved, as expected, and it will be useful to combine them into a single conserved charge \begin{equation}\label{eq:charge} Q=\frac{D}{\tilde E}=\frac{kl_s^2 \dot{r}}{r} + \frac{2 \lambda \dot{T}}{T}, \end{equation} which after some manipulation can be used to define the tachyon field via \begin{equation} T= C_0 \exp\left(\frac{Qt}{2\lambda}\right)r^{-k/4\pi}, \end{equation} where $C_0$ is a constant of integration. Furthermore, from (\ref{eq:charge}) we can also find the time dependence of the tachyon field in this condensing limit, \begin{equation} \dot{T} = T \left( \frac{Q}{4\pi l_s^2}-\frac{k\dot{r}}{4\pi r} \right). \end{equation} As we are probing the large $T$ region of field space, we expect that the dominant contribution to the charge will come from the radial modes. Now that we have written the tachyon field in terms of this conserved charge, we can attempt to solve the radial equations of motion. Note that this would be extremely challenging if we had tried to proceed from the original form of the action without this enhanced symmetry. We will initially consider the case where $Q=0$. This obviously implies that we are setting $D \to 0$, which may seem strange; however, we have used the charge to construct an expression for the tachyon field, and so this is consistent. By setting $Q=0$ we are identifying the condensation of the tachyon field with the inverse of the radion field on the probe branes (up to some power), and so small $r$ will automatically imply large $T$.
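The profile $T = C_0\,e^{Qt/2\lambda}\,r^{-k/4\pi}$ renders $Q = kl_s^2\dot r/r + 2\lambda\dot T/T$ constant identically, for any trajectory $r(t)$, once one uses the standard identification $\lambda = 2\pi l_s^2$ (an assumption of this sketch). A numerical illustration with an arbitrary smooth $r(t)$:

```python
import math

# Check that T = C0 exp(Q t / 2 lam) r^{-k/4 pi}, with lam = 2 pi l_s^2,
# makes Q = k l_s^2 rdot/r + 2 lam Tdot/T constant along ANY trajectory r(t).
# Parameter values and the trajectory are arbitrary illustrative choices.
ls, k, Q, C0 = 1.0, 10.0, 0.4, 1.0
lam = 2 * math.pi * ls**2          # assumed identification lambda = 2 pi l_s^2

def r_of_t(t):
    return 2.0 + math.sin(t)       # arbitrary smooth trajectory

def T_of_t(t):
    return C0 * math.exp(Q * t / (2 * lam)) * r_of_t(t) ** (-k / (4 * math.pi))

def charge(t, h=1e-6):
    rdot = (r_of_t(t + h) - r_of_t(t - h)) / (2 * h)
    Tdot = (T_of_t(t + h) - T_of_t(t - h)) / (2 * h)
    return k * ls**2 * rdot / r_of_t(t) + 2 * lam * Tdot / T_of_t(t)

qs = [charge(t) for t in (0.0, 0.7, 1.9)]
```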
The simplicity of this approach is now clear: we began with two distinct fields and have effectively coupled them via the conserved charge, thus only requiring us to solve for one of the fields. We now substitute the expression for the tachyon into the energy equation, which becomes a function of $r$ alone, \begin{equation}\label{eq:energy_solution} \tilde E=\frac{NW(k,C)r^y\sqrt{2}}{C_0\sqrt{kl_s^2}}\frac{1}{\sqrt{1-\frac{kl_s^2\dot{r}^2}{r^2}\left(1+\frac{k}{4\pi}\right)}}, \end{equation} and for future reference, we will introduce the simplifying notation \begin{equation} B= \frac{N W(k,C) \sqrt{2}}{\tilde E C_0 \sqrt{kl_s^2}}, \hspace{0.5cm} y= 1+\frac{k}{4\pi}, \hspace{0.5cm} x=kl_s^2\left(1+\frac{k}{4\pi}\right). \end{equation} Rearranging the energy equation allows us to solve for $r(t)$, which we find to be, up to constants of integration, \begin{equation} \frac{1}{r} \sim \left(B \cosh\left\lbrack\frac{\pm y(t-t_0)}{\sqrt{x}}\right\rbrack \right)^{1/y}, \end{equation} where $t_0$ parameterises some initial time for the dynamics. This solution describes an expanding fuzzy sphere which reaches its maximum size at $t=t_0$ and thereafter collapses down to zero size. We easily find that the maximum radius will be given by \begin{equation} r_{\rm max} = \left(\frac{\tilde E C_0 \sqrt{kl_s^2}}{NW(k,C)\sqrt{2}}\right)^{1/y}. \end{equation} The physics behind this solution can be understood as follows. As the fuzzy sphere expands, the tension of the non-BPS branes is increased as the tachyon moves closer to the top of its potential (assumed to be located at $T=0$). Thus the expanding solution has a natural braking force that restricts it to a certain maximum size. Conversely, in the collapsing phase the non-BPS branes feel a decreasing tension, which goes to zero as the solution collapses to the origin.
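In the notation above, the energy equation (\ref{eq:energy_solution}) is equivalent to $1 - x\dot r^2/r^2 = B^2 r^{2y}$, and one can confirm numerically that the $\cosh^{1/y}$ profile solves it, with $r_{\rm max} = B^{-1/y}$ reached at $t = t_0$ (parameter values below are arbitrary):

```python
import math

# Check that 1/r = (B cosh[y (t - t0)/sqrt(x)])^{1/y} solves the Q = 0
# energy equation 1 - x rdot^2 / r^2 = B^2 r^{2y},
# and that the maximum radius is r_max = B^{-1/y}, reached at t = t0.
B, y, x, t0 = 0.5, 3.0, 2.0, 0.0   # arbitrary illustrative values

def r_of_t(t):
    return (B * math.cosh(y * (t - t0) / math.sqrt(x))) ** (-1.0 / y)

def residual(t, h=1e-6):
    r = r_of_t(t)
    rdot = (r_of_t(t + h) - r_of_t(t - h)) / (2 * h)
    return 1 - x * rdot**2 / r**2 - B**2 * r ** (2 * y)

res = [residual(t) for t in (-1.0, 0.3, 1.5)]
r_max = r_of_t(t0)
```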
We can also now determine the constant of integration by demanding that $T=T_0$ at $t=t_0$, and since we are in the large $T$ region of field space we will assume that $|T_0| \gg 1$. After some manipulation we find \begin{equation} C_0 = T_0^{y}\left(\frac{NW(k,C) \sqrt{2}}{\tilde E l_s^2 \sqrt{k}}\right)^{k/ 4\pi }, \end{equation} and therefore we can completely determine the behaviour of the tachyon near condensation, in the approximation where $Q=0$. It is natural to now consider the case where $Q \ne 0$; however, this case is not solvable exactly, and we are forced to make approximations. If we insert the full expression for the tachyon field into the energy equation we find \begin{equation} 1-\frac{kl_s^2\dot{r}^2}{r^2}-\frac{l_s^2}{4\pi}\left( \frac{Q^2}{l_s^4}-\frac{2Qk\dot{r}}{l_s^2r}+\frac{k^2\dot{r}^2}{r^2} \right) =B^2 e^{-Qt/\lambda} r^{2y}. \end{equation} Now at late times we see that the RHS of this equation becomes vanishingly small, and so we neglect it in our analysis. This allows us to treat the LHS as a quadratic equation in $\dot{r}/r$, which we solve to find \begin{equation}\label{eq:beta} \frac{\dot{r}}{r}= \frac{Qk \pm 2\sqrt{k\pi (4\pi l_s^2+kl_s^2-Q^2)}}{(4\pi k+k^2)l_s^2} =\beta, \end{equation} and upon integration we can determine the late time behaviour of the fuzzy sphere \begin{equation}\label{eq:gen_rad_soln} r \simeq r_0 \exp(\beta t), \end{equation} with the corresponding late time solution for the tachyon field given by \begin{equation} T \simeq \exp\left(\frac{Qt}{2\lambda}\right)\exp(-k\beta t /4\pi ). \end{equation} Now if we look for a collapsing solution we must take $\beta$ to be negative in (\ref{eq:gen_rad_soln}), bearing in mind that the solution is only valid for large $t$, corresponding to the late time dynamics of the radial modes. In this case the tachyon field will be large even if the charge $Q$ is small, and so our analysis is consistent.
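Both roots in (\ref{eq:beta}) can be checked against the late-time constraint obtained by dropping the exponentially small right-hand side; the following sketch does so for arbitrary sample values of $k$, $l_s^2$ and $Q$ satisfying the reality bound:

```python
import math

# Sketch: the two roots beta of Eq. (eq:beta) should solve
# 1 - k l_s^2 beta^2
#   - (l_s^2/(4 pi)) (Q^2/l_s^4 - 2 Q k beta / l_s^2 + k^2 beta^2) = 0.
# Sample values, with Q^2 < (4 pi + k) l_s^2 so that the roots are real.
k, ls2, Q = 10.0, 1.0, 1.0

disc = k * math.pi * (4.0 * math.pi * ls2 + k * ls2 - Q * Q)
betas = [(Q * k + s * 2.0 * math.sqrt(disc)) / ((4.0 * math.pi * k + k * k) * ls2)
         for s in (+1.0, -1.0)]

for beta in betas:
    residual = (1.0 - k * ls2 * beta ** 2
                - (ls2 / (4.0 * math.pi))
                * (Q ** 2 / ls2 ** 2 - 2.0 * Q * k * beta / ls2 + k ** 2 * beta ** 2))
    assert abs(residual) < 1e-12

# one expanding and one collapsing branch for these parameters
assert betas[0] > 0.0 > betas[1]
```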
Furthermore, having non-zero $Q$ appears to imply that there will not be a bounce solution; rather, the probe branes will eventually reach the source branes and the fuzzy sphere will collapse to zero size. This can be seen from (\ref{eq:beta}), which shows that for a real solution we must ensure that $(4 \pi +k)l_s^2 \ge Q^2$. In the large $k$ limit this is approximated by the constraint $kl_s^2 \ge Q^2$. If this bound is saturated then we find \begin{equation} \beta \to \frac{Q}{(4\pi +k)l_s^2}, \end{equation} whose sign is that of $Q$. If we accept the constraint, then for $\beta$ to be negative we require \begin{equation} 4\pi (4\pi l_s^2 + kl_s^2-Q^2) > Q^2k , \end{equation} which becomes \begin{equation} 4\pi l_s^2 > Q^2 \end{equation} when $k \gg 1$. Clearly the only way to satisfy this constraint is to assume that $Q$ is vanishingly small. This is inconsistent with (\ref{eq:charge}) for both expanding and contracting solutions. One way of interpreting the conserved charge physically is that it parameterises the deviation from the single field duality we found when we identified the tachyon field with the inverse of the radial mode. \section{Higher (even) dimensional fuzzy spheres.} So far our analysis has dealt with collapsing fuzzy $2$-spheres in curved backgrounds, so it is natural to extend this to higher dimensional fuzzy spheres. We will briefly look at the fuzzy $4$-sphere before commenting on how our analysis generalises to the fuzzy $\mathbb{S}^{2k}$, where $k$ is an integer. In the following discussion we will concern ourselves with $D$-brane backgrounds for simplicity, since to consider the $NS$5-brane background we would have to use T-duality. We begin by constructing the fuzzy $\mathbb{S}^4$, for which we need five transverse scalar fields satisfying the following ansatz \begin{equation} \phi^i = \pm R G^i, \hspace{1cm} i=1 \ldots 5.
\end{equation} This will obviously imply that we can only look at $p\le 4$ backgrounds. The $G^i$ matrices above arise through the totally symmetric $n$-fold tensor product of the gamma matrices of $SO(5)$, and have dimension \begin{equation} N =\frac{(n+1)(n+2)(n+3)}{6}. \end{equation} For a detailed description of these constructions we refer the interested reader to \cite{myers2, ramgoolam2, costis} and the references therein. In terms of the physical radius we find a similar relationship to the case of the $SU(2)$ algebra, where we write \begin{equation} r = \lambda \sqrt{C} R. \end{equation} Note that in this instance $R$ must be positive definite, and the Casimir is given by products of the $G^i$ as usual, $G^i G^i = C\mathbf{1}_{N \times N} = n(n+4) \mathbf{1}_{N \times N}$. We can now use this ansatz in our non-Abelian DBI effective action, which we again treat as a lowest order expansion. The resultant action may be written \begin{equation} S = -\tau_{p'} \int d^{p'+1} \zeta N H^{(p-p'-4)/4} \sqrt{1-H\lambda^2 C \dot{R}^2}\left(1+4H\lambda^2CR^4 \right) - \tau_{p'} \delta_{p p'} \int d^{p'+1} \zeta \frac{qN}{H}, \end{equation} where the Chern-Simons term only couples to the action for $p=p'$ as usual. From this action we can derive the usual canonical momenta and energy, which yield the following static potential in terms of physical distances \begin{equation} V_{eff} = \tau_{p'} N H^{(p-p'-4)/4} \left(1+\frac{4Hr^4}{\lambda^2 C} \right). \end{equation} Note that this has exactly the same basic structure as the fuzzy $\mathbb{S}^2$ potential, except that now $p \le 4$ because of our ansatz. Before we comment on this solution, we should discuss the extension to the fuzzy $\mathbb{S}^6$. We again use the $G^i$ matrices, which are now representations of $SO(7)$ as $i$ runs over seven transverse dimensions.
Again the $G$'s arise from the action of gamma matrices on the traceless, symmetric $n$-fold tensor product of the spinor, and we have the following relationship between the dimension of the matrices and the number of branes \begin{equation} N = \frac{(n+1)(n+2)(n+3)^2(n+4)(n+5)}{360}. \end{equation} The relationship between the physical radius and the transverse scalar ansatz is the same as before, except that the Casimir has a different definition, $G^i G^i = C\mathbf{1}_{N \times N} = n(n+6) \mathbf{1}_{N \times N}$. This suggests that we can make the following generalisation. For the fuzzy $\mathbb{S}^{2k}$ sphere in ten dimensions, where $k \le 4$, we require $2k+1$ transverse scalar fields which can be parameterised by the action of $SO(2k+1)$ gamma matrices on tensor products of the spinor. If we assume that this is correct, then we propose the following general form of the action for the fuzzy $\mathbb{S}^{2k}$ in a curved $D$-brane background \cite{costis} \begin{equation} S = -\tau_{p'} \int d^{p'+1} \zeta N H^{(p-p'-4)/4} \sqrt{(1-H \lambda^2 C_k \dot{R}^2)(1+4H\lambda^2 C_k R^4)^k} - \tau_{p'} \delta_{p p'} \int d^{p'+1} \zeta \frac{qN}{H}, \end{equation} where we have written $C_k$ to indicate that the Casimir refers to the gauge group $SO(2k+1)$. The factor of $k$ imposes restrictions upon the dimensionality of the background branes; in fact, the maximum value of $p$ is $p_{max}=8-2k$. Thus we see that for the fuzzy $\mathbb{S}^8$ we can only consider $D0$-branes probing the $D0$-brane background. Using the general form of the action we define the effective potential, in physical coordinates, to be \begin{equation} V_{eff} = N \tau_{p'} \left\lbrace H^{(p-p'-4)/4}\left(1+\frac{4Hr^4}{\lambda^2 C_k} \right)^{k/2} -q \delta_{p p'} \right\rbrace. \end{equation} In general we see that the bosonic part of the potential will always tend to zero in the near horizon region, implying that the fuzzy spheres will collapse toward zero size.
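As a small arithmetic sketch, the two dimension formulas and Casimirs above can be tabulated; for $n=1$ the $G^i$ reduce to the gamma matrices themselves, so $N$ should equal the spinor dimensions of $SO(5)$ and $SO(7)$, namely $4$ and $8$:

```python
# Sketch tabulating the dimension N and quadratic Casimir C quoted above
# for the fuzzy S^4 (SO(5)) and fuzzy S^6 (SO(7)) constructions.
def N4(n):
    return (n + 1) * (n + 2) * (n + 3) // 6

def C4(n):
    return n * (n + 4)

def N6(n):
    return (n + 1) * (n + 2) * (n + 3) ** 2 * (n + 4) * (n + 5) // 360

def C6(n):
    return n * (n + 6)

# n = 1: spinor dimensions and Casimirs of the gamma matrices
assert N4(1) == 4 and N6(1) == 8
assert C4(1) == 5 and C6(1) == 7

# the formulas always give integer matrix dimensions
for n in range(1, 8):
    assert (n + 1) * (n + 2) * (n + 3) % 6 == 0
    assert (n + 1) * (n + 2) * (n + 3) ** 2 * (n + 4) * (n + 5) % 360 == 0
```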
Thus the only case of interest relates to $p=p'$, when there is the additional term coming from the bulk RR charge of the background branes. In the small radius limit we find that the potential reduces to \begin{equation} V_{eff} = \frac{N \tau_{p'}}{H} \left\lbrace \left(1+ \frac{4k_p r^{p-3}}{\lambda^2 C_k} \right)^{k/2} -q \right\rbrace. \end{equation} We can differentiate this potential to see if there are any solutions corresponding to stable minima at which the fuzzy sphere may stabilise; however, we find that there are no real solutions, again implying that all fuzzy spheres are unstable in $D$-brane backgrounds, with the exception of the $p=6, p'=0$ case which we discussed in a previous section. The generalised form of the equation of motion can be written as \begin{equation} \dot{r}^2 = \frac{1}{H} \left\lbrace 1-\frac{N^2 \tau_{p'}^2 H^{(p-p'-4)/2}}{E^2} \left(1+\frac{4Hr^4}{\lambda^2 C_k} \right)^{k/2} \right\rbrace, \end{equation} where we are using a generalised expression for the energy. If we again assume that the velocity and the radius can be treated as complex variables with the equation of motion as a constraint, we can calculate the genus of the underlying Riemann surface. Interestingly, the results are similar to those obtained in section 3, with the number of branch points being the same, though the genus is dependent upon the dimensionality of the branes and on the non-Abelian group structure. This may change once corrections to the symmetrized trace are taken into account. \section{Discussion.} In this note we have extended the work on time dependent solutions to include multiple probe branes via the non-Abelian effective DBI action. In particular we have focused on the dynamics of BPS and non-BPS branes in the curved backgrounds of $Dp$-branes and $NS$5-branes.
This preliminary analysis has not dealt with the difficult problem of $Dp'$-branes in the $Dp$-background, nor the fundamental string background, which is exactly soluble in the Abelian case. It would certainly be useful to continue this line of enquiry in the future. It would also be useful to consider ring backgrounds, both for the $Dp$-brane and $NS$5-brane backgrounds, as a natural extension of the work in \cite{israel, asano, huang, time_dependence}, which could shed further light on the geometrical nature of the tachyon. The $D6$ ring could also be an interesting case to study, as we could imagine a more general construction of a toroidal QHS. We have seen that the fuzzy sphere is not generally a stable object when placed in non-trivial backgrounds (the exception is the $D6-D0$ system). Of course, this has only been investigated to leading order, and there are many ways in which to stabilise the spheres using fluxes \cite{myers}. Furthermore, we have only treated the configuration as a probe of the geometry. In more realistic scenarios there will be considerable back reaction upon the background which needs to be taken into account, as well as quantum corrections when the radius becomes sufficiently small. We also know that the classical motion of $D$-branes in the $NS$5-brane background \cite{time_dependence} has a potential divergence in the closed string energy emission. We have not calculated this term here, but it would be interesting to verify whether the same thing occurs in the non-Abelian picture, and also to determine whether this imposes any constraints upon the dimensionality of the probe branes. Additionally we have looked at the underlying geometry of the holomorphic differentials in curved backgrounds, which suggests that they correspond to surfaces of high genus which are clustered near the origin and thus unresolvable when we are far away, mimicking the results obtained in flat space \cite{costis}.
The genus of the particular surface is dependent upon the dimensionality of the branes in the bulk spacetime. Smaller values of $p$ correspond to surfaces of higher genus, while larger values of $p-p'$ correspond to surfaces of smaller genus. In the Abelian theory there appears to be a relationship between the existence of supersymmetry and the Riemann sphere, which is broken when we lift to the non-Abelian theory. The underlying reasons for this are unclear, as the $p=6, p'=2$ solution now describes a torus as opposed to the sphere. Furthermore we know that the automorphisms of the curves in flat space are destroyed when we move into the near horizon geometry, and the large-small dualities between collapsing and static solutions must be modified accordingly. We would like to know whether this duality holds (albeit modified) in the throat geometry, and what the implications are for the automorphisms. By careful choice of ansatz for the non-commuting coordinates, we have been able to study a rotating fuzzy sphere. In the first instance we were able to find an expansion of the action allowing for small angular momentum densities, but only for $p \le 5$. This showed that there were no bound states permitted for the fuzzy sphere. A second ansatz for general angular momentum imposed stricter restrictions upon the dimensions of the branes, limiting us to $p \le 3$. Again, we saw no possibility for bound states, and thus the fuzzy sphere with angular momentum is still unstable. In any case, we would not anticipate the configuration to be stable, as the D-branes can emit their Ramond-Ramond charge via synchrotron emission \cite{radiation}. It would be useful to find an ansatz allowing for the inclusion of $p=6$ backgrounds, as there is the possibility of a bound state in that case; however, this may require uplifting to M-theory.
The non-BPS system in both $Dp$-brane and $NS$5-brane background leads to richer dynamics than in the BPS case, thanks to the existence of world-volume symmetries. In both cases we looked at the classical solutions when the tachyon field affects the strength of the gravitational attraction of the branes to the background. From the Abelian field theory description of unstable branes, we know that as $T \to \pm \infty$ the open string modes are confined leading to the destruction of the brane and the appearance of a stringy fluid \cite{tachyon_stuff}. The dual picture should give us some insight into what happens in the non-Abelian theory, and whether the Open/Closed duality conjecture remains valid. We used the symmetry of the fields to explicitly examine the $D3$-backgrounds, but it would be useful to extend the work of Kluson \cite{non_bps_dynamics} to the full symmetry transformation for general $Dp$-brane backgrounds. This additional symmetry has profound effects on the dynamics of the fuzzy sphere in the near horizon geometry, however we do not know if the symmetry even exists in the quantum theory. We have briefly examined the dual version of the QHS and found agreement \cite{susskind, hyakutake} with aspects of the Abelian theory, namely that the stabilisation radius of the system is almost identical. The origin of the string contribution is unclear in our non-Abelian construction and so we have only constructed the dual picture to the dielectric effect, namely the Myers effect. Furthermore, we have shown that the equations of motion in the two pictures are identical in a curved background when we take the large $N$ limit of the symmetrized trace. It would be useful to extend the work initiated here to the study of the non-Abelian QHS model and compare the results to those obtained using matrix theory. Finally we have investigated higher (even) dimensional fuzzy spheres in the $Dp$-brane background and found that only collapsing solutions are admissible. 
The case of odd dimensional fuzzy spheres is non-trivial and certainly merits future investigation. In addition, it would be interesting to try and construct the dual pictures of these collapsing solutions to see if the equations of motion are identical in the Abelian theory. This is complicated by many factors, for example the classical limit of the fuzzy $4$-sphere is six dimensional because the algebra belongs to the coset $SO(5)/U(2)$. We must project out the $U(2)$ states in order to construct the dual picture. We hope that this note has provided some small insight into the dynamics of fuzzy spheres in selected curved backgrounds, and we hope to return to some of these problems in the future. \begin{center} \textbf{Acknowledgements.} \end{center} Thanks go to C. Papageorgakis, S. McNamara, J. Bedford and S. Ramgoolam for their useful insights and comments. JW is supported by a QMUL studentship, and thanks the theoretical physics group at Stockholm University for its kind hospitality. This work was in part supported by the EC Marie Curie Research Training Network MRTN-CT-2004-512194.
\section{Introduction} Spontaneous symmetry breaking is a central concept in condensed matter physics. Two types of collective mode emerge in association with spontaneous breaking of continuous symmetries. One is the Nambu-Goldstone (NG) mode \cite{nambu-60,goldstone-61} and the other is the Higgs mode \cite{higgs-64,littlewood-81}. The NG mode is a gapless excitation that arises from phase fluctuation of the order parameter. Nambu-Goldstone modes dominate low-energy properties of the system and have been studied in various condensed matter systems. On the other hand, the Higgs mode is a gapped mode that involves amplitude fluctuation of the order parameter. Since it is difficult to excite and probe Higgs modes selectively, it is only recently that experimental progress has enabled systematic investigation of Higgs modes in condensed matter systems \cite{pekker-15}.\footnote[1]{For recent progress in the study of Higgs modes in condensed matter systems, see, for example, Ref.~\cite{pekker-15}.} In particular, Bose superfluids in optical lattices offer an ideal playground for investigating various aspects of Higgs modes due to the high controllability of the system \cite{bissbort-11,endres-12}. \par The tunnel effect is a purely quantum-mechanical phenomenon and has attracted much interest. Collective modes exhibit interesting tunneling properties that are very different from those of single particles. For example, NG modes in Bose-Einstein condensates (BECs) have been predicted to perfectly transmit a potential barrier in the low-energy limit, which is referred to as anomalous tunneling \cite{kagan-03,danshita-06,kato-08,tsuchiya-09,kato-12}. It has been found that NG modes in Bose superfluids in optical lattices cause Fano resonance mediated by Higgs bound states when they tunnel through potential barriers \cite{nakayama-15}. However, little is known about the tunneling properties of Higgs modes.
\par In the present paper, extending our recent work \cite{nakayama-15}, we study tunneling of Higgs modes in Bose superfluids in optical lattices. Solving the time-dependent Ginzburg-Landau (TDGL) equation that describes the superfluid dynamics in the vicinity of the phase boundary to the Mott insulating state, we show that Higgs modes perfectly transmit a potential barrier introduced by local modulation of the hopping amplitude when the barrier potential is weak. The perfect transmission does not occur for a strong potential barrier, when the odd bound state of Higgs modes exists. We investigate the origin of the perfect transmission and find that it is mediated by the antibound states of Higgs modes. \par This paper is organized as follows. In Sec.~\ref{sec:Model} we introduce the Bose-Hubbard (BH) model and the TDGL equation including the effects of external potentials. In Sec.~\ref{sec:Higgs_tunneling} we study the tunneling problem of Higgs modes by solving the TDGL equation in the presence of a $\delta$-function potential and a rectangular potential. We show that perfect transmission of Higgs modes occurs and discuss its origin in relation to the antibound states. In Sec.~\ref{sec:Conclusion} the results are summarized. \section{Model}\label{sec:Model} We consider bosons trapped in a cubic optical lattice. They are well described by the tight-binding BH model \cite{fisher-89,jaksch-98} \begin{eqnarray} \mathcal H= -\sum_{{\bm i},{\bm j}}J_{{\bm i},{\bm j}}b_{\bm i}^\dagger b_{\bm j} -\sum_{\bm i} \mu_{\bm i} b_{\bm i}^\dagger b_{\bm i} +\frac{U}{2}\sum_{\bm i} b_{\bm i}^\dagger b_{\bm i}^\dagger b_{\bm i} b_{\bm i}. \label{eq:BH} \end{eqnarray} The vector ${\bm i}\equiv \sum_{\alpha=1}^{d}i_{\alpha}{\bm e}_{\alpha}$ denotes the lattice site, where $i_{\alpha}$ is an integer, $d$ is the spatial dimension, and ${\bm e}_{\alpha}$ is a unit vector in the direction $\alpha$.
In addition, $b_{\bm i}^\dagger$ ($b_{\bm i}$) is a creation (annihilation) operator of bosons at site ${\bm i}$, and $U>0$ is the on-site repulsive interaction. The chemical potential $\mu_{\bm i}\equiv \mu_0 - V_{\bm i}$ includes the homogeneous contribution $\mu_0$ and the external potential $V_{\bm i}$. Further, $J_{{\bm i},{\bm j}}=\sum_{\alpha}(J^{(\alpha)}_{\bm j} \delta_{{\bm i},{\bm j}+{\bm e}_{\alpha}}+ J^{(\alpha)}_{{\bm j}-{\bm e}_{\alpha}} \delta_{{\bm i},{\bm j}-{\bm e}_{\alpha}})$ is the hopping matrix element between adjacent sites, where $J^{(\alpha)}_{\bm j}$ denotes the hopping amplitude between sites ${\bm j}$ and ${\bm j}+{\bm e}_{\alpha}$. We neglect the harmonic trapping potential for simplicity. We set $\hbar=1$ and assume zero temperature throughout the paper. \par In previous work \cite{nakayama-15} we proposed to study tunneling effects of the NG mode in the superfluid phase by introducing the local shift of the chemical potential $V_{\bm i}$ and hopping amplitude $J_{\bm i}^{(\alpha)}$ independently. The former can be introduced by imposing an optical dipole potential, while the latter can be introduced by imposing an additional lattice potential with a Gaussian profile and the same lattice spacing as that of the overall potential (see Fig.~3 in Ref.~\cite{nakayama-15}). A potential barrier that modulates the hopping amplitude locally can also be created using a digital micromirror device \cite{islam-15}. In this paper, we focus on the latter for simplicity and set $V_{\bm i}=0$. We further assume the inhomogeneity of the hopping only in the $x$ direction: $J_{\bm i}^{(\alpha)}= J+J'_{i_1}\delta_{\alpha,1}$. The system is assumed to be homogeneous in the $y$ and $z$ directions.
\par The TDGL equation that governs the dynamics of the superfluid order parameter $\psi(\bm x,t)$ can be derived in the vicinity of the superfluid -- Mott-insulator (SF-MI) transition point by taking the low-energy and continuum limit~\cite{fisher-89,sachdev-11}. The TDGL equation including the effects of the inhomogeneous hopping reads \cite{nakayama-15} \begin{eqnarray} iK_0\frac{\partial \psi}{\partial t} \!-\! W_0\frac{\partial^2 \psi}{\partial t^2} \!=\! \left(-\frac{\nabla^2}{2m^{\ast}}+r_0+v_r+u_0|\psi|^2\right)\psi, \label{eq:tdgl_b} \end{eqnarray} where $K_0$, $W_0$, $r_0 (<0)$, $m^{\ast}$, and $u_0$ are functions of the original parameters in the BH model $(J, \mu_0, U)$ (their expressions are given in Appendix A in Ref.~\cite{nakayama-15}). Here, $v_{r}(x)\equiv - 2J'(x)$ represents the potential due to the inhomogeneous hopping in the continuum limit $J'_{i}\to J'(x)$. \par We assume a commensurate filling, which results in the approximate particle-hole symmetry in the vicinity of the SF-MI transition point \cite{altman-02,huber-07,huber-08}. Since the TDGL equation should be invariant under the charge-conjugation transformation $\psi \leftrightarrow \psi^{\ast}$ in the presence of the particle-hole symmetry, the first-order time-derivative term must vanish, $K_0=0$. Thus, Eq.~(\ref{eq:tdgl_b}) reduces to the nonlinear Klein-Gordon equation of relativistic field theory~\cite{higgs-64}, which exhibits an emergent Lorentz invariance.
\par We employ the TDGL equation in the dimensionless form, \begin{equation} -\frac{\partial^2\tilde{\psi}}{\partial \tilde{t}^2}=\left(-\frac{\tilde{\nabla}^2}{2}-1+|\tilde{\psi}|^2+\tilde{v}_r\right)\tilde{\psi}, \end{equation} where the variables are normalized as \begin{eqnarray} \begin{split} \tilde{\psi}=\psi/(\left|r_0\right|/u_0)^{1/2}, \quad \tilde{t}=t(\left|r_0\right|/W_0)^{1/2},\\ \tilde{\bm{x}}=\bm{x}/\xi,\quad \tilde{v}_r=v_r/\left|r_0\right|, \end{split} \label{eq:Dless} \end{eqnarray} where $\xi \equiv (m^{\ast}\left|r_0\right|)^{-1/2}$ denotes the healing length. Hereafter, we omit the tilde. \par We consider fluctuations of the order parameter $\psi({\bm x},t)$ around its static solution $\psi_0({\bm x})$, \begin{eqnarray} \psi({\bm x},t)=\psi_0({\bm x})+\mathcal{U}({\bm x})e^{-i\omega t}+\mathcal{V}({\bm x})^*e^{i\omega t}. \end{eqnarray} Here $S(\bm{x})\equiv\mathcal{U}(\bm{x})-\mathcal{V}(\bm{x})\propto \delta \theta(\bm{x})$ and $T(\bm{x})\equiv \mathcal{U}(\bm{x})+\mathcal{V}(\bm{x})\propto \delta n(\bm{x})$ represent phase and amplitude fluctuations of the order parameter, respectively. In addition, $\psi_0({\bm x})$ satisfies the static Gross-Pitaevskii (GP) equation~\cite{pitaevskii-61}: \begin{eqnarray} \left(-\frac{\nabla^2}{2}-1+|\psi_0({\bm x})|^2+v_r(x)\right)\psi_0(\bm{x})=0~. \label{eq:static_3d} \end{eqnarray} The equations for phase and amplitude fluctuations read, respectively, \begin{eqnarray} \begin{split} \left(-\frac{\nabla^2}{2}-1+|\psi_0({\bm x})|^2+v_r(x)\right)S(\bm{x}) =\omega^2S(\bm{x})~, \label{eq:S(x)_3d} \end{split} \\ \begin{split} \left(-\frac{\nabla^2}{2}-1+3|\psi_0({\bm x})|^2+v_r(x)\right)T(\bm{x}) =\omega^2T(\bm{x})~. \label{eq:T(x)_3d} \end{split} \end{eqnarray} Equations~(\ref{eq:S(x)_3d}) and (\ref{eq:T(x)_3d}) demonstrate that phase and amplitude fluctuations are uncoupled due to the particle-hole symmetry \cite{tsuchiya-18}.
\par Without the potential barrier ($v_r=0$), assuming plane wave solutions $(S(\bm{x}),T(\bm{x}))=(S_{\bm{k}},T_{\bm{k}})e^{i\bm{k}\cdot\bm{x}}$, we obtain the dispersion relations for the NG and Higgs modes as, respectively, \begin{eqnarray} \begin{split} &\quad \omega^2=\frac{k^2}{2}, \label{NGdisp}\\ &\quad \omega^2=\frac{k^2}{2}+\Delta^2. \label{Higgsdisp} \end{split} \end{eqnarray} The NG mode indeed has a gapless dispersion, while the Higgs mode has the energy gap $\Delta=\sqrt{2}$. Since phase and amplitude are uncoupled, the NG and Higgs modes involve pure phase and amplitude oscillations, respectively. \section{Tunneling problem of Higgs modes}\label{sec:Higgs_tunneling} We study the tunneling problem of Higgs modes through a potential barrier $v_r(x)$. Since the static order parameter $\psi_0(x)$ is assumed to be homogeneous in the $y$ and $z$ directions, the GP equation reduces to \begin{eqnarray} \left[-\frac{1}{2}\frac{d^2}{dx^2}-1+|\psi_0(x)|^2+v_r(x)\right]\psi_0(x)=0~. \label{eq:static} \end{eqnarray} We assume the plane wave forms in the $y$ and $z$ directions as \begin{eqnarray} S({\bm x})=S_{\rm 1D}(x)e^{i\bm{k}_{\parallel}\cdot \bm{x}_{\parallel}}, \\ T({\bm x})=T_{\rm 1D}(x)e^{i\bm{k}_{\parallel}\cdot \bm{x}_{\parallel}}, \end{eqnarray} where $\bm{k}_{\parallel}=(k_y,k_z)$ and $\bm{x}_{\parallel}=(y,z)$. In the following analysis, we assume that the NG and Higgs modes propagate only in the $x$ direction, i.e., $\bm{k}_{\parallel}=\bm{0}$. Thus, Eqs.~(\ref{eq:S(x)_3d}) and (\ref{eq:T(x)_3d}) reduce to \begin{eqnarray} \left[-\frac{1}{2}\frac{d^2}{dx^2}-1+|\psi_0(x)|^2+v_r(x)\right]S_{\rm 1D}(x)=\omega^2S_{\rm 1D}(x)~, \label{eq:S(x)}\\ \left[-\frac{1}{2}\frac{d^2}{dx^2}-1+3|\psi_0(x)|^2+v_r(x)\right]T_{\rm 1D}(x)=\omega^2T_{\rm 1D}(x)~. \label{eq:T(x)} \end{eqnarray} We henceforth denote $S_{\rm 1D}(x)$ and $T_{\rm 1D}(x)$ by $S(x)$ and $T(x)$ for brevity.
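Far from the barrier, where $\psi_0\to1$ and $v_r\to0$, the scattering states of Eqs.~(\ref{eq:S(x)}) and (\ref{eq:T(x)}) approach the bulk dispersions (\ref{NGdisp}) and (\ref{Higgsdisp}). As a quick numerical sketch (grid size and box length are arbitrary choices), diagonalizing the two 1D operators on a periodic grid reproduces the gapless NG branch and the Higgs gap $\Delta^2=2$:

```python
import numpy as np

# Sketch: for the uniform condensate psi_0 = 1, v_r = 0, the 1D phase and
# amplitude operators give omega^2 = k^2/2 (NG) and k^2/2 + 2 (Higgs).
M, L = 256, 40.0           # grid points, box length in units of xi (arbitrary)
dx = L / M

# second derivative with periodic boundary conditions (central differences)
lap = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
       + np.diag(np.ones(M - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx ** 2

psi0 = np.ones(M)
H_S = -0.5 * lap + np.diag(-1.0 + psi0 ** 2)        # phase (NG) operator
H_T = -0.5 * lap + np.diag(-1.0 + 3.0 * psi0 ** 2)  # amplitude (Higgs) operator

wS = np.sort(np.linalg.eigvalsh(H_S))   # eigenvalues are omega^2
wT = np.sort(np.linalg.eigvalsh(H_T))

k1 = 2.0 * np.pi / L                    # smallest nonzero wave number
assert abs(wS[0]) < 1e-8                # gapless NG mode
assert abs(wT[0] - 2.0) < 1e-8          # Higgs gap Delta^2 = 2
assert abs(wS[1] - 0.5 * k1 ** 2) < 1e-3
assert abs(wT[1] - (0.5 * k1 ** 2 + 2.0)) < 1e-3
```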
\subsection{$\delta$-function potential barrier} We first study tunneling of Higgs modes across a $\delta$-function potential barrier $v_r(x)=V_r\delta(x)$ ($V_r>0$). Note that any potential barrier that varies spatially on the order of the lattice spacing can be approximated as a $\delta$-function potential in the vicinity of the phase boundary with the MI phase, where the healing length $\xi$ becomes much larger than the lattice spacing. The analytic solution of Eq.~(\ref{eq:static}) under the $\delta$-function potential \cite{kovrizhin-01} is given by \begin{equation} \psi_0(x)=\tanh(|x|+x_0), \label{eq:static_kink} \end{equation} where $x_0$ is determined by the boundary conditions at $x=0$, \begin{eqnarray} &\displaystyle \psi_0(-0)=\psi_0(+0), \label{bdcondp1}\\ &\displaystyle \left.\frac{d \psi_0}{d x}\right|_{+0}-\left.\frac{d \psi_0}{d x}\right|_{-0}=2V_r\psi_0(0). \label{bdcondp2} \end{eqnarray} We thus obtain \begin{equation} \tanh(x_0)=-\frac{V_r}{2}+\sqrt{\frac{V_r^2}{4}+1}\equiv \eta. \end{equation} Note that $\eta=\psi_0(0)$, which is the amplitude of the order parameter under the barrier, satisfies $0\leq \eta\leq 1$. Here $\eta$ decreases from $\eta(V_r=0)=1$ with increasing $V_r$ and has the asymptotic form $\eta\sim 1/V_r$ for $V_r\gg 1$. \par We consider tunneling of Higgs modes through the $\delta$-function potential barrier. We assume that Higgs modes with energy $E\geq\Delta$ and wave vector $k=\sqrt{2}\sqrt{E^2-\Delta^2}$ are injected from $x\to-\infty$. The solution of Eq.~(\ref{eq:T(x)}) can be written as a linear combination of the plane waves on a static kink condensate \cite{nakayama-15,lamb}: \begin{widetext} \begin{eqnarray} T(x)=\left\{ \begin{array}{ll} \dfrac{3\psi_0^2+3ik\psi_0-k^2-1}{2+3ik-k^2}e^{ikx}+r_{\rm{h}}\dfrac{3\psi_0^2-3ik\psi_0-k^2-1}{2-3ik-k^2}e^{-ikx} & (x<0) \\\\ t_{\rm{h}} \dfrac{3\psi_0^2-3ik\psi_0-k^2-1}{2-3ik-k^2}e^{ikx} & (x>0) \end{array} \right.
.\label{eq:higgs_tunnel_WF} \end{eqnarray} \end{widetext} Here, $t_{\rm h}$ and $r_{\rm h}$ denote the transmission and reflection amplitudes, respectively. The asymptotic form of Eq.~(\ref{eq:higgs_tunnel_WF}) far away from the potential barrier is given by \begin{eqnarray} T(x)\rightarrow \left\{ \begin{array}{l} e^{ikx} + r_{\rm h}e^{-ikx} \quad(x\to-\infty) \\ \\ t_{\rm h} e^{ikx} \quad(x\to\infty) \end{array} \right.. \label{eq:asymptoticT} \end{eqnarray} The reflection and transmission probabilities of Higgs modes are given by $\mathcal{R}\equiv |r_{\rm h}|^2$ and $\mathcal{T}\equiv |t_{\rm h}|^2$, respectively. They satisfy the conservation law $\mathcal{R}+\mathcal{T}=1$. \par The coefficients $r_{\rm h}$ and $t_{\rm h}$ are determined so as to satisfy the boundary conditions: \begin{eqnarray} &T(-0)=T(+0)\label{eq:higgs_connection1},\\ &\displaystyle \left.\frac{d T}{d x}\right|_{+0}-\left.\frac{d T}{d x}\right|_{-0}=2V_rT(0).\label{eq:higgs_connection2} \end{eqnarray} From the above conditions, the transmission amplitude of Higgs modes can be calculated as \begin{eqnarray} t_{\rm{h}}&= &e^{i\delta}\frac{ik(k^2+1)(k^2+4)}{\left(c_1+V_rc_2\right)c_2}, \label{eq:t_amp} \end{eqnarray} where \begin{eqnarray} e^{i\delta}&=&\left(2-3ik-k^2\right)/\left(2+3ik-k^2\right),\\ c_1&=&ik^3-3\eta k^2-ik(6\eta^2-4)+6\eta(\eta^2-1),\label{eq:c1}\\ c_2&=&-k^2-3i\eta k+3\eta^2-1.\label{eq:c2} \end{eqnarray} We thus obtain the transmission probability $\mathcal{T}_{\rm h}(k)=|t_{\rm h}|^2$ as \begin{eqnarray} \mathcal{T}_{\rm{h}}^{-1}= 1+ V_r^2 \frac{(k^2+1-3\eta^2)^2(k^2+1+3\eta^2)^2}{k^2(1+k^2)^2(4+k^2)^2}. \label{eq:higgs_transmission} \end{eqnarray} Equation~(\ref{eq:higgs_transmission}) shows that a perfect transmission ($\mathcal{T}_{\rm h}=1$) occurs at $k^{\rm c}=\sqrt{3\eta^2-1}$, provided the strength of the potential barrier does not exceed the critical value, $V_r\leq 2/\sqrt{3}\equiv V_r^{\rm c}$ ($\eta\geq1/\sqrt{3}$).
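These statements can be checked directly from Eq.~(\ref{eq:higgs_transmission}); the following numerical sketch (with arbitrarily chosen barrier strengths on either side of $V_r^{\rm c}$) confirms the perfect transmission at $k^{\rm c}$ below the critical strength and its absence above it:

```python
import math

# Sketch check of the transmission probability T_h(k):
# below V_r^c = 2/sqrt(3), T_h = 1 exactly at k_c = sqrt(3 eta^2 - 1);
# above V_r^c, the factor (k^2 + 1 - 3 eta^2) cannot vanish for real k,
# so T_h < 1 everywhere.
def eta(Vr):
    # order-parameter amplitude under the barrier: eta^2 + Vr*eta - 1 = 0
    return -0.5 * Vr + math.sqrt(0.25 * Vr * Vr + 1.0)

def T_h(k, Vr):
    e = eta(Vr)
    num = ((k * k + 1.0 - 3.0 * e * e) ** 2) * ((k * k + 1.0 + 3.0 * e * e) ** 2)
    den = k * k * (1.0 + k * k) ** 2 * (4.0 + k * k) ** 2
    return 1.0 / (1.0 + Vr * Vr * num / den)

Vr = 1.0                                   # below V_r^c = 2/sqrt(3)
kc = math.sqrt(3.0 * eta(Vr) ** 2 - 1.0)
assert abs(T_h(kc, Vr) - 1.0) < 1e-12      # perfect transmission at k_c

Vr = 2.0                                   # above V_r^c: 3 eta^2 - 1 < 0
assert 3.0 * eta(Vr) ** 2 - 1.0 < 0.0
assert all(T_h(k, Vr) < 1.0 for k in [0.1 * i for i in range(1, 100)])
```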
Figure~\ref{fig:delta} shows the transmission probability (\ref{eq:higgs_transmission}) as a function of $k$ for various values of $V_r$. It exhibits the perfect transmission at $k^{\rm{c}}$ for weak potential barriers ($0<V_r\leq V_r^{\rm c}$). It is remarkable that the perfect transmission occurs in the long-wavelength limit $k\to0$ at the critical barrier strength $V_r^{\rm c}$. For a strong potential barrier $V_r>V_r^{\rm c}$, perfect transmission no longer takes place. \par The origin of the perfect transmission is the main focus of our paper. One may naively think that the diminishing order parameter combined with the repulsive barrier constitutes an effective double-well potential for Higgs modes, so that the perfect transmission is due to resonant tunneling induced when the wavelength of Higgs modes matches the characteristic length of the double-well potential. However, this possibility is ruled out because the perfect transmission occurs even in the long wavelength limit $k\to0$. The perfect transmission of Higgs modes reminds us of the anomalous tunneling of NG modes in BECs~\cite{kagan-03, danshita-06,kato-08,tsuchiya-09,kato-12}, where NG modes with momentum $k$ perfectly transmit a potential barrier in the limit $k\to 0$. The anomalous tunneling occurs because the wave function of the NG mode coincides with the condensate wave function at $k=0$. However, the wave function of Higgs modes $T(x)$ is not identical to the order parameter $\psi_0(x)$ at $k^{\rm c}$. Moreover, the perfect transmission does not occur for a sufficiently strong potential barrier $(V_r>V_r^{\rm c})$. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{delta_paper} \caption{Transmission probability of Higgs modes $\mathcal{T}$ through a $\delta$-function potential $v_r(x)=V_r\delta(x)$ ($V_r>0$) as a function of the wave vector of the injected Higgs mode $k$ for various values of the barrier strength $V_r$.
The horizontal axis is in units of $\xi^{-1}$.} \label{fig:delta} \end{center} \end{figure} \subsection{Perfect transmission and antibound states} We discuss the mechanism of the perfect transmission in more detail. To investigate its origin, we study the solution of Eq.~(\ref{eq:T(x)}) in the following form~\cite{siegert-39}: \begin{eqnarray} T(x)=\left\{ \begin{array}{ll} A\dfrac{3\psi_0^2-3ik\psi_0-k^2-1}{2-3ik-k^2}e^{-ikx} & (x<0) \\\\ B \dfrac{3\psi_0^2-3ik\psi_0-k^2-1}{2-3ik-k^2}e^{ikx} & (x>0) \end{array} \right. . \label{eq:Siegert} \end{eqnarray} The above form, which only involves the outgoing waves, is referred to as the Siegert condition~\cite{siegert-39,hatano-08}. Under the boundary conditions (\ref{eq:higgs_connection1}) and (\ref{eq:higgs_connection2}), we find that the solution satisfies one of the conditions \begin{eqnarray} c_1+V_rc_2&=&0 \label{eq:even_root},\\ c_2&=&0\label{eq:odd_root}, \end{eqnarray} where $c_1$ and $c_2$ are given in Eqs.~(\ref{eq:c1}) and (\ref{eq:c2}). \par One obtains from Eq.~(\ref{eq:even_root}) an even-parity solution $A=B$ with the wave vector $k=i\kappa_{\rm e}$ ($\kappa_{\rm e}>0)$, where \begin{widetext} \begin{eqnarray} 6\kappa_{\rm e}&=&-2\left(V_r+3\eta\right)+\frac{2^{4/3}\left(V_r^2+6V_r\eta +3\right)}{\left(-2V_r^3+45V_r\eta^2+\sqrt{-4\left(V_r^2+6V_r\eta +3\right)^3+\left(2V_r^3-45V_r\eta^2\right)^2}\right)^{1/3}} \nonumber \\ &&+ 2^{2/3} \left(-2V_r^3+45V_r\eta^2+\sqrt{-4\left(V_r^2+6V_r\eta +3\right)^3+\left(2V_r^3-45V_r\eta^2\right)^2}\right)^{1/3}. \end{eqnarray} \end{widetext} The excitation energy $E_{\rm e}=\sqrt{2-{\kappa_{\rm e}}^2/2}<\sqrt{2}$ is below the gap of the Higgs mode $\Delta=\sqrt{2}$. It is a true bound state involving an exponentially decaying wave function at $|x|\to\infty$. This solution is the even-parity Higgs bound state reported in Ref.~\cite{nakayama-15}, which exists for any barrier strength $V_r>0$.
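Rather than evaluating the closed-form root above, $\kappa_{\rm e}$ can also be checked numerically: substituting $k=i\kappa$ into $c_1+V_rc_2=0$ reduces Eq.~(\ref{eq:even_root}) to a real cubic in $\kappa$. A minimal sketch (the values of $\eta$ and $V_r$ are illustrative and chosen independently here):

```python
import numpy as np

# Sketch: locate the even-parity Higgs bound state numerically.
# With k = i*kappa, c1 + V_r*c2 = 0 (Eq. (eq:even_root)) becomes the real cubic
#   kappa^3 + (3*eta + V_r)*kappa^2 + (6*eta^2 - 4 + 3*V_r*eta)*kappa
#     + 6*eta*(eta^2 - 1) + V_r*(3*eta^2 - 1) = 0.
eta, Vr = 0.8, 0.5   # illustrative values
coeffs = [1.0,
          3*eta + Vr,
          6*eta**2 - 4 + 3*Vr*eta,
          6*eta*(eta**2 - 1) + Vr*(3*eta**2 - 1)]
roots = np.roots(coeffs)
kappa_e = max(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)

E_e = np.sqrt(2 - kappa_e**2 / 2)  # excitation energy of the even bound state
print(E_e < np.sqrt(2))            # True: the state lies below the Higgs gap
```

For these parameters the cubic has exactly one positive real root, consistent with the existence of a single even-parity bound state for any $V_r>0$.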
\par On the other hand, solving Eq.~(\ref{eq:odd_root}), one obtains the odd-parity solutions with the wave vectors \begin{eqnarray} k_{\rm o}^\pm=\frac{i}{2}\left(\pm\sqrt{4-3\eta^2}- 3 \eta\right) \equiv i \kappa_{\rm o}^\pm. \end{eqnarray} Note that, since $4-3\eta^2>0$, $k_{\rm o}^\pm$ is pure imaginary. The solution with an exponentially decaying wave function, $k_{\rm o}^+=i\kappa_{\rm o}^+$ ($\kappa_{\rm o}^+>0$), is a true bound state that exists when $V_r>V_r^{\rm c}$. Its excitation energy is given by $E_{\rm o}=\sqrt{2-{\kappa_{\rm o}^+}^2/2}<\Delta$. This solution is the odd-parity Higgs bound state also reported in Ref.~\cite{nakayama-15}. The other solution, $k_{\rm o}^-=i\kappa_{\rm o}^-$ ($\kappa_{\rm o}^-<0$), has an exponentially growing wave function at $|x|\to\infty$. Such a state, referred to as an antibound state~\cite{sasada-11,klaiman-10}, is not a true bound state. However, as we discuss below, it plays a crucial role in the perfect transmission of Higgs modes. We note that, since Eq.~(\ref{eq:even_root}) is a cubic equation in $k$, there are two other even-parity solutions with complex $k$. These solutions are referred to as a resonant state if ${\rm Re}(k)>0$ and an antiresonant state if ${\rm Re}(k)<0$ \cite{sasada-11,klaiman-10}. However, it turns out that they are not related to the perfect transmission of Higgs modes. \par The odd-parity Higgs bound state becomes an antibound state with $\kappa_{\rm o}^+<0$ for $V_r<V_r^{\rm c}$. Meanwhile, the transmission probability $\mathcal T_{\rm h}$ exhibits the perfect transmission when $V_r<V_r^{\rm c}$. Therefore, it is natural to suppose that the emergence of the perfect transmission may be related to the change of the odd-parity Higgs bound state into an antibound state. We show below that the two are indeed closely related. \par All the poles of the transmission amplitude~(\ref{eq:t_amp}) are given by the solutions of Eqs.~(\ref{eq:even_root}) and (\ref{eq:odd_root}).
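The odd-parity pole positions can be checked directly: with $k=i\kappa$, Eq.~(\ref{eq:odd_root}) becomes $\kappa^2+3\eta\kappa+3\eta^2-1=0$. A minimal numerical sketch (the value of $\eta$ is illustrative):

```python
import numpy as np

eta = 0.8   # illustrative; eta > 1/sqrt(3) corresponds to V_r < V_r^c

# kappa_o^{+/-} from the text; with k = i*kappa, Eq. (eq:odd_root) reads
#   kappa^2 + 3*eta*kappa + 3*eta^2 - 1 = 0.
kap_p = ( np.sqrt(4 - 3*eta**2) - 3*eta) / 2
kap_m = (-np.sqrt(4 - 3*eta**2) - 3*eta) / 2
for kap in (kap_p, kap_m):
    assert abs(kap**2 + 3*eta*kap + 3*eta**2 - 1) < 1e-12

print(np.isclose(kap_p + kap_m, -3*eta))        # True: sum rule
print(np.isclose(kap_p * kap_m, 3*eta**2 - 1))  # True: product rule
print(kap_p < 0 and kap_m < 0)  # True: two antibound states for this eta
```

The sign of $\kappa_{\rm o}^+$ flips at $\eta=1/\sqrt{3}$: for $\eta>1/\sqrt{3}$ (weak barriers) both roots are negative and both odd solutions are antibound.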
Figures~\ref{fig:comlex_k_plane}(a) and \ref{fig:comlex_k_plane}(b) show the distribution of the poles on the complex $k$ plane. If $V_r>V_r^{\rm c}$ [see Fig.~\ref{fig:comlex_k_plane}(a)], the poles of the even and odd bound states, $k=i\kappa_{\rm e}$ and $k=i\kappa_{\rm o}^+$, are located on the imaginary axis in the upper half plane, while the pole of the antibound state $k=i\kappa_{\rm o}^-$ lies on the imaginary axis in the lower half plane. The pole of the odd bound state $k=i\kappa_{\rm o}^+$ moves downward as $V_r$ decreases. It enters the lower half plane when $V_r<V_r^{\rm c}$ and becomes an antibound state, while the other poles do not cross the real axis, as shown in Fig.~\ref{fig:comlex_k_plane}(b). \begin{figure} \begin{center} \includegraphics[width=\linewidth]{comlex_k_plane} \caption{Distribution of the poles of the transmission amplitude~(\ref{eq:t_amp}) on the complex $k$ plane when (a) $V_r >V_r^{\rm c}$ and (b) $V_r <V_r^{\rm c}$. The poles of the bound states and antibound states are located on the imaginary axis in the upper and lower half planes, respectively. The complex poles in the lower half plane correspond to the (anti)resonant states.} \label{fig:comlex_k_plane} \end{center} \end{figure} We can understand the origin of the perfect transmission of Higgs modes by examining the poles of the odd (anti)bound states. The perfect transmission of Higgs modes cannot be interpreted in the usual resonant-tunneling picture, where the transmission probability has the Breit-Wigner form: $\mathcal T = (\Gamma/2)^2/\{(E-\varepsilon)^2+(\Gamma/2)^2\}$. Here, $\varepsilon$ is given by the real part of the pole and $\Gamma$ is twice the imaginary part of the pole. In fact, the transmission probability near the peak cannot be approximated in this form.
Instead, Eq.~(\ref{eq:higgs_transmission}) around $k\simeq k^{\rm{c}}$ can be approximated as \begin{eqnarray} \mathcal{T} \simeq \frac{k^2 \left(\kappa_{\rm o}^+ + \kappa_{\rm o}^-\right)^2}{\left(k^2 - \kappa_{\rm o}^+ \kappa_{\rm o}^-\right)^2+k^2 \left(\kappa_{\rm o}^+ +\kappa_{\rm o}^-\right)^2}, \label{eq:Klaimanform} \end{eqnarray} where $\kappa_{\rm o}^+ + \kappa_{\rm o}^-= -3\eta$ and $ \kappa_{\rm o}^+ \kappa_{\rm o}^-=3\eta^2-1$. It is remarkable that the above equation coincides with the asymptotic form of the transmission probability for the double-barrier potential in the presence of two antibound poles [see Eq.~(3) in Ref.~\cite{klaiman-10}]. Equation~(\ref{eq:Klaimanform}) shows that the position of the perfect-transmission peak is determined by the product of $\kappa_{\rm o}^+$ and $\kappa_{\rm o}^-$. If $V_r\leq V_r^{\rm c}$, the presence of the two odd antibound states with $\kappa_{\rm o}^\pm<0$ leads to the perfect transmission at $k^{\rm c}=\sqrt{\kappa_{\rm o}^+ \kappa_{\rm o}^-}$ in Eq.~(\ref{eq:Klaimanform}). The perfect transmission can thus be understood as being mediated by the antibound states. On the other hand, if $V_r>V_r^{\rm c}$, one of the antibound states transforms into a true bound state, so that $\kappa_{\rm o}^+ \kappa_{\rm o}^-$ becomes negative and perfect transmission no longer occurs. At the critical barrier strength $V_r=V_r^{\rm c}$, the perfect transmission occurs precisely at $k^{\rm c}=0$. \subsection{Rectangular potential barrier} We demonstrate that the perfect transmission of Higgs modes is not due to any special feature of the $\delta$-function potential. For this purpose, we show that the perfect transmission also occurs in the presence of a rectangular potential barrier.
We assume that Higgs modes are incident on a rectangular potential barrier of finite width $a$, \begin{eqnarray} v_r(x)=V_r\theta\left(-(|x|-\frac{a}{2})\right), \label{eq:recpotential} \end{eqnarray} where $\theta(x)$ is the step function. The analytic solution of Eq.~(\ref{eq:static}) is obtained as \begin{eqnarray} \psi_0(x) =\left\{ \begin{array}{ll} \tanh\left(|x|-\frac{a}{2}+\tanh^{-1}\gamma\right) & \left(|x|>\frac{a}{2}\right) \\ \beta/{{\rm cn}\left(\sqrt{K^2+\beta^2} x, \;\frac{K}{\sqrt{K^2+\beta^2}} \right)} & \left(|x|\leq\frac{a}{2}\right) \end{array} \right., \end{eqnarray} if $\beta^2+2(V_r-1)\equiv K^2>0 $ and \begin{eqnarray} \psi_0(x) =\left\{ \begin{array}{ll} \tanh\left(|x|-\frac{a}{2}+\tanh^{-1}\gamma\right) & \left(|x|>\frac{a}{2}\right) \\ \beta/{{\rm cd}\left(\beta x,\; \frac{\kappa}{\beta}\right)} & \left(|x|\leq\frac{a}{2}\right) \end{array} \right. , \end{eqnarray} if $\beta^2+2(V_r-1)\equiv -\kappa^2<0 $. Here ${\rm cn}$ and ${\rm cd}$ denote Jacobi elliptic functions; $\beta\equiv\psi_0(0)$ and $\gamma\equiv\psi_0(a/2)$ are determined numerically. We employ the finite-element method~\cite{zienkiewicz-00} to numerically solve Eq.~(\ref{eq:T(x)}). Details of the finite-element method are given in the Appendix. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{FEM_d1_paper} \caption{Transmission probability of Higgs modes $\mathcal{T}$ as a function of $k$ for a rectangular potential barrier with various strengths $V_r$. We set, in dimensionless units, $a=1$, $-200\leq x\leq200$, and $N=8000$. The horizontal axis is in units of $\xi^{-1}$.} \label{fig:FEM} \end{center} \end{figure} Figure~\ref{fig:FEM} shows the transmission probability of Higgs modes as a function of $k$ for $a=1$.
It exhibits qualitatively the same features as Fig.~\ref{fig:delta}: The perfect transmission occurs at $k^{\rm c}$ when the barrier strength does not exceed the critical value $V_r^{\rm c}=0.950$; $k^{\rm c}$ decreases as $V_r$ increases, and the perfect transmission no longer occurs when $V_r>V_r^{\rm c}$. \par In order to study the (anti)bound states, we numerically diagonalize Eq.~(\ref{eq:T(x)}) by the central-difference method \cite{cntmethod}. Figure~\ref{fig:bound_mix}(a) shows the wave functions of the (anti)bound states. The excitation energies of the true bound states are plotted as functions of $V_r$ in Fig.~\ref{fig:bound_mix}(b). The lowest odd bound state turns into an antibound state when $V_r\leq V_r^{\rm c}$, as expected. The wave function of the antibound state is delocalized over the system, as shown in the lower panel of Fig.~\ref{fig:bound_mix}(a). Thus, the perfect transmission in Fig.~\ref{fig:FEM} is considered to occur via the same mechanism as that for a $\delta$-function potential. \par We note that the critical strength of the rectangular potential $V_r^{\rm c}$ depends on the potential width $a$. We calculate $V_r^{\rm c}$ as a function of $a$ and find that $V_r^{\rm c}$ monotonically increases with increasing $1/a$ (i.e., as the potential barrier narrows), as shown in Fig.~\ref{fig:critical_Vr}. This implies that a narrow potential barrier is favorable for experimental observation of the perfect transmission of the Higgs mode, as well as of the transition between the reflectionless and reflective regimes. Figure~\ref{fig:critical_Vr} shows that $V_r^{\rm c}$ increases quadratically with $1/a$ for a wide potential barrier ($a\gtrsim 2\xi$), while it increases linearly with $1/a$ for a narrow potential barrier ($a\lesssim 2\xi$). \begin{figure} \begin{center} \includegraphics[width=\linewidth]{bound_mix_paper} \caption{(a) Wave functions for the lowest even bound state and the second lowest odd (anti)bound state.
The solid lines represent the true bound states and the dashed line represents the antibound state. The odd bound state in the upper panel for $V_r=1.5>V_r^{\rm c}$ becomes an antibound state in the lower panel for $V_r=0.8<V_r^{\rm c}$. We set, in dimensionless units, $a=1$, $-30a\leq x\leq 30 a$, and $N=1200$. The vertical and horizontal axes are in units of $\sqrt{\left|r_0\right|/u_0}$ and $\xi$, respectively. (b) Excitation energies of the lowest even bound state ($E_{\rm e}$) and the second lowest odd bound state ($E_{\rm o}$) as functions of the potential strength $V_r$. We set, in dimensionless units, $a=1$, $-200a\leq x\leq 200 a$, and $N=8000$. The vertical and horizontal axes are in units of $\sqrt{\left|r_0\right|/W_0}$ and $\left|r_0\right|\xi$, respectively.} \label{fig:bound_mix} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{critical_Vr} \caption{Critical strength of the rectangular potential $V_r^{\rm c}$ as a function of the inverse of the barrier width $1/a$. The vertical and horizontal axes are in units of $\sqrt{\left|r_0\right|/u_0}$ and $\xi^{-1}$, respectively.} \label{fig:critical_Vr} \end{center} \end{figure} \section{Conclusion}\label{sec:Conclusion} We have studied tunneling properties of Higgs modes in Bose gases in optical lattices through a potential barrier introduced by local modulation of the hopping amplitude. Higgs modes have been found to transmit perfectly through a potential barrier if the barrier strength is weak. We have found that the perfect transmission disappears at the critical barrier strength, above which one of the odd antibound states turns into a true bound state. We have demonstrated that the perfect transmission involves resonance with the antibound states.
\par We propose to detect the perfect transmission of the Higgs mode by Bragg scattering~\cite{bissbort-11}: exciting the Higgs mode with Bragg laser beams in the presence of a potential barrier, introduced by an additional lattice potential with a Gaussian profile or by a digital micromirror device, one can measure the amplitude of the Higgs-mode wave transmitted through the barrier, from which the transmission probability can be estimated. Observing the perfect transmission of the Higgs mode would provide strong evidence for the existence of the antibound states of the Higgs mode. \par We finally note that another approach for studying transmission properties of the Higgs mode is the Gutzwiller approximation, which allows us to explore a broader parameter region than the TDGL theory \cite{kovrizhin-07}. The TDGL theory and the Gutzwiller approximation, however, agree quantitatively in the vicinity of the SF-MI transition. Danshita and Tsuchiya compared the two methods regarding the Higgs bound states in Ref.~\cite{danshita-17} and showed that the results of the two methods agree well if the system is close enough to the critical point (see, for example, Figs.~8 and 9 in Ref.~\cite{danshita-17}). This demonstrates that the two approximations take into account fluctuations to the same extent in the vicinity of the SF-MI transition point. Thus, the transmission property of the Higgs mode does not change qualitatively upon approaching the SF-MI transition within either the TDGL theory or the Gutzwiller approach. The only quantitative changes appear through the scaling of the parameters in Eq.~(\ref{eq:Dless}). \par It would be interesting to study the transmission of the Higgs mode in the regime where the system is so close to the transition point that the fluctuation of the order parameter becomes larger than the order parameter itself.
In this regime, the TDGL theory and the Gutzwiller approximation fail to describe the system, and one needs an alternative approach based on, for instance, the renormalization group. Such a study is, however, beyond the scope of the present paper. \section*{Acknowledgments} We thank I. Danshita and T. Nikuni for helpful discussions. In particular, we are grateful to N. Hatano for valuable discussions and for informing us of Ref.~\cite{klaiman-10}. T. N. thanks M. Imada and Y. Yamaji for useful comments. T. N. was supported by JSPS through the Program for Leading Graduate Schools (MERIT). S. T. was supported by a Chuo University Grant for Special Research. This work was supported by KAKENHI Grant No.~JP19K03691. \vspace{5mm}
\section{Introduction} \label{sec:introduction} \vspace{-0.1cm} The visual world is full of objects moving around in predictable ways. Examples of these \emph{spatial trajectories} include human motion, such as people dancing or exercising; objects moving, such as a ball rolling; trajectories of cars and bicycles; or animal migration patterns. Evidence suggests that the human perceptual system encodes motion into high-level neural codes that represent the motion holistically, going beyond the specific input observations \cite{johansson1973visual}. Humans use this abstract representation for downstream tasks like inferring intention \cite{blakemore2001perception}. Computer vision systems likely need similar mechanisms to encode trajectories and motions into global representations. Representation learning has been transformative in other domains such as images and text for its ability to obtain high-level representations that reorganize the information in the input, and are better at downstream tasks than the original signals. A global representation of trajectories would allow us to evaluate a trajectory at any point in time, even ones not yet observed. However, modeling trajectories presents a series of challenges for representation learning. First, in real-time scenarios, the future of the trajectory is never observed. Second, temporal and spatial occlusions may prevent parts of a trajectory from being observed. Third, trajectories are by nature continuous in time. Finally, a trajectory-level metric is usually not well defined and is application-dependent. We propose a representation learning framework for trajectories that deals with all these challenges in a unified way. Our key contribution is the representation of a partial observation of a trajectory as a probability distribution in a learned latent space that represents all the possible trajectories the observation could have been sampled from.
Our framework's simplicity and generality allow it to be flexible: it does not constrain the input-space metric, accepts observations of different lengths and at any (irregularly sampled) point in time, can be implemented using different families of latent space distributions, and is capable of performing inference-time tasks for which it has not been explicitly trained. Our experiments on human movement datasets show that our method can accurately predict the past and future of a trajectory segment, as well as the interpolation between two different segments, outperforming autoregressive baselines. Additionally, it can do so for any continuous point in time. We also show how we can modify given trajectories by manipulating their representations. \begin{figure} \centering \includegraphics[width=\columnwidth]{teaser.pdf} \caption{\textbf{Predictions on figure skating data (FisV)}. Our model is capable of predicting the future (top row), past, and interpolation (bottom row) of a trajectory given partial observations, at any continuous time. The inputs to the model are the keypoints in the images. See more examples in \cref{fig:predictions}.} \label{fig:teaser} \vspace{-0.2cm} \end{figure} \section{Method} \label{sec:method} \vspace{-0.15cm} \subsection{Framework, Definitions and Notation} \vspace{-0.15cm} The input to our framework is a sequence of samples obtained from a (continuous in time and infinite) spatial trajectory $u$, which we define as the continuous temporal evolution of a set of spatial coordinates. We call each sample a \emph{point} $x$, which lives in the \emph{input space} $\mathbb{R}^K$. We call the sequence of points, together with the times $t$ at which they were sampled, a \emph{segment} $s$, which can be understood as a partial observation of $u$. We define a distance metric $\delta$ between points $x$ in the input space.
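The objects just defined (points, segments, and the metric $\delta$) can be sketched as minimal containers; the names below and the Euclidean choice of $\delta$ are illustrative, not fixed by the framework:

```python
from dataclasses import dataclass
import numpy as np

# Minimal sketch of the framework's objects; all names are illustrative.
@dataclass
class Segment:
    times: np.ndarray   # shape (T,): sample times, possibly irregular
    points: np.ndarray  # shape (T, K): points x in the input space R^K

def delta(x_a, x_b):
    """Point-wise input-space distance; Euclidean is one possible choice."""
    return float(np.linalg.norm(x_a - x_b))

# A segment is a partial, discretely sampled view of a continuous trajectory u.
def u(t):  # toy trajectory with K = 2
    return np.stack([np.cos(t), np.sin(t)], axis=-1)

t = np.sort(np.random.uniform(0.0, 1.0, size=8))  # irregular sampling
s = Segment(times=t, points=u(t))
```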
Our goal is to transform these measurements of motion $s$ into a representation $z$ that will be useful for downstream tasks. We define a \emph{latent space} $\mathbb{R}^N$ of \emph{trajectories} $z$. Each $z$ in this space represents the full extent of a trajectory, both in time and in space. We use $Q$ to represent probability distributions over trajectories $z$, in the latent space. We define a distance function $D$ between distributions of trajectories, which assumes an underlying distance function $d$ between trajectories $z$. We use an encoder $\Theta$ to encode every segment $s$ to a probability distribution $Q(\cdot;s)=\Theta(s)$ over trajectories, where $Q(z;s)$ represents the probability that $s$ was sampled from the trajectory represented by $z$. Additionally, we can decode a trajectory $z$ at a specific time $t$ by using a decoder $\Phi$, obtaining a point $x=\Phi(z,t)$. $\Phi$ takes any continuous $t$ as input. See Fig.~\ref{fig:schematic} for a schematic. \subsection{Representation Learning} \label{sec:training} \vspace{-0.15cm} When observing a segment $s$, one may have some uncertainty about the specific trajectory it was sampled from. For instance, a segment showing a person jumping may correspond to a trajectory that continues with the person falling, or to a trajectory that proceeds with them doing a backflip and landing on their feet, but it will not belong to a trajectory of a person swimming. Therefore, we represent the segment as a distribution over trajectories, where $Q(z;s)$ represents the likelihood of a trajectory given the segment. During training, the goal is to learn this mapping from the input space (segments of trajectories) to the latent space (distributions over trajectories). \looseness=-1 Concretely, given two segments $s^a, s^b$ that have been obtained from the same underlying trajectory, we want some $z$ to exist such that its likelihood under the distributions $Q^a$ and $Q^b$ representing each of the segments is high. 
To encourage this, we train the model to maximize the overlap between the distributions $Q^a$ and $Q^b$. Similarly, we minimize the overlap between (the distribution representations of) segments sampled from different trajectories, under the assumption that no trajectory $z$ exists that contains both segments. Specifically, we minimize a self-supervised triplet loss: \begin{figure} \centering \vspace{-0.1cm} \includegraphics[width=\columnwidth]{schematic.pdf} \caption{\textbf{Schematic of our framework}. We show the input space $\mathbb{R}^K$, the latent space $\mathbb{R}^N$, and the mappings between the two (encoder $\Theta$ and decoder $\Phi$). A segment $s$ belonging to a trajectory $u$ is encoded into a distribution $Q$, from which a trajectory $z$ is sampled and decoded at a time $t$, to get $\hat{x}_t$.} \label{fig:schematic} \vspace{-0.2cm} \end{figure} \begin{equation}\label{eq:traj} \mathcal{L}_{\mathrm{enc}} =\sum_{(i,k^+,k^-)\in\mathcal{T}}\max \left[D\left(Q^i, Q^{k^+}\right)-D\left(Q^i, Q^{k^-}\right)+\alpha, 0\right], \end{equation} where $\alpha$ is a margin hyperparameter, and $\mathcal{T}$ is a set of triplets: for every segment $i$ in the dataset, we define several triplets by sampling pairs consisting of a positive segment $k^+$ (such that $(i,k^+)$ is a positive pair) and a negative segment $k^-$ (such that $(i,k^-)$ is a negative pair). In addition to learning representations of trajectories, we also wish to be able to decode them back into input-space points. To achieve this, we train a decoder $\Phi$ that allows us to obtain the specific value of any trajectory at any continuous time $t$. In order to train the decoder $\Phi$, we sample trajectories $z\sim Q(\cdot;s)$ from each segment representation, and decode them at specific time steps $t$ that were contained in $s$, obtaining a prediction $\hat{x}_t=\Phi(z,t)$ for which we have ground truth $x_t$.
There is no uncertainty in this prediction, as $x_t$ was part of the segment $s$ in the first place; the decoder is only explicitly trained for reconstruction, not extrapolation. We train the decoder via regression, using the \emph{point-wise} distance $\delta$. Note that we never explicitly define a trajectory-level distance in the input space; it is implicitly learned by the model. The reconstruction loss is defined as: \begin{equation}\label{eq:rec} \mathcal{L}_{\mathrm{dec}} = \frac{1}{N}\sum_{i=1}^N\mathbb{E}_{z\sim \Theta(s^i)}\sum_t^{T^i}\delta\left(\Phi(z,t), x_t^i\right), \end{equation} where $N$ is the number of segments in the dataset, and $T^i$ is the number of points in segment $s^i$. We minimize \cref{eq:traj,eq:rec} jointly and end-to-end. We implement the encoder $\Theta$ using a Transformer Encoder architecture \cite{transformers}, and the decoder $\Phi$ using a ResNet \cite{resnet}. See Appendix~\ref{apx:details} for more details. \subsection{Creating Positive and Negative Pairs} \label{sec:pairs} In order to define positive and negative pairs for \cref{eq:traj}, we use the following: \begin{itemize}[topsep=0pt,itemsep=0ex,partopsep=1ex,parsep=1ex,leftmargin=0.5cm] \item \textbf{Input-space relationships}. The simplest way is to take segments from the same trajectory as positives and segments from other random trajectories as negatives. The initial segments can have different relationships, such as precedence, containment, or overlap \cite{allen_algebra}.
In our experiments, we sample three segments for every trajectory: a \textit{past} segment ({\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}} in Fig.~\ref{fig:latent_space}), a \textit{future} segment ({\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}}) whose starting time comes right after the end of the past segment, and a \textit{combination} segment ({\color[HTML]{ffd166} {\fontfamily{phv}\selectfont\small\textbf C}}), which contains both the past and the future segments. \item \textbf{Intersection}. An intersection {\color[HTML]{b36b00} {\fontfamily{phv}\selectfont\small\textbf {I}}} of two distributions $Q$ in the latent space will represent all the trajectories that have a high likelihood for both intersected segments. Note that an intersection in the latent space is a union in the input space: the intersection constrains the possible trajectories to those that are consistent simultaneously for the two segments. Similarly, an intersection in the input space (assuming an overlap between segments) is a union in the latent space. In the latent space, the intersection of the past and future segments should be equal to the representation of the combination segment, and therefore the pair ({\color[HTML]{ffd166} {\fontfamily{phv}\selectfont\small\textbf C}}, {\color[HTML]{b36b00} {\fontfamily{phv}\selectfont\small\textbf {I}}}) is a positive one. \item \textbf{Re-encoding}. Given a trajectory $z$, we can decode it into any set of times $t$, obtaining a new segment. This segment can be (re-)encoded using $\Theta$, and a representation $Q$ can be obtained for it, resulting in a new positive or negative for other segment representations. 
For example, when, given the past, we randomly sample a possible future, the resulting segment ({\color[HTML]{000000} {\fontfamily{phv}\selectfont\small\textbf {FP}}} - \textit{future given past}) will be \textit{different} from the ground-truth future, so the pair ({\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}}, {\color[HTML]{000000} {\fontfamily{phv}\selectfont\small\textbf {FP}}}) will be treated as a negative. \end{itemize} \begin{figure} \centering \includegraphics[width=\columnwidth]{schematic_input_output_legend.pdf} \caption{\textbf{Examples of segments}. We illustrate how spatial trajectories (left) are ideally encoded into the latent space (right). The intersection between two segment representations (boxes in the figure) represents the trajectories that contain the two segments. ``Future given past'' represents a segment decoded at a future time, from a trajectory sampled from the past representation. It is effectively a sample of a possible future given the past. Other segments are defined similarly. For clarity, we do not show other options like ``past given past'', which would be the same box as past {\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}}. Best viewed in color.} \label{fig:latent_space} \end{figure} We exemplify a combination of these possibilities in Fig.~\ref{fig:latent_space}. In order to determine which pairs of segments are positive and which are negative, the rule is always the same: if they can belong to the same trajectory they are positives; otherwise they are negatives. For example, looking at Fig.~\ref{fig:latent_space} it is clear that, as discussed above, there is no trajectory that can contain both {\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}} and {\color[HTML]{000000} {\fontfamily{phv}\selectfont\small\textbf {FP}}}. We list all negative and positive pairs in Appendix~\ref{sec:all_pairs}.
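With triplets constructed as above, the loss of \cref{eq:traj} is a standard hinge over representation distances. A minimal sketch, using plain vectors and a Euclidean placeholder for $D$ purely for illustration (the actual representations are distributions):

```python
import numpy as np

# Sketch of the triplet loss in Eq. (eq:traj). Plain vectors and a Euclidean
# distance stand in for the distribution representations Q and the distance D.
def triplet_loss(D, reps, triplets, alpha=1.0):
    loss = 0.0
    for i, k_pos, k_neg in triplets:
        loss += max(D(reps[i], reps[k_pos]) - D(reps[i], reps[k_neg]) + alpha, 0.0)
    return loss

def D(a, b):
    return float(np.linalg.norm(a - b))

reps = {"P": np.array([0.0, 0.0]),  # past
        "F": np.array([0.1, 0.0]),  # future of the same trajectory (positive)
        "O": np.array([5.0, 5.0])}  # segment from another trajectory (negative)
print(triplet_loss(D, reps, [("P", "F", "O")]))  # 0.0: margin already satisfied
```

Swapping the roles of the positive and negative in a triplet yields a strictly positive loss, which is what drives the representations apart during training.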
\subsection{Comparing Distributions} \label{sec:approaches} \cref{eq:traj} uses the distance function $D$ to compare distributions of trajectories. In this section, we introduce two different ways of designing $D$, resulting in different intuitions about the latent space. \paragraph{Symmetric Distance} If two segments can belong to the same trajectory, the distributions $Q$ representing each segment should be similar and close to each other (positives), and the (symmetric) distance $D$ between them should be small. For example, in Fig.~\ref{fig:latent_space}, the representations of the past {\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}} and future {\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}} segments belonging to the same trajectory are treated as positives. \paragraph{Conditional} Instead of computing a distance or a similarity, we compute the probability that a segment $s^a$ belongs to the same trajectory as another segment $s^b$. We model this as a conditional probability $P(Q^a|Q^b)$. There are four possibilities: \begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex,leftmargin=0.5cm] \item $P(Q^a|Q^b)=1$, when $s^b$ includes $s^a$, like the combination segment {\color[HTML]{ffd166}{\fontfamily{phv}\selectfont\small\textbf C}} including the past {\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}}. \item $0<P(Q^a|Q^b)<1$, when $Q^a$ is possible but not certain given $Q^b$, like {\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}} and {\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}} in Fig.~\ref{fig:latent_space}. \item $0<P(Q^b|Q^a)<1$, defined in a similar manner. \item $P(Q^a|Q^b)=0$, for unrelated segments, like {\color[HTML]{ffd166} {\fontfamily{phv}\selectfont\small\textbf C}} and {\color[HTML]{06d6a0} {\fontfamily{phv}\selectfont\small\textbf O}}. \end{enumerate} We treat the first three cases as positives, and the last case as a negative.
Because pairs belonging to the first case have a stricter correspondence than those belonging to the second and third cases, we sample them more often during training. Note that under this interpretation, a past {\color[HTML]{ef476f}{\fontfamily{phv}\selectfont\small\textbf P}} and a future {\color[HTML]{118aff} \fontfamily{phv}\selectfont\small\textbf{F}} from the same trajectory do not have a strong correspondence (first case), but a softer one (second and third cases): one does not fully define the other. This approach results in probability values that we either maximize (positives) or minimize (negatives), so we define $D(A,B)=1-P(A|B)$. The previous approaches require a way of computing either a distance between the distributions $Q$ or a conditional probability between them. In the next section, we show two families of distributions for which these can be defined. \subsection{Trajectory Segments as Distributions} In order to obtain $Q$, the encoder $\Theta$ predicts the parameters of a distribution family. Conditions for the distribution families are: 1) we can sample from it in a differentiable way, 2) we can parameterize it, 3) we can compute, in closed form, an intersection that returns a distribution from the same family, and 4) we can compute either a similarity function or a conditional probability, or both (see Sec.~\ref{sec:approaches}). Next, we introduce two distribution families that meet the previous criteria. \paragraph{Normal distributions} We use uncorrelated multivariate normal distributions, and parameterize them with a mean $\bm{\mu}$ and a standard deviation $\bm{\sigma}$. We compute the intersection as the product of two normal distributions, which remains normal when the dimensions are uncorrelated (see Appendix~\ref{apx:distributions}). We use the symmetrized Kullback-Leibler (KL) divergence between distributions as a distance function. This distance is not a proper metric; alternatives are discussed in Appendix~\ref{apx:distributions}.
Normal distributions assume an underlying Euclidean distance metric $d$ between trajectories $z$. \paragraph{Box embeddings} Box embeddings \cite{vilnis2018probabilistic} represent objects with high-dimensional products-of-intervals (or boxes), parameterized by their two extreme vertices $z^{\wedge}$ and $z^{\vee}$. The intersection between box embeddings is well defined and results in another box embedding. This makes them a natural choice to represent conditional probabilities, which can be computed as $P(A|B)=\mathrm{Vol}(A\cap B)/\mathrm{Vol}(B)$, where $\mathrm{Vol}(A)=\prod_{i=1}^{N} \max(z_i^{\vee}-z_i^{\wedge}, 0)$ is the volume of the box, and $\cap$ represents the intersection operation. These operations are straightforward to compute. Boxes are not actual distributions, as they need not integrate to one. However, they are easily normalized by dividing by their volume, and therefore they can be treated as distributions for all the practical purposes required in our framework (\textit{i.e.} sampling, where we approximate the boxes with a uniform distribution). Symmetric distance functions can also be defined on box embeddings; we define a few in Appendix~\ref{sec:apx_boxes}. In both cases, we use the reparameterization trick \cite{kingma2013auto} in order to sample from the distributions while keeping gradient information. We found the best-performing option was using box embeddings under the conditional scenario; the values reported in Section~\ref{sec:quantitative} use this setting. \subsection{Inference} Once trained, our decoder $\Phi$ is able to decode a trajectory at any continuous time $t$, including times that were not part of the input. For example, our framework can decode a future segment given an input past segment, by sampling from its representation, and evaluating that sample at some future times.
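As a concrete reference for the box-embedding operations introduced above (intersection, volume, and the conditional probability $P(A|B)$), here is a minimal NumPy sketch; the function names are ours, with `lo` standing for the minimum vertex $z^{\wedge}$ and `hi` for the maximum vertex $z^{\vee}$:

```python
import numpy as np

def box_intersection(lo_a, hi_a, lo_b, hi_b):
    # The intersection of two boxes is again a box (possibly empty).
    return np.maximum(lo_a, lo_b), np.minimum(hi_a, hi_b)

def box_volume(lo, hi):
    # Vol = prod_i max(hi_i - lo_i, 0): inverted (empty) dimensions
    # contribute zero side length, so an empty box has zero volume.
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def conditional_prob(lo_a, hi_a, lo_b, hi_b):
    # P(A|B) = Vol(A ∩ B) / Vol(B)
    lo_i, hi_i = box_intersection(lo_a, hi_a, lo_b, hi_b)
    return box_volume(lo_i, hi_i) / box_volume(lo_b, hi_b)
```

For example, a box $A=[0.5,1.5]^2$ overlaps a quarter of the unit box $B=[0,1]^2$, so $P(A|B)=0.25$.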
This future segment will not necessarily be equal to the ground truth future segment (if one exists), because a single past can have multiple futures. Overall, our framework is capable of doing 1) future and past prediction, by decoding a segment at times outside of its range; 2) continuous reconstruction given a discrete input, by decoding at any continuous time $t$; 3) interpolation between two segments, by decoding trajectories in their latent-space intersection; and 4) modifying existing trajectories, by manipulating the latent space. All the previous tasks are possible without explicitly training for any of them. We show examples in Sec.~\ref{sec:experiments}. \section{Experiments} \label{sec:experiments} \begin{table} \caption{\textbf{Prediction results}. We report the mean squared error (the lower the better) across keypoints, after normalizing each trajectory to be contained in a region of size $100\times100$. F, P and I stand for ``future'', ``past'' and ``interpolation'', respectively. Values are obtained over 10 runs with different test-time random seeds (changes include sampled segments and sampled $z$).
An extended table with standard deviations is in Appendix~\ref{sec:additional}.} \label{tab:results} \begin{subtable}{1\linewidth} \centering \tabcolsep=0.11cm \caption{Long sequences} \label{tab:long} \begin{tabular}{l c ccc c ccc c ccc} \toprule && \multicolumn{3}{c}{\textbf{FineGym}} && \multicolumn{3}{c}{\textbf{Diving48}} && \multicolumn{3}{c}{\textbf{FisV}} \\ \cmidrule(r){3-5} \cmidrule(r){7-9} \cmidrule(r){11-13} && F & P & I && F & P & I && F & P & I \\ \midrule \textbf{VRNN} \cite{vrnn} & & \small{ 15.85 }& \small{ 15.93 }& \small{ 16.10 }& & \small{ 23.51 }& \small{ 27.97 }& \small{ 25.66 }& & \small{ 14.95 }& \small{ 15.03 }& \small{ 15.08 }\\ \textbf{Trajectron++ uni.} \cite{salzmann2020trajectron++} & & \small{ \ 9.54 }& \small{ \ 9.98 }& \small{ \ 9.73 }& & \small{ 11.67 }& \small{ 16.52 }& \small{ 11.98 }& & \small{ 11.42 }& \small{ 11.85 }& \small{ 11.68 }\\ \textbf{Trajectron++} \cite{salzmann2020trajectron++} & & \small{ \ 9.72 }& \small{ 10.01 }& \small{ \ 9.89 }& & \small{ 11.59 }& \small{ 16.23 }& \small{ 12.68 }& & \small{ 11.41 }& \small{ 11.71 }& \small{ 11.63 }\\ \textbf{TrajRep (ours, ablation)} & & \small{ \ 8.82 }& \small{ \ 9.07 }& \small{ \ 7.57 }& & \small{ 10.00 }& \small{\textbf{ 11.74 }}& \small{ 10.06 }& & \small{ 10.62 }& \small{ 11.27 }& \small{ \ 9.70 }\\ \textbf{\quad + re-encoding (ours)} & & \small{\textbf{ \ 8.50 }}& \small{\textbf{ \ 8.83 }}& \small{\textbf{ \ 7.11 }}& & \small{\textbf{ \ 9.81 }}& \small{ 12.00 }& \small{\textbf{ \ 9.58 }}& & \small{\textbf{ 10.32 }}& \small{\textbf{ 10.77 }}& \small{\textbf{ \ 9.22 }}\\ \bottomrule \end{tabular} \vspace{0.1cm} \caption{Short sequences} \label{tab:short} \begin{tabular}{l c ccc c ccc c ccc} \toprule && \multicolumn{3}{c}{\textbf{FineGym}} && \multicolumn{3}{c}{\textbf{Diving48}} && \multicolumn{3}{c}{\textbf{FisV}} \\ \cmidrule(r){3-5} \cmidrule(r){7-9} \cmidrule(r){11-13} && F & P & I && F & P & I && F & P & I \\ \midrule \textbf{VRNN} \cite{vrnn} & & \small{ 12.77 }& 
\small{ 13.20 }& \small{ 13.40 }& & \small{ 18.36 }& \small{ 20.14 }& \small{ 19.86 }& & \small{ 13.26 }& \small{ 13.44 }& \small{ 13.45 }\\ \textbf{Trajectron++ uni.} \cite{salzmann2020trajectron++} & & \small{ \ 7.80 }& \small{ \ 8.28 }& \small{ \ 7.48 }& & \small{ \ 9.05 }& \small{ 10.36 }& \small{ \ 8.29 }& & \small{ \ 9.23 }& \small{ \ 9.68 }& \small{ \ 8.86 }\\ \textbf{Trajectron++} \cite{salzmann2020trajectron++} & & \small{ \ 7.26 }& \small{ \ 7.93 }& \small{ \ 6.94 }& & \small{ \ 8.74 }& \small{ 11.35 }& \small{ \ 8.31 }& & \small{ \ 8.70 }& \small{ \ 9.28 }& \small{ \ 8.28 }\\ \textbf{TrajRep (ours, ablation)} & & \small{ \ 6.49 }& \small{ \ 6.59 }& \small{ \ 5.15 }& & \small{ \ 6.94 }& \small{ \ 6.99 }& \small{\textbf{ \ 5.00 }}& & \small{ \ 7.83 }& \small{ \ 8.17 }& \small{ \ 6.01 }\\ \textbf{\quad + re-encoding (ours)} & & \small{\textbf{ \ 6.20 }}& \small{\textbf{ \ 6.36 }}& \small{\textbf{ \ 4.88 }}& & \small{ \textbf{\ 6.76 }}& \small{\textbf{ \ 6.85 }}& \small{ \ 5.04 }& & \small{\textbf{ \ 7.54 }}& \small{\textbf{ \ 7.78 }}& \small{\textbf{ \ 5.88 }}\\ \bottomrule \end{tabular} \end{subtable}% \end{table} \subsection{Datasets} \label{sec:datasets} For our experiments, we selected data adhering to the following criteria. First, there has to be uncertainty in the trajectory when given just a segment (for instance, the future is not fully specified given the past). Second, the prediction should not require external contextual information. Context can be seamlessly added to our architecture, but it involves additional task-specific engineering decisions, and we want our evaluation to be orthogonal to them. Similarly, we avoid trajectories that require highly-engineered point-level distances $\delta$. Finally, we prefer our trajectories to be obtained from real-world data. For all the previous reasons, we implement our framework on \emph{human movement datasets}. 
Specifically, we extract keypoints from human action datasets using OpenPose \cite{openpose}. For every video, we keep the most salient human trajectories. This results in sequences of dimension $[L, 25, 2]$, where $L$ is the number of frames in the trajectory, 25 is the number of joints in a human skeleton extracted by OpenPose, and 2 corresponds to the number of spatial coordinates for every joint. We refer to the whole skeleton at every time-step ---the combination of all joints, resulting in a $K=50$-dimensional vector--- as a \emph{point}. As a distance function $\delta$ between points (\textit{i.e.} skeletons) we use the per-joint $l^2$-norm distance, averaged across all visible joints. We extract human movement trajectories from the FineGym \cite{finegym}, Diving48 \cite{diving48} and FisV \cite{fisv} datasets, which correspond to gymnastics, diving and figure skating, respectively. For each of the datasets, we experiment with short sequences (up to 10 time-steps, or slightly over one second) and long ones (up to 30 time-steps, representing slightly under four seconds of the trajectory), and report results for both. We provide more details on the dataset creation in Appendix~\ref{apx:datasets}. \begin{figure} \begin{subfigure}{1\linewidth} \centering \vspace{-1cm} \includegraphics[width=\columnwidth]{future_prediction.pdf} \vspace{-0.9cm} \caption{\textbf{Future prediction}. We show an example of a future prediction, where the input is eight frames irregularly sampled over one second (we only show three of them), and we predict up to three seconds into the future.} \vspace{-0.8cm} \label{fig:future_pred} \vspace{1cm} \includegraphics[width=\columnwidth]{past_prediction.pdf} \vspace{-0.7cm} \caption{\textbf{Past prediction}.
We show an example of a past prediction, where the input is eight frames irregularly sampled over one second (we only show three of them), and we predict up to three seconds into the past.} \label{fig:past_pred} \vspace{1cm} \includegraphics[width=\columnwidth]{interpolation.pdf} \caption{\textbf{Interpolation}. We provide the model with skeleton keypoints coming from two separate segments, and sample points at times in between these two segments, from the intersection of the two segments in latent space. The model produces sensible interpolations that are not simply a linear interpolation at the joint level: in the first example, the gymnast first stands, then turns; in the second example, the gymnast swings right and left, just in time to end up meeting the future segment at the right position.} \label{fig:interpolation} \end{subfigure} \vspace{0.2cm} \caption{\textbf{Predictions on gymnastics data (FineGym)}. We show examples of past, future, and interpolation predictions. The inputs to our model are keypoints sampled irregularly in time, obtained from human movement datasets, and the outputs are predictions of the trajectories at different continuous times (past, future, or in between the inputs). The only inputs to the model are the keypoints, not the images. Results show our model's capabilities for modeling trajectories well outside of the input's temporal range, for dealing with spatial and temporal occlusions, and for doing so at high temporal resolution. See Section~\ref{sec:qualitative} for a deeper analysis.} \label{fig:predictions} \end{figure} \subsection{Quantitative Experiments} \label{sec:quantitative} \paragraph{Baselines and ablations} As baselines, we select trajectory-modeling methods that are capable of encoding uncertainty about the future. \textbf{Variational RNNs} \cite{vrnn} extend recurrent neural networks (RNNs) to the non-deterministic case, by modeling every step with a variational auto-encoder (VAE) \cite{kingma2013auto}.
\textbf{Trajectron++} \cite{salzmann2020trajectron++} is a state-of-the-art trajectory-modeling framework which also builds on top of RNNs and (conditional) VAEs \cite{kingma2014semi}. Uncertainty is modeled as a Gaussian mixture model (GMM). We adapt Trajectron++ to our data, making the encoding and decoding as similar to our setting as possible (for fairness), while keeping the core of the framework intact. We train two Trajectron++ versions, one with uniformly-sampled inputs and outputs (``Trajectron++ uni.''), and a second one with non-uniform sampling, following the setup in our models (``Trajectron++''). We also ablate our model, and report results with and without training with re-encoded segments. \paragraph{Tasks and metric} We evaluate our framework on three different tasks: future prediction, past prediction, and interpolation between two segments. Future prediction consists in predicting points from a future segment given a past segment. Past prediction is defined symmetrically. In the interpolation task, we input two \emph{separated} segments (past and future) from a trajectory, and predict the segment in between them. In our model, we do so by decoding from the latent-space intersection of the two input segments. Baselines (which are autoregressive) are not capable of doing this combination, so we only use the past segment as input. Baselines are also not capable of directly performing past prediction, so we predict the future of the reversed trajectory instead. As a metric, we use the average of the $l^2$-norm distance across joints, which is used by all methods during training, and report the best out of $M=10$ samples, to account for multiple modes and uncertainty in the prediction. Note that our model has never been explicitly trained to perform any of the previous tasks. We show results in Tables~\ref{tab:long} and \ref{tab:short}, for long and short trajectories respectively. 
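The evaluation metric just described (average per-joint $l^2$ error, reporting the best of $M=10$ samples) can be sketched as follows; the array shapes are our assumption, not a detail specified in the text:

```python
import numpy as np

def per_joint_error(pred, gt):
    # pred, gt: [T, J, 2] arrays of joint coordinates over T time-steps.
    # l2 distance per joint, averaged over joints and time-steps.
    return np.linalg.norm(pred - gt, axis=-1).mean()

def best_of_m(samples, gt):
    # samples: [M, T, J, 2]. Report the lowest error among the M sampled
    # predictions, to account for multiple modes and uncertainty.
    return min(per_joint_error(s, gt) for s in samples)
```

Taking the best of several samples is what lets a multi-modal model avoid being penalized for predicting a plausible future that differs from the one ground-truth realization.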
Our model outperforms baselines in all the tasks, demonstrating its value and flexibility. We also show that creating more challenging negatives by re-encoding decoded trajectories yields more accurate predictions. However, our method performs well even without re-encoding. \subsection{Qualitative Experiments} \label{sec:qualitative} \begin{figure} \centering \begin{subfigure}{0.8\linewidth} \vspace{-0.5cm} \includegraphics[width=\columnwidth]{speed_change.pdf} \vspace{-1cm} \caption{Speed change.} \label{fig:speed} \vspace{0.5cm} \centering \includegraphics[width=\columnwidth]{offset_change.pdf} \caption{Temporal offset.} \label{fig:offset} \end{subfigure} \caption{\textbf{Temporal editing}. We decode, for the same times $t$, different trajectories $z$ in the latent space. These trajectories have been obtained by encoding the segment in the top row, and moving in small increments in the latent space along directions that represent speed (a) or temporal offset (b). We highlight in green and pink some correspondences between different decoded trajectories, to emphasize that the spatial trajectories are the same but with variations in some time-related attribute, such as speed or temporal offset.} \label{fig:editing} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{multimodal_figure.pdf} \caption{\textbf{Multiple futures}. Given a few input frames, our model is capable of predicting the future. It does so by modeling a distribution over the possible trajectories: by sampling from this distribution, we can obtain different plausible futures given the input (past) segment. In this figure we show, for specific inputs, the ground truth future, as well as two different futures sampled randomly from the input segment's distribution.
This figure shows that our model is indeed capturing the multi-modal nature of the trajectories under uncertainty.} \label{fig:multimodes} \vspace{-0.2cm} \end{figure} We show examples of our model's inputs and outputs in Fig.~\ref{fig:predictions}. Specifically, we show future, past and interpolation predictions. The results reflect that the model learns to predict sequences up to four times longer than the input. They also show the high temporal resolution of our model: the model predictions evolve smoothly and sensibly for time-steps separated by a few hundredths of a second (\cref{fig:future_pred,fig:past_pred}). When not all joints are present in the input (first frame in Fig.~\ref{fig:future_pred}), our model is still capable of reconstructing the full spatial extent of the position. Finally, note how the model can take as inputs irregularly sampled time-steps (Fig.~\ref{fig:past_pred}), which makes it adaptable to temporal occlusions. Baselines are only capable of predicting the future, and are restricted to predicting uniform time-steps. \paragraph{Temporal editing} The segments are directly tied to the temporal span they represent. For example, two segments with the exact same coordinates and evolution across time but starting at different times will result in different (albeit similar) representations. The speed of a movement and the time in the trajectory at which the movement occurs are important attributes of that movement, and the representation should not be invariant to them: they belong to different trajectories. However, because these trajectories are very similar, our model learns to represent them close together in the latent space. In Fig.~\ref{fig:editing}, we show that the model encodes different temporal variations. Moving along specific directions in the latent space results in progressively faster trajectories (\cref{fig:speed}), or in trajectories with an increasing temporal offset with respect to the original one (\cref{fig:offset}).
See Appendix~\ref{sec:additional} for details. \paragraph{Representing multiple futures} A crucial aspect of our formulation is the assumption that the future is uncertain, and that our model has to be capable of modeling the different modes of the trajectory distribution. In Fig.~\ref{fig:multimodes} we show examples of multiple predicted futures given a single past segment, showing that our model captures the multi-modal nature of the trajectories under uncertainty. \section{Related work} \label{sec:related_work} \paragraph{Modeling trajectories} Spatial trajectories are usually modeled in the literature in an autoregressive (AR) fashion \cite{song2017end,martinez2017human,kratzer2020prediction,alahi2016social,sun2019stochastic,ivanovic2019trajectron,salzmann2020trajectron++,li2019propagation,yan2018spatial,tang2019multiple,kosiorek2018sequential,hsieh2018learning}, where trajectories are defined conditioned on previous time-steps. Despite their success, AR models are incapable of dealing with some of the challenges stated in \cref{sec:introduction}: most notably, they do not represent time as a continuous variable, they cannot model the full extent of a trajectory (simultaneously both past and future), and no learned trajectory-level metric can be obtained from them. Some of them model the uncertainty in the prediction \cite{tang2019multiple,kosiorek2018sequential,hsieh2018learning,ivanovic2019trajectron,salzmann2020trajectron++,vrnn,sun2019stochastic}, and we use two representative ones \cite{vrnn,salzmann2020trajectron++} as our baselines. A different line of work is focused on representing segments of trajectories (not just points) as points in a latent space \cite{yao2017trajectory,zhang2020trajectory,yao2019computing,yao2020linear,li2018deep,zeng2020dsdnet,butepage2017deep,liu2022cstrm}. However, they are not capable of modeling the full extent of a trajectory outside the limits of the considered segment.
Additionally, the segment-level metric is either unstructured \cite{yao2017trajectory} or explicitly given \cite{zhang2020trajectory,yuan2017review,zeng2020dsdnet}. \paragraph{Continuous time} Modeling time as a continuous signal has gained traction recently in fields such as graphics \cite{xian2021space,vanhoorick2022revealing,pumarola2021d,sitzmann2019siren} or physics modeling \cite{michaelbeyond2022,chamberlain2021grand}, because it accurately represents the underlying (continuous) world being modeled. In the graphics neural implicit functions literature, time is used to condition the prediction of the network. We adopt the same approach in our decoder. We encode the set of continuous times in a segment by using a Transformer network \cite{transformers}, which by construction is permutation invariant, but allows temporal embeddings to be concatenated with the input, both discrete \cite{gberta_2021_ICML,arnab2021vivit} and continuous \cite{vanhoorick2022revealing,fourier_encodings}. \paragraph{Self-supervised representation learning} Finding self-supervised representations for temporal data has been the subject of a large amount of work in domains such as trajectories \cite{liu2022cstrm}, video \cite{pan2021videomoco,han2019video,qian2021spatiotemporal}, or audio processing \cite{jansen2018unsupervised,saeed2021contrastive,fonseca2021unsupervised}. Most methods, however, represent segments as simple points in a Euclidean space. Structured representations for temporal data \cite{suris2021hyperfuture,park2022probabilistic} allow the latent space to follow certain inductive biases, like our framework's idea that segments compose trajectories. We model segments as either normal distributions or box embeddings \cite{vilnis2018probabilistic}. The latter have been used to represent hierarchical relationships in domains such as text \cite{patel2020representing,onoe2021modeling}, knowledge bases \cite{abboud2020boxe}, or images \cite{rau2020predicting}.
We use them to represent temporal information. In recent work, Park et al. \cite{park2022probabilistic} also model segments using normal distributions, where trajectories are weighted sums of the segment representations. \begin{ack} We thank Arjun Mani and Mia Chiquier for helpful feedback. This research is based on work partially supported by the NSF NRI Award \#2132519 and the DARPA MCS program. DS is supported by the Microsoft PhD Fellowship. \end{ack} { \clearpage \small \bibliographystyle{ieee_fullname}
\section{Introduction} Over the past decade, synthetic biologists have made great strides in the engineering of biological systems. One vision that has served as something of a roadmap is the model enunciated by Endy~\cite{Endy05}, a model comprising four levels of increasing abstraction: DNA, Parts, Devices, and Systems. Consistent with this model, the field has developed a plethora of basic parts such as promoters, terminators, coding sequences, and functional RNAs, which can be combined into composite DNA sequences through a variety of assembly methods (e.g., ~\cite{shetty2011assembly,gibson2009enzymatic,Engler2008GoldenGate,Weber2011MoClo,kok2014rapid}) or low-cost nucleic acid synthesis~\cite{CarlsonCurve}. At higher levels of abstraction, the field has produced families of biological devices with a variety of sensing, communication, or computational functions (e.g.,~\cite{endy-pnas-2012, kiani2014crispr, nielsen2016genetic, gander2017digital, weinberg2019high, chen2020genetic, sexton2020multiplexing, kiwimagi21quantitative}), as well as methods for insulating devices from context (e.g.,~\cite{RiboJ12, mutalik2013BCDs, carr2017reducing}), and for characterizing and predicting their behavior (e.g.,~\cite{del2008modular, Salis09, beal2015replicon, davidsohn2015accurate, nielsen2016genetic, wang2019modeling, chen2020genetic, sexton2020multiplexing, kiwimagi21quantitative, beal2018quantification,beal2020robust,pine2016evaluation}). Despite this progress, significant challenges remain in the practice of synthetic biology at a systems level. Definitions for parts and devices are typically unavailable, incomplete, or inconsistent. Likewise, little information is generally provided regarding interfaces, functionality, or host context, and such information is rarely available in a tool-friendly format. 
This leads to significant difficulty in searching for appropriate parts or devices to use, in adapting parts and devices from their original context for use within a new project, and in predicting the behavior of even the most basic systems from information available about the components used to create them. These difficulties in turn lead to substantial time requirements and technical risk to achieve even modest engineering goals. Under these conditions, engineering success can certainly be achieved, as illustrated by the many billions of dollars of industrial impact from synthetic biology, but doing so is slow and costly. On the other hand, if engineering could be made simple and predictable, even just for systems comprising small numbers of devices, it would radically lower cost, reduce barriers to access, increase democratization, and unleash a wave of innovative applications of synthetic biology by small organizations tackling local problems. We contend that the primary barrier to achieving this vision is no longer biological, given the myriad advances that have been achieved. Rather, we argue that the problem is one of knowledge synthesis and organization: given the complexity of biological systems, no practitioner can be expected to even be aware of all of the relevant parts, methods, and models, let alone have the detailed expertise in all of them to use and combine them effectively in practice. How then can the advances and expertise dispersed across the synthetic biology community be marshaled in order to enable practitioners to effectively utilize them in their engineering projects? We propose that in order to meet this challenge, synthetic biology must be reoriented from its current sequence-centric approach to instead center on function.
Specifically, a Functional Synthetic Biology approach focuses on: \begin{itemize} \item descriptions of behavior over descriptions of structure, \item predictability and flexibility over optimization of function, and \item risk reduction over novelty. \end{itemize} A focus on behavior means that a biological component's structure (i.e., genetic sequence) can be changed and improved without damaging the functionality of a system that includes it. A focus on predictability means identifying classes of changes that are unlikely to damage functionality, and a focus on flexibility means valuing the breadth of such classes of changes when developing biological components. A focus on risk reduction means recognizing that there are many failure modes that can impair the functionality of a biological system, and that there is value in capturing knowledge about failure modes (and how to avoid them) in the form of automation tools and machine-readable component specifications. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{figures/abstraction.pdf} \caption{Abstraction layers in a function-centric view: design focuses on biological functions, which are abstracted to produce devices by adding a description of an interface, predicted range of behavior, and operational context. Devices may then be combined to produce a biological system, whose function in turn may be abstracted to create a new device at a higher level of abstraction. Devices may have many different options for how they can be implemented with parts, where parts are sequences with a defined interface for combining them to build composite sequences, which also may in turn be abstracted into higher-level parts.} \label{f:function} \end{figure*} Together, these approaches will allow the community of synthetic biologists to more effectively share successes and avoid failures.
Below, we will expand on each of these three key goals---describing behavior, predictability and flexibility, and risk reduction---and lay out a roadmap for achieving these goals over the course of the coming decade. \section{Function-Centric Design Descriptions} Current practices and representations in synthetic biology are mainly sequence-centric, meaning that the first-class objects of a design are sequences---typically DNA, though sometimes RNA or amino acids. Information about function is then annotated as features or metadata used to describe the sequence. Many design representations (e.g., FASTA, GenBank, GFF) reinforce this notion by providing no way of expressing functional information without reference to a fully-specified sequence. In practice, however, synthetic biologists tend to think about designs more in terms of function. Consider, for example, green fluorescent protein (GFP) as a reporter of transcriptional activity (left half of Figure~\ref{f:function}). A typical user of GFP does not think about the actual sequences and would be unlikely even to recognize either the nucleic acid or amino acid sequences for GFP if they saw them. Instead, they are likely to think about a functional relationship in which a coding sequence (CDS) produces a GFP protein, which in turn fluoresces with a predictable excitation and emission behavior. This is a fully coherent notion of biological function, independent of sequence, and is the typical subject of discussions and diagrams regarding design. Also associated with the notion of function is a concept of an interface to that function (e.g., embedding the GFP in a transcriptional unit whose activity is to be reported) and of the type of environments where the function is expected to behave as predicted (e.g., aerobic environments with relatively neutral pH across a broad range of cell types).
For Functional Synthetic Biology, then, we will define a device as a function that has been associated with a definition of an interface and a context for its operation. This information is sufficient to combine a device together with others to implement a biological system (e.g., to sense some condition and report it via green fluorescence). Finally, the function of that system may be itself described and possibly abstracted into a higher-level device by identifying its interface and context for operation. To actually implement a device or system, of course, a specific sequence must be identified. Typically there are many sequences that may suffice to implement any given device. GFP, for example, at the time of this writing has 160 different amino acid sequences on FPbase~\cite{lambert2019fpbase}. Reverse translation of any of these amino acid sequences further produces myriad distinct nucleic acid sequences, all of which encode the same amino acid sequence. These alternatives are not identical, of course, and likely some will not be functional at all. Still, a great many are expected to be sufficiently good implementations of a Green Fluorescence Reporter device. As with devices, we often need to combine sequences together with other sequences in order to form the composites that implement systems, whether through assembly protocols or by directly synthesizing the composite sequence. The right half of Figure~\ref{f:function} shows this parallel abstraction hierarchy, in which a part is defined as a sequence that has been associated with a definition of a build interface, parts combine to form composites, and those composites may in turn be abstracted into higher-level parts. 
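One way the device/part distinction above could be made machine-readable is a small schema in which a device records a function, an interface, and an operational context, while a part records a sequence and its build interface, optionally pointing at the device it implements. The following sketch is purely illustrative; the field names and values are ours, not a proposed standard:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Device:
    # A function associated with an interface and a context for its operation.
    function: str         # e.g. "green fluorescence reporter"
    interface: List[str]  # e.g. ["transcriptional input"]
    context: List[str]    # e.g. ["aerobic", "neutral pH"]

@dataclass
class Part:
    # A sequence with a build interface; one of many possible implementations
    # of a device (160+ GFP amino acid sequences exist on FPbase alone).
    sequence: str
    build_interface: str  # e.g. an assembly format
    implements: Optional[Device] = None

# Hypothetical instances: the sequence placeholder and format are illustrative.
gfp_reporter = Device(
    function="green fluorescence reporter",
    interface=["transcriptional input"],
    context=["aerobic", "neutral pH"],
)
gfp_part = Part(sequence="ATG...", build_interface="MoClo", implements=gfp_reporter)
```

The point of such a schema is that many distinct `Part` records can implement the same `Device`, so a system design expressed over devices survives swapping one implementing sequence for another.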
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/devices.pdf} \caption{Other examples of functional devices: A) an enzyme that catalyzes the transformation of one small molecule chemical into another, B) excision of a drop-out sequence using Cre recombinase targeting loxP sites, C) constitutive expression of non-coding RNA using a U6 promoter, and D) CRISPR-based gene editing with constitutive expression of Cas9 and sgRNA.} \label{f:devices} \end{figure*} This approach to defining functional devices can be applied to any well-described biological function: Figure~\ref{f:devices} shows other examples of functional devices, some of which would necessarily need to be implemented using multiple parts. Indeed, this separated hierarchy is {\it de facto} what is already in informal use throughout much of the community. From whiteboards to journal articles, synthetic biologists typically communicate in terms of function. Sequences, meanwhile, are often selected arbitrarily or inherited from prior projects via a shared laboratory freezer, and in publications are typically relegated to supplementary information or even---problematically---entirely omitted~\cite{peccoud2011essential}. Functional Synthetic Biology proposes that we recognize this distinction and the separation between sequence and function, then explore its consequences in order to improve our representations, tooling, engineering approaches, and collaboration strategies. \section{Predictability and Flexibility} The importance of predictability and flexibility as design requirements is heightened when a device's functional definition is divorced from its realization as a composite of discrete parts. Implementation of a function, whether at the level of a single device or a composite system, is a matter of identifying an appropriate part or sequence, and typically there are many that could potentially apply. 
With sequence-centric engineering, characterization of a part has tended to ask questions of the form ``How does this part behave?'' Given a functional definition of a device, however, a fundamentally different question needs to be answered: ``Is this part's behavior good enough to be an implementation for that device?'' This question cannot be answered without considering what ``good enough'' means in terms of the function of a device. For example, a Green Fluorescence Reporter device turns transcriptional activity into a strong fluorescent signal. How strong is strong enough? The answer is determined by the signal strength required to discriminate various levels of expression from background with a given class of instrument (e.g., plate reader or flow cytometer), which in turn determines the flexibility of the specifications for the device instantiation, i.e., what types and degrees of imperfections can be tolerated. Device context must also be recorded in the specification, as any GFP coding sequence will fail to produce strong fluorescence if it is placed in the wrong operational context (e.g., in anaerobic conditions, with an incompatible 5' UTR, or in an incompatible host). Useful device specifications thus require at least some predictions about device behavior. There is an inherent tension and interplay between predictability and flexibility. Any device can be made impossible to realize by making its specification require too much precision and/or too great an operational range, e.g., looking for a Green Fluorescence Reporter that always produces exactly the same number of molecules in wildly different cell types. 
Likewise, a device can be rendered inoperable if the constraints placed on values are too lax (e.g., accepting a red fluorescent protein as a legitimate implementation of a Green Fluorescence Reporter) or if the operational range applied is too tightly constrained (e.g., predicting that the device will work only in the exact construct where it was characterized). Success in designing and building engineered systems depends on finding a middle ground where practitioners are readily able both to find devices and parts to realize a system and also to predict the outcome that system will generate with a satisfactory degree of reliability and accuracy. There is likewise a tension between these goals and the desire to obtain the best performance from a device. For example, in selecting a part to realize a Green Fluorescence Reporter device, it may be desirable to select a less bright GFP variant that is better understood (thus more predictable), known to operate in a wider range of organisms, or available in a preferred assembly format. Ultimately, this is a matter of trust and focus: the more that a practitioner can trust the predictability of the less interesting parts of their system (e.g., the Green Fluorescence Reporter), the more that they can focus on their primary goals (e.g., improvement of a novel metabolite sensor whose activity is being reported). Navigating the relationship between flexibility, predictability, and optimization of function is in general a challenging and unresolved problem in all engineering fields, and particularly so for biology. Nevertheless, there are a number of bioengineering tools, such as GFP, that are in common use precisely because they are reasonably effective at producing reasonably predictable behavior across a fairly flexible range of applications and operating conditions. Complementarily, there are operating conditions that are known to be unworkable, such as using GFP in an anaerobic environment. 
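As a toy illustration of what a machine-checkable ``good enough'' predicate might look like, the sketch below encodes a device specification as required ranges and checks whether a characterized part satisfies it. All field names, part names, and numeric thresholds here are hypothetical, invented purely for illustration; they are not drawn from any standard or real characterization data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceSpec:
    """Hypothetical functional requirements for a Green Fluorescence
    Reporter device (illustrative fields only)."""
    min_brightness: float   # minimum acceptable signal, arbitrary units
    temp_range_c: tuple     # required operating range (low, high), deg C
    hosts: frozenset        # organisms the device must operate in

@dataclass(frozen=True)
class PartCharacterization:
    """Measured behavior of a candidate part (e.g., one GFP variant)."""
    brightness: float
    temp_range_c: tuple
    hosts: frozenset

def good_enough(part: PartCharacterization, spec: DeviceSpec) -> bool:
    """Answer: is this part's behavior good enough to implement the device?
    The part must meet the brightness floor, cover the required
    temperature range, and operate in every required host."""
    lo, hi = spec.temp_range_c
    return (part.brightness >= spec.min_brightness
            and part.temp_range_c[0] <= lo and part.temp_range_c[1] >= hi
            and spec.hosts <= part.hosts)

spec = DeviceSpec(min_brightness=50.0, temp_range_c=(30.0, 37.0),
                  hosts=frozenset({"E. coli DH5alpha"}))
bright_gfp = PartCharacterization(brightness=80.0, temp_range_c=(25.0, 40.0),
                                  hosts=frozenset({"E. coli DH5alpha"}))
dim_gfp = PartCharacterization(brightness=10.0, temp_range_c=(25.0, 40.0),
                               hosts=frozenset({"E. coli DH5alpha"}))
```

In this sketch, a too-strict specification (an unreachable brightness floor or temperature span) simply matches no parts, while a too-lax one admits parts that will disappoint in practice, mirroring the tension discussed above.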
Practitioners who have applied these tools have acquired a great deal of pragmatic knowledge about their range of flexibility and the conditions under which their behavior can be predicted. Some of this accumulated knowledge has found its way into scientific publications, but much of it is still communicated only through informal channels and by word of mouth. Functional Synthetic Biology proposes that we should begin capturing such knowledge in device specifications. Prior work on predictive modeling (e.g.,~\cite{Salis09,davidsohn2015accurate,nielsen2016genetic}) and reproducibility (e.g.,~\cite{beal2018quantification,beal2020robust,pine2016evaluation}) can provide initial information for some common devices. Similar information can be captured for other devices by collecting it from experts and the literature or by conducting similar studies. Such knowledge can then be applied, if desired, in optimization processes, particularly multi-objective optimization that can assign value to flexibility (e.g.,~\cite{boada2016multi}). Flexibility and predictability of devices may also be improved by applying methods for insulating devices from context (e.g.,~\cite{RiboJ12, mutalik2013BCDs, carr2017reducing}). \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figures/workflow.pdf} \caption{Decoupling function and sequence enables the development of a collaborative ecosystem for distributing, using, and improving biological devices. In this vision, (1) an expert curates a device and a set of parts that can implement the device, (2) then publishes these in a collection where they can be discovered by other synthetic biologists, (3) who put the device to use in various contexts. Those practitioners may (4) contribute back ``patches'' to improve the device, e.g., improved characterization or context tolerance information, (5) which are then reviewed by either the original expert or others helping to maintain the collection. 
The experts may also (6) improve the collection in other ways, e.g., by improving the design of the parts that implement the device. All of these improvements are then (7) made available when the collection is republished as a new version, and (8) the device users receive the benefits of these improvements by updating to the newest version.} \label{f:workflow} \end{figure*} In sum, the focus on the functionality of a device rather than the sequence of a part reinforces the need for better documentation that can usefully describe the behavior, context, and constraints of a genetic object in the same way that other engineering disciplines seek to describe the specifications of a component. It also drives a need for characterization focused on improving the understanding of composability, flexibility, predictability, and robustness. Explicitly studying and recording such information in a tool-friendly manner will allow the information to be shared and redistributed more broadly, and will allow practitioners to more readily benefit from knowledge and advances produced by others in the field. \section{Collaborative Reduction of Technical Risk} Decoupling functional specifications and part sequences also allows new approaches to collaboration and information sharing that can reduce the risk of failure in engineering synthetic biology systems. Here the key idea is that changes in parts do not necessarily entail changes in devices or vice versa. When a part is improved, the new version that results may still conform to the same device specifications that the old version satisfied, and thus be able to be substituted as an implementation for that device. For example, a GFP part might be codon optimized to produce the protein more efficiently, or have a restriction site eliminated to make it compatible with a wider range of assembly methods. 
If the new version of the part still satisfies the Green Fluorescence Reporter device specification, then it can be safely predicted that systems using the old version can be upgraded to use the new version. Likewise, a device may be improved with models that better predict its interactions with other devices, or with better documentation of its expected operational range, while having no impact on the sequence of parts that implement the device. This decoupling offers a new means of capturing expert knowledge, in the form of curated collections of devices and the recommended parts to implement them. For example, a typical synthetic biologist should never need to consider the 160 different variations of GFP in FPbase, let alone other green proteins like ZsGreen or mNeonGreen. Instead, they should be able to determine their intended operating range (e.g., {\it E. coli} DH5$\alpha$ in M9 media at 30 $^{\circ}$C to 37 $^{\circ}$C), select a Green Fluorescence Reporter device that operates in that range, and then implement it with any of the parts that are currently recommended by experts as a good implementation for that device. If better parts become available or a problem is detected with one of the current parts, then all that needs to be changed is the recommendation. As long as the functional characteristics of the newly recommended part can be assessed to remain within the range of predictability that has been established for the Green Fluorescence Reporter device, any system using the device should be able to be safely updated to use the new part. 
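The recommendation indirection described here can be sketched as a small data structure: systems reference a device by its function, and a curated collection resolves that function to the currently recommended part, so an upgrade is a single curated change. The collection format and all part names below are purely hypothetical.

```python
# Hypothetical curated collection: maps a functional device name to the
# part currently recommended to implement it. Part names are invented.
collection = {
    "GreenFluorescenceReporter": {
        "recommended_part": "gfp_variant_v1",
        "operating_range": {"host": "E. coli DH5alpha",
                            "temp_c": (30, 37)},
    },
}

def implement(element: str) -> str:
    """Resolve a functional device to its recommended part; elements
    not in the collection are assumed to already be concrete parts."""
    entry = collection.get(element)
    return entry["recommended_part"] if entry else element

# A system is designed in terms of function, not sequence.
system = ["promoter_part", "GreenFluorescenceReporter", "terminator_part"]
build_v1 = [implement(x) for x in system]

# A curator later swaps in an improved part; every system that resolves
# through the collection picks up the change on its next build.
collection["GreenFluorescenceReporter"]["recommended_part"] = "gfp_variant_v2"
build_v2 = [implement(x) for x in system]
```

The design choice being illustrated is the level of indirection itself: because the system names the function rather than the sequence, updating the recommendation propagates to all users without editing any individual design.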
Indeed, such an upgrade recommendation can already be found in the scientific literature for red fluorescence, when the developers of mCherry suggested phasing out its predecessor, mRFP1, given mCherry's improved ``higher extinction coefficient \dots, tolerance of N-terminal fusions and photostability.''~\cite{shaner2004improved} Assessing the potential impact from a change to a part or device, however, is often not clear-cut or straightforward. If a model is not sufficiently accurate, there is a risk that changes assessed as safe will instead produce unwanted side-effects or even result in a system-wide degradation. This risk from adopting a recommendation, however, must be balanced against the risk and costs associated with ignoring a recommendation from experts who are likely to be more familiar with the specialized matter at hand and better able to sort through the myriad possible implementations of the device. Ultimately, then, this is a question of building trust around changes to complex systems with implications that are difficult to predict. The software engineering community has faced a very similar problem of managing technical risk and building trust around changes with difficult-to-predict systems implications. That community has addressed its analogous challenge with a now-mature ecosystem of tools and collaborative processes (often collected under the name ``agile software development'') for managing the development of complex systems. Foundational to these processes are distributed version control systems (e.g., git) that afford communities of experts a convenient way to organize, share, and maintain packages of information. These are already used in the bioinformatic community to manage knowledge collections such as the Systems Biology Ontology (SBO), Sequence Ontology (SO), and Gene Ontology (GO). 
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/roadmap.pdf} \caption{Suggested roadmap of goals, from immediate to long-term, for achieving Functional Synthetic Biology.} \label{t:roadmap} \end{figure*} Functional Synthetic Biology can take advantage of these same mechanisms to curate collections of devices, collaboratively maintain them, and reduce technical risk related to updating these collections. Figure~\ref{f:workflow} illustrates the type of interactions that can be enabled. In this vision, an expert curates a device such as a Green Fluorescence Reporter, along with options for implementing the device using one of several parts containing a coding sequence for GFP. This device is aggregated with others into a fluorescent reporters collection, which can be published as a package in a catalog where it can be shared with other synthetic biologists. In the course of applying the device in various contexts, synthetic biologists will undoubtedly identify ways to improve the collection. Those improvements might include expanding or sharpening characterization, augmenting context tolerance information, clarifying documentation, or enhancing the design of the parts that implement the device. Distributed version control can make it easy for users to contribute their improvements back as suggested changes to the collection. Agile software tooling can support the curation of such contributions, making it simple for the maintainers of a collection to review and discuss a contribution, request changes where needed, and apply automation-assisted validation checks for quality control. Once incorporated, improvements can be taken up in a new version of the collection that is made available to the community when the collection is republished. Users can then take advantage of the benefits these curated improvements provide simply by updating their copy of the collection to the latest version.
When implemented well, the transparency and checking in such processes can help build trust within a community of users and maintainers. The mechanisms of distributed version control also help to sustain an open marketplace for competition between packages of information and their attendant processes and maintainers. Just as in the software world, Functional Synthetic Biology packages will compete not just on the basis of technical efficacy, but also their trustworthiness, reliability, ease of use, and responsiveness to user needs. \section{A Roadmap to Functional Synthetic Biology} In this section, we propose a multi-year roadmap for realizing the vision for Functional Synthetic Biology set out above. This roadmap foregrounds the vision's most consequential aspects and specifies the timeframes within which we predict that these aspects can reasonably be achieved (Figure~\ref{t:roadmap}). Some aspects of the vision that the roadmap accounts for are well-defined and can be achieved with technologies and techniques that are available today. They simply need to be adapted and applied. For example, an integrated representation of the part and device hierarchies from Figure~\ref{f:function} can already be implemented using the SBOL~2~\cite{roehner2016sharing} or SBOL~3~\cite{mclaughlin2020synthetic} standards. Alternatively, devices could also be represented with a modeling language such as SBML~\cite{SBML}, then associated with parts represented in GenBank or GFF. In either case, a modern version control system such as git can be employed to manage and disseminate the genetic design files generated. Another advantage of such a version control system is that it supports mature agile software workflows (e.g., trunk-based development) and tooling (e.g., continuous integration) that foster collaboration and enable quality control automation. 
Short-term targets for development include basic parts such as constitutive promoters, terminators, sensors, and reporters, as well as regulatory insulation (e.g.,~\cite{RiboJ12, mutalik2013BCDs, carr2017reducing}) and families of transcriptional computing devices (e.g.,~\cite{endy-pnas-2012, kiani2014crispr, nielsen2016genetic, gander2017digital, chen2020genetic}). Developing a collection of design packages covering these elements, along with function-centric design and build tools, should enable routine engineering of 2--3 device sense/compute/actuate circuits in model microbes and cell-free systems. The core packages that result from these initial engineering initiatives can be made available through a central repository serving as a rendezvous point for early adopters to discover and make use of these packages. As this Functional Synthetic Biology ecosystem matures and expands, practitioners will undoubtedly enhance the performance, flexibility, and reliability of the parts, will add new packages focused on their own areas of interest, and will expand the functionality of existing packages by making incremental upgrades to them. The complexity of systems that can be routinely engineered will increase, as will levels of automation. The ability to support more effective sharing of data will then facilitate better understanding and characterization of operational context, expansion into non-model microbes, and the creation of more context-agnostic devices that can operate effectively in multiple types of organisms. Over the longer term, we anticipate a widespread adoption of Functional Synthetic Biology, driven by the standardization of complex biological system engineering, the development of extremely flexible devices, and extension even to effective operation {\it in vivo} in complex multicellular organisms. Following this roadmap will also require dealing with non-technical challenges regarding incentives.
For example, an initial investment in time and resources is needed from experienced practitioners before the community can benefit, and the people investing will not necessarily be the ones who most benefit. Given the struggle to even obtain DNA sequence information from scientific publications~\cite{peccoud2011essential}, we can expect that current academic incentives will not be sufficient on their own to motivate widespread investment in the curation and publication of functional information. There will also be questions around governance and degree of centralization for the evolving collection of functional packages. Finally, there is an important collective action problem to be solved if the community wishes resources to be available under free and open licenses, as opposed to needing to pay for access to this information. \section{Conclusions} Many practitioners of synthetic biology recognize the impact that standardizing systems-level engineering could have on the development of biological systems and are eager to exploit the potential it has to democratize access to complex biotechnology and effect transformative change in a broad range of sectors and application areas such as healthcare, manufacturing, energy, agriculture, and environmental sustainability. To date, however, realizing that potential has proven elusive, in large part because the information required for effectively engineering with biological parts and devices has been inaccessible, insufficient or incomplete. The Functional Synthetic Biology framework presented here lays the groundwork for a shift in orientation from the focus on sequences that defines the field today to a focus on functionality that will transform the way synthetic biology is practiced in the future. 
This represents a critical step forward along the path to achieving systems level engineering, with improvements in data sharing leading to increases in flexibility and predictability, which in turn open up opportunities for enhanced collaboration and dissemination. It is our hope that this paper will encourage others to build on the framework presented, to join us on our journey to transform the practice of synthetic biology using the roadmap we have laid out as a guide, and to help mobilize the resources that will be required to bring that journey to a successful conclusion. \section{Competing interests} No competing interest is declared. \section{Material Availability Statement} Not applicable. \section{Data Availability Statement} Not applicable. \section{Author contributions statement} All authors contributed to conceptualization, writing the original draft, and review and editing. \section{Funding} This work was supported in part by the following funding sources: J.Be. was supported by Air Force Research Laboratory (AFRL) and DARPA contract FA8750-17-C-0184. J.Bo. was supported by a Horizon Postdoctoral Fellowship from Concordia University. M.H. was supported by funding from the ERC Advanced Grant LoopingDNA (no. 883684). N.S. was supported by funding from the BBSRC under award BB/M011178/1. G.V. was supported by a scholarship from the School of Computing, Newcastle University. A.V. was funded by Grant MINECO/AEI, EU DPI2017-82896-C2-1-R and MCIN/AEI/10.13039/501100011033 grant number PID2020-117271RB-C21. This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations. Views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
\section{Discussion and Conclusion} In this work, we propose approaches that incorporate feasible grasping constraints into real-time localization of the 6D pose of an in-hand suture needle. We define a novel state space for the 6D pose of a needle. This state space allows efficient sampling from the feasible grasping manifold, which requires no feasibility checks with external software. We incorporate the proposed state space and feasible grasping constraints into Bayesian filters for real-time needle pose tracking and demonstrate that our constrained approaches outperform previous ones. Our methods focus on tracking the pose of the needle relative to the gripper holding it and therefore rely more heavily on accurate end-effector poses. However, a surgical manipulator can be tracked accurately by previous methods~\cite{richter2021robotic}, and we show in our experiments that accurately tracking the relative poses is more crucial in successfully automating suture needle manipulation tasks. \section{Introduction} Automating surgical procedures such as suturing has drawn increased interest within the robotics community during the past two decades~\cite{yip2019robot}. The advantage of automation is that it relieves surgeons from time-consuming, tedious, and challenging tasks that often emerge in Minimally Invasive Surgeries~\cite{garcia1998manual,hubens2003performance,corcione2005advantages}.
One of the key components of achieving autonomous procedures is the accurate localization of surgical instruments in the surgical scene~\cite{li2020super,lu2021super,richter2021robotic}. This localization ability serves as the foundation for automating various surgical tasks in previous work, including needle regrasping~\cite{chiu2021bimanual,wilcox2022learning}, knot tying~\cite{chow2013improved,lu2018vision}, and blood suction~\cite{richter2021autonomous}. A surgical scene often contains multiple surgical instruments, and previous studies localize them separately without considering their physical interactions~\cite{iyer2013single,ferro2017vision,sen2016automating,chiu2021bimanual,wilcox2022learning}. This can lead to unrealistic environmental reconstruction when combining tracking results of different tools. For example, if a needle is held by a surgical manipulator, tracking them separately can result in the needle being in a non-feasible grasp configuration (e.g., in-collision or floating, as shown in Fig. \ref{fig:cover_image}). This can be a dangerous situation because dropping needles can result in damage to surrounding tissue and additional trauma with repetitive needle pick-up. Therefore, in this work, we focus on considering the physical interactions between a suture needle and a surgical manipulator to ensure feasible results in real-time tracking of an in-hand suture needle. Real-time localization is necessary since, in practice, grasping a needle causes it to re-orient in the gripper, and further re-orientation or slippage can happen once the needle interacts with the environment. \begin{figure}[t!] \centering \vspace{2mm} \includegraphics[width=\linewidth]{images/cover_image.pdf} \caption{Live image of a daVinci robot instrument grasping a suture needle, top and side views of tool reconstruction from unconstrained and our constrained needle tracking results. The scene reconstruction of our constrained method is always feasible. 
This feasibility is not ensured by unconstrained approaches, even when the top view of tool reconstruction aligns well with the live image.} \label{fig:cover_image} \vspace{-3mm} \end{figure} \subsection{Related Work} Current literature on suture needle localization mostly focuses on the features of a needle extracted from camera data and assumes the needle can be anywhere in the space. Several methods reconstruct the pose of a static needle by observing detected markers or learned segmentation~\cite{iyer2013single,d2018automated,wilcox2022learning}. Others have considered uncertainty in the features and motions of a needle and use Bayesian filters with different observation models to track its pose~\cite{kurose2013preliminary,ferro2017vision,sen2016automating}. However, these methods do not consider the physical interactions between the needle and the environment and thus do not guarantee that the needle pose is feasible. Some studies in suture needle localization consider the physical interactions between a surgical manipulator and a needle when the manipulator holds the needle. One way is to perform tracking and assume that the configuration between the needle and the manipulator tip is known and remains unchanged over time~\cite{ozguner2018three}. Then the robot Jacobian and joint-sensor readings are used to estimate the motions of the needle. However, getting to this known state is nontrivial, as grasping a needle itself is a non-deterministic action, and grasp pose is situation-dependent, such as during regrasping~\cite{chiu2021bimanual}. Thus, the work in \cite{chiu2021markerless} does not assume a known configuration of the needle held by an end-effector, and its motions are estimated by a tool-tracking method~\cite{richter2021robotic} that tracks the pose of the end-effector. These approaches take into account that the needle should move concurrently with the gripper when held by it. 
Nonetheless, they do not ensure that the suture needle pose lies inside the feasible grasping manifold of the gripper. Tracking the poses of an in-hand needle is a constrained pose tracking problem, where the needle should always lie inside the feasible grasping manifold of the gripper. However, there is no unified approach to define a feasible grasping manifold since grippers and grasped objects can have arbitrary shapes, making this task highly nonlinear. To incorporate constraints into nonlinear tracking problems, previous work follows two approaches: acceptance/rejection sampling~\cite{lang2007bayesian} and optimization~\cite{kong1994sequential,zhao2014constrained}. Acceptance/rejection methods are known to reduce the diversity of the tracked pose~\cite{zhao2014constrained} and require an excessive number of feasibility checks, making them undesirable for real-time tracking~\cite{hu2020particle}. On the other hand, optimization methods project the estimated pose onto a feasible manifold. However, they require the manifold to be defined as equality or inequality constraints~\cite{zhao2014constrained,hu2020particle}, and describing the feasible grasping manifold in such a way would be highly nontrivial. \subsection{Contributions} In this work, we achieve state-of-the-art performance for real-time suture needle tracking in robotic surgery by incorporating grasping constraints. To this end, we present the following novel contributions: \begin{enumerate} \item the first approach to probabilistically track a suture needle in real-time with grasping constraints, \item a state-space to describe a grasped suture needle for efficient sampling on the feasible grasping manifold, \item and a comparison of Bayesian filter approaches that incorporate the grasping constraints. \end{enumerate} The proposed methods are evaluated in both simulation and real-world environments.
In simulation environments, we demonstrate that our proposed methods outperform other unconstrained/constrained tracking approaches. Moreover, we evaluate different tracking methods on the suture needle regrasping task~\cite{chiu2021bimanual,wilcox2022learning}. The results indicate that incorporating grasping constraints makes the regrasping policy more robust to noise in detections. In real-world environments, we use marker-less feature detections from a Deep Neural Network (DNN) as needle observations and reconstruct the tracked tool poses from ex-vivo images. An example is shown in Fig. \ref{fig:cover_image}. The results demonstrate that our constrained approach ensures a feasible estimated pose, whereas an unconstrained method can lead to unrealistic reconstructions. \section{Methods} \subsection{Problem Formulation} We aim to probabilistically track the pose of an in-hand suture needle, $\mathbf{s}_t$, from a sequence of observations, $\mathbf{o}_{0:t}$, which can be formulated as: \begin{equation} \begin{aligned} \textrm{Track} \quad & p_{t|t}(\mathbf{s}_t) \coloneqq p(\mathbf{s}_t | \mathbf{a}_{0:t-1}, \mathbf{o}_{0:t}) \\ \textrm{s.t.} \quad & \mathbf{s}_t \in \mathcal{F}_t \\ \textrm{where} \quad & \mathbf{s}_t = f(\mathbf{s}_{t-1}, \mathbf{a}_{t-1}, \mathbf{w}_{t-1}) \sim p_f(\cdot | \mathbf{s}_{t-1}, \mathbf{a}_{t-1}) \\ & \mathbf{o}_t = h(\mathbf{s}_t, \mathbf{v}_t) \sim p_h(\cdot | \mathbf{s}_t) \label{equ:cBF_formulation} \end{aligned} \end{equation} where $\mathcal{F}_t$ is the feasible grasping space, $f(\cdot)$ and $h(\cdot)$ are the motion and observation models with noise $\mathbf{w}_{t-1}$ and $\mathbf{v}_t$ respectively, and $\mathbf{a}_{t-1}$ is the action applied to the suture needle. In our task, $\mathcal{F}_t$ in (\ref{equ:cBF_formulation}) is the feasible grasping manifold of the surgical manipulator that is holding a suture needle at time step $t$.
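One standard way to realize a constrained filter of the form (\ref{equ:cBF_formulation}) is a bootstrap particle filter in which every propagated particle is projected back into the feasible set. The sketch below is a generic, simplified illustration on a scalar toy state constrained to $[0, 1]$; the motion and observation densities, noise levels, and the clip-style projection are placeholders, not the models used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, action, obs, motion, likelihood, project):
    """One predict/update step of a bootstrap particle filter.
    `project` stands in for enforcing s_t in F_t; here it is a clip."""
    # Predict: propagate each particle through the stochastic motion model,
    # then project it onto the feasible set.
    particles = np.array([project(motion(p, action)) for p in particles])
    # Update: reweight particles by the observation likelihood p_h(o | s).
    weights = weights * np.array([likelihood(obs, p) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy demo: a scalar state constrained to [0, 1], with matched noise models.
motion = lambda s, a: s + a + rng.normal(0.0, 0.01)
likelihood = lambda o, s: np.exp(-0.5 * ((o - s) / 0.05) ** 2)
project = lambda s: float(np.clip(s, 0.0, 1.0))

particles = rng.uniform(0.0, 1.0, 200)
weights = np.full(200, 1.0 / 200)
truth = 0.2
for _ in range(10):
    truth = project(truth + 0.05)
    obs = truth + rng.normal(0.0, 0.05)
    particles, weights = pf_step(particles, weights, 0.05, obs,
                                 motion, likelihood, project)
estimate = float(np.sum(weights * particles))  # posterior mean near truth
```

The projection step used here is only viable when the feasible set admits a cheap projection; the reparameterization introduced below is precisely what makes such a cheap, clip-style projection possible for the in-hand needle.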
Usually, a grasping manifold should consider two feasibility constraints: geometric and dynamic constraints. Geometric constraints include~\cite{hasson2019learning}: \begin{enumerate} \item The object’s surface should be in contact with the gripper’s surface, i.e., $Surface(\mathbf{s}_t) \cap Surface(\mathbf{e}_t) \neq \emptyset$, where $\mathbf{e}_t$ is the state of the gripper at time step $t$. \item The object should not penetrate the gripper, i.e., $Interior(\mathbf{s}_t) \cap Interior(\mathbf{e}_t) = \emptyset$. \end{enumerate} Dynamic constraints require that, if no external force other than gravity acts on the object and the gripper, the linear and angular velocities of the object relative to the gripper are $\mathbf{0}$. Hence, the feasible grasping manifold $\mathcal{F}_t$ can be represented as \begin{alignat}{3} \quad & \mathcal{F}_t = \{ \mathbf{s}_t | && \mathbf{s}_t \in \mathcal{G}_t \cap \mathcal{D}_t\}, \label{equ:F_definition}\\ \textrm{where} \quad & \mathcal{G}_t = \{ \mathbf{s}_t | && Surface(\mathbf{s}_t) \cap Surface(\mathbf{e}_t) \neq \emptyset \text{ and } \notag\\ \quad & && Interior(\mathbf{s}_t) \cap Interior(\mathbf{e}_t) = \emptyset\}, \label{equ:G_definition}\\ \quad & \mathcal{D}_t = \{ \mathbf{s}_t | && \text{If } ExternalForce \setminus Gravity = \emptyset, \notag\\ \quad & && LinearVelocity(\mathbf{s}_t, \mathbf{e}_t) = \mathbf{0} \text{ and } \notag\\ \quad & && AngularVelocity(\mathbf{s}_t, \mathbf{e}_t) = \mathbf{0} \}. \label{equ:D_definition} \end{alignat} Due to the special design and properties of surgical manipulators and suture needles, we can simplify the requirements of defining the feasible grasping manifold for an in-hand needle.
More specifically, the dynamic constraints in (\ref{equ:D_definition}) are ignored because (1) a suture needle is very light compared to the gripper, and (2) grippers for surgical manipulators are designed to increase the friction between themselves and the objects they are holding (e.g., Needle Drivers). Hence, $\mathcal{F}_t = \mathcal{G}_t, \forall t \in [1, \dots, T]$. Since the robot end-effector or the grasped object can have a complex shape, the feasible grasping manifold $\mathcal{F}_t$ in (\ref{equ:F_definition}) is difficult to define for the object pose, $[\mathbf{b}_t^\top \ \mathbf{q}_t^\top]^\top$, where $\mathbf{b}_t \in \mathbb{R}^3$ is the position, and $\mathbf{q}_t \in \mathbb{R}^3$ is the axis-angle orientation. The object pose, which is described in a global frame such as the camera frame or in the ego-centric end-effector frame, is often directly used as the state in (\ref{equ:cBF_formulation})~\cite{kurose2013preliminary,sen2016automating,ferro2017vision,ozguner2018three,chiu2021markerless}. However, without a feasibility-checking library or a physical simulator, it is difficult to tell whether a pose state $\mathbf{s}^p_t = [\mathbf{b}_t^\top \ \mathbf{q}_t^\top]^\top$ belongs to $\mathcal{F}_t$. These libraries and simulators require the mesh files of the end-effectors and objects, and multiple proximity queries on geometric models can slow down tracking. Moreover, $\mathcal{F}_t$ for the pose state is generally irregular, so randomly sampling a pose state from $\mathcal{F}_t$ can take even more time. \subsection{Needle Pose Reparameterization} Instead of defining the state as the needle pose in the camera frame or in the end-effector frame, we propose a \textit{\textbf{reparameterization trick}} that defines a new set of parameters, $(\alpha, w, u, v)$, to describe the pose of an in-hand suture needle in the end-effector frame. 
First, we introduce their \textit{intermediate} parameters, $(\alpha, d, \theta, \phi)$, for the pose, which were originally defined in our previous work~\cite{chiu2021bimanual}. This set of parameters provides a more intuitive understanding of how to describe the pose of a needle held by a gripper. The first parameter, $\alpha \in [\frac{1}{2}\pi, \frac{3}{2}\pi] \subseteq \mathbb{R}$, indicates which point on the needle is grasped, and the other three parameters, $d \in [d_{min}, d_{max}] \subseteq \mathbb{R}$, $\theta \in [\theta_{min}, \theta_{max}] \subseteq \mathbb{R}$, and $\phi \in [\phi_{min}, \phi_{max}] \subseteq \mathbb{R}$, describe the position of the end-effector relative to the grasped point of the needle in the spherical coordinate system. Although the $(\alpha, d, \theta, \phi)$-parameterization is intuitive, independently sampling $d, \theta$, and $\phi$ can lead to high biases when transforming from spherical coordinates to Cartesian space~\cite{weisstein2002sphere}, which is where the suture needle pose is defined. Thus, the following change of variables is applied~\cite{cundy1989sphere,harman2010decompositional}: \begin{equation} w = d^3, \quad u = \frac{\theta}{2\pi}, \quad v = \frac{1}{2}(\cos\phi + 1), \label{equ:wuv} \end{equation} where $w \in [d_{min}^3, d_{max}^3] \subseteq \mathbb{R}$, $u \in [\frac{1}{2\pi}\theta_{min}, \frac{1}{2\pi}\theta_{max}] \subseteq \mathbb{R}$, and $v \in [\frac{1}{2}(\cos\phi_{max}+1), \frac{1}{2}(\cos\phi_{min}+1)] \subseteq \mathbb{R}$. Then a reparameterized state $\mathbf{s}^r_t$ can be represented as $\mathbf{s}^r_t = [\alpha_t\ w_t\ u_t\ v_t]^\top \in \mathbb{R}^4$. Reparameterizing the state of a suture needle as $(\alpha, w, u, v)$ has several benefits. First, the geometrically feasible space, $\mathcal{G}_t$ in (\ref{equ:G_definition}), has a concrete definition: \begin{align} \mathcal{G}_t = \mathcal{G} = \left\{ \right.
& \mathbf{s}^r = [\alpha\ \ w\ \ u\ \ v]^\top | \notag\\ & \alpha \in \left[\frac{\pi}{2}, \frac{3\pi}{2}\right], w \in \left[d_{min}^3, d_{max}^3\right], \notag\\ & u \in \left[\frac{\theta_{min}}{2\pi}, \frac{\theta_{max}}{2\pi}\right], \notag\\ & v \in \left[\frac{1}{2}(\cos\phi_{max}+1), \frac{1}{2}(\cos\phi_{min}+1)\right] \left. \right\}. \end{align} As long as a state $\mathbf{s}^r$ belongs to $\mathcal{G}$, the needle is in contact with the gripper's surface and not in collision with the end-effector. Hence, $\mathbf{s}^r$ is a geometrically feasible state. Second, the motion model, $f(\cdot)$ in (\ref{equ:cBF_formulation}), for an $(\alpha, w, u, v)$-state becomes very simple: \begin{align} \label{equ:s_r_motion_model} \mathbf{s}^r_{t+1} & = f(\mathbf{s}^r_t, \mathbf{a}^r_t, \mathbf{w}^r_t) = \textrm{clip}(\mathbf{s}^r_t + \mathbf{a}^r_t + \mathbf{w}^r_t, \mathbf{s}^r_{min}, \mathbf{s}^r_{max}), \\ \mathbf{s}^r_{min} & = \left[\frac{\pi}{2}\ \ d_{min}^3\ \ \frac{\theta_{min}}{2\pi}\ \ \frac{1}{2}(\cos\phi_{max}+1)\right]^\top, \\ \mathbf{s}^r_{max} & = \left[\frac{3\pi}{2}\ \ d_{max}^3\ \ \frac{\theta_{max}}{2\pi}\ \ \frac{1}{2}(\cos\phi_{min}+1)\right]^\top, \label{equ:s_r_motion_model_end} \end{align} where $\mathbf{a}^r_t = [a_{t,\alpha}\ a_{t,w}\ a_{t,u}\ a_{t,v}]^\top \in \mathbb{R}^4$ is the variation applied to $\mathbf{s}^r_t$, and $\mathbf{w}^r_t \in \mathbb{R}^4$ is the motion noise. This simple form guarantees feasible outputs from the motion model while requiring no time-consuming post-processing, such as rejection sampling or optimization. Finally, the geometrically feasible space, $\mathcal{G}$, of the $(\alpha, w, u, v)$-state is a hyperrectangle and hence convex. The convexity of $\mathcal{G}$ guarantees that a weighted sum of multiple tracked state candidates is also feasible, which is important when computing averages.
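The clipped motion model and the convexity argument above can be sketched in a few lines; the bounds below are hypothetical placeholders, since the actual $d$, $\theta$, and $\phi$ ranges depend on the gripper geometry.

```python
import numpy as np

# Hypothetical bounds on the (alpha, w, u, v)-state; the true values of
# d_min, d_max, theta_{min,max}, phi_{min,max} are gripper-specific.
D_MIN, D_MAX = 0.002, 0.008
TH_MIN, TH_MAX = 0.0, np.pi / 2
PH_MIN, PH_MAX = np.pi / 4, 3 * np.pi / 4
S_MIN = np.array([np.pi / 2, D_MIN**3, TH_MIN / (2 * np.pi),
                  0.5 * (np.cos(PH_MAX) + 1)])
S_MAX = np.array([3 * np.pi / 2, D_MAX**3, TH_MAX / (2 * np.pi),
                  0.5 * (np.cos(PH_MIN) + 1)])

def motion_model(s, a, w):
    """Propagate a reparameterized state and clip it back into the
    feasible hyperrectangle, mirroring the clipped motion model above."""
    return np.clip(s + a + w, S_MIN, S_MAX)

# Even a large noise realization cannot push the state out of G.
s = 0.5 * (S_MIN + S_MAX)
s_next = motion_model(s, np.zeros(4), np.full(4, 10.0))
assert np.all(s_next >= S_MIN) and np.all(s_next <= S_MAX)

# Convexity: any weighted average of feasible states stays feasible.
s_avg = 0.3 * S_MIN + 0.7 * S_MAX
assert np.all(s_avg >= S_MIN) and np.all(s_avg <= S_MAX)
```

Because `clip` acts component-wise on a box, feasibility is enforced at no extra cost per particle, in contrast to rejection sampling or projection onto an irregular manifold.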
Meanwhile, for pose-states, the feasible space is usually non-convex due to complex gripper and object shapes in 3D space, and the weighted average of multiple poses can be infeasible. Fig. \ref{fig:state_average_example} shows an example of averaging two pose-states and two $(\alpha, w, u, v)$-states. The former results in a floating needle, whereas the latter remains a feasible grasp. \begin{figure}[t!] \vspace{1.5mm} \centering \includegraphics[width=0.45\textwidth]{images/state_average_example.pdf} \caption{Average of two pose-states and two $(\alpha, w,u,v)$-states. A direct average of two pose-states can lead to an infeasible needle state. However, the direct weighted sum of two $(\alpha, w,u,v)$-states is still feasible since the feasible grasping manifold for $(\alpha, w,u,v)$-states is convex.} \label{fig:state_average_example} \vspace{-3.5mm} \end{figure} Mappings between an $(\alpha, w, u, v)$-parameter and its corresponding needle pose in the camera frame, $\mathbf{p}^c_n = [\mathbf{b}^{c\top}_n\ \mathbf{q}^{c\top}_n]^\top$, are necessary since the observations of the needle come from images taken by the camera. The steps to transform from $(\alpha, w, u, v)$ to $\mathbf{p}^c_n$ are summarized as follows: \begin{enumerate} \item Transform $(w, u, v)$ to $(d, \theta, \phi)$ using the inverse of (\ref{equ:wuv}). \item Transform $(\alpha, d, \theta, \phi)$ to $\mathbf{p}^n_e$ and $\mathbf{p}^c_n$ using the methods described in~\cite{chiu2021bimanual}, where $\mathbf{p}^n_e = [ \mathbf{b}^{n\top}_e\ \mathbf{q}^{n\top}_e ]^\top$ is the pose of the end-effector in the needle frame. \end{enumerate} Transforming from $\mathbf{p}^c_n$ to $(\alpha, w, u, v)$ requires some properties of an in-hand suture needle.
First, we list the steps of this transformation as follows: \begin{enumerate} \item Calculate $\mathbf{p}^n_e$ using $\mathbf{p}^c_n$ and $\mathbf{p}^c_e$, where $\mathbf{p}^c_e = [ \mathbf{b}^{c\top}_e\ \mathbf{q}^{c\top}_e ]^\top$ is the pose of the end-effector in the camera frame. \item Find the needle's grasped point in the needle frame, $\mathbf{p}^n_g = [\mathbf{b}^{n\top}_g\ \mathbf{q}^{n\top}_g]^\top$. \item Calculate $(\alpha, d, \theta, \phi)$ using $\mathbf{p}^n_g$ and $\mathbf{p}^n_e$. \item Transform $(d, \theta, \phi)$ to $(w, u, v)$ using (\ref{equ:wuv}). \end{enumerate} The first step can be calculated by \begin{equation} \mathbf{H}(\mathbf{b}^n_e, \mathbf{q}^n_e) = \left( \mathbf{H}(\mathbf{b}^c_n, \mathbf{q}^c_n) \right)^{-1} \mathbf{H}(\mathbf{b}^c_e, \mathbf{q}^c_e), \end{equation} where $\mathbf{H}(\cdot) \in SE(3)$ is the homogeneous transformation constructed from the position and orientation vectors. To obtain $\mathbf{p}^n_g$ in the second step, we use the following property of an in-hand suture needle: \textit{If a needle is stably grasped by a surgical manipulator, the y-axis of the end-effector frame will pass through the needle's grasped point.} An example can be observed in Figure \ref{fig:stable_grasping_example}. This property allows us to calculate $\mathbf{p}^n_g$ by \begin{align} & \mathbf{b}^n_g = \mathbf{b}^n_e + \beta \mathbf{R}^n_{e,y}, \label{equ:b_n_g}\\ & \mathbf{q}^n_g = \mathbf{0}, \\ \textrm{where} \quad & \beta = -\left( [0\ 0\ 1] \mathbf{b}^n_e \right) / \left( [0\ 0\ 1] \mathbf{R}^n_{e,y} \right). \end{align} The coefficient $\beta \in \mathbb{R}$ in (\ref{equ:b_n_g}) is obtained by \textit{intersecting the line through $\mathbf{b}^n_e$ along $\mathbf{R}^n_{e,y}$ with the xy-plane of the needle frame}, where $\mathbf{R}^n_{e,y} \in \mathbb{R}^3$ is the y-axis of the end-effector frame expressed in the needle frame.
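Steps 1 and 2 above can be sketched as follows, assuming the poses are already assembled as $4\times4$ homogeneous transforms; the numeric transforms below are purely illustrative, not values from this work.

```python
import numpy as np

def grasped_point(H_c_n, H_c_e):
    """Given the needle pose H_c_n and end-effector pose H_c_e in the
    camera frame (4x4 homogeneous transforms), return the grasped point
    b^n_g expressed in the needle frame."""
    # Step 1: end-effector pose in the needle frame.
    H_n_e = np.linalg.inv(H_c_n) @ H_c_e
    b_n_e = H_n_e[:3, 3]      # end-effector position, needle frame
    R_n_e_y = H_n_e[:3, 1]    # end-effector y-axis, needle frame
    # Step 2: intersect the line b_n_e + beta * R_n_e_y with the
    # xy-plane (z = 0) of the needle frame.
    beta = -b_n_e[2] / R_n_e_y[2]
    return b_n_e + beta * R_n_e_y

# Illustrative transforms: needle frame 10 cm in front of the camera,
# end-effector rotated so its y-axis is not parallel to the xy-plane.
H_c_n = np.eye(4)
H_c_n[:3, 3] = [0.0, 0.0, 0.10]
c, s = np.cos(0.3), np.sin(0.3)
H_c_e = np.eye(4)
H_c_e[:3, :3] = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
H_c_e[:3, 3] = [0.01, 0.005, 0.11]

b_n_g = grasped_point(H_c_n, H_c_e)
assert abs(b_n_g[2]) < 1e-9  # the grasped point lies in the xy-plane
```

The final assertion checks the defining property of $\beta$: by construction the returned point has zero z-coordinate in the needle frame, i.e. it lies on the needle's plane.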
With $\mathbf{p}^n_g$, $\alpha$ can be calculated by \begin{equation} \vspace{-2mm} \alpha = \tan^{-1}\left( \frac{b^n_{g,y}}{b^n_{g,x}} \right), \vspace{1mm} \end{equation} where $b^n_{g,x} \in \mathbb{R}$ and $b^n_{g,y} \in \mathbb{R}$ are the x- and y-coordinates of $\mathbf{b}^n_g$~\cite{chiu2021bimanual}. To obtain $(d, \theta, \phi)$ in the third step, we first calculate \begin{equation} \mathbf{b}^g_e = [b^g_{e,x}\ \ b^g_{e,y}\ \ b^g_{e,z}]^\top = \mathbf{b}^n_e - \mathbf{b}^n_g. \end{equation} Then the $(d, \theta, \phi)$-parameters become~\cite{chiu2021bimanual} \begin{equation} d = \left\lVert \mathbf{b}^g_e \right\rVert_2, \quad \theta = \tan^{-1}\left( \frac{b^g_{e,y}}{b^g_{e,x}} \right), \quad \phi = \cos^{-1} \left( \frac{b^g_{e,z}}{d} \right). \end{equation} Finally, $(w, u, v)$ in the fourth step can be calculated by (\ref{equ:wuv}). \begin{figure}[t!] \vspace{1.5mm} \centering \includegraphics[width=0.35\textwidth]{images/stable_grasping_example.pdf} \caption{The top and side views of a surgical manipulator stably grasping a suture needle and the coordinate frames used in this work. It can be observed that when a needle is stably grasped, the y-axis of the end-effector frame will pass through at least one point on the needle.} \label{fig:stable_grasping_example} \vspace{-3.5mm} \end{figure} \begin{figure*}[t!] \vspace{1.5mm} \centering \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=\textwidth]{images/results/sim_pos_tracking_errors.pdf} \end{subfigure}% \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=\textwidth]{images/results/sim_ori_tracking_errors.pdf} \end{subfigure} \caption{The positional errors (left) and orientational errors (right, log-scale) of tracking an in-hand suture needle grasped by a moving gripper.
Our proposed state-reparameterization methods outperform other approaches since the new state space allows a concrete definition of the feasible grasping manifold and can easily incorporate grasping constraints into tracking.} \label{fig:simulation_tracking_errors} \vspace{-5.5mm} \end{figure*} \subsection{Bayesian Filter with Feasibility Constraints} \begin{algorithm}[t!] \vspace{0.5mm} \footnotesize \SetAlgoLined \KwIn{minimal and maximal values of $(\alpha,w,u,v)$-parameters $\mathbf{s}^r_{min}$ and $\mathbf{s}^r_{max}$, motion noise covariance $\Sigma^r_m$, image observations $\mathbb{I}_{1:T}$, observation noise variance $\sigma^2_o$, needle radius $r$, tracking algorithm HF or PF} \KwOut{tracked needle state $\mathbf{s}^r_{1:T}$} \tcp{Initialization} $\{ \mathbf{s}^r_{0,i} \}_{i=1}^n \leftarrow sampleFeasible(\mathbf{s}^r_{min}, \mathbf{s}^r_{max})$ \\ \label{alg:initialize_sample} $\{ \gamma_{0,i} \}_{i=1}^n \leftarrow \{ \frac{1}{n}, \dots, \frac{1}{n} \}$ \\ \label{alg:initialize_weights} \For{timestep $t = 1,\dots, T$}{ \tcp{Predict Step} \uIf{HF}{ \label{alg:start_mm} $\{ \gamma_{t,i} \}_{i=1}^n \leftarrow Predict(\{ \mathbf{s}^r_{t-1,i}, \gamma_{t-1,i} \}_{i=1}^n, \Sigma^r_m)$ \\ $\{ \mathbf{s}^r_{t,i} \}_{i=1}^n \leftarrow \{ \mathbf{s}^r_{t-1,i} \}_{i=1}^n$ \\ } \ElseIf{PF}{ $\{ \mathbf{s}^r_{t,i} \}_{i=1}^n \leftarrow Predict(\{ \mathbf{s}^r_{t-1,i} \}_{i=1}^n, \Sigma^r_m)$ \\ $\{ \gamma_{t,i} \}_{i=1}^n \leftarrow \{ \gamma_{t-1,i} \}_{i=1}^n$ \\ } \label{alg:end_mm} \tcp{Update Step} $\mathbf{o}_t \leftarrow getDetections(\mathbb{I}_t)$\\ \label{alg:start_om} $\{ \mathbf{s}^p_{t,i} \}_{i=1}^n \leftarrow \{ \alpha wuv2pose(\mathbf{s}^r_{t,i}, \mathbf{p}^c_{e,t}, r) \}_{i=1}^n$ \\ $\{ \gamma_{t,i} \}_{i=1}^n \leftarrow weightUpdate(\{ \mathbf{s}^p_{t,i}, \gamma_{t,i} \}_{i=1}^n, \mathbf{o}_t, \sigma^2_o)$ \\ \label{alg:end_om} \If{PF}{ \label{alg:start_pf_om} $\{ \mathbf{s}^r_{t,i} \}_{i=1}^n \leftarrow \{ pose2\alpha wuv(\mathbf{s}^p_{t,i},
\mathbf{p}^c_{e,t}) \}_{i=1}^n$ \\ \tcp{Stratified Resampling \cite{kitagawa1996monte}} \If{$effectiveParticles(\{ \gamma_{t,i} \}_{i=1}^n) < N_{eff}$}{ $\{ \mathbf{s}^r_{t,i}, \gamma_{t,i} \}_{i=1}^n \leftarrow resample(\{ \mathbf{s}^r_{t,i}, \gamma_{t,i} \}_{i=1}^n)$ \\ } } \label{alg:end_pf_om} \tcp{Return Mean Needle Pose} $\mathbf{s}^r_{t} \leftarrow \sum_{i=1}^{n} \gamma_{t,i} \mathbf{s}^r_{t,i}$ \\ \label{alg:weight_sum} } \caption{Constrained Needle Tracking with HF/PF} \label{alg:constrained_needle_tracking} \end{algorithm} Bayesian filtering is applied to solve (\ref{equ:cBF_formulation}) with the $(\alpha, w, u, v)$-state representation, which is easy to sample from and to constrain to $\mathcal{G}$. Due to the nonlinearity in the observation models, Histogram Filter (HF) and Particle Filter (PF) are both implemented. A summary is provided in Algorithm \ref{alg:constrained_needle_tracking}. \textit{Initialization:} Upon initialization of the filters, a set of discrete states $\mathcal{S}^r$ such that $\mathcal{S}^r = \{ \mathbf{s}^r_1, \dots, \mathbf{s}^r_n | \mathbf{s}^r_i \in \mathcal{G}, \forall i \in \{1, \dots, n\} \}$ is collected. We sample directly in our proposed $(\alpha, w, u, v)$-state space \begin{equation} \mathbf{s}^r = [\alpha\ \ w\ \ u\ \ v]^\top, \text{~~~where~~~~} \label{equ:s_r_definition} \end{equation} \begin{align} & \alpha \sim \mathcal{U}\left( \frac{\pi}{2}, \frac{3\pi}{2} \right) \\ & w \sim \mathcal{U}\left( d_{min}^3, d_{max}^3 \right) \\ & u \sim \mathcal{U}\left( \frac{\theta_{min}}{2\pi}, \frac{\theta_{max}}{2\pi} \right) \\ & v \sim \mathcal{U}\left( \frac{1}{2}(\cos\phi_{max}+1), \frac{1}{2}(\cos\phi_{min}+1) \right). \label{equ:sample_s_r} \end{align} $\mathcal{U}(\cdot, \cdot)$ is the uniform distribution. The initialization is done in lines \ref{alg:initialize_sample} and \ref{alg:initialize_weights} of Algorithm \ref{alg:constrained_needle_tracking}.
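The initialization above amounts to uniform sampling over a box. A sketch of `sampleFeasible` together with the inverse change of variables follows; the parameter ranges are placeholders, since the true ones are gripper-specific.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder ranges; the actual values depend on the gripper geometry.
D_MIN, D_MAX = 0.002, 0.008
TH_MIN, TH_MAX = 0.0, np.pi / 2
PH_MIN, PH_MAX = np.pi / 4, 3 * np.pi / 4

def sample_feasible(n):
    """Draw n states uniformly from the feasible hyperrectangle G.
    Every sample is feasible by construction; no rejection is needed."""
    alpha = rng.uniform(np.pi / 2, 3 * np.pi / 2, n)
    w = rng.uniform(D_MIN**3, D_MAX**3, n)
    u = rng.uniform(TH_MIN / (2 * np.pi), TH_MAX / (2 * np.pi), n)
    v = rng.uniform(0.5 * (np.cos(PH_MAX) + 1),
                    0.5 * (np.cos(PH_MIN) + 1), n)
    return np.stack([alpha, w, u, v], axis=1)

def to_intermediate(states):
    """Invert the change of variables to recover (alpha, d, theta, phi)."""
    alpha, w, u, v = states.T
    return alpha, np.cbrt(w), 2 * np.pi * u, np.arccos(2 * v - 1)

alpha, d, theta, phi = to_intermediate(sample_feasible(1000))
assert np.all((d >= D_MIN - 1e-9) & (d <= D_MAX + 1e-9))
assert np.all((phi >= PH_MIN - 1e-9) & (phi <= PH_MAX + 1e-9))
```

Because uniform sampling in $(w, u, v)$ corresponds to uniform sampling of positions over the spherical shell, the resulting end-effector positions avoid the clustering near the poles and near the origin that naive $(d, \theta, \phi)$ sampling produces.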
\textit{Motion Model:} The motion model assumes zero-mean Gaussian noise since a grasped suture needle is not expected to move much inside the gripper. Therefore, the motion model's probability distribution is \begin{equation} p_f(\cdot | \mathbf{s}^r_t, \mathbf{a}^r_t) \sim \mathcal{N}(\mathbf{s}^r_t + \mathbf{a}^r_t, \Sigma^r_m), \end{equation} where $\mathbf{a}^r_t = \mathbf{0}$ $\forall$ $t\in \{1, \dots, T\}$, and $\Sigma^r_m \in \mathbb{R}^{4 \times 4}$ is the covariance matrix of the motion noise. The motion model is applied in lines \ref{alg:start_mm}-\ref{alg:end_mm} of Algorithm \ref{alg:constrained_needle_tracking}. \textit{Observation Model:} The observation model is our previously proposed \textit{Points Matching to Ellipse Observation Model}~\cite{chiu2021markerless}, which can be applied to any pixel detections of a suture needle. Note that the observation model requires the pose-state, $\mathbf{s}^p_t = [\mathbf{b}^{c\top}_n\ \mathbf{q}^{c\top}_n]^\top$, as input since the detections come from an image. For more details on the observation model, see~\cite{chiu2021markerless}. In Algorithm \ref{alg:constrained_needle_tracking}, the observation model is applied in lines \ref{alg:start_om}-\ref{alg:end_om}, where $\sigma_o^2 \in \mathbb{R}$ is the variance of the observation model. Additional steps for PF, such as resampling particles, are detailed in lines \ref{alg:start_pf_om}-\ref{alg:end_pf_om}. \textit{Output:} To obtain a mean result from the HF and PF algorithms, the weighted average of the discrete states is computed as shown in line \ref{alg:weight_sum} of Algorithm \ref{alg:constrained_needle_tracking}. Note that $\mathbf{s}^r_t$ is still feasible since $\mathcal{G}$ is a convex space.
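One predict/update/mean cycle of the PF branch can be sketched as below. The bounds and motion covariance are placeholders, and a generic Gaussian likelihood stands in for the points-to-ellipse observation model, which requires images and is out of scope here.

```python
import numpy as np

rng = np.random.default_rng(1)

S_MIN = np.array([np.pi / 2, 8e-9, 0.0, 0.15])     # placeholder bounds
S_MAX = np.array([3 * np.pi / 2, 5.12e-7, 0.25, 0.85])
SIGMA_M = np.diag([1e-3, 1e-10, 1e-4, 1e-4])       # placeholder motion cov

def pf_step(particles, weights, likelihood):
    """One constrained predict/update/mean cycle of the PF branch, with a
    caller-supplied likelihood standing in for the observation model."""
    # Predict: Gaussian motion noise, clipped back into G.
    noise = rng.multivariate_normal(np.zeros(4), SIGMA_M, len(particles))
    particles = np.clip(particles + noise, S_MIN, S_MAX)
    # Update: reweight by the observation likelihood and normalize.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # Output: the weighted mean is feasible because G is convex.
    return particles, weights, weights @ particles

n = 500
particles = rng.uniform(S_MIN, S_MAX, (n, 4))
weights = np.full(n, 1.0 / n)
target = 0.5 * (S_MIN + S_MAX)
lik = lambda p: np.exp(-np.sum(((p - target) / (S_MAX - S_MIN))**2, axis=1))
particles, weights, mean = pf_step(particles, weights, lik)
assert np.all(mean >= S_MIN) and np.all(mean <= S_MAX)
```

The final assertion illustrates the \textit{Output} paragraph: the weighted mean of clipped particles can never leave the feasible box, so no post-hoc projection is needed.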
\section{Related Work} \cite{ozguner2018three} track the 6D pose of a suture needle with a particle filter, using stereo images and robot kinematics, which allows tracking under occlusion. However, their method assumes that the initial pose of the needle is known, segments the needle in an image using a proposed thin-feature segmentation, and requires virtually rendering the whole needle image. \section{Experiment and Results} \begin{figure*}[t!] \vspace{1.5mm} \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\textwidth]{images/results/real_world_results_1.pdf} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\textwidth]{images/results/real_world_results_2.pdf} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\textwidth]{images/results/real_world_results_3.pdf} \end{subfigure} \caption{Raw image, top and side views of tool reconstruction from the unconstrained (PF) and our constrained (cPFrp) needle tracking approaches, across three examples. Without incorporating grasping constraints into tracking, PF often estimates the needle pose with inaccurate depth and orientation, whereas cPFrp provides a more realistic, feasible reconstructed pose.} \label{fig:real_world_results} \vspace{-3.5mm} \end{figure*} \subsection{Tracking Experiments in Simulation} We first evaluate our proposed constrained in-hand suture needle pose tracking algorithms in a CoppeliaSim environment. The simulation environmental settings are similar to the ones in~\cite{chiu2021markerless}, where a surgical manipulator holds a suture needle with a radius of 5.4mm. A stereo camera is set up to capture images of size $256 \times 256$, and five needle points are extracted from each image as detections. Each detected point is perturbed by noise sampled from $\mathcal{N}(\mathbf{0}, \Sigma_n)$, where $\Sigma_n \in \mathbb{R}^{2\times 2}$ is a diagonal matrix with each diagonal element being $\sigma_n^2$.
Each experiment is repeated for 20 trials, and each trial contains 100 time steps. We compare the following state definitions for an in-hand suture needle and tracking algorithms: \begin{enumerate}[leftmargin=*,align=left] \item Unconstrained pose-state, PF (PF)~\cite{chiu2021markerless,ozguner2018three}: naive PF with pose-states, where the needle motion is set to be the same as that of the end-effector. \item Constrained pose-state by rejection sampling, HF (cHFrj)~\cite{boyko2021histogram}: constrained HF with feasible states pre-collected by rejection sampling. \item Constrained pose-state by rejection sampling, PF (cPFrj)~\cite{lang2007bayesian}: constrained PF with rejection sampling applied to the outputs of the motion model. \item Reparameterized $(\alpha, w, u, v)$-state, HF (cHFrp): our proposed state space with HF (Algorithm \ref{alg:constrained_needle_tracking}). \item Reparameterized $(\alpha, w, u, v)$-state, PF (cPFrp): our proposed state space with PF (Algorithm \ref{alg:constrained_needle_tracking}). \end{enumerate} For all algorithms, we use the state-of-the-art \textit{Points Matching to Ellipse Observation Model} for needle detections~\cite{chiu2021markerless}. Each algorithm is run with 2000 particles or pre-collected states. We use the following two equations to calculate the positional and orientational errors of a tracked needle pose: \begin{equation} err_{\mathbf{b}} = \lVert \mathbf{b}_t - \overline{\mathbf{b}}_t \rVert_2,\ err_{\mathbf{q}} = \lVert \mathbf{q}_t (\overline{\mathbf{q}}_t)^{-1} \rVert_2, \end{equation} where $\overline{\cdot}$ denotes the ground truth obtained from the simulator, and the product $\mathbf{q}_t (\overline{\mathbf{q}}_t)^{-1}$ is the axis-angle representation of the relative rotation. Fig. \ref{fig:simulation_tracking_errors} shows the pose errors of tracking an in-hand suture needle while moving the end-effector. Although Bayesian filters with rejection sampling consider the feasible grasping constraints, their tracking errors are larger than those of the other methods, even the unconstrained one.
This is because the feasible grasping manifold for pose-states is too irregular to sample from uniformly. With such an irregular space, the sampled states usually lack diversity since some states are easier to sample than others. Moreover, obtaining enough feasible states requires many samples and feasibility checks. The average run time of cPFrj, which runs the rejection sampling process online, is 3.6 seconds per image frame, whereas the other methods take 0.15 seconds per image frame. Tracking an in-hand needle with our reparameterized $(\alpha, w, u, v)$-states achieves the lowest pose errors. Sampling from the feasible grasping manifold of $(\alpha, w, u, v)$-states is much easier and requires no feasibility checks. Hence, both the tracking errors and the run time are much lower than those of rejection-sampling-based methods. In addition, our proposed methods consider the physical interactions between the gripper and the needle. Therefore, when the environmental noise increases, the tracking errors of our methods remain low compared to the unconstrained method. Finally, cPFrp performs better than cHFrp since PF allows online adjustment of the tracked state candidates, which moves those candidates closer to the real state.
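The pose-error metrics used above can be sketched as follows. Here the orientational error is computed as the geodesic angle of the relative rotation, which is our reading of $\lVert \mathbf{q}_t (\overline{\mathbf{q}}_t)^{-1} \rVert_2$ for axis-angle vectors; only `numpy` is used, so the axis-angle to rotation-matrix conversion is written out via Rodrigues' formula.

```python
import numpy as np

def rotmat(q):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    angle = np.linalg.norm(q)
    if angle < 1e-12:
        return np.eye(3)
    k = q / angle
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def pose_errors(b, q, b_gt, q_gt):
    """err_b: Euclidean distance between positions. err_q: geodesic
    angle of the relative rotation R(q) R(q_gt)^T, recovered from its
    trace; this equals the norm of the relative axis-angle vector."""
    err_b = np.linalg.norm(b - b_gt)
    R_rel = rotmat(q) @ rotmat(q_gt).T
    cos_a = np.clip((np.trace(R_rel) - 1) / 2, -1.0, 1.0)
    return err_b, np.arccos(cos_a)

# Two rotations about the same axis differ by exactly 0.3 rad.
eb, eq = pose_errors(np.array([0.01, 0.0, 0.0]), np.array([0.0, 0.0, 0.5]),
                     np.zeros(3), np.array([0.0, 0.0, 0.2]))
assert np.isclose(eb, 0.01) and np.isclose(eq, 0.3)
```

Recovering the angle from the trace avoids any dependence on how the axis-angle vectors were normalized and is symmetric in the estimate and the ground truth.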
\subsection{Automated Suture Needle Regrasping} \begin{table}[t] \centering \caption{Success rate for needle regrasping policy~\cite{chiu2021bimanual}.} \label{tab:regrasping_results} \begin{tabular}{c|ccccc} \hline \multicolumn{6}{c}{$\sigma_{e,p} = 0$ mm, $\sigma_{e,o} = 0$ degree} \\ \hline $\sigma_n$ (pixels) & 1 & 2 & 3 & 4 & 5 \\ \hline PF & 0.9 & 0.77 & 0.6 & 0.6 & 0.43 \\ cPFrp & 0.97 & 0.87 & 0.97 & 0.87 & 0.77 \\ \hline\hline \multicolumn{6}{c}{$\sigma_{e,p} = 1$ mm, $\sigma_{e,o} = 5$ degrees} \\ \hline $\sigma_n$ (pixels) & 1 & 2 & 3 & 4 & 5 \\ \hline PF & 0.67 & 0.6 & 0.6 & 0.37 & 0.4 \\ cPFrp & 0.83 & 0.9 & 0.9 & 0.87 & 0.87 \\ \hline\hline \multicolumn{6}{c}{$\sigma_{e,p} = 2$ mm, $\sigma_{e,o} = 10$ degrees} \\ \hline $\sigma_n$ (pixels) & 1 & 2 & 3 & 4 & 5 \\ \hline PF & 0 & 0.07 & 0.03 & 0 & 0 \\ cPFrp & 0.63 & 0.67 & 0.6 & 0.57 & 0.73 \\ \hline \end{tabular} \vspace{-5mm} \end{table} To demonstrate the importance of integrating grasping constraints into suture needle tracking, we compare the unconstrained PF and our proposed cPFrp methods on the suture needle regrasping task~\cite{chiu2021bimanual} in the simulation environment. In these experiments, we add noise not only to the needle detections but also to the detected end-effector poses. The positional noise of the end-effector poses is sampled from $\mathcal{N}(\mathbf{0}, \Sigma_{e,p})$, where $\Sigma_{e,p} \in \mathbb{R}^{3 \times 3}$ is a diagonal matrix with each diagonal element being $\sigma^2_{e,p}$. The orientational noise of the end-effector pose is $\theta_e [0\ 1\ 0]^\top$, where $\theta_e \in \mathbb{R}$ is sampled from $\mathcal{N}(0, \sigma^2_{e,o})$. Each experiment is run for 30 trials. Table \ref{tab:regrasping_results} shows the regrasping success rates of PF and cPFrp under different environmental noise levels. PF barely shows any success when more noise is added to the detected end-effector poses. On the other hand, cPFrp succeeds in multiple regrasping trials under the same conditions.
These results suggest that when automating suture needle manipulation tasks, it is essential to consider the relationship between the needle and the gripper interacting with it. \subsection{Tracking Experiments in Real World} We evaluate the PF and our cPFrp methods on ex-vivo datasets and demonstrate the tool reconstruction results. The suture needles tested have radii of 7mm and 11.5mm, and the algorithms are run with 2000 particles. In the ex-vivo datasets, a needle is grasped by a Large Needle Driver (LND) installed on a Patient Side Manipulator (PSM) arm of the da Vinci Research Kit (dVRK)~\cite{kazanzides2014open}. dVRK's stereo endoscopic camera, which is 1080p and runs at 30 fps, is used to capture images for end-effector and needle detections. The end-effector poses are tracked by our previous method~\cite{richter2021robotic}, and the markerless needle detections are obtained by DeepLabCut~\cite{mathis2018deeplabcut}, a state-of-the-art keypoint detector. Figures \ref{fig:cover_image} and \ref{fig:real_world_results} show the top and side views of the tool reconstruction. From the top views, the results of both the unconstrained and constrained methods align well with the raw images. However, the side views clearly show their differences. The unconstrained method can leave the reconstructed needle floating in the air or in collision with the gripper. The inaccuracy arises mainly when reconstructing the depth of the needle from the camera, because the detections of a needle at different depths show little difference in the endoscopic images. Therefore, without integrating the feasibility constraints into needle tracking, the unconstrained method is likely to estimate a needle pose with inaccurate depth. Our proposed constrained method ensures the needle pose is always feasible given the estimated end-effector pose.
Note that although there can be noise in the end-effector poses, the relative pose between the needle and the end-effector reconstructed by our constrained method remains accurate. In Table \ref{tab:regrasping_results}, we show that the accuracy of this relative pose is the more crucial factor in automating suture needle manipulation tasks.
\section{Introduction and main results} Let $(\Sigma,g)$ be a compact Riemannian surface with boundary. In this paper we always assume that $\Sigma$ is connected and that the boundary of $\Sigma$ is non-empty and smooth. Consider \textit{the Steklov problem}, defined in the following way: \begin{gather*} \begin{cases} \Delta u=0&\text{in $\Sigma$},\\ \frac{\partial u}{\partial n}=\sigma u&\text{on $\partial \Sigma$}, \end{cases} \end{gather*} where $\Delta=-\operatorname{div}_g \circ \operatorname{grad}_g$ is the Laplace-Beltrami operator and $\frac{\partial}{\partial n}$ is the outward unit normal vector field along the boundary. The collection of all numbers $\sigma$ for which the Steklov problem admits a non-trivial solution is called the \textit{Steklov spectrum} of the surface $\Sigma$. The Steklov spectrum is a discrete set of real numbers, called Steklov eigenvalues, each of finite multiplicity, which can be arranged as follows (see e.g. \cite{girouard2017spectral}): \begin{align*} 0=\sigma_0(g) < \sigma_1(g) \leq \sigma_2(g) \leq\ldots\nearrow +\infty. \end{align*} The Steklov spectrum enables us to define the following homothety-invariant functional on the set $\mathcal{R}(\Sigma)$ of Riemannian metrics on $\Sigma$ \begin{align*} \overline{\sigma}_k(\Sigma,g):=\sigma_k(g)L_g(\partial \Sigma), \end{align*} where $L_g(\partial \Sigma)$ stands for the length of the boundary of $\Sigma$ in the metric $g$. The functional $\overline{\sigma}_k(\Sigma,g)$ is called the \textit{$k$-th normalized Steklov eigenvalue}. It was shown in~\cite{colbois2011isoperimetric} (see also~\cite{hassannezhad2011, kokarev2014variational}) that if $\Sigma$ is an orientable surface then the functional $\overline{\sigma}_k(\Sigma,g)$ is bounded from above. Moreover, the following theorem holds. \begin{theorem}[\cite{girouard2012upper}] \label{Kor} Let $(\Sigma,g)$ be a compact orientable surface of genus $\gamma$ with $l$ boundary components.
Then one has \begin{gather*} \overline{\sigma}_k(\Sigma,g) \leq 2\pi k(\gamma+l). \end{gather*} \end{theorem} In this paper we prove that a similar estimate holds for non-orientable surfaces. \begin{theorem}\label{non-bound} Let $\Sigma$ be a compact non-orientable surface of genus $\gamma$ with $l$ boundary components. Then one has \begin{gather*} \overline{\sigma}_k(\Sigma,g) \leq 4\pi k(\gamma+2l). \end{gather*} \end{theorem} Here the genus of a non-orientable surface is defined as the genus of its orientable double cover. \begin{remark} The estimate in Theorem~\ref{Kor} has been improved in~\cite{karpukhin2015bounds} to a bound which is linear in $k+\gamma+l$ instead of $k(\gamma+l)$. However, the proof of this result uses orientability in an essential way, see \cite[Section 6]{karpukhin2015bounds}. It would be interesting to obtain a similar improvement of Theorem~\ref{non-bound}. \end{remark} Theorems~\ref{Kor} and~\ref{non-bound} enable us to define the following functionals: $$ \sigma^*_k(\Sigma):=\sup_{\mathcal{R}(\Sigma)} \overline{\sigma}_k(\Sigma,g) $$ and $$ \sigma^*_k(\Sigma,[g]):=\sup_{[g]} \overline{\sigma}_k(\Sigma,g). $$ \begin{remark} Note that one cannot define the functionals $\sigma^*_k(\Sigma)$ and $\sigma^*_k(\Sigma,[g])$ in higher dimensions. Indeed, it was proved in~\cite{colbois2019compact} that if $n=\dim M \geq 3$ then the functional $\overline{\sigma}_k(M,g):=\sigma_k(g)Vol(\partial M, g)^{1/(n-1)}$, where $Vol(\partial M, g)$ denotes the volume of the boundary with respect to the metric $g$, is not bounded from above on the set of Riemannian metrics $\mathcal R(M)$. Moreover, it is not even bounded from above within a conformal class $[g]$. \end{remark} The functional $\sigma^*_k(\Sigma)$ has been an object of intensive research during the last decade (see e.g. \cite{fraser2011first, fraser2016sharp, colbois2016steklov, petrides2019maximizing, girouard-lagace, matpet}).
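For intuition, the Steklov spectrum of the Euclidean unit disc can be computed explicitly by separation of variables; this classical calculation, sketched below, illustrates the normalization $\overline{\sigma}_k$.

```latex
% Steklov spectrum of the Euclidean unit disc \mathbb{D}^2.
% Harmonic functions separate in polar coordinates (r,\theta):
%   u_0 \equiv 1, \qquad u_m = r^m\cos(m\theta), \qquad v_m = r^m\sin(m\theta).
% On the boundary r=1 the outward normal derivative is \partial_r, and
\begin{align*}
  \partial_r\!\left(r^m\cos(m\theta)\right)\Big|_{r=1}
    = m\cos(m\theta) = m\,u_m\big|_{r=1},
\end{align*}
% so the Steklov eigenvalues of the unit disc are
\begin{align*}
  0,\ 1,\ 1,\ 2,\ 2,\ 3,\ 3,\ \ldots,
\end{align*}
% each non-zero eigenvalue having multiplicity 2. Since
% L(\partial\mathbb{D}^2) = 2\pi, the normalized eigenvalues are
\begin{align*}
  \overline{\sigma}_k(\mathbb{D}^2, g_{eucl}) = 2\pi\left\lceil k/2 \right\rceil.
\end{align*}
```

In particular $\overline{\sigma}_1(\mathbb{D}^2, g_{eucl}) = 2\pi$, which by Weinstock's inequality is the maximal value of $\overline{\sigma}_1$ on the disc.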
The functional $\sigma^*_k(\Sigma,[g])$, which is called the \textit{$k$-th conformal Steklov eigenvalue}, is less studied. Let us mention some results concerning $\sigma^*_k(\Sigma,[g])$. First, since the disc admits a unique conformal structure, one can conclude that $\sigma^*_k(\mathbb D^2,[g_{can}])=\sigma^*_k(\mathbb D^2)$, where $g_{can}$ stands for the Euclidean metric on $\mathbb D^2$ with unit boundary length. The value of $\sigma^*_k(\mathbb D^2)$ is known: $\sigma^*_k(\mathbb D^2)=2\pi k$ (see \cite{weinstock1954inequalities} for $k=1$ and \cite{girouard2010hersch} for all $k \geq 1$). Let us also mention the recent paper \cite{fraser2020some}, where the authors, in particular, obtain new results about the functional $\sigma^*_k(\mathbb D^2)$. The functional $\sigma^*_k(\Sigma,[g])$ is the main object of study of the paper \cite{petrides2019maximizing}. \begin{theorem}[\cite{petrides2019maximizing}]\label{petridesmax} For every Riemannian metric $g$ on a compact surface $\Sigma$ with boundary one has \begin{align}\label{petya} \sigma^*_k(\Sigma,[g]) \geq \sigma^*_{k-1}(\Sigma,[g])+\sigma^*_1(\mathbb{D}^2, [g_{can}]), \end{align} and in particular \begin{align}\label{el soufi} \sigma^*_k(\Sigma,[g]) \geq 2\pi k. \end{align} Moreover, if the inequality~\eqref{petya} is strict then there exists a Riemannian metric $\tilde g\in [g]$ such that $\overline\sigma_k(\Sigma,\tilde g)=\sigma^*_k(\Sigma,[g])$. \end{theorem} New interesting results about the functional $\sigma^*_k(\Sigma,[g])$ were recently obtained in~\cite{karpukhin-stern}. \begin{remark} An analogue of Theorem~\ref{petridesmax} for the conformal spectrum of the Laplace-Beltrami operator on closed surfaces also holds (see \cite{nadirashvili2015conformal, MR3438833, petrides2014existence, petrides2018existence, karpukhin2020conformally}).
For further information concerning the spectrum of the Laplace-Beltrami operator on closed surfaces, see the surveys~\cite{MR3203194, MR4017613} and references therein. \end{remark} It is easy to see that the connection between the functionals $\sigma^*_k(\Sigma)$ and $\sigma^*_k(\Sigma,[g])$ is expressed by the formula \begin{align*} \sigma^*_k(\Sigma)=\sup_{[g]} \sigma^*_k(\Sigma,[g]). \end{align*} One can ask: what do we get if we replace $\sup_{[g]}$ by $\inf_{[g]}$ in this formula? In this case we obtain the following quantity: \begin{align*} I^\sigma_k(\Sigma):=\inf_{[g]} \sigma^*_k(\Sigma,[g]). \end{align*} It is an analog of the Friedlander-Nadirashvili invariants of closed manifolds. The first Friedlander-Nadirashvili invariant of a closed manifold was introduced in~\cite{MR1717641} in 1999. The $k$-th Friedlander-Nadirashvili invariant of a closed surface has recently been studied in~\cite{karpukhin2019friedlander}. \begin{figure}[h!] \centering \def\columnwidth{\columnwidth} \includegraphics[scale=0.5]{intro.pdf} \footnotesize \caption{ An example of a degenerating sequence of conformal classes $\{c_n\}$ on a surface $\Sigma$ of genus $2$ with $4$ boundary components. $a)$ The \textit{red} curves correspond to collapsing geodesics for the sequence of metrics of constant Gauss curvature and geodesic boundary $\{h_n\}, ~h_n\in c_n$, corresponding to the degenerating sequence of conformal classes $\{c_n\}$. $b)$ The compactified limiting space $\widehat{\Sigma_\infty}$ (see Section~\ref{geometry}). The black points correspond to the points of compactification. $c)$ The surface $\widehat{\Sigma_\infty}$ is homeomorphic to the disjoint union of a disc and a surface of genus $1$ with $1$ boundary component.
} \label{F} \end{figure} In the study of functionals like $\sigma^*_k(\Sigma)$ and $I^\sigma_k(\Sigma)$ one considers maximizing and minimizing sequences of conformal classes $\{c_n\}$ in the \textit{moduli space of conformal classes on $\Sigma$}, i.e. $\sigma^*_k(\Sigma,c_n) \to \sigma^*_k(\Sigma)$ or $\sigma^*_k(\Sigma,c_n) \to I^\sigma_k(\Sigma)$ as $n\to\infty$. By the uniformization theorem, conformal classes on $\Sigma$ are in one-to-one correspondence (up to an isometry) with metrics on $\Sigma$ of constant Gauss curvature and geodesic boundary. Therefore, any sequence of conformal classes $\{c_n\}$ on $\Sigma$ corresponds to a sequence of Riemannian surfaces of constant Gauss curvature and geodesic boundary $\{(\Sigma,h_n)\},~h_n\in c_n$, and we can consider the moduli space of conformal classes on $\Sigma$ as the set of all $(\Sigma,h)$, where $h$ is a metric of constant Gauss curvature and geodesic boundary, endowed with the $C^\infty$-topology (see Section~\ref{geometry}). Note that the moduli space of conformal structures is a non-compact topological space. For any sequence $\{c_n\}$ there are two possible scenarios: either the sequence remains in a compact part of the moduli space, or it escapes to infinity. Let $(\Sigma_\infty, c_\infty)$ denote the \textit{limiting space}, i.e. $(\Sigma_\infty, c_\infty)=\lim_{n\to\infty}(\Sigma,c_n)$. We compactify $\Sigma_\infty$ if necessary and let $\widehat{\Sigma_\infty}$ denote the compactified limiting space. It turns out that if the first scenario is realized then $\widehat{\Sigma_\infty}=\Sigma$ and $c_\infty$ is a genuine conformal class on $\Sigma$ at which the value $\sigma^*_k(\Sigma)$ or $I^\sigma_k(\Sigma)$ is attained. If the second scenario is realized then we say that the sequence $\{c_n\}$ \textit{degenerates}. It turns out that in this case there exists a finite collection of pairwise disjoint geodesics for the metrics $h_n$ whose lengths in $h_n$ tend to $0$ as $n$ tends to $\infty$.
We refer to these geodesics as \textit{pinching} or \textit{collapsing}. They can be of the following three types: collapsing boundary components, collapsing geodesics without self-intersections crossing the boundary $\partial\Sigma$ at two points, and collapsing geodesics without self-intersections which do not cross $\partial\Sigma$. Note that in this case the topology of $\Sigma$ necessarily changes when we pass to the limit as $n\to\infty$, i.e. the compact surfaces $\widehat\Sigma_\infty$ and $\Sigma$ are of different topological types. In particular, the surface $\widehat\Sigma_\infty$ can be disconnected (see Figure~\ref{F}). We refer to Section~\ref{geometry} for more details. The following theorem establishes the correspondence between $\sigma^*_k(\widehat\Sigma_\infty,c_\infty)$ and the limit of $\sigma^*_k(\Sigma,c_n)$ when the sequence of conformal classes $\{c_n\}$ degenerates (see Section~\ref{geometry} for the definition). It is an analog of \cite[Theorem 2.8]{karpukhin2019friedlander} in the Steklov setting. \begin{theorem}\label{conf&conv} Let $\Sigma$ be a compact surface of genus $\gamma$ with $l>0$ boundary components and let $c_n\to c_\infty$ be a degenerating sequence of conformal classes. Consider the corresponding sequence $\{h_n\}$ of metrics of constant Gauss curvature and geodesic boundary. Suppose that there are $s_1$ collapsing boundary components and $s_2$ collapsing geodesics without self-intersections which cross the boundary at two points. Moreover, suppose that $\widehat{\Sigma_{\infty}}$ has $m$ connected components $\Sigma_{\gamma_{i},l_i}$ of genus $\gamma_i$ with $l_i>0$ boundary components, $\gamma_i+l_i<\gamma+l$, $i=1,\ldots,m$.
Then one has \begin{equation} \label{deg_limit} \begin{split} \lim_{n \to \infty} \sigma^*_k (\Sigma, c_n)= \max \Big(\sum^{m}_{i=1} \sigma^*_{k_i}(\Sigma_{\gamma_{i},l_i}, c_\infty)+\sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2)\Big), \end{split} \end{equation} where the maximum is taken over all possible combinations of indices such that $$ \sum_{i=1}^{m} k_i + \sum_{i=1}^{s_1+s_2} r_i = k. $$ \end{theorem} \begin{remark}\label{main_remark} Let $\Sigma$ denote either the cylinder or the M\"obius band. Theorem~\ref{conf&conv} implies in particular that if the sequence of conformal classes $\{c_n\}$ on $\Sigma$ degenerates then one necessarily has $$ \lim_{n \to \infty} \sigma^*_k (\Sigma, c_n)=2\pi k. $$ \end{remark} \begin{remark} In Theorem~\ref{conf&conv} the sequence $\{h_n\}$ can also have collapsing geodesics not crossing the boundary of $\Sigma$. Moreover, it can happen that the limiting space $\widehat{\Sigma_{\infty}}$ has \textit{closed} components (see Figure~\ref{F'}). However, in Theorem~\ref{conf&conv} we only take the components of $\widehat{\Sigma_{\infty}}$ which have non-empty boundary. \end{remark} \begin{figure}[h!] \centering \def\columnwidth{\columnwidth} \includegraphics[scale=0.45]{into2.pdf} \footnotesize \caption{ An example of a degenerating sequence of conformal classes $\{c_n\}$ on a surface of genus $2$ with one boundary component such that the limiting space contains a closed component. In Theorem~\ref{conf&conv} we take only the component on the left, which has non-empty boundary. Note that in this case $s_1=s_2=0$. } \label{F'} \end{figure} The main tool that we use in the proof of Theorem~\ref{conf&conv} is the \textit{Steklov-Neumann boundary problem}, also known as the \textit{sloshing problem}. Let $\Omega$ be a Lipschitz domain in $(\Sigma,g)$ such that $\overline \Omega \cap \partial \Sigma = \partial^S\Omega \neq \varnothing$. Let $\partial^N\Omega=\partial \Omega \setminus \partial \Sigma$.
Then the Steklov-Neumann problem is defined as: \begin{gather}\label{SN} \begin{cases} \Delta_g u=0&\text{in $\Omega$},\\ \frac{\partial u}{\partial n}=0&\text{on $\partial^N\Omega$},\\ \frac{\partial u}{\partial n}=\sigma^Nu&\text{on $\partial^S\Omega$}. \end{cases} \end{gather} The numbers $\sigma^N$ for which the Steklov-Neumann problem admits a solution are called \textit{Steklov-Neumann eigenvalues}. It is known (see \cite{banuelos2010eigenvalue} and references therein) that the set of Steklov-Neumann eigenvalues is not empty and discrete \begin{align*} 0=\sigma^N_0(g) < \sigma^N_1(g) \leq \sigma^N_2(g) \leq\ldots\nearrow +\infty. \end{align*} Every Steklov-Neumann eigenvalue admits the following variational characterization: \begin{gather}\label{charSN} \sigma^N_k(g)=\inf_{V_k\subset \mathcal H^1(\Omega)}\sup_{0 \neq u \in V_k}\frac{\int_\Omega|\nabla u|^2dv_g}{\int_{\partial^S\Omega}u^2ds_g}, \end{gather} where the infimum is taken over all $k-$dimensional subspaces of the space $\mathcal H^1(\Omega)=\{u\in H^1(\Omega,g)~|~\int_{\partial^S\Omega} uds_g=0\}$. Similarly to the case of the Steklov problem we define normalized Steklov-Neumann eigenvalues as $$ \overline\sigma^N_k(\Omega, \partial^S\Omega,g):=\sigma^N_k(g)L_g(\partial^S\Omega). $$ In this notation we always indicate the Steklov part of the boundary at the second place. Sometimes we also use the notation $\sigma^N_k(\Omega, \partial^S\Omega,g)$ for $\sigma^N_k(\Omega,g)$ to emphasize that the Steklov boundary condition is imposed on $\partial^S\Omega$. \begin{remark} Consider $\Omega$ as a surface with Lipschitz boundary. 
It also follows from~\cite[Theorem $A_k$]{kokarev2014variational} that the quantity $\overline\sigma^N_k(\Omega, \partial^S\Omega,g)$ is bounded from above on $[g]$, and we can define the invariant $\sigma^{N*}_k(\Omega, \partial^S\Omega,[g])$ in the same way as the invariant $\sigma^{*}_k(\Sigma,[g])$. \end{remark} Theorem~\ref{conf&conv} enables us to establish the value of $I^\sigma_k$. \begin{theorem} \label{disproof} Let $\Sigma$ be a compact surface with boundary. Then one has $I^\sigma_k(\Sigma)=I^\sigma_k(\mathbb D^2)=2\pi k$. \end{theorem} \subsection{Discussion} Let us discuss the estimate obtained in Theorem~\ref{non-bound}. The first estimate on $\overline{\sigma}_1(\Sigma,g)$, where $\Sigma$ is a non-orientable surface of genus $\gamma$ with boundary, was obtained in the paper \cite{MR3167132}. It reads $$ \overline{\sigma}_1(\Sigma,g) \leq 24\pi (\gamma+1), $$ if $\gamma\geq 1$ and $$ \overline{\sigma}_1(\Sigma,g) \leq 12\pi, $$ if $\gamma=0$. Moreover, it follows from the papers~\cite{kokarev2014variational, MR3579963} that \begin{gather}\label{Kokarev} \overline{\sigma}_1(\Sigma,g) \leq 16\pi \Big[\frac{\gamma+3}{2}\Big], \end{gather} where $[x]$ stands for the integer part of the number $x$. Very recently, in the paper~\cite{karpukhin-stern}, estimate~\eqref{Kokarev} has been improved and extended to $k=2$: considering $\Sigma$ as a domain with smooth boundary on a closed surface $M$, one has \begin{gather}\label{Karpukhin} \overline{\sigma}_k(\Sigma,g) \leq \Lambda_k(M),~k=1,2. \end{gather} In this estimate $\Lambda_k(M):=\sup_{g\in \mathcal R(M)} \lambda_k(g)\operatorname{Vol}(M,g)$, where $\lambda_k(g)$ is the $k-$th Laplace eigenvalue of the metric $g$, $\operatorname{Vol}(M,g)$ is the volume of $M$ in the metric $g$ and $\mathcal R(M)$ is the set of Riemannian metrics on $M$. Note that estimate~\eqref{Karpukhin} does not depend on the number of boundary components.
Combining estimate~\eqref{Karpukhin} with our estimate we get $$ \overline{\sigma}_k(\Sigma,g) \leq \min\{\Lambda_k(M), 4\pi k(\gamma+2l)\},~k=1,2. $$ In particular, for the M\"obius band one has $$ \overline{\sigma}_k(\mathbb{MB},g) \leq \min\{\Lambda_k(\mathbb{RP}^2), 8\pi k\},~k=1,2, $$ since $\mathbb{MB} \subset \mathbb{RP}^2$. The value $\Lambda_k(\mathbb{RP}^2)$ is known for all $k$ (see~\cite{karpukhin2019index}): $\Lambda_k(\mathbb{RP}^2)=4\pi(2k+1)$. Hence $$ \overline{\sigma}_k(\mathbb{MB},g) \leq \min\{4\pi(2k+1), 8\pi k\}=8\pi k,~k=1,2. $$ In the paper~\cite{fraser2016sharp} it was shown that $\overline{\sigma}_1(\mathbb{MB},g) \leq 2\pi\sqrt{3}$, which is obviously $\leq 8\pi$. We proceed with the discussion of the functional $I^\sigma_k$. Unlike Theorem 1.4 in \cite{karpukhin2019friedlander}, Theorem \ref{disproof} says nothing about conformal classes on which the value $I^\sigma_k(\Sigma)$ is attained. We conjecture the following. \begin{conjecture}\label{Conj1} The infimum $I^\sigma_k(\Sigma)$ is attained if and only if $\Sigma$ is diffeomorphic to the disc $\mathbb D^2$. \end{conjecture} Note that this conjecture would be a corollary of the following one. \begin{conjecture}\label{Conj2} Let $\Sigma$ be a compact surface non-diffeomorphic to the disc. Then for every conformal class $c$ on $\Sigma$ one has $$ \sigma^*_1(\Sigma,c)>\sigma^*_1(\mathbb D^2)=2\pi. $$ \end{conjecture} This conjecture is an analog of the Petrides rigidity theorem for the first conformal Laplace eigenvalue \cite[Theorem 1]{petrides2014existence}. Recently this conjecture has been confirmed in the case of the cylinder and the M\"obius band (see \cite{matthiesen2020remark}). We plan to tackle Conjectures~\ref{Conj1} and~\ref{Conj2} in subsequent papers. Let us discuss the analogy between the quantity $I^\sigma_k$ and the Friedlander-Nadirashvili invariant of closed surfaces $I_k$.
In the paper~\cite{karpukhin2019friedlander} it was conjectured that $I_k$ are invariants of cobordisms of closed surfaces (see Conjecture 1.8). Similarly, one can see that $I^\sigma_k$ are invariants of cobordisms of compact surfaces with boundary. Let us recall that two compact surfaces with boundary $(\Sigma_1,\partial\Sigma_1)$ and $(\Sigma_2,\partial\Sigma_2)$ are called cobordant if there exists a 3-dimensional \textit{manifold with corners} $\Omega$ whose boundary is $\Sigma_1\cup_{\partial\Sigma_1} W \cup_{\partial\Sigma_2}\Sigma_2$, where $W$ is a cobordism of $\partial\Sigma_1$ and $\partial\Sigma_2$ (i.e. $W$ is a surface with boundary $\partial\Sigma_1\sqcup\partial\Sigma_2$). Following~\cite{borodzik2016morse} we denote a cobordism of two surfaces $(\Sigma_1,\partial\Sigma_1)$ and $(\Sigma_2,\partial\Sigma_2)$ by $(\Omega;\Sigma_1,\Sigma_2,W;\partial\Sigma_1,\partial\Sigma_2)$. One can easily see that the cobordisms of surfaces with boundary are trivial. Indeed, we can construct the following cobordism of a surface $(\Sigma,\partial\Sigma)$ and $(\O, \O)$: $(\Sigma\times [0,1];\Sigma\times \{0\}, \O,\partial\Sigma\times [0,1]\cup\Sigma\times \{1\};\partial\Sigma, \O)$. A fundamental fact about cobordisms of surfaces with boundary is \textit{Theorem about splitting cobordisms} (see~\cite[Theorem 4.18]{borodzik2016morse}) which says that every cobordism of compact surfaces with boundary can be split into a sequence of cobordisms given by a handle attachment and cobordisms given by a \textit{half-handle} attachment. We refer to~\cite{borodzik2016morse} for definitions and further information about cobordisms of compact manifolds with boundary. Analysing the proof of Theorem~\ref{disproof} one can remark that the value of $I^\sigma_k$ does not change under handle and half-handle attachments. Since by this procedure any surface $\Sigma$ can be reduced to the disc, we get $I^\sigma_k(\Sigma)=I^\sigma_k(\mathbb D^2)=2\pi k$. 
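Let us also illustrate the value $2\pi k$ by the classical picture for the disc; the facts below are standard, and we state them without proof. For the flat unit disc the Steklov eigenfunctions are $1$, $r^m\cos m\theta$ and $r^m\sin m\theta$, $m\geq 1$, so that
$$
\sigma_0(\mathbb D^2)=0,\qquad \sigma_{2m-1}(\mathbb D^2)=\sigma_{2m}(\mathbb D^2)=m,
$$
and hence $\bar\sigma_k(\mathbb D^2,g_{eucl})=2\pi\big\lceil \tfrac{k}{2}\big\rceil$. On the other hand, the Hersch-Payne-Schiffer inequality yields $\bar\sigma_k(\mathbb D^2,g)\leq 2\pi k$ for every metric $g$ on the disc, and this bound is approached by a sequence of metrics degenerating to a disjoint union of $k$ identical discs. Therefore $\sigma^*_k(\mathbb D^2)=2\pi k$: for $k=1$ the supremum is attained by the flat metric by Weinstock's inequality, while for $k\geq 2$ it is not attained.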
\subsection*{Plan of the paper.} The paper is organized in the following way. In Section~\ref{analysis} we collect all the analytic facts which are necessary for the proof of Theorem~\ref{conf&conv}. The main result here is Proposition~\ref{subdomain}. In Section~\ref{appendix4} we prove Theorem~\ref{non-bound} using the techniques developed in the previous section. Section~\ref{geometry} constitutes the geometric part of the paper. Here we describe convergence on the moduli space of conformal structures on a surface with boundary. Section~\ref{proofconf&conv} is devoted to the proof of Theorem~\ref{conf&conv}. In Section~\ref{main theorem proof} we deduce Theorem~\ref{disproof} from Theorem~\ref{conf&conv}. Finally, Section~\ref{appendix} contains some auxiliary technical results. \subsection*{Acknowledgements.} The author would like to express his gratitude to Iosif Polterovich, Mikhail Karpukhin, Alexandre Girouard and Bruno Colbois for stimulating discussions and useful remarks during the preparation of the paper. The author is also thankful to the reviewers for valuable remarks and helpful suggestions. This research is a part of the author's PhD thesis at the Universit\'e de Montr\'eal under the supervision of Iosif Polterovich. This work is supported by the Ministry of Science and Higher Education of the Russian Federation: agreement no. 075-03-2020-223/3 (FSSF-2020-0018). \medskip \section{Analytic background}\label{analysis} Here we provide the necessary analytic background that we will use in the proof of Theorem~\ref{conf&conv} in Section~\ref{proofconf&conv}. The propositions in this section are analogs of the propositions in \cite[Section 4]{karpukhin2019friedlander}. We postpone the proof of a proposition to Section \ref{appendix2} whenever it follows in exactly the same way as the proof of the analogous proposition in \cite[Section 4]{karpukhin2019friedlander}.
\subsection{Convergence of Steklov-Neumann spectrum} \label{convergence} We start with the following convergence result. \begin{lemma}\label{Neumann conv} Let $(M, g)$ be a compact Riemannian manifold with boundary. Consider a finite collection $\{ B_\epsilon(p_i) \}_{i=1}^l$ of geodesic balls of radius $\epsilon$ centred at some points $p_1,\ldots,p_l \in M$. Then the spectrum of the Steklov-Neumann problem \begin{gather} \label{use} \begin{cases} \Delta_gu=0&\text{in $M\setminus \cup^l_{i=1}B_\epsilon(p_i)$},\\ \frac{\partial u}{\partial n}=0&\text{on $\cup^l_{i=1} \partial B_\epsilon(p_i) \setminus \partial M$},\\ \frac{\partial u}{\partial n}=\sigma^N_k(M \setminus \cup^l_{i=1}B_\epsilon(p_i), g) u&\text{on $\partial M \setminus \cup^l_{i=1} \partial B_\epsilon(p_i)$} \end{cases} \end{gather} converges to the Steklov spectrum of $(M,g)$ as $\epsilon \to 0$. \end{lemma} \begin{proof} For the sake of simplicity we only consider the case of a single ball, which we denote by $B_\epsilon$, centred at $p \in M$. First we consider the case when $B_\epsilon \cap \partial M \neq \varnothing$, i.e. $p\in \partial M$. Let $\mathcal E(u)$ denote the extension of the function $u$ by the unique solution of the problem \begin{gather*} \begin{cases} \Delta_g\mathcal E(u)=0&\text{in $B_\epsilon$},\\ \frac{\partial \mathcal E(u)}{\partial n}=0&\text{on $\partial M \cap \partial B_\epsilon$},\\ \mathcal E(u)= u&\text{on $\partial B_\epsilon \setminus \partial M$}. \end{cases} \end{gather*} {\bf Claim 1.} The extension operator $\mathcal E$ is uniformly bounded. \begin{proof} The proof is similar to the proof of uniform boundedness of the harmonic continuation operator into small geodesic balls \cite[Example 1]{rauch1975potential}. Fix $0<r<\epsilon$ and let $B_r$ denote a geodesic ball of radius $r$ with the same center as $B_\epsilon$.
One has \begin{gather}\label{first} ||\mathcal E(u)||^2_{L^2(B_r,g)} \leq C||u||^2_{L^2(M\setminus B_r,g)}+C||\nabla u||^2_{L^2(M\setminus B_r,g)} \end{gather} and \begin{gather}\label{second} ||\nabla \mathcal E(u)||^2_{L^2(B_r,g)} \leq C||\nabla u||^2_{L^2(M\setminus B_r,g)}. \end{gather} Inequality~\eqref{first} follows from estimate \eqref{finally} and the trace inequality $$ ||\mathcal E(u)||^2_{L^2(B_r,g)} \leq ||\mathcal E(u)||^2_{H^1(B_r,g)} \leq C||u||^2_{H^{1/2}(\partial B_r \setminus \partial M,g)} \leq C||u||^2_{H^1(M\setminus B_r,g)}. $$ Suppose that inequality~\eqref{second} were false. Then there exists a sequence of functions $\{u_n\}$ in $H^1(M\setminus B_r,g)$ such that $$ ||\nabla u_n||_{L^2(M\setminus B_r,g)} \leq 1/n $$ and $$ ||\nabla \mathcal E(u_n)||_{L^2(B_r,g)} \geq 1. $$ Consider $\alpha_n=\frac{1}{\operatorname{Vol}(M\setminus B_r,g)}\int_{M\setminus B_r}u_ndv_g$. We show that $$ ||u_n-\alpha_n||_{H^1(M\setminus B_r,g)} \leq C/n. $$ Indeed, by the generalized Poincar\'e inequality one has $$ ||u_n-\alpha_n||_{L^2(M\setminus B_r,g)} \leq C||\nabla u_n||_{L^2(M\setminus B_r,g)} \leq C/n; $$ moreover, $$ ||\nabla (u_n-\alpha_n)||_{L^2(M\setminus B_r,g)}=||\nabla u_n||_{L^2(M\setminus B_r,g)} \leq 1/n. $$ Note that $\mathcal E(u_n-\alpha_n)=\mathcal E(u_n)-\alpha_n$. Then \begin{gather*} ||\nabla \mathcal E(u_n)||_{L^2(B_r,g)}=||\nabla \mathcal E(u_n-\alpha_n)||_{L^2(B_r,g)} \leq || \mathcal E(u_n-\alpha_n)||_{H^1(B_r,g)} \leq \\ \leq ||u_n-\alpha_n||_{H^{1/2}(\partial B_r \setminus \partial M,g)} \leq C||u_n-\alpha_n||_{H^{1}(M\setminus B_r,g)} \leq C/n, \end{gather*} where we have used estimate \eqref{finally} and the trace inequality. This contradicts $||\nabla \mathcal E(u_n)||_{L^2(B_r,g)} \geq 1$; hence inequality~\eqref{second} holds.
Note that for any $\rho$ such that $\rho r<\epsilon$ the first inequality scales as $$ ||\mathcal E(u)||^2_{L^2(B_{\rho r},g)} \leq C||u||^2_{L^2(M\setminus B_{\rho r},g)}+C\rho^2||\nabla u||^2_{L^2(M\setminus B_{\rho r},g)}, $$ while the second inequality scales as $$ ||\nabla \mathcal E(u)||^2_{L^2(B_{\rho r},g)} \leq C||\nabla u||^2_{L^2(M\setminus B_{\rho r},g)}. $$ Therefore, $||\mathcal E(u)||^2_{H^1(B_{\rho r},g)} \leq C||u||^2_{L^2(M\setminus B_{\rho r},g)}+C||\nabla u||^2_{L^2(M\setminus B_{\rho r},g)}$ for $\epsilon$ small enough. \end{proof} {\bf Claim 2.} One has $$ \operatorname{limsup}_{\epsilon \to 0}\sigma^N_k(M\setminus B_\epsilon,g) \leq \sigma_k(M,g). $$ \begin{proof} We only consider the case of $B_\epsilon \cap \partial M\neq \varnothing$. The case of $B_\epsilon \cap \partial M=\varnothing$ is easier and follows by exactly the same arguments. The proof is similar to the proof of \cite[Theorem 3.5]{bogosel2017steklov}. Let $V_k$ be a $k-$dimensional subspace of $H^1(M,g)$ such that $$ \sigma_k(M,g)=\max_{u\in V_k\setminus \{0\}}\frac{\int_M|\nabla u|^2dv_g}{\int_{\partial M}u^2ds_g}. $$ Let $u_1,\ldots,u_k$ be an orthonormal basis in $V_k$. We modify the functions $u_i, i=1,\ldots,k$, as $$ u_{i,\epsilon}=u_i-\frac{1}{L(\partial M \setminus \partial B_\epsilon)}\int_{\partial M\setminus \partial B_\epsilon}u_ids_g. $$ Then $\int_{\partial M\setminus \partial B_\epsilon}u_{i,\epsilon}ds_g=0$. Consider the space $V_{k,\epsilon}:=span(u_{1,\epsilon},\ldots,u_{k,\epsilon})$. Since $\dim V_{k,\epsilon}=k$ one has $$ \sigma^N_k(M\setminus B_\epsilon,g) \leq \max_{u_\epsilon \in V_{k,\epsilon}\setminus\{0\}}\frac{\int_{M\setminus B_\epsilon}|\nabla u_\epsilon|^2dv_g}{\int_{\partial M\setminus \partial B_\epsilon}u_\epsilon^2ds_g}.
$$ Moreover, since the dimension of $V_{k,\epsilon}$ is finite, there exists a function $v_\epsilon\in V_{k,\epsilon}$ such that \begin{gather}\label{forclaim} \sigma^N_k(M\setminus B_\epsilon,g) \leq \frac{\int_{M\setminus B_\epsilon}|\nabla v_\epsilon|^2dv_g}{\int_{\partial M\setminus \partial B_\epsilon}v_\epsilon^2ds_g}. \end{gather} Let $v_\epsilon=\sum^k_{i=1}c_iu_{i,\epsilon}$. We define the function $v=\sum^k_{i=1}c_iu_{i} \in V_k\subset H^1(M,g)$. Note that $\nabla v_\epsilon=\sum^k_{i=1}c_i\nabla u_{i,\epsilon}=\sum^k_{i=1}c_i\nabla u_{i}=\nabla v$ on $M\setminus B_\epsilon$. Thus $\int_{M\setminus B_\epsilon}|\nabla v_\epsilon|^2dv_g=\int_{M\setminus B_\epsilon}|\nabla v|^2dv_g\to \int_{M}|\nabla v|^2dv_g$ as $\epsilon\to 0$. Moreover, it is easy to see that \begin{gather*} \int_{\partial M\setminus \partial B_\epsilon}v_\epsilon^2ds_g=\sum_ic^2_i\Big(\int_{\partial M\setminus\partial B_\epsilon}u^2_ids_g-\frac{1}{L(\partial M\setminus \partial B_\epsilon,g)}\Big(\int_{\partial M\setminus \partial B_\epsilon}u_ids_g\Big)^2\Big)+\\+\sum_{i\neq j}2c_ic_j\Big(\int_{\partial M\setminus \partial B_\epsilon}u_iu_jds_g-\frac{1}{L(\partial M\setminus \partial B_\epsilon,g)}\int_{\partial M\setminus \partial B_\epsilon}u_ids_g\int_{\partial M\setminus \partial B_\epsilon}u_jds_g\Big), \end{gather*} which converges to $\int_{\partial M}v^2ds_g$ as $\epsilon \to 0$. Then \eqref{forclaim} implies $$ \operatorname{limsup}_{\epsilon \to 0}\sigma^N_k(M\setminus B_\epsilon,g) \leq \operatorname{limsup}_{\epsilon \to 0} \frac{\int_{M\setminus B_\epsilon}|\nabla v_\epsilon|^2dv_g}{\int_{\partial M\setminus \partial B_\epsilon}v_\epsilon^2ds_g}= \frac{\int_{M}|\nabla v|^2dv_g}{\int_{\partial M}v^2ds_g} \leq \sigma_k(M,g). $$ \end{proof} Now we are ready to prove the lemma. The proof is similar to the proof of \cite[Lemma 3.2]{matthiesen2020sharp}. Let $u_\epsilon$ be a normalized $\sigma^N_k-$eigenfunction. By Claim 2 the family $\{u_\epsilon\}$ is uniformly bounded.
If $B_\epsilon \cap \partial M=\varnothing$ then we take the harmonic continuation into $B_\epsilon$. It is known that the operators of harmonic continuation into $B_\epsilon$ are uniformly bounded (see \cite[Example 1]{rauch1975potential}). Otherwise we extend $u_\epsilon$ into $B_\epsilon$ by $\mathcal E(u_\epsilon)$. By Claim 1 these operators are also uniformly bounded. Therefore, we get a sequence $\{\tilde u_\epsilon\}$ which is uniformly bounded in $H^1(M,g)$. Then there exists $\epsilon_l\to 0$ such that $\tilde u_{\epsilon_l} \rightharpoonup u$ in $H^1(M,g)$. Thus, $\tilde u_{\epsilon_l} \to u$ in $L^2(M,g)$ by the Rellich-Kondrachov embedding theorem. The standard elliptic estimates imply $u_{\epsilon_l} \to u$ in $C^\infty_{loc}(M\setminus\{p\})$. Consider a function $\varphi \in C^\infty_c(M\setminus\{p\})$ such that $\operatorname{supp}(\varphi) \subset M\setminus B_R$ for a ball $B_R$ centred at $p$ with $R$ fixed. Extracting a subsequence, by Claim 2 one can assume that $\sigma^N_k(M\setminus B_{\epsilon_l},g)\to \sigma$. Then we have \begin{gather*} \int_M\langle \nabla u, \nabla \varphi \rangle dv_g= \lim_{l\to \infty}\int_{M\setminus B_R}\langle \nabla u_{\epsilon_l}, \nabla \varphi \rangle dv_g=\\=\lim_{l\to \infty}\sigma^N_k(M\setminus B_{\epsilon_l},g)\int_{\partial M\setminus \partial B_{\epsilon_l}} u_{\epsilon_l} \varphi ds_g=\sigma\int_{\partial M}u\varphi ds_g. \end{gather*} Hence $u$ is a Steklov eigenfunction with eigenvalue $\sigma$. Thus all accumulation points of $\{\sigma^N_k(M\setminus B_{\epsilon_l},g)\}$ are in the Steklov spectrum of $M$. Our aim now is to show that $\sigma=\sigma_k(M,g)$. We will do this by showing that $u$ is orthogonal in $L^2(\partial M,g)$ to the first $k-1$ Steklov eigenfunctions of $(M,g)$. We argue by induction. Let $u_\epsilon$ be a first Steklov-Neumann eigenfunction of $(M\setminus B_\epsilon,g)$.
We have already shown that $\tilde u_\epsilon \rightharpoonup u$ in $H^1(M,g)$; then by the trace embedding theorem one has $\tilde u_\epsilon \to u$ in $H^{1/2}(\partial M,g)$ and hence in $L^2(\partial M,g)$. In particular, one has $||u_\epsilon-u||_{L^2(\partial M\setminus\partial B_\epsilon,g)}\to 0$ as $\epsilon\to 0$. Then \begin{gather*} |\int_{\partial M\setminus\partial B_\epsilon}(u_\epsilon -u)ds_g| \leq \int_{\partial M\setminus\partial B_\epsilon}|u_\epsilon -u|ds_g \leq \\ \leq L(\partial M\setminus \partial B_\epsilon,g)^{1/2}||u_\epsilon-u||_{L^2(\partial M\setminus\partial B_\epsilon,g)}, \end{gather*} which converges to $0$ as $\epsilon \to 0$. Since $\int_{\partial M\setminus\partial B_\epsilon}u_\epsilon ds_g=0$ one then has $$\lim_{\epsilon\to 0}\int_{\partial M\setminus\partial B_\epsilon}uds_g=\int_{\partial M}uds_g=0.$$ Therefore, $u$ cannot be a constant, and since by Claim 2 $\operatorname{limsup}_{\epsilon\to 0}\sigma^N_1(M\setminus B_\epsilon,g)=\sigma \leq \sigma_1(M,g)$ and $\sigma$ belongs to the Steklov spectrum of $(M,g)$, we conclude that $u$ is a first Steklov eigenfunction of $(M,g)$ and $\sigma=\sigma_1(M,g)$. Now suppose that $\operatorname{limsup}_{\epsilon\to 0}\sigma^N_i(M\setminus B_\epsilon,g)= \sigma_i(M,g)$ for any $i<k$. Let $u_\epsilon$ be a $k-$th Steklov-Neumann eigenfunction of $(M\setminus B_\epsilon,g)$. Since $\tilde u_\epsilon \rightharpoonup u$ in $H^1(M,g)$, the trace embedding theorem implies that $\tilde u_\epsilon \to u$ in $H^{1/2}(\partial M,g)$, in particular $\tilde u_\epsilon \to u$ in $L^{2}(\partial M,g)$, whence $||u_\epsilon-u||_{L^2(\partial M\setminus \partial B_\epsilon,g)}\to 0$. Let $v_\epsilon$ be an $i-$th Steklov-Neumann eigenfunction of $(M\setminus B_\epsilon,g)$ with $i<k$. Then $\int_{\partial M\setminus \partial B_\epsilon}u_\epsilon v_\epsilon ds_g=0$; moreover, by the induction hypothesis $v_\epsilon$ converges to an $i-$th Steklov eigenfunction $v$ of $(M,g)$.
One has \begin{gather*} |\int_{\partial M\setminus\partial B_\epsilon}(u_\epsilon v_\epsilon -uv)ds_g| \leq \\ \leq \int_{\partial M\setminus\partial B_\epsilon}|u_\epsilon v_\epsilon -uv|ds_g= \int_{\partial M\setminus\partial B_\epsilon}|u_\epsilon v_\epsilon -u_\epsilon v+u_\epsilon v-uv|ds_g \leq \\ \leq \int_{\partial M\setminus\partial B_\epsilon}|u_\epsilon (v_\epsilon -v)|ds_g+\int_{\partial M\setminus\partial B_\epsilon}|v (u_\epsilon -u)|ds_g \leq \\ \leq \Big(\int_{\partial M\setminus\partial B_\epsilon}u^2_\epsilon ds_g \Big)^{1/2}\Big(\int_{\partial M\setminus\partial B_\epsilon}(v_\epsilon-v)^2 ds_g \Big)^{1/2}+\\+\Big(\int_{\partial M\setminus\partial B_\epsilon}v^2 ds_g \Big)^{1/2}\Big(\int_{\partial M\setminus\partial B_\epsilon}(u_\epsilon-u)^2 ds_g \Big)^{1/2}\to 0~\text{as $\epsilon \to 0$}. \end{gather*} Hence $\int_{\partial M\setminus\partial B_\epsilon}u_\epsilon v_\epsilon ds_g\to \int_{\partial M}u v ds_g$ as $\epsilon \to 0$. But $\int_{\partial M\setminus\partial B_\epsilon}u_\epsilon v_\epsilon ds_g=0$ for all $\epsilon$. Thus $\int_{\partial M}u v ds_g=0$. We conclude that $u$ is orthogonal in $L^2(\partial M,g)$ to the first $k-1$ Steklov eigenfunctions. Thus $\sigma=\sigma_k(M,g)$. \end{proof} We endow the set of Riemannian metrics on $\Sigma$ with the $C^\infty-$topology. Then the following ``continuity'' result holds. \begin{proposition}\label{N-cont} Let $\Sigma$ be a surface with boundary and $\Omega\subset \Sigma$ be a Lipschitz domain. Let the sequence of Riemannian metrics $g_m$ on $\Sigma$ converge in the $C^\infty-$topology to the metric $g$. Then $\sigma^*_k(\Sigma,[g_m])\to\sigma^*_k(\Sigma,[g])$. Similarly, if $h_m|_{\overline\Omega}$ converge to $g|_{\overline\Omega}$ in the $C^\infty$-topology, then $\sigma^{N*}_k(\Omega, \partial^S\Omega, [h_m|_{\overline\Omega}])\to\sigma^{N*}_k(\Omega,\partial^S\Omega, [g|_{\overline\Omega}])$.
\end{proposition} \begin{proof} We provide a proof for the functional $\sigma^*_k(\Sigma,[g])$. The proof for the functional $\sigma^{N*}_k(\Omega,\partial^S\Omega,[g|_{\overline\Omega}])$ follows by exactly the same arguments. Choose any $\varepsilon>0$ and consider $m$ large enough. One has $$ \frac{1}{1+\varepsilon} f g_m(v,v) \leq f g(v,v) \leq (1+\varepsilon) f g_m(v,v),\quad \forall v \in T\Sigma\setminus\{0\}, $$ where $f$ is any positive smooth function on $\Sigma$. Then by \cite[Proposition 32]{colbois2016steklov} one has $$ \frac{1}{(1+\varepsilon)^{6}}\bar \sigma_k(\Sigma,fg_m) \leq\bar \sigma_k(\Sigma,fg) \leq (1+\varepsilon)^{6}\bar \sigma_k(\Sigma,fg_m). $$ Taking the supremum over all $f$ yields $$ \frac{1}{(1+\varepsilon)^{6}}\sigma^*_k(\Sigma,[g_m]) \leq\sigma^*_k(\Sigma,[g]) \leq (1+\varepsilon)^{6}\sigma^*_k(\Sigma,[g_m]), $$ which completes the proof since this inequality holds for any $\varepsilon>0$. \end{proof} \subsection{Discontinuous metrics}\label{main lemma proof} Let $\Sigma$ be a compact surface with boundary. Consider a set of pairwise disjoint Lipschitz domains $\{\Omega_i\}^s_{i=1}$ in $\Sigma$ such that $\Sigma=\bigcup^s_{i=1} \overline\Omega_i$. Let $C^{\infty}_+(\Sigma,\{\Omega_i\})$ denote the set of functions $\rho$ on $\bigcup^s_{i=1} \overline\Omega_i$ such that $\rho|_{\Omega_i} = \rho_i\in C^\infty(\overline\Omega_i)$ is positive for every $i$. Similarly, $C^{\infty}(\Sigma,\{\Omega_i\})$ denotes the set of ``smooth'' functions on $\bigcup^s_{i=1} \overline\Omega_i$. We introduce discontinuous metrics on $\Sigma$ defined as $\rho g\in[g]$, where $\rho\in C^{\infty}_+(\Sigma,\{\Omega_i\})$ and $g$ is a genuine Riemannian metric. The set $C^{k}(\Sigma,\{\Omega_i\})$ of functions which are of class $C^k$ in every $\overline\Omega_i$ is defined in a similar way.
The Steklov spectrum of the metric $\rho g$ is defined as the set of critical values of the Rayleigh quotient $$ R_{\rho g}[\varphi]=\frac{\displaystyle\int_\Sigma | \nabla_g \varphi|^2_g dv_g}{\displaystyle\int_{\partial \Sigma} \rho^{\frac{1}{2}} \varphi^2 ds_g}. $$ This is the Rayleigh quotient of the \textit{Steklov problem with density $\rho$.} The Steklov spectrum with density $\rho$ is well-defined for any non-negative $\rho \in L^\infty(\Sigma,g)$ (see~\cite[Proposition 1.3]{kokarev2014variational}). Elliptic regularity implies that the eigenfunctions are at least $1/2-$H\"older continuous on $\partial\Sigma$. Therefore, Steklov eigenvalues of the metric $\rho g$ admit the following variational characterization $$ \sigma_k(\Sigma,\rho g) = \inf_{E_{k+1}} \sup_{\varphi\in E_{k+1}} R_{\rho g}[\varphi], $$ where $E_{k+1}$ ranges over all $(k+1)$-dimensional subspaces of $C^0(\Sigma)$. We introduce the following notation \begin{align*} \sigma^*_k(\Sigma,\{\Omega_i\},[g])=\sup \{ \bar\sigma_k(\rho g)~\vert~ \rho \in C^{\infty}_+(\Sigma,\{\Omega_i\})\}, \end{align*} where $\bar\sigma_k(\rho g)$ is the normalized $k$-th eigenvalue given by $$ \bar\sigma_k(\rho g) = \sigma_k(\rho g) L_{\rho g}(\partial \Sigma). $$ The following lemma asserts in particular that the quantity $\sigma^*_k(\Sigma,\{\Omega_i\},[g])$ is well-defined. \begin{lemma}\label{identity} Let $(\Sigma,g)$ be a Riemannian surface with boundary. Consider a set of pairwise disjoint Lipschitz domains $\Omega_i$ such that $\Sigma=\bigcup^s_{i=1} \overline\Omega_i$. Then one has \begin{align*} \sigma^*_k(\Sigma,\{\Omega_i\},[g])=\sigma^*_k(\Sigma,[g]). \end{align*} \end{lemma} \begin{proof} The proof follows the same steps as the proof of Lemma 2 in the paper \cite{MR1717641}. We provide it here. Since the set of discontinuous metrics is larger than the set of continuous ones, we have $\sigma^*_k(\Sigma,\{\Omega_i\},[g]) \geq \sigma^*_k(\Sigma,[g])$.
Therefore, we have to prove that \begin{gather*} \sigma^*_k(\Sigma,\{\Omega_i\},[g]) \leq \sigma^*_k(\Sigma,[g]), \end{gather*} which is equivalent to \begin{align}\label{lambda_inequality} \sigma_k(\Sigma,\rho g) \leq \sigma^*_k(\Sigma,[g]), \end{align} where $\rho \in C^{\infty}_{+}(\Sigma,\{\Omega_i\})$ and $ \int_{\partial\Sigma} \rho^{1/2}ds_g =1$. Let $E_i$ denote the eigenspace corresponding to the $i$-th Steklov eigenvalue of the metric $\rho g$. We put \begin{align*} S=\{u\in H^1(\Sigma,\rho g)~|~u \perp_{L^2(\partial\Sigma, \rho g)} E_0,\dots,E_{k-1}, \int_{\partial\Sigma} \rho^{1/2}u^2 ds_g =1\}. \end{align*} For any $\varepsilon>0$ we consider the functional \begin{align*} \mathcal{F}_\rho [u]:=\int_{\Sigma}|\nabla_gu|^2dv_g-(\sigma_k(\Sigma,\rho g)-\varepsilon)\int_{\partial\Sigma}\rho^{1/2}u^2ds_g. \end{align*} It immediately follows that $\mathcal{F}_\rho[u] \geq\varepsilon, \forall u\in S$. Let $0<a:=\min_{\cup\{\Omega_i\}}\rho$ and $\max_{\cup\{\Omega_i\}}\rho=:b<\infty$. We define a smooth non-decreasing function $\chi(t)$ on $\mathbb{R}_+$ that equals zero if $t<1/2$ and equals $1$ when $t>1$, and define the following parametrized family of functions: \begin{equation*} \rho_\delta(x) = \begin{cases} \rho(x) &\text{if $x \notin U$}\\ \rho(x)\chi \Big(\frac{d^2(x)}{\delta}\Big)+b\Big(1-\chi \Big(\frac{d^2(x)}{\delta}\Big)\Big) &\text{if $x \in U$} \end{cases} \end{equation*} where $d$ is the distance function from a point $x\in \Sigma$ to $\cup\{\partial\Omega_i\cap \partial \Omega_j\},~i\neq j$, and $U$ is a sufficiently small tubular neighborhood of $\cup\{\partial\Omega_i\cap \partial \Omega_j\},~i\neq j$, where $d^2$ is smooth. We have: \begin{enumerate}[(i)] \item $\Big(\frac{a}{b}\Big)\rho \leq \rho_\delta \leq \Big(\frac{b}{a}\Big) \rho$; \item $\lim_{\delta \to 0} \int_{\partial\Sigma} \rho^{1/2}_\delta ds_g=1$; \item $\lim_{\delta \to 0} \int_{\partial\Sigma} |\rho^{1/2}_\delta - \rho^{1/2}|^q ds_g=0, \forall q<\infty$.
\end{enumerate} We want to prove that $\mathcal{F}_{\rho_\delta}[u] \geq 0, \forall u \in S$. Consider $T=(\sigma_k(\Sigma,\rho g) -\varepsilon)\sqrt\frac{b}{a}$ and divide the set $S$ into two parts $S_1$ and $S_2$: \begin{gather*} S_1:=\{u \in S | \int_{\Sigma} |\nabla_g u|^2 dv_g \geq T\}, \\ S_2:=S \setminus S_1= \{u \in S | \int_{\Sigma} |\nabla_g u|^2 dv_g < T\}. \end{gather*} If $u \in S_1$ then \begin{gather*} \mathcal{F}_{\rho_\delta}[u]=\int_{\Sigma}|\nabla_gu|^2dv_g-(\sigma_k(\Sigma,\rho g)-\varepsilon)\int_{\partial\Sigma}\rho_\delta^{1/2}u^2ds_g \geq\\ \geq (\sigma_k(\Sigma,\rho g)-\varepsilon)\Big(\sqrt\frac{b}{a}-\int_{\partial\Sigma}\rho_\delta^{1/2}u^2ds_g\Big) \geq \\ \geq(\sigma_k(\Sigma,\rho g)-\varepsilon)\sqrt\frac{b}{a}(1-\int_{\partial\Sigma}\rho^{1/2}u^2ds_g)=0. \end{gather*} Let us show that $||u||_{L^p(\partial \Sigma,g)}$ with $p\geq 2$ is bounded for any $u \in S_2$. We consider the following operator $L[u]:=\int_{\partial \Sigma}u\rho^{1/2}ds_g$. For this operator one has $$ |L[u]|\leq C\int_{\partial\Sigma}|u|ds_g\leq C||u||_{L^2(\partial\Sigma,g)}\leq C||u||_{H^1(\Sigma,g)}, $$ which implies that $L\in H^{-1}(\Sigma,g)$. Here we used in order the boundedness of $\rho$, the Cauchy-Schwarz and the trace inequalities. We also used the convention that $C$ denotes any positive constant depending only on $\Sigma$. \cite[Lemma 8.3.1]{MR1411441} applied to the operator $L$ implies that there exists a constant $C>0$ depending only on $\Sigma$ such that $$ ||u||^2_{L^2(\Sigma,g)}\leq C||\nabla u||^2_{L^2(\Sigma,g)}<CT, $$ where we used the fact that $L[u]=0~\forall u\in S$. By the trace theorem one then has $$ ||u||^2_{H^{1/2}(\partial\Sigma,g)}\leq C'||u||^2_{H^{1}(\Sigma,g)}<C'', $$ where $C''=C'(CT+T)$. Finally by the Sobolev embedding theorem (see for instance \cite[Theorem 6.9]{MR2944369}) we get $$ ||u||_{L^p(\partial \Sigma,g)}\leq C'''||u||_{H^{1/2}(\partial\Sigma,g)}<\tilde C~\forall 2\leq p<\infty, $$ where $\tilde C=C'''\sqrt{C''}$. 
Therefore, if $u \in S_2$ then \begin{gather*} \mathcal{F}_{\rho_\delta}[u]=\int_{\Sigma}|\nabla_gu|^2dv_g-(\sigma_k(\Sigma,\rho g)-\varepsilon)\int_{\partial\Sigma}\rho_\delta^{1/2}u^2ds_g =\\=\int_{\Sigma}|\nabla_gu|^2dv_g- (\sigma_k(\Sigma,\rho g)-\varepsilon)-(\sigma_k(\Sigma,\rho g)-\varepsilon)\int_{\partial\Sigma}(\rho_\delta^{1/2}-\rho^{1/2})u^2ds_g \geq \\ \geq \varepsilon-(\sigma_k(\Sigma,\rho g)-\varepsilon)\Big(\int_{\partial\Sigma}(\rho_\delta^{1/2}-\rho^{1/2})^qds_g\Big)^{1/q}\Big(\int_{\partial\Sigma}|u|^pds_g\Big)^{2/p} \geq\\ \geq \varepsilon-(\sigma_k(\Sigma,\rho g)-\varepsilon)\frac{\varepsilon}{\sigma_k(\Sigma,\rho g)-\varepsilon}=0. \end{gather*} In the last inequality we used that for $\delta$ small enough $$\Big(\int_{\partial\Sigma}(\rho_\delta^{1/2}-\rho^{1/2})^qds_g\Big)^{1/q}\Big(\int_{\partial\Sigma}|u|^pds_g\Big)^{2/p}\leq\frac{\varepsilon}{\sigma_k(\Sigma,\rho g)-\varepsilon}, $$ since $\int_{\partial\Sigma}(\rho_\delta^{1/2}-\rho^{1/2})^qds_g \to 0$ as $\delta \to 0$ and $\int_{\partial\Sigma}|u|^pds_g$ is bounded uniformly in $u \in S_2$. Hence, $\mathcal{F}_{\rho_\delta}[u] \geq 0$ for all $u \in S$, which implies $\sigma_k(\Sigma,\rho_\delta g) \geq \sigma_k (\Sigma, \rho g) -\varepsilon$. We then have \begin{gather*} \bar\sigma_k(\Sigma,\rho_\delta g)=\sigma_k(\Sigma,\rho_\delta g)L_{\rho_\delta g}(\partial \Sigma) \geq \sigma_k (\Sigma, \rho g) L_{\rho_\delta g}(\partial \Sigma) -\varepsilon L_{\rho_\delta g}(\partial \Sigma). \end{gather*} Therefore, $\sigma^*_k (\Sigma, [g]) \geq \sigma_k (\Sigma, \rho g) L_{\rho_\delta g}(\partial \Sigma) -\varepsilon L_{\rho_\delta g}(\partial \Sigma)$. Letting $\delta \to 0$ one then gets $\sigma^*_k (\Sigma, [g]) \geq \sigma_k (\Sigma, \rho g)-\varepsilon$, which implies \eqref{lambda_inequality} since $\varepsilon$ is arbitrarily small. \end{proof} Lemma \ref{identity} implies the following lemma whose proof is postponed to Section \ref{appendix2}. \begin{lemma} \label{identity2} Let $(\Sigma,g)$ be a Riemannian surface with boundary.
Consider a set of pairwise disjoint domains $\Omega_i$ such that $\Sigma=\bigcup^s_{i=1} \overline\Omega_i$ and $\Omega_i\cap \partial\Sigma=\partial^S\Omega_i$. Let $(\Omega,h) = \sqcup(\overline\Omega_i,g|_{\overline\Omega_i})$ and $\partial^S\Omega=\sqcup \partial^S\Omega_i$. Then for all $k \geq 0$ one has $$ \sigma^*_k(\Sigma,[g]) \geq \sigma^{N*}_k(\Omega,\partial^S\Omega, [h]). $$ \end{lemma} \subsection{Steklov-Neumann spectrum of a subdomain.} This section is devoted to the following technical lemma. \begin{lemma}\label{liminf} Let $\rho_\delta \in C^\infty_+(\Sigma,\{\Omega,\Sigma\setminus\Omega\})$ be such that $\rho_\delta|_{\Omega}\equiv 1$ and $\rho_\delta|_{\Sigma\setminus\Omega}\equiv\delta$. Then one has $$\liminf_{\delta \to 0}\sigma_k(\rho_\delta g) \geq \sigma^{N}_k(\Omega,\partial^S\Omega, g),$$ where $\sigma^{N}_k(\Omega,\partial^S\Omega, g)$ is the $k$-th Steklov-Neumann eigenvalue of the domain $(\Omega,g)$ and $\partial^S\Omega=\partial\Sigma\cap\Omega \neq\O$. \end{lemma} \begin{proof} The idea of the proof comes from~\cite[Section 2, Step 2]{enciso2015}. {\bf Case I.} Let $\Omega^c$ denote $\operatorname{int}(\Sigma\setminus\Omega)$ and $\partial^S\Omega^c=\partial \Omega^c \cap \partial \Sigma$. First we consider the case when $\Omega^c \cap \partial \Sigma \neq \O$. Since by elliptic regularity eigenfunctions of the Steklov problem with bounded density are in $H^1$ on the boundary, we can restrict ourselves to the space $H^1(\partial\Sigma,g)$. More precisely, let $\psi$ be an eigenfunction with eigenvalue $\sigma$; then by elliptic regularity $$ ||\psi||^2_{H^1(\partial\Sigma,\rho_\delta g)} \leq C(||\sigma\psi||^2_{L^2(\partial\Sigma,\rho_\delta g)}+||\psi||^2_{L^2(\partial\Sigma,\rho_\delta g)}) \leq C(\sigma^2+1)||\psi||^2_{L^2(\partial\Sigma,\rho_\delta g)} $$ for some positive constant $C$. This implies $$ \frac{||\nabla \psi||^2_{L^2(\partial\Sigma,\rho_\delta g)}}{||\psi||^2_{L^2(\partial\Sigma,\rho_\delta g)}} \leq C(\sigma^2+1)-1.
$$ More generally we see that if $\varphi \in \operatorname{span}\langle \psi_0,\ldots,\psi_k \rangle$, where $\psi_i$ is in the $i$-th eigenspace of $(\Sigma,g_\delta)$, then there exists a constant $C_k>0$ such that $$ \frac{||\nabla \varphi ||^2_{L^2(\partial\Sigma,\rho_\delta g)}}{|| \varphi ||^2_{L^2(\partial\Sigma,\rho_\delta g)}} \leq C_k. $$ Therefore, we set $$ \mathcal H:=\{\varphi \in H^1(\partial\Sigma,g)~|~\frac{||\nabla \varphi ||^2_{L^2(\partial\Sigma,\rho_\delta g)}}{|| \varphi ||^2_{L^2(\partial\Sigma,\rho_\delta g)}} \leq C_k\}, $$ $$ \mathcal{H}_1:=\{\varphi \in \mathcal H ~|~\frac{\partial\hat\varphi}{\partial n}=0~\text{on $\partial^S\Omega^c$}\}, $$ where $\hat \varphi$ stands for the harmonic continuation of $\varphi$ into $\Sigma$, and $$ \mathcal{H}_2:=\{\varphi \in \mathcal H ~|~\varphi \in H^1_0(\partial^S\Omega^c,g),~\varphi_{|_\Omega}=0\}. $$ {\bf Claim 1.} One has $$ \int_\Sigma\langle\nabla \hat\varphi, \nabla \hat\psi \rangle_{\tilde g} dv_{\tilde g}=0, \quad \forall \varphi \in \mathcal{H}_1, \psi \in \mathcal{H}_2, $$ for any metric $\widetilde g\in[g]$. \begin{proof} \begin{gather*} \int_\Sigma\langle\nabla \hat\varphi, \nabla \hat\psi \rangle_{\tilde g} dv_{\tilde g}=\int_\Sigma \Delta_{\tilde g} \hat\varphi \hat\psi dv_{\tilde g}+\int_{\partial \Sigma}\frac{\partial\hat\varphi}{\partial \tilde n}\psi ds_{\tilde g}=\\=\int_{\partial^S\Omega^c}\frac{\partial\hat\varphi}{\partial \tilde n}\psi ds_{\tilde g}+\int_{\partial^S\Omega}\frac{\partial\hat\varphi}{\partial \tilde n}\psi ds_{\tilde g}=0, \end{gather*} since $\hat\varphi$ is harmonic, $\frac{\partial\hat\varphi}{\partial \tilde n}=0$ on $\partial^S\Omega^c$ for $\varphi \in \mathcal{H}_1$ (a conformal change of the metric only rescales the normal derivative) and $\psi$ vanishes on $\partial^S\Omega$ for $\psi \in \mathcal{H}_2$. \end{proof} For the sake of simplicity we use the symbols $\sigma^\delta_k$ for $\sigma_k(\rho_\delta g)$, $g_\delta$ for $\rho_\delta g$ and $R_\delta$ for the Rayleigh quotient $$ R_\delta[\varphi]=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi|^2_{g_\delta}dv_{g_\delta}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}.
$$ {\bf Claim 2.} \label{bound} There exists a constant, which we also denote by $C_k>0$, such that $\sigma^\delta_k \leq C_k$. \begin{proof} Theorem \ref{Kor} implies that there exists a constant $C(k)>0$ such that $$ \sigma^*_k(\Sigma, [g]) \leq C(k). $$ By Lemma~\ref{identity} for every $\delta$ one has $$ \sigma^\delta_k L_{g_\delta}(\partial \Sigma) \leq \sigma^*_k(\Sigma, [g]) \leq C(k). $$ Therefore, $$ \sigma^\delta_k \leq \frac{C(k)}{L_{g_\delta}(\partial \Sigma)}=\frac{C(k)}{L_{g}(\partial^S\Omega)+ \delta ^{1/2}L_{g}(\partial^S \Omega^c)} \leq \frac{C(k)}{L_{g}(\partial^S \Omega)}=C_k. $$ \end{proof} Let $W_k$ be the set of $(k+1)$-dimensional subspaces of $\mathcal H$ satisfying the condition that ${R_\delta}|_{W_k} \leq C_k$. In particular, Claim 2 implies that the space spanned by the first $k+1$ eigenfunctions is in $W_k$, i.e. $W_k\ne\O$. Consider the operator $\mathcal E$ defined in Section \ref{convergence} by \begin{gather*} \begin{cases} \Delta_{g} \mathcal E(u)=0&\text{in $\Sigma$},\\ \frac{\partial \mathcal E(u)}{\partial n}=0&\text{on $\partial^S\Omega^c$},\\ \mathcal E(u)=u&\text{on $\partial^S\Omega$}. \end{cases} \end{gather*} For a function $\varphi \in H^1(\partial \Sigma,g)$ we fix its decomposition $\varphi_1+\varphi_2$ with \begin{align*} \varphi_1= \begin{cases} \varphi&\text{on $\partial^S\Omega$},\\ \mathcal E(\varphi)&\text{on $\partial^S\Omega^c$} \end{cases} \end{align*} and $\varphi_2=\varphi_1-\varphi$. Note that $\hat \varphi_1=\mathcal E(\varphi_1).$ {\bf Claim 3.} For every $\varphi \in V \in W_k$ there exists a constant $C>0$ such that $$ \displaystyle\int_{\partial^S \Omega^c}\varphi^2_2~ds_{g_\delta} \leq C\sqrt{\delta} \displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}. $$ \begin{proof} By Claim 1 one has $$ \int_\Sigma\langle\nabla \hat\varphi_1,\nabla \hat\varphi_2 \rangle_{g} dv_{g}=0.
$$ Further, since $\varphi \in V \in W_k$ we have \begin{gather*} C_k \geq R_\delta[\varphi]=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}+\displaystyle\int_{\Sigma}|\nabla \hat\varphi_2|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}} \geq \\ \geq \frac{\displaystyle\int_{\Omega^c}|\nabla \hat\varphi_2|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}=\frac{1}{\delta^{1/2}}\frac{\displaystyle\int_{\Omega^c}|\nabla \hat\varphi_2|^2_{g}dv_{g}}{\displaystyle\int_{\partial^S\Omega^c} \varphi^2_2 ds_{g}}\frac{||\varphi_2||^2_{L^2(\partial^S\Omega^c, g_\delta)}}{||\varphi||^2_{L^2(\partial \Sigma, g_\delta)}} \geq \\ \geq \frac{\sigma^D_1(\Omega^c,\partial^S\Omega^c, g)}{\sqrt{\delta}} \frac{||\varphi_2||^2_{L^2(\partial^S\Omega^c, g_\delta)}}{||\varphi||^2_{L^2(\partial \Sigma, g_\delta)}}, \end{gather*} where $\sigma^D_1(\Omega^c,\partial^S\Omega^c, g)$ is the first non-zero Steklov-Dirichlet eigenvalue of $(\Omega^c,g)$ (see \cite{banuelos2010eigenvalue}). Rearranging yields the claim with $C=C_k/\sigma^D_1(\Omega^c,\partial^S\Omega^c, g)$. \end{proof} {\bf Claim 4.} For every $\varphi \in V \in W_k$ and for every sufficiently small $\delta$ there exists a constant $C>0$ such that $$ \int_{\partial \Sigma}\varphi^2~ds_{g_\delta} \leq (1+C \delta^{1/4}) \int_{\partial \Sigma} \varphi^2_1 ds_{g_\delta}. $$ \begin{proof} One has \begin{align*} ||\varphi||^2_{L^2(\partial \Sigma, g_\delta)}=\int_{\partial^S\Omega^c}(\varphi_1+\varphi_2)^2ds_{g_\delta}+\int_{\partial^S\Omega}\varphi^2_1ds_{g_\delta} \leq\\ \leq \Big(1+\frac{1}{\varepsilon}\Big) \int_{\partial \Sigma}\varphi_2^2ds_{g_\delta}+(1+\varepsilon) \int_{\partial \Sigma}\varphi^2_1ds_{g_\delta}, \end{align*} for every $\varepsilon>0$.
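The last inequality above is the pointwise Young (Peter-Paul) inequality; for completeness:

```latex
(\varphi_1+\varphi_2)^2=\varphi_1^2+2\varphi_1\varphi_2+\varphi_2^2
\leq(1+\varepsilon)\varphi_1^2+\Big(1+\frac{1}{\varepsilon}\Big)\varphi_2^2,
```

which follows from $2\varphi_1\varphi_2\leq\varepsilon\varphi_1^2+\varepsilon^{-1}\varphi_2^2$; integrating over $\partial^S\Omega^c$ and enlarging the domains of integration to $\partial\Sigma$ gives the displayed estimate.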
Applying Claim 3 we obtain \begin{align*} ||\varphi||^2_{L^2(\partial \Sigma, g_\delta)} \leq C\sqrt{\delta}\Big(1+\frac{1}{\varepsilon}\Big) \int_{\partial \Sigma}\varphi^2ds_{g_\delta}+(1+\varepsilon) \int_{\partial \Sigma}\varphi^2_1ds_{g_\delta}, \end{align*} and hence, \begin{align*} \Big(1-C\sqrt{\delta} \Big(1+\frac{1}{\varepsilon}\Big)\Big)||\varphi||^2_{L^2(\partial \Sigma, g_\delta)} \leq (1+\varepsilon) ||\varphi_1||^2_{L^2(\partial \Sigma, g_\delta)}. \end{align*} Choosing $\varepsilon=\delta^{1/4}$ completes the proof. \end{proof} {\bf Claim 5.} \label{C3} For every $\varphi \in V \in W_k$ and for every sufficiently small $\delta$ there exists a constant $C>0$ such that $$ \int_{\partial^S\Omega^c}\varphi^2_1~ds_{g} \leq C\int_{\partial^S\Omega} \varphi^2_1 ds_{g}. $$ \begin{proof} \begin{gather*} C_k \geq \frac{\displaystyle\int_{\partial\Sigma}|\nabla \varphi|^2_{g_\delta}ds_{g_\delta}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}} \geq \frac{\displaystyle\int_{\partial^S\Omega}|\nabla \varphi|^2_{g}ds_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}=\frac{\displaystyle\int_{\partial^S\Omega}|\nabla \varphi_1|^2_{g}ds_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}, \end{gather*} since $\varphi=\varphi_1$ on $\partial^S\Omega$. Then by Claim 4 one has \begin{gather*} C_k \geq\frac{\displaystyle\int_{\partial^S\Omega}|\nabla \varphi_1|^2_{g}ds_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}} \geq \frac{1}{1+C\delta^{1/4}}\frac{\displaystyle\int_{\partial^S\Omega}|\nabla \varphi_1|^2_{g}ds_{g}}{\displaystyle\int_{\partial \Sigma} \varphi_1^2 ds_{g_\delta}}, \end{gather*} which implies \begin{equation}\label{thatsit} \begin{split} \int_{\partial^S\Omega}|\nabla \varphi_1|^2_{g}ds_{g} \leq C_k(1+C\delta^{1/4})\int_{\partial \Sigma} \varphi_1^2 ds_{g_\delta}=\\=C_k(1+C\delta^{1/4})\Big(\int_{\partial^S\Omega} \varphi_1^2 ds_{g}+\delta^{1/2}\int_{\partial^S\Omega^c} \varphi_1^2 ds_{g}\Big).
\end{split} \end{equation} For the rest of the proof $C$ stands for any positive constant depending possibly on $\Sigma$ and $g$ but not on $\delta$ or $\varphi$. Note that $\partial^S\Omega$ has positive capacity (see \cite[pp.102-105]{MR3791463}). Applying in order the trace inequality, estimate \eqref{finally}, the Sobolev embedding and inequality \eqref{thatsit} yields \begin{gather*} ||\varphi_1||^2_{L^2(\partial^S\Omega^c,g)} \leq C||\hat \varphi_1||^2_{H^1(\Sigma,g)} \leq C||\varphi_1||^2_{H^{1/2}(\partial^S\Omega,g)} \leq \\ \leq C||\varphi_1||^2_{H^{1}(\partial^S\Omega,g)}=C(||\varphi_1||^2_{L^{2}(\partial^S\Omega,g)}+||\nabla \varphi_1||^2_{L^{2}(\partial^S\Omega,g)}) \leq \\ \leq C(1+C\delta^{1/4})\Big(||\varphi_1||^2_{L^{2}(\partial^S\Omega,g)}+\delta^{1/2}||\varphi_1||^2_{L^{2}(\partial^S\Omega^c,g)}\Big), \end{gather*} which implies the required inequality for $\delta$ small enough. \end{proof} Further, by the fact that $\int_\Sigma\langle\nabla \hat\varphi_1, \nabla \hat\varphi_2 \rangle_{g} dv_{g}=0$ and by Claim 4, for every $\varphi \in V \in W_k$ one has \begin{gather*} R_\delta[\varphi]=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}}=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}+\displaystyle\int_{\Sigma}|\nabla \hat\varphi_2|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}} \geq \\ \geq \frac{1}{1+C\delta^{1/4}} \frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}+\int_{\Sigma}|\nabla \hat\varphi_2|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi_1^2 ds_{g_\delta}} \geq \\ \geq \frac{1}{1+C\delta^{1/4}} \frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi_1^2 ds_{g_\delta}}=\frac{1}{1+C\delta^{1/4}} \frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}}{\displaystyle\int_{\partial^S\Omega} \varphi_1^2
ds_{g}+\delta^{1/2}\int_{\partial^S\Omega^c}\varphi_1^2 ds_{g}} \end{gather*} and by Claim 5 we get \begin{gather*} R_\delta[\varphi] \geq\frac{1}{(1+C\delta^{1/4})(1+\delta^{1/2}C)} \frac{\displaystyle\int_\Sigma|\nabla \hat\varphi_1|^2_{g}dv_{g}}{\displaystyle\int_{\partial^S\Omega} \varphi_1^2 ds_{g}} \geq\\ \geq \frac{1}{(1+C\delta^{1/4})(1+\delta^{1/2}C)} \frac{\displaystyle\int_\Omega|\nabla \hat\varphi_1|^2_{g}dv_{g}}{\displaystyle\int_{\partial^S\Omega} \varphi_1^2 ds_{g}} \geq \\ \geq\frac{1}{(1+C\delta^{1/4})(1+\delta^{1/2}C)} R^N_{(\Omega,\partial^S\Omega, g)}[\varphi_{|_\Omega}], \end{gather*} where $R^N_{(\Omega,\partial^S\Omega, g)}$ denotes the Rayleigh quotient for the Steklov-Neumann problem in the domain $(\Omega,g)$. Let $V=\operatorname{span}\langle \psi_0,\ldots,\psi_k \rangle$, where $\psi_i$ is in the $i$-th eigenspace of $(\Sigma,g_\delta)$. Then \begin{equation}\label{before} \begin{split} \sigma^\delta_k=\max_{\varphi \in V}R_\delta[\varphi] \geq \frac{1}{(1+C\delta^{1/4})(1+\delta^{1/2}C)} \max_{\varphi \in V}R^N_{(\Omega,\partial^S\Omega, g)}[\varphi_{|_\Omega}] \geq \\ \geq \frac{1}{(1+C\delta^{1/4})(1+\delta^{1/2}C)}\sigma^N_k(\Omega,\partial^S\Omega, g), \end{split} \end{equation} since by unique continuation the restrictions to $\Omega$ of the functions $\psi_i$ span a space of the same dimension. Finally, passing to the $\liminf$ as $\delta \to 0$ in~\eqref{before} yields the lemma. {\bf Case II.} The case when $\Omega^c \cap \partial \Sigma=\O$ is trivial. Indeed, in this case we have $\partial^S\Omega=\partial \Sigma$. Then for any function $\varphi$ one has \begin{gather*} R_\delta[\varphi]=\frac{\displaystyle\int_\Sigma|\nabla \hat\varphi|^2_{g}dv_{g}}{\displaystyle\int_{\partial \Sigma} \varphi^2 ds_{g_\delta}} \geq \frac{\displaystyle\int_\Omega|\nabla \hat\varphi|^2_{g}dv_{g}}{\displaystyle\int_{\partial^S\Omega} \varphi^2 ds_{g}}=R^N_{(\Omega,\partial^S\Omega, g)}[\varphi_{|_\Omega}].
\end{gather*} Therefore, considering $V=\operatorname{span}\langle \psi_0,\ldots,\psi_k \rangle$, where $\psi_i$ is in the $i$-th eigenspace of $(\Sigma,g_\delta)$, yields \begin{gather*} \sigma^\delta_k=\max_{\varphi \in V}R_\delta[\varphi] \geq \max_{\varphi \in V}R^N_{(\Omega,\partial^S\Omega, g)}[\varphi_{|_\Omega}] \geq \sigma^N_k(\Omega,\partial^S\Omega, g). \end{gather*} Taking $\liminf$ as $\delta \to 0$ completes the proof. \end{proof} Lemma \ref{liminf} is the key ingredient in the proof of the following proposition. We postpone the proof to Section \ref{appendix2}. \begin{proposition} \label{subdomain} Let $(\Sigma,g)$ be a Riemannian surface with boundary, $\Omega\subset \Sigma$ a Lipschitz domain and $\partial^S\Omega=\partial\Sigma\cap\Omega\neq\O$. Then for all $k$ one has $$ \sigma^*_k(\Sigma,[g]) \geq \sigma^{N*}_k(\Omega,\partial^S\Omega, [g|_{\overline\Omega}]). $$ Similarly, let $(\Sigma,g)$ be a Riemannian surface with boundary. Let $\partial^S\Sigma$ denote the union of the boundary components with the Steklov boundary condition and let $\Omega\subset \Sigma$ be a Lipschitz domain such that $\partial^S\Omega \subset \partial^S\Sigma$. Then for all $k$ one has $$ \sigma^{N*}_k(\Sigma,\partial^S\Sigma, [g]) \geq \sigma^{N*}_k(\Omega,\partial^S\Omega, [g|_{\overline\Omega}]). $$ \end{proposition} As a corollary of Proposition~\ref{subdomain} we get \begin{corollary}\label{Neumann cor2} Let $(M, g)$ be a compact Riemannian surface with boundary. Consider a sequence $\{ K_n \}$ of smooth domains $K_n \subset M$ such that \begin{itemize} \item $K_r \subset K_s$ $\forall r>s$; \item $\cap_n K_n=\{p_1,\ldots,p_l\}$ for some points $p_1,\ldots,p_l \in M$. \end{itemize} Then one has $$ \lim_{n \to \infty}\sigma^{N*}_k(M \setminus K_n, \partial M \setminus \partial K_n, [g])= \sigma^*_k(M, [g]). $$ \end{corollary} The proof is postponed to Section \ref{appendix2}.
\subsection{Disconnected surfaces.} The two lemmas below are proved by exactly the same arguments as Lemma 4.9 and Lemma 4.10 in \cite{karpukhin2019friedlander}. Their proofs are postponed to Section \ref{appendix2}. \begin{lemma} \label{disconnected} Let $(\Omega,g) = \sqcup_{i=1}^s(\Omega_i,g_i)$ be a disjoint union of Riemannian surfaces with Lipschitz boundary. Set $\partial^S\Omega=\sqcup_{i=1}^s\partial^S\Omega_i$. Then for all $k>0$ one has $$ \sigma^{N*}_k(\Omega,\partial^S\Omega, [g]) = \max_{\sum\limits_{i=1}^s k_i=k,\,\,\,k_i>0}\,\,\sum_{i=1}^s\sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g_i]). $$ \end{lemma} \begin{lemma}\label{omega_i} Let $(\Sigma,g)$ be a Riemannian surface with boundary. Consider a set of pairwise disjoint Lipschitz domains $\{\Omega_i\}^s_{i=1}$ in $\Sigma$ such that $\Sigma=\bigcup^s_{i=1} \overline\Omega_i$ and $\Omega_i\cap \partial\Sigma=\partial^S\Omega_i \neq \O$ for $1 \leq i \leq s'$. Then one has $$ \sigma^{*}_k(\Sigma, [g]) \geq \max_{\sum_{i=1}^{s'} k_i=k,\,\,\,k_i \geq 0} \sum^{s'}_{i=1} \sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g]). $$ \end{lemma} \section{Proof of Theorem~\ref{non-bound}.} \label{appendix4} The proof is inspired by the methods of the papers~\cite{MR577325,girouard2012upper, MR3579963}. Let $\Sigma$ be a non-orientable compact surface of genus $\gamma$ with $l$ boundary components. We pass to its orientable cover $\pi\colon\widetilde\Sigma \to \Sigma$. Note that $\widetilde\Sigma$ is of genus $\gamma$ and has $2l$ boundary components. Let $\tau$ denote the involution exchanging the sheets of $\pi$. If $h$ is a metric on $\Sigma$ then $g:=\pi^*h$ is a metric on $\widetilde\Sigma$ invariant with respect to $\tau$, i.e. $\tau$ is an isometry of $g$. Let $\mathcal D_{\widetilde\Sigma}$ be the Dirichlet-to-Neumann map acting on functions on $\widetilde\Sigma$.
Then $\tau\circ\mathcal D_{\widetilde\Sigma}=\mathcal D_{\widetilde\Sigma}\circ\tau$ and hence Steklov eigenfunctions are divided into $\tau$-odd and $\tau$-even ones. The corresponding Steklov eigenvalues are also divided into odd and even ones. Let $\sigma^\tau_k(\widetilde\Sigma,g)$ denote the $k$-th $\tau$-even Steklov eigenvalue. Then $\sigma^\tau_k(\widetilde\Sigma,g)=\sigma_k(\Sigma,h)$. By a well-known theorem of Ahlfors~\cite{ahlfors1950open} there exists a proper conformal branched cover $\psi\colon(\widetilde\Sigma,g) \to (\mathbb D^2,g_{can})$. Here ``proper'' means $\psi(\partial\widetilde\Sigma)=\mathbb S^1$. Let $d$ be its degree. Define the following pushed-forward metric $g^*$ on $\mathbb D^2$: consider a neighbourhood $U$ of a non-branching point $p\in\mathbb D^2$. Its pre-image is a collection of $d$ neighbourhoods $U_i$, $i=1,\ldots,d$, on $\widetilde\Sigma$. Moreover, $\psi_i:=\psi_{|_{U_i}}\colon U_i\to U$ is a diffeomorphism. Then the metric $g^*$ is defined on $U$ as $\sum(\psi^{-1}_i)^*g$. The metric $g^*$ is a metric on $\mathbb D^2$ with isolated conical singularities at the branching points of $\psi$. The following lemma is straightforward. \begin{lemma}\label{Yau} For any function $u\in C^\infty(\mathbb D^2)$ one has $$ \int_{\mathbb S^1}udv_{g^*}=\int_{\partial\widetilde\Sigma}(\psi^*u)dv_{g} $$ and $$ d\int_{\mathbb D^2}|\nabla_{g^*} u|^2dv_{g^*}=\int_{\widetilde\Sigma}|\nabla_{g} (\psi^*u)|^2dv_{g}. $$ \end{lemma} Further, suppose that there exists an involution $\iota$ of $\mathbb D^2$ such that \begin{gather}\label{condition} \psi \circ \tau= \iota \circ \psi. \end{gather} \begin{lemma} The involution $\iota$ is an isometry of $(\mathbb D^2,g^*)$. \end{lemma} \begin{proof} Indeed, let the neighbourhood $U\subset \mathbb D^2$ be small enough that it contains no branching points. Then $\psi^{-1}(U)=\sqcup^d_{i=1} U_i$ and applying $\tau$ one gets $\tau(\psi^{-1}(U))=\sqcup^d_{i=1} \tau(U_i)$.
Note that condition~\eqref{condition} implies $\tau(\psi^{-1}(U))=\psi^{-1}(\iota(U))$. Whence $\psi^{-1}(\iota(U))=\sqcup^d_{i=1} \tau(U_i)$. Let $\widetilde{\psi_i}:=\psi_{|_{\tau(U_i)}}$. Then on $U$ one has \begin{gather*} g^*=\sum^d_{i=1}(\widetilde{\psi_i}^{-1})^*g=\sum^d_{i=1}(\widetilde{\psi_i}^{-1})^*\tau^*g=\sum^d_{i=1}(\widetilde{\psi_i}^{-1}\circ\tau)^*g=\sum^d_{i=1}(\iota\circ\widetilde{\psi_i}^{-1})^*g=\\=\sum^d_{i=1}\iota^*(\widetilde{\psi_i}^{-1})^*g=\iota^*g^*. \end{gather*} \end{proof} Consider the $j$-th $\iota$-even eigenfunction $u_j$ on $(\mathbb D^2,g^*)$ with corresponding eigenvalue $\sigma^\iota_j(\mathbb D^2,g^*)$. Then the function $\psi^*u_j$ on $\widetilde\Sigma$ is $\tau$-even and hence it projects to a well-defined function $v_j$ on $\Sigma$. Consider a function of the form $v=\sum_{j=0}^{k-1}c_jv_j$. Note that $\pi^*v=\sum_{j=0}^{k-1}c_j\psi^*u_j=\psi^*u$, where $u:=\sum_{j=0}^{k-1}c_ju_j$. Further, let $w_i$ denote the $i$-th eigenfunction on $\Sigma$ with eigenvalue $\sigma_i(\Sigma,h)$. It is easy to see that one can always find some coefficients $c_0,\ldots,c_{k-1}$ such that $\int_{\partial\Sigma}v w_idv_h=0$ for $i=0,\ldots,k-1$. Then we can use $v$ as a test function for $\sigma_k(\Sigma,h)$: $$ \sigma_k(\Sigma,h) \leq \frac{\int_{\Sigma}|\nabla_{h} v|^2dv_{h}}{\int_{\partial\Sigma}v^2dv_{h}}=\frac{\int_{\widetilde\Sigma}|\nabla_{g} \psi^*u|^2dv_{g}}{\int_{\partial\widetilde\Sigma}(\psi^*u)^2dv_{g}}=d\frac{\int_{\mathbb D^2}|\nabla_{g^*} u|^2dv_{g^*}}{\int_{\mathbb S^1}u^2dv_{g^*}}=d\sigma^\iota_k(\mathbb D^2,g^*), $$ where we used Lemma~\ref{Yau}. Moreover, the first identity in Lemma~\ref{Yau}, applied to $u\equiv 1$, implies $L_{g^*}(\mathbb S^1)=L_g(\partial\widetilde\Sigma)=2L_h(\partial \Sigma)$. Whence \begin{gather}\label{need} \overline\sigma_k(\Sigma,h) \leq \frac{d}{2}\sigma^\iota_k(\mathbb D^2,g^*)L_{g^*}(\mathbb S^1).
\end{gather} Consider a conformal map between surfaces with involution $\psi\colon (\widetilde\Sigma, \tau) \to (\mathbb D^2, \iota)$ of minimal degree $d$. Every orientation-reversing involution of $\mathbb D^2$ is conjugate to the involution $\iota_0(z):=\bar z$, where we identify $\mathbb D^2$ with the unit disc in the complex plane. Therefore, without loss of generality we can assume that $\iota=\iota_0$. The fixed point set of $\iota_0$ is the diameter $\{z\in \mathbb D^2~|~\operatorname{Re}(z)=0\}$. Let $H\mathbb D^2$ denote a half-disc, say the right one, and let $\partial^SH\mathbb{D}^2$ be the right half-circle. Thus, $\sigma^{\iota_0}_k(\mathbb D^2,g^*)=\sigma^N_k(H\mathbb {D}^2,\partial^SH\mathbb{D}^2, g^*)$ and inequality~\eqref{need} implies \begin{equation}\label{done} \begin{split} \overline\sigma_k(\Sigma,h) \leq \frac{d}{2}\sigma^\iota_k(\mathbb D^2,g^*)L_{g^*}(\mathbb S^1)=d\overline\sigma^N_k(H\mathbb {D}^2,\partial^SH\mathbb{D}^2, g^*) \leq \\ \leq d\sigma^{N*}_k(H\mathbb {D}^2,\partial^SH\mathbb{D}^2, [g^*]) \leq d\sigma^{*}_k(\mathbb {D}^2, [g_{can}])=2\pi k d, \end{split} \end{equation} where in the last inequality we used Proposition~\ref{subdomain} and the fact that the disc $\mathbb D^2$ carries a unique (up to isometry) conformal class $[g_{can}]$. We want to estimate $d$ in formula~\eqref{done}. It is known that there exists a proper conformal branched cover $f\colon(\widetilde\Sigma, g) \to (\mathbb D^2,g_{can})$ of degree $d'\leq\gamma+2l$ (see \cite{gabard2006representation}). One can construct the map $F(x):=f(x)\bar f(\tau(x))$. Note that $F(\tau(x))=\overline{F(x)}=\iota_0(F(x))$, i.e. $F$ satisfies condition~\eqref{condition} with $\iota=\iota_0$. Moreover, $F$ is proper and the degree of $F$ is not greater than $2d'=2(\gamma+2l)$. Hence there exists a proper map between $(\widetilde\Sigma, \tau)$ and $(\mathbb D^2,\iota_0)$ of degree not exceeding $2d'=2(\gamma+2l)$ satisfying \eqref{condition}.
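For the reader's convenience, the equivariance of $F$ noted above can be checked directly, using only $\tau^2=\operatorname{id}$ and $\iota_0(z)=\bar z$:

```latex
F(\tau(x))=f(\tau(x))\,\overline{f(\tau^2(x))}
=f(\tau(x))\,\overline{f(x)}
=\overline{f(x)\,\overline{f(\tau(x))}}
=\overline{F(x)}=\iota_0(F(x)).
```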
Inequality~\eqref{done} then implies $$ \overline\sigma_k(\Sigma,h) \leq 4\pi k (\gamma+2l). $$ \section{Geometric background} \label{geometry} The aim of this section is the proof of Theorem~\ref{conf&conv}. For this purpose we provide the necessary background concerning the geometry of the moduli space of conformal classes on a surface with boundary. We start with closed orientable surfaces. \subsection{Closed orientable surfaces} Let us recall the \textit{Uniformization theorem}. \begin{theorem} Let $\Sigma$ be a closed surface and $g$ be a Riemannian metric on it. Then the conformal class $[g]$ contains a unique (up to an isometry) metric $h$ of constant Gauss curvature and fixed area. The area normalization is needed only in the case of the torus, for which we fix the area of $h$ to be equal to $1$. \end{theorem} \begin{remark} It follows from the Gauss-Bonnet theorem that the metric $h$ in the Uniformization theorem is of Gauss curvature $1$ in the case of the sphere, $0$ in the case of the torus and $-1$ in the remaining cases. \end{remark} Recall that a Riemannian metric $h$ of constant Gaussian curvature $-1$ is called {\em hyperbolic} and a Riemannian surface $(\Sigma,h)$ endowed with a hyperbolic metric $h$ is called {\em a hyperbolic surface}. Note also that a hyperbolic surface is necessarily of negative Euler characteristic. We also say that the torus endowed with a metric $h$ of curvature $0$ is a flat torus and the sphere endowed with a metric $h$ of curvature $1$ is the standard (round) sphere. \subsection{Hyperbolic surfaces} We recall that a \textit{pair of pants} is a compact surface of genus $0$ with $3$ boundary components. The following theorem plays a fundamental role in the theory of hyperbolic surfaces.
\begin{theorem}[Collar theorem (see e.g.~\cite{MR1183224})]\label{Collar theorem} Let $(\Sigma,h)$ be an orientable compact hyperbolic surface of genus $\gamma \geq 2$ and let $c_1,c_2,\ldots,c_m$ be pairwise disjoint simple closed geodesics on $(\Sigma,h)$. Then the following holds. \begin{enumerate}[(i)] \item $m \leq 3 \gamma-3$. \item There exist simple closed geodesics $c_{m+1},\ldots,c_{3 \gamma-3}$ which, together with $c_1,\ldots,c_m$, decompose $\Sigma$ into pairs of pants. \item The collars \begin{align*} \mathcal{C}(c_i)=\left\{p\in\Sigma~|~ \operatorname{dist}(p,c_i) \leq w(c_i)\right\} \end{align*} of widths \begin{align*} w(c_i)=\frac{\pi}{l(c_i)}\left(\pi-2\arctan\left(\sinh\frac{l(c_i)}{2}\right)\right) \end{align*} are pairwise disjoint for $i=1,\ldots,3 \gamma-3$. \item Each $\mathcal{C}(c_i)$ is isometric to the cylinder $$\left\{(t,\theta)~|~ -w(c_i)<t<w(c_i),\,\theta\in\mathbb{R}/2\pi\mathbb{Z}\right\}$$ with the Riemannian metric \begin{align*} \left(\frac{l(c_i)}{2\pi \cos\left(\frac{l(c_i)}{2\pi}t\right)}\right)^2\left(dt^2+d\theta^2\right). \end{align*} \end{enumerate} \end{theorem} The decomposition of $(\Sigma,h)$ into pairs of pants, which we denote by $\mathcal{P}$, is called \textit{the pants decomposition}. We also say that the geodesics $c_1,\ldots,c_{3 \gamma-3}$ form $\mathcal{P}$. \subsection{Convergence of hyperbolic metrics} We endow the set of hyperbolic metrics on a given surface $\Sigma$ with the $C^\infty$-topology. In this section we describe convergence in this topological space, called \textit{the moduli space of conformal classes} on $\Sigma$. Essentially, two cases can happen: either the injectivity radii of a sequence of hyperbolic metrics do not go to $0$ or they do. The first case is described by \textit{Mumford's compactness theorem} and the second one is treated by \textit{the Deligne-Mumford compactification}. \begin{proposition}[Mumford's compactness theorem (see e.g.
~\cite{MR1451624})] \label{Mumford} Let $\{h_n\}$ be a sequence of hyperbolic metrics on a surface $\Sigma$ of genus $\geq 2$. Assume that the injectivity radii $\operatorname{inj}(\Sigma,h_n)$ satisfy $\limsup\limits_{n\to\infty}\operatorname{inj}(\Sigma,h_n)>0$. Then there exist a subsequence $\{h_{n_k}\}$, a sequence $\{\Phi_k\}$ of smooth automorphisms of $\Sigma$ and a hyperbolic metric $h_\infty$ on $\Sigma$ such that the sequence of hyperbolic metrics $\{\Phi_k^*h_{n_k}\}$ converges in the $C^\infty$-topology to $h_\infty$. \end{proposition} If $\lim\limits_{n\to\infty}\operatorname{inj}(\Sigma,h_n)=0$ then we say that the sequence $\{h_n\}$ {\em degenerates}. The thick-thin decomposition implies that if the sequence $\{h_n\}$ degenerates then for each $n$ there exists a collection $\{c_1^n,\ldots,c_s^n\}$ of disjoint simple closed geodesics in $(\Sigma,h_n)$ whose lengths tend to $0$, while the length of any geodesic in the complement $\Sigma_n =\Sigma\backslash (c_1^n\cup\ldots\cup c_s^n)$ is bounded from below by a constant independent of $n$. We call the geodesics $\{c_1^n,\ldots,c_s^n\}$ ``pinching'' or ``collapsing''. The surface $(\Sigma_n, h_n)$ is a possibly disconnected hyperbolic surface with geodesic boundary. Let $\widehat{\Sigma_\infty}$ denote the surface having the same connected components as $\Sigma_n$, but with each boundary component replaced by a marked point. Note that each sequence $\{c_i^n\}$ corresponds to a pair of marked points $\{p_i,q_i\}$ on $\widehat{\Sigma_\infty}$, $i=1,\ldots,s$. Then the punctured surface $\widehat{\Sigma_\infty}\backslash\{p_1,q_1,\ldots,p_s,q_s\}$, which we denote by $\Sigma_\infty$, admits a unique hyperbolic metric $h_\infty$ with cusps at the punctures. Now we are ready to formulate one of the fundamental results in the theory of \textit{moduli spaces of Riemann surfaces}. \begin{proposition}[Deligne-Mumford compactification (see e.g.
~\cite{MR1451624})]\label{D-M} Let $(\Sigma, h_n)$ be a sequence of hyperbolic surfaces such that $\operatorname{inj}(\Sigma,h_n)\to 0$. Then, up to a choice of subsequence, there exists a sequence of diffeomorphisms $\Psi_n: \Sigma_\infty \to \Sigma_n$ such that the sequence $\{\Psi^*_n h_n\}$ of hyperbolic metrics converges in the $C_{\mathrm{loc}}^\infty$-topology to the complete hyperbolic metric $h_\infty$ on $\Sigma_\infty$. Furthermore, there exists a metric of locally constant curvature $\widehat{h_\infty}$ on $\widehat{\Sigma_\infty}$ such that its restriction to $\Sigma_\infty$ is conformal to $h_\infty$. \end{proposition} We call $(\widehat{\Sigma_\infty},\widehat{h_\infty})$ a {\em limiting space} of the sequence $(\Sigma,h_n)$. We also say that the limit of the conformal classes $[h_n]$ is the conformal class $[\widehat{h_\infty}]$ on $\widehat{\Sigma_\infty}$. \begin{remark} We emphasize that $\widehat{h_\infty}$ has {\em locally} constant curvature, since $\widehat{\Sigma_\infty}$ is possibly disconnected and different connected components could have different signs of Euler characteristic. \end{remark} \subsection{Orientable surfaces with boundary of negative Euler characteristic} \label{modulii_boundary} Our exposition of this topic essentially follows the book~\cite{jost2007bosonic}. Let $\Sigma$ be an orientable surface of genus $\gamma$ with $l$ boundary components. Consider its \textit{Schottky double} $\Sigma^d$, defined in the following way: we identify $\Sigma$ with another copy $\Sigma'$ of $\Sigma$ with the opposite orientation along the common boundary. We get a closed oriented surface of genus $2\gamma+l-1$. For example, the Schottky double of the disk is the sphere and the Schottky double of the cylinder is the torus. In the remaining cases the Schottky double is always a hyperbolic surface. We endow the surface $\Sigma$ with a metric $g$. The next proposition plays the role of the Uniformization theorem for surfaces with boundary.
\begin{proposition}[\cite{osgood1988extremals}] \label{uniformization} In the conformal class $[g]$ of a metric $g$ on the surface $\Sigma$ there exists a unique (up to an isometry) metric of constant Gauss curvature and geodesic boundary. More precisely, this metric is of curvature $1$ in the case of $\mathbb D^2$, of curvature $0$ in the case of the cylinder and of curvature $-1$ in all other cases. \end{proposition} Denote the metric of constant Gauss curvature and geodesic boundary from Theorem \ref{uniformization} by $h$. Consider a Riemannian surface with boundary $(\Sigma, h)$. Its Schottky double admits the metric $h^d$ defined by $h^d_{|_\Sigma}=h$ and $h^d_{|_{\Sigma'}}=h$. It is a metric of constant curvature, and the involution $\iota: \Sigma^d \to \Sigma^d$ that interchanges $\Sigma$ and $\Sigma'$ becomes an isometry with $\partial \Sigma$ as the fixed point set. Moreover, $(\Sigma,h)=(\Sigma^d,h^d)/\iota$. Theorem \ref{uniformization} also says that the set of conformal classes on the surface $\Sigma$ with boundary is in one-to-one correspondence with the set of metrics of constant Gauss curvature and geodesic boundary, which in turn is in one-to-one correspondence with the set of ``symmetric'' metrics (metrics invariant under the involution $\iota$) of constant curvature on the Schottky double. We endow the set of metrics of constant Gauss curvature and geodesic boundary with the $C^\infty$-topology. Consider a sequence of conformal classes $\{c_n\}$ on $\Sigma$. It uniquely defines a sequence of ``symmetric'' metrics of constant curvature $\{h^d_n\}$ on $\Sigma^d$. For this sequence we have the same dichotomy as in the previous subsections: either $\operatorname{inj} (\Sigma^d,h^d_n) \nrightarrow0$ or $\operatorname{inj} (\Sigma^d,h^d_n)\to 0$.
In the first case we obtain, up to a subsequence, a limiting Riemannian metric on $\Sigma^d$ which is obviously ``symmetric'' and of constant curvature, while in the second case one can find a set of simple closed geodesics $\{c_1^n,\dots,c_s^n\}$, where $s \leq 6\gamma+3l-6$, whose lengths $l_{h^d_n}(c_i^n)\to 0$. For the geodesics $c_i^n$ there are two possibilities: either $\iota(c_i^n)=c_i^n$ or $\iota(c_i^n)=c_j^n$ with $j \neq i$. The first possibility implies that the geodesic $c_i^n$ crosses $\partial \Sigma$, which in turn splits into two situations: either $c_i^n$ has exactly two points of intersection with $\partial \Sigma$, or it lies in $\partial \Sigma$, i.e. it is one of the boundary components. The second possibility implies that $c_i^n$ does not cross $\partial \Sigma$. Taking the quotient by $\iota$, we then get three types of pinching geodesics on $(\Sigma,h_n)$ with $\operatorname{inj} (\Sigma,h_n) \to 0$: pinching boundary components, pinching simple geodesics which have exactly two points of intersection with the boundary, and pinching simple closed geodesics which do not cross the boundary. \subsection{Non-orientable surfaces with boundary of negative Euler characteristic}\label{nonor} Let $\Sigma$ be a compact non-orientable surface with $l$ boundary components. Note that the Uniformization Theorem \ref{uniformization} also holds for non-orientable surfaces. Pick a metric $h$ of constant Gauss curvature and geodesic boundary. We pass to the orientable double cover, which we denote by $\widetilde\Sigma$. The surface $\widetilde\Sigma$ is a compact orientable surface with $2l$ boundary components. The pull-back of the metric $h$, which we denote by $\tilde h$, is a metric of constant Gauss curvature with geodesic boundary. Moreover, this metric is invariant under the involution changing the orientation on $\widetilde\Sigma$.
Consider a sequence $\{h_n\}$ on $\Sigma$ of metrics of constant Gauss curvature and geodesic boundary such that $\operatorname{inj}(\Sigma,h_n)\to 0$ as $n\to\infty$. This sequence corresponds to a sequence $\{\tilde h_n\}$ on $\widetilde\Sigma$ such that $\operatorname{inj}(\widetilde\Sigma,\tilde h_n)\to 0$ as $n\to\infty$. As we discussed in the previous subsection, for the sequence $\{\tilde h_n\}$ one can find pinching geodesics of the following three types: pinching boundary components, pinching simple geodesics crossing the boundary at two points and pinching simple closed geodesics which do not cross the boundary. Note that for the geodesics of the second type the points of intersection with the boundary are not identified under the involution. Indeed, if they were identified, the corresponding pinching geodesic would have its endpoints fixed by the involution. Applying the involution to this geodesic we would then get a pinching \textit{closed} geodesic crossing the boundary at two points, which is not one of the possible types of pinching geodesics. Consider now the geodesics of the third type. For every such geodesic there are two possible cases: either this geodesic maps to itself under the involution changing the orientation, or it maps to another simple closed geodesic which does not cross the boundary. Taking the quotient by the involution changing the orientation, we get two types of simple closed geodesics on $\Sigma$ which do not cross the boundary: \textit{one-sided geodesics}, which are the images of the geodesics described in the first case, and \textit{two-sided geodesics}, which are the images of the geodesics described in the second case. The collars of one-sided geodesics are nothing but M\"obius bands, while the collars of two-sided geodesics are cylinders.
Therefore, if $\operatorname{inj}(\Sigma,h_n)\to 0$ as $n\to\infty$ then one can find pinching geodesics of the following types: pinching boundary components, pinching simple geodesics which have exactly two points of intersection with the boundary, one-sided pinching simple closed geodesics not crossing the boundary and two-sided pinching simple closed geodesics not crossing the boundary. \subsection{Surfaces with boundary of non-negative Euler characteristic} Here we consider the cases of the disc, the cylinder $\mathcal C$ and the M\"obius band $\mathbb{MB}$. It is known that the disc has a unique conformal class (up to an isometry). We denote this conformal class by $[g_{can}]$ or $c_{can}$, where $g_{can}$ is the flat metric on the disc $\mathbb D^2$ with unit boundary length. According to Theorem~\ref{uniformization}, every conformal class on $\mathcal C$ contains a flat metric with geodesic boundary, i.e. a metric of a right circular cylinder. This metric is unique if we fix the length of the boundary. The right circular cylinder is uniquely determined by its height. Therefore, conformal classes on $\mathcal C$ are in one-to-one correspondence with heights of right circular cylinders, i.e. the set of conformal classes is $\mathbb R_{>0}$. We will identify conformal classes on $\mathcal C$ with points of $\mathbb R_{>0}$. We say that the sequence $\{c_n\}$ of conformal classes degenerates if either $c_n \to 0$ or $c_n\to\infty$. The case $c_n \to 0$ corresponds to a pinching geodesic intersecting both boundary components (i.e. a generatrix of the right circular cylinder). The case $c_n\to\infty$ corresponds to pinching boundary components. In the case of the M\"obius band we again use Theorem~\ref{uniformization}, which implies that every conformal class on $\mathbb{MB}$ contains a flat metric with geodesic boundary, unique if we fix the length of the boundary.
Passing to the orientable cover and pulling back the flat metric from $\mathbb{MB}$, we get a flat cylinder with geodesic boundary. Then the discussion in the previous paragraph implies that the conformal classes on $\mathbb{MB}$ are also encoded by $\mathbb R_{>0}$. Identifying again the conformal classes on $\mathbb{MB}$ with points of $\mathbb R_{>0}$, we get two possible cases for a sequence of conformal classes $\{c_n\}$: either $c_n \to 0$ or $c_n\to\infty$. In both cases we say that the sequence $\{c_n\}$ degenerates. The first case corresponds to a pinching geodesic having two points of intersection with the boundary. The second case corresponds to the collapsing boundary. \medskip \section{Proof of Theorem \ref{conf&conv}.} \label{proofconf&conv} {\bf Negative Euler characteristic.} Let $\Sigma$ be a surface with boundary of negative Euler characteristic and $c_n\to c_\infty$ a degenerating sequence of conformal classes. Consider the corresponding sequence of metrics $h_n$ of constant Gauss curvature and geodesic boundary. Then, as we have noticed in Subsection \ref{modulii_boundary}, one can find $s=s_1+s_2+s_3$ pinching geodesics of the following three types: $s_1$ pinching boundary components, $s_2$ pinching geodesics that have two points of intersection with the boundary and $s_3$ pinching simple closed geodesics that do not intersect the boundary. We introduce the following notation. \begin{itemize} \item $\gamma^n_i$ for the collapsing geodesics, $i=1,\ldots,s$. When the superscript is omitted, the symbol $\gamma_i$ stands for the genus; \item $\mathcal{C}^n_i$ for the collars of the collapsing geodesics, $i=1,\ldots,s$. Their widths are denoted by $w^n_i$.
Moreover, $\mathcal{C}^n_i:=\{(t,\theta)~|~0 \leq t<w^n_i,~0 \leq\theta \leq 2\pi\}$ for $1 \leq i \leq s_1$ and $\mathcal{C}^n_i:=\{(t,\theta)~|~-w^n_i<t<w^n_i,~0 \leq\theta \leq 2\pi\}$ for $s_1+1 \leq i \leq s$ (if the geodesic is one-sided then we consider $\mathcal{C}^n_i:=\{(t,\theta)~|~-w^n_i< t<w^n_i,~0 \leq\theta \leq 2\pi\}/\sim$, where $\sim$ stands for $(t,\theta)\sim(-t,\pi+\theta)$). Note that the geodesics correspond to the circle $\{t=0\}$; the segments $\{\theta=0\}$ and $\{\theta=2\pi\}$ are identified for $1 \leq i \leq s_1$ and for $s_1+s_2+1 \leq i \leq s$, while for $s_1+1 \leq i \leq s_1+s_2$ they are not identified and correspond to the segments of intersection with the boundary; \item for $0<a<w^n_i$ we denote by $\mathcal C^n_i(0,a)$ the subset $\{(t, \theta)\,|~0 \leq t \leq a, 0 \leq \theta \leq 2\pi\}\subset {\mathcal{C}}^n_i$ for $1 \leq i \leq s_1$, and for $-w^n_i< a< b< w^n_i$ we denote by $\mathcal C^n_i(a,b)$ the subset $\{(t, \theta)\,|~a \leq t \leq b, 0 \leq \theta \leq 2\pi\}\subset {\mathcal{C}}^n_i$ for $s_1+1 \leq i \leq s$; \item $\Gamma^n_i:=\{(t,\theta)\in\mathcal{C}^n_i~|~\theta=0$ or $\theta=2\pi\}$ for $s_1+1 \leq i \leq s_1+s_2$; \item for $-w^n_i< a< b< w^n_i$, we set $\Gamma^n_i(a,b):=\{(t,\theta)\in\Gamma^n_i~|~a \leq t \leq b\}$ for $s_1+1 \leq i \leq s_1+s_2$; \item $\Sigma^n_j$ for the $j$-th connected component of $\Sigma \setminus \cup^s_{i=1} \mathcal{C}^n_i$. We enumerate the components $\Sigma^n_j$, $1 \leq j \leq M$, where $M$ is their total number, in such a way that $\Sigma^n_j \cap \partial \Sigma \neq \emptyset$ exactly for $1 \leq j \leq m$; \item let $\alpha^n = \cup_{i=1}^{s_1+s_2}\{\alpha^n_{i,-},\alpha^n_{i,+}\}$, where $0 \leq\alpha^n_{i,\pm}< w^n_i$.
We denote by $\Sigma^n_j(\alpha^n)$ the connected component of $$ \Sigma \setminus \Big(\bigcup^{s_1+s_2}_{i=1}{\mathcal{C}}_i^n(\alpha^n_{i,-},\alpha^n_{i,+}) \cup \bigcup^{s}_{i=s_1+s_2+1} \gamma^n_i\Big) $$ which contains $\Sigma_j^n$; \item for $\alpha^n = \cup_{i=1}^{s_1+s_2}\{\alpha^n_{i,-},\alpha^n_{i,+}\}$, where $0 \leq\alpha^n_{i,\pm}< w^n_i$, we set $I^n_j(\alpha^n)=\Sigma^n_j(\alpha^n)\cap \partial\Sigma$ and $I^n_j=\Sigma^n_j \cap \partial\Sigma$, where $1 \leq j \leq m$; \item we use the notation $a_n \ll b_n$ for two sequences $\{a_n\}$ and $\{b_n\}$ satisfying $a_n, b_n \to +\infty$ and $\frac{a_n}{b_n} \to 0$ as $n \to \infty$. \end{itemize} \subsection{Inequality $\geq$.} We prove that \begin{equation}\label{aim} \begin{split} \liminf_{n \to \infty} \sigma^*_k (\Sigma, c_n) \geq \max \Big(\sum^{m}_{i=1} \sigma^*_{k_{i}}(\Sigma_{\gamma_{i},l_i},c_\infty)+ \sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2) \Big), \end{split} \end{equation} where the maximum is taken over all combinations of indices satisfying $\sum^{m}_{i=1} k_i + \sum_{i=1}^{s_1+s_2} r_i = k$. To this end we consider the domains ${\mathcal{C}}^n_i(0,\alpha_{i,+}^n)$ for $1 \leq i \leq s_1$, ${\mathcal{C}}^n_i(\alpha_{i,-}^n,\alpha_{i,+}^n)$ for $1+s_1 \leq i \leq s_1+s_2$, where $ w^n_i - \alpha_{i,\pm}^n \ll w^n_i,$ $\alpha_{i,\pm}^n\to\infty$, and the domains $\Sigma_j^n(\alpha^n)$ for $1 \leq j \leq m$. By Lemma~\ref{omega_i} we have \begin{equation} \label{aim2} \begin{split} &\sigma^*_k(\Sigma, c_n) \geq \max \Big(\sum^{s_1}_{i=1}\sigma^{N*}_{r_{i}}({\mathcal{C}}^n_i(0,\alpha_{i,+}^n), \gamma^n_i, c_n)+\\&+\sum^{s_1+s_2}_{i=1+s_1}\sigma^{N*}_{r_{i}}({\mathcal{C}}^n_i(\alpha_{i,-}^n,\alpha_{i,+}^n),\Gamma^n_i(\alpha_{i,-}^n,\alpha_{i,+}^n), c_n)+\sum^{m}_{j=1}\sigma^{N*}_{k_{j}}(\Sigma_j^n(\alpha^n), I^n_j(\alpha^n), c_n)\Big). \end{split} \end{equation} For $1 \leq i \leq s_1$ we define the conformal maps $\Psi_i^n\colon ({\mathcal{C}}^n_i(0,\alpha_{i,+}^n), c_n) \to (\mathbb{D}^2, [g_{can}])$ as $$ \Psi_i^n(t,\theta)= e^{\sqrt{-1}(\theta+\sqrt{-1}t)}.
$$ The images of $\Psi_i^n$ are the annuli $\mathbb D^2 \setminus \mathbb D^2_{e^{-\alpha_{i,+}^n}}$, exhausting $\mathbb D^2$ as $n \to \infty.$ We also note that $\Psi_i^n(\gamma^n_i)=\mathbb S^1$. For $s_1+1 \leq i \leq s_1+s_2$ we define the conformal maps $\Psi_i^n\colon ({\mathcal{C}}^n_i(\alpha_{i,-}^n,\alpha_{i,+}^n), c_n) \to (\mathbb{D}^2, [g_{can}])$ as $$ \Psi_i^n(t,\theta)=\tan\Big(\frac{\theta-\pi+\sqrt{-1}t}{4}\Big). $$ The images of $\Psi_i^n$, which we denote by $\Omega_{i}^n$, exhaust $\mathbb D^2$ as $n \to \infty.$ We also denote the image of $\Gamma^n_i(\alpha_{i,-}^n,\alpha_{i,+}^n)$ by $\partial^S\Omega_{i}^n$. Note that $\partial^S\Omega_{i}^n$ exhausts $\mathbb S^1$ as $n\to\infty$. Finally, we take restrictions of the diffeomorphisms $\Psi_n^{-1}$ given by Proposition~\ref{D-M} to obtain the conformal maps $\check\Psi_j^n\colon({\Sigma}_j^n(\alpha^n),c_n)\to (\Sigma_{\infty},\Psi_n^*c_n)$, where $1 \leq j \leq m$. Let $\check\Omega_{j}^n \subset \Sigma_\infty$ be the image of $\check\Psi_j^n$ and $\partial^S\check\Omega_{j}^n:=\check\Psi_j^n(I^n_j(\alpha^n))$. The following lemma holds. \begin{lemma}\label{post1} Let $\Sigma^\infty_j$ be the connected component $\check\Psi^n_j(\Sigma^n_j)\subset \Sigma_\infty$, where $1 \leq j \leq m$. Then the domains $\check\Omega_j^n$ exhaust $\Sigma^\infty_j$ and the arcs $\partial^S\check\Omega_{j}^n$ exhaust $\partial\Sigma^\infty_j$. \end{lemma} \begin{proof} Passing to the Schottky double of the surface $\Sigma$, we immediately deduce this lemma from \cite[Lemma 5.1]{karpukhin2019friedlander}.
\end{proof} Further, we apply the conformal transformations to~\eqref{aim2} to get \begin{equation}\label{aim3} \begin{split} &\sigma^*_k(\Sigma, c_n) \geq \max\Big(\sum^{s_1}_{i=1}\sigma^{N*}_{r_{i}}(\mathbb D^2 \setminus\mathbb D^2_{e^{-\alpha_{i,+}^n}}, \mathbb S^1, [g_{can}])+\\&+\sum^{s_1+s_2}_{i=1+s_1}\sigma^{N*}_{r_{i}}(\Omega_{i}^n, \partial^S\Omega_{i}^n, [g_{can}])+\sum^{m}_{j=1}\sigma^{N*}_{k_{j}}(\check\Omega_j^n, \partial^S\check\Omega_{j}^n, [(\Psi^n)^*h_n])\Big). \end{split} \end{equation} It follows from Corollary~\ref{Neumann cor2} that the terms in the first two sums on the right-hand side converge to $\sigma^*_{r_{i}}(\mathbb D^2, [g_{can}])$. To complete the proof we will need the following lemma. \begin{lemma}\label{post2} Let $\widehat{\Sigma_j^\infty}\subset \widehat{\Sigma_\infty}$ be the closure of $\Sigma_j^\infty$, $1 \leq j \leq m$. Then for all $r$ one has $$ \liminf_{n\to\infty}\sigma^{N*}_{r}(\check\Omega_j^n,\partial^S\check\Omega_{j}^n, [(\Psi^n)^*h_n]) \geq \sigma^*_r(\widehat{\Sigma_j^\infty}, [\widehat{h_\infty}]). $$ \end{lemma} We postpone the proof to Section \ref{appendix3}. Finally, taking $\liminf_{n\to\infty}$ in~\eqref{aim3} completes the proof of~\eqref{aim}. \subsection{Inequality $\leq$.} We prove the reverse inequality, \begin{equation}\label{aim'} \begin{split} \operatorname{limsup}_{n \to \infty} \sigma^*_k (\Sigma, c_n) \leq \max \Big(\sum^{m}_{i=1} \sigma^*_{k_{i}}(\Sigma_{\gamma_{i},l_i},c_\infty)+ \sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2) \Big). \end{split} \end{equation} To this end we choose a subsequence $\{c_{n_m}\}$ such that $$\lim_{m \to \infty} \sigma^*_k (\Sigma, c_{n_m})=\operatorname{limsup}_{n \to \infty} \sigma^*_k (\Sigma, c_n).$$ Relabeling, we denote this subsequence again by $\{c_n\}$. Therefore, one can pass to subsequences without changing the value of the $\operatorname{limsup}$.
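Before turning to the main argument, let us record the elementary computation behind the repeated claim that the maps $(t,\theta)\mapsto e^{\sqrt{-1}(\theta+\sqrt{-1}t)}$ send collars conformally onto exhausting annuli. One has $$ e^{\sqrt{-1}(\theta+\sqrt{-1}t)}=e^{-t}e^{\sqrt{-1}\theta}, \qquad \big|e^{\sqrt{-1}(\theta+\sqrt{-1}t)}\big|=e^{-t}, $$ so these maps are holomorphic in $\theta+\sqrt{-1}t$ (hence conformal), send the geodesic $\{t=0\}$ onto $\mathbb S^1$ and send the half-collar $\{0\leq t\leq \alpha\}$ onto the annulus $\mathbb D^2\setminus \mathbb D^2_{e^{-\alpha}}$; as $\alpha\to\infty$ these annuli exhaust $\mathbb D^2$.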
{\bf Case 1.} Suppose that up to a choice of a subsequence the following inequality holds \begin{align*} \sigma^*_k(\Sigma, c_n) > \sigma^*_{k-1}(\Sigma, c_n) +2\pi. \end{align*} Then by \cite[Theorem 2]{petrides2019maximizing} in the conformal class $c_n$ there exists a metric $g_n$ of unit boundary length induced from a harmonic immersion with free boundary $\Phi_n$ into some $N(n)$-dimensional ball $\mathbb{B}^{N(n)}$, i.e. $$ g_n=\frac{\langle \Phi_n, \partial_{\nu_n}\Phi_n\rangle_{h_n}}{\sigma^*_k(\Sigma, c_n)}h_n $$ and such that $\sigma_k(g_n)=\sigma^*_k(\Sigma, c_n)$. Here the metric $h_n$ is the canonical representative in the conformal class $c_n$. It is known that for any compact surface the multiplicity of $\sigma_k(g_n)$ is bounded from above by a constant depending only on $k$ and the topology of $\Sigma$ (see for instance~\cite{fraser2012minimal, karpukhin2014multiplicity}). Therefore, one can choose $N(n)=N$ independent of $n$. Assume that for the sequence $\{c_n\}$ the following inequality holds \begin{equation}\label{gap} \begin{split} \operatorname{limsup}_{n \to \infty} \sigma^*_k (\Sigma, c_n) > \max \Big(\sum^{m}_{i=1} \sigma^*_{k_{i}}(\Sigma_{\gamma_{i},l_i},c_\infty)+ \sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2) \Big). \end{split} \end{equation} For $1 \leq i \leq s_1$ we consider the conformal map $\Psi^n_i: (\mathcal C^n_i, c_n) \to (\mathbb D^2,[g_{can}])$ defined as $\Psi^n_i(t,\theta)=e^{\sqrt{-1}(\theta+\sqrt{-1}t)}$. The image of this map is nothing but $\mathbb D^2\setminus \mathbb D^2_{e^{-w^n_i}}$, which exhausts $\mathbb D^2$ as $n\to\infty$. The image of the pinching geodesic is $\mathbb S^1$. Then the map $\Phi^n_i:=\Phi_n\circ(\Psi^n_i)^{-1}: \mathbb D^2\setminus \mathbb D^2_{e^{-w^n_i}}\to \mathbb B^N$ satisfies the assumptions of the \textit{bubble convergence theorem for harmonic maps with free boundary} \cite[Theorem 1]{laurain2017regularity}.
Hence, there exist a regular harmonic map with free boundary $\Phi_i: \mathbb D^2 \to \mathbb B^N$ and harmonic extensions of non-constant $1/2$-harmonic maps $\omega^i_1,\dots,\omega^i_{t_i}: \mathbb D^2 \to \mathbb B^N$ such that $$ \int_{\mathbb D^2}|\nabla \Phi_i|^2dv_{g_{can}}+\sum^{t_i}_{j=1}\int_{\mathbb D^2}|\nabla \omega^i_{j}|^2dv_{g_{can}}=\lim_{n\to\infty}\int_{\gamma^n_i}ds_{g_n}. $$ We denote $\lim_{n\to\infty}\int_{\gamma^n_i}ds_{g_n}$ by $m_i$. \begin{proposition} \label{sequences} For $s_1+1 \leq i \leq s_1+s_2$ there exist integers $t_{i} \geq 0$, sequences $\{a_{i,l}^n\}, \{b_{i,l}^n\}$ with $1 \leq l \leq t_{i}$ and sequences $\{\alpha^n_{i,\pm}\}$ such that \begin{gather*} -w_i^n \ll \alpha_{i,-}^n=b_{i,0}^n \ll a_{i,1}^n \ll b_{i,1}^n \ll \ldots \ll a_{i,t_i}^n \ll b_{i,t_{i}}^n \ll a_{i,t_{i}+1}^n=\alpha_{i,+}^n \ll w_i^n \end{gather*} and $$ m_{i,l}=\lim_{n \to \infty} L_{g_n}(\Gamma_i^n(a_{i,l}^n,b_{i,l}^n))>0. $$ Moreover, there exists a set $J \subset \{1,\ldots,m\}$ such that for every $j \in J$ one has $$ m_{j}=\lim_{n \to \infty} L_{g_n}(I_j^n(\alpha^n))>0 $$ and $$ \sum^{s_1}_{i=1} m_{i}+\sum^{s_1+s_2}_{i=s_1+1} \sum^{t_i}_{l=1}m_{i,l}+\sum_{j\in J}m_j=1, $$ where $s_1+\sum^{s_1+s_2}_{i=s_1+1}t_i$ is maximal. \end{proposition} \begin{proof} The proof follows the proofs of Claims 16 and 17 in \cite{petrides2019maximizing}. Precisely, if the proposition fails, one can construct $k+1$ test-functions such that $\sigma_k(g_n) \leq o(1)$, which contradicts inequality~\eqref{el soufi}. The construction of these functions is given in the proofs of Claims 16 and 17 in \cite{petrides2019maximizing}. Note that these functions are equal to $1$ on $\Sigma^n_j$ for every $m+1 \leq j \leq M$.
\end{proof} We proceed by considering a sequence $\{d_{i,l}^n\}$, where $s_1+1 \leq i \leq s_1+s_2$ and $1 \leq l \leq t_i$, such that $$ \lim_{n \to \infty} L_{g_n}(\Gamma_i^n(a_{i,l}^n,d_{i,l}^n))= \lim_{n \to \infty} L_{g_n}(\Gamma_i^n(d_{i,l}^n,b_{i,l}^n))=m_{i,l}/2. $$ Let $q^n_{i,l}\ll a^n_{i,l}$, $q^n_{i,l}\to+\infty$. Consider the conformal maps $$\Psi_{i,l}^n\colon \left({\mathcal{C}}_i^n(a_{i,l}^n-q^n_{i,l},b_{i,l}^n+ q^n_{i,l}),c_n\right) \to (\mathbb{D}^2,[g_{can}]) $$ defined as $$ \Psi_{i,l}^n(t,\theta)=\tan\left(\frac{\theta-\pi+\sqrt{-1}(t-d_{i,l}^n)}{4}\right). $$ Let $$ D_{i,l}^n=\Psi_{i,l}^n\left({\mathcal{C}}_i^n(a_{i,l}^n-q^n_{i,l},b_{i,l}^n+ q^n_{i,l})\right) $$ and $$ S_{i,l}^n=\Psi_{i,l}^n\left(\Gamma_i^n(a_{i,l}^n-q^n_{i,l},b_{i,l}^n+ q^n_{i,l})\right). $$ Then $D_{i,l}^n$ exhausts $\mathbb D^2$ and $S_{i,l}^n$ exhausts $\mathbb S^1$ as $n\to \infty$. We also have $$ \lim_{n\to\infty} L_{(\Psi_{i,l}^n)_*g_n}(S_{i,l}^n)=m_{i,l}. $$ Consider the map $\Phi_{i,l}^n=\Phi_n \circ (\Psi_{i,l}^n)^{-1}\colon (D_{i,l}^n,S_{i,l}^n) \to (\mathbb B^{N},\mathbb S^{N-1})$. We endow $D_{i,l}^n$ with the metric $(\Psi_{i,l}^n)_*g_n$ and $\mathbb B^{N}$ with the Euclidean metric. Then the map $\Phi_{i,l}^n$ is harmonic with free boundary, since $\Phi_n$ is harmonic with free boundary and $\Psi_{i,l}^n$ is conformal. Moreover, it is shown in \cite{petrides2019maximizing} that the measure $\boldsymbol{1}_{S_{i,l}^n}\langle\Phi_{i,l}^n,\partial_\nu\Phi_{i,l}^n\rangle_{g_{can}}ds_{g_{can}}$ does not concentrate at the poles $(0,1)$ and $(0,-1)$ of $\mathbb{D}^2$. Indeed, if the measure were to concentrate at the poles, one would obtain a contradiction with the maximality of $s_1+\sum^{s_1+s_2}_{i=s_1+1}t_i$. Exactly the same procedure can be carried out for the components $\Sigma_j^n(\alpha^n)$, $j\in J$. The only difference is that now we use restrictions of the diffeomorphisms $\Psi^n$ given by Proposition~\ref{D-M} instead of the explicit conformal maps above.
As a result, one obtains domains $\check\Omega^n_j\subset \Sigma_\infty$ and harmonic maps with free boundary $\check\Phi^n_j\colon\check\Omega^n_j\to\mathbb{B}^N$ such that the measure $\boldsymbol{1}_{\partial^S\check\Omega_j^n}\langle\check\Phi^n_j,\partial_\nu\check\Phi^n_j\rangle_{g_{can}}ds_{g_{can}}$ does not concentrate at the marked points of $\widehat{\Sigma_\infty}$. Now thanks to inequality \eqref{gap} we can construct $k+1$ well-defined test-functions for the Rayleigh quotient of $\sigma_k$ using the limit functions of the sequences of maps $\Phi_{i,l}^n$, $\Phi_i^n$ and $\check\Phi^n_j$, as it was shown in \cite{petrides2019maximizing}. Precisely, let $p_{i}$, $1 \leq i \leq s_1$, be the maximal integers such that \begin{gather}\label{tilde i} \frac{\sigma^*_{p_{i}}(\mathbb{D}^2)}{m_{i}}<\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n), \end{gather} $p_{i,l}$, $s_1+1 \leq i \leq s_1+s_2$, the maximal integers such that \begin{align}\label{tilde i,l} \frac{\sigma^*_{p_{i,l}}(\mathbb{D}^2)}{m_{i,l}}<\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n), \end{align} and $p_j$, $j\in J$, the maximal integers such that \begin{align}\label{j} \frac{\sigma^*_{p_{j}}(\widehat{\Sigma_j^\infty}, \widehat{c_{\infty}})}{m_{j}}<\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n). \end{align} Then one has $$ \sigma^*_{p_{i}+1}(\mathbb{D}^2) \geq m_{i}\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n),~1 \leq i \leq s_1, $$ $$ \sigma^*_{p_{i,l}+1}(\mathbb{D}^2) \geq m_{i,l}\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n),~s_1+1 \leq i \leq s_1+s_2 $$and $$ \sigma^*_{p_{j}+1}(\widehat{\Sigma_j^\infty}, \widehat{c_{\infty}}) \geq m_{j}\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma, c_n),~j\in J.
$$ If $\sum^{s_1}_{i=1} (p_{i}+1)+\sum^{s_1+s_2}_{i=s_1+1} \sum^{t_i}_{l=1} (p_{i,l}+1)+\sum_{j \in J}(p_j+1) \leq k$ then by inequality \eqref{gap} we have \begin{gather*} \sum^{s_1}_{i=1} \sigma^*_{p_{i}+1}(\mathbb{D}^2)+\sum^{s_1+s_2}_{i=s_1+1} \sum^{t_i}_{l=1}\sigma^*_{p_{i,l}+1}(\mathbb{D}^2)+ \sum_{j \in J}\sigma^*_{p_{j}+1}(\widehat{\Sigma_j^\infty}, \widehat{c_{\infty}})<\operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n), \end{gather*} which implies $\sum^{s_1}_{i=1} m_{i}+\sum^{s_1+s_2}_{i=s_1+1} \sum^{t_i}_{l=1}m_{i,l}+\sum_{j\in J}m_j < 1$ and we arrive at a contradiction with Proposition \ref{sequences}. Hence, $\sum^{s_1}_{i=1} (p_{i}+1)+\sum^{s_1+s_2}_{i=s_1+1} \sum^{t_i}_{l=1} (p_{i,l}+1)+\sum_{j \in J}(p_j+1) \geq k+1$. Further, let $dv_{g^{i}_\infty}=\lim_{n \to \infty} (\Psi_{i}^n)_*dv_{g_n}$, $dv_{g^{i,l}_\infty}=\lim_{n \to \infty} (\Psi_{i,l}^n)_*dv_{g_n}$ and $dv_{g^j_\infty}=\lim_{n \to \infty} (\Psi_j^n)^*dv_{g_n}$. Denote by $\widehat{dv_{g^{i}_\infty}}$, $\widehat{dv_{g^{i,l}_\infty}}$ and $\widehat{dv_{g^j_\infty}}$ the measures induced by the compactification on $\mathbb{D}^2$ for $1 \leq i \leq s_1$ and $s_1+1 \leq i \leq s_1+s_2$ and on $\widehat{\Sigma_j^\infty}$ respectively. These measures are well-defined due to the non-concentration argument explained above. 
Take orthonormal families of eigenfunctions $(\phi^0_i,\ldots,\phi^{p_{i}}_i)$ in $L^2(\mathbb{D}^2, \widehat{dv_{g^{i}_\infty}})$, $1 \leq i \leq s_1$, $(\phi^0_i,\ldots,\phi^{p_{i,l}}_i)$ in $L^2(\mathbb{D}^2, \widehat{dv_{g^{i,l}_\infty}})$, $s_1+1 \leq i \leq s_1+s_2$, and $(\psi^0_j,\ldots,\psi^{p_{j}}_j)$ in $L^2(\widehat{\Sigma_j^\infty}, \widehat{dv_{g^j_\infty}})$ such that for $0 \leq e \leq p_{i}$ the function $\phi^e_i$ is an eigenfunction with eigenvalue $\sigma_e(\widehat{dv_{g^{i}_\infty}})$ on $\mathbb{D}^2$, for $0 \leq e \leq p_{i,l}$ the function $\phi^e_i$ is an eigenfunction with eigenvalue $\sigma_e(\widehat{dv_{g^{i,l}_\infty}})$ on $\mathbb{D}^2$, and for $0 \leq r \leq p_{j}$ the function $\psi^r_j$ is an eigenfunction with eigenvalue $\sigma_r(\widehat{dv_{g^j_\infty}})$ on $\widehat{\Sigma_j^\infty}$. The standard capacity computations (see for instance \cite[Claim 1]{petrides2019maximizing}) imply the existence of smooth functions supported in a geodesic ball of a Riemannian manifold and having small Dirichlet energy. More precisely, there exist non-negative smooth functions $\eta_i$, $\eta_{i,l}$ and $\eta_j$ on $(\mathbb{D}^2, \widehat{dv_{g^{i}_\infty}})$, $(\mathbb{D}^2, \widehat{dv_{g^{i,l}_\infty}})$ and $(\widehat{\Sigma_j^\infty}, \widehat{dv_{g^j_\infty}})$ respectively, supported in geodesic balls $B(x,r)$ of radius $r$ centered at the compactification points $x$, such that $\eta \in C^\infty_0(B(x,r))$, $\eta =1$ on $B(x,\rho_n r)\subset B(x,r)$, where $\rho_n\to 0$ as $n\to\infty$, and $\int_\Omega|\nabla \eta|^2_gdv_g \leq \frac{C}{\log\frac{1}{\rho_n}},$ where $\eta$ denotes any of the functions $\eta_i$, $\eta_{i,l}$, $\eta_j$ and $(\Omega,dv_g)$ the corresponding manifold among $(\mathbb{D}^2, \widehat{dv_{g^{i}_\infty}})$, $(\mathbb{D}^2, \widehat{dv_{g^{i,l}_\infty}})$ and $(\widehat{\Sigma_j^\infty}, \widehat{dv_{g^j_\infty}})$.
Moreover, if $(\Omega,dv_g)=(\mathbb{D}^2, \widehat{dv_{g^{i,l}_\infty}})$ then we additionally require $\rho_n$ to satisfy $\partial D^n_{i,l}\setminus S^n_{i,l} \subset B(x,\rho_n r)$. Then we define the desired test-functions as $$ \xi^e_i=(\Psi_{i}^n)^{-1}\eta_i\phi^e_i, ~1 \leq i \leq s_1 $$ extended by 0 on $\Sigma$, $$ \xi^e_{i,l}=(\Psi_{i,l}^n)^{-1}\eta_{i,l}\phi^e_i, ~s_1+1 \leq i \leq s_1+s_2 $$ extended by 0 on $\Sigma$ and $$ \xi^r_j=\Psi_j^n\eta_j\psi^r_j,~j\in J $$ extended by 0 on $\Sigma$. Note that all these functions have pairwise disjoint supports. Then from the variational characterization of $\sigma_k(g_n)$ one gets \begin{gather*} \sigma_k(g_n) \leq \max \Big\{\max_{1 \leq i \leq s_1} \frac{\int_{\Sigma}|\nabla \xi^e_i|^2_{g_n}dv_{g_n}}{\int_{\partial \Sigma} (\xi^e_i)^2 ds_{g_n}}, \max_{s_1+1 \leq i \leq s_1+s_2} \frac{\int_{\Sigma}|\nabla \xi^e_{i,l}|^2_{g_n}dv_{g_n}}{\int_{\partial \Sigma} (\xi^e_{i,l})^2 ds_{g_n}},\\ \max_{j \in J} \frac{\int_{\Sigma}|\nabla \xi^r_j|^2_{g_n}dv_{g_n}}{\int_{\partial\Sigma} (\xi^r_j)^2 ds_{g_n}} \Big\}, \end{gather*} and passing to $\operatorname{limsup}$ as $n\to\infty$ we get \begin{gather*} \operatorname{limsup}_{n\to\infty}\sigma^*_k(\Sigma,c_n) \leq \max\Big\{\max_{1 \leq i \leq s_1} \frac{\sigma^*_{p_{i}}(\mathbb{D}^2)}{m_{i}}, \max_{s_1+1 \leq i \leq s_1+s_2} \frac{\sigma^*_{p_{i,l}}(\mathbb{D}^2)}{m_{i,l}},\\ \max_{j \in J}\frac{\sigma^*_{p_{j}}(\widehat{\Sigma_j^\infty}, \widehat{c_{\infty}})}{m_{j}}\Big\} \end{gather*} which contradicts \eqref{tilde i}, \eqref{tilde i,l} and \eqref{j}. This means that if inequality \eqref{gap} holds then the sequence $\{c_n\}$ cannot degenerate. We arrived at a contradiction and inequality~\eqref{aim'} is proved. \begin{remark}\label{useful} Note that if $s_2=0$, i.e. there are no pinching geodesics having intersection with boundary components, then we take the set $J$ as $J=\{1,\ldots,m\}$, i.e. we consider $\Sigma^n_j(\alpha^n)$ where $1 \leq j \leq m$. 
If all the boundary components are getting pinched then we set $J=\emptyset$ and we only deal with the functions $ \xi^e_i=(\Psi_{i}^n)^{-1}\eta_i\phi^e_i$, extended by 0 on $\Sigma$, and with $\sigma^*_{p_{i}}(\mathbb{D}^2)$, where $1 \leq i \leq s_1$. If $s_1=s_2=0$, i.e. only geodesics of the third type are getting pinched, then we only deal with the functions $\xi^r_j=\Psi_j^n\eta_j\psi^r_j,~j\in J$, extended by 0 on $\Sigma$, and with $\sigma^*_{p_{j}}(\widehat{\Sigma_j^\infty}, \widehat{c_{\infty}})$, where $J=\{1,\ldots,m\}$. \end{remark} {\bf Case 2.} Assume that up to a choice of a subsequence the following inequality holds \begin{align*} \sigma^*_k(\Sigma, c_n) \leq \sigma^*_{k-1}(\Sigma, c_n) +2\pi. \end{align*} Then we prove inequality \eqref{aim'} by induction on $k$. Consider the case $k=1$. By inequality~\eqref{el soufi} one has $\sigma^*_1(\Sigma, c_n) \geq 2\pi$. Suppose that up to a choice of a subsequence one has $\sigma^*_1(\Sigma, c_n) > 2\pi$. Then the case $k=1$ falls under Case 1. Otherwise one has $\operatorname{limsup}_{n \to \infty} \sigma^*_1(\Sigma, c_n)=2\pi$ and inequality~\eqref{aim'} reads \begin{gather*} 2\pi=\operatorname{limsup}_{n \to \infty} \sigma^*_1(\Sigma, c_n) \leq \max \{\sigma^*_{1}(\Sigma_{\gamma_{i},l_i},c_\infty);2\pi\}, \end{gather*} which is true. The base of the induction is proved. Suppose now that the inequality holds for all $k'\leq k$. We show that it also holds for $k+1$.
Indeed, one has \begin{align*} \sigma^*_{k+1}(\Sigma, c_n) \leq \sigma^*_{k}(\Sigma, c_n)+2\pi=\sigma^*_{k}(\Sigma, c_n)+\sigma^*_1(\mathbb{D}^2) \end{align*} and by the induction hypothesis we get \begin{gather*} \operatorname{limsup}_{n \to \infty}\sigma^*_{k+1} (\Sigma, c_n) \leq \max \Big(\sum^{m}_{i=1} \sigma^*_{k_{i}}(\Sigma_{\gamma_{i},l_i},c_\infty)+ \sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2) \Big)+\sigma^*_1(\mathbb{D}^2) \leq \\ \leq \max \Big(\sum^{m}_{i=1} \sigma^*_{k_{i}}(\Sigma_{\gamma_{i},l_i},c_\infty)+ \sum_{i=1}^{s_1+s_2}\sigma^*_{r_i}(\mathbb{D}^2) \Big), \end{gather*} where in the last expression the maximum is taken over all possible combinations of indices such that $$ \sum_{i=1}^{m} k_i + \sum_{i=1}^{s_1+s_2} r_i = k+1, $$ since the term $\sigma^*_1(\mathbb{D}^2)$ can be absorbed by one of the terms inside the maximum using inequality~\eqref{petya}. The proof is complete. {\bf Zero Euler characteristic.} The case of the cylinder was essentially considered in~\cite[Section 7.1]{petrides2019maximizing}. Indeed, it was proved there that if the sequence of conformal classes $\{c_n\}$ degenerates then $$ \lim_{n\to\infty}\sigma^*_k(\mathcal C,c_n) \leq \max_{i_1+\cdots+i_s=k}\sum^s_{q=1}\sigma^*_{i_q}(\mathbb D^2)=2\pi k. $$ Applying inequality~\eqref{el soufi} one immediately gets that $\lim_{n\to\infty}\sigma^*_k(\mathcal C,c_n)=2\pi k$. Consider the case of the M\"obius band. If the sequence $\{c_n\}$ goes to $0$ then it follows from~\cite[Section 7.1]{petrides2019maximizing} that \begin{gather}\label{MBcase} \lim_{n\to\infty}\sigma^*_k(\mathbb{MB},c_n) \leq \max_{i_1+\cdots+i_s=k}\sum^s_{q=1}\sigma^*_{i_q}(\mathbb D^2)=2\pi k. \end{gather} Indeed, we pass to the orientable cover, which is a cylinder. Then inequality~\eqref{MBcase} follows from~\cite[Section 7.1, the case $R_\alpha \to 1$ as $\alpha \to+\infty$ in Petrides' notation]{petrides2019maximizing}. If the sequence $\{c_n\}$ goes to $\infty$ then we prove that inequality~\eqref{MBcase} also holds.
The proof follows exactly the same arguments as the proof of inequality~\eqref{aim'}. The analog of Case 1 for $\mathbb{MB}$ corresponds to the case of a pinching boundary (see Remark~\ref{useful}). Therefore, in both cases inequality~\eqref{MBcase} holds. Applying inequality~\eqref{el soufi} once again we then get that $\lim_{n\to\infty}\sigma^*_k(\mathbb{MB},c_n)=2\pi k$. \medskip \section{Proof of Theorem \ref{disproof}} \label{main theorem proof} For the proof of Theorem \ref{disproof} we will need to choose a ``nice'' degenerating sequence of conformal classes, i.e. a degenerating sequence of conformal classes such that the limiting space looks as simple as possible. \begin{lemma}\label{metric sequences} Let $\Sigma$ be a compact surface with boundary of negative Euler characteristic. Then there exists a degenerating sequence of conformal classes such that the limiting space is the disc. \end{lemma} \begin{proof} The proof is purely topological. Assume that $\Sigma$ is orientable. Then we consider the collapsing geodesics shown in Figure~\ref{F1}. Passing to the limit as the lengths of all pinching geodesics tend to zero and using the one-point cusps compactification we get an orientable surface of genus 0 with one boundary component, i.e. the disc. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{vymya.pdf} \footnotesize \caption{ Orientable surface with boundary. The lengths of all \textit{red} geodesics tend to zero. } \label{F1} \end{figure} If $\Sigma$ is non-orientable then we pass to its orientable cover and consider the collapsing geodesics shown in Figure~\ref{F2} for genus $0$ and Figure~\ref{F3} for genus $\neq 0$ (the pictures are symmetric with respect to the involution changing the orientation, ``the antipodal map'').
Passing to the limit as the lengths of all pinching geodesics tend to zero and using the one-point cusps compactification we get a disconnected surface with two connected components which are topologically discs. The involution changing the orientation maps one component to the other and hence, passing to the quotient by this involution, we get just one disc. \begin{figure}[h!] \centering \includegraphics[scale=0.9]{genus0.pdf} \footnotesize \caption{ Orientable cover of a non-orientable surface of genus $0$ with boundary. The lengths of all \textit{red} geodesics tend to zero. } \label{F2} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.8]{genus.pdf} \footnotesize \caption{ Orientable cover of a non-orientable surface of genus $\neq 0$ with boundary. The lengths of all \textit{red} geodesics tend to zero. } \label{F3} \end{figure} \end{proof} Now we are ready to prove Theorem \ref{disproof}. \medskip {\bf Zero Euler characteristic.} Let $\Sigma$ be either the cylinder $\mathcal C$ or the M\"obius band $\mathbb{MB}$. Then this case immediately follows from Theorem~\ref{conf&conv} by Remark~\ref{main_remark}. Indeed, if $\{c_n\}$ denotes a degenerating sequence of conformal classes on $\Sigma$ then by Theorem~\ref{conf&conv}: $$ I^\sigma_k(\Sigma) \leq \lim_{n\to\infty}\sigma^*_k(\Sigma,c_n)=2\pi k. $$ But $I^\sigma_k(\Sigma) \geq 2\pi k$ by \eqref{el soufi}. Thus $I^\sigma_k(\Sigma)=\lim_{n\to\infty}\sigma^*_k(\Sigma,c_n)=2\pi k$ and the degenerating sequence $\{c_n\}$ is minimizing. \medskip {\bf Negative Euler characteristic.} By Lemma~\ref{metric sequences} there exists a sequence of conformal classes $\{c_n\}$ such that the limiting space $\widehat{\Sigma_\infty}$ is the disc. Then by Theorem~\ref{conf&conv} we have $$ \lim_{n\to\infty}\sigma^*_k(\Sigma,c_n) = \max_{\sum k_j=k}\sum\sigma^*_{k_j}(\mathbb{D}^2). $$ Moreover, we know that $\sigma^*_k(\mathbb{D}^2) = 2\pi k$.
Hence, $$ I^\sigma_k(\Sigma) \leq \lim_{n\to\infty}\sigma^*_k(\Sigma,c_n) = 2\pi k. $$ Finally, by~\eqref{el soufi} one has $I^\sigma_k(\Sigma) \geq 2\pi k$, whence $I^\sigma_k(\Sigma)=2\pi k$, which completes the proof. \section{Appendix} \label{appendix} \subsection{A well-posed problem.} \label{appendix1} In this section we consider the problem \begin{gather} \label{toprove} \begin{cases} \Delta u=0&\text{in $M$},\\ u=g&\text{on $D$},\\ \frac{\partial u}{\partial n}=0&\text{on $N$}, \end{cases} \end{gather} where $(M,h)$ is a Riemannian manifold with boundary such that $\overline D\cup \overline N=\partial M$ and $D$ has positive capacity. Let $G$ be a smooth function such that $G_{|_D}=g$ and consider the function $v=G-u$. Then substituting $u=G-v$ into \eqref{toprove} implies: \begin{gather} \label{toprove1} \begin{cases} \Delta v=\Delta G&\text{in $M$},\\ v=0&\text{on $D$},\\ \frac{\partial v}{\partial n}=\frac{\partial G}{\partial n}&\text{on $N$}. \end{cases} \end{gather} We introduce the space $H^1_D(M,h)$ as the closure in the $H^1$-norm of the $C^\infty$-functions vanishing on $D$. For a function $u \in H^1_D(M,h)$ we have the following coercivity inequality: \begin{gather}\label{coercivity} ||u||_{L^2(M,h)} \leq C||\nabla u||_{L^2(M,h)}, \end{gather} with the best constant $C=\frac{1}{\sqrt{\lambda^{DN}_1(M,h)}}$, where $\lambda^{DN}_1(M,h)$ is the first nonzero eigenvalue of the mixed problem \begin{gather} \begin{cases} \Delta u=\lambda u&\text{in $M$},\\ u=0&\text{on $D$},\\ \frac{\partial u}{\partial n}=0&\text{on $N$}. \end{cases} \end{gather} By the Lax-Milgram theorem and by virtue of the inequality \eqref{coercivity} the problem \eqref{toprove1} admits a unique solution in the space $H^1_D(M,h)$. Thus, problem \eqref{toprove} also has a solution. Moreover, it is easy to see that this solution is unique. Our aim now is the following lemma. \begin{lemma} \label{finally} Let $u$ satisfy the problem \eqref{toprove}.
Then one has \begin{gather*} ||u||_{H^1(M,h)} \leq C||g||_{H^{1/2}(D,h)}. \end{gather*} \end{lemma} \begin{proof} The weak formulation of \eqref{toprove} reads $$ \int_M\langle\nabla u, \nabla v\rangle dv_h=0,~\forall v\in H^1_D(M,h). $$ Let $G$ be any continuation of the function $g$ into $M$, i.e. $G\in H^1(M,h)$ is any function such that $G_{|_D}=g$. Then substituting $v=u-G$ into the previous identity yields \begin{gather*} 0=\int_M\langle\nabla u, \nabla u-\nabla G\rangle dv_h=\int_M|\nabla u|^2dv_h-\int_M\langle\nabla u, \nabla G\rangle dv_h, \end{gather*} whence \begin{gather} \label{I1} \int_M|\nabla u|^2dv_h=\int_M\langle\nabla u, \nabla G\rangle dv_h \leq \frac{1}{2}\int_M|\nabla u|^2dv_h+\frac{1}{2}\int_M|\nabla G|^2dv_h. \end{gather} Further, it is easy to see that $$||u||_{L^2(M,h)} \leq ||u-G||_{L^2(M,h)}+||G||_{L^2(M,h)}.$$ Moreover, since $u-G\in H^1_D(M,h)$ one has $$ ||u-G||_{L^2(M,h)} \leq C||\nabla u -\nabla G||_{L^2(M,h)} \leq C(||\nabla u||_{L^2(M,h)}+||\nabla G||_{L^2(M,h)}). $$ Substituting this into the previous inequality we get \begin{gather} \label{I2} ||u||_{L^2(M,h)} \leq C(||\nabla u||_{L^2(M,h)}+||\nabla G||_{L^2(M,h)})+||G||_{L^2(M,h)}. \end{gather} Plugging \eqref{I1} into \eqref{I2} yields \begin{gather} \label{I3} ||u||_{L^2(M,h)} \leq C||G||_{H^1(M,h)}. \end{gather} Finally, \eqref{I1} and \eqref{I3} imply \begin{gather} \label{l4} ||u||_{H^1(M,h)} \leq C||G||_{H^1(M,h)} \end{gather} for any function $G\in H^1(M,h)$ such that $G_{|_D}=g$. \begin{lemma}\label{inf} The norms $$\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)}~\text{and}~||g||_{H^{1/2}(D,h)}$$ are equivalent.
\end{lemma} \begin{proof} By the trace inequality there exists a positive constant $C_1$ such that for every $G\in H^1(M,h)$ one has \begin{gather*} ||g||_{H^{1/2}(D,h)} \leq C_1||G||_{H^1(M,h)}, \end{gather*} which implies: \begin{gather} \label{l5} ||g||_{H^{1/2}(D,h)} \leq C_1\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)}. \end{gather} Further, we construct a continuation $G'\in H^1(M,h)$ of $g$ with the property that there exists a positive constant $C_2$ such that for every $g\in H^{1/2}(D,h)$ one has: \begin{gather} \label{l6} ||G'||_{H^1(M,h)} \leq C_2||g||_{H^{1/2}(D,h)}. \end{gather} Let $\tilde g$ be any continuation of $g$ on $\partial M$ such that $||\tilde g||_{H^{1/2}(N,h)} \leq ||g||_{H^{1/2}(D,h)}$. Therefore, $||\tilde g||_{H^{1/2}(\partial M,h)} \leq \sqrt{2}||g||_{H^{1/2}(D,h)}<\infty$ and $\tilde g\in H^{1/2}(\partial M,h)$. Then we take the harmonic continuation of $\tilde g$ into $M$ as $G'$. By \cite[Proposition 1.7]{Taylor} there exists a positive constant $C_3$ such that: $$ ||G'||_{H^1(M,h)} \leq C_3||\tilde g||_{H^{1/2}(\partial M,h)}. $$ Since $||\tilde g||_{H^{1/2}(\partial M,h)} \leq \sqrt{2}||g||_{H^{1/2}(D,h)}$ we get~\eqref{l6} with $C_2=\sqrt{2}C_3$. Therefore, \eqref{l5} and \eqref{l6} imply: $$ C_2^{-1}||G'||_{H^1(M,h)} \leq ||g||_{H^{1/2}(D,h)} \leq C_1\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)}, $$ whence \begin{gather*} C_2^{-1}\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)} \leq ||g||_{H^{1/2}(D,h)} \leq \\ \leq C_1\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)}, \end{gather*} since $$||G'||_{H^1(M,h)} \geq\inf_{G\in H^1(M,h),~G_{|_D}=g}||G||_{H^1(M,h)}.$$ The lemma follows. \end{proof} Finally, taking the infimum over all $G\in H^1(M,h)$ such that $G_{|_D}=g$ in~\eqref{l4} and using Lemma~\ref{inf} completes the proof.
\end{proof} \subsection{Proofs of propositions of Section \ref{analysis}.}\label{appendix2} This section contains the proofs of the propositions in Section \ref{analysis} analogous to propositions in \cite[Section 4]{karpukhin2019friedlander} whose adaptation to the Steklov setting is rather technical. \begin{proof}[Proof of Lemma \ref{identity2}] Let $h^m\in[h]$ be a maximizing sequence of metrics for $\sigma^{N*}_k(\Omega, \partial^S\Omega, [h])$ and $g^m\in[g]$ be the discontinuous metric on $\Sigma$ defined by $g^m|_{\Omega_i} = h^m|_{\Omega_i}$. By the variational characterization of eigenvalues, for all $k$ one has $\sigma_k(\Sigma,g^m) \geq\sigma^N_k(\Omega,h^m)$ since the set of test functions for the Steklov-Neumann eigenvalues $C^0(\Sigma,\{\Omega_i\})$ is larger than the set $C^0(\Sigma)$ of test functions for $\sigma_k(\Sigma,g^m)$. Using the fact that $L_{g^m}(\partial \Sigma)=\sum_iL_{h^m}(\partial^S \Omega_i) \geq L_{g^m}(\partial^S \Omega_i)$ for any $i$ and taking the limit as $m\to\infty$ we get $$ \sigma^*_k(\Sigma,\{\Omega_i\},[g]) \geq\sigma_k^{N*}(\Omega, \partial^S\Omega, [h]). $$ Finally by Lemma~\ref{identity} one gets $$ \sigma^*_k(\Sigma,[g]) \geq \sigma^{N*}_k(\Omega,\partial^S\Omega, [h]). $$ \end{proof} \begin{proof}[Proof of Proposition \ref{subdomain}] The proof is similar for both cases. The obvious analog of Lemma~\ref{liminf} for the second case holds since its proof follows exactly the same arguments as the proof of Lemma~\ref{liminf}. For that reason we only provide the proof of Proposition \ref{subdomain} for the first case. Take a maximizing sequence of metrics $\{h_i~ |~ h_i \in [g|_{\Omega}]\}$ for the functional $\sigma^{N*}_k(\Omega, \partial^S\Omega, [g])$, i.e. \begin{align*} \lim_{i\to\infty}\bar{\sigma}^{N}_k(\Omega,\partial^S\Omega, h_i)=\sigma^{N*}_k(\Omega,\partial^S\Omega, [g]). \end{align*} Let $h_i=f_i g|_{\Omega}$, where $f_i \in C^\infty_+(\bar\Omega)$.
We then define the metric $\widetilde{h_i}=\widetilde{f_i} g$ on $\Sigma$, where $\widetilde{f_i}$ is any positive continuation of the function $f_i$ into $\Omega^c$. This enables us to consider the metric $\rho_\delta \widetilde{h_i}$, where as before \begin{align*} \rho_\delta= \begin{cases} 1&\text{in $\Omega$},\\ \delta&\text{in $\Sigma\setminus\Omega$}. \end{cases} \end{align*} Lemma \ref{liminf} implies \begin{align*} \liminf_{\delta \to 0} \sigma_k(\rho_\delta \widetilde{h_i}) \geq \sigma^N_k(\Omega,\partial^S\Omega, h_i). \end{align*} Moreover, $L_{\rho_\delta\widetilde{h_i}}(\partial \Sigma)\to L_{h_i}(\partial^S\Omega)$ as $\delta\to 0$. By Lemma \ref{identity} we have \begin{align*} \sigma^*_k(\Sigma, [g])=\sigma^*_k(\Sigma,\{\Omega,\Sigma\setminus\Omega\},[g]) \geq \liminf_{\delta\to 0}\bar\sigma_k(\rho_\delta \widetilde{h_i}) \geq\bar\sigma^N_k(\Omega,\partial^S\Omega, h_i). \end{align*} Therefore, passing to the limit as $i \to \infty$ one gets \begin{align*} \sigma^*_k(\Sigma, [g]) \geq \sigma^{N*}_k(\Omega,\partial^S\Omega, [g]). \end{align*} \end{proof} \begin{proof}[Proof of Corollary \ref{Neumann cor2}] We show that $$ \sigma^*_k(M, [g]) \leq \liminf_{n \to \infty}\sigma^{N*}_k(M \setminus K_n, \partial M \setminus \partial K_n, [g]). $$ Let $g^m$ be a maximizing sequence for the functional $\sigma^*_k(M, [g])$. For a fixed $m$ we consider geodesic balls $B_{\epsilon_n}(p_i)$ of radius $\epsilon_n\to 0$ in the metric $g^m$ centred at the points $p_1,\ldots,p_l \in M$ such that $K_n \subset \cup^l_{i=1}B_{\epsilon_n}(p_i)$. We see that $M\setminus \cup^l_{i=1}B_{\epsilon_n}(p_i) \subset M\setminus K_n$.
Then by Proposition~\ref{subdomain} one has \begin{gather}\label{420} \sigma^{N*}_k(M \setminus K_n, \partial M \setminus \partial K_n, [g]) \geq \\ \geq\sigma^{N*}_k(M\setminus \cup^l_{i=1}B_{\epsilon_n}(p_i), \partial M\setminus \cup^l_{i=1}\partial B_{\epsilon_n}(p_i), [g]) \geq \\ \geq \bar\sigma^{N}_k(M\setminus \cup^l_{i=1}B_{\epsilon_n}(p_i), \partial M\setminus \cup^l_{i=1}\partial B_{\epsilon_n}(p_i), g^m). \end{gather} Note that $L(\partial M \setminus \cup^l_{i=1}\partial B_{\epsilon_n}(p_i), g^m)\to L(\partial M, g^m)$ as $n\to\infty$ and by Lemma~\ref{Neumann conv} one has $\sigma^{N}_k(M\setminus \cup^l_{i=1}B_{\epsilon_n}(p_i), \partial M\setminus \cup^l_{i=1}\partial B_{\epsilon_n}(p_i), g^m)\to\sigma_k(M,g^m)$. Hence, $\bar\sigma^{N}_k(M\setminus \cup^l_{i=1}B_{\epsilon_n}(p_i), \partial M\setminus \cup^l_{i=1}\partial B_{\epsilon_n}(p_i), g^m) \to \bar\sigma_k(M,g^m)$ as $n\to\infty$. Taking $\liminf_{n\to\infty}$ in~\eqref{420} one then gets $$ \liminf_{n\to\infty}\sigma^{N*}_k(M \setminus K_n, \partial M \setminus \partial K_n, [g]) \geq \bar\sigma_k(M,g^m). $$ Passing to the limit as $m\to\infty$ we get the desired inequality. The inequality $$ \operatorname{limsup}_{n \to \infty}\sigma^{N*}_k(M \setminus K_n, \partial M \setminus \partial K_n, [g]) \leq \sigma^*_k(M, [g]) $$ follows from Proposition~\ref{subdomain}. This completes the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{disconnected}] The idea of the proof essentially comes from the paper \cite{WK94}. We denote by $\partial^S\Omega$ the part of the boundary with the Steklov boundary condition. We also call $\partial^S\Omega$ the ``Steklov boundary'' and $L_g(\partial^S\Omega)$ ``the length of the Steklov boundary'' in the metric $g$. {\bf Inequality $\geq$.} Fix the indices $k_i>0$ satisfying $\sum k_i=k$ and consider a maximizing sequence of metrics $\{g_i^m\}$ such that $\bar\sigma^N_{k_i}(\Omega_i,\partial^S\Omega_i, g^m_i)\to\sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g_i])$.
After rescaling, one can assume that $\sigma^N_{k_i}(\Omega_i,\partial^S\Omega_i, g^m_i)=\sigma^{N*}_k(\Omega,\partial^S\Omega, [g])$. Then one has $$ L_{g^m_i}(\partial^S\Omega_i)\to \frac{\sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g_i])}{\sigma^{N*}_{k}(\Omega,\partial^S\Omega, [g])}. $$ Let $\{g^m\}$ be a sequence of metrics on $\Omega$ defined as $g^m|_{\Omega_i}=g^m_i$. Then for large enough $m$ one has that $\sigma^N_k(\Omega,\partial^S\Omega, g^m) = \sigma^{N*}_k(\Omega,\partial^S\Omega, [g])$, since the spectrum of a disjoint union is the union of the spectra of the components. By definition of $\sigma^{N*}_k(\Omega, \partial^S\Omega, [g])$ we also have $$ \sigma^{N*}_k(\Omega,\partial^S\Omega, [g])L_{g^m}(\partial^S\Omega)=\sigma^N_k(\Omega,\partial^S\Omega, g^m)L_{g^m}(\partial^S\Omega) \leq \sigma^{N*}_{k}(\Omega,\partial^S\Omega, [g]), $$ i.e. $L_{g^m}(\partial^S\Omega) \leq 1$. Thus, one has $$ 1 \geq L_{g^m}(\partial^S\Omega) = \sum_iL_{g^m_i}(\partial^S\Omega_i)\to \frac{\sum_i \sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g_i])}{\sigma^{N*}_{k}(\Omega,\partial^S\Omega, [g])}. $$ Passing to the limit $m\to\infty$ yields the inequality. {\bf Inequality $\leq$.} Assume the contrary, i.e. \begin{equation} \label{assumption} \sigma^{N*}_k(\Omega, \partial^S\Omega, [g]) > \max_{\sum\limits_{i=1}^s k_i=k,\,\,\,k_i>0}\,\,\sum_{i=1}^s\sigma^{N*}_{k_i}(\Omega_i, \partial^S\Omega_i, [g_i]). \end{equation} Consider a maximizing sequence of metrics $\{g^m\}$ of unit total length of the Steklov boundary such that $\sigma^N_{k}(\Omega,\partial^S\Omega, g^m)\to\sigma^{N*}_{k}(\Omega,\partial^S\Omega, [g])$. Let $g_i^m$ be the restriction of $g^m$ to $\Omega_i$ and $d^m_i$ be the largest number satisfying $\sigma^N_{d^m_i}(\Omega_i,\partial^S\Omega_i, g_i^m)<\sigma^{N*}_k(\Omega,\partial^S\Omega,[g])$ and $\operatorname{limsup}_{m\to\infty}\sigma^N_{d^m_i}(\Omega_i, \partial^S\Omega_i, g_i^m)<\sigma^{N*}_k(\Omega,\partial^S\Omega, [g])$.
Let $L^m_i$ denote $L_{g^m_i}(\partial^S\Omega_i)$. Then we have $d^m_i \leq k$ and $L^m_i \leq 1$. Therefore, up to a choice of a subsequence one can assume that $d^m_i = d_i$ does not depend on $m$ and $L^m_i\to L_i$ as $m\to\infty$. We claim that $\sum_i(d_i+1) \geq k+1$. Otherwise, by~\eqref{assumption} and the definition of $d_i$ we have \begin{gather*} \sigma^{N*}_k(\Omega,\partial^S\Omega, [g])\sum_i L_i \leq\sum_i\operatorname{limsup}_{m\to\infty}\bar\sigma^N_{d_i+1}(\Omega_i,\partial^S\Omega_i, g^m_i) \leq\\ \leq\sum_i\sigma^{N*}_{d_i+1}(\Omega_i,\partial^S\Omega_i, [g_i])<\sigma^{N*}_k(\Omega,\partial^S\Omega, [g]). \end{gather*} Moreover, $\sum_i L_i=1$ since the $g^m$ are of unit Steklov boundary length. Thus, we arrive at $\sigma^{N*}_k(\Omega, \partial^S\Omega, [g])<\sigma^{N*}_k(\Omega,\partial^S\Omega, [g]),$ which is a contradiction. Therefore, the inequality $\sum(d_i+1) \geq k+1$ holds. Since the spectrum of a union is the union of the spectra, we have $$\sigma^N_k(\Omega, \partial^S\Omega, g^m)\in\bigcup_i\{\sigma^N_0(\Omega_i,\partial^S\Omega_i,g^m_i),\ldots,\sigma^N_{d_i}(\Omega_i,\partial^S\Omega_i,g^m_i)\},$$ hence \begin{gather*} \sigma^{N*}_k(\Omega,\partial^S\Omega, [g])=\operatorname{limsup}_{m\to\infty}\sigma^N_k(\Omega, \partial^S\Omega, g^m) \leq\max_i\operatorname{limsup}_{m\to\infty}\sigma^N_{d_i}(\Omega_i,\partial^S\Omega_i,g^m_i)<\\<\sigma^{N*}_k(\Omega,\partial^S\Omega, [g]). \end{gather*} Since the $g^m$ are of unit Steklov boundary length we arrive at a contradiction. \end{proof} \begin{proof}[Proof of Lemma~\ref{omega_i}] Fix indices $k_i \geq 0$ such that $\sum_{i=1}^{s'} k_i=k$ and set $I = \{i\,|\,k_i>0\}$. Let $\Omega_1 = \cup_{i\in I}\overline\Omega_i\subset \Sigma,~\partial^S\Omega_1 = \cup_{i\in I}\partial^S\Omega_i,~(\Omega_2,h) = \sqcup_{i\in I}(\overline\Omega_i,g_{\overline\Omega_i})$ and $\partial^S\Omega_2 = \sqcup_{i\in I}\partial^S\Omega_i$.
One gets \begin{gather*} \sigma^*_k(\Sigma,[g]) \geq \sigma^{N*}_k(\Omega_1,\partial^S\Omega_1, [g]) \geq \sigma^{N*}_k(\Omega_2, \partial^S\Omega_2, [h]) \geq \\ \geq\sum_{i\in I} \sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g])=\sum^{s'}_{i=1} \sigma^{N*}_{k_i}(\Omega_i,\partial^S\Omega_i, [g]), \end{gather*} where we used, in order, Proposition~\ref{subdomain}, Lemma~\ref{identity2} and Lemma~\ref{disconnected}, together with the fact that $\sigma^{N*}_0(\Omega_j,\partial^S\Omega_j, [g])=0$ for any $j$ in the last equality. \end{proof} \subsection{Proof of Lemma \ref{post2}.}\label{appendix3} Fix $\varepsilon>0$. An application of Corollary~\ref{Neumann cor2} to a compact exhaustion of $\Sigma^\infty_j$ yields the existence of a compact set $K\subset \Sigma^\infty_j\subset \widehat{\Sigma_j^\infty}$ such that $$ |\sigma^*_r(\widehat{\Sigma_j^\infty},[\widehat{h_\infty}]) - \sigma^{N*}_r(K, \partial^SK, [\widehat{h_\infty}])|<\varepsilon, $$ where $\partial^SK=K\cap \partial\Sigma^\infty_j \neq \O$. Since the $\check\Omega_j^n$ exhaust $\Sigma^\infty_j$, for all large enough $n$ one has $K\subset\check\Omega_j^n$. Then, by Proposition~\ref{subdomain}, $$ \sigma^{N*}_{r}(\check\Omega_j^n,\partial^S\check\Omega_j^n, [(\Psi^n)^*h_n]) \geq \sigma^{N*}_{r}(K, \partial^SK,[(\Psi^n)^*h_n]). $$ Taking the $\liminf$ of both sides in the above inequality and using Proposition~\ref{N-cont} yields $$ \liminf_{n\to\infty}\sigma^{N*}_{r}(\check\Omega_j^n,\partial^S\check\Omega_j^n, [(\Psi^n)^*h_n]) \geq \sigma^{N*}_{r}(K,\partial^SK, [\widehat{h_\infty}])> \sigma^*_r(\widehat{\Sigma_j^\infty},[\widehat{h_\infty}])-\varepsilon. $$ Since $\varepsilon$ is arbitrary, this completes the proof.
\section{Introduction} In the last few years a lot of attention has been devoted to inter-related topics which go under the name of $W$-algebras, integrable hierarchies (non-relativistic) in $1+1$ dimensions of KdV or NLS type, $2$-dimensional relativistic equations like Liouville (more generally Toda field theories) or SG, and $0$-dimensional matrix models which describe discretized $2$-dimensional gravity.\par Some of the above topics have a definitely more mathematical flavour, like for instance the theory and classification of $W$-algebras; some others are definitely more physically grounded: KdV describes waves in shallow water, and the Liouville equation is a ubiquitous one, to which a lot of attention has been paid in connection with non-critical strings.\par Despite the fact that the above models and theories all seem very different and can be constructed in apparently unrelated ways, it turned out that they are just manifestations of an underlying mathematical framework. Indeed $W$-algebras (i.e. non-linear Poisson-bracket algebras of $1$-dimensional fields containing a Virasoro one, which satisfy the standard properties of antisymmetry, Jacobi identity and Leibniz rule in the classical case) turn out to be the Poisson bracket structures for both relativistic Toda field theories and non-relativistic integrable equations like KdV, Boussinesq and so on. Moreover the Ward identities of generalized matrix models generate the so-called $W$-constraints and their partition functions turn out to be related to the $\tau$-functions of associated classical integrable hierarchies \cite{df}.\par It is worth mentioning that $W$-algebras themselves can be produced and classified via a truly algebraic approach, by putting restrictions on affine Lie algebras; such restrictions can be realized either as hamiltonian reductions or coset constructions (by looking at some centralizer over some enveloping algebra \cite{d}).
While there is perhaps no strict mathematical proof so far that all $W$-algebras can be obtained with the methods of \cite{fss}, there is at least no reason to believe that some $W$-algebras cannot be obtained that way.\par The production of a closed structure such as a $W$-algebra is an interesting mathematical activity by itself; however, there is much more to it than that. A very peculiar and absolutely non-trivial feature of $W$-algebras arises when they allow the construction of infinite towers of hamiltonians in involution. In this way they turn out to be linked to a dynamical system of a special kind, an integrable one. The technical tool which allows one to prove integrability consists in formulating the dynamical system (and its associated $W$-algebra) via a Lax operator, which can be either of scalar (KP-like) or matrix type.\par In the next section we will sketch the main features of the bosonic construction, postponing to the later section the introduction of supersymmetric integrable systems with the necessary modifications. \section{Bosonic Hierarchies} Let us first point out that $2D$ relativistic Toda models and $1+1$ non-relativistic integrable equations arise from constraining affine Lie algebras ${\hat{\cal G}}$ (and their associated enveloping algebras). The basic difference in the relativistic case is due to the fact that two copies of the affine algebra are considered, associated to the chiral and antichiral currents $J(z)$, ${\overline J}({\overline z})$ respectively. The dynamical fields are group-valued $g(z,{\overline z})$ and possibly expressed through a Gauss decomposition. We have \begin{eqnarray} J(z)= g^{-1}\partial_z g \end{eqnarray} and a similar equation for ${\overline J}({\overline z})$.\par The simplest case is provided when ${\cal G}=sl(2)$.
The three currents associated to ${\hat {sl(2)}}$ are $J_{\pm}(x)$ and $J_0(x)$ ($J_0(x)$ generates the ${\hat{U(1)}}$ subalgebra).\par In this simple case only two inequivalent constraints can be imposed on the (enveloping) affine algebra, either\par A) constraining $J_+(x) =1$ (hamiltonian constraint), or\par B) selecting the centralizer $X(y)$ of the enveloping algebra, namely\par $\{J_0(x), X(y)\}=0$ (coset).\par Accordingly, we get in the relativistic (I) and in the non-relativistic (II) cases the following dynamical systems:\par I A) The Liouville equation.\par II A) The m-KdV (and KdV) equation.\par I B) The $2D$ Witten's black hole.\par II B) The Non-Linear Schr\"odinger Equation.\par From now on we will concentrate only on the non-relativistic case, that is the system of integrable equations in $1+1$ dimensions which can be solved through the inverse scattering method. As mentioned above the integrability property is expressed by the fact that one can formulate the equations of motion through a Lax operator. We have two kinds of such operators, the scalar type \begin{eqnarray} L=\partial +\sum_{i=1,...,\infty} u_i \partial^{-i} \end{eqnarray} associated to the KP hierarchy, and the matrix type \begin{eqnarray} {\cal L} = \partial + \sum_i J_i(x) \tau^{i} + \Lambda \end{eqnarray} where the currents $J_i(x)$ are valued in some Lie algebra ${\cal G}$ generated by $\tau^i$.
$\Lambda$ is a constant element, depending on a spectral parameter $\lambda$, such that the loop algebra ${\tilde G} = {\cal G}\otimes C(\lambda, \lambda^{-1})$ can be decomposed into the direct sum ${\tilde G} = K \oplus M$, with $K$, $M$ respectively the Kernel and Image under the adjoint action of $\Lambda$ (this technical property implies that ${\cal L}$ can be diagonalized under a similarity transformation).\par In order to extract from scalar Lax operators integrable equations involving only a finite number of fields, we have to constrain the infinitely many fields $u_i(x)$ in a way consistent with the KP flows (constrained KP hierarchies). One possibility is requiring, for a given $n$, that $L^n={L^n}_+$ (that is, that it be a purely differential operator). This is indeed a consistent constraint (leading to the $n$-th KdV hierarchies); however, it is known that there exist many more inequivalent consistent constraints, and a classification of them out of the scalar Lax operators alone appears rather impractical.\par On the contrary it is well-known how to classify all possible hierarchies associated with affine algebras. They turn out to be related to the acceptable integral gradings for any given loop algebra ${\cal G}$ and the choice of the regular element $\Lambda$ (see e.g. \cite{mat} for details). Moreover it is possible to relate such solutions with the constrained KP hierarchies \cite{chi}.\par In order to be explicit we recall that in the original Drinfeld-Sokolov paper the $n$-th KdV hierarchies were obtained by assuming the underlying algebra to be $sl(n)$ and the regular element $\Lambda$ to be the sum over the ${\tilde{sl(n)}}$ simple roots.\par The above scheme seems quite satisfactory from the point of view of bosonic hierarchies since it provides a well-defined construction for them and it is quite plausible that they can all be accommodated in it.
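To make the scheme concrete, the simplest case can be written out explicitly. In one common normalization (the notation $L_{\mathrm{KdV}}$ and the choice of generators below are ours, not fixed by the references above), the $n=2$ constrained scalar Lax operator is the KdV operator, and the first non-trivial flow reproduces the KdV equation:

```latex
\begin{gather*}
L_{\mathrm{KdV}} = \partial^2 + u, \qquad
\partial_t L_{\mathrm{KdV}} = \big[\,(L_{\mathrm{KdV}}^{3/2})_+ \,,\, L_{\mathrm{KdV}}\,\big],\\
(L_{\mathrm{KdV}}^{3/2})_+ = \partial^3 + \tfrac{3}{2}\,u\,\partial + \tfrac{3}{4}\,u_x
\quad\Longrightarrow\quad
u_t = \tfrac{1}{4}\,\big(u_{xxx} + 6\,u\,u_x\big).
\end{gather*}
```

On the matrix side the same system arises from ${\cal G}=sl(2)$ with $\Lambda = E_+ + \lambda E_-$, i.e. ${\cal L} = \partial + q(x)H + E_+ + \lambda E_-$, where $H, E_\pm$ denote the standard $sl(2)$ generators; the field $q$ then obeys the mKdV equation and is related to $u$ by the Miura map.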
Questions concerning the possible equivalence of hierarchies arising from different choices of algebras, integral grading and/or regular element seem more technical and less central.\par So much for purely bosonic hierarchies; in the next section we will introduce the supersymmetric ones. \section{Supersymmetric Hierarchies} The first natural question when discussing supersymmetric integrable hierarchies is of course why we should worry about them at all. One can think e.g. of the fact that so far no supersymmetric matrix model providing a discretized $2D$ supergravity has been produced. Nevertheless some progress has been made, like the introduction of a supereigenvalue model which is in a sense pulled out of a hat but is related to a superintegrable hierarchy \cite{sem}.\par Moreover the remarkable relation of KdV-type hierarchies with the conformal algebras (Virasoro and supersymmetric extensions) establishes a connection between such hierarchies and the (super-)string theories which has still to be fully appreciated.\par From a purely mathematical point of view the role of supersymmetric integrable hierarchies and superalgebras is essential in at least two respects. Indeed, even when considering purely bosonic hierarchies, if we do not allow for super-structures like super-algebras we cannot pretend to exhaust the full class of possible hierarchies: new integrable interacting purely bosonic hierarchies arise in fact from the bosonic sector (B-B and F-F submatrices) of supermatrix-valued superhierarchies.\par Moreover investigating large $N$-extended superhierarchies corresponds to a sort of ``unification or grandunification'' program of known hierarchies. It happens in fact that unrelated bosonic hierarchies or lower supersymmetric ($N=1,2$) hierarchies turn out to be different manifestations of a single ``unifying'' large $N$ supersymmetric hierarchy.
We will see later an example of this fact when discussing the $N=4$ KdV hierarchy.\par The point of view that we adopt here in discussing supersymmetric hierarchies is based on the (super-)Lie algebra framework, which one can reasonably hope will provide the key to classifying all superhierarchies. The main reason for the difficulty involved in classifying superhierarchies w.r.t. the purely bosonic ones is due to the complications arising from the presence of both even and odd generators.\par We need to point out that (super-)Lie algebras appear in $3$ different classes according to whether they admit a presentation in terms of Dynkin diagrams with simple roots which are either:\par i) purely fermionic,\par ii) necessarily of mixed type, or\par iii) purely bosonic (they are reduced to standard Lie algebras).\par A simple argument made people believe for a long time that only the special class of super-Lie algebras admitting fermionic simple roots was relevant for the construction of superhierarchies. Inami and Kanno \cite{ik} gave it in the context of super-KdV hierarchies. In order to extend the bosonic matrix Lax operator they were led to consider a supersymmetric Lax operator of the kind \begin{eqnarray} {\cal L} = D + \Psi(X) + \Lambda \end{eqnarray} where now $D$ is the ($N=1$) fermionic supersymmetric derivative. The $\Psi(X)$ are superfields valued in some superalgebra and as such are fermionic. The regular element $\Lambda$ should be given by the sum over the simple roots, and in order to respect statistics it must be fermionic as well. Therefore it seemed that only class i) superalgebras had to be considered. A similar argument was given by Evans and Hollowood \cite{eh} in the case of superToda theories.
This argument paved the way to the standard supersymmetrization recipe for bosonic models, which goes as follows: embed the given bosonic algebra which provides the bosonic system into a larger super-Lie algebra having ``good properties'' and perform the hamiltonian reduction on it. In this way $N=1$ extensions of the KdV and Liouville equations were provided in terms of the $osp(1|2)$ superalgebra.\par The Inami-Kanno scheme is a perfectly consistent one, leading to a classification of supersymmetric hierarchies much similar to the bosonic case. There would be no need to improve on it if it did not turn out to be too restrictive. Indeed it happens that well-known and interesting superhierarchies cannot be accommodated in it. To my knowledge Brunelli and Das were the first \cite{bd} who faced this problem when they realized that the superNLS equation admits a Lax operator based on the bosonic $sl(2)$ algebra (and not $osp(1|2)$ as one would have expected). In \cite{sns} it was pointed out that the superNLS hierarchy arises as a coset construction (just as its bosonic counterpart) over a Poisson bracket structure based on the superaffinization (that is, expressed in terms of superfields) of the bosonic $sl(2)$ algebra.\par As a consequence there exists a much bigger class of supersymmetric integrable models than previously expected which needs to be investigated. Despite the fact that we do not yet have a systematic way of constructing them in terms of matrix Lax operators, just like the bosonic models or the Inami-Kanno superhierarchies, we can still develop some strategy to investigate them. This will be explained next. \section{Supersymmetric Hierarchies and Affine Algebras} Let us here discuss a possible strategy for constructing supersymmetric integrable hierarchies from generic superaffinizations of (super-)Lie algebras.
But first let us point out that a superaffinization of a given (super-)Lie algebra ${\cal G}$ with generators $\tau^i$ and structure constants ${f^{ij}}_k$ is realized by $N=1$ superfields $\Psi^i(X)$, with opposite statistics w.r.t. $\tau^i$ and such that \begin{eqnarray} \{\Psi^i(X),\Psi^j(Y)\}= {f^{ij}}_k\Psi^k(Y)\delta(X,Y) + K^{ij} D_Y\delta (X,Y) \end{eqnarray} with $\delta(X,Y)$ the $N=1$ delta-function and $K^{ij}=Str({\tau^i\tau^j})$ in some given (let's say the adjoint) representation of ${\cal G}$.\par The following steps should be performed:\par i) Take a superaffine (super-)Lie algebra, which should be regarded as a Poisson bracket structure.\par ii) Make an Ansatz for the possible hamiltonians in involution; this means imposing symmetry requirements, cosets or hamiltonian reductions.\par iii) Check the consistency of the flows and whether at lower orders one indeed gets hamiltonians in involution.\par iv) Try to figure out the form of possible Lax operators (this is the most difficult task).\par It can even happen that one finds more structures than expected. Indeed it is well-known that a relation exists between division algebras and extended supersymmetries. Complex, quaternionic and octonionic structures are associated with (global) $N=2,4,8$ extensions respectively.\par A complex structure for a (super-)algebra over the real field is an operation $J$ which satisfies $J^2=-1$, while a quaternionic structure involves $3$ complex structures $J_i$ whose mutual algebra is that of the Pauli matrices.\par If a theory admits a complex structure it necessarily has an extended supersymmetry. For instance the superNLS equation, which arises from the ${\hat {sl(2)}}/ {\hat{u(1)}}$ structure, is automatically $N=2$ since such a coset admits a complex structure (while ${\hat{sl(2)}}$ does not). An elegant (but equivalent) formulation can be realized through the coset ${\hat {sl(2)\oplus u(1)}}/{\hat{u(1)\oplus u(1)}}$ \cite{kst}.
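The quaternionic structure just mentioned can be made concrete in a toy representation (not taken from the paper, and not the specific structure of $sl(2)\oplus u(1)$): left multiplication by the imaginary quaternions $i,j,k$ on ${\mathbb R}^4\simeq{\mathbb H}$ yields three real matrices which each square to $-1$ and obey the Pauli-type quaternion algebra.

```python
import numpy as np

# Left multiplication by the imaginary quaternions i, j, k on R^4 ~ H
# yields three real complex structures J1, J2, J3 obeying the quaternion
# (Pauli-type) algebra.  Basis order: (1, i, j, k).
J1 = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
J2 = np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
J3 = np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])

I4 = np.eye(4, dtype=int)
for J in (J1, J2, J3):
    assert np.array_equal(J @ J, -I4)             # each J_i is a complex structure
assert np.array_equal(J1 @ J2, J3)                # i * j = k
assert np.array_equal(J1 @ J2 + J2 @ J1, 0 * I4)  # mutual anticommutation
```

A single such $J$ is the complex structure underlying an $N=2$ extension; a mutually anticommuting triple is the quaternionic structure underlying $N=4$.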
This construction allows a manifestly $N=2$ superfield formulation, since the extra $u(1)$'s are added to give a complex structure for both numerator and denominator.\par The $sl(2)\oplus u(1)$ algebra turns out to be very interesting because it appears in the list given by \cite{xxx} (actually these authors considered group-manifolds, out of which algebras can be immediately recovered) as the simplest example of a non-abelian algebra (the even simpler abelian case being $u(1)^{\otimes 4}$) admitting a quaternionic structure.\par A natural question therefore arises, namely whether the superaffine algebra ${\hat{sl(2)\oplus u(1)}}$, taken as a Poisson bracket algebra, allows one to play another game, beyond the coset already mentioned, according to the above scheme. In particular we can ask whether we can impose an $N=4$ symmetry requirement, which in turn implies an $N=4$ hierarchy. In the next section we will show that this is indeed the case \cite{ikt}. \section{The $N=4$ Structure of ${{sl(2)\oplus u(1)}}$} The superaffine algebra ${\hat{sl(2)\oplus u(1)}}$ can be conveniently described in terms of $N=2$ superfields. Let us introduce the $N=2$ fermionic derivatives $D,{\overline D}$ whose algebra reads as follows \begin{eqnarray} D^2={\overline D}^2 &=&0 \nonumber\\ \{D,{\overline D}\}&=& -\partial_x \end{eqnarray} The spin $\frac{1}{2}$ $N=2$ superfields are denoted as $H, {\overline H}, F, {\overline F}$.
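This algebra is easy to check symbolically: one can realize $D$ and ${\overline D}$ on the components of a general $N=2$ superfield $f = f_0 + \theta f_1 + {\overline\theta} f_2 + \theta{\overline\theta} f_3$ and verify $D^2={\overline D}^2=0$ and $\{D,{\overline D}\}=-\partial_x$. The sketch below is not part of the original paper and uses one particular sign convention, $D=\partial_\theta-\frac{1}{2}{\overline\theta}\partial_x$, ${\overline D}=\partial_{\overline\theta}-\frac{1}{2}\theta\partial_x$; other conventions differ by rescalings.

```python
import sympy as sp

x = sp.symbols('x')
half = sp.Rational(1, 2)

# A general N=2 superfield f = f0 + theta*f1 + thetabar*f2 + theta*thetabar*f3
# is stored as the tuple of its component fields (f0, f1, f2, f3).
f = tuple(sp.Function(f'f{i}')(x) for i in range(4))

def D(F):
    """Component action of D = d/dtheta - (1/2)*thetabar*d/dx."""
    f0, f1, f2, f3 = F
    return (f1, sp.Integer(0), f3 - half*sp.diff(f0, x), half*sp.diff(f1, x))

def Dbar(F):
    """Component action of Dbar = d/dthetabar - (1/2)*theta*d/dx."""
    f0, f1, f2, f3 = F
    return (f2, -f3 - half*sp.diff(f0, x), sp.Integer(0), -half*sp.diff(f2, x))

assert all(sp.simplify(c) == 0 for c in D(D(f)))        # D^2 = 0
assert all(sp.simplify(c) == 0 for c in Dbar(Dbar(f)))  # Dbar^2 = 0
# {D, Dbar} f = -df/dx, component by component
anti = [a + b for a, b in zip(D(Dbar(f)), Dbar(D(f)))]
assert all(sp.simplify(g + sp.diff(c, x)) == 0 for g, c in zip(anti, f))
```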
\par $H$ and ${\overline H}$ are associated to the $u(1)\oplus u(1)$ subalgebra.\par They are constrained superfields, the constraints being non-linearly realized \begin{eqnarray} && DH = {\overline D}{\overline H}=0\nonumber\\ && (D+ H) F = ({\overline D} -{\overline H}){\overline F} =0 \end{eqnarray} The non-vanishing structure constants are given by \begin{eqnarray} &&\{ H (1),{\overline H}(2)\} = D{\overline D} \delta \nonumber\\ &&\{ H(1), F(2)\} = DF\cdot \delta\nonumber\\ && \{H(1), {\overline F}(2)\} = - D{\overline F} \delta\nonumber\\ &&\{{\overline H}(1), F(2)\} = -{\overline D} F\cdot \delta\nonumber\\ &&\{{\overline H}(1), {\overline F}(2)\} = {\overline D} {\overline F} \cdot \delta\nonumber\\ &&\{ F(1),{\overline F}(2)\} = (D+H) ({\overline D} +{\overline H}) \delta + F {\overline F}\delta \end{eqnarray} where $\delta\equiv \delta (1,2)$ is the $N=2$ delta-function and the derivatives in the r.h.s. are computed at $Z\equiv 1$. \par In the last line a ``fake'' non-linear term appears. It is not present when the chiral constraints are solved in terms of $N=1$ superfields or component fields.\par There exists a second set of {\it global} $N=2$ non-linear supersymmetries, expressed through the infinitesimal parameters $\epsilon, {\overline\epsilon}$, which results from the quaternionic structure associated with $sl(2)\oplus u(1)$. 
We have \begin{eqnarray} \delta H &=& \epsilon D{\overline F} + {\overline \epsilon} H F\nonumber\\ \delta{\overline H} &=& {\overline \epsilon} {\overline D}F - \epsilon {\overline H}{\overline F}\nonumber\\ \delta F &=& -\epsilon D {\overline H} -\epsilon (H{\overline H} + F{\overline F}) \nonumber\\ \delta{\overline F} &=& -{\overline \epsilon} {\overline D} H - {\overline \epsilon} (H{\overline H} + F{\overline F}) \end{eqnarray} It can be easily checked that the above transformations preserve the chirality constraints and that their commutators close to give, together with the original transformations, an $N=4$ supersymmetry. \section{The $N=4$ Hierarchy} We have seen that ${\hat{sl(2)\oplus u(1)}}$ carries an $N=4$ structure. To prove the existence of globally invariant $N=4$ dynamical systems we have to construct explicitly the $N=4$ invariant hamiltonians. They indeed exist and, moreover, at the lowest integral spin dimensions $d=1,2$ they are unique up to total derivatives (at least if a global chargeless condition is required; $H, {\overline H}$ are chargeless, while $F$ and ${\overline F}$ have charges $+1$ and $-1$ respectively).\par We have indeed \begin{eqnarray} H_1 &=& F {\overline F} + H {\overline H}\nonumber\\ H_2 &=& F'{\overline F} -H'{\overline H} -\nonumber\\ && -(D{\overline H} + {\overline D} H) (H {\overline H} + F {\overline F}) -2 H {\overline H} F{\overline F} \end{eqnarray} Higher dimensional $N=4$ hamiltonians can be explicitly constructed and turn out to be in involution with the lower dimensional ones.\par The resulting equations of motion (which we do not need to write here, see \cite{ikt}) with respect to the second hamiltonian realize an $N=4$ dynamical system which combines in a non-trivial way both the $N=2$ mKdV equation and the $N=2$ NLS equation. The latter are recovered by setting, consistently with the equations of motion, respectively $F={\overline F} =0$ and $H={\overline H} =0$.
A third, more mysterious, $N=2$ system can be obtained by performing a non-symmetrical reduction leading to ${\overline H} = F = 0$.\par So much for the construction of the $N=4$ system. An Ansatz has guided us towards its realization and we have seen that it is essentially unique. A point which has been left aside is the explicit proof that our system indeed corresponds to an integrable hierarchy admitting an infinite tower of hamiltonians in involution. In this particular case we have a very elegant procedure which proves this. Unfortunately, as already stated, we cannot rely so far on any systematic construction of the Lax operators valid for generic theories. The best we can do at present is based on trial and error.\par However, for the ${\hat{sl(2)\oplus u(1)}}$ case the key property which allows one to solve the problem is the existence of a (differential polynomial) $N=4$ Sugawara construction \cite{ikt}. This very remarkable transformation has at least four different consequences:\par i) it provides a linearization of the $N=4$ transformations,\par ii) it furnishes a realization of the ``minimal'' $N=4$ SuperConformal Algebra (SCA),\par iii) it relates the ``affine hierarchy'' to the $N=4$ KdV system \cite{di} and\par iv) it allows the construction of the Lax operator which proves the integrability. \par The Sugawara transformation is a differential polynomial transformation which expresses the ``superconformal fields'' through the original affine superfields $H, {\overline H}, F, {\overline F}$. The transformed superfields are an $N=2$ real superVirasoro superfield $J$ (with component field content $(1,\frac{3}{2},\frac{3}{2},2)$) plus two chiral and antichiral spin $1$ superfields (in components $(1,\frac{3}{2})$).
We have explicitly \begin{eqnarray} J &=& H{\overline H} + F{\overline F} + D {\overline H} + {\overline D} H \nonumber\\ \Phi &=& D {\overline F}\nonumber\\ {\overline \Phi} &=& {\overline D}F \end{eqnarray} The presence of the Feigin-Fuchs terms on the r.h.s. for $J$ is especially important. Without them $J$ would be a nilpotent field ($J^3=0$ due to the fermionic character of $H, {\overline H}, F, {\overline F}$). Moreover they allow the second set of $N=2$ transformations to close linearly on $J, \Phi, {\overline \Phi}$ as \begin{eqnarray} \delta J &=& -{\overline \epsilon} D{\overline \Phi} -{\epsilon}{\overline D} \Phi\nonumber\\ \delta \Phi &=& {\overline \epsilon} D J\nonumber\\ \delta {\overline \Phi} &=& \epsilon {\overline D}J \end{eqnarray} The composite superfields $J, \Phi, {\overline \Phi}$ satisfy a closed algebra under the original ${\hat{sl(2)\oplus u(1)}}$ Poisson bracket structure, and it coincides with the minimal version of the $N=4$ SCA. \par The hamiltonians in involution can be expressed through the superfields $J, \Phi,{\overline \Phi}$ alone. At the lowest orders we have, for the hamiltonian densities, \begin{eqnarray} H_1 &=& J\nonumber\\ H_2&=& J^2 - 2 \Phi{\overline\Phi}\nonumber\\ H_3&=& J[D,{\overline D}] J + 2 \Phi {\overline \Phi}'+\frac{2}{3}J^3 - 4 J \Phi{\overline \Phi} \end{eqnarray} As a consequence we have a closed system of dynamical equations for $J,\Phi ,{\overline\Phi}$ which coincides with the $N=4$ KdV hierarchy.
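The nilpotency statement above is easy to check in a toy model: keeping only the non-derivative part of $J$ at a single point, $X = H{\overline H}+F{\overline F}$ becomes an element of a Grassmann algebra on four anticommuting generators, and $X^3=0$ follows by degree counting. A minimal sketch (illustrative only; the generator bookkeeping is ours, not the paper's):

```python
# Toy Grassmann algebra on four anticommuting generators 0..3, standing in
# for the values of H, Hbar, F, Fbar at a point (derivative terms dropped).
def gmul(A, B):
    """Multiply two Grassmann elements, stored as dicts
    {sorted tuple of generator indices: coefficient}."""
    out = {}
    for m1, c1 in A.items():
        for m2, c2 in B.items():
            if set(m1) & set(m2):
                continue            # a repeated generator gives zero
            prod = list(m1 + m2)
            sign = 1                # sign of the permutation sorting prod
            for i in range(len(prod)):
                for j in range(i + 1, len(prod)):
                    if prod[i] > prod[j]:
                        sign = -sign
            key = tuple(sorted(prod))
            out[key] = out.get(key, 0) + sign * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

# X models the non-derivative part of J: H*Hbar + F*Fbar.
X = {(0, 1): 1, (2, 3): 1}
X2 = gmul(X, X)
assert X2 == {(0, 1, 2, 3): 2}      # X^2 = 2 H Hbar F Fbar
assert gmul(X2, X) == {}            # X^3 = 0: without the Feigin-Fuchs
                                    # terms J would be nilpotent
```

The Feigin-Fuchs terms $D{\overline H}+{\overline D}H$ are bosonic and escape this degree counting, which is why they remove the nilpotency.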
\par The Lax operator can be borrowed from the known Lax operator of the $N=4$ KdV hierarchy and is given by \cite{dig} \begin{eqnarray} L= D {\overline D} + D {\overline D} \partial^{-1}(J +{\overline \Phi} \partial^{-1}\Phi ) \partial^{-1} D {\overline D} \end{eqnarray} It should be noticed that in this particular case checking the integrability properties of the given hierarchy was immediate once the Sugawara construction had been taken into account, since the above Lax operator for the $N=4$ KdV was already known. However, even if this had not been the case (as one could expect for constructions based on more general algebras), the Sugawara transformation itself would greatly simplify the task of finding the correct Lax pair, since it is much easier to deal with three spin $1$ fields than with spin $\frac{1}{2}$ superfields. The dramatic simplification of the hamiltonians when expressed through $J, \Phi, {\overline\Phi}$ is another example.\par Some more comments are in order: the $N=4$ KdV is the ``unifying hierarchy'' for two of the three inequivalent $N=2$ KdV hierarchies labeled by $a= 1,-2,4$. The $a=-2$ and $a= 4$ $N=2$ KdV hierarchies are indeed obtained from different reductions of the $N=4$ KdV.\par The construction based on the abelian $u(1)^{\otimes 4}$ algebra could lead to global $N=4$ hierarchies realized through strictly chiral and antichiral superfields, but it can be easily checked that they are definitely not polynomial generalizations of the $N=2$ NLS and are not $N=4$ superconformal. \section{Conclusions} We have pointed out that supersymmetric integrable hierarchies can be very naturally investigated (and hopefully classified) taking as a starting point the (super-)Lie algebras and their supersymmetric affinizations. Our approach is complementary to the point of view advocated by many authors in the literature (like e.g. Z. Popowicz, who is also the author of a package for computing Lax operators using REDUCE).
They rather take the converse approach of actually producing integrable equations in terms of some consistent Lax operators, especially of scalar type. This approach has the advantage of indeed furnishing integrable systems, but leaves aside questions concerning the algebraic interpretation of these results. The approach based on Lie algebras has just the opposite merits and drawbacks. It furnishes from the very beginning the algebraic setting for defining dynamical systems and provides guidelines on how to obtain them, while the burden is on proving the existence of a tower of hamiltonians in involution.\par This situation is very specific to the supersymmetric case since, in contrast with bosonic hierarchies, we do not have at our disposal a hamiltonian reduction procedure which automatically leads to Lax operators. The examples where this is indeed the case, corresponding to the Inami-Kanno hierarchies, are of interest but they belong to a restricted class. Other interesting super-integrable systems, like the $N=4$ KdV equation previously discussed, are left out of this scheme.\par The approach based on (super-)Lie algebras is a very convenient one in the investigation of supersymmetric extended hierarchies. As discussed in this paper, one has to look for algebras admitting extra structures: complex, quaternionic and so forth. Some partial results obtained in collaboration with Ivanov and Krivonos show indeed that $sl(3)$, the next simplest quaternionic algebra, admits a global $N=4$ structure, which suggests the realization of at least global $N=4$ hierarchies.\par In conclusion it deserves to be mentioned that investigating supersymmetric integrable hierarchies looks promising due to the many open problems still present. \section{Acknowledgments} This work has been done under a JSPS contract. The author expresses his gratitude to the members of the Physics Department of Shizuoka University for their kind hospitality.
\section*{Acknowledgement}}{} \newenvironment{romenumerate}[1][-10pt] \addtolength{\leftmargini}{#1}\begin{enumerate \renewcommand{\labelenumi}{\textup{(\roman{enumi})}}% \renewcommand{\theenumi}{\textup{(\roman{enumi})}}% }{\end{enumerate}} \newcounter{oldenumi} \newenvironment{romenumerateq {\setcounter{oldenumi}{\value{enumi}} \begin{romenumerate} \setcounter{enumi}{\value{oldenumi}}} {\end{romenumerate}} \newcounter{thmenumerate} \newenvironment{thmenumerate} {\setcounter{thmenumerate}{0}% \renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}% \def\item{\pa \refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}} } {} \newcounter{xenumerate} \newenvironment{xenumerate} {\begin{list} {\upshape(\roman{xenumerate})} {\setlength{\leftmargin}{0pt} \setlength{\rightmargin}{0pt} \setlength{\labelwidth}{0pt} \setlength{\itemindent}{\labelsep} \setlength{\topsep}{0pt} \usecounter{xenumerate}} } {\end{list}} \newcommand\xfootnote[1]{\unskip\footnote{#1}$ $} \newcommand\pfitem[1]{\par(#1):} \newcommand\pfitemx[1]{\par#1:} \newcommand\pfitemref[1]{\pfitemx{\ref{#1}}} \newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent} \newcommand\step[2]{\smallskip\noindent\emph{Step #1: #2} \noindent} \newcommand\stepx{\smallskip\noindent\refstepcounter{steps}% \emph{Step \arabic{steps}:}\noindent} \newcommand{\refT}[1]{Theorem~\ref{#1}} \newcommand{\refC}[1]{Corollary~\ref{#1}} \newcommand{\refL}[1]{Lemma~\ref{#1}} \newcommand{\refR}[1]{Remark~\ref{#1}} \newcommand{\refS}[1]{Section~\ref{#1}} \newcommand{\refSS}[1]{Subsection~\ref{#1}} \newcommand{\refP}[1]{Problem~\ref{#1}} \newcommand{\refD}[1]{Definition~\ref{#1}} \newcommand{\refE}[1]{Example~\ref{#1}} \newcommand{\refF}[1]{Figure~\ref{#1}} \newcommand{\refApp}[1]{Appendix~\ref{#1}} \newcommand{\refTab}[1]{Table~\ref{#1}} \newcommand{\refand}[2]{\ref{#1} and~\ref{#2}} \newcommand\XXX{XXX \marginal{XXX}} \newcommand\linebreakx{\unskip\marginal{$\backslash$linebreak}\linebreak} \begingroup 
\count255=\time \divide\count255 by 60 \count1=\count255 \multiply\count255 by -60 \advance\count255 by \time \ifnum \count255 < 10 \xdef\klockan{\the\count1.0\the\count255} \else\xdef\klockan{\the\count1.\the\count255}\fi \endgroup \newcommand\nopf{\qed} \newcommand\noqed{\renewcommand{\qed}{}} \newcommand\qedtag{\eqno{\qed}} \DeclareMathOperator*{\sumx}{\sum\nolimits^{*}} \DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}} \newcommand{\sum_{i=0}^\infty}{\sum_{i=0}^\infty} \newcommand{\sum_{j=0}^\infty}{\sum_{j=0}^\infty} \newcommand{\sum_{k=0}^\infty}{\sum_{k=0}^\infty} \newcommand{\sum_{m=0}^\infty}{\sum_{m=0}^\infty} \newcommand{\sum_{n=0}^\infty}{\sum_{n=0}^\infty} \newcommand{\sum_{i=1}^\infty}{\sum_{i=1}^\infty} \newcommand{\sum_{j=1}^\infty}{\sum_{j=1}^\infty} \newcommand{\sum_{k=1}^\infty}{\sum_{k=1}^\infty} \newcommand{\sum_{m=1}^\infty}{\sum_{m=1}^\infty} \newcommand{\sum_{n=1}^\infty}{\sum_{n=1}^\infty} \newcommand{\sum_{i=1}^n}{\sum_{i=1}^n} \newcommand{\sum_{j=1}^n}{\sum_{j=1}^n} \newcommand{\sum_{k=1}^n}{\sum_{k=1}^n} \newcommand{\prod_{i=1}^n}{\prod_{i=1}^n} \newcommand\set[1]{\ensuremath{\{#1\}}} \newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}} \newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}} \newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}} \newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}} \newcommand\xpar[1]{(#1)} \newcommand\bigpar[1]{\bigl(#1\bigr)} \newcommand\Bigpar[1]{\Bigl(#1\Bigr)} \newcommand\biggpar[1]{\biggl(#1\biggr)} \newcommand\lrpar[1]{\left(#1\right)} \newcommand\bigsqpar[1]{\bigl[#1\bigr]} \newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]} \newcommand\biggsqpar[1]{\biggl[#1\biggr]} \newcommand\lrsqpar[1]{\left[#1\right]} \newcommand\xcpar[1]{\{#1\}} \newcommand\bigcpar[1]{\bigl\{#1\bigr\}} \newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}} \newcommand\biggcpar[1]{\biggl\{#1\biggr\}} \newcommand\lrcpar[1]{\left\{#1\right\}} \newcommand\abs[1]{|#1|} \newcommand\bigabs[1]{\bigl|#1\bigr|} \newcommand\Bigabs[1]{\Bigl|#1\Bigr|} 
\newcommand\biggabs[1]{\biggl|#1\biggr|} \newcommand\lrabs[1]{\left|#1\right|} \def\rompar(#1){\textup(#1\textup)} \newcommand\xfrac[2]{#1/#2} \newcommand\xpfrac[2]{(#1)/#2} \newcommand\xqfrac[2]{#1/(#2)} \newcommand\xpqfrac[2]{(#1)/(#2)} \newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}} \newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}} \newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}} \newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}} \newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}} \newcommand\innprod[1]{\langle#1\rangle} \newcommand\expbig[1]{\exp\bigl(#1\bigr)} \newcommand\expBig[1]{\exp\Bigl(#1\Bigr)} \newcommand\explr[1]{\exp\left(#1\right)} \newcommand\expQ[1]{e^{#1}} \def\xexp(#1){e^{#1}} \newcommand\ceil[1]{\lceil#1\rceil} \newcommand\floor[1]{\lfloor#1\rfloor} \newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor} \newcommand\frax[1]{\{#1\}} \newcommand\setn{\set{1,\dots,n}} \newcommand\nn{[n]} \newcommand\ntoo{\ensuremath{{n\to\infty}}} \newcommand\Ntoo{\ensuremath{{N\to\infty}}} \newcommand\asntoo{\text{as }\ntoo} \newcommand\ktoo{\ensuremath{{k\to\infty}}} \newcommand\mtoo{\ensuremath{{m\to\infty}}} \newcommand\stoo{\ensuremath{{s\to\infty}}} \newcommand\ttoo{\ensuremath{{t\to\infty}}} \newcommand\jtoo{\ensuremath{{j\to\infty}}} \newcommand\too{\to\infty} \newcommand\bmin{\wedge} \newcommand\norm[1]{\|#1\|} \newcommand\bignorm[1]{\bigl\|#1\bigr\|} \newcommand\Bignorm[1]{\Bigl\|#1\Bigr\|} \newcommand\downto{\searrow} \newcommand\upto{\nearrow} \newcommand\half{\tfrac12} \newcommand\thalf{\tfrac12} \newcommand\punkt{.\spacefactor=1000} \newcommand\iid{i.i.d\punkt} \newcommand\ie{i.e\punkt} \newcommand\eg{e.g\punkt} \newcommand\viz{viz\punkt} \newcommand\cf{cf\punkt} \newcommand{a.s\punkt}{a.s\punkt} \newcommand{a.e\punkt}{a.e\punkt} \renewcommand{\ae}{\vu} \newcommand\whp{w.h.p\punkt} \newcommand\ii{\mathrm{i}} \newcommand{\longrightarrow}{\longrightarrow} \newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}} 
\newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}} \newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}} \newcommand\eqd{\overset{\mathrm{d}}{=}} \newcommand\neqd{\overset{\mathrm{d}}{\neq}} \newcommand\op{o_{\mathrm p}} \newcommand\Op{O_{\mathrm p}} \newcommand\bbR{\mathbb R} \newcommand\bbC{\mathbb C} \newcommand\bbN{\mathbb N} \newcommand\bbT{\mathbb T} \newcommand\bbQ{\mathbb Q} \newcommand\bbZ{\mathbb Z} \newcommand\bbZleo{\mathbb Z_{\le0}} \newcommand\bbZgeo{\mathbb Z_{\ge0}} \newcounter{CC} \newcommand{\CC}{\stepcounter{CC}\CCx} \newcommand{\CCx}{C_{\arabic{CC}}} \newcommand{\CCdef}[1]{\xdef#1{\CCx}} \newcommand{\CCname}[1]{\CC\CCdef{#1}} \newcommand{\CCreset}{\setcounter{CC}0} \newcounter{cc} \newcommand{\cc}{\stepcounter{cc}\ccx} \newcommand{\ccx}{c_{\arabic{cc}}} \newcommand{\ccdef}[1]{\xdef#1{\ccx}} \newcommand{\ccname}[1]{\cc\ccdef{#1}} \newcommand{\ccreset}{\setcounter{cc}0} \renewcommand\Re{\operatorname{Re}} \renewcommand\Im{\operatorname{Im}} \newcommand\E{\operatorname{\mathbb E{}}} \renewcommand\P{\operatorname{\mathbb P{}}} \newcommand\PP{\operatorname{\mathbb P{}}} \newcommand\Var{\operatorname{Var}} \newcommand\Cov{\operatorname{Cov}} \newcommand\Corr{\operatorname{Corr}} \newcommand\Exp{\operatorname{Exp}} \newcommand\Po{\operatorname{Po}} \newcommand\Bi{\operatorname{Bi}} \newcommand\Bin{\operatorname{Bin}} \newcommand\Be{\operatorname{Be}} \newcommand\Ge{\operatorname{Ge}} \newcommand\NBi{\operatorname{NegBin}} \newcommand\Res{\operatorname{Res}} \newcommand\fall[1]{^{\underline{#1}}} \newcommand\rise[1]{^{\overline{#1}}} \newcommand\supp{\operatorname{supp}} \newcommand\sgn{\operatorname{sgn}} \newcommand\Tr{\operatorname{Tr}} \newcommand\ga{\alpha} \newcommand\gb{\beta} \newcommand\gd{\delta} \newcommand\gD{\Delta} \newcommand\gf{\varphi} \newcommand\gam{\gamma} \newcommand\gG{\Gamma} \newcommand\gk{\varkappa} \newcommand\gl{\lambda} \newcommand\gL{\Lambda} \newcommand\go{\omega} \newcommand\gO{\Omega} \newcommand\gs{\sigma} 
\newcommand\gS{\Sigma} \newcommand\gss{\sigma^2} \newcommand\gth{\theta} \newcommand\eps{\varepsilon} \newcommand\ep{\varepsilon} \renewcommand\phi{\xxx} \newcommand\cA{\mathcal A} \newcommand\cB{\mathcal B} \newcommand\cC{\mathcal C} \newcommand\cD{\mathcal D} \newcommand\cE{\mathcal E} \newcommand\cF{\mathcal F} \newcommand\cG{\mathcal G} \newcommand\cH{\mathcal H} \newcommand\cI{\mathcal I} \newcommand\cJ{\mathcal J} \newcommand\cK{\mathcal K} \newcommand\cL{{\mathcal L}} \newcommand\cM{\mathcal M} \newcommand\cN{\mathcal N} \newcommand\cO{\mathcal O} \newcommand\cP{\mathcal P} \newcommand\cQ{\mathcal Q} \newcommand\cR{{\mathcal R}} \newcommand\cS{{\mathcal S}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{\mathcal V} \newcommand\cW{\mathcal W} \newcommand\cX{{\mathcal X}} \newcommand\cY{{\mathcal Y}} \newcommand\cZ{{\mathcal Z}} \newcommand\bJ{\bar J} \newcommand\tJ{{\tilde J}} \newcommand\ett[1]{\boldsymbol1\xcpar{#1}} \newcommand\bigett[1]{\boldsymbol1\bigcpar{#1}} \newcommand\Bigett[1]{\boldsymbol1\Bigcpar{#1}} \newcommand\etta{\boldsymbol1} \newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)} \newcommand\limn{\lim_{n\to\infty}} \newcommand\limN{\lim_{N\to\infty}} \newcommand\qw{^{-1}} \newcommand\qww{^{-2}} \newcommand\qq{^{1/2}} \newcommand\qqw{^{-1/2}} \newcommand\qqq{^{1/3}} \newcommand\qqqb{^{2/3}} \newcommand\qqqw{^{-1/3}} \newcommand\qqqbw{^{-2/3}} \newcommand\qqqq{^{1/4}} \newcommand\qqqqc{^{3/4}} \newcommand\qqqqw{^{-1/4}} \newcommand\qqqqcw{^{-3/4}} \newcommand\intoi{\int_0^1} \newcommand\intoo{\int_0^\infty} \newcommand\intoooo{\int_{-\infty}^\infty} \newcommand\oi{[0,1]} \newcommand\ooi{(0,1]} \newcommand\ooo{[0,\infty)} \newcommand\ooox{[0,\infty]} \newcommand\oooo{(-\infty,\infty)} \newcommand\setoi{\set{0,1}} \newcommand\dtv{d_{\mathrm{TV}}} \newcommand\dd{\,\mathrm{d}} \newcommand\ddx{\mathrm{d}} \newcommand\ddd[1]{\frac{\ddx}{\ddx#1}} \newcommand{probability generating function}{probability 
generating function} \newcommand{moment generating function}{moment generating function} \newcommand{characteristic function}{characteristic function} \newcommand{uniformly integrable}{uniformly integrable} \newcommand\rv{random variable} \newcommand\lhs{left-hand side} \newcommand\rhs{right-hand side} \newcommand\gnp{\ensuremath{G(n,p)}} \newcommand\gnm{\ensuremath{G(n,m)}} \newcommand\gnd{\ensuremath{G(n,d)}} \newcommand\gnx[1]{\ensuremath{G(n,#1)}} \newcommand\etto{\bigpar{1+o(1)}} \newcommand\sumgl{\sum_{\gl}} \newcommand\sumgla{\sum_{\gl\in\gs(A)}} \newcommand\sumglx{\sum_{\gl\neq\gl_1}} \newcommand\summu{\sum_{\mu}} \newcommand\summux{\sum_{\mu\neq\gl_1}} \newcommand\sumiq{\sum_{i=1}^q} \newcommand\sumjq{\sum_{j=1}^q} \newcommand\tin{T_{i,n,\gl,\mu}} \newcommand\txn{T_{\ceil{xn},n,\gl,\mu}} \newcommand\tyn{T_{\ceil{n^y},n,\gl,\mu}} \newcommand\tqn[1]{T_{#1,n,\gl,\mu}} \newcommand\tynb{T_{\ceil{n^y},n,\gl,\bgl}} \newcommand\gsa{\gs(A)} \newcommand\gsax{\gs(A)\setminus\set{\gl_1}} \newcommand\xnn{_{\ceil{xn},n}} \newcommand\iij{_{i,j}} \newcommand\PIx{\widehat P} \newcommand\nyn{_{\ceil{n^y},n}} \newcommand\bgl{\overline \gl} \newcommand\mm{^{(m)}} \newcommand\wmm{\frac{1}{m!}} \newcommand\uniz{uniformly for $z$ in any fixed compact set in the complex plane} \newcommand\qll{^{(\ell)}} \newcommand\el{E_\lambda} \newcommand\nl{N_\lambda} \newcommand\pl{P_\lambda} \newcommand\pli{P_{\lambda_1}} \newcommand\nul{\nu_\gl} \newcommand\tA{\widetilde A} \newcommand{H\"older}{H\"older} \newcommand{P\'olya}{P\'olya} \newcommand\CS{Cauchy--Schwarz} \newcommand\CSineq{\CS{} inequality} \newcommand{L\'evy}{L\'evy} \newcommand\ER{Erd\H os--R\'enyi} \newcommand{Lov\'asz}{Lov\'asz} \newcommand{Fr\'echet}{Fr\'echet} \newcommand{\texttt{Maple}}{\texttt{Maple}} \newcommand\citex{\REM} \newcommand\refx[1]{\texttt{[#1]}} \newcommand\xref[1]{\texttt{(#1)}} \hyphenation{Upp-sala} \begin{document} \begin{abstract} It is well-known that in a small P\'olya urn, i.e., an urn where second 
largest real part of an eigenvalue is at most half the largest eigenvalue, the distribution of the numbers of balls of different colours in the urn is asymptotically normal under weak additional conditions. We consider the balanced case, and then give asymptotics of the mean and the covariance matrix, showing that after appropriate normalization, the mean and covariance matrix converge to the mean and variance of the limiting normal distribution. \end{abstract} \maketitle \section{Introduction}\label{S:intro} A (generalized) P\'olya{} urn contains balls of different colours. A ball is drawn at random from the urn, and is replaced by a set of balls that depends on the colour of the drawn ball. (Moreover, the replacement set may be random, with a distribution depending on the drawn colour). This is repeated an infinite number of times, and we are interested in the asymptotic composition of the urn. For details, and the assumptions used in the present paper, see \refS{Spolya}; for the history of P\'olya{} urns, see \eg{} \citet{Mahmoud}. It is well-known, see \eg{} \cite[Theorems 3.22--3.24]{SJ154}, that the asymptotic behaviour depends on the eigenvalues of the \emph{intensity matrix} of the urn defined in \eqref{A} below, and in particular on the two largest (in real part) eigenvalues $\gl_1$ and $\gl_2$. If $\Re\gl_2\le\frac12\gl_1$ (a \emph{small urn}), then, under some assumptions (including some version of irreducibility), the number of balls of a given colour is asymptotically normal, while if $\Re\gl_2>\frac12\gl_1$ (a \emph{large urn}), then this is not true: there are (again under some assumptions, and after suitable normalization) limits in distribution, but the limiting distributions have no simple description and are (typically, at least) not normal; furthermore, there may be oscillations so that suitable subsequences converge in distribution but not the full sequence.
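As an illustration of the small/large dichotomy, the following sketch classifies a deterministic two-colour urn from the eigenvalues of its intensity matrix. The convention $A_{ij}=a_j\E\xi_{ji}$ used here is one common choice; the paper's definition \eqref{A} is not reproduced in this excerpt, so this is a hedged sketch rather than the paper's exact setup.

```python
import numpy as np

def classify(repl, activities):
    """Classify a deterministic urn as 'small' or 'large' from the
    eigenvalues of its intensity matrix.  We take A_ij = a_j * xi_ji,
    a common convention (the paper's eq. (A) is not shown here)."""
    xi = np.asarray(repl, dtype=float)
    a = np.asarray(activities, dtype=float)
    A = xi.T * a                    # A_ij = a_j * xi_ji
    ev = sorted(np.linalg.eigvals(A).real, reverse=True)
    lam1, lam2 = ev[0], ev[1]
    return 'small' if lam2 <= lam1 / 2 else 'large'

# Friedman-type urn [[a, b], [b, a]] with unit activities has eigenvalues
# a + b and a - b, so it is small exactly when a <= 3b.
assert classify([[2, 1], [1, 2]], [1, 1]) == 'small'   # 1 <= 3/2
assert classify([[7, 1], [1, 7]], [1, 1]) == 'large'   # 6 > 4
```

Note that the original P\'olya{} urn, replacement matrix $[[1,0],[0,1]]$, has $\gl_2=\gl_1=1$ and is therefore large, consistent with its well-known non-normal (Beta) limit.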
Another difference is that for a small urn, the limit is independent of the initial state, and therefore independent of what happens in any fixed finite set of draws (i.e., the limit theorem is mixing, see \cite[Proposition 2]{AldousEagleson-stable}), while for a large urn, on the contrary, there is an almost sure (a.s\punkt) limit result and thus the limit is essentially determined by what happens early in the process. For large urns, \citet{P} proved, assuming that the urn is \emph{balanced}, see \refS{Spolya}, a limit theorem which shows such an a.s\punkt{} result and also shows convergence in $L^p$ for any $p$, and thus convergence of all moments. For small urns, however, less is known about moment convergence in general. Balanced deterministic urns with 2 colours were considered already by \citet{Bernstein1,Bernstein2}, who showed both the asymptotic normality in the small urn case and gave results on mean and variance; \citet{Savkevitch} also considered this urn and studied the mean and variance and, moreover, the third and fourth moments. \citet{BagchiPal} (independently, but 45 years later) gave another proof of asymptotic normality for balanced deterministic small urns with 2 colours, by using the method of moments and thus proving moment convergence as part of the proof. \citet{BaiHu}, who consider an arbitrary number of colours and allow random replacements (under somewhat different conditions than ours), show asymptotic results for the variance as part of their proof of asymptotic normality for small urns, using the same decomposition of the covariance matrix as in the present paper; however, these results are buried inside the proof and are not stated explicitly. The main purpose of the present paper is to give explicit asymptotics for the first and second moments for a balanced small urn, and in particular show that the asymptotics are as suggested by the central limit theorem. 
(In particular, loosely speaking, the variance of the number of balls of a given colour is asymptotic to the asymptotic variance.) We also include a simple result on non-degeneracy of the limit (\refT{TS}). Precise statements are given in \refS{Sresults}. The proofs use a method related to the method by \citet{P}, but somewhat simpler. The main idea (which has been used in various forms earlier, in particular in \cite{BaiHu}) is that the drawing of a ball and the subsequent addition of a set of balls, at time $k$, say, influences the composition of the urn at a later time $n$ not only directly by the added balls, but also indirectly since the added balls change the probabilities for later draws. By including the expectation of these later indirect effects, we find the real effect at time $n$ of the draw at time $k$, and we may write the composition at time $n$ as the sum, for $k\le n$, of these contributions, see \eqref{kia}. The contributions for different $k$ are orthogonal, and thus the variance can be found by summing the variances of these contributions. See also the comments in \refS{Sfurther}. \refS{Spolya} gives definitions and introduces the notation. \refS{Sresults} contains the statements of the main results, which are proved in Sections \ref{Spf1}--\ref{Spf2}. \refS{Sex} presents some applications, and \refS{Sfurther} contains some further comments on where the variance comes from, \ie, which draws are most important, and the difference between small and large urns. The appendices give some further, more technical, results. \begin{remark} We consider in the present paper only the mean and (co)\-variance. Similar results on convergence of higher moments for balanced small urns will be given in \cite{JansonPouyanne} (under somewhat more restrictive assumptions than in the present paper), using the (related) method of \cite{P}.
It is possible that the method in the present paper can be extended to handle higher moments too, but we do not see any immediate extension, and therefore prefer the approach in \cite{JansonPouyanne}. On the other hand, for the first and second moments, the present method seems somewhat simpler, and perhaps also more informative. \end{remark} \begin{problem} In the present paper, we consider only balanced urns. We leave it as a challenging open problem to prove (or disprove?) similar results for non-balanced urns. \end{problem} \section{P\'olya{} urns}\label{Spolya} \subsection{Definition and assumptions} A (generalized) P\'olya{} urn process is defined as follows. (See \eg{} \citet{Mahmoud}, \citet{JKotz}, \citet{SJ154}, \citet{Flajolet-analytic} and \citet{P} for the history and further references, as well as some different methods used to study such urns.) There are balls of $q$ colours (types) $1,\dots, q$, where $2\le q<\infty$. The composition of the urn at time $n$ is given by the vector $X_n=(X_{n1},\dots,X_{nq})\in \ooo^q$, where $X_{ni} $ is the number of balls of colour $i$. The urn starts with a given vector $X_0$, and evolves according to a discrete time Markov process. Each colour $i$ has an \emph{activity} (or weight) $a_i\ge0$, and a (generally random) \emph{replacement vector} $\xi_i=(\xi_{i1},\dots, \xi_{iq})$. At each time $ n+1\ge 1 $, the urn is updated by drawing one ball at random from the urn, with the probability of any ball proportional to its activity. Thus, the drawn ball has colour $i$ with probability \begin{equation} \label{urn} \frac{a_iX_{ni}}{\sum_{j}a_jX_{nj}}. \end{equation} If the drawn ball has type $i$, it is replaced together with $\Delta X_{nj}$ balls of type $j$, $j=1,\dots,q$, where the random vector $ \Delta X_{n}=(\Delta X_{n1},\dots, \Delta X_{nq}) $ has the same distribution as $ \xi_i$ and is independent of everything else that has happened so far. Thus, the urn is updated to $X_{n+1}=X_n+\gD X_n$.
In many applications, the numbers $X_{nj}$ and $\xi_{ij}$ are integers, but that is not necessary; it has been noted several times that the P\'olya{} urn process is well-defined also for \emph{real} $X_{ni}$ and $\xi_{ij}$, with probabilities for the different replacements still given by \eqref{urn}, see \eg{} \cite[Remark 4.2]{SJ154}, \cite[Remark 1.11]{SJ169} and \cite{P}, and \cite{Jirina} for the related case of branching processes; the ``number of balls'' $X_{ni}$ may thus be any nonnegative real number. (This can be interpreted as the amount (mass) of colour $i$ in the urn, rather than the number of discrete balls.) We assume that, for every $n\ge0$, \begin{equation}\label{ten} \text{each } X_{ni}\ge0 \quad \text{and}\quad \sum_i a_i X_{ni}>0, \end{equation} so that \eqref{urn} really gives meaningful probabilities. The replacements $\xi_{ij}$ are thus random real numbers. We allow them to be negative, meaning that balls may be subtracted from the urn. However, we always assume that $X_0$ and the random vectors $\xi_i$ are such that the condition \eqref{ten} a.s\punkt{} holds for every $n\ge0$ (and thus the process does not stop due to lack of balls). An urn with such initial conditions and replacement rules is called \emph{tenable}. \begin{remark}\label{Rten} A sufficient condition for tenability, which often is assumed in other papers (sometimes with simple modifications), is that all $\xi_{ij}$ and $X_{0i}$ are integers with $\xi_{ij}\ge0$ for $j\neq i$ and $\xi_{ii}\ge -1$ (this means that we may remove the drawn ball but no other ball), and furthermore, for example, $\sum_ja_j\xi_{ij}\ge0$ a.s\punkt{} (meaning that the total activity never decreases); then the urn is tenable for any $X_0$ with non-zero activity. We shall \emph{not} assume this in the present paper. 
\end{remark} \begin{remark} \label{Rten2} In all applications that we know of, each $\xi_i$ is a discrete random vector, \ie{} it takes only a countable (usually a finite) number of different values. This is not necessary, however; the results below hold also if, e.g., some $\xi_{ij}$ is continuous. \end{remark} We assume, for simplicity, that the initial composition $X_0$ is deterministic. \begin{remark} The results are easily extended to the case of random $X_0$ by conditioning on $X_0$, but that may require some extra conditions or minor modifications in some of the statements, which we leave to the reader. \end{remark} The P\'olya{} urn is \emph{balanced} if \begin{equation}\label{balanced} \sum_j a_j\xi_{ij} = b>0 \end{equation} (a.s\punkt) for some constant $b$ and every $i$. In other words, the added activity after each draw is fixed (non-random and not depending on the colour of the drawn ball). This implies that the denominator in \eqref{urn} (which is the total activity in the urn) is deterministic for each $n$, see \eqref{wn1}. This is a significant simplification, and is assumed in many papers on P\'olya{} urns. (One exception is \cite{SJ154}, which is based on embedding in a continuous time branching process and stopping at a suitable stopping time, following \cite{AthreyaKarlin}; this method does not seem to easily give information on moments and is not used in the present paper.) \begin{remark} We exclude the case $b=0$, which is quite different; a typical example is a Markov chain, regarded as an urn always containing a single ball. \end{remark} We shall assume that the urn is tenable and balanced; this is sometimes repeated for emphasis. 
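The key consequence of the balance condition \eqref{balanced} can be checked numerically: whatever the random draws are, the total activity after $n$ draws equals $a\cdot X_0+nb$ exactly. The sketch below is our own illustration (a Friedman-type urn with deterministic replacements, not an example taken from the paper).

```python
import random

def simulate_balanced(X0, a, xi, n, seed):
    """n draws of an urn with deterministic replacement vectors xi[i];
    the urn is balanced when a . xi[i] = b for every colour i."""
    rng = random.Random(seed)
    X = list(X0)
    for _ in range(n):
        w = [ai * xj for ai, xj in zip(a, X)]      # activities of the colours
        i = rng.choices(range(len(X)), weights=w)[0]
        X = [xj + d for xj, d in zip(X, xi[i])]
    return X

# A Friedman urn: xi_1 = (1, 2), xi_2 = (2, 1), a = (1, 1); balanced with b = 3.
a, xi, X0, n = [1, 1], [[1, 2], [2, 1]], [1, 1], 200
for seed in range(5):
    X = simulate_balanced(X0, a, xi, n, seed)
    # total activity a . X_n = w_0 + n b, whatever the random draws were
    assert sum(ai * xj for ai, xj in zip(a, X)) == 2 + 3 * n
```

This determinism of the denominator in \eqref{urn} is exactly the simplification exploited throughout the paper.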
We also assume \eqref{gl1b} below; as discussed in \refR{Rgl1b} and \refApp{Aten}, this is a very weak assumption needed to exclude some trivial cases allowed by our definition of tenable; by \refL{Lapp} it is sufficient to assume that every colour in the specification actually may occur in the urn, which always can be achieved by eliminating any redundant colours. Finally, in order to obtain moment results, we assume that the replacements have second moments: \begin{equation}\label{Exi2} \E \xi_{ij}^2<\infty,\qquad i,j=1,\dots,q. \end{equation} It follows that every $X_n$ has second moments, so the covariance matrix $\Var(X_n)$ is finite for each $n$. \begin{remark} In the tenable and balanced case, the assumption \eqref{Exi2} is almost redundant. First, although there might be negative values of $\xi_{ij}$, we assume that the urn is tenable. Hence, given any instance $(x_1,\dots,x_q)$ of the urn that may occur with positive probability as some $X_n$, we have $\xi_{ij}\ge -x_j$ a.s\punkt{} for every $i$ and $j$ such that $a_ix_i>0$. In particular, if every $a_i>0$, and every colour may appear in the urn, then each $\xi_{ij}$ is bounded below. Furthermore, still assuming $a_i>0$ for each $i$, this and \eqref{balanced} implies that each $\xi_{ij}$ also is bounded above; hence $\xi_{ij}$ is bounded and has moments of any order. \end{remark} \subsection{Notation} We regard all vectors as column vectors. We use standard notations for vectors and matrices (of sizes $q$ and $q\times q$, respectively), in particular ${}'$ for transpose, ${}^*$ for Hermitean conjugate and $\cdot$ for the standard scalar product; thus $u\cdot v=u'v$ for any vectors $u,v\in\bbR^q$. We let $\norm{\,}$ denote the standard Euclidean norm for vectors, and any convenient norm for matrices. Let $a:=(a_1,\dots,a_q)$ be the vector of activities. Thus, the balance condition \eqref{balanced} can be written $a\cdot\xi_i=b$. 
The \emph{intensity matrix} of the P\'olya{} urn is the $ q\times q $ matrix \begin{equation}\label{A} A:=(a_j\E\xi_{ji})_{i,j=1}^{q}. \end{equation} (Note that, for convenience and following \cite{SJ154}, we have defined $A$ so that the element $(A)_{ij}$ is a measure of the intensity of adding balls of colour $i$ coming from drawn balls of colour $j$; the transpose matrix $A'$ is often used in other papers.) The intensity matrix $A$, with its eigenvalues and eigenvectors, plays a central role in the asymptotic results. Let $\gs(A)$ (the \emph{spectrum} of $A$) be the set of eigenvalues of $A$. We shall use the Jordan decomposition of the matrix $A$ in the following form. There exists a decomposition of the {complex} space $\bbC^q$ as a direct sum $\bigoplus_\gl \el$ of generalized eigenspaces $\el$, such that $A-\gl I$ is a nilpotent operator on $\el$; here $\gl$ ranges over the set $\gs(A)$ of eigenvalues of $A$. ($I$ is the identity matrix of appropriate size.) In other words, there exist projections $\pl$, $\gl\in\gs(A)$, that commute with $A$ and satisfy \begin{gather} \sum_{\gl\in\gsa}\pl=I, \label{pl}\\ A\pl=\pl A=\gl\pl+\nl, \label{24b} \end{gather} where $\nl=\pl\nl=\nl\pl$ is nilpotent. Moreover, $\pl P_\mu=0$ when $\gl\neq\mu$. We let $\nul\ge0$ be the integer such that $\nl^{\nul}\neq0$ but $\nl^{\nul+1}=0$. (Equivalently, in the Jordan normal form of $A$, the largest Jordan block with $\gl$ on the diagonal has size $\nul+1$.) Hence $\nul=0$ if and only if $\nl=0$, and this happens for all $\gl$ if and only if $A$ is diagonalizable, \ie{} if and only if $A$ has a complete set of $q$ linearly independent eigenvectors. (In the sequel, $\gl$ will always denote an eigenvalue. We may for completeness define $\pl=\nl=0$ for every $\gl\notin\gsa$.)
The eigenvalues of $A$ are denoted $\gl_1,\dots,\gl_q$ (repeated according to their algebraic multiplicities); we assume that they are ordered with decreasing real parts: $\Re\gl_1\ge\Re\gl_2\ge\dots$, and furthermore, when the real parts are equal, in order of decreasing $\nu_j:=\nu_{\gl_j}$. In particular, if $\gl_1>\Re\gl_2$, then $\nu_j\le\nu_2$ for every eigenvalue $\gl_j$ with $\Re\gl_j=\Re\gl_2$. Recall that the urn is called \emph{small} if $\Re\gl_2\le\frac12\gl_1$ and \emph{large} if $\Re\gl_2>\frac12\gl_1$; the urn is \emph{strictly small} if $\Re\gl_2<\frac12\gl_1$. In the balanced case, by \eqref{A} and \eqref{balanced}, \begin{equation}\label{bala} a'A =\Bigpar{\sumiq a_i(A)_{ij}}_j =\Bigpar{\sumiq a_ia_j\E \xi_{ji}}_j =\bigpar{a_j\E (a\cdot \xi_{j})}_j =ba', \end{equation} \ie, $a'$ is a left eigenvector of $A$ with eigenvalue $b$. Thus $b\in\gsa$. We shall assume that, moreover, $b$ is the largest eigenvalue, \ie, \begin{equation}\label{gl1b} \gl_1=b. \end{equation} \begin{remark}\label{Rgl1b} In fact, \eqref{gl1b} is a very weak assumption. For example, if each $\xi_{ij}\ge0$, then $A$ is a matrix with non-negative elements, and since the eigenvector $a'$ is non-negative, \eqref{gl1b} is a consequence of the Perron--Frobenius theorem. The same holds (by considering $A+cI$ for a suitable $c>0$) under the assumption in \refR{Rten}. Under our, more general, definition of tenability, there are counterexamples, see \refE{Ebad}, but we show in \refApp{Aten} that they are so only in a trivial way, and that we may assume \eqref{gl1b} without real loss of generality. (Of course, the proof of \refL{Lapp}, which uses \refL{L1}, does not use the assumption \eqref{gl1b}.) \end{remark} We shall in our theorems furthermore assume that $\Re\gl_2<\gl_1$ (and often more), and thus that $\gl_1=b$ is a simple eigenvalue. There are thus corresponding left and right eigenvectors $u_1'$ and $v_1$ that are unique up to normalization. 
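As a concrete illustration of the small/large classification (our own, not an example from the paper): for the Friedman urn with replacements $\xi_1=(\alpha,\beta)$, $\xi_2=(\beta,\alpha)$ and activities $a=(1,1)$, the intensity matrix \eqref{A} is $A=\smatrixx{\alpha&\beta\\\beta&\alpha}$ with eigenvalues $\alpha+\beta$ and $\alpha-\beta$. The sketch below computes the spectrum of a $2\times2$ matrix from its characteristic polynomial and applies the dichotomy; it assumes a real spectrum.

```python
import math

def eig2(M):
    """Eigenvalues of a 2x2 real matrix with real spectrum, from the
    characteristic polynomial; returned in decreasing order."""
    (m11, m12), (m21, m22) = M
    tr, det = m11 + m22, m11 * m22 - m12 * m21
    d = math.sqrt(tr * tr / 4 - det)  # assumes real eigenvalues
    return tr / 2 + d, tr / 2 - d

def classify(l1, l2):
    """Small / strictly small / large, comparing Re(lambda_2) with lambda_1/2
    (both eigenvalues assumed real here)."""
    if l2 < l1 / 2:
        return "strictly small"
    if l2 == l1 / 2:
        return "small (not strictly)"
    return "large"

# Friedman urn: xi_1 = (alpha, beta), xi_2 = (beta, alpha), a = (1, 1),
# so A = [[alpha, beta], [beta, alpha]] with eigenvalues alpha +- beta.
alpha, beta = 2, 1
l1, l2 = eig2([[alpha, beta], [beta, alpha]])
assert (l1, l2) == (alpha + beta, alpha - beta)  # lambda_1 = b = 3
assert classify(l1, l2) == "strictly small"      # since 1 < 3/2
```

In this example the urn is small exactly when $\alpha\le3\beta$, and strictly small when $\alpha<3\beta$.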
By \eqref{bala}, we may choose $u_1=a$. Furthermore, we let $v_1$ be normalized by \begin{align} \label{normalized} u_1\cdot v_1=a\cdot v_1=1. \end{align} Then the projection $P_{\gl_1}$ is given by \begin{equation} \label{pl1} P_{\gl_1}=v_1u_1'. \end{equation} Consequently, in the balanced case, for any vector $v\in\bbR^q$, \begin{equation}\label{p1} P_{\gl_1}v=v_1u_1'v=v_1a'v=(a\cdot v)v_1. \end{equation} \begin{remark}\label{Rmulti} The dominant eigenvalue $\gl_1$ is simple, and $\Re\gl_2<\gl_1$ if, for example, the matrix $A$ is irreducible, but not in general. A simple counterexample is the original P\'olya{} urn, see \citet{EggPol} and \citet{Polya} (with $q=2$), where each ball is replaced together with $b$ balls of the same colour (and every $a_i=1$); then $A=bI$ and $\gl_1=\dots=\gl_q=b$. As is well-known, the asymptotic behaviour in this case is quite different; in particular, $X_n/n$ converges in distribution to a non-degenerate distribution and not to a constant, see \eg{} \cite{Polya} and \cite{JKotz}. \end{remark} Define also \begin{equation} \PIx:=\sumglx P_\gl = I-P_{\gl_1}. \end{equation} Furthermore, define \begin{equation}\label{B} B:=\sumiq a_i v_{1i}\E\bigpar{\xi_i\xi_i'} \end{equation} and, if the urn is \emph{strictly small}, noting that $\PIx$ commutes with $e^{sA}:=\sum_{k=0}^\infty (sA)^k/k!$, \begin{equation}\label{gSI} \gS_I:=\intoo \PIx e^{sA} B e^{sA'} \PIx' e^{-\gl_1 s} \dd s. \end{equation} The integral converges absolutely when the urn is strictly small, as can be seen from the proof of \refT{TV1}, or directly because $\norm{ \PIx e^{sA}}=O\bigpar{s^{\nu_2}e^{\Re\gl_2s}}$ for $s\ge1$, as is easily seen from \refL{LT}. (The integral is matrix-valued; the space of $q\times q$ matrices is a finite-dimensional space and the integral can be interpreted component-wise.) See also \refApp{SSnote}. Unspecified limits are as \ntoo.
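The identities \eqref{pl1} and \eqref{p1} can be checked on a toy example. The matrix below is our own choice (the Friedman intensity matrix with $\alpha=2$, $\beta=1$, where $\gl_1=3$, $u_1=a=(1,1)$ and $v_1=(\tfrac12,\tfrac12)$ satisfies \eqref{normalized}); the sketch verifies that $P_{\gl_1}=v_1u_1'$ is a projection with $AP_{\gl_1}=\gl_1P_{\gl_1}$ and $P_{\gl_1}v=(a\cdot v)v_1$.

```python
def matmul(A, B):
    """Product of two small matrices, given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Toy intensity matrix (Friedman example with alpha = 2, beta = 1):
# lambda_1 = 3, u_1 = a = (1, 1), and v_1 = (1/2, 1/2) has a . v_1 = 1.
A = [[2, 1], [1, 2]]
u1, v1 = [1, 1], [0.5, 0.5]
P = [[v1[i] * u1[j] for j in range(2)] for i in range(2)]    # P = v_1 u_1'

assert matmul(P, P) == P                                     # a projection
assert matmul(A, P) == [[3 * p for p in row] for row in P]   # A P = lambda_1 P
# P v = (a . v) v_1 for any v:
v = [4, 6]
Pv = [sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Pv == [10 * x for x in v1]                            # a . v = 10
```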
As usual, $a_n=O(b_n)$ means that $a_n/b_n$ is bounded; here $a_n$ may be vectors or matrices and $b_n$ may be complex numbers; we do not insist that $b_n$ is positive. \section{Main results}\label{Sresults} Our main results on asymptotics of mean and variance are the following. Proofs are given in \refS{Spf2}. \begin{theorem} \label{TE} If the P\'olya{} urn is tenable, balanced and $\Re\gl_2<\gl_1$, then, for $n\ge2$, \begin{equation}\label{te} \begin{split} \E X_n &= (n\gl_1 + a\cdot X_0)v_1 + O\bigpar{n^{\Re\gl_2/\gl_1}\log^{\nu_2} n} \\ &= n \gl_1 v_1 + O\bigpar{n^{\Re\gl_2/\gl_1}\log^{\nu_2} n+1} \\ &= n \gl_1 v_1 + o(n). \end{split} \end{equation} In particular, if the urn is strictly small, \ie{} $\Re\gl_2<\frac12\gl_1$, then \begin{equation}\label{tea} \E X_n = n\gl_1 v_1 + o\bigpar{n\qq}. \end{equation} \end{theorem} \begin{theorem} \label{TV1} If the P\'olya{} urn is tenable, balanced and strictly small, \ie{} $\Re\gl_2<\frac12\gl_1$, then \begin{equation}\label{tv1} n\qw \Var(X_n) \to \gS:= \gl_1\gS_I. \end{equation} \end{theorem} \begin{theorem} \label{TV2} If the P\'olya{} urn is tenable, balanced and small but not strictly small, \ie{} $\Re\gl_2=\frac12\gl_1$, then \begin{equation*} (n\log^{2\nu_2+1}n)\qw \Var(X_n) \to \frac{\gl_1^{-2\nu_2}}{(2\nu_2+1)(\nu_2!)^2} \sum_{\Re\gl=\frac12\gl_1} N_\gl^{\nu_2}P_\gl {B} P_\gl^*\xpar{N_\gl^*}^{\nu_2} . \end{equation*} \end{theorem} \begin{remark}\label{RTV} Under some additional assumptions (irreducibility of $A$, at least if we ignore colours with activity 0, and, for example, the condition in \refR{Rten}), \cite[Theorems 3.22--3.23 and Lemma 5.4]{SJ154} show that if the urn is small, then $X_n$ is asymptotically normal, with the asymptotic covariance matrix equal to the limit in Theorem \ref{TV1} ($\Re\gl_2<\frac12\gl_1$) or \refT{TV2} ($\Re\gl_2=\frac12\gl_1$). For example, in the strictly small case, $n\qqw(X_n-n\gl_1v_1)\dto N(0,\gS)$.
Hence (under these hypotheses), Theorems \ref{TE}--\ref{TV2} can be summarized by saying that the mean and (co)variances converge as expected in these central limit theorems. \end{remark} We also obtain the following version of the law of large numbers for P\'olya{} urns. Convergence a.s\punkt{} has been shown before under various assumptions (including the unbalanced case), see \cite[Section V.9.3]{AN}, \cite{AldousFP}, \cite[Theorem 3.21]{SJ154}, and is included here for completeness and because our conditions are somewhat more general; the $L^2$ result seems to be new. \begin{theorem}\label{TL2} If the P\'olya{} urn is tenable, balanced and $\Re\gl_2<\gl_1$, then, as \ntoo, $X_n/n\to \gl_1v_1$ a.s\punkt{} and in $L^2$. \end{theorem} The asymptotic covariance matrix $\gS$ in \eqref{tv1} is always singular, since, by \eqref{gSI}, $\gS=\PIx\gS\PIx'$ and thus $u'\gS u=u'\PIx\gS\PIx'u=0$ when $\PIx'u=0$, which happens when $P_{\gl_1}' u=u$, \ie, when $u$ is a multiple of the left eigenvector $u_1=a$. In the balanced case, this is easy to see: $a\cdot X_n$ is deterministic and thus $\Var(a\cdot X_n)=0$; hence $a'\gS a=0$ since for any vector $u$, by \eqref{tv1}, \begin{equation}\label{agnus} n\qw \Var(u\cdot X_n)=n\qw u'\Var(X_n) u \to u'\gS u. \end{equation} With an extra assumption, this is the only case when the asymptotic variance $u'\gS u$ vanishes (cf.\ \cite[Remark 3.19]{SJ154}). Let $\tA$ be the submatrix of $A$ obtained by deleting all rows and columns corresponding to colours with activity $a_i=0$. \begin{theorem} \label{TS} Suppose that the P\'olya{} urn is tenable, balanced and strictly small, \ie{} $\Re\gl_2<\frac12\gl_1$, and, furthermore, that $\tA$ is irreducible. If $u\in \bbR^q$, then $u'\gS u =0$ if and only if for every $n\ge0$, $\Var(u\cdot X_n)=0$, \ie, $u\cdot X_n$ is deterministic. \end{theorem} \begin{remark} If $\tA$ is reducible, then, on the contrary, $\gS$ is typically more singular. 
As an extreme example, consider a ``triangular'' urn with two colours, activities $a_i=1$ and deterministic replacements $\xi_1=(1,0)$, $\xi_2=(1-\gl,\gl)$ for a real $\gl\in(0,1)$. (Starting with one ball of each colour, say.) Then $A= \smatrixx{1&1-\gl\\0&\gl} $. The eigenvalues are $1$ and $\gl$, so the urn is strictly small if $\gl<\frac12$. However, $v_1=(1,0)$, and thus \eqref{B} yields $B=\xi_1\xi_1'=v_1v_1'$, and thus by \eqref{PBP} (or a direct calculation) $\PIx B =0$, and thus $\gS=\gS_I=0$. Theorems \ref{TV1} and \ref{TV2} are still valid, but say only that the limit is 0. In fact, in this example, the proper normalization is $n^\gl$: it follows from \cite[Theorem 1.3(v)]{SJ169} that $n^{-\gl}X_{n2}=n^{-\gl}(n+2-X_{n1})\dto W$ for some non-degenerate (and non-normal) random variable $W$. Moreover, calculations similar to those in \refS{Spf2} show that $\E X_{n2}\sim c_1 n^\gl$ and $\Var X_{n2}\sim c_2 n^{2\gl}$ for some $c_1,c_2>0$. \end{remark} \begin{remark} It is easily seen that $\tA$ is irreducible if and only if $v_{1i}>0$ for every $i$ with $a_i>0$. \end{remark} \section{Proofs, first steps}\label{Spf1} Let $I_n$ be the colour of the $n$-th drawn ball, and let \begin{equation}\label{gDX} \gD X_n:=X_{n+1}-X_n \end{equation} and \begin{equation}\label{wn} w_n:=a\cdot X_n, \end{equation} the total weight (activity) of the urn. Furthermore, let $\cF_n$ be the $\gs$-field generated by $X_1,\dots,X_n$. Then, by the definition of the urn, \begin{equation}\label{pin} \P\bigpar{I_{n+1}=j\mid \cF_n} = \frac{a_jX_{nj}}{w_n} \end{equation} and, consequently, recalling \eqref{A}, \begin{equation}\label{mia} \begin{split} \E\bigpar{\gD X_n\mid \cF_n} &=\sumjq \P\bigpar{I_{n+1}=j\mid \cF_n}\E\xi_j =\frac{1}{w_n}\sumjq a_j X_{nj}\E\xi_j \\& =\frac{1}{w_n}\Bigpar{\sumjq (A)_{ij} X_{nj}}_i =\frac{1}{w_n} A X_{n}. \end{split} \end{equation} Define \begin{equation}\label{Yn} Y_n:=\gD X_{n-1} - \E\bigpar{\gD X_{n-1}\mid \cF_{n-1}}. 
\end{equation} Then, $Y_n$ is $\cF_n$-measurable and, obviously, \begin{equation} \label{eyn} \E\bigpar{Y_n\mid\cF_{n-1}}=0 \end{equation} and, by \eqref{gDX}, \eqref{Yn} and \eqref{mia}, \begin{equation} X_{n+1}=X_n+Y_{n+1}+w_n\qw A X_n =\bigpar{I+w_n\qw A}X_n+Y_{n+1}. \end{equation} Consequently, by induction, for any $n\ge0$, \begin{equation}\label{tia} X_n=\prod_{k=0}^{n-1}\bigpar{I+w_k\qw A}X_0 +\sum_{\ell=1}^{n}\prod_{k=\ell}^{n-1}\bigpar{I+w_k\qw A}Y_\ell. \end{equation} We now use the assumption that the urn is balanced, so $a\cdot\gD X_n=b$ and thus by \eqref{gDX}--\eqref{wn}, $w_n$ is deterministic with \begin{equation}\label{wn1} w_n=w_0+nb, \end{equation} where the initial weight $w_0=a\cdot X_0$. We define the matrix products \begin{equation}\label{qia} F_{i,j}:=\prod_{i\le k<j}\bigpar{I+w_k\qw A}, \qquad 0\le i\le j, \end{equation} and write \eqref{tia} as \begin{equation} \label{kia} X_n=F_{0,n} X_0+\sum_{\ell=1}^{n}F_{\ell,n} Y_{\ell}. \end{equation} As said in the introduction, we can regard the term $F_{\ell,n}Y_\ell$ as the real effect on $X_n$ of the $\ell$-th draw, including the expected later indirect effects. Taking the expectation we find, since $\E Y_\ell=0$ by \eqref{eyn}, and the $F_{i,j}$ and $X_0$ are nonrandom, \begin{equation} \label{EX} \E X_n=F_{0,n} X_0. \end{equation} Hence, \eqref{kia} can also be written \begin{equation}\label{gw} X_n-\E X_n =\sum_{\ell=1}^{n}F_{\ell,n} Y_\ell. \end{equation} Consequently, the covariance matrix can be computed as \begin{equation}\label{dia} \begin{split} \Var(X_n) &:= \E\bigpar{(X_n-\E X_n)(X_n-\E X_n)'} \\&\phantom: =\E\sum_{i=1}^{n}\sum_{j=1}^{n} \bigpar{F_{i,n} Y_i}\bigpar{F_{j,n} Y_j}' \\&\phantom: =\sum_{i=1}^{n}\sum_{j=1}^{n} F_{i,n} \E\bigpar{Y_i Y_j'} F_{j,n}' . \end{split} \end{equation} However, if $i>j$, then $\E\bigpar{Y_i\mid \cF_j}=0$ by \eqref{eyn}, and since $Y_j$ is $\cF_{j}$-measurable, we have \begin{equation} \E\bigpar{Y_iY_j'} = \E\bigpar{\E (Y_i\mid \cF_j)Y_j'}=0. 
\end{equation} Taking the transpose we see that $\E\bigpar{Y_iY_j'}=0$ also when $i<j$. Hence, all nondiagonal terms vanish in \eqref{dia}, and we find \begin{equation}\label{varx} \begin{split} \Var(X_n) =\sum_{i=1}^{n} F_{i,n} \E\bigpar{Y_i Y_i'} F_{i,n}' . \end{split} \end{equation} The formulas \eqref{EX} and \eqref{varx} form the basis of our proofs, and it remains mainly to analyse the matrix products $F_{i,j}$. \begin{remark} The formula \eqref{tia} holds for general P\'olya{} urns, also when they are not balanced. However, in the general case, the total weights $w_k$ are random, and they are dependent on each other and on the $Y_\ell$, and it seems difficult to draw any useful consequences from \eqref{tia}; certainly the arguments above fail because the $F_{i,j}$ would be random. \end{remark} \section{Estimates of matrix functions}\label{Smatrix} For notational convenience, we make from now on the simplifying assumption $b=1$. (For emphasis and clarity, we repeat this assumption in some statements; it will always be in force, whether stated or not.) This is no loss of generality; we can divide all activities by $b$ and let the new activities be $\hat a:=a/b$; this defines the same random evolution of the urn and we have $\hat a\cdot \xi_{i}=b/b=1$ for every $i$, so the modified urn is also balanced, with balance $\hat b=1$. Furthermore, the intensity matrix $A$ in \eqref{A} is divided by $b$, so all eigenvalues $\gl_i$ are divided by $b$, but their ratios remain the same; the projections $P_{\gl}$ remain the same while the nilpotent parts $N_\gl$ are divided by $b$, and in both cases the indices are shifted; also, with the normalization \eqref{normalized}, $u_1=a$ is divided by $b$ while $v_1$ is multiplied by $b$. It is now easy to check that $\gl_1v_1$, $B$ and $\gl_1\gS_I$ are invariant, and thus the theorems all follow from the special case $b=1$. By the assumption \eqref{gl1b}, see \refR{Rgl1b} and \refApp{Aten}, we thus have $\gl_1=1$.
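Before estimating the matrix products $F_{i,j}$ of \eqref{qia}, the pathwise decomposition \eqref{kia}, with the deterministic weights \eqref{wn1}, can be sanity-checked numerically. The following sketch is our own illustration, taking $b=1$ as in this section; the example urn (a draw of either colour adds one ball of the other colour) is our choice, and the identity is verified exactly, up to rounding, along one simulated trajectory.

```python
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# Balanced urn with b = 1 and a = (1, 1): drawing either colour adds one
# ball of the other colour, so A = [[0, 1], [1, 0]] and w_k = w_0 + k.
A = [[0, 1], [1, 0]]
xi = [[0, 1], [1, 0]]
X0, n, w0 = [1.0, 1.0], 30, 2.0
rng = random.Random(1)

X, Ys = list(X0), []
for k in range(n):
    i = rng.choices([0, 1], weights=X)[0]
    dX = xi[i]
    AX = matvec(A, X)
    # Y_{k+1} = Delta X_k - E(Delta X_k | F_k) = Delta X_k - A X_k / w_k
    Ys.append([dX[j] - AX[j] / (w0 + k) for j in range(2)])
    X = [X[j] + dX[j] for j in range(2)]

def F(i, j):
    """F_{i,j} = prod_{i <= k < j} (I + A / w_k); the factors commute."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(i, j):
        M = matmul([[(1.0 if r == c else 0.0) + A[r][c] / (w0 + k)
                     for c in range(2)] for r in range(2)], M)
    return M

# Pathwise decomposition: X_n = F_{0,n} X_0 + sum_{l=1}^n F_{l,n} Y_l
rhs = matvec(F(0, n), X0)
for l in range(1, n + 1):
    Fv = matvec(F(l, n), Ys[l - 1])
    rhs = [rhs[j] + Fv[j] for j in range(2)]
assert all(abs(rhs[j] - X[j]) < 1e-8 for j in range(2))
```

The identity holds for every realization, by the telescoping argument leading to \eqref{tia}; randomness enters only through the $Y_\ell$.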
Note that \eqref{wn1} now becomes \begin{equation}\label{wn2} w_n=n+w_0. \end{equation} Note also that \eqref{qia} can be written $F_{i,j}=f_{i,j}(A)$, where $f_{i,j}$ is the polynomial \begin{equation}\label{qib} \begin{split} f_{i,j}(z)&:= \prod_{i\le k<j}\bigpar{1+w_k\qw z} =\prod_{i\le k<j}\frac{w_k+ z}{w_k} =\prod_{i\le k<j}\frac{k+w_0+ z}{k+w_0} \\&\phantom: = \frac{\gG(j+w_0+z)/\gG(i+w_0+z)} {\gG(j+w_0)/\gG(i+w_0)} = \frac{\gG(j+w_0+z)}{\gG(j+w_0)} \cdot \frac{\gG(i+w_0)}{\gG(i+w_0+z)} . \end{split} \end{equation} Recall that by the functional calculus in spectral theory, see \eg{} \cite[Chapter VII.1--3]{DunfordS}, we can define $f(A)$ not only for polynomials $f(z)$ but for any function $f(z)$ that is analytic in a neighbourhood of the spectrum $\gs(A)$. Furthermore, if $K$ is a compact set that contains $\gs(A)$ in its interior (for example a sufficiently large disc), then there exists a constant $C$ (depending on $A$ and $K$) such that for every $f$ that is analytic in a neighbourhood of $K$, \begin{equation}\label{nC} \norm{f(A)}\le C\sup_{z\in K} |f(z)|. \end{equation} We shall use the functional calculus mainly for polynomials and the entire functions $z\mapsto t^z=e^{(\log t)z}$ for fixed $t>0$; in these cases, $f(A)$ can be defined by a Taylor series expansion as we did before \eqref{gSI}. Note also that the general theory applies to operators in a Banach space; we only need the simpler finite-dimensional case discussed in \cite[Chapter VII.1]{DunfordS}. We shall use the following formula for $f(A)$, where $f\mm$ denotes the $m$-th derivative of $f$. (The formula can be seen as a Taylor expansion, see the proof.) \begin{lemma} \label{LT} For any entire function $f(\gl)$, and any $\gl\in\gs(A)$, \begin{equation} \label{lt} f(A)P_\gl = \sum_{m=0}^{\nu_\gl}\frac{1}{m!} f\mm(\gl)N_\gl^mP_\gl.
\end{equation} \end{lemma} \begin{proof} This is a standard formula in the finite-dimensional case, see \cite[Theorem VII.1.8]{DunfordS}, but we give for completeness a simple (and perhaps informative) proof when $f$ is a polynomial (which is the only case that we use, and furthermore implies the general case by \cite[Theorem VII.1.5(d)]{DunfordS}). We then have the Taylor expansion $f(\gl+z)=\sum_{m=0}^\infty \wmm f\mm(\gl)z^m$, which can be seen as an algebraic identity for polynomials in $z$ (the sum is really finite since $f\mm=0$ for large $m$), and thus \begin{equation} f(A)P_\gl = f(\gl I+N_\gl)P_\gl =\sum_{m=0}^\infty \wmm f\mm(\gl)N_\gl^m P_\gl, \end{equation} where $N_\gl^m=0$ when $m>\nu_\gl$. \end{proof} Our strategy is to first show estimates for the polynomials $f_{i,j}(z)$ in \eqref{qib} and then use these together with \eqref{nC} and \eqref{lt} to show the estimates for $F_{i,j}=f_{i,j}(A)$ that we need. \begin{lemma} \label{L0} \begin{thmenumerate} \item For every fixed $i$, as $j\to\infty$, \begin{equation}\label{l0a} f_{i,j}(z)=j^z \frac{\gG(i+w_0)}{\gG(i+w_0+z)}\bigpar{1+o(1)}, \end{equation} \uniz. \item As $i,j\to\infty$ with $i\le j$, \begin{equation}\label{l0b} f_{i,j}(z)=j^z i^{-z}\bigpar{1+o(1)}, \end{equation} \uniz. \end{thmenumerate} \end{lemma} \begin{proof} Both parts follow from \eqref{qib} and the fact that \begin{equation}\label{gg} \frac{\gG(x+z)}{\gG(x)}=x^z\etto, \end{equation} uniformly for $z$ in a compact set, as $x\to\infty$ (with $x$ real, say), which is an easy and well-known consequence of Stirling's formula, see \cite[5.11.12]{NIST}. (Note that $\gG(i+w_0)/\gG(i+w_0+z)$ is an entire function for any $i\ge0$, since $w_0>0$. $\gG(j+w_0+z)/\gG(j+w_0)$ has poles, but for $z$ in a fixed compact set, this function is analytic when $j$ is large enough.) \end{proof} For the derivatives $f_{i,j}\mm(z)$ there are corresponding estimates. \begin{lemma} \label{L0m} Let $m\ge0$. 
\begin{romenumerate} \item \label{L0ma} For every fixed $i\ge0$, as \jtoo, \begin{equation}\label{l0ma} f\mm_{i,j}(z) = {j}^z(\log j)^m \frac{\gG(i+w_0)}{\gG(i+w_0+z)} + o\bigpar{{j}^z\log^m j}, \end{equation} \uniz. \item \label{L0mb} As $i,j\to\infty$, \begin{equation}\label{l0mb} f\mm_{i,j}(z)= \Bigparfrac{j}{i}^z\Bigpar{\log\frac{j}{i}}^m + o\lrpar{\Bigparfrac{j}{i}^z\Bigpar{1+\log\frac{j}{i}}^m}, \end{equation} \uniz. \end{romenumerate} \begin{proof} \pfitemref{L0ma} Let $g_j(z)= j^{-z}f_{i,j}(z)$. Then, by \eqref{l0a}, \begin{equation}\label{jeeves} g_j(z)= \frac{\gG(i+w_0)}{\gG(i+w_0+z)}\bigpar{1+o(1)} =O(1) \qquad\text{as \jtoo}, \end{equation} uniformly in each compact set, and thus by Cauchy's estimates, for any $\ell\ge1$, \begin{equation}\label{ask} g_j\qll(z)=O(1) \qquad\text{as \jtoo}, \end{equation} uniformly in each compact set. By Leibniz' rule, \begin{equation} \begin{split} f_{i,j}\mm(z) &= \frac{\ddx^m}{\ddx z^m}\bigpar{j^z g_j(z)} =\sum_{\ell=0}^m \binom{m}{\ell} \frac{\ddx^\ell}{\ddx z^\ell}j^z \cdot g_j^{(m-\ell)}(z) \\& =\sum_{\ell=0}^m \binom{m}{\ell} (\log j)^\ell j^z g_j^{(m-\ell)}(z) \end{split} \end{equation} and \eqref{l0ma} follows by \eqref{jeeves}--\eqref{ask}. \pfitemref{L0mb} Similarly, let (for $1\le i\le j$) $h_{i,j}(z)= (i/j)^{z}f_{i,j}(z)$. Then, by \eqref{l0b}, \begin{equation}\label{embla} h_{i,j}(z)= 1+o(1) \qquad\text{as $i,j\too$}, \end{equation} uniformly in each compact set, and thus by Cauchy's estimates, for any $\ell\ge1$, \begin{equation}\label{hector} h_{i,j}\qll(z)= \frac{\ddx^\ell}{\ddx z^\ell}\bigpar{h_{i,j}(z)-1} =o(1) \qquad\text{as $i,j\too$}, \end{equation} uniformly in each compact set.
By Leibniz' rule, \begin{equation} \begin{split} f_{i,j}\mm(z) &= \frac{\ddx^m}{\ddx z^m}\bigpar{(j/i)^z h_{i,j}(z)} =\sum_{\ell=0}^m \binom{m}{\ell} \frac{\ddx^\ell}{\ddx z^\ell}(j/i)^z \cdot h_{i,j}^{(m-\ell)}(z) \\& =\sum_{\ell=0}^m \binom{m}{\ell} \bigpar{\log (j/i)}^\ell (j/i)^z h_{i,j}^{(m-\ell)}(z) \end{split} \end{equation} and \eqref{l0mb} follows by \eqref{embla}--\eqref{hector}. \end{proof} We now apply these estimates to $F_{i,j}$, noting that by \refL{LT}, \begin{equation}\label{ltij} \begin{split} F_{i,j}P_\gl=f_{i,j}(A)P_\gl = \sum_{m=0}^{\nu_\gl}\frac{1}{m!} f_{i,j}\mm(\gl)N_\gl^mP_\gl. \end{split} \end{equation} \begin{lemma}\label{L1} If $b=1$, then, for $n\ge2$ and $\gl\in\gsa$, \begin{equation}\label{l1ax} {F_{0,n} P_\gl} =n^{\gl}\log^{\nu_\gl}n \frac{\gG(w_0)}{\nul!\,\gG(w_0+\gl)}\nl^{\nul}\pl + o\bigpar{n^{\Re\gl}\log^{\nu_\gl}n}. \end{equation} \end{lemma} \begin{proof} By \eqref{ltij} and \eqref{l0ma}, \begin{equation} \begin{split} F_{0,n}P_\gl &=\sum_{m=0}^{\nu_\gl}\frac{1}{m!} f_{0,n}\mm(\gl)N_\gl^mP_\gl =\frac{1}{\nu_{\gl}!} f_{0,n}^{(\nul)}(\gl)N_\gl^{\nul} P_\gl +\sum_{m=0}^{\nu_\gl-1} O\bigpar{n^\gl \log^m n}, \end{split} \end{equation} which yields \eqref{l1ax} by another application of \eqref{l0ma}. \end{proof} \begin{lemma}\label{L2} If $b=1$, then, for $1\le i\le j$ and $\gl\in\gsa$, \begin{equation}\label{l2} F_{i,j} P_\gl = O\bigpar{(j/i)^{\Re\gl}(1+\log(j/i))^{\nu_\gl}}. \end{equation} More precisely, for any $\nu\ge\nu_\gl$, \begin{multline}\label{tedeum} F_{i,j} P_\gl = \frac{1}{\nu!}\parfrac{j}{i}^\gl \log^{\nu}\parfrac{j}{i} {N_\gl^{\nu}P_\gl } + o\lrpar{\Bigparfrac{j}{i}^{\Re\gl} \log^{\nu}\Bigparfrac{j}{i}} \\ + O\lrpar{ \Bigparfrac{j}{i}^{\Re\gl}\Bigpar{1+\log^{\nu-1}\Bigparfrac{j}{i}}}. \end{multline} \end{lemma} \begin{proof} This is similar to the proof of \refL{L1}. First, \eqref{l2} follows directly from \eqref{ltij} and \eqref{l0mb}.
For \eqref{tedeum}, note that the summation in \eqref{ltij} may be extended to $m\le\nu$, since $N_\gl^m=0$ when $m>\nu_\gl$. Then use \eqref{l0mb} for each term; the term $m=\nu$ gives the main term, and the terms $m<\nu$ are absorbed in the error terms. \end{proof} \begin{lemma} \label{L1b} If\/ $\Re\gl_2<\gl_1=b=1$, then for $0\le i\le j$, \begin{equation}\label{l1bx} \begin{split} F_{i,j}P_{\gl_1}= f_{i,j}(\gl_1)P_{\gl_1} =\frac{j+w_0}{i+w_0}P_{\gl_1}. \end{split} \end{equation} \end{lemma} \begin{proof} Since $\gl_1$ thus is assumed to be a simple eigenvalue, $\nu_{\gl_1}=0$. (Alternatively, see \refL{Lapp}.) Hence, \eqref{ltij} yields $F_{i,j}\pli=f_{i,j}(\gl_1)\pli$. Furthermore, \eqref{qib} yields \begin{equation}\label{qic} f_{i,j}(\gl_1)=f_{i,j}(1)=\frac{j+w_0}{i+w_0}, \end{equation} and \eqref{l1bx} follows. \end{proof} \begin{lemma}\label{L3} For any fixed $x\in(0,1]$, as \ntoo, \begin{equation} F_{\ceil{xn},n} \to x^{-A}. \end{equation} \end{lemma} \begin{proof} Let $K$ be a compact set containing $\gs(A)$ in its interior. As \ntoo, by \eqref{l0b}, \begin{equation} f_{\ceil{xn},n}(z) =\Bigparfrac{n}{\ceil{xn}}^z \bigpar{1+o(1)} =x^{-z}\etto=x^{-z}+o(1), \end{equation} uniformly for $z\in K$. Consequently, $f_{\ceil{xn},n}(z)-x^{-z}\to0$ uniformly on $K$, and thus $F_{\ceil{xn},n}-x^{-A}\to0$ by \eqref{nC}. \end{proof} \begin{lemma} \label{LB} There exist $i_0$ and $C$ such that if $i_0\le i\le j\le 2i$, then $\norm{F_{i,j}\qw}\le C$. \end{lemma} \begin{proof} Let again $K$ be a compact set containing $\gs(A)$ in its interior. By \eqref{l0b}, we may choose $i_0$ such that if $i_0\le i\le j$, then $|f_{i,j}(z)|\ge \frac12 |(j/i)^z|$ on $K$. If furthermore $i\le j\le 2i$, this implies $|f_{i,j}(z)|\ge c$ on $K$, for some $c>0$, and thus $|f_{i,j}\qw(z)|\le c\qw$ on $K$. The result follows by \eqref{nC}. \end{proof} \section{Completions of the proofs}\label{Spf2} \begin{proof}[Proof of \refT{TE}] By \eqref{EX} and \eqref{pl}, \begin{equation}\label{ele} \E X_n = \sumgla F_{0,n} P_\gl X_0.
\end{equation} For each eigenvalue $\gl\neq\gl_1$, \refL{L1} shows that \begin{equation}\label{gemini} F_{0,n} P_\gl X_0= O\bigpar{n^{\Re\gl}\log^{\nu_\gl}n} = O\bigpar{n^{\Re\gl_2}\log^{\nu_2}n}. \end{equation} Furthermore, by \eqref{p1}, \begin{equation}\label{matt} P_{\gl_1}X_0=(a\cdot X_0)v_1 = w_0v_1, \end{equation} and it follows from \eqref{l1bx} that \begin{equation}\label{win} \begin{split} F_{0,n}P_{\gl_1}X_0 = \frac{n+w_0}{w_0}P_{\gl_1}X_0 = \frac{n+w_0}{w_0} w_0 v_1 =(n+w_0) v_1. \end{split} \end{equation} The result \eqref{te} follows (when $\gl_1=1$) from \eqref{ele}, \eqref{gemini} and \eqref{win}. \end{proof} \begin{lemma}\label{LY0} For every $n$, $ P_{\gl_1}Y_n=0$. \end{lemma} \begin{proof} Since the urn is balanced, $a\cdot \gD X_n=b$ is nonrandom, and thus, by \eqref{Yn}, \begin{equation} a\cdot Y_n:=a\cdot\gD X_{n-1} - \E\bigpar{a\cdot\gD X_{n-1}\mid \cF_{n-1}} =b-b=0. \end{equation} The result follows by \eqref{p1}. \end{proof} Using \eqref{pl}, we can rewrite \eqref{varx} as \begin{equation}\label{varxx} \begin{split} \Var(X_n) =\sumgl\summu \sum_{i=1}^{n} F_{i,n}P_\gl \E\bigpar{Y_i Y_i'}P_\mu' F_{i,n}'. \end{split} \end{equation} For convenience, we define \begin{equation}\label{tin} \tin:=F_{i,n}P_\gl \E\bigpar{Y_i Y_i'}P_\mu' F_{i,n}'. \end{equation} Note that \refL{LY0} implies $P_{\gl_1}\E(Y_iY_i')=\E(P_{\gl_1}Y_iY_i')=0$ and thus also, by taking the transpose, $\E(Y_iY_i')P_{\gl_1}=0$. Hence $\tin=0$ when $\gl=\gl_1$ or $\mu=\gl_1$, so these terms can be dropped and \eqref{varxx} can be written \begin{equation}\label{varxxx} \Var(X_n) =\sumglx\summux \sum_{i=1}^{n} \tin. \end{equation} We begin with a simple estimate of this sum. \begin{lemma}\label{LV} If $\gl_1=1$, then, for $n\ge2$, \begin{equation}\label{lv} \Var X_n = \begin{cases} O\bigpar{n}, & \Re\gl_2<\frac12,\\ O\bigpar{n\log^{2\nu_2+1}n }, & \Re\gl_2=\frac12,\\ O\bigpar{n^{2\Re\gl_2}\log^{2\nu_2}n }, & \Re\gl_2>\frac12. 
\end{cases} \end{equation} In particular, if $\gl_2<\gl_1=1$, then \begin{equation}\label{lva} \Var(X_n) = o\bigpar{n^2}. \end{equation} \end{lemma} \begin{proof} It follows from \eqref{Exi2} that $\E (Y_nY_n')=O(1)$. By combining this and \refL{L2}, we see that if $\gl$ and $\mu$ are two eigenvalues, then, for $1\le i\le n$, \begin{equation}\label{jul1} \tin= F_{i,n}P_\gl \E\bigpar{Y_i Y_i'}(F_{i,n}P_\mu)' =O\bigpar{(n/i)^{\Re\gl+\Re\mu}(1+\log (n/i))^{\nu_\gl+\nu_\mu}}. \end{equation} If $\Re\gl+\Re\mu\ge1$, we note that this implies \begin{equation}\label{jul2} \tin =O\bigpar{(n/i)^{\Re\gl+\Re\mu}\log^{\nu_\gl+\nu_\mu}n} \end{equation} while if $\Re\gl+\Re\mu<1$, we choose $\ga$ with $\Re\gl+\Re\mu<\ga<1$ and note that \eqref{jul1} implies \begin{equation}\label{jul3} \tin =O\bigpar{(n/i)^{\ga}}. \end{equation} By summing over $i$ we obtain from \eqref{jul2} and \eqref{jul3}, \begin{equation}\label{jul4} \begin{split} \sum_{i=1}^n \tin = \begin{cases} O\bigpar{n}, & \Re\gl+\Re\mu<1,\\ O\bigpar{n\log^{\nu_\gl+\nu_\mu+1}n }, & \Re\gl+\Re\mu=1,\\ O\bigpar{n^{\Re\gl+\Re\mu}\log^{\nu_\gl+\nu_\mu}n }, & \Re\gl+\Re\mu>1. \end{cases} \end{split} \end{equation} The result \eqref{lv} follows from \eqref{varxxx} by summing \eqref{jul4} over the finitely many $\gl,\mu\in\gsax$ and noting that our estimates are largest for $\gl=\mu=\gl_2$. The simpler estimate \eqref{lva} is an immediate consequence. \end{proof} \begin{lemma}\label{LYlim} If\/ $\Re\gl_2<\gl_1=1$, then, as \ntoo, \begin{equation}\label{lylim} \E\bigpar{Y_nY_n'}\to B-v_1v_1'. \end{equation} Hence, for any eigenvalue $\gl\neq\gl_1$, \begin{equation}\label{plylim} P_\gl\E\bigpar{Y_nY_n'}\to P_\gl B. \end{equation} \end{lemma} \begin{proof} By \eqref{Yn} and \eqref{mia}, $Y_{n+1}=\gD X_n-w_n\qw AX_n$, with $\E (Y_{n+1}\mid\cF_n)=0$ by \eqref{eyn}. 
Hence, \begin{equation}\label{ja} \begin{split} \E\bigpar{Y_{n+1}Y_{n+1}'\mid \cF_n} = \E\bigpar{\gD X_n(\gD X_n)'\mid \cF_n} -w_n\qww A X_n(AX_n)' \end{split} \end{equation} and thus \begin{equation}\label{jb} \begin{split} \E\bigpar{Y_{n+1}Y_{n+1}'} = \E\bigpar{\gD X_n(\gD X_n)'} -w_n\qww A \E\bigpar{X_nX_n'}A'. \end{split} \end{equation} By the definition of the urn and \eqref{pin}, \begin{equation*} \begin{split} \E\bigpar{\gD X_n(\gD X_n)'\mid \cF_n} &=\sumjq \P\bigpar{I_{n+1}=j\mid \cF_n}\E\bigpar{\xi_j\xi_j'} =\sumjq \frac{a_j X_{nj}}{w_n}\E\bigpar{\xi_j\xi_j'} \end{split} \end{equation*} and thus, using \eqref{wn2} and \refT{TE}, and recalling \eqref{B}, as \ntoo, \begin{equation}\label{pib} \begin{split} \E\bigpar{\gD X_n(\gD X_n)'} =\sumjq \frac{a_j \E X_{nj}}{n+w_0}\E\bigpar{\xi_j\xi_j'} \to \sumjq a_j v_{1j}\E\bigpar{\xi_j\xi_j'} = B. \end{split} \end{equation} Furthermore, by \eqref{lva} and \refT{TE} again, \begin{equation}\label{pic} n^{-2} \E\bigpar{X_nX_n'} =n\qww \Var(X_n)+n\qww(\E X_n)(\E X_n)' \to 0+v_1v_1'. \end{equation} Consequently, by \eqref{jb}, \eqref{pib}, \eqref{pic}, and recalling that $w_n/n\to1$ by \eqref{wn2} and $Av_1=\gl_1v_1=v_1$, \begin{equation} \E\bigpar{Y_{n+1}Y_{n+1}'} \to B-Av_1v_1'A' = B-v_1v_1'. \end{equation} This proves \eqref{lylim}, and \eqref{plylim} follows by noting that $P_\gl v_1=P_\gl P_{\gl_1}v_1=0$ when $\gl\neq\gl_1$. \end{proof} \begin{proof}[Proof of \refT{TV1}] Let $\gl,\mu\in\gsax$, and note that, by our assumption, $\Re\gl,\Re\mu\le\Re\gl_2<\frac12\gl_1=\frac12$. Write the inner sum in \eqref{varxxx} as an integral: \begin{equation}\label{inte} \frac{1}n \sum_{i=1}^n\tin = \intoi \txn \dd x. \end{equation} For each fixed $x\in\ooi$, by Lemmas \ref{L3} and \ref{LYlim}, \begin{equation}\label{tlim} \begin{split} \txn&= F\xnn P_\gl \E\bigpar{Y_{\ceil{xn}}Y'_{\ceil{xn}}}P_\mu'F\xnn' \\& \to x^{-A} P_{\gl} B P_\mu' x^{-A'}.
\end{split} \end{equation} Furthermore, choose some $\ga\in[0,1)$ such that $\Re\gl_2<\frac12\ga$. Then, \eqref{jul3} applies and yields \begin{equation} \begin{split} \txn = O\bigpar{(n/\ceil{xn})^{\ga}} =O(x^{-\ga}), \end{split} \end{equation} which is integrable on $\ooi$. Thus, Lebesgue's theorem on dominated convergence applies to \eqref{inte} and yields, by \eqref{tlim} and the change of variables $x=e^{-s}$, \begin{equation*} \frac{1}n \sum_{i=1}^n\tin \to \intoi x^{-A} P_{\gl} B P_\mu' x^{-A'} \dd x =\intoo e^{sA} P_{\gl} B P_\mu' e^{sA'} e^{-s}\dd s. \end{equation*} Hence, \eqref{varxxx} and the definition \eqref{gSI} yield \begin{equation*} \frac{1}n\Var X_n= \frac{1}n \sumglx\summux\sum_{i=1}^n\tin \to \intoo e^{sA} \PIx B \PIx' e^{sA'} e^{-s}\dd s =\gS_I, \end{equation*} showing \eqref{tv1}. \end{proof} \begin{proof}[Proof of \refT{TV2}] As in the proof of \refT{TV1}, we use \eqref{varxxx} and consider the sum $\sum_{i=1}^n\tin$ for two eigenvalues $\gl,\mu\in\gsax$. By assumption, $\Re\gl+\Re\mu\le2\Re\gl_2=1$, and if $\Re\gl+\Re\mu<1$, then $\sum_{i=1}^n\tin=O(n)$ by \eqref{jul4}. Hence we only have to consider the case $\Re\gl+\Re\mu=1$, \ie, $\Re\gl=\Re\mu=\frac12=\Re\gl_2$. In particular, $\nu_\gl,\nu_\mu\le\nu_2$. In this case, as in \eqref{inte}, we transform the sum into an integral, but this time in a somewhat different way. We have, using the change of variables $x=n^y=e^{y\log n}$, \begin{equation} \begin{split} \sum_{i=1}^n\tin &= \tqn1 + \int_1^n \tqn{\ceil x}\dd x \\& =\tqn1 + \intoi \tyn n^y \log n\dd y. \end{split} \end{equation} Hence, since $\tqn1=O\bigpar{n\log^{2\nu_2}n}$ by \eqref{jul2}, \begin{equation}\label{benedictus} \begin{split} \bigpar{n\log^{2\nu_2+1}n}\qw\sum_{i=1}^n\tin & =o(1) + \intoi n^{y-1} (\log n)^{-2\nu_2}\tyn\dd y. \end{split} \end{equation} Fix $y\in(0,1)$.
Then, by \eqref{tedeum}, \begin{equation}\label{regina} \begin{split} F\nyn P_\gl &= \frac{1}{\nu_2!}\parfrac{n}{\ceil{n^y}}^\gl \log^{\nu_2}\parfrac{n}{\ceil{n^y}} \Bigpar{N_\gl^{\nu_2}P_\gl +o(1)} \\& = \frac{1}{\nu_2!}n^{(1-y)\gl}\bigpar{(1-y)\log n}^{\nu_2} \bigpar{N_\gl^{\nu_2}P_\gl +o(1)} \end{split} \end{equation} and similarly for $\mu$. Recall the assumption $\Re\gl+\Re\mu=1$, and let $\tau:=\Im\gl+\Im\mu$, so $\gl+\mu=1+\ii\tau$. Then, by \eqref{tin}, \eqref{regina} and \eqref{plylim}, \begin{multline}\label{coeli} n^{y-1} (\log n)^{-2\nu_2}\tyn \\ = \frac{1}{(\nu_2!)^2}n^{\ii(1-y)\tau}(1-y)^{2\nu_2} N_\gl^{\nu_2}P_\gl B \bigpar{N_\mu^{\nu_2} P_\mu}'+o(1). \end{multline} Moreover, by \eqref{jul2}, uniformly for $y\in\ooi$ and $n\ge2$, \begin{equation} \begin{split} n^{y-1} (\log n)^{-2\nu_2}\tyn = O\bigpar{(n/\ceil{n^y}) n^{y-1}} = O(1). \end{split} \end{equation} Hence the error term $o(1)$ in \eqref{coeli} is also uniformly bounded, and we can apply dominated convergence to the integral of it, yielding \begin{multline}\label{rex} \intoi n^{y-1} (\log n)^{-2\nu_2}\tyn\dd y \\ = \frac{1}{(\nu_2!)^2}\intoi n^{\ii(1-y)\tau}(1-y)^{2\nu_2} \dd y\cdot N_\gl^{\nu_2}P_\gl {B} \bigpar{N_\mu^{\nu_2} P_\mu}'+o(1). \end{multline} In the case $\tau=0$, \ie, $\mu=\bgl$, the integral on the \rhs{} of \eqref{rex} is $\intoi (1-y)^{2\nu_2}\dd y=(2\nu_2+1)\qw$. Furthermore, in this case, $P_\mu=P_{\bgl}=\overline{P_\gl}$ and thus $P_\mu'=P_\gl^*$, and similarly $N_\mu'=N_\gl^*$. Hence, \eqref{rex} yields \begin{equation}\label{kyrie} \intoi n^{y-1} (\log n)^{-2\nu_2}\tynb\dd y = \frac{1}{(2\nu_2+1)(\nu_2!)^2} N_\gl^{\nu_2}P_\gl {B} P_\gl^*\xpar{N_\gl^*}^{\nu_2} +o(1). \end{equation} On the other hand, if $\tau\neq0$, then, with $u=1-y$, \begin{equation} \intoi n^{\ii(1-y)\tau}(1-y)^{2\nu_2} \dd y = \intoi e^{\ii (\tau\log n)u} u^{2\nu_2} \dd u \to0 \end{equation} as \ntoo{} and thus $|\tau\log n|\to\infty$, by an integration by parts (or by the Riemann--Lebesgue lemma).
Hence, when $\mu\neq\bgl$, \eqref{rex} yields \begin{equation}\label{miseri} \intoi n^{y-1} (\log n)^{-2\nu_2}\tyn\dd y =o(1). \end{equation} We saw in the beginning of the proof that we can ignore the terms in \eqref{varxxx} with $\Re\gl<\frac12$ or $\Re\mu<\frac12$, and by \eqref{benedictus} and \eqref{miseri}, we can also ignore the case $\Re\gl=\Re\mu=\frac12$ but $\mu\neq\bgl$. Hence only the case $\mu=\bgl$ with $\Re\gl=\frac12$ remains in \eqref{varxxx}, and the result follows by \eqref{benedictus} and \eqref{kyrie}. \end{proof} \begin{proof}[Proof of \refT{TL2}] By \eqref{lva}, \begin{equation*} \E\norm{X_n/n-\E X_n/n}^2=n^{-2}\E\norm{X_n-\E X_n}^2 =\sumiq n\qww\Var(X_{ni})\to0, \end{equation*} and $\E X_n/n\to v_1$ by \refT{TE}. Hence, $\E\norm{X_n/n-v_1}^2\to0$, which is the claimed convergence in $L^2$. Moreover, if we fix $\eps>0$ such that $\Re\gl_2<1-\eps$, then the same argument shows, using \eqref{lv} and \eqref{ele}--\eqref{win}, that, more precisely, \begin{equation}\label{qin} \E\norm{X_n-(n+w_0)v_1}^2 = O\bigpar{n^{2-2\eps}}. \end{equation} Let $N\ge1$. By \eqref{kia} and the definition \eqref{qia}, for any $n\le N$, \begin{equation}\label{qid} F_{n,N}X_n = F_{0,N} X_0+\sum_{\ell=1}^{n}F_{\ell,N} Y_{\ell}. \end{equation} Moreover, by \eqref{eyn}, $Y_n$ is a martingale difference sequence, and thus so is, for $n\le N$, $F_{n,N}Y_n$. Hence, \eqref{qid} shows that $F_{n,N}X_n$, $n\le N$, is a martingale, and thus \begin{equation}\label{ophelia} F_{n,N}X_n=\E\bigpar{X_N\mid\cF_n},\qquad n\le N. \end{equation} By \refL{L1b}, $F_{n,N} v_1=F_{n,N} \pli v_1=\frac{N+w_0}{n+w_0}v_1$ and thus \eqref{ophelia} implies \begin{equation} F_{n,N}\bigpar{X_n-(n+w_0)v_1}=\E\bigpar{X_N-(N+w_0)v_1\mid\cF_n}, \qquad n\le N. 
\end{equation} Hence, by Doob's inequality (applied to each coordinate) and \eqref{qin}, \begin{equation} \E\sup_{n\le N}\norm{ F_{n,N}\bigpar{X_n-(n+w_0)v_1}}^2 \le 4\E\norm{X_N-(N+w_0)v_1}^2 =O\bigpar{N^{2-2\eps}}. \end{equation} It follows, using \refL{LB}, that if $N\ge 2i_0$, then \begin{equation}\label{poker} \E\sup_{N/2\le n\le N}\norm{\bigpar{X_n-(n+w_0)v_1}/n}^2 =O\bigpar{N^{-2\eps}}. \end{equation} This holds trivially for smaller $N$ as well, since each $X_n\in L^2$ and thus the \lhs{} of \eqref{poker} is finite for each $N$. Consequently, taking $N=2^k$ and summing, \begin{equation} \E\sum_{k=1}^\infty \sup_{2^{k-1}\le n\le 2^k}\norm{\bigpar{X_n-(n+w_0)v_1}/n}^2 <\infty. \end{equation} Hence, $\norm{\bigpar{X_n-(n+w_0)v_1}/n}\to0$ a.s., and thus $X_n/n\asto v_1$. \end{proof} \begin{proof}[Proof of \refT{TS}] If $\Var(u\cdot X_n)=0$ for every $n$, then $u'\gS u=0$ by \eqref{agnus}. For the converse, assume that $u'\gS u=0$. Then, by \eqref{tv1} and \eqref{gSI}, \begin{equation} 0=u'\gS_I u=\intoo u'\PIx e^{sA} B e^{sA'} \PIx'u\, e^{-\gl_1 s} \dd s. \end{equation} The integrand is a continuous function of $s\ge0$, and non-negative since $B$ is non-negative definite by \eqref{B}. Hence, the integrand vanishes for every $s\ge0$. In particular, taking $s=0$ we obtain, using \eqref{B} again, \begin{equation}\label{sev} \begin{split} 0 &= u'\PIx B \PIx' u = \sumiq a_i v_{1i} u'\PIx \E(\xi_i\xi_i')\PIx' u \\& = \sumiq a_i v_{1i} \E\bigpar{u'\PIx \xi_i(u'\PIx\xi_i)'} = \sumiq a_i v_{1i} \E\bigpar{u'\PIx \xi_i}^2, \end{split} \end{equation} noting that $u'\PIx \xi_i$ is a scalar. Each term is non-negative, and thus each term is 0. If $i$ is such that $a_i>0$, then it follows from the assumption that $\tA$ is irreducible that $v_{1i}>0$, and hence \eqref{sev} yields $\E\xpar{u'\PIx \xi_i}^2=0$ and thus $u'\PIx\xi_i=0$ a.s. Furthermore, since the urn is balanced, by \eqref{p1}, \begin{equation} P_{\gl_1}\xi_i = (a\cdot\xi_i)v_1 = b v_1.
\end{equation} Hence, for every $i$ with $a_i>0$, \begin{equation} \begin{split} u\cdot\xi_i = u\cdot \bigpar{\PIx+P_{\gl_1}}\xi_i =0+u\cdot (bv_1)=bu\cdot v_1. \end{split} \end{equation} This is independent of $i$, and thus, for every $n$, a.s., \begin{equation} u\cdot \gD X_n=bu\cdot v_1. \end{equation} Consequently, a.s., \begin{equation} u\cdot X_n=u\cdot X_0+nbu\cdot v_1 \end{equation} and thus $u\cdot X_n$ is deterministic. \end{proof} \section{Examples}\label{Sex} P\'olya{} urns have long been used in various applications, for example to study fringe structures in random trees; see, e.g., \cite{BagchiPal}, \cite{AldousFP}, \cite{Devroye-local}. Some recent examples are given in \cite{SJ292}, where, in particular, the number of two-protected nodes in a random $m$-ary search tree is studied for $m=2$ and $3$ using suitable P\'olya{} urns with 5 and 19 types, respectively, and it is shown that if this number is denoted by $Y_n$ ($m=2$) or $Z_n$ ($m=3$) for a search tree with $n$ keys, then \begin{align}\label{prot2} \dfrac{Y_n-\frac{11}{30}n}{\sqrt{n}} &\dto N\lrpar{0,\frac{29}{225}}, \\ \frac{Z_n-\frac{57}{700}n}{\sqrt{n}} &\dto N\lrpar{0,\frac{1692302314867}{43692253605000}}. \label{prot3} \end{align} (The binary case \eqref{prot2} had earlier been shown by \citet{MahmoudWard} using other methods.) The urns are strictly small; in both cases $\gl_1=1$ and $\gl_2=0$, with $\nu_2=0$, and Theorems \ref{TE} and \ref{TV1} yield, using the calculations in \cite{SJ292}, see \refR{RTV}, \begin{align} \E Z_n&=\frac{57}{700}\,n+ O(1), \\ \Var Z_n& = \frac{1692302314867}{43692253605000}\,n + o(n), \end{align} together with corresponding results for $Y_n$. (The results for $Y_n$ were earlier shown in \citet{MahmoudWard}, where exact formulas for the mean and variance of $Y_n$ are given.) Furthermore, \cite{SJ292} also studies the numbers of leaves and one-protected nodes in a random $m$-ary search tree using a similar but simpler urn.
(For $m=2$ this was done already by \citet{Devroye-local}.) For $2\le m\le 26$, this is a strictly small urn, and again the results in \refS{Sresults} yield asymptotics of mean and variance. See \cite{SJxxx} for further similar examples. \begin{remark} As noted above, the urn used to show \eqref{prot2} has 5 types, corresponding to 5 different small trees. To draw a ball corresponds to adding a node to a (randomly chosen) gap in the corresponding tree; this may cause the tree to break up into several smaller trees. The 5 types have 4, 3, 2, 1, and 0 gaps, respectively, and these numbers are their activities. Moreover, for type 2, the gaps are not equivalent, which makes the replacement for this type random. (We have $\xi_2=(1,-1,0,0,0)$ with probability $1/3$ and $\xi_2=(0,0,0,1,0)$ with probability $2/3$, see \cite{SJ292}.) A different, essentially equivalent, approach is instead to take as types the different gaps in the different trees; this yields 5 new types that we denote by 1, 2A, 2B, 3, 4. The transition from the old urn to the new is a simple linear transformation: each old ball of type 1 is replaced by 4 new balls of type 1, which we write as $1\to4\cdot 1$, and similarly $ 2\to \mathrm{2A}+2\cdot\mathrm{2B}$, $3\to2\cdot 3$, $4\to4$, while balls of type 5 (which has activity 0) are ignored. This yields a new P\'olya{} urn, where all types have activity 1. In the new urn, all replacements are deterministic, which sometimes is an advantage, but on the other hand, replacements now may involve subtractions. For example, in the original urn, $\xi_1=(-1,1,1,0,0)$, meaning that if we draw a ball of type 1, it is discarded and replaced by one of type 2 and one of type 3. In the new urn, this translates to $\xi_1=(-4,1,2,2,0)$, meaning that we remove the drawn ball together with 3 others of type 1, and then add $\mathrm{2A}+2\cdot\mathrm{2B}+2\cdot3$.
Even worse, $\xi_2=(4,-1,-2,0,0)$, meaning that if we draw a ball of type 2A, we remove it together with two balls of type 2B, and add 4 balls of type 1. Nevertheless, by the construction, the urn is obviously tenable in the sense of the present paper. This urn, with the gaps as types, thus is an example of a tenable urn with subtractions that occur naturally in an application. The P\'olya{} urn for the ternary search tree with 19 types in \cite{SJ292} can similarly be translated into an urn (with 29 types) using gaps as types, again with deterministic replacements, but sometimes subtractions. See also \cite{SJ292}, where the transition to the corresponding urn with gaps was used for the simpler urn used to study leaves; in that case there are no subtractions. \end{remark} \section{Further comments}\label{Sfurther} The variance decomposition \eqref{kia} explains some of the differences between the small and large urns stated in the introduction. Suppose again for convenience that $\gl_1=1$. Then, the term $F_{\ell,n}Y_\ell$ in \eqref{kia}, which is the (direct and indirect) contribution from the $\ell$-th draw, is roughly (ignoring logarithmic factors when $\nu_2>0$) of the order $(n/\ell)^{2\Re\gl_2}$. For a large urn, this decreases rapidly with $\ell$ and $\sum_\ell \ell^{-2\Re\gl_2}$ converges, and thus the variance is dominated by the contribution from the first draws. This strong long-term dependency leads to the a.s\punkt{} limit results, and the dependency of the limit on the initial state $X_0$. On the other hand, for a strictly small urn, the sum of the terms is of the order $n$, but each term is $o(n)$ and is negligible, which explains why the first draws, and the initial state, do not affect the limit distribution. In fact, for a component $X_{n,i}$ with asymptotic variance $(\gS)_{ii}>0$, we see that for any $\eps>0$, all but a fraction $\eps$ of $\Var X_{n,i}$ is explained by the draws with numbers in $[\gd n, n]$, for some $\gd=\gd(\eps)>0$. 
The long-term dependency is thus weak in this case. The remaining case, a small urn with $\Re\gl_2=1/2$, is similar to the strictly small case, but the long-term dependency is somewhat stronger. If we for simplicity assume $\nu_2=0$, then the contribution of the $\ell$-th draw to $\Var(X_n)$ is of the order $n/\ell$, giving a total variance of order $n\log n$. Again, the first draws and the initial state do not affect the limit distribution, but in order to explain all but a fraction $\eps$ of the variance, we have to use the draws in $[n^\gd, n]$, for some small $\gd>0$. Cf.\ \cite[Remark 4.3]{SJ154}, where a similar argument is made using the corresponding continuous time branching process.
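The orders appearing in this last case can be checked by a one-line harmonic-sum calculation (a side remark, not part of the original argument): assuming $\nu_2=0$, the contributions of order $n/\ell$ give

```latex
\sum_{\ell=1}^{n} \frac{n}{\ell} = n H_n \sim n\log n,
\qquad
\sum_{\ell=\ceil{n^\gd}}^{n} \frac{n}{\ell}
  \sim n\bigpar{\log n - \log n^{\gd}} = (1-\gd)\,n\log n,
```

so the draws with numbers in $[n^\gd,n]$ indeed account for a fraction $1-\gd$ of the total variance, consistent with the discussion above.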
\subsection{Diffusion Models} \label{sec:prelim-diffusion} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{figures/architecture.pdf} \caption{Overview of the object-centric diffusion model. We combine a diffusion model with an object-centric multimodal transformer to iteratively reason about both 3D object embeddings and the task specification, and predict goal poses of objects.} \label{fig-arch} \end{figure*} Denoising diffusion models are a class of generative models \cite{sohl2015deep,song2021scorebased}. Given a sample $x_0 \sim q(x_0)$ from the data distribution, the \textit{forward} diffusion process is a Markov chain that creates latent variables $x_1,...,x_T$ by gradually adding Gaussian noise to the sample: \begin{align*}q(x_t|x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}x_{t-1}, \beta_t\mathcal{I}).\end{align*} Here $\beta_t$ follows a fixed variance schedule such that the variance added at each step is small but the total noise added to the original sample over the chain is large. These two conditions allow sampling $x_0 \sim p_\theta(x_0)$ from a \textit{reverse} process that starts from Gaussian noise $x_T$ and follows a learned Gaussian posterior \begin{align*} p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)). \end{align*} In this work, we adopt the simplified model introduced in \cite{ho2020denoising}, which fixes the covariance $\Sigma_\theta(x_t, t)$ to an untrained time-dependent constant and reparameterizes the mean $\mu_\theta(x_t, t)$ with a noise term $\epsilon_t$. Diffusion models can be trained to minimize the variational lower bound on the negative log-likelihood $\EX[-\log p_\theta(x_0)]$.
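Because the forward chain composes Gaussians, $x_t$ can be sampled directly from $x_0$ via $q(x_t|x_0)=\mathcal{N}\bigl(\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)\mathcal{I}\bigr)$ with $\bar\alpha_t=\prod_{s\le t}(1-\beta_s)$. A minimal NumPy sketch of this closed form follows; the linear schedule and step count are illustrative assumptions, not the settings used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear variance schedule: each beta_t is small,
# but the accumulated noise over the whole chain is large.
T = 200
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=3)   # a toy data point
eps = rng.normal(size=3)  # the noise that epsilon_theta is trained to predict
x_1, x_T = q_sample(x0, 0, eps), q_sample(x0, T - 1, eps)
# Early in the chain x_t stays close to x0; by t = T it is mostly noise.
```

Training then amounts to regressing $\epsilon$ from $(x_t, t)$, which is exactly the simplified objective discussed next.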
A simplified training objective with the reparameterized mean can be derived as: \begin{align*} L_\text{simple} = \EX_{t \sim [1, T], x_0 \sim q(x_0), \epsilon \sim \mathcal{N}(0, \mathcal{I})}[||\epsilon - \epsilon_\theta(x_t, t)||^2]. \end{align*} Diffusion models have been used for motion and grasp planning in prior work \cite{janner2022diffuser,urain2022se3dif}. However, existing methods require known object models and are not conditioned on flexible language goals. \subsection{Transformers} \label{sec:prelim-transformers} Transformers were proposed in \cite{vaswani2017attention} for modeling sequential data. At the heart of the Transformer architecture is the scaled dot-product attention function, which allows elements in a sequence to attend to other elements. Specifically, an attention function takes in an input sequence $\{x_1,...,x_n\}$ and outputs a sequence of the same length $\{y_1,...,y_n\}$. Each input $x_i$ is linearly projected to a query $q_i$, key $k_i$, and value $v_i$. The output $y_i$ is computed as a weighted sum of the values, where the weight assigned to each value is based on the compatibility of the query with the corresponding key. In this work, we use the encoder layers of the original transformer architecture. Each encoder layer includes an attention layer and a position-wise fully connected feed-forward network. With the use of attention masks, the encoder layers can process sequences of different lengths. \subsection{Encoders} We leverage modality-specific encoders to convert the multimodal inputs to latent tokens that are later processed by the transformer network. \textbf{Object encoder.} Given the segmented point cloud $x_i$ of an object $o_i$, we learn an encoder $h_o(x_i)$ to obtain the latent representation of the object. This encoder is based on the Point Cloud Transformer (PCT)~\cite{guo2021pct}, which has been shown to be effective at shape classification and part segmentation.
We process the centered point cloud with PCT and learn a separate multilayer perceptron (MLP) to encode the mean position of the original point cloud. Encodings from the two networks are concatenated to give $h_o(x_i)$. We rely on this latent representation of objects for semantic, geometric, and spatial reasoning. \textbf{Language.} To obtain the latent representation of the language goal, we map each unique word token from the language instructions separately to an embedding with a learned mapping $h_w(w_i)$. This method helps establish a fine-grained correspondence between each part of the language specification and the respective constraint on the generated structure. \textbf{Diffusion encodings.} Since the goal poses of objects are iteratively optimized by the diffusion model and need to be fed back into the model, we use an MLP to encode the goal poses of the objects $h_T(\xi^{goal}_i)$. To compute the time-dependent Gaussian posterior for reverse diffusion, we combine a latent code for $t$ in the feature channel by learning a time embedding $h_{time}(t)$. \textbf{Positional encoding.} To differentiate the multimodal data, we use a learned position embedding $h_{pos}(i)$ to indicate the position of the words and objects in input sequences and a learned type embedding $h_{type}(\upsilon_i)$ to differentiate object point clouds ($\upsilon_i=1$) from word tokens ($\upsilon_i=0$). \subsection{Conditional Pose Diffusion Model} Combining a diffusion model and an object-centric transformer, \emph{StructDiffusion}{} can sample diverse yet realistic object structures while accounting for the complex constraints imposed by the object geometry and language goal. The conditional diffusion model predicts the goal poses for the objects $\bm{\xi}_0 = \{\xi_i\}^N_i$ starting from the last time step of the reverse diffusion process $\bm{\xi}_T \sim \mathcal{N}(0, \mathcal{I})$, as illustrated in Fig.~\ref{fig:reverse-diffusion}.
We use the bold symbol here because we jointly optimize the poses of all objects. Different from most existing diffusion models, which directly generate goal images and do not explicitly model individual objects \cite{sohl2015deep,song2021scorebased,kapelyukh2022dall}, we use the transformer model to build an object-centric representation of the scene and reason about the higher-order interactions between multiple objects. This approach allows us to account for both global constraints and local interactions between objects. Leveraging attention masks, a single transformer model can also learn to rearrange different numbers of objects. The use of the diffusion model helps us capture diverse structures since we are sampling from a series of Gaussian noises at different scales when going from $\bm{\xi}_T$ to our goal $\bm{\xi}_0$. The resulting samples are therefore diverse at different levels of granularity (e.g., different placements of the structures and different orientations of the individual objects). The diversity is also crucial when dealing with the inherent ambiguity of language instructions. For example, a \textit{large} circle of plates and a \textit{large} circle of candles impose different constraints on the sizes of the structures because the objects being arranged have different sizes. Combining the advantages of the object-centric transformer and the diffusion model, we propose to model the conditional reverse process as \begin{align*} p_\theta(\bm{\xi}_0| \{x_i\}, \{w_i\}) = p(\bm{\xi}_T) \prod_{t=1}^{T} p_\theta(\bm{\xi}_{t-1}| \bm{\xi}_{t}, \{x_i\}, \{w_i\}). \end{align*} The generation process depends on the point clouds of the objects and the language instruction. As discussed in Sec.~\ref{sec:prelim-diffusion}, we learn the time-dependent noise $\bm{\epsilon}_t$, which can be used to compute $\bm{\xi}_t$. We use the transformer as the backbone to predict the conditional noise $\bm{\epsilon}_\theta(\bm{\xi}_t, t, \{x_i\}, \{w_i\})$ for each object.
We obtain the transformer input for the language part and the object part as \begin{align*} c_{i,t} & = [h_w(w_i); h_{pos}(i); h_{type}(\upsilon_i); h_{time}(t)] \\ e_{i,t} & = [h_o(x_i); h_T(\xi^{goal}_i); h_{pos}(i); h_{type}(\upsilon_i); h_{time}(t)] \end{align*} \noindent where $[;]$ denotes concatenation along the feature dimension. The model takes in the sequence $\{c_{1,t},...,c_{M,t}, e_{1,t}, ...,e_{N,t}\}$ and predicts $\{\epsilon_{1,t},...,\epsilon_{N,t}\}$ for the object poses. We parameterize the 6-DoF pose target $\xi$ as $(t, R) \in SE(3)$. We directly predict $t \in \mathbb{R}^3$ and predict two vectors $a, b \in \mathbb{R}^3$, which are used to construct the rotation matrix $R \in SO(3)$ using a Gram–Schmidt-like process proposed in \cite{zhou2019continuity}. \subsection{Discriminators} Besides the generator, we can also use a learned discriminator model to further filter the predictions for realism. The discriminator works on imagined scenes, where the point clouds of objects are rigidly transformed to the respective goal poses following $x_i^{goal} = \xi^{goal}_i (\xi^{pc}_i)^{-1}x_i$. Here we also have the opportunity to leverage a spatial abstraction different from the one used by the generator. The generator operates on the latent object-centric representation, which is suitable for \textit{imagining} possible structures. For a discriminator, the interactions between the transformed point cloud objects can be reasoned about directly at the point level. To maintain the ability to distinguish each individual object, we add a one-hot encoding to each point feature. In our preliminary experiments, we found that the scene-level collision model has more discriminative power than the object-centric model that operates on the latent representation of objects. We explore two discriminator models. The first, a collision discriminator, is learned to predict pairwise collisions between two objects from their partial point clouds.
The second, a structure discriminator, is learned to classify the whole multi-object structure. Similar to the language-conditioned generator, we also condition the structure discriminator on the language goal so that it can learn structure-specific constraints to score the samples. We found that the structure discriminator works better when it is only required to predict whether local constraints are satisfied. Therefore, we normalize the scene point cloud and drop parts of the language instruction that specify global constraints, such as where to place the structure on the table. \subsection{Planning and Inference} \begin{algorithm}[bt] \caption{Planning with \emph{StructDiffusion}{}} \label{alg:diffusion_planning} \begin{algorithmic}[1] \For{$t \in \text{range}(T, 1)$} \State $\bm{\epsilon}_t = \bm{\epsilon}_\theta(\bm{\xi}_t, t, \{x_i\}, \{w_i\})$ \State $\bm{z} \sim \mathcal{N}(0, \mathcal{I})$ if $t>1$ else $\bm{z} = 0$ \State $\bm{\xi}_{t-1} = \frac{1}{\sqrt{1-\beta_t}}\big(\bm{\xi}_t - \frac{\beta_t}{\sqrt{1-\prod_{s=1}^{t}(1-\beta_s)}} \bm{\epsilon}_t\big) + \sqrt{\beta_t} \bm{z}$ \EndFor \State Transform object points: $x_i^{goal} = \xi^{goal}_i (\xi^{pc}_i)^{-1}x_i$ \State Compute discriminator scores \State \textbf{return} ranked $\bm{\xi}_0$ \end{algorithmic} \end{algorithm} In Alg.~\ref{alg:diffusion_planning}, we show how to combine the different components of our framework to sample object structures. We first initialize a batch of goal poses in $\mathbb{R}^{B \times N \times (3+3+3)}$ with random noise. We use batch operations on a GPU to perform diffusion and transform the point clouds of multiple objects for different samples. For the discriminators, we also generate the combined point clouds of objects after the diffusion process and score them in batches. The ranked samples are returned. Each sample corresponds to a physically and semantically valid multi-object structure that can be used by other components of the manipulation pipeline for planning.
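Under the standard DDPM update of \cite{ho2020denoising}, the sampling loop of the planning procedure can be sketched as follows. This is a runnable skeleton only: the noise predictor is a stub standing in for the conditional transformer $\bm{\epsilon}_\theta$, the schedule values are illustrative assumptions, and the discriminator ranking step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(xi_t, t):
    """Stub for the conditional transformer eps_theta(xi_t, t, {x_i}, {w_i});
    it predicts zero noise so that the loop below is self-contained."""
    return np.zeros_like(xi_t)

def sample_poses(batch=4, n_objects=3, pose_dim=9):
    """Reverse-diffusion loop over a batch of object poses
    (translation + two rotation vectors = 3 + 3 + 3 dims per object)."""
    xi = rng.normal(size=(batch, n_objects, pose_dim))  # xi_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        eps = eps_model(xi, t)
        z = rng.normal(size=xi.shape) if t > 0 else np.zeros_like(xi)
        # Posterior mean, then noise scaled by sqrt(beta_t)
        xi = (xi - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        xi = xi + np.sqrt(betas[t]) * z
    return xi  # xi_0: candidate goal poses, to be scored by the discriminators

samples = sample_poses()
```

In the full pipeline, the returned batch would be transformed into imagined scenes and ranked by the collision and structure discriminators.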
\subsection{Training Details} To train our model, we use the dataset from \cite{liu2022structformer} containing tuples $(\{x_i\}, \{T_i^{goal}\})$. We train a single model for all structures, where the numbers of examples for the different classes of structures are balanced. We use a batch size of 128 and train the diffusion model on a single RTX3090 GPU for about 12 hours. To train the collision discriminator, we randomly sample $100,000$ configurations of objects. For the structure discriminator, we generate negative examples by randomly perturbing the ground truth target poses $\xi^{goal}_i$. For each negative example, we also randomly select a set of objects to perturb so that the negative examples have different numbers of objects out of place. We augment the data by using point clouds from different time steps of the rearrangement sequence to create the imagined scenes, as they usually create different occlusions of objects. \subsection{Baselines} \begin{itemize} \item \textbf{StructFormer}: This baseline uses a multimodal and object-centric transformer network to generate multi-object structures based on segmented object point clouds and language instructions \cite{liu2022structformer}. The transformer network autoregressively predicts the goal poses of each object. Following the original work, we train a separate model for each class of structure. \item \textbf{CVAE}: This baseline is based on a conditional variational autoencoder (CVAE) model. CVAEs have been used to capture the different modes in multi-task learning and language-conditioned manipulation \cite{lynch2020learning,mees2022hulc}. We introduce a CVAE that uses the object-centric transformer backbone as a strong baseline for semantic rearrangement. To prevent the latent variable from being ignored when combining the transformer with the CVAE, the transformer network predicts the object goal poses in a single forward pass (i.e., not autoregressively).
A single model is trained for four classes of structures. \item \textbf{Optimization with Learned Discriminator}: This baseline iteratively optimizes the goal poses of objects with the structure discriminator that is trained to classify valid rearranged scenes and invalid ones. This general approach has been used extensively for learning language-conditioned manipulation from offline data \cite{nair2022learning}, grasping \cite{murali20206,lu2020multifingered}, and predicting stable placements of objects \cite{paxton2021predicting}, but not for language-conditioned multi-object rearrangement. We use the cross-entropy method for optimization \cite{chen2021fastrack}. We only optimize the object poses and not the structure pose to simplify the optimization problem. We initialize the samples from the baseline generative models because initializing the variables with random values does not lead to meaningful performance. \end{itemize} \begin{figure}[bt] \includegraphics[width=1.0\columnwidth]{figures/housekeep_custom_objects.png} \centering \caption{ Testing objects from Google Scanned Objects~\cite{downs2022google}, the ReplicaCAD dataset~\cite{szot2021habitat}, and the YCB Object Set~\cite{calli2015ycb}. The test object dataset contains a wide range of textured objects belonging to various classes. None of these objects appear in the training data. } \label{fig:objects} \end{figure} \subsection{Experimental Setup} We evaluate all models in the PyBullet physics simulator~\cite{coumans2017pybullet}. Point cloud observations are rendered with NViSII~\cite{morrical2021nvisii}. We test on novel object models from both known and unknown categories, as our goal is to transfer the model learned in simulation directly to real-world objects. Fig.~\ref{fig:objects} shows the testing objects, which are collected from Google Scanned Objects~\cite{downs2022google}, the ReplicaCAD dataset~\cite{szot2021habitat}, and the YCB Object Set~\cite{calli2015ycb}.
To generate the test scenes, we use the same data collection pipeline that is used to collect ground-truth data in prior work~\cite{liu2022structformer}. This ensures that a valid rearrangement can be found for each scene. The set of objects and the language goal for each scene are randomly sampled. Distractor objects are randomly placed in the scene to simulate occlusions. We report the success rate for the rearrangements. To isolate the pose prediction problem from other components of the system (e.g., grasp sampling and motion planning), we directly place objects $3$cm above the predicted target poses. We check whether the rearrangement is physically valid by running the simulation loop after placing each object. We check possible collisions and intersections between objects using approximate convex decompositions of the 3D object models. We also implement model-based classifiers to evaluate whether the rearrangement satisfies the language goal. For example, we check whether the objects are in a line using the centroids of the models. A rearrangement is considered successful if the placements of objects are not preempted due to physics-related failures and the goal scene satisfies all semantic constraints determined by the given language goal. On average, there are 5 constraints for different types of structures. \begin{figure}[bt] \includegraphics[width=1.0\columnwidth]{figures/result_1.pdf} \centering \caption{Success rates for four different classes of structures on held-out objects. Models are evaluated in a physics simulator using unseen objects. A rearrangement is successful only if all objects are placed in physically valid poses and the rearranged scene satisfies the language goal.
Compared to StructFormer~\cite{liu2022structformer}, the model previously proposed for semantic rearrangement, \emph{StructDiffusion}~obtains a 16\% average improvement in success rate.} \label{fig:result-generative} \end{figure} \subsection{Comparison with Other Generative Models} In Fig.~\ref{fig:result-generative}, we compare with other generative models to gain insights into the generator-discriminator design of our model. Our complete model, \emph{StructDiffusion}{}, drastically outperformed all baselines on the tower, line, and table setting structures and obtained comparable performance on the circle structures. The improvement was most significant for structures that require precise placements of objects and modeling of contacts between objects. The generator-discriminator design was necessary because the diffusion model alone still generated invalid samples, especially for the line structures. The performance difference between \emph{StructDiffusion}{} and the ablated model without the discriminator indicates that our model can leverage the complementary strengths of the object-centric representation and the scene-level representation that preserves point-to-point interactions. Although applying the collision discriminator also improved the performance of StructFormer and CVAE, our diffusion model benefited the most from the addition. We attribute this difference to the different diversities of samples from these three classes of generative models. The autoregressive transformer underlying StructFormer does not explicitly model uncertainty and therefore produces similar samples for each scene. The single source of stochasticity from the latent variable of the CVAE model is also not enough. Because the diffusion model incorporates uncertainties at different scales, it is able not only to generate different classes of structures but also to generate diverse hypotheses of object placements given only partial, and even heavily occluded, point clouds of objects.
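The generator-discriminator interplay discussed above amounts to sampling several candidate structures and keeping the one the discriminators rate highest. A minimal sketch, with stand-in sampler and scoring functions in place of the learned networks:

```python
import numpy as np

def sample_and_rerank(sample_structure, collision_score, structure_score, k=32):
    """Draw k candidate structures from a generative model and keep the one
    that the discriminators consider most likely to be valid."""
    candidates = [sample_structure() for _ in range(k)]
    scores = [collision_score(c) * structure_score(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy 1D illustration: place 3 objects of width 0.05 on a line segment.
rng = np.random.default_rng(0)
width = 0.05
sample_structure = lambda: np.sort(rng.uniform(0.0, 1.0, size=3))

def collision_score(xs):
    # Stand-in for the learned collision discriminator: 1 if no overlap.
    return float(np.all(np.diff(xs) > width))

def structure_score(xs):
    # Stand-in for the structure discriminator: prefer even "line" spacing.
    return np.exp(-np.var(np.diff(xs)))

best = sample_and_rerank(sample_structure, collision_score, structure_score)
```

The stochastic sampler matters here: a generator producing near-identical samples (as the autoregressive baseline tends to) gives the discriminators little to choose from.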
We provide a qualitative comparison in Fig.~\ref{fig:pc-rearrangement-comparison}. \begin{figure}[bt] \includegraphics[width=1.0\columnwidth]{figures/result_2.pdf} \centering \caption{Comparing \emph{StructDiffusion}{} with other iterative methods. The two baselines initialize samples of target object poses using either the StructFormer or the CVAE model. The predicted scores of a learned discriminator are then used to guide iterative optimization of the samples. In comparison, \emph{StructDiffusion}{} directly predicts the noise $\epsilon_t$ that needs to be removed from the samples at each step.} \label{fig:result-iterative} \end{figure} \begin{figure*}[bt] \includegraphics[width=\textwidth]{figures/simulation_vis.pdf} \caption{Comparison between \emph{StructDiffusion}{} and the baselines on partial views of held-out objects, given language commands from four different categories. \emph{StructDiffusion}{} is better at resolving constraints involving contact and precise arrangement of objects, avoiding collisions and creating physically realistic placements. The labels indicate whether the structures can be successfully built in the simulation environment and also satisfy the language goal.} \label{fig:pc-rearrangement-comparison} \end{figure*} \subsection{Comparison with Other Iterative Methods} In Fig.~\ref{fig:result-iterative}, we compare \emph{StructDiffusion}{} with other optimization-based baselines that can take advantage of additional computation time to iteratively refine the prediction. The results show that \emph{StructDiffusion}{} outperformed the other two baselines. Even though strong performance has been observed when applying optimization-based methods to other manipulation tasks, we did not observe a significant benefit on our task.
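A single reverse step of the kind used by the diffusion model, in which the predicted noise $\epsilon_t$ is removed from the current sample, can be sketched with the standard DDPM update; the noise-prediction input and the variance choice below are stand-in assumptions, not the exact configuration:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, alphas, alpha_bars, rng):
    """One denoising step x_t -> x_{t-1} of a standard DDPM sampler.

    eps_pred stands in for the network's estimate of the noise in x_t
    (in our setting, a transformer conditioned on point clouds and language).
    """
    a_t, ab_t = alphas[t], alpha_bars[t]
    # Posterior mean: remove the (scaled) predicted noise from the sample.
    mean = (x_t - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(a_t)
    if t == 0:
        return mean                      # the final step is deterministic
    sigma_t = np.sqrt(1.0 - a_t)         # one common, simple variance choice
    return mean + sigma_t * rng.standard_normal(x_t.shape)

# Toy schedule: 10 steps, linear betas; poses for 3 objects x 6 DoF.
T = 10
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 6))
for t in reversed(range(T)):
    x = ddpm_reverse_step(x, t, eps_pred=np.zeros_like(x), alphas=alphas,
                          alpha_bars=alpha_bars, rng=rng)
```

Unlike the discriminator-guided baselines, every step applies a learned denoising direction rather than following the gradient of a single score.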
Looking more closely, we observe that the challenging cases not yet solved by the non-iterative variants are those where the placements of objects are closely related (e.g., mugs tightly packed in a line without intersection, the third task in Fig.~\ref{fig:pc-rearrangement-comparison}). In these cases, the guidance from the discriminator can be ambiguous and lead to local minima without reaching valid solutions. We hypothesize that leveraging guidance at different scales is necessary, as studied in a recent work that directly learns to predict scores (i.e., gradients) at different scales for 2D object rearrangement~\cite{wu2022targf}. Score-based methods are closely related to the diffusion model used in this work, as shown by Song and colleagues~\cite{song2021scorebased}. \subsection{Real World Experiments} \begin{figure*}[bt] \includegraphics[width=\textwidth]{figures/pc_examples.pdf} \caption{Examples of predicted structures for real-world objects. We can predict structures from raw point clouds for a wide range of language instructions fitting into four different broad classes.} \label{fig:pc-rearrangement-examples} \end{figure*} We also performed a set of real-world experiments, including testing structure assembly on a robotic manipulation task. \subsection{Perception and Hardware} We deployed our system on a 7-DoF JACO arm with an Asus Xtion RGB-D camera. We obtained segmented object point clouds by identifying clusters of interest through table surface detection and Euclidean distance clustering, using the Point Cloud Library~\cite{rusu20113d}. We calculated antipodal grasps over each object point cloud~\cite{ten2018using}, which were then ordered and executed using pairwise ranking~\cite{kent2018adaptive}. We used RRT-Connect~\cite{kuffner2000rrt} for motion planning. We released each object $3$cm above the predicted pose.
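The cluster extraction step (after the table plane has been removed) is plain Euclidean distance clustering; a minimal numpy/scipy sketch of the idea, rather than the Point Cloud Library implementation actually used:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def euclidean_cluster(points, radius=0.02, min_size=10):
    """Group 3D points into clusters of mutually reachable points
    (chains of neighbors within `radius`), dropping tiny clusters."""
    n = len(points)
    pairs = np.array(list(cKDTree(points).query_pairs(radius)))
    if len(pairs) == 0:
        adj = coo_matrix((n, n))
    else:
        i, j = pairs[:, 0], pairs[:, 1]
        adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return [np.where(labels == l)[0] for l in np.unique(labels)
            if np.sum(labels == l) >= min_size]

# Toy check: two well-separated blobs should give two clusters.
rng = np.random.default_rng(0)
a = rng.normal([0.0, 0.0, 0.0], 0.005, size=(50, 3))
b = rng.normal([0.5, 0.0, 0.0], 0.005, size=(50, 3))
clusters = euclidean_cluster(np.vstack([a, b]), radius=0.05, min_size=10)
```

The radius and minimum cluster size here are illustrative; in practice they are tuned to the sensor noise and object scale.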
\subsection{Predictions for Real-World Objects} We show examples of the predicted structures for real-world objects in Fig.~\ref{fig:pc-rearrangement-examples}. These examples are created by rigidly transforming the segmented object point clouds from an initial scene with the target poses of the highest ranked structure. Even though our model is trained only on simulation data, it can be directly used to generate semantically diverse and physically valid structures for real-world objects. Our model can generate different variations of the same structure type, as shown in \textit{(A, B)}. The same set of objects can be arranged into completely different classes of structures conditioned on the language, as shown in \textit{(A, C)} and \textit{(D, E)}. Besides changing the positions and sizes of the structures, the orientations of the structures can also be specified in language \textit{(F, G)} and \textit{(H, I)}. Note that even though table settings in the training data are only aligned horizontally as shown in \textit{I}, the use of language and training on other orientation-specific structures enable compositional generalization to the new orientation shown in \textit{H}. Finally, we see that non-symmetrical objects (e.g., mugs, knives, and spatulas) are correctly aligned in \textit{B, D, E, J, H}. \begin{table}[t] \centering \caption{Robot experiments with real-world objects. We perform each task 3 times with different initial positions of objects.
We show the number of times that valid grasp and motion plans are found and that the plans are executed successfully by the robot.} \begin{tabular}{lccc} \toprule \multirow{2}{*}{Objects} & \multirow{2}{*}{Structure} & Grasp and & \multirow{2}{*}{Placement} \\ &&Motion Planning& \\ \midrule Bowl, Bowl, Pan & Tower & 3&2\\ Bowl, Bowl, Pan & Small Line & 2&2\\ Bowl, Bowl, Pan & Small Circle & 3&3\\ \midrule \multicolumn{2}{c}{Overall Success Rate} & 88.9\% & 77.8\% \\ \bottomrule \end{tabular} \label{tab:robot_result} \end{table} \subsection{Rearrangement} To reliably rearrange multiple objects, we combined \emph{StructDiffusion}{} with grasp and motion planning. We performed a nested search to find the target structure to execute. Specifically, we iterated through the generated and ranked structures. For each structure, we sampled a set of grasp poses for each object and computed the corresponding pre-grasp, standoff, and placement poses based on the prediction. We then searched for valid motion plans between these waypoints. If all motion plans were found, we executed them on the robot. In Table~\ref{tab:robot_result}, we show success counts and the average success rate for trials with different objects and different language goals. Valid motion and grasp plans could be found most of the time due to the diverse structures generated by \emph{StructDiffusion}{}. We observed that partial point clouds, caused by real sensor noise and self-occlusions of large objects, led to a small number of invalid structure predictions. While planning, we assume that the objects are rigidly attached to the gripper after grasping, without slippage. This assumption did not always hold in the real world and led to occasional failures. It can be relaxed by predicting a post-grasp displacement, using learned models such as \cite{zhao2020towards}.
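The nested search described above can be summarized as follows; the grasp sampler, motion planner, and pose fields are hypothetical stand-ins for the actual components (antipodal grasp sampling and RRT-Connect):

```python
def find_executable_plan(ranked_structures, sample_grasps, plan_motion):
    """Return the first (structure, per-object plans) for which every object
    has a grasp with valid motion plans through all waypoints, else None."""
    for structure in ranked_structures:
        object_plans = []
        for obj, target_pose in structure:
            plan = None
            for grasp in sample_grasps(obj):
                # pre-grasp -> grasp -> standoff -> placement, all must be reachable
                waypoints = [grasp.pre, grasp.pose, grasp.standoff, target_pose]
                segments = [plan_motion(a, b) for a, b in zip(waypoints, waypoints[1:])]
                if all(s is not None for s in segments):
                    plan = (grasp, segments)
                    break
            if plan is None:
                break                    # this structure is not executable
            object_plans.append((obj, plan))
        else:
            return structure, object_plans
    return None

# Toy check with scalar "poses": anything with |pose| >= 1 is unreachable.
from collections import namedtuple
Grasp = namedtuple("Grasp", "pre pose standoff")
grasps = lambda obj: [Grasp(0.1, 0.2, 0.3)]
plan = lambda a, b: [a, b] if abs(a) < 1 and abs(b) < 1 else None
bad = [("mug", 5.0)]    # placement pose out of reach
good = [("mug", 0.5)]
result = find_executable_plan([bad, good], grasps, plan)
```

Because the structures are ranked by the discriminators, the search typically terminates after examining only a few candidates.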
\section{Introduction}\label{sec:intro} \input{01intro} \section{Related Work}\label{sec:related_work} \input{02related_work} \section{Preliminaries}\label{sec:transformer} \input{03prelim} \section{\emph{StructDiffusion}{} for Object Rearrangement}\label{sec:approach} \input{04approach} \section{Simulation Experiments}\label{sec:experiments} \input{06experiments} \section{Real World Experiments}\label{sec:robot} \input{07robot} \section{Conclusions}\label{sec:conclusions} \input{07conclusions} \section*{ACKNOWLEDGMENT} We thank Yilun Du for discussions about the use of diffusion models. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sect:intro} In this article we apply methods and techniques known from perturbative Feynman integral calculations to non-perturbative lattice correlation functions. Integration-by-parts identities \cite{Tkachov:1981wb,Chetyrkin:1981qh} are widely used in perturbative calculations: They relate Feynman integrals with different powers of the propagators and allow one to express any Feynman integral from a family of Feynman integrals as a linear combination of basis integrals (usually called master integrals). The reduction to master integrals involves (in principle) only linear algebra. Several publicly available computer programs for Feynman integral reduction exist \cite{vonManteuffel:2012np,Smirnov:2014hma,Maierhoefer:2017hyi}, which implement the Laporta algorithm \cite{Laporta:2001dd}. However, for current state-of-the-art calculations the involved systems of linear equations are rather large and actually constitute a bottleneck. Quite recently it has become clear that Feynman integral reduction can be formulated in the language of twisted cocycles \cite{Mastrolia:2018uzb,Frellesvig:2019kgj,Frellesvig:2019uqt,Mizera:2019vvs,Mizera:2020wdt}. This formalism comes with an inner product, given by the intersection number of twisted cocycles. Thus, if the intersection numbers can be computed efficiently \cite{Weinzierl:2020xyy}, we may bypass the huge systems of linear equations and compute the reduction to master integrals directly from the inner product. In this article we would like to point out that the formalism of twisted cocycles has wider applications and is not restricted to Feynman integral calculations. The formalism of twisted cocycles has also been applied to perturbative scattering amplitudes \cite{Mizera:2017rqa,Mizera:2017cqs,delaCruz:2017zqr,Mizera:2019gea,Mizera:2019blq} within the Cachazo-He-Yuan formalism \cite{Cachazo:2013gna,Cachazo:2013hca,Cachazo:2013iea}.
Thirdly, we should also mention that concepts closely related to twisted cocycles in the context of the Batalin-Vilkovisky formalism have been discussed in \cite{Schwarz:2008sa,Albert:2008ui,Gwilliam:2012jg,JohnsonFreyd:2012ww}. In this paper we extend the application of twisted cocycles towards correlation functions on a lattice. At finite coupling these are non-perturbative objects. On a lattice, the correlation functions are finite-dimensional integrals. For a scalar theory the dimension of the integrals equals the number of lattice points. In this article we focus on a scalar theory. We determine the dimension and a basis for the twisted cohomology group related to the correlation functions of a scalar theory on a lattice. This allows us to find linear relations between different correlation functions at finite coupling on the lattice. For example, within $\phi^3$-theory we may express a correlation function where the field occurs at a lattice point $x$ to power two or higher as a sum of correlation functions where at each lattice point the field occurs at most to power one. In order to avoid misunderstandings let us clearly state that all relations in this article follow from integration-by-parts identities. We do not consider operator product expansions \cite{Wilson:1972ee}. The operator product expansion can be used in continuum space-time to express a product of bilocal operators, say at space-time points $x$ and $y$, as a linear combination of regular local operators with coefficients which contain the short-distance singularities. In this article we always keep the lattice fixed, and the lattice spacing $a$ provides an ultraviolet cutoff. This paper is organised as follows: In the next section we recall the definition of correlation functions in quantum field theory. In section~\ref{sect:lattice} we introduce the lattice formulation. In section~\ref{sect:twisted_cocycles} we define twisted cocycles.
Sections~\ref{sect:cohomology} and \ref{sect:reduction} contain the main results of this article: In section~\ref{sect:cohomology} we determine the twisted cohomology groups for a scalar theory and in section~\ref{sect:reduction} we present an efficient method for reducing twisted cocycles to a basis of twisted cocycles. A few examples are given in section~\ref{sect:examples}. Finally, section~\ref{sect:conclusions} gives our conclusions. \section{Correlation functions} \label{sect:correlation_function} Fundamental objects in quantum field theory are the $n$-point correlation functions. They are given in the path-integral formalism by \begin{eqnarray} G_n\left(x_1, \dots, x_n \right) & = & \frac{\int {\mathcal D} \phi \; {\phi}(x_1) ... {\phi}(x_n) \; \exp\left(i S\right)}{\int {\mathcal D} \phi \; \exp\left(i S \right)}. \nonumber \end{eqnarray} The essential information is given by the path-integral in the numerator; the denominator only provides the normalisation. In this article we study the lattice version of integrals of the form \begin{eqnarray} \label{path_integral} \int {\mathcal D} \phi \; \left[{\phi}(x_1)\right]^{\nu_1} ... \left[{\phi}(x_n)\right]^{\nu_n} \; \exp\left(i S\right), \;\;\;\;\;\; \nu_j \; \in \; {\mathbb N}_0. \end{eqnarray} We use lattice regularisation to convert the infinite-dimensional path integral to a finite-di\-men\-sion\-al integral. We are interested in determining linear relations among these lattice integrals. We do this with the help of methods related to twisted cocycles. As we employ lattice regularisation, all our results are valid for finite coupling and we never use perturbation theory in this article.
We illustrate the essential points for a scalar theory with Lagrangian \begin{eqnarray} {\mathcal L} & = & \frac{1}{2} \left( \partial_\mu \phi \right) \left( \partial^\mu \phi \right) - \sum\limits_{j=2}^{j_{\mathrm{max}}} \frac{\lambda_j}{j!} \phi^j, \end{eqnarray} with $\lambda_j \ge 0$ for $2 \le j \le j_{\mathrm{max}}$ and $\lambda_{j_{\mathrm{max}}} \neq 0$. We call $\lambda_{j_{\mathrm{max}}}$ the leading coupling. Of particular interest are the cases $j_{\mathrm{max}} \in \{2,3,4\}$. We call $j_{\mathrm{max}}=4$ a $\phi^4$-theory, $j_{\mathrm{max}}=3$ a $\phi^3$-theory, and $j_{\mathrm{max}}=2$ a $\phi^2$-theory. The latter is of course a trivial free theory. \section{Lattice formulation} \label{sect:lattice} As is standard in lattice formulations, we continue to Euclidean time. We denote by $D \in {\mathbb N}$ the number of space-time dimensions. A lattice $\Lambda$ with lattice spacing $a$ is specified by a $D$-tuple $(N_0, N_1, \dots, N_{D-1})$, where $N_\mu$ gives the number of lattice points in the $\mu$-th direction. Often it will be convenient to take the same number of points in every direction: $N_0=\dots=N_{D-1}=L$. We assume periodic boundary conditions. The lattice has \begin{eqnarray} N & = & \prod\limits_{\mu=0}^{D-1} N_\mu \end{eqnarray} lattice points. A lattice point is specified by a $D$-tuple \begin{eqnarray} x & = & \left( j_0, j_1, \dots, j_{D-1} \right), \;\;\;\;\;\; 0 \; \le \; j_i \; < \; N_i. \end{eqnarray} We introduce dimensionless variables \begin{align} \hat{\phi} & = a^{\frac{D-2}{2}} \phi, & \hat{\lambda}_j & = a^{2-\frac{\left(j-2\right)\left(D-2\right)}{2}} \lambda_j. \end{align} The field at a lattice point is denoted by \begin{eqnarray} \hat{\phi}_x & = & \hat{\phi}_{j_0, j_1, \dots, j_{D-1}}.
\end{eqnarray} The Euclidean lattice action $S_E$ is a polynomial in the $N$ variables $\hat{\phi}_x$ and is given by \begin{eqnarray} S_E & = & \sum\limits_{x \in \Lambda} \left( - \sum\limits_{\mu=0}^{D-1} \hat{\phi}_x \hat{\phi}_{x+a e_\mu} + D \hat{\phi}_x^2 + \sum\limits_{j=2}^{j_{\mathrm{max}}} \frac{\hat{\lambda}_j}{j!} \hat{\phi}_x^j \right). \end{eqnarray} $\hat{\phi}_{x+a e_\mu}$ denotes the field at the next lattice point in the (positive) $\mu$-direction, with the index taken modulo $N_\mu$. On the lattice, the path integral of eq.~(\ref{path_integral}) becomes the $N$-fold integral \begin{eqnarray} \label{lattice_integral} I{\footnotesize \left(\begin{array}{ccc} \nu_1, & \dots, & \nu_n \\ x_1, & \dots, & x_n \end{array} \right)} & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x_1}^{\nu_1} \dots \hat{\phi}_{x_n}^{\nu_n} \; \exp\left(-S_E\right). \end{eqnarray} This is a finite-dimensional integral. Let us now discuss the integration contour ${\mathcal C}^N$. We require that each field variable $\hat{\phi}_x$ is integrated along the same curve ${\mathcal C}$ in ${\mathbb C}$ and that the integrand vanishes at the boundary. In the case where $j_{\mathrm{max}}$ is an even number we may take the standard integration contour along the real axis from minus infinity to plus infinity. However, for $j_{\mathrm{max}}$ odd this is not possible, as the potential is not bounded from below for real field values. We may get around this by taking an integration contour for a single field which approaches, for example, $\arg \hat{\phi} = 2 \pi /j_{\mathrm{max}}$ and $\arg \hat{\phi} = 0$ at the boundaries. For $j_{\mathrm{max}}=3$ this is illustrated in fig.~\ref{fig1}. \begin{figure} \begin{center} \includegraphics[scale=1.0]{fig1} \end{center} \caption{ A possible integration contour for $\phi^3$-theory. The asymptotic values are $\arg \hat{\phi} = 2 \pi /3$ and $\arg \hat{\phi} = 0$.
} \label{fig1} \end{figure} In general there are $(j_{\mathrm{max}}-1)$ independent integration contours, specified by the asymptotic values $\arg \hat{\phi} = 2 \pi j /j_{\mathrm{max}}$ and $\arg \hat{\phi} = 0$ with $1 \le j < j_{\mathrm{max}}$. Throughout this article we keep the integration contour fixed. The precise form of the integration contour does not matter; the only requirement is that the integrand vanishes on the boundary of the integration contour. Let us note that the integrand is a holomorphic function of the $N$ variables $\hat{\phi}_x$ on ${\mathbb C}^N$. \section{Twisted cocycles} \label{sect:twisted_cocycles} Let us now reformulate the lattice integrals in the language of twisted cocycles. We define a function $u$, a one-form $\omega$, and an $N$-form $\Phi$ by \begin{eqnarray} \label{def_twisted} u & = & \exp\left(-S_E\right), \nonumber \\ \omega & = & d \ln u \;= \; -d S_E \; = \; \sum\limits_{x \in \Lambda} \omega_x d\hat{\phi}_x, \nonumber \\ \Phi & = & \hat{\phi}_{x_1}^{\nu_1} \dots \hat{\phi}_{x_n}^{\nu_n} \; d^N\hat{\phi}. \end{eqnarray} In terms of these quantities, the integral in eq.~(\ref{lattice_integral}) can be written as \begin{eqnarray} I{\footnotesize \left(\begin{array}{ccc} \nu_1, & \dots, & \nu_n \\ x_1, & \dots, & x_n \end{array} \right)} & = & \int\limits_{{\mathcal C}^N} u \; \Phi. \end{eqnarray} The one-form $\omega$ defines a covariant derivative $\nabla_\omega=d+\omega$. By assumption, the integrand vanishes on the boundary of the integration contour. We therefore have the integration-by-parts identities \begin{eqnarray} 0 & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \frac{\partial}{\partial \hat{\phi}_x} \left[ \hat{\phi}_{x_1}^{\nu_1} \dots \hat{\phi}_{x_n}^{\nu_n} \; \exp\left(-S_E\right) \right].
\end{eqnarray} In terms of $\Phi$ this translates to the statement that the integral is invariant under transformations \begin{eqnarray} \label{equivalence_relation} \Phi' & = & \Phi + \nabla_\omega \Xi, \end{eqnarray} for any $(N-1)$-form $\Xi$. In addition, $\Phi$ is obviously $\nabla_\omega$-closed. We define the twisted cohomology group $H_\omega^N$ as the space of $\nabla_\omega$-closed $N$-forms modulo exact ones. We denote the equivalence classes by $\langle \Phi |$ and refer to these as twisted cocycles. In a similar way we denote the integration cycle by $| {\mathcal C}^N \rangle$ and refer to it as a twisted cycle. Our original integral is then written as \begin{eqnarray} \left\langle \Phi \left| {\mathcal C}^N \right. \right\rangle & = & \int\limits_{{\mathcal C}^N} u \; \Phi \; = \; \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x_1}^{\nu_1} \dots \hat{\phi}_{x_n}^{\nu_n} \; \exp\left(-S_E\right). \nonumber \end{eqnarray} The essential facts about twisted cohomology groups are \cite{aomoto1975,cho1995,Aomoto:book}: (i) the twisted cohomology groups $H^N_\omega$ are finite-dimensional and (ii) there is a non-degenerate inner product between $H^N_\omega$ and the dual twisted cohomology group $(H^N_\omega)^\ast=H^N_{-\omega}$ given by the intersection number. Let $\langle e_1 |, \dots, \langle e_B |$ be a basis of $H^N_\omega$ and let $| d_1 \rangle, \dots, | d_B \rangle$ be the dual basis of $(H^N_\omega)^\ast$, satisfying \begin{eqnarray} \left\langle e_i | d_j \right\rangle & = & \delta_{i j}. \end{eqnarray} We may express $\langle \Phi |$ as a linear combination of the basis $\langle e_i |$: \begin{eqnarray} \label{reduction_to_basis} \left\langle \Phi \right| & = & \sum\limits_{i=1}^B c_i \left\langle e_i \right|, \end{eqnarray} where the coefficients $c_i$ are given by the intersection numbers \begin{eqnarray} \label{intersection_numbers} c_i & = & \left\langle \Phi \left| d_i \right. \right\rangle.
\end{eqnarray} Thus we may write our lattice integral as a linear combination of basis integrals: \begin{eqnarray} \left\langle \Phi \left| {\mathcal C}^N \right. \right\rangle & = & \sum\limits_{i=1}^B c_i \left\langle e_i \left| {\mathcal C}^N \right. \right\rangle. \end{eqnarray} \section{Dimensions and bases of twisted cohomology groups} \label{sect:cohomology} We now determine the dimensions and bases of the twisted cohomology groups. Let $x_1, \dots, x_N$ label the lattice points. We consider the ideal \begin{eqnarray} J & = & \left\langle \omega_{x_1}, \dots, \omega_{x_N} \right\rangle. \end{eqnarray} This is a zero-dimensional ideal and the dimension of the vector space \begin{eqnarray} {\mathbb C}^N / J \end{eqnarray} is finite. In addition, there is an isomorphism between a basis of ${\mathbb C}^N / J$ and a basis of $H^N_\omega$. We may construct a monomial basis of ${\mathbb C}^N / J$ as follows: Let $G=\langle g_1, \dots, g_s \rangle$ be a Gr\"obner basis of $J$. Then a monomial basis of ${\mathbb C}^N / J$ is given by all monomials not divisible by any $\mathrm{lt}(g_i)$. Multiplying the monomials by $d^N\hat{\phi}$ gives a basis of $H^N_\omega$. For the case of $\phi^{j_{\mathrm{max}}}$-theory we find \begin{eqnarray} B & = & \dim H^N_\omega \; = \; \left(j_{\mathrm{max}}-1\right)^N, \end{eqnarray} e.g. \begin{eqnarray} \phi^2\mbox{-theory}: & & \dim H^N_\omega \; = \; 1, \nonumber \\ \phi^3\mbox{-theory}: & & \dim H^N_\omega \; = \; 2^N, \nonumber \\ \phi^4\mbox{-theory}: & & \dim H^N_\omega \; = \; 3^N. \end{eqnarray} A basis of $H^N_\omega$ is given by \begin{eqnarray} \label{def_basis} \left\langle e_i \right| \; : \;\;\; \left( \prod\limits_{k=1}^N \hat{\phi}_{x_k}^{\nu_k} \right) d^N\hat{\phi}, & & 0 \; \le \; \nu_k \; \le \; j_{\mathrm{max}}-2. \end{eqnarray} For $\phi^2$-theory the basis consists of one element \begin{eqnarray} \left\langle e_1 \right| & = & 1 \cdot d^N\hat{\phi}. 
\end{eqnarray} This is not surprising: $\phi^2$-theory is a free theory and all integrals can be reduced by integration-by-parts identities to a single Gaussian integral. For $j_{\mathrm{max}} > 2$ the generators $\omega_{x_1}, \dots, \omega_{x_N}$ of the ideal $J$ are already a Gr\"obner basis with respect to degree lexicographical ordering or degree reverse lexicographical ordering. $\omega_{x_i}$ is given by \begin{eqnarray} \omega_{x_i} & = & - \frac{\hat{\lambda}_{j_{\mathrm{max}}}}{\left(j_{\mathrm{max}}-1\right)!} \hat{\phi}_{x_i}^{j_{\mathrm{max}}-1} + ..., \end{eqnarray} where the dots stand for terms of lower total degree. \section{Reduction to master integrals} \label{sect:reduction} Given the basis in eq.~(\ref{def_basis}), we would like to express an arbitrary differential $N$-form $\Phi$ of the form given in eq.~(\ref{def_twisted}) as a linear combination of the basis, as in eq.~(\ref{reduction_to_basis}). In principle this can be done by computing the intersection numbers in eq.~(\ref{intersection_numbers}). However, this is impractical. Our main interest is the application where the number of lattice points $N$ is large and the number of field insertions $n$ in eq.~(\ref{lattice_integral}) is small. In this case it is more convenient to use eq.~(\ref{equivalence_relation}) repeatedly \cite{Weinzierl:2020xyy}. For the interacting theories ($j_{\mathrm{max}}>2$) we may reduce the power $\nu_i \ge (j_{\mathrm{max}}-1)$ of a field at a space-time point $x_i$ as follows: Let \begin{eqnarray} \Phi & = & \hat{\phi}_{x_i}^{\nu_i} \left( \prod\limits_{\stackrel{k=1}{k \neq i}}^N \hat{\phi}_{x_k}^{\nu_k} \right) d^N\hat{\phi} \end{eqnarray} be a representative of $\langle \Phi |$ and consider the $(N-1)$-form \begin{eqnarray} \Xi & = & \frac{\left(j_{\mathrm{max}}-1\right)!}{\hat{\lambda}_{j_{\mathrm{max}}}} \hat{\phi}_{x_i}^{\nu_i-j_{\mathrm{max}}+1} \prod\limits_{\stackrel{k=1}{k \neq i}}^N \hat{\phi}_{x_k}^{\nu_k} d\hat{\phi}_{x_k}.
\end{eqnarray} Then \begin{eqnarray} \Phi' & = & \Phi + \nabla_\omega \Xi \end{eqnarray} has at most the power $\nu_i-1$ at the space-time point $x_i$. The powers of the fields at other space-time points may be increased by one. However, the total degree of all fields decreases, and therefore the algorithm terminates. \section{Examples} \label{sect:examples} As a simple example let us consider massless $\phi^4$-theory in $D$ space-time dimensions. We take $\hat{\lambda}_2=\hat{\lambda}_3=0$ and $\hat{\lambda}_4\neq 0$. Consider \begin{eqnarray} I{\footnotesize \left(\begin{array}{c} 4 \\ x \end{array} \right)} & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x}^{4} \; \exp\left(-S_E\right). \end{eqnarray} As the integration contour ${\mathcal C}$ we take the real axis. At the lattice point $x$ the field $\hat{\phi}_{x}$ occurs to power $4$. In $\phi^4$-theory on the lattice, integration-by-parts identities allow us to express this correlation function as a linear combination of correlation functions where at each lattice point the field occurs at most to power two.
Reducing the integral above to master integrals we find \begin{eqnarray} I{\footnotesize \left(\begin{array}{c} 4 \\ x \end{array} \right)} & = & \frac{6}{\hat{\lambda}_4} \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \left[ 1 + \hat{\phi}_{x} \sum\limits_{\mu=0}^{D-1} \left( \hat{\phi}_{x+a e_\mu} + \hat{\phi}_{x-a e_\mu} - 2 \hat{\phi}_{x} \right) \right] \exp\left(-S_E\right), \end{eqnarray} or \begin{eqnarray} \label{result_example_1} I{\footnotesize \left(\begin{array}{c} 4 \\ x \end{array} \right)} & = & \frac{6}{\hat{\lambda}_4} \left\{ I{\footnotesize \left(\vphantom{\begin{array}{c} 4 \\ x \end{array}} \right)} + \sum\limits_{\mu=0}^{D-1} \left[ I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & x+a e_\mu \end{array} \right)} + I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & x-a e_\mu \end{array} \right)} - 2 I{\footnotesize \left(\begin{array}{c} 2 \\ x \end{array} \right)} \right] \right\}. \end{eqnarray} Here, $I()$ denotes the integral \begin{eqnarray} I\left(\right) & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \exp\left(-S_E\right). \end{eqnarray} As a second example we consider \begin{eqnarray} I{\footnotesize \left(\begin{array}{cc} 1, & 3 \\ x, & y \end{array} \right)} & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x} \hat{\phi}_{y}^{3} \; \exp\left(-S_E\right). \end{eqnarray} In order to exclude degenerate cases we assume that $x$ and $y$ are separated by at least two lattice units. 
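Relations of this type are straightforward to check numerically. A minimal sketch of such a check for eq.~(\ref{result_example_1}), on a tiny lattice with $D=1$, $N=3$ and $\hat{\lambda}_4=1$, using brute-force quadrature instead of the Monte Carlo integration one would use on larger lattices:

```python
import numpy as np

# Check eq. (result_example_1) for massless phi^4-theory on a D=1 lattice with
# N=3 points and periodic boundary conditions (lambda_2 = lambda_3 = 0).
# The contour is the real axis; the truncation to [-6,6]^3 is harmless since
# the integrand is negligible there.
lam4 = 1.0
t = np.linspace(-6.0, 6.0, 121)
h = t[1] - t[0]
p0, p1, p2 = np.meshgrid(t, t, t, indexing="ij")

# Euclidean lattice action: S_E = sum_x ( -phi_x phi_{x+1} + phi_x^2 + lam4/24 phi_x^4 )
S = (-(p0 * p1 + p1 * p2 + p2 * p0)
     + p0**2 + p1**2 + p2**2
     + (lam4 / 24.0) * (p0**4 + p1**4 + p2**4))
w = np.exp(-S)

def I(f=1.0):
    """I(...) = int d^3 phi  f * exp(-S_E), by a simple Riemann sum."""
    return h**3 * np.sum(f * w)

# eq. (result_example_1) with x = lattice point 0, whose two neighbours are 1 and 2
lhs = I(p0**4)
rhs = (6.0 / lam4) * (I() + I(p0 * p1) + I(p0 * p2) - 2.0 * I(p0**2))
assert abs(lhs - rhs) < 1e-8 * abs(lhs)
```

The grid spacing and truncation range above are illustrative choices; the relation itself holds exactly for any valid contour.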
The reduction to master integrals yields \begin{eqnarray} I{\footnotesize \left(\begin{array}{cc} 1, & 3 \\ x, & y \end{array} \right)} & = & \frac{6}{\hat{\lambda}_4} \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x} \left[ \sum\limits_{\mu=0}^{D-1} \left( \hat{\phi}_{y+a e_\mu} + \hat{\phi}_{y-a e_\mu} - 2 \hat{\phi}_{y} \right) \right] \exp\left(-S_E\right), \end{eqnarray} or \begin{eqnarray} \label{result_example_2} I{\footnotesize \left(\begin{array}{cc} 1, & 3 \\ x, & y \end{array} \right)} & = & \frac{6}{\hat{\lambda}_4} \sum\limits_{\mu=0}^{D-1} \left[ I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y+a e_\mu \end{array} \right)} + I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y-a e_\mu \end{array} \right)} - 2 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y \end{array} \right)} \right]. \end{eqnarray} Please note that the inverse coupling $\hat{\lambda}_4^{-1}$ appears as a prefactor on the right-hand side of eq.~(\ref{result_example_1}) and eq.~(\ref{result_example_2}). As a third and more involved example we consider \begin{eqnarray} I{\footnotesize \left(\begin{array}{cc} 1, & 9 \\ x, & y \end{array} \right)} & = & \int\limits_{{\mathcal C}^N} d^N\hat{\phi} \; \hat{\phi}_{x} \hat{\phi}_{y}^{9} \; \exp\left(-S_E\right). \end{eqnarray} In order to exclude degenerate cases we assume that $x$ and $y$ are separated by at least three lattice units. Furthermore, in order to keep the final expressions to a reasonable size, we present here results for $D=1$ space-time dimensions. Results for $D>1$ are easily obtained, but yield longer expressions. 
The reduction to master integrals for $D=1$ yields \begin{eqnarray} \label{result_example_3} \lefteqn{ I{\footnotesize \left(\begin{array}{cc} 1, & 9 \\ x, & y \end{array} \right)} = \frac{1296}{\hat{\lambda}_4^4} \left[ 18 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y \end{array} \right)} - 10 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y+a e_0 \end{array} \right)} - 10 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y-a e_0 \end{array} \right)} \right. } & & \nonumber \\ & & \left. + I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y+2 a e_0 \end{array} \right)} + I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y-2 a e_0 \end{array} \right)} \right] \nonumber \\ & & + \frac{648}{\hat{\lambda}_4^3} \left[ 16 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y \end{array} \right)} - 8 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y+a e_0 \end{array} \right)} - 8 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y-a e_0 \end{array} \right)} + 4 I{\footnotesize \left(\begin{array}{ccc} 1, & 2, & 1 \\ x, & y, & y+a e_0 \end{array} \right)} \right. \nonumber \\ & & \left. + 4 I{\footnotesize \left(\begin{array}{ccc} 1, & 2, & 1 \\ x, & y, & y-a e_0 \end{array} \right)} - 2 I{\footnotesize \left(\begin{array}{ccc} 1, & 1, & 2 \\ x, & y, & y+a e_0 \end{array} \right)} - 2 I{\footnotesize \left(\begin{array}{ccc} 1, & 1, & 2 \\ x, & y, & y-a e_0 \end{array} \right)} \right. \nonumber \\ & & \left. 
+ I{\footnotesize \left(\begin{array}{ccc} 1, & 2, & 1 \\ x, & y-a e_0, & y+a e_0 \end{array} \right)} + I{\footnotesize \left(\begin{array}{ccc} 1, & 1, & 2 \\ x, & y-a e_0, & y+a e_0 \end{array} \right)} - 4 I{\footnotesize \left(\begin{array}{cccc} 1, & 1, & 1, & 1 \\ x, & y-a e_0, & y, & y+a e_0 \end{array} \right)} \right] \nonumber \\ & & + \frac{108}{\hat{\lambda}_4^2} \left[ 4 I{\footnotesize \left(\begin{array}{cc} 1, & 1 \\ x, & y \end{array} \right)} + 3 I{\footnotesize \left(\begin{array}{ccc} 1, & 2, & 1 \\ x, & y, & y+a e_0 \end{array} \right)} + 3 I{\footnotesize \left(\begin{array}{ccc} 1, & 2, & 1 \\ x, & y, & y-a e_0 \end{array} \right)} \right]. \end{eqnarray} All relations have been verified numerically by Monte Carlo integrations. \section{Conclusions} \label{sect:conclusions} In this paper we investigated linear relations among correlation functions on a lattice, which have their origin in integration-by-parts identities. We formulated the problem in terms of twisted cocycles. We may think of a twisted cocycle as the integrand of a lattice correlation function without the common factor $\exp(-S_E)$ and defined up to terms which vanish by integration-by-parts identities. Mathematically, a twisted cocycle is a twisted cohomology class. For a scalar theory we determined the dimension and a basis of the twisted cohomology group. In particular, we showed that for a $\phi^{j_{\mathrm{max}}}$-theory we may express any lattice correlation function as a linear combination of lattice correlation functions, where at each lattice point the field $\hat{\phi}$ occurs to at most the power $(j_{\mathrm{max}}-2)$. There is no obstruction in principle to extending the analysis to Yang-Mills theory on the lattice. However, in practice the determination of the dimension and of a basis for the twisted cohomology groups will be more challenging. 
For a scalar theory we profited from the fact that a Gr\"obner basis for the ideal $J$ is easily found with respect to degree lexicographical ordering or degree reverse lexicographical ordering. This can be traced back to the leading coupling term in the potential $\frac{\lambda_{j_{\mathrm{max}}}}{j_{\mathrm{max}}!} \phi^{j_{\mathrm{max}}}$. For Yang-Mills theory on the lattice, the Euclidean action is given as a sum over plaquettes and a plaquette expands into a product of fields at neighbouring lattice points, and not at the same lattice point. This makes the determination of the dimension and of a basis for the twisted cohomology groups more challenging. \subsection*{Acknowledgements} I would like to thank the anonymous referee for bringing references \cite{Schwarz:2008sa,Albert:2008ui,Gwilliam:2012jg,JohnsonFreyd:2012ww} to my attention, where ideas of homological perturbation theory in the context of the Batalin–Vilkovisky formalism are discussed. As this article provides a bridge between Feynman integrals and correlation functions on a lattice, we may transfer ideas of homological perturbation theory to integration-by-parts reduction of Feynman integrals. This will be an interesting project for the future.
\section{Introduction} \label{s:intro} A much studied problem in Riemannian geometry asks to what degree a Riemannian manifold is determined by its length spectrum, that is, the set of lengths of its closed geodesics. It is known that the length spectrum does not in general recover the metric, but more refined conjectures and results exist, see for example \cite{Croke:1990tk, Otal:1990yv, Guillarmou:2018qo} and references therein. In contact geometry, an analogous question exists, but little is known. Recall that a contact form on a closed $(2n+1)$-manifold $Y$ is a 1-form $\lambda$ such that $\lambda\wedge(\mathrm{d}\lambda)^n$ is a volume form on $Y$. The kernel of $\mathrm{d}\lambda$ is then generated by a unique vector field $R_\lambda$ such that $\lambda(R_\lambda)\equiv1$, called the Reeb vector field, which defines a Reeb flow $\phi_\lambda^t:Y\to Y$. A Reeb orbit $\gamma:\mathds{R}\to Y$, $\gamma(t)=\phi_\lambda^t(z)$ is said to be closed if it is $\tau$-periodic for some $\tau>0$, i.e.\ $\gamma(t)=\gamma(t+\tau)$ for all $t\in\mathds{R}$. As usual, the minimal period of a closed Reeb orbit $\gamma$ is the minimal $\tau>0$ such that $\gamma$ is $\tau$-periodic; the multiples of such $\tau$ will be simply called periods of $\gamma$. The subset $\sigma(Y,\lambda)\subset(0,\infty)$ consisting of the (not necessarily minimal) periods of the closed Reeb orbits of $\phi_\lambda^t$ is the \textbf{action spectrum} of the contact manifold, whereas its subset $\sigma_{\mathrm{p}}(Y,\lambda)\subsetneq\sigma(Y,\lambda)$ consisting of the minimal periods of the closed Reeb orbits of $\phi_\lambda^t$ is the \textbf{prime action spectrum}. One can now ask to what degree we can characterize $\lambda$ from its action and prime action spectra. In the present note we establish some positive results in dimension~3. \subsection{Setup and main results} A contact form $\lambda$ is called \textbf{Besse} when every orbit of its Reeb flow is closed. 
Our first result states that one can recognize whether a contact form on a closed connected $3$-manifold is Besse from its action spectrum. We define the \textbf{rank} of the action spectrum $\sigma(Y,\lambda)$ to be the rank of the $\mathbb{Z}$-submodule of $\mathbb{R}$ that it generates (this is the same as the rank of the submodule generated by the prime action spectrum $\sigma_{\mathrm{p}}(Y,\lambda)$). In particular, $\sigma(Y,\lambda)$ has rank 1 if and only if it is contained in a subset of the form $\{nT\ |\ n\in\mathds{N}\}$ for some $T>0$. \begin{thm} \label{t:Besse} Let $(Y,\lambda)$ be a closed connected 3-manifold equipped with a contact form. The following conditions are equivalent: \begin{itemize}[topsep=3pt] \item[$(\mathrm{i})$] The contact manifold $(Y,\lambda)$ is Besse. \item[$(\mathrm{ii})$] The closed orbits of the Reeb flow $\phi_\lambda^t$ have a common period, i.e.\ there is $\tau>0$ such that $\tau/\tau'$ is an integer for all $\tau'\in \sigma_{\mathrm{p}}(Y,\lambda)$. \item[$(\mathrm{iii})$] The action spectrum $\sigma(Y,\lambda)$ has rank 1. \end{itemize} \end{thm} The fact that the closed Reeb orbits of a Besse contact manifold admit a common period, and thus that the action spectrum has rank 1, is a consequence of a classical theorem due to Wadsley \cite{Wadsley:1975sp}, together with Sullivan's remark \cite{Sullivan:1978bl} that Reeb flows are geodesible. The novelty, here, is the reverse implication, namely that the fact that the action spectrum has rank 1 forces a contact form to be Besse. A contact form $\lambda$ is called {\bf Zoll} when it is Besse and its closed Reeb orbits have the same minimal period. Namely, when there exists $\tau>0$ such that $\phi_\lambda^\tau=\mathrm{id}$, and for all $t\in(0,\tau)$ the map $\phi_\lambda^t$ has no fixed points. Theorem~\ref{t:Besse} has the following immediate corollary. 
\begin{cor} \label{c:Zoll} A closed contact 3-manifold is Zoll if and only if its closed Reeb orbits have the same minimal period. \hfill\qed \end{cor} \begin{rem} \label{r:answer} In \cite[Question 1.2]{Mazzucchelli:2018ek}, the second author and Suhr asked whether a reversible contact form on the unit cotangent bundle of any surface must be Zoll if all its closed Reeb orbits have the same minimal period. (The motivation for this comes from the connection between the contact geometry of the unit cotangent bundle and Riemannian and Finsler geometry, which we say more about below.) Corollary~\ref{c:Zoll} answers this in the affirmative, and without the reversibility requirement on the contact form. \end{rem} To the best of the authors' knowledge, for general higher-dimensional closed contact manifolds it is not known whether the Besse or the Zoll properties can be read off from the action spectra. \begin{quest} Let $(Y,\lambda)$ be a closed contact manifold of dimension $n\geq5$. If all its closed Reeb orbits have the same minimal period, is $\lambda$ necessarily Zoll? If $Y$ is connected and the action spectrum $\sigma(Y,\lambda)$ has rank 1, is $\lambda$ necessarily Besse? \end{quest} By Theorem~\ref{t:Besse}, from the action spectrum one can determine whether or not a contact form on a closed connected 3-manifold is Besse. However, it is not possible to recover the contact form (up to pull-back by diffeomorphisms) from the action spectrum in the Besse case. For example, the standard 1-form $\lambda_{\mathrm{std}} = \frac{1}{2} \sum_{i=1,2} \big(x_i dy_i - y_i dx_i\big)$ on $\mathds{R}^4$ restricts as a contact form to the boundary of any symplectic ellipsoid \[ E(a,b) := \left \lbrace \frac{ \pi |z_1|^2}{a} + \frac{ \pi |z_2|^2}{b} \le 1 \right \rbrace \subset \mathds{C}^2 = \mathds{R}^4.\] Its Reeb flow always has two closed orbits of minimal periods $a$ and $b$. 
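Explicitly (a standard computation; the overall sign depends on the chosen conventions), the Reeb vector field of $\lambda_{\mathrm{std}}$ on $\partial E(a,b)$ coincides with the Hamiltonian vector field of $H(z_1,z_2)=\pi|z_1|^2/a+\pi|z_2|^2/b$, so the Reeb flow is the rotation

```latex
% Reeb flow on the ellipsoid boundary \partial E(a,b):
\phi_{\lambda_{\mathrm{std}}}^{t}(z_1,z_2)
  \;=\; \big( e^{2\pi i t/a}\, z_1,\; e^{2\pi i t/b}\, z_2 \big).
% The circles \{z_2=0\} and \{z_1=0\} are the two closed orbits of minimal
% periods a and b; every other orbit is closed if and only if b/a is rational.
```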
When $b/a$ is rational, the contact form is Besse and the other closed Reeb orbits have minimal period $\mathrm{lcm}(a, b)$. Thus, $\partial E(1,1)$ and $\partial E(1,2)$ have the same action spectrum, but their contact forms cannot be diffeomorphic. We can distinguish these ellipsoids, however, through the prime action spectrum. Indeed, our next theorem states that, in the Besse case, the prime action spectrum always determines the contact form up to pull-back by diffeomorphisms. \begin{thm}\label{t:classification} Let $Y$ be a closed connected 3-manifold, and $\lambda_1,\lambda_2$ two Besse contact forms on $Y$. Then $\sigma_{\mathrm{p}}(Y,\lambda_1)=\sigma_{\mathrm{p}}(Y,\lambda_2)$ if and only if there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi^*\lambda_2=\lambda_1$. \end{thm} In the Zoll case, Theorem~\ref{t:classification} was proved by Abbondandolo et al.\ \cite{Abbondandolo:2017xz, Abbondandolo:2018fb} for $S^3$ and $\mathrm{SO}(3)$, and by Benedetti-Kang \cite[Lemma~2.3]{Benedetti:2018ys} for general $S^1$-bundles over closed surfaces. \begin{rem} \label{r:nottrue} Theorems~\ref{t:classification} and~\ref{t:Besse} in combination provide a spectral recognition result: the contact form of a fixed closed connected 3-manifold can be recovered from its prime action spectrum, provided its action spectrum has rank 1. In higher rank, however, the same cannot in general be done. For example, in \cite[Theorem 1.2]{Albers:2018rq} Albers-Geiges-Zehmisch construct a contact form $\lambda$ on $S^3$ whose Reeb flow has a dense orbit and only two closed orbits. The minimal periods $ a$ and $b$ of these two orbits are rationally independent. So, the action spectrum $\sigma(S^3,\lambda)$ is the same as $\sigma(\partial E(a,b),\lambda_{\mathrm{std}})$, but there is no diffeomorphism $\psi:S^3\to \partial E(a,b)$ such that $\lambda=\psi^*\lambda_{\mathrm{std}}$. 
\end{rem} \subsection{Finsler geometry} Theorem~\ref{t:Besse} and Corollary~\ref{c:Zoll} apply in particular to Finsler geodesic flows of 2-spheres. We recall that a Finsler metric on a closed manifold $M$ is a continuous function $F:\mathrm{T} M\to[0,\infty)$ that is smooth outside the $0$-section, fiberwise positively homogeneous of degree $1$, and such that $\partial_{vv}F^2(x,v)$ is positive definite at every point $(x,v)$ outside the $0$-section. The Finsler metric $F$ is reversible when $F(x,v)=F(x,-v)$ for all $(x,v)\in\mathrm{T} M$, and Riemannian when it is of the form $F(x,v)=g_x(v,v)^{1/2}$ for some Riemannian metric $g$ on $M$. The geodesic flow of $(M,F)$ is precisely the Reeb flow of $(SM,\lambda)$, where $\pi:SM\to M$ is the $F$-unit tangent bundle of $M$ and $\lambda$ is the Liouville form $\lambda_{(x,v)}(w)=\partial_vF(x,v)\mathrm{d}\pi(x,v)w$. The action spectrum $\sigma(SM,\lambda)$ is the usual length spectrum of $(M,F)$, and is denoted by $\sigma(M,F)$. The Finsler metric $F$ is Besse or Zoll if the associated Liouville form $\lambda$ is so. In \cite{Mazzucchelli:2018ek}, the second author and Suhr established (a slightly stronger version of) Corollary~\ref{c:Zoll} for geodesic flows of Riemannian 2-spheres. Theorem~\ref{t:Besse} actually implies the following more general corollaries for Finsler geodesic flows of surfaces. \begin{cor} \label{c:finsler} Let $(M,F)$ be a closed connected orientable Finsler surface. The length spectrum $\sigma(M,F)$ has rank 1 if and only if $M=S^2$ and $F$ is Besse. Moreover, if $F$ is reversible, the length spectrum $\sigma(M,F)$ has rank 1 if and only if $M=S^2$ and $F$ is Zoll. \end{cor} \begin{rem} The reversibility assumption in the second part of this statement is essential. Indeed, certain of the so-called Katok's metrics on the 2-sphere \cite{Ziller:1983rw} are examples of non-reversible Finsler metrics that are Besse but not Zoll. 
\end{rem} \begin{proof}[Proof of Corollary~\ref{c:finsler}] The fact that the length spectrum of a closed connected Finsler surface has rank 1 if and only if the metric is Besse follows from Theorem~\ref{t:Besse}. A theorem due to Frauenfelder-Labrousse-Schlenk \cite{Frauenfelder:2015sn}, which extends the classical Bott-Samelson Theorem \cite{Bott:1954aa, Samelson:1963aa} from Riemannian geometry, implies that $F$ can be Besse only if the fundamental group of $M$ is finite and the integral cohomology ring of the universal cover of $M$ agrees with that of a compact rank-one symmetric space. The only closed orientable surface $M$ with these properties is $S^2$. Finally, a Besse reversible Finsler metric on $S^2$ is Zoll according to a theorem of Frauenfelder-Lange-Suhr \cite{Frauenfelder:2016ud}, which generalizes the classical Riemannian result of Gromoll-Grove \cite{Gromoll:1981kl}. \end{proof} \begin{cor} \label{c:RP2} Let $(M,F)$ be a closed connected non-orientable Finsler surface. The length spectrum $\sigma(M,F)$ has rank 1 if and only if $M=\mathds{R}\mathds{P}^2$ and $F$ is Besse. Moreover, if $F$ is Riemannian, the length spectrum $\sigma(M,F)$ has rank 1 if and only if $M=\mathds{R}\mathds{P}^2$ and $F$ is Riemannian with constant curvature (in particular, $F$ is Zoll). \end{cor} \begin{proof} Let $M'$ be the orientation double cover of $M$, and $F':\mathrm{T} M'\to[0,\infty)$ the lift of $F$. By Corollary~\ref{c:finsler}, $F'$ is Besse if and only if $\sigma(M',F')$ has rank 1 and $M'=S^2$. Notice that $M'=S^2$ if and only if $M=\mathds{R}\mathds{P}^2$. The length spectra satisfy $\sigma(M',F')\subseteq\sigma(M,F)$ and $2\sigma(M,F)\subseteq\sigma(M',F')$; in particular, $\sigma(M',F')$ has rank 1 if and only if the same is true for $\sigma(M,F)$. Moreover, $F'$ is Besse if and only if the same is true for $F$. This proves the first part of the statement. 
Finally, a Riemannian metric on $\mathds{R}\mathds{P}^2$ is Besse if and only if it has constant curvature, according to a theorem of Pries \cite{Pries:2009aa}. \end{proof} \subsection{Relationship with previous work and organization of the paper} A corollary of Theorem~\ref{t:Besse} is that any contact form on a closed 3-manifold has at least two distinct closed embedded Reeb orbits. This was previously proved by the first author and Hutchings \cite{Cristofaro-Gardiner:2016rp} using embedded contact homology. Our proof of Theorem~\ref{t:Besse} uses a similar method; the main difference here is a strengthening of one of the key lemmas in that paper, see our Lemma~\ref{l:Besse_spectral} below. In contrast, the proof of Theorem~\ref{t:classification} does not require embedded contact homology, but instead makes use of the classification of Seifert fibered spaces, in combination with a Moser trick in Lemma~\ref{l:Moser_trick}. The paper is organized as follows. In Section~\ref{s:ECH_1} we provide the needed background on embedded contact homology. In Section~\ref{s:ECH_2}, we prove our main Theorem~\ref{t:Besse}; in the proof, we will need a slightly stronger version of the bumpy contact form theorem, which we state and prove in Appendix~\ref{a:bumpy}. In Section~\ref{s:Seifert}, after introducing the needed preliminaries on Seifert fibered spaces, we prove Theorem~\ref{t:classification}. \subsection*{Acknowledgments} The authors are grateful to the anonymous referee for her/his careful reading of the manuscript, and for pointing out the statement of Corollary~\ref{c:RP2}. Daniel Cristofaro-Gardiner is partially supported by the National Science Foundation under Grant No.~1711976. Marco Mazzucchelli is partially supported by the National Science Foundation under Grant No.~DMS-1440140 while in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2018 semester. 
\section{Background on Embedded Contact Homology} \label{s:ECH_1} In this section we will recall the essential features of embedded contact homology that will be needed in order to prove Theorem~\ref{t:Besse}. The interested reader will find a detailed account and precise references in Hutchings' survey \cite{Hutchings:2014qf}. \subsection{The chain complex} Let $(Y,\xi)$ be a closed connected oriented contact manifold of dimension 3. Throughout this paper, the contact distribution $\xi\subset\mathrm{T} Y$ is assumed to be cooriented, and as usual we will call a 1-form $\lambda$ on $Y$ a supporting contact form of $\xi$ when $\ker(\lambda)=\xi$ and $\lambda$ induces the orientation of $\mathrm{T} Y/\xi$. The 2-form $\mathrm{d}\lambda$ will then induce an orientation on $\xi$. The contact form $\lambda$ is called bumpy when, for each $\tau>0$ and $z\in\mathrm{fix}(\phi_\lambda^\tau)$, 1 is not an eigenvalue of the linearized Poincar\'e map $\mathrm{d}\phi_\lambda^\tau(z)|_\xi$. We will write the symplectization of our contact manifold as $(\mathds{R}\times Y,\mathrm{d}(e^s\lambda))$, where $s$ is the variable on $\mathds{R}$. The embedded contact homology group $\mathrm{ECH}(Y)$ is a topological invariant obtained as the homology of a chain complex $\big(\mathrm{ECC}(Y,\lambda),\partial_{Y,\lambda,J}\big)$, where $\lambda$ is a bumpy supporting contact form of $(Y,\xi)$, and $J$ is an almost complex structure on $(\mathds{R}\times Y,\mathrm{d}(e^s\lambda))$ such that $JR_\lambda=\tfrac\partial{\partial s}$, $J\xi=\xi$, $\mathrm{d}\lambda(v,Jv)>0$ for each non-zero $v\in\xi$, and $J$ is chosen generically in order to satisfy suitable technical assumptions. 
The chain group $\mathrm{ECC}(Y,\lambda)$ is the $\mathds{Z}_2$-vector space freely generated by finite sets of pairs $\{(m_i,\gamma_i)\ |\ i=1,...,k \}$, where $k\in\mathds{N}$, the $\gamma_i$ are distinct simple closed orbits of the Reeb flow $\phi_\lambda^t$, and $m_i$ is a positive integer required to be equal to 1 if $\gamma_i$ is hyperbolic. Here, by ``simple'' we mean that the closed Reeb orbits $\gamma_i$ are viewed as maps of the form $\gamma_i:\mathds{R}/\tau_i\mathds{Z}\to Y$, where $\tau_i>0$ is the minimal period of $\gamma_i$. Two simple closed Reeb orbits $\gamma_i,\gamma_j$ are distinct if they are not of the form $\gamma_i=\gamma_j(\cdot+s)$ for any $s>0$. The definition of the differential $\partial_{Y,\lambda,J}$ involves counting certain $J$-holomorphic curves in the symplectization of $(Y,\lambda)$, but will not be needed in the present paper. \subsection{The $U$ map} The embedded contact homology comes equipped with an endomorphism \begin{align*} U:\mathrm{ECH}(Y)\to\mathrm{ECH}(Y) \end{align*} defined as follows. Let $\bm\gamma=\{(m_i,\gamma_i)\ |\ i=1,...,k\}$ and $\bm\zeta=\{(n_i,\zeta_i)\ |\ i=1,...,l\}$ be two chains in $\mathrm{ECC}(Y,\lambda)$. Let $(\Sigma,j)$ be a punctured Riemann surface, and $u:\Sigma\to\mathds{R}\times Y$ a $J$-holomorphic curve that is asymptotic as a current to $\sum_i m_i\gamma_i$ and $\sum_i n_i\zeta_i$ as $s\to\infty$ and $s\to-\infty$ respectively. We denote by $\mathcal{M}(J,\bm\gamma,\bm\zeta)$ the space of such $J$-holomorphic curves modulo equivalence as currents. Notice that, for every $u\in\mathcal{M}(J,\bm\gamma,\bm\zeta)$, we have \begin{align*} \int_\Sigma u^*\mathrm{d}\lambda = \sum_{i=1}^k m_i\mathcal{A}_\lambda(\gamma_i) - \sum_{i=1}^{l} n_i\mathcal{A}_\lambda(\zeta_i). \end{align*} Here, $\mathcal{A}_\lambda$ denotes the contact action \begin{align*} \mathcal{A}_\lambda(\gamma)=\int_\gamma \lambda. 
\end{align*} If $\gamma$ is a simple closed Reeb orbit, $\mathcal{A}_\lambda(\gamma)$ is simply its minimal period. To every $u\in\mathcal{M}(J,\bm\gamma,\bm\zeta)$ there is an associated integer which is called the ECH-index, and whose definition will not be needed in the present paper. For a given $z\in Y$, we denote by $\mathcal{M}_{2,z}(J,\bm\gamma,\bm\zeta)\subset\mathcal{M}(J,\bm\gamma,\bm\zeta)$ the subset of those $u:\Sigma\to\mathds{R}\times Y$ having ECH-index 2 and whose image $u(\Sigma)$ passes through $(0,z)$. The condition on the ECH index implies that, if $J$ is chosen generically, then $\mathcal{M}_{2,z}(J,\bm\gamma,\bm\zeta)$ is a finite set. The endomorphism \begin{align*} U_z:\mathrm{ECC}(Y,\lambda)\to \mathrm{ECC}(Y,\lambda), \qquad U_z(\bm\gamma)= \!\!\!\! \sum_{\bm\zeta\in\mathrm{ECC}(Y,\lambda)} \!\!\!\! (\# \mathcal{M}_{2,z}(J,\bm\gamma,\bm\zeta)\ \mathrm{mod}\ 2)\,\bm\zeta \end{align*} turns out to be a chain map that induces the endomorphism $U$ in embedded contact homology. Notice that $U_z$ depends on the chosen point $z$, on the contact form $\lambda$, and on the almost complex structure $J$, whereas $U$ is a topological invariant of $Y$. \subsection{Spectral invariants} Given a supporting contact form $\lambda$ on a closed contact 3-manifold $(Y,\xi)$, we denote by $\Sigma(Y,\lambda)\subset(0,\infty)$ the set of real numbers that are finite sums of elements in the action spectrum $\sigma(Y,\lambda)$, i.e. \begin{align*} \Sigma(Y,\lambda)=\big\{\tau_1+...+\tau_k\ \big|\ k\geq 1,\ \tau_i\in\sigma(Y,\lambda)\quad \forall i=1,...,k \big\}. \end{align*} The chain complex $(\mathrm{ECC}(Y,\lambda),\partial_{Y,\lambda,J})$ can be filtered by means of the action as follows. 
For each $\tau>0$, let $\mathrm{ECC}^\tau(Y,\lambda)$ be the vector subspace of $\mathrm{ECC}(Y,\lambda)$ generated by those $\bm\gamma=\{(m_i,\gamma_i)\ |\ i=1,...,k\}$ such that \begin{align*} \mathcal{A}_\lambda(\bm\gamma):=\sum_{i=1}^k m_i\mathcal{A}_\lambda(\gamma_i) \leq\tau. \end{align*} Since the boundary map $\partial_{Y,\lambda,J}$ does not increase the action, $\big(\mathrm{ECC}^\tau(Y,\lambda),\partial_{Y,\lambda,J}\big)$ is a subcomplex of $\big(\mathrm{ECC}(Y,\lambda),\partial_{Y,\lambda,J}\big)$, whose homology is denoted by $\mathrm{ECH}^\tau(Y,\lambda)$. As the notation suggests, this latter group turns out to be independent of the almost complex structure $J$. There is an inclusion induced map \[ \iota^\tau: \mathrm{ECH}^\tau(Y,\lambda) \to \mathrm{ECH}(Y) .\] Each non-zero $\sigma\in \mathrm{ECH}(Y)$ defines a spectral invariant $c_\sigma(Y,\lambda)\in\Sigma(Y,\lambda)$ as follows. If $\lambda$ is bumpy, then $c_\sigma(Y,\lambda)$ is the minimal $\tau>0$ such that $\sigma$ admits a representative in $\mathrm{ECC}^\tau(Y,\lambda)$, in other words such that $\sigma$ is in the image of the map $\iota^\tau$. If $\lambda$ is not bumpy, we can choose a sequence of smooth functions $b_n:Y\to\mathds{R},$ $C^0$-converging to zero and such that each contact form $e^{b_n}\lambda$ is bumpy (see Proposition~\ref{p:bumpy}); in this case, the sequence $c_\sigma(Y,e^{b_n}\lambda)$ converges and the spectral invariant $c_\sigma(Y,\lambda)$ is defined as its limit, i.e. \begin{align} \label{e:lim_c} c_\sigma(Y,\lambda) = \lim_{n\to\infty} c_\sigma(Y,e^{b_n}\lambda). \end{align} The following statement due to the first author and Hutchings provides the only property of spectral invariants needed in this paper. It is an application of the Volume Property for the ECH spectrum proved in \cite{Cristofaro-Gardiner:2015wa}. 
\begin{lem}[Cor.~2.2 in \cite{Cristofaro-Gardiner:2016rp}] \label{l:CGH} There exists a sequence $\{\sigma_k\ |\ k\in\mathds{N}\}$ of non-zero elements in $\mathrm{ECH}(Y)$ such that $U\sigma_{k+1}=\sigma_k$ and $c_{\sigma_k}(Y,\lambda)/k\to0$ as $k\to\infty$ for each supporting contact form $\lambda$ of $(Y,\xi)$. \hfill\qed \end{lem} \section{ECH-spectral characterization of Besse contact forms} \label{s:ECH_2} The following statement, which improves \cite[Lemma~3.1(b)]{Cristofaro-Gardiner:2016rp} while following a similar logic, is the main ingredient for proving Theorem~\ref{t:Besse}. \begin{lem} \label{l:Besse_spectral} Let $(Y,\lambda)$ be a closed connected contact 3-manifold equipped with a contact form. If $c_\sigma(Y,\lambda)=c_{U\sigma}(Y,\lambda)$ for some $\sigma\in\mathrm{ECH}(Y)$ with $U\sigma\neq 0$, then $(Y,\lambda)$ is Besse. \end{lem} \begin{proof} Assume that $(Y,\lambda)$ is not Besse, so that there exists $z\in Y$ such that $\phi_\lambda^t(z)\neq z$ for all $t\neq0$. We set $c:=c_\sigma(Y,\lambda)$, and fix an arbitrary real number $\tau>c$. Let $\Sigma\subset Y$ be an embedded compact ball of codimension 1 containing $z$ in its interior and such that $\mathrm{T}_z\Sigma=\xi_z$, where $\xi=\ker(\lambda)$ is the contact distribution. Up to shrinking $\Sigma$ around $z$, the map \begin{align*} \psi:[-\tau/2,\tau/2]\times\Sigma\to Y,\qquad \psi(t,w)=\phi_\lambda^t(w) \end{align*} is a diffeomorphism onto its image $K:=\psi([-\tau/2,\tau/2]\times\Sigma)$. Namely, $K$ is a flow box for the Reeb flow $\phi_\lambda^t$ containing orbits of length $\tau$. We fix an almost complex structure $J$ on the symplectization $(\mathds{R}\times Y,\mathrm{d}(e^s\lambda))$ such that $JR_\lambda=\tfrac{\partial}{\partial s}$, $J\xi=\xi$, and $\mathrm{d}\lambda(v,Jv)>0$ for all non-zero $v\in\xi$. 
By Proposition~\ref{p:bumpy}, there exists a sequence $b_n\in C^\infty(Y)$ such that $b_n|_{K}\equiv0$, $b_n\to0$ in $C^0$ and $\lambda_n:=e^{b_n}\lambda$ is a bumpy contact form. Since $\lambda_n\equiv \lambda$ on $K$, this latter set is also a flow box for the Reeb flows $\phi_{\lambda_n}^t$. In particular, none of the closed orbits of $\phi_{\lambda_n}^t$ with minimal period at most $\tau$ intersects $K$. Therefore, we can choose an almost complex structure $J_n$ on the symplectization $(\mathds{R}\times Y,\mathrm{d}(e^{s}\lambda_n))$ such that $J_n\equiv J$ on $\mathds{R}\times K$, and $J_n$ is sufficiently generic to define the differential of the complex $\big(\mathrm{ECC}^\tau(Y,\lambda_n),\partial_{Y,\lambda_n,J_n} \big)$ and the endomorphism $U_z:\mathrm{ECC}^\tau(Y,\lambda_n)\to \mathrm{ECC}^\tau(Y,\lambda_n)$. We consider an arbitrary cycle $\bm\gamma_n\in\mathrm{ECC}^\tau(Y,\lambda_n)$ such that $\sigma=\iota^\tau([\bm\gamma_n])$ and $c_\sigma(Y,\lambda_n)=\mathcal{A}_{\lambda_n}(\bm\gamma_n)$. Equation~\eqref{e:lim_c} implies that $\mathcal{A}_{\lambda_n}(\bm\gamma_n)\to c_\sigma(Y,\lambda)$ as $n\to\infty$. In order to conclude the proof, we need to show that there exists $\delta>0$ such that \begin{align*} \mathcal{A}_{\lambda_n}(\bm\gamma_n)-\mathcal{A}_{\lambda_n}(U_z\bm\gamma_n)\geq\delta, \qquad\forall n\in\mathds{N}. \end{align*} Indeed, this implies that \begin{align*} c_{U\sigma}(Y,\lambda) = \!\lim_{n\to\infty}\! c_{U\sigma}(Y,\lambda_n) \leq \!\lim_{n\to\infty}\! \mathcal{A}_{\lambda_n}(U_z\bm\gamma_n) \leq \!\lim_{n\to\infty}\! \mathcal{A}_{\lambda_n}(\bm\gamma_n)-\delta = c_{\sigma}(Y,\lambda) - \delta. \end{align*} Assume by contradiction that \begin{align*} \liminf_{n\to\infty} \big(\mathcal{A}_{\lambda_n}(\bm\gamma_n) - \mathcal{A}_{\lambda_n}(U_z\bm\gamma_n) \big) = 0. 
\end{align*} Up to extracting a subsequence, we can actually assume that \begin{align} \label{e:zero} \lim_{n\to\infty} \big(\mathcal{A}_{\lambda_n}(\bm\gamma_n) - \mathcal{A}_{\lambda_n}(U_z\bm\gamma_n) \big) = 0. \end{align} We choose, for each $n\in\mathds{N}$, a $J_n$-holomorphic curve $u_n:\Sigma_n\to\mathds{R}\times Y$ in the moduli space $\mathcal{M}_{2,z}(J_n,\bm\gamma_n,U_z\bm\gamma_n)$. We set $C_n:=u_n(\Sigma_n)$, and from now on we will not distinguish between the map $u_n$ and its image $C_n$. Notice that \begin{align} \label{e:energy} \int_{C_n} \mathrm{d}\lambda_n = \mathcal{A}_{\lambda_n}(\bm\gamma_n) - \mathcal{A}_{\lambda_n}(U_z\bm\gamma_n), \end{align} and in particular this quantity is uniformly bounded in $n$. Since $J_n\equiv J$ on $\mathds{R}\times K$, the intersections $C_n\cap([-1,1]\times K)$ are $J$-holomorphic curves. Since $\mathrm{d}\lambda_n=\mathrm{d}\lambda$ is non-negative on $C_n\cap([-1,1]\times K)$, Equations~\eqref{e:zero} and~\eqref{e:energy} imply that \begin{align} \label{e:zero_Cn} \lim_{n\to\infty} \int_{C_n\cap([-1,1]\times K)} \mathrm{d}\lambda = 0, \end{align} and that this integral is uniformly bounded in $n$. Let $s_0\in[-2,-1]$ and $s_1\in[1,2]$ be such that $u_n$ is transverse to $\{s_0,s_1\}\times Y$. Since both $\mathrm{d}(e^s\lambda_n)$ and $\mathrm{d}\lambda_n$ are non-negative on $C_n$ by the conditions on $J_n$, we have the uniform bound \begin{align*} \int_{C_n \cap([-1,1]\times K)} \mathrm{d}(e^s\lambda) & \leq \int_{C_n\cap([s_0,s_1]\times Y)} \mathrm{d}(e^s\lambda_n)\\ & = e^{s_1} \int_{C_n\cap (\{s_1\}\times Y)} \lambda_n - e^{s_0} \int_{C_n\cap (\{s_0\}\times Y)} \lambda_n \\ & \leq e^2 \bigg( \int_{C_n\cap (\{s_1\}\times Y)} \lambda_n + \int_{C_n\cap([s_1,\infty)\times Y)}\mathrm{d}\lambda_n\bigg)\\ & = e^2 \mathcal{A}_{\lambda_n}(\bm\gamma_n) \leq e^2 c_{\sigma}(Y,\lambda) + 1 \end{align*} for all $n\in\mathds{N}$ large enough. 
We can thus employ a compactness result due to Taubes \cite[Prop.~3.3]{Taubes:1998oz}, in its version \cite[Prop.~3.2]{Cristofaro-Gardiner:2016rp}, and infer that, up to extracting a subsequence, the sequence $C_n\cap([-1,1]\times K)$ converges in the sense of currents to a compact $J$-holomorphic curve $C\subset[-1,1]\times K$ with boundary in $\partial([-1,1]\times K)$, and $(0,z)\in C$. Equation~\eqref{e:zero_Cn} thus implies \begin{align*} \int_{C} \mathrm{d}\lambda =0, \end{align*} and therefore $C$ must have a component of the form $[-1,1]\times\phi_\lambda^{[-\tau/2,\tau/2]}(z)$. In particular \begin{align*} \int_{C \cap (\{s\}\times K)} \lambda \ge \tau,\qquad \forall s\in[-1,1]. \end{align*} We fix an arbitrary $\tau'\in(c_\sigma(Y,\lambda),\tau)$. For each $n\in\mathds{N}$, we choose a point $s_n\in[-1,1]$ such that $u_n$ is transverse to $\{s_n\}\times Y$, and we orient the intersection using the ``$\mathds{R}$-direction first'' convention. By the conditions on $J_n$, the contact form $\lambda_n$ is non-negative along the oriented 1-manifold $C_n \cap (\{s_n\}\times Y)$. Therefore, since $C_n\cap([-1,1]\times K)\to C$ in the sense of currents, up to removing sufficiently many elements from the sequence $\{C_n\ |\ n\in\mathds{N}\}$ we have \begin{align*} \int_{C_n \cap (\{s_n\}\times Y)} \lambda_n \geq \int_{C_n \cap (\{s_n\}\times K)} \lambda_n \geq \tau',\qquad \forall n\in\mathds{N}. \end{align*} However, if we choose $n$ large enough so that $\mathcal{A}_{\lambda_n}(\bm\gamma_n)<\tau'$, we have \begin{align*} \int_{C_n \cap (\{s_n\}\times Y)} \lambda_n \leq \int_{C_n \cap (\{s_n\}\times Y)} \lambda_n + \int_{C_n \cap ([s_n,\infty)\times Y)} \mathrm{d}\lambda_n = \mathcal{A}_{\lambda_n}(\bm\gamma_n)<\tau', \end{align*} which gives a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:Besse}] We already know that (i) implies (ii) by Wadsley's theorem \cite{Wadsley:1975sp}. 
Assume now that our closed connected contact 3-manifold $(Y,\xi=\ker(\lambda))$ satisfies (ii). We denote by $\tau>0$ a common period for the closed Reeb orbits. Every closed orbit $\gamma$ of the Reeb flow $\phi_\lambda^t$ has minimal period $\tau/k_\gamma$ for some $k_\gamma\in\mathds{N}=\{1,2,3,...\}$. Since $Y$ is compact and the Reeb vector field of $(Y,\lambda)$ is nowhere vanishing, there is a uniform lower bound for the minimal periods of the closed orbits of $\phi_\lambda^t$. In particular, the set \[\mathds{K}:=\big\{k_\gamma\ \big|\ \gamma \mbox{ closed orbit of }\phi_\lambda^t \big\}\] is finite. If we denote by $k\in\mathds{N}$ a common multiple of the natural numbers in $\mathds{K}$, we readily see that the period of each closed orbit of the Reeb flow $\phi_\lambda^t$ must be a multiple of $\tau/k$. This implies (iii). Finally, let us assume that $(Y,\lambda)$ satisfies (iii). By Lemma \ref{l:CGH}, there exists a sequence $\{\sigma_k\ |\ k\in\mathds{N}\}$ of non-zero elements in $\mathrm{ECH}(Y,\xi,\Gamma)$ such that $U\sigma_{k+1}=\sigma_k$ and $c_{\sigma_k}(Y,\lambda)/k\to0$ as $k\to\infty$. If $c_{\sigma_{k+1}}(Y,\lambda)\neq c_{\sigma_k}(Y,\lambda)$ for all $k\in\mathds{N}$, then $c_{\sigma_{k+1}}(Y,\lambda)\geq c_{\sigma_k}(Y,\lambda)+T$, where $T>0$ is such that $\sigma(Y,\lambda)\subset\{nT\ |\ n\in\mathds{N}\}$. However, this would imply that \begin{align*} \liminf_{k\to\infty} c_{\sigma_k}(Y,\lambda)/k \geq T >0, \end{align*} which is a contradiction. Therefore we must have $c_{\sigma_{k+1}}(Y,\lambda)= c_{\sigma_k}(Y,\lambda)$ for some (and indeed for infinitely many) $k\in\mathds{N}$. By Lemma~\ref{l:Besse_spectral}, we conclude that $(Y,\lambda)$ is Besse.
\end{proof} Recent results of the second author and Suhr, \cite[Theorem~3.1]{Mazzucchelli:2018ek} and \cite[Theorem~1.2]{Mazzucchelli:2018pb}, provide a min-max characterization of certain Zoll Riemannian manifolds by employing Morse-theoretic spectral invariants for the length and energy functionals on the loop space. In the same spirit, the proof of Theorem~\ref{t:Besse} also provides the following ECH-spectral characterization of Besse contact forms. \begin{thm} A closed connected contact 3-manifold $(Y,\lambda)$ is Besse if and only if, for some $\sigma\in\mathrm{ECH}(Y)$ with $U\sigma\neq0$, we have $c_{\sigma}(Y,\lambda)=c_{U\sigma}(Y,\lambda)$. \hfill\qed \end{thm} \section{Besse contact forms and Seifert fibrations} \label{s:Seifert} \subsection{The Morse-Bott property} Let us recall that a closed connected Besse contact manifold $(Y,\lambda)$ of any dimension $2n+1\geq3$ has Morse-Bott closed orbits. By the already mentioned Wadsley's Theorem \cite{Wadsley:1975sp}, there exists a minimal $\tau>0$ such that the Reeb flow satisfies $\phi_\lambda^\tau=\mathrm{id}$. Therefore, each point $z\in Y$ lies on a closed Reeb orbit of minimal period $\tau_z=\tau/\alpha_z$, for some $\alpha_z\in\mathds{N}$. For each $\alpha\in\mathds{N}$, we define a compact subset \[K_\alpha:=\mathrm{fix}(\phi_\lambda^{\tau/\alpha})\subset Y.\] Since the Reeb vector field $R_\lambda$ is nowhere vanishing, there exists a finite subset $\mathds{F}\subset\mathds{N}$ such that $K_\alpha\neq\varnothing$ if and only if $\alpha\in\mathds{F}$. Let $g_0$ be a Riemannian metric on $Y$ such that $g_0(R_\lambda,\cdot)=\lambda$. Its average \begin{align*} g:=\frac1\tau\int_0^\tau (\phi_\lambda^t)^*g_0\,\mathrm{d} t \end{align*} is a Riemannian metric that still satisfies $g(R_\lambda,\cdot)=\lambda$ and is invariant under the Reeb flow, i.e.\ $(\phi_\lambda^t)^*g=g$. 
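Indeed, the flow-invariance of $g$ is a direct change-of-variables computation: for every $s\in\mathds{R}$,
\begin{align*}
(\phi_\lambda^{s})^*g
=
\frac1\tau\int_0^\tau (\phi_\lambda^{t+s})^*g_0\,\mathrm{d} t
=
\frac1\tau\int_s^{\tau+s} (\phi_\lambda^{u})^*g_0\,\mathrm{d} u
=
g,
\end{align*}
where the last equality holds because $\phi_\lambda^\tau=\mathrm{id}$ makes the map $u\mapsto(\phi_\lambda^{u})^*g_0$ $\tau$-periodic.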
Since $\phi_\lambda^{\tau/\alpha}$ is a $g$-isometry, its fixed-point set $K_\alpha$ is a closed submanifold of $Y$ with tangent spaces \[\mathrm{T}_zK_\alpha=\ker(\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)-\mathrm{id}),\] see \cite[Theorem~5.1]{Kobayashi:1995mo}. The linearized map $\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)|_{\xi_z}$ is a symplectic endomorphism of the symplectic vector space $(\xi_z,\mathrm{d}\lambda_z|_{\xi_z})$, where $\xi:=\ker(\lambda)$. Therefore, the eigenvalue $1\in\sigma(\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)|_{\xi_z})$ has even algebraic multiplicity. Since $\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)|_{\xi_z}$ is an $\alpha$-th root of the identity, this algebraic multiplicity is equal to the geometric multiplicity $\dim\ker(\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)|_{\xi_z}-\mathrm{id})$. This, together with the fact that \[\mathrm{d}\phi_\lambda^{\tau/\alpha}(z)R_\lambda(z)=R_\lambda(z),\] proves that $\dim(\mathrm{T}_zK_\alpha)$ is odd, and thus that $K_\alpha$ is an odd-dimensional closed manifold. \subsection{Seifert fibrations} We now assume that our Besse closed connected contact manifold $(Y,\lambda)$ has dimension 3. Therefore, the subsets $K_\alpha$ with $\alpha\in\mathds{F}\setminus\{1\}$ are finite disjoint unions of embedded circles. If $\mathds{F}\setminus\{1\}\not=\varnothing$, the complement $Y\setminus K$, where $K:=\cup_{\alpha\in\mathds{F}\setminus\{1\}} K_\alpha$, is an open Zoll contact manifold. The Reeb flow on $Y$ defines a locally free $\mathds{R}/\tau\mathds{Z}$-action on $Y$, whose quotient $\Sigma_g$ can be given the structure of a closed orientable surface of some genus $g\geq0$. The quotient map $\pi:Y\to \Sigma_g$ is not a genuine circle bundle if $(Y,\lambda)$ is not Zoll, but it is still a Seifert fibration. Namely, if $\{x_1,...,x_r\}:=\pi(K)$, for each $x_j$ there are associated parameters $\alpha_j,\beta_j,\alpha'_j,\beta'_j\in\mathds{Z}$ with the following properties. 
The parameter $\alpha_j\geq 1$ is such that $\pi^{-1}(x_j)\subset K_{\alpha_j}$. Therefore, $\pi^{-1}(x_j)$ is a closed Reeb orbit of minimal period $\tau/\alpha_j$. Both pairs $(\alpha_j,\beta_j)$, $(\alpha_j',\beta_j')$ are coprime, and form an integer matrix with determinant $\alpha_j\beta_j'-\alpha_j'\beta_j=1$. The point $x_j$ possesses a compact disk neighborhood $D_j\subset\Sigma_g$ that we identify with the unit ball in the complex plane, and there is a diffeomorphism $\psi_j:D_j\times S^1\to\pi^{-1}(D_j)$ such that \begin{align*} \pi\circ\psi_j(\rho z_1 ,z_2)=\rho z_1^{\alpha_j} z_2^{\alpha_j'}, \qquad\forall \rho\in[0,1],\ z_1,z_2\in S^1. \end{align*} Here and in the following, $S^1$ denotes the unit circle in the complex plane $\mathds{C}$. The Reeb flow induced on $D_j\times S^1$ has the form \begin{align*} \psi_j^{-1}\circ\phi_\lambda^t\circ\psi_j(\rho z_1,z_2)=(\rho z_1e^{-i2\pi\alpha_j't/\tau},z_2e^{i2\pi\alpha_jt/\tau}). \end{align*} The restriction $\pi:Y\setminus K\to\Sigma_g\setminus\{x_1,...,x_r\}$ is a trivial $S^1$-bundle, that is, there is a diffeomorphism $\psi:\Sigma_g\setminus\{x_1,...,x_r\}\times S^1\to Y\setminus K$ such that $\pi\circ\psi(z_1,z_2)=z_1$. The Reeb flow induced on $\Sigma_g\setminus\{x_1,...,x_r\} \times S^1$ is simply \begin{align*} \psi^{-1}\circ\phi_\lambda^t\circ\psi(z_1,z_2)=(z_1,z_2e^{i2\pi t/\tau}). \end{align*} We orient $\Sigma_g$ by means of a 2-form $\omega$ on $\Sigma_g\setminus\{x_1,...,x_r\}$ such that $\pi^*\omega=\mathrm{d}\lambda|_{Y\setminus K}$, and we orient the fibers of $\pi$ by means of the Reeb vector field $R_\lambda$, so that the diffeomorphisms $\psi|_{\{x\}\times S^1}:\{x\}\times S^1\to\pi^{-1}(x)$ are orientation preserving. 
We introduce the oriented circles in the torus $T_j:=\pi^{-1}(\partial D_j)$ \begin{align*} M_j&:=\psi_j(\partial D_j\times\{1\}),& L_j&:=\psi_j(\{1\}\times S^1),\\ M_j'&:=\psi(\partial D_j\times\{1\}),& L_j'&:=\psi(\{x\}\times S^1), \end{align*} where $x$ is any point in $\partial D_j$. In the homology group $\mathrm{H}_1(T_j;\mathds{Z})$, we have \begin{align*} [M_j] = \alpha_j[M_j']+\beta_j[L_j'], \qquad [L_j] = \alpha_j'[M_j']+\beta_j'[L_j']. \end{align*} The integers in the tuple $(g;\alpha_1,\beta_1,...,\alpha_r,\beta_r)$ are the so-called Seifert invariants of the Seifert fibration $\pi:Y\to\Sigma_g$, and every $(\alpha_j,\beta_j)$ is called a Seifert pair. We stress that the concept of Seifert fibration is more general than the one presented here (for instance it allows for non-orientable total spaces and non-orientable base surfaces), but it will not be needed in its full generality for the application to Besse contact forms. In this paper, all Seifert fibrations are implicitly assumed to be of the above type, and in particular with total space and base surface both closed and orientable. A Seifert fibration can be described by different tuples of Seifert invariants, but these invariants nevertheless determine the Seifert fibration completely. More precisely, given two Seifert fibrations $\pi_i:Y_i\to\Sigma_{g_i}$, $i=1,2$, there exist an orientation preserving diffeomorphism $F:Y_1\to Y_2$ and a diffeomorphism $f:\Sigma_{g_1}\to\Sigma_{g_2}$ such that $\pi_2\circ F=f\circ\pi_1$ if and only if the two Seifert fibrations can be described by the same tuple of Seifert invariants.
\begin{thm} \label{t:Raymond} For $i=1,2$, let $(Y_i,\lambda_i)$ be a Besse closed connected contact 3-manifold oriented via the volume form $\lambda_i\wedge\mathrm{d}\lambda_i$ and whose Reeb orbits have minimal common period $\tau_i$. Then, there exists an orientation preserving diffeomorphism $\psi:Y_1\to Y_2$ such that $\psi\circ\phi_{\lambda_1}^{\tau_1 t}\circ\psi^{-1}=\phi_{\lambda_2}^{\tau_2 t}$ for all $t\in\mathds{R}$ if and only if $(Y_1,\lambda_1)$ and $(Y_2,\lambda_2)$ have the same Seifert invariants in normal form (up to permutation of the pairs). \hfill\qed \end{thm} A particular case of a result due to Lisca-Mati\'c \cite{Lisca:2004oz} provides a constraint on the Seifert invariants of a Seifert fibration associated to a Besse contact form. \begin{thm}[Prop.~3.1 in \cite{Lisca:2004oz}] \label{t:lisca_matic} The Seifert invariants $(g;\alpha_1,\beta_1,...,\alpha_r,\beta_r)$ of any Besse closed connected contact 3-manifold satisfy $\tfrac{\beta_1}{\alpha_1}+...+\tfrac{\beta_r}{\alpha_r}>0$. \hfill\qed \end{thm} The Seifert fibrations are classified. In particular, a result due to Orlik-Vogt-Zieschang \cite{Orlik:1967ad} (see also \cite[Section~1]{Geiges:2018zt}) implies that a given closed connected orientable 3-manifold $Y$ admits at most one Seifert fibration structure (up to Seifert fibration isomorphism possibly reversing the orientation of the total space), unless $Y$ is a prism manifold, a single Euclidean manifold, or a lens space. Every manifold that is of prism or single Euclidean type admits two non-isomorphic Seifert fibration structures, one of which projects onto a non-orientable surface. By applying this together with Lisca-Mati\'c's Theorem~\ref{t:lisca_matic}, we obtain the following uniqueness result for Besse contact forms. 
\begin{lem} \label{l:classification_Seifert} Let $Y$ be a closed connected 3-manifold not homeomorphic to a lens space, and $\lambda_1,\lambda_2$ two Besse contact forms on $Y$ whose Reeb orbits have minimal common periods $\tau_1,\tau_2$ respectively. Then, there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi\circ\phi_{\lambda_1}^{\tau_1 t}\circ\psi^{-1}=\phi_{\lambda_2}^{\tau_2 t}$ for all $t\in\mathds{R}$, and the volume forms $\psi^*(\lambda_2\wedge\mathrm{d}\lambda_2)$ and $\lambda_1\wedge\mathrm{d}\lambda_1$ induce the same orientation on $Y$. \end{lem} \begin{proof} Let $\pi_i:Y\to\Sigma_{g_i}$ be the Seifert fibration defined by the Besse contact form $\lambda_i$. Since $\Sigma_{g_i}$ is orientable and the total space $Y$ is not homeomorphic to a lens space, the above mentioned result of Orlik-Vogt-Zieschang \cite{Orlik:1967ad} implies that there exist diffeomorphisms $F:Y\to Y$ and $f:\Sigma_{g_1}\to\Sigma_{g_2}$ such that $\pi_2\circ F=f\circ\pi_1$. The lemma now follows from Theorem~\ref{t:Raymond} once we prove that $\lambda_1\wedge\mathrm{d} \lambda_1$ and $F^*(\lambda_2\wedge\mathrm{d} \lambda_2)$ define the same orientation on $Y$. Let us assume by contradiction that $\lambda_1\wedge\mathrm{d} \lambda_1$ and $F^*(\lambda_2\wedge\mathrm{d} \lambda_2)$ define opposite orientations on $Y$. If $(g_1;\alpha_1,\beta_1,...,\alpha_r,\beta_r)$ are Seifert invariants for $\pi_1:Y\to\Sigma_{g_1}$, Lisca-Mati\'c's Theorem~\ref{t:lisca_matic} implies that \begin{align} \label{e:positive_euler} \tfrac{\beta_1}{\alpha_1}+...+\tfrac{\beta_r}{\alpha_r}>0. 
\end{align} Since $\lambda_1\wedge\mathrm{d} \lambda_1$ and $F^*(\lambda_2\wedge\mathrm{d} \lambda_2)$ define opposite orientations, the Seifert fibration $\pi_2:Y\to \Sigma_{g_2}$ has Seifert invariants $(g_1;\alpha_1,-\beta_1,...,\alpha_r,-\beta_r)$, and Lisca-Mati\'c's Theorem~\ref{t:lisca_matic} would imply \[-\tfrac{\beta_1}{\alpha_1}-...-\tfrac{\beta_r}{\alpha_r}>0,\] contradicting~\eqref{e:positive_euler}. \end{proof} The classification of Seifert fibrations on lens spaces has been recently carried out by Geiges-Lange \cite{Geiges:2018zt}. We summarize their results that we will need as follows. We recall that, for $p$ and $q$ coprime integers and $p>0$, the lens space $L(p,q)$ is the quotient of the unit 3-sphere $S^3\subset\mathds{C}^2$ under the $\mathds{Z}/p\mathds{Z}$-action generated by $(z_1,z_2)\mapsto(e^{i2\pi/p}z_1,e^{i2\pi q/p}z_2)$. When $p$ is not positive, the lens spaces are defined by $L(p,q):=L(-p,-q)$ and $L(0,1):=S^2\times S^1$. If $\pi:L(p,q)\to\Sigma_g$ is a Seifert fibration, then the base surface $\Sigma_g$ is either $S^2$ or $\mathds{R} P^2$. Since $\Sigma_g$ is orientable whenever the Seifert fibration is defined by a Besse contact form, in this section we will only consider Seifert fibrations of lens spaces over $S^2$. \begin{thm}[Prop.~4.6--4.8 and Th.~4.10 in \cite{Geiges:2018zt}] \label{t:geiges_lange} $ $ \begin{itemize}[topsep=3pt] \item[$(\mathrm{i})$] Any Seifert fibration $\pi:L(0,1)\to S^2$ has Seifert invariants $(0;\alpha,\beta,\alpha,-\beta)$, where $\alpha$ and $\beta$ are coprime integers such that $\alpha>0$ and $\beta\geq0$. \item[$(\mathrm{ii})$] If $p>0$, any Seifert fibration $\pi:L(p,q)\to S^2$ with at most one singular fiber has Seifert invariants $(0;\alpha,\beta)$, where $\beta=p$, $\alpha\neq0$, and $\alpha\equiv q$ or $\alpha q\equiv1$ mod $p$. 
\item[$(\mathrm{iii})$] There exist functions $b_1:\mathds{Z}^4\to\mathds{Z}$ and $b_2:\mathds{Z}^4\to\mathds{Z}$ such that any Seifert fibration $\pi:L(p,q)\to S^2$ with $p>0$ has Seifert invariants $(0;\alpha_1,\beta_1,\alpha_2,\beta_2)$ satisfying $\beta_1=b_1(p,q,\alpha_1,\alpha_2)$, $\beta_2=b_2(p,q,\alpha_1,\alpha_2)$, and the greatest common divisor $\gcd(\alpha_1,\alpha_2)$ divides $p$. \hfill\qed \end{itemize} \end{thm} \subsection{Classification of Besse contact 3-manifolds} The following is the last ingredient needed for proving Theorem~\ref{t:classification}. \begin{lem} \label{l:Moser_trick} For $i=0,1$, let $(Y_i,\lambda_i)$ be a closed contact 3-manifold, oriented by means of the volume form $\lambda_i\wedge\mathrm{d}\lambda_i$. If there exists an orientation preserving diffeomorphism $\psi_0:Y_0\to Y_1$ such that $\mathrm{d}\psi_0(z)R_{\lambda_0}(z)=R_{\lambda_1}(\psi_0(z))$ for all $z\in Y_0$, then $\psi_0$ can be isotoped to a diffeomorphism $\psi_1:Y_0\to Y_1$ such that $\psi_1^*\lambda_1=\lambda_0$. \end{lem} \begin{proof} By pulling back the contact form $\lambda_1$ by means of $\psi_0$, we can assume without loss of generality that $Y_0=Y_1=:Y$, $\psi_0=\mathrm{id}$, $R_{\lambda_0}=R_{\lambda_1}$, and both volume forms $\lambda_0\wedge\mathrm{d}\lambda_0$ and $\lambda_1\wedge\mathrm{d}\lambda_1$ define the same orientation on $Y$. For each $t\in[0,1]$, the convex combination $\lambda_t:=t\lambda_1+(1-t)\lambda_0$ is a contact form. Indeed, consider any oriented basis of a tangent space of $Y$ of the form $R_{\lambda_0}(z),v,w$. Since $R_{\lambda_0}=R_{\lambda_1}$, notice that \begin{align*} \lambda_i\wedge\mathrm{d}\lambda_j(R_{\lambda_0}(z),v,w) = \mathrm{d}\lambda_j(v,w) = \lambda_j\wedge\mathrm{d}\lambda_j(R_{\lambda_j}(z),v,w) >0, \quad\forall i,j\in\{0,1\}.
\end{align*} This readily implies that the 3-form \begin{align*} \lambda_t\wedge\mathrm{d}\lambda_t = t^2\lambda_1 \wedge\mathrm{d}\lambda_1 +(1-t)^2 \lambda_0\wedge\mathrm{d}\lambda_0 + t(1-t) (\lambda_0\wedge\mathrm{d}\lambda_1 + \lambda_1\wedge\mathrm{d}\lambda_0) \end{align*} is a positive volume form on $Y$, and in particular each $\lambda_t$ is a contact form. We can now complete the proof by applying a Moser trick as follows. We consider the time-dependent vector field $X_t$ on $Y$ defined by $\lambda_t(X_t)\equiv0$ and $X_t\lrcorner\,\mathrm{d}\lambda_t=\lambda_0-\lambda_1$. Its flow $\psi_t:Y\to Y$, with $\psi_0=\mathrm{id}$, satisfies \begin{align*} \tfrac{\mathrm{d}}{\mathrm{d} t} \psi_t^*\lambda_t = \psi_t^*\big(\mathrm{d}(\lambda_t(X_t)) + X_t\lrcorner\,\mathrm{d}\lambda_t + \lambda_1 - \lambda_0 \big)=0, \end{align*} which gives the desired condition $\psi_1^*\lambda_1=\lambda_0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:classification}] Let $\lambda_1,\lambda_2$ be two Besse contact forms on a closed 3-manifold $Y$. If there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi^*\lambda_2=\lambda_1$, clearly $\sigma_{\mathrm{p}}(Y,\lambda_1)=\sigma_{\mathrm{p}}(Y,\lambda_2)$. Conversely, assume that the two Besse closed connected contact manifolds have the same prime action spectrum $\sigma_{\mathrm{p}}:=\sigma_{\mathrm{p}}(Y,\lambda_1)=\sigma_{\mathrm{p}}(Y,\lambda_2)$. If one of the two contact forms is Zoll, then $\sigma_{\mathrm{p}}$ is a singleton, and the other contact form must be Zoll as well. In this case, \cite[Lemma~2.3]{Benedetti:2018ys} implies that there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi^*\lambda_2=\lambda_1$. Assume now that $\lambda_1$ and $\lambda_2$ are not Zoll. By Wadsley's Theorem \cite{Wadsley:1975sp}, their prime action spectrum must have the form \begin{align*} \sigma_{\mathrm{p}}=\{\tau,\tau/a_1,...,\tau/a_s\}, \end{align*} for some integers $s>0$ and $a_i>1$, $i=1,...,s$. 
Here, $\tau>0$ is the minimal common period of the Reeb orbits of both $(Y,\lambda_1)$ and $(Y,\lambda_2)$. We denote by $\Sigma_i$ the quotient of $Y$ under the locally free $\mathds{R}/\tau\mathds{Z}$-action defined by the Reeb flow $\phi_{\lambda_i}^t$. As we already discussed, $\Sigma_1$ and $\Sigma_2$ are orientable closed surfaces, and the quotient projections $\pi_1:Y\to\Sigma_1$ and $\pi_2:Y\to\Sigma_2$ are Seifert fibrations. If $Y$ is not homeomorphic to a lens space, since the two Reeb flows have the same minimal common period $\tau$, Lemmas~\ref{l:classification_Seifert} and~\ref{l:Moser_trick} imply that there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi^*\lambda_2=\lambda_1$. It remains to consider the case in which $Y$ is a lens space. Since $Y$ admits the Besse contact forms $\lambda_1$ and $\lambda_2$, it cannot be the lens space $L(0,1)$; indeed, if $Y=L(0,1)$, Theorem~\ref{t:geiges_lange}(i) would imply that the Seifert fibrations $\pi_i:Y\to\Sigma_i$ have Seifert invariants of the form $(0;\alpha,\beta,\alpha,-\beta)$, contradicting Lisca-Mati\'c's Theorem~\ref{t:lisca_matic}. Therefore, we can assume that $Y=L(p,q)$ for some $p>0$. We claim that the two Seifert fibrations $\pi_1:Y\to\Sigma_1$ and $\pi_2:Y\to\Sigma_2$ have the same number of singular fibers (which is at most two according to Theorem~\ref{t:geiges_lange}). Indeed, assume that one of the two fibrations, say $\pi_1:Y\to\Sigma_1$, has two singular fibers. Let $(0;\alpha_1,\beta_1,\alpha_2,\beta_2)$ be its Seifert invariants, and notice that $\alpha_1>1$ and $\alpha_2>1$. If the other Seifert fibration has only one singular fiber, then we must have $\alpha_1=\alpha_2=:\alpha$ and $\sigma_{\mathrm{p}}=\{\tau,\tau/\alpha\}$. By Theorem~\ref{t:geiges_lange}(iii), the quotient $n_1:=p/\alpha\in(0,p)$ is a positive integer, and we must have $p>1$ and thus $q\neq0$. 
This, together with Theorem~\ref{t:geiges_lange}(ii), implies that $\pi_2:Y\to\Sigma_2$ has Seifert invariants $(0;\alpha,p)$, and $\alpha\equiv q$ or $\alpha q\equiv1$ mod $p$. Therefore $(1+n_1n_2)\alpha=q$ or $(q+n_1n_2)\alpha=1$ for some $n_2\in\mathds{Z}$. None of these equalities is possible: the former since $\alpha>1$ divides $p$ and the non-zero integers $p,q$ are coprime; the latter since $\alpha>1$. This gives a contradiction. The Seifert fibrations $\pi_1:Y\to\Sigma_1$ and $\pi_2:Y\to\Sigma_2$ have the same Seifert invariants. Indeed, if they have only one singular fiber, then $\sigma_{\mathrm{p}}=\{\tau,\tau/\alpha\}$ for some integer $\alpha>1$, and Theorem~\ref{t:geiges_lange}(ii) implies that their Seifert invariants are $(0;\alpha,p)$. If they have two singular fibers, then $\sigma_{\mathrm{p}}=\{\tau,\tau/\alpha_1,\tau/\alpha_2\}$ for some integers $\alpha_1,\alpha_2>1$, and Theorem~\ref{t:geiges_lange}(iii) implies that their Seifert invariants are $(0;\alpha_1,b_1(p,q,\alpha_1,\alpha_2),\alpha_2,b_2(p,q,\alpha_1,\alpha_2))$. This, together with the fact that both Besse contact forms have the same minimal common period $\tau$ for their Reeb orbits, allows us to apply Lemmas~\ref{l:classification_Seifert} and~\ref{l:Moser_trick}, which imply that there exists a diffeomorphism $\psi:Y\to Y$ such that $\psi^*\lambda_2=\lambda_1$. \end{proof}
\section{Introduction} The ongoing outbreak of COVID-19 has had a devastating impact on the United States' health care system, economy, and social wellbeing. Despite early promises of an ``American Resurrection'' by April 12, 2020 \cite{cnnTrumpSaysHe}, as of \today, the U.S. has unfortunately experienced more than 190,000 deaths due to COVID-19 and remains a significant epicenter of the disease with more than 25,000 daily cases \cite{dongInteractiveWebbasedDashboard2020a, NewCasesCOVID19}. Social distancing measures remain in effect throughout much of the country, and despite optimistic plans to reopen schools, millions of students return to virtual classrooms this Fall due to COVID-19 \cite{hobbsMillionsStudentsHead2020}. In many parts of the country, confirmed COVID-19 cases, hospitalizations, and deaths are increasing exponentially \cite{dongInteractiveWebbasedDashboard2020a}. Drastic interventions like social distancing and mask mandates are necessary to slow the spread of the disease, giving more time to \begin{itemize} \item provide treatment within our healthcare system's capacity, \item develop effective testing capability, \item establish sophisticated tracing mechanisms, and \item discover novel treatments for the virus. \end{itemize} At the same time, the current mitigation strategies have had severe effects on society and the economy. Widespread closures of schools and daycares have left working parents with limited childcare options \cite{MapCoronavirusSchool2020}; shuttered bars, restaurants, and entertainment venues have forced owners to lay off employees, predominantly in the service industry \cite{casselmanLayoffsAreJust2020}; and the U.S. and global economy may be experiencing the worst recession since World War II \cite{EmploymentSituationSummary, COVID19PlungeGlobal}. To combat these effects, U.S.
representatives have passed the largest economic stimulus package in U.S.~history \cite{HouseGivesFinal}, and the Federal Reserve has cut interest rates to near zero \cite{wesselWhatFedDoing2020}. However, no economic stimulus can offset the effects of altered consumer behavior. Determining when and how to roll back non-pharmaceutical interventions in a manner which is safe and responsible is of the utmost importance. The initial lockdown period was necessary to avoid overwhelming our hospital systems, but the current situation calls for a more nuanced approach. Moving forward, the U.S. must balance reducing the risk of spread with the adverse economic consequences of millions of furloughed and unemployed people. To inform this process, we have curated a \textbf{machine-readable dataset} that aggregates data from governmental, journalistic, and academic sources on the county level, including aggregated NPI implementation dates. While most of these sources are freely available, there is substantial work to align them and put them in a standard format that enables analysis. In addition to time-series data from \cite{dongInteractiveWebbasedDashboard2020a}, which details COVID-19 per-county infections and deaths, our dataset contains more than 300 variables that summarize population estimates, demographics, ethnicity, housing, education, employment and income, climate, transit scores, and healthcare system-related metrics. 
Further, we source a significant number of journal articles and official statements detailing implementation dates of interventions, including mask mandates, stay-at-home orders, school closures, and restaurant and entertainment venue closures \cite{carterItNCBars, carroll62CoronavirusCases2020, hansenAlabamaGovernorCloses, rettnerArkansasLatestUpdates, mookBurgumClosesBars, andersonCoronavirusFloridaGovernor, ketvstaffCoronavirusNebraskaIowa2020, sogaCoronavirusVermontGovernor, kiteCoronavirusShutsKansas, feuerCoronavirusNYNJ2020, hiattAddsStayatHomeOrder2020, svitekGovGregAbbott2020, kcciGovReynoldsIssues2020, capuanoMissouriNoDiningin2020, tobinKentuckyDerbyPostponed, hnnLISTHereHow, LIVEUPDATESHere, helminiakLocalBarsResaurants2020, wgmeMaineBarsRestaurants2020, MapCoronavirusSchool2020a, ganucheauMayorsScrambleKnow2020, associatedpressMontanaExtendsSchool, etehadNevadaOrdersAll2020, whdhNewHampshireBans, krqemediaNewRestrictionsNew2020, webbPhoenixTucsonOrder2020, kludtRestaurantsBarsShuttered2020, nunesRIRestaurantsClosed2020, mervoshSeeWhichStates2020, ruskinStateBansRestaurant2020, grossStateRestrictBars, axiosStatesOrderBars, kelmanTennesseeGovernorOrders, leeTheseStatesHave, semeradUtahOrdersRestaurants, spiegelVirginiaRestaurantsBars2020, widaWhichStatesHave, star-tribuneWyomingCancellationsClosures, speciaWhatYouNeed2020}, as well as reopenings \cite{xu2020epidemiological, noauthor_coronavirus_nodate, morlan_restaurants_nodate, louie_monterey_2020, noauthor_home_nodate, noauthor_coronavirus_nodate-1, angeles_county_2020, noauthor_reopening_nodate, noauthor_san_nodate, amanda_del_castillo_revised_2020, yeager_update_nodate, noauthor_phase_nodate, noauthor_phase_nodate-1, noauthor_ron_nodate, noauthor_plan_nodate, noauthor_georgia_nodate, idaho_stages_nodate, noauthor_illinois_nodate, noauthor_update_nodate, noauthor_iowa_nodate, noauthor_gov_nodate, noauthor_updated_2020, blair_road_nodate, noauthor_whats_nodate, noauthor_baltimore_nodate, 
noauthor_covid-19_nodate, noauthor_covid-19_nodate-1, noauthor_coronavirus_nodate-2, noauthor_news_nodate, noauthor_prince_nodate, noauthor_safety_nodate, noauthor_minnesotas_nodate, noauthor_show_nodate, noauthor_see_nodate, noauthor_governors_nodate, noauthor_details_2020, russell_iowa_nodate, noauthor_governor_nodate, wolf24PennsylvaniaCounties2020, doxseyAnotherCOVIDCluster, COVID19InformacionGeneral, COVID19NEWSPolk, tierneyCOVID19UpdateReopening2020, COVID19UpdatesClackamas2020, raymondEverythingWeDon2020, GovHenryMcMaster2020, GovHenryMcMaster2020a, GovWolf122020, tierneyGovernorAnnouncesOrders2020, GovernorCuomoAnnounces2020, GovernorCuomoAnnounces2020a, GovernorCuomoAnnounces2020b, HundredsRestaurantsReopen, dormanKnoxCountyWill, murphyNCGettingReady, NewMexicoAllow, NorthDakotaCafes, NYCRestaurantReopening, OregonCoronavirusInformation, misincoPennsylvaniaReopeningCounties, twitterPhaseStartsTuesday, ReopeningMarionCounty, cooperRoadmapReopeningNashville, insleeSafeStartWashington2020, SaferHomePhase, richardShelbyCountyBegin, brownStateOregonNewsroom, kassahunSullivanCountyHealth2020, illersTennesseeReleasesNew, cowleyThese31Oregon, herbertUtahLeadsTogether2020, WestVirginiaStrong, WesternNewYork, WhatAllowedCounties2020, skrumWisconsinCountyList2020, WolfAnnouncesNext2020, WyomingCountyCommissioners} . Finally, we aggregate out-of-home activity data from \cite{safeGraph} and \cite{aktay2020google} in each county, possibly measuring compliance with the aforementioned restrictions. Fig.~\ref{fig:foot-traffic} shows a sample of out-of-home activity for selected counties. We hope that this dataset proves to be a useful resource to the community, facilitating important research on epidemiological forecasting. In particular, a machine learning approach to identify highly relevant factors may inform a graduated rollback of isolation measures and travel restrictions. 
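One preprocessing step that such analyses typically require is aligning each county's case time series with that county's intervention dates, so that trajectories can be compared on a common ``days since intervention'' axis. The sketch below illustrates this with toy records; the field names (\texttt{fips}, \texttt{stay\_at\_home}) and values are illustrative assumptions, not the repository's exact schema.

```python
from datetime import date

# Toy stand-ins for the dataset's case time series and NPI dates; the
# field names and values here are illustrative, not the exact schema.
cases = [
    {"fips": 53033, "date": date(2020, 3, 20), "cumulative_cases": 693},
    {"fips": 53033, "date": date(2020, 3, 27), "cumulative_cases": 1577},
    {"fips": 36061, "date": date(2020, 3, 27), "cumulative_cases": 23112},
]
stay_at_home = {53033: date(2020, 3, 23), 36061: date(2020, 3, 22)}

# Re-index each observation by the number of days elapsed since the
# county's stay-at-home order took effect (negative = before the order).
aligned = [
    {**row, "days_since_order": (row["date"] - stay_at_home[row["fips"]]).days}
    for row in cases
]
```

Re-indexing in this way lets counties that locked down on different calendar dates be compared at the same stage of their response.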
\begin{table*}[ht] \centering \rowcolors{2}{gray!25}{white} \begin{tabular}{|l|l|l|} \hline \rowcolor{gray!50} Data Type & Source & Availability \\ COVID-19 Infections COVID-19 Related Deaths Time-series & \cite{dongInteractiveWebbasedDashboard2020a} & ---\\ 2020 Date of COVID-19 Interventions, \textit{e.g.}~stay-at-home order & \cite{carterItNCBars, carroll62CoronavirusCases2020, hansenAlabamaGovernorCloses, rettnerArkansasLatestUpdates, mookBurgumClosesBars, andersonCoronavirusFloridaGovernor, ketvstaffCoronavirusNebraskaIowa2020, sogaCoronavirusVermontGovernor, kiteCoronavirusShutsKansas, feuerCoronavirusNYNJ2020, hiattAddsStayatHomeOrder2020, svitekGovGregAbbott2020, kcciGovReynoldsIssues2020, capuanoMissouriNoDiningin2020, tobinKentuckyDerbyPostponed, hnnLISTHereHow, LIVEUPDATESHere, helminiakLocalBarsResaurants2020, wgmeMaineBarsRestaurants2020, MapCoronavirusSchool2020a, ganucheauMayorsScrambleKnow2020, associatedpressMontanaExtendsSchool, etehadNevadaOrdersAll2020, whdhNewHampshireBans, krqemediaNewRestrictionsNew2020, webbPhoenixTucsonOrder2020, kludtRestaurantsBarsShuttered2020, nunesRIRestaurantsClosed2020, mervoshSeeWhichStates2020, ruskinStateBansRestaurant2020, grossStateRestrictBars, axiosStatesOrderBars, kelmanTennesseeGovernorOrders, leeTheseStatesHave, semeradUtahOrdersRestaurants, spiegelVirginiaRestaurantsBars2020, widaWhichStatesHave, star-tribuneWyomingCancellationsClosures, speciaWhatYouNeed2020} & --- \\ 2020 Date of COVID-19 Intervention Rollbacks & \cite{xu2020epidemiological, noauthor_coronavirus_nodate, morlan_restaurants_nodate, louie_monterey_2020, noauthor_home_nodate, noauthor_coronavirus_nodate-1, angeles_county_2020, noauthor_reopening_nodate, noauthor_san_nodate, amanda_del_castillo_revised_2020, yeager_update_nodate, noauthor_phase_nodate, noauthor_phase_nodate-1, noauthor_ron_nodate, noauthor_plan_nodate, noauthor_georgia_nodate, idaho_stages_nodate, noauthor_illinois_nodate, noauthor_update_nodate, noauthor_iowa_nodate, 
noauthor_gov_nodate, noauthor_updated_2020, blair_road_nodate, noauthor_whats_nodate, noauthor_baltimore_nodate, noauthor_covid-19_nodate, noauthor_covid-19_nodate-1, noauthor_coronavirus_nodate-2, noauthor_news_nodate, noauthor_prince_nodate, noauthor_safety_nodate, noauthor_minnesotas_nodate, noauthor_show_nodate, noauthor_see_nodate, noauthor_governors_nodate, noauthor_details_2020, russell_iowa_nodate, noauthor_governor_nodate, wolf24PennsylvaniaCounties2020, doxseyAnotherCOVIDCluster, COVID19InformacionGeneral, COVID19NEWSPolk, tierneyCOVID19UpdateReopening2020, COVID19UpdatesClackamas2020, raymondEverythingWeDon2020, GovHenryMcMaster2020, GovHenryMcMaster2020a, GovWolf122020, tierneyGovernorAnnouncesOrders2020, GovernorCuomoAnnounces2020, GovernorCuomoAnnounces2020a, GovernorCuomoAnnounces2020b, HundredsRestaurantsReopen, dormanKnoxCountyWill, murphyNCGettingReady, NewMexicoAllow, NorthDakotaCafes, NYCRestaurantReopening, OregonCoronavirusInformation, misincoPennsylvaniaReopeningCounties, twitterPhaseStartsTuesday, ReopeningMarionCounty, cooperRoadmapReopeningNashville, insleeSafeStartWashington2020, SaferHomePhase, richardShelbyCountyBegin, brownStateOregonNewsroom, kassahunSullivanCountyHealth2020, illersTennesseeReleasesNew, cowleyThese31Oregon, herbertUtahLeadsTogether2020, WestVirginiaStrong, WesternNewYork, WhatAllowedCounties2020, skrumWisconsinCountyList2020, WolfAnnouncesNext2020, WyomingCountyCommissioners} & --- \\ March, 2020 Out-of-home Activity Time-series & SafeGraph \cite{safeGraph} & --- \\ 2018 Population Estimates & Census \cite{PopEst} & 97-100\% \\ 2014-2018 Educational Attainment & Census \cite{EduAttain} & 100\% \\ 2018 Estimated Poverty Level & USDA~\cite{EstPoverty} & 97\%\\ 2018 Employment and Income & USDA~\cite{EmpInc} & 99\% \\ 2019 Precipitation and Temperature & NOAA~\cite{PrecTemp} & 86\% (37.8\% imputed)\\ 2010 Housing and Density & Census \cite{HouseLand} & 99\% \\ 2018 Age Group Demographics & Census \cite{HouseDemo} & 
97\%\\ 2018 Household Demographics & Census \cite{HouseDemo} & 25\%\\ 2018 Ethnic Group Demographics & Census \cite{Ethnic} & 97\%\\ 2019 Healthcare Capacity: Physicians, NPs, PAs & AAMC, KFF \cite{AAMCPhy, PrimaryCareDoctorsSource, SpecialistDoctorsSource} & 86-97\%\\ 2019 Healthcare Capacity: ICU Beds & KFF \cite{hospitals, ICUSource} & 92-97\%\\ 2019 Public Transit Scores & CNT \cite{PubTrans} & 95\%\\ 2016 Crime Rates & DOJ \cite{CrimeStats} & 97\%\\ \hline \end{tabular} \vspace{1em} \caption{Data Source Descriptions and Percentage of Counties Included For Static Data} \label{tab:data-summary} \end{table*} \section{Related Work} Because of the rapidly-evolving nature of the COVID-19 pandemic, the response from the data science community is ongoing and in flux. Here, we review related efforts which were influential at the outset of the pandemic. As new articles are published every day, this is by no means an exhaustive review. Despite significant public interest, government agencies have yet to publish a county-level data source for cases of COVID-19. The World Health Organization has gathered self-reported data on the national level \cite{WHOCoronavirusDisease}, while the United States Center for Disease Control reports state-level infection and fatality rates \cite{cdcCoronavirusDisease20192020}. However, \cite{dongInteractiveWebbasedDashboard2020a, timesWeReSharing2020} continue to maintain the most up-to-date and reputable collections of COVID-19 cases across the United States, hosted by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University and the New York Times, respectively. These efforts focus on current, hard data gathered from local government publications and reputable journalistic sources. Other efforts focus on gleaning related information from a variety of sources, including social media. 
\cite{chenCOVID19FirstPublic2020} tracks COVID-19-related tweets in an effort to understand the conversation and possible misinformation surrounding the pandemic. Johns Hopkins University, University of Maryland, and George Washington University have also started a collaboration to track COVID-19 through social media \cite{socialMedaForPublicHealth}. A large body of work has focused on using machine learning and data science tools to understand the virus. \cite{zhangEstimationReproductiveNumber2020} uses data from the Diamond Princess cruise ship, where an early outbreak took place, to estimate the reproductive number $R_0$ of the virus. \cite{santoshAIDrivenToolsCoronavirus2020} implements active learning methods to detect new outbreaks of the virus, incorporating new data types without having to retrain. \cite{fongCompositeMonteCarlo2020, fongFindingAccurateEarly2020} focus on understanding the current pandemic in its early stages, compensating for the inherent uncertainty in a novel disease. Finally, \cite{flaxman2020estimating} applies a data-driven approach to understand the effect of non-pharmaceutical interventions (NPIs) on the reproductive ratio of COVID-19 in European countries. \section{Dataset} We describe the structure of our dataset, which includes each component in its raw form as well as a narrowed-down, machine-readable form conducive to a machine-learning approach. Table~\ref{tab:data-summary} summarizes the sources and availability for each type of data, and a full description of each variable can be found in our repository. \subsection{County Descriptors} \label{sec:county-descriptors} We populate a CSV file with over 300 variables for 3220 county-equivalent areas (as well as the fifty states, the District of Columbia, and the whole United States) with numerous types of data, including population, education, economy, climate, housing, healthcare capacity, public transit, and crime statistics.
Each area is uniquely identified by its Federal Information Processing Standard (FIPS) code, a five-digit number where the first two digits designate the state and the last three digits designate the county-equivalent. Our sources include the United States Census Bureau \cite{PopEst, EduAttain, HouseDemo, Ethnic}, the United States Department of Agriculture (USDA) Economic Research Service \cite{EstPoverty, EmpInc}, the National Oceanic and Atmospheric Administration (NOAA) \cite{PrecTemp}, the Association of American Medical Colleges (AAMC) \cite{AAMCPhy}, the Henry J.~Kaiser Family Foundation (KFF) \cite{PrimaryCareDoctorsSource, SpecialistDoctorsSource, ICUSource}, the Center for Neighborhood Technology (CNT) \cite{PubTrans}, and the Bureau of Justice Statistics, Department of Justice (DOJ) \cite{CrimeStats}. Perhaps most relevant to the ongoing effort to mitigate the effects of COVID-19 in the U.S.~is county-level healthcare system capacity. The dataset includes detailed counts for each type of medical practitioner as well as the number of Intensive Care Unit beds in each county, shown in Fig.~\ref{fig:icu-beds}. For the most part, these basic descriptive variables are unaltered from their original state. Where appropriate, missing values have been imputed with the state-wide average, as detailed in Table~\ref{tab:data-summary}. \begin{figure*} \centering \includegraphics[width=0.99\linewidth]{images/infections_la_dc.pdf} \caption{The daily number of infections for Los Angeles County and the District of Columbia \cite{dongInteractiveWebbasedDashboard2020a}. The 7-day average is shown in red. For simplicity, we only show the implementation dates for two interventions and their rollbacks, stay-at-home orders and restaurant closures, since many interventions overlap.
Note that the scales differ by a factor of 20.} \label{fig:infections-la-dc} \end{figure*} \subsection{Interventions} Our dataset describes mitigation efforts taken at the state level, including stay-at-home advisories, bans on large gatherings, public school closures, and restaurant and entertainment venue closures. We also include the rollback of these mitigation efforts up to August 2, 2020. For machine readability, we provide each date of implementation as a Gregorian ordinal, \textit{i.e.}~the integer number of days starting at January 1, Year 1 CE, consistent with standard software libraries. Moreover, these data are provided according to the same county-level row ordering as our county descriptor data (see Sec.~\ref{sec:county-descriptors}). Interventions made at the state level have been assigned to each county in that state, and we include county-level interventions wherever possible. An intervention is designated \texttt{NA} if the county or state has not yet enacted it. For the most part, county-level NPI implementation was gathered through local newspapers and government websites. The full list of these sources can be found at \url{https://github.com/JieYingWu/COVID-19_US_County-level_Summaries/tree/master/data}. Since rollbacks were implemented in a more staggered fashion, we started from the dates provided by the IHME database, which contains state-level rollback information~\cite{xu2020epidemiological}. While IHME records the categories for stay-at-home orders and gatherings, it separates businesses only into essential and non-essential. We therefore use the dates for non-essential business reopening as the rollback dates for restaurants and gyms/entertainment venues. As schools were mostly still on summer vacation by August 2, we did not collect school reopenings. To refine the state-level data to the county level, we use reopening information from the New York Times~\cite{leeSeeHowAll2020}, which includes some counties that do not follow state guidelines.
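To illustrate the two machine-readable conventions used throughout the dataset, the following sketch uses Python, whose standard \texttt{datetime} module implements the same proleptic Gregorian ordinal. The specific date and county shown are illustrative, not taken from the dataset:

```python
from datetime import date

# Gregorian ordinal encoding used for the intervention dates:
# day 1 is January 1 of year 1 CE (proleptic Gregorian calendar).
stay_at_home = date(2020, 3, 19)           # illustrative date only
ordinal = stay_at_home.toordinal()
assert ordinal == 737503
assert date.fromordinal(ordinal) == stay_at_home

# FIPS codes: the first two digits designate the state,
# the last three the county-equivalent.
fips = "06037"                             # 06 = California, 037 = Los Angeles
state_code, county_code = fips[:2], fips[2:]
print(state_code, county_code)             # 06 037
```

Storing dates as ordinals keeps the intervention files purely numeric, so they can be loaded and differenced like any other column.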
We also rely on the New York Times data to fix discrepancies, such as when the opening of the first non-essential businesses did not include restaurants or gyms. Additionally, since counties are generally driven to implement a different rollback schedule if they have an unusual number of COVID-19 cases, we checked the county government websites of those counties where there was a drastic uptick in cases in the previous months. As different counties have reopened at different levels, such as reopening restaurants at 25\% outdoor seating, we count any amount of reopening as a rollback of that NPI. As the policies surrounding COVID-19 management are continually changing, we appreciate any contributions to the repository to keep it up-to-date, especially as school reopening decisions come into effect. \subsection{Out-of-home Activity and Mobility} We have aggregated point-of-interest location data gathered from users' smartphones to show out-of-home activity, using raw data from \cite{safeGraph}. For privacy and IP reasons, our dataset does not include user location data in its raw form but rather in several time-series files summarizing county-level activity. Fig.~\ref{fig:foot-traffic} shows the time-series for selected counties which have a high incidence of COVID-19 cases. The decline in overall activity on March 12 corresponds to increased media attention and stay-at-home advisories in those areas. At the same time, a spike in grocery store visits points to a panic-buying spree which has since subsided. Additionally, we include data from Google mobility reports \cite{aktay2020google}, which may correlate with changes in the reproductive ratio of COVID-19. These include aggregated and anonymized data detailing the percent change in the number of visits to six location types compared to baseline: grocery and pharmacy; parks; residential; retail and recreation; transit stations; and workplaces.
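As a minimal sketch of working with the time-series files (plain Python, toy numbers only), a cumulative series can be differenced into daily values and smoothed with a trailing 7-day moving average, the same presentation used in Fig.~\ref{fig:infections-la-dc}:

```python
# Toy cumulative series for one county (illustrative numbers only).
cumulative = [0, 1, 1, 4, 9, 16, 30, 55, 90, 140]

# Daily new counts are first differences of the cumulative series.
daily = [b - a for a, b in zip(cumulative, cumulative[1:])]
print(daily)  # [1, 0, 3, 5, 7, 14, 25, 35, 50]

# Trailing 7-day moving average (shorter window at the start of the series).
def moving_average(xs, window=7):
    return [sum(xs[max(0, i + 1 - window):i + 1]) / min(i + 1, window)
            for i in range(len(xs))]

smoothed = moving_average(daily)
```

The same differencing and smoothing apply to the activity and mobility series described above.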
Visits to residential addresses likely reflect individuals staying at home, with the obvious exception of gatherings that occur at residential addresses, either for work or social reasons. \subsection{Disease Spread} Finally, we provide time-series data for the cumulative number of COVID-19 confirmed cases and related deaths, from \cite{dongInteractiveWebbasedDashboard2020a}. These data begin on January 22, 2020. It should be noted that epidemiological modeling efforts may want to consider the uncertainty surrounding U.S. testing \cite{scherUSSeverelyUndertesting}, on which these data are based. At the time of this writing, efforts to improve the availability of COVID-19 tests are ongoing, but the current strategies prioritize patients with severe symptoms. Thus, modeling efforts may wish to take into account random subsampling of the true population, where untested individuals still spread the virus. This is especially true given that nearly half of all COVID-19 infections may be asymptomatic \cite{CoronavirusCasesSymptoms2020}. Fig.~\ref{fig:infections-la-dc} shows the daily number of infections in Los Angeles County, CA, and the District of Columbia according to \cite{dongInteractiveWebbasedDashboard2020a}, as well as the implementation dates for select interventions: ``stay at home'' and ``restaurant dine-in.'' In some areas, rollbacks have coincided with a resurgence of the virus, reaching levels of new daily infections far greater than in the initial outbreak, as in Los Angeles, whereas other areas have rolled back NPIs and experienced only a small or negligible increase, such as the District of Columbia (see Fig.~\ref{fig:infections-la-dc}). \section{Discussion} The resurgence of COVID-19 in some areas but not others reinforces the need for continued vigilance everywhere.
In some sense, the United States has experienced not a single outbreak but multiple outbreaks, both simultaneous and non-simultaneous, with differing characteristics in terms of transmission rate, mobility, and response to NPIs. Although pharmaceutical interventions, such as a vaccine, or natural herd immunity may eventually mitigate the likelihood of an outbreak independent of public behavior, these eventualities are still far on the horizon. In the meantime, the possibility of a resurgence, which may overwhelm healthcare systems, is ever-present. The number of individuals who will ultimately be infected, and the number of deaths that will result, depend on the interventions enforced now. At the same time, the economic impact of these interventions, which is not evenly distributed across counties, cannot be ignored. This impact depends on the characteristic qualities of each area, which differ greatly between, for example, New York and Silicon Valley. The former has a large population working in the entertainment and service industries, who will need financial support during quarantine, whereas the latter is dominated by large tech firms, whose employees can adapt to working from home. By providing the socioeconomic attributes of each county, the spread of COVID-19 confirmed cases, and the ongoing response in a machine-readable format, we hope to inform the decisions made to most effectively protect each area. \section*{Acknowledgment} We thank all our sources, especially the JHU CSSE COVID-19 Dashboard for making their data public, and SafeGraph for providing researchers with data for COVID-19-related work. \bibliographystyle{IEEEtran}
\section{Introduction} Cantilever mechanical resonators have found application in a wide range of sensing and detection schemes including mass sensing~\cite{Moser,Lavrik}, atomic (magnetic) force microscopes~\cite{Giessiblrmp}, chemical sensors~\cite{Nordstrom}, torque magnetometry~\cite{Rossel1996,Willemin1998,Rossel1998} and torque differential magnetometry~\cite{Kamra} (also known as cantilever magnetometry~\cite{Stipe,Weber}). Most of these measurement schemes are based on detection of changes in either the oscillation amplitude at a fixed drive or the resonance frequency of the mechanical resonator. This change in amplitude or resonance frequency can in turn be attributed to a change in the effective mass ($m_{\textrm{eff}}$) or the effective spring constant ($k_{\textrm{eff}}$) of the mechanical resonator induced by the physical parameter to be detected (henceforth called {\it perturbation}). Thus, to achieve a high sensitivity, small $m_{\textrm{eff}}$ and $k_{\textrm{eff}}$ are required to obtain a large frequency shift for a given perturbation. The trade-offs include small specimen size and exclusive operation at low temperatures and pressures. Further, the resonant response of these resonators is typically recorded using a mechanical piezoelectric drive and optical detection of the oscillation amplitude~\cite{Stipe}. This optical detection makes the setup relatively expensive and sensitive to external disturbances. Complementary, purely electrical detection schemes have also been investigated including piezoresistive resonators~\cite{Tortonese,Eriksson} and piezoelectric quartz tuning forks (TFs)~\cite{Giessiblnanotech,Rychenrsi,Todorovic,Unterreithmeier}. The latter resonators are particularly attractive as they are commercially available and can be integrated on small footprints. 
In addition to high quality factors of about 10000 under ambient conditions, they offer several other desirable properties such as high temperature stability, low sensitivity to external mechanical disturbances, and robustness~\cite{Bottom}. Further, the relatively large $k_{\textrm{eff}}$ of these resonators offers several advantages owing to the smaller oscillation amplitude~\cite{Giessiblnanotech} and larger linear operation range~\cite{Kamra} while preserving the high sensitivity, since the drawback of smaller frequency shifts due to a large $k_{\textrm{eff}}$ is compensated by a high quality factor resulting in better frequency resolution. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{ctmvstdm.pdf} \caption{Torque differential magnetometry~\cite{Stipe,Kamra} using a cantilever (a) and a quartz TF (b). On the right-hand side, the mechanical and electrical equivalent circuits are shown. A magnetic specimen (light blue) is attached to the tip of the resonator (red), and the former experiences a torque under the influence of an applied magnetic field. This torque translates to a magnetic-field-dependent effective stiffness (capacitance), leading to a magnetic-field-dependent resonance frequency of the cantilever (TF). Quantities that depend on the applied magnetic field are shown in blue. $R_m$ represents the magnetic contribution to the dissipation. The cantilever setup requires a laser interferometer for measurements, while the TF enables an all-electrical measurement scheme.} \label{ctmvstdm} \end{figure} We have employed quartz TFs to perform {\it quantitative} measurements via torque differential magnetometry (TDM)~\cite{Stipe,Kamra}. Our setup improves upon the existing TDM capabilities~\cite{Stipe,Weber} by allowing `large' specimens weighing up to several tens of $\mu$g, operation over a broad temperature and pressure range, higher sensitivity for detecting the magnetic contribution to dissipation, and a simple all-electrical implementation.
We give a brief introduction to the quartz TF as a mechanical resonator in Sec. \ref{tfresonator}. TFs have already been used for high-resolution microscopy~\cite{Giessiblnanotech,Rychenrsi} and alternating gradient magnetometry~\cite{Todorovic}. However, in all these measurements, only qualitative changes in the surface topography or magnetic moment were of interest. In contrast, we demonstrate, in a proof-of-principle experiment, a quantitative analysis of the anisotropy field and magnetization of a thin iron wire with high precision. The TDM method entails attaching the magnetic specimen to the tip of the TF and recording the resonance frequency as a function of an applied magnetic field (see Fig. \ref{ctmvstdm}). We detail optimal mounting procedures in Sec. \ref{setupsec} and experimental spectroscopy techniques in Sec. \ref{measurement}. In TDM, the magnetic specimen experiences a mechanical torque, which acts as an effective force $F_m$ [see Fig. \ref{ctmvstdm} (a)], under the influence of an externally applied magnetic field. This additional restoring force translates to an additional magnetic-field-dependent stiffness ($k_m$). Since the resonance frequency $f_r$ of the mechanical resonator depends on the total effective stiffness constant $k_{\textrm{eff}} = k_{m} + k_{\textrm{el}}$ via $f_r = \frac{1}{2\pi} \sqrt{k_{\textrm{eff}}/m_{\textrm{eff}}}$, the magnetic torque imposes a magnetic-field-dependent resonance frequency shift ($\delta f_r$). In a TF [Fig. \ref{ctmvstdm} (b)], the effective stiffness is inversely related to the capacitance $C = C_m C_{\textrm{el}} / (C_m + C_{\textrm{el}})$ in the equivalent circuit, $C_m$ being magnetic field dependent, which in turn determines the resonance frequency as detailed in Sec. \ref{tfresonator}. Further, there is a magnetic-field-dependent contribution to the dissipation, represented by the resistance $R_m$ in the equivalent circuit. In Sec.
\ref{tdmsec}, we detail how the observed frequency shift can be used to quantify the magnetic properties of the specimen~\cite{Kamra}. \section{Tuning Fork Resonator}\label{tfresonator} Commercially available quartz TFs are designed to have a perfectly anti-symmetric resonance mode, in which one prong mirrors the other, at a resonance frequency of 32768 Hz. Like any other mechanical oscillator~\cite{Morse}, piezoelectric quartz TFs can be modeled as an effective mass and spring system with friction. Due to piezoelectricity, there is a direct relation between the deflection $x$ of the effective mass and the charge accumulated across the electrodes, $Q = \alpha x$, with $\alpha$ the electromechanical coupling constant. Comparing the mechanical setup with the lumped-element $LCR$ equivalent electrical resonance circuit allows us to identify the following relations~\cite{Rychenphd,Friedt} (see Fig. \ref{ctmvstdm}): \begin{equation}\label{equiv} L = \frac{m_{\textrm{eff}}}{\alpha^2}, \quad \frac{1}{C} = \frac{k_{\textrm{eff}}}{\alpha^2}, \quad R = \frac{\gamma_{\textrm{eff}}}{\alpha^2}. \end{equation} Here $C^{-1} = C_{\textrm{el}}^{-1} + C_m^{-1}$, $R = R_{\textrm{el}} + R_m$, and $-\gamma_{\textrm{eff}} = -\gamma_{\textrm{el}} - \gamma_{m}$ is the proportionality constant between the friction force and the velocity in the mechanical model. The negative sign emphasizes that the friction acts against the motion. In addition to the {\it motional} $LCR$ equivalent circuit, the TF has a physical shunt capacitance $C_s$, which acts in parallel to the motional branch, leading to the Butterworth-van Dyke (BvD) equivalent circuit~\cite{Rychenphd,Friedt}, as shown in the lower right panel of Fig. \ref{ctmvstdm}.
The admittance $Y(\omega)$ for this circuit exhibits a resonance at $\omega_r = 2 \pi f_r = 1/\sqrt{LC}$ and an anti-resonance at $\omega_{\textrm{ar}} = 2 \pi f_{\textrm{ar}} = 1/\sqrt{L[C C_s /(C + C_s) ]}$, and is given by \begin{eqnarray} Y(\omega) = \frac{\tilde{I}(\omega)}{\tilde{V}(\omega)} & = & \frac{1}{R + \frac{1}{i \omega C} + i \omega L} + i \omega C_s, \end{eqnarray} where $\tilde{I}$ and $\tilde{V}$ denote ac quantities. Close to the resonance frequency, the admittance can be recast in the form of a complex Lorentzian, characteristic of any resonance: \begin{eqnarray}\label{lorentzian} Y(f) & = & \frac{A_0 \Delta f \left( \Delta f - 2 i (f - f_r) \right)}{(\Delta f)^2 + 4 (f - f_r)^2} + i 2 \pi f_r C_s, \end{eqnarray} with $A_0 = 1/R$ and $\Delta f = R / (2 \pi L)$. A high quality factor $Q = f_r/\Delta f$ is desirable as it quantifies the precision in the measurement of the resonance frequency. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{admcurves.pdf} \caption{Admittance magnitude (left axis, blue curves) and phase (right axis, green curve) for a TF removed from its casing under different environments. Only one phase curve, corresponding to $T=300$\,K and $p = 5$\,mbar, is shown to avoid crowding in the figure.} \label{admcurves} \end{figure} The measured magnitude and phase of the admittance are shown in Fig. \ref{admcurves} for a TF (casing removed) at ambient conditions (squares), at 300 K and 5 mbar pressure in a helium environment (triangles), and at 10 K and 5 mbar pressure in helium exchange gas (circles). Fitting the admittance magnitude recorded at 300 K and 5 mbar pressure with a complex Lorentzian [Eq. (\ref{lorentzian})] yields $R_{\textrm{el}} = 39 ~\textrm{k}\Omega$, $L = 8400 ~\textrm{H}$, $C_{\textrm{el}} = 2.8 ~\textrm{fF}$, and $C_s = 1.4 ~\textrm{pF}$. The quality factor was found to increase by about an order of magnitude in going from ambient conditions to 10 K and 5 mbar.
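As a numerical cross-check, the fitted motional-branch values quoted above directly determine the resonance frequency, linewidth, and quality factor; a short Python sketch:

```python
import math

# Fitted motional-branch values at 300 K and 5 mbar (see text).
R = 39e3      # ohm
L = 8400.0    # henry
C = 2.8e-15   # farad (here C ~ C_el, since C_m only enters with a specimen)

f_r = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # series resonance frequency
delta_f = R / (2.0 * math.pi * L)               # linewidth
Q = f_r / delta_f                               # quality factor

print(f"f_r = {f_r:.0f} Hz, delta_f = {delta_f:.2f} Hz, Q = {Q:.0f}")
```

The resulting $f_r$ lies within a fraction of a percent of the nominal 32768 Hz, with a sub-hertz linewidth and a quality factor of order $10^4$, consistent with operation at 5 mbar.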
Quality factors above $10^6$ have been reported at still lower temperatures and pressures~\cite{Gomez}. \section{Setup}\label{setupsec} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{setup.pdf} \caption{Schematic showing the electronics used in the setup. Thick lines denote GPIB connections, while the thin lines represent coaxial cables.} \label{setup} \end{figure} {\it All measurements reported below were performed under ambient conditions.} In our measurements, we perform admittance spectroscopy of the device as sketched in Fig. \ref{setup}. A lock-in amplifier is used to apply a small ac voltage $\tilde{V}(\omega)=V_1 \cos \omega t$ with amplitude $V_1=4$\,mV and angular frequency $\omega$ across the TF electrodes, and simultaneously measure the current response $\tilde{I}(\omega)$. The ratio of the current response $\tilde{I}(\omega)$ and the applied ac voltage $\tilde{V}(\omega)$ gives the complex admittance $Y(\omega)$ at the given frequency. The admittance over a certain frequency range or at the resonance frequency only may be of interest, depending on the requirements of the measurement. To obtain a high frequency resolution, we employ an Agilent 33250A function generator, which provides the reference signal to a Stanford Research SR830 lock-in amplifier, whose voltage output and current input ports are respectively used for applying the voltage drive and measuring the current response (see Fig. \ref{setup}). The high impedance of the TF under all conditions, except at very low temperature and pressure, enables a direct current measurement using the SR830 current input port (impedance $1 ~\mathrm{k}\Omega$). A Lakeshore 455 DSP gaussmeter was used to control the magnetic field in a home-built electromagnet. All data acquisition and the phase-locked loop (PLL) implementation described in Sec. \ref{measurement} were achieved using LabVIEW. Taken together, the setup allows us to record the admittance of the TF as a function of the applied magnetic field.
\begin{figure}[htb] \centering \subfloat[]{\includegraphics[height = 5cm]{TF.pdf}} \ \subfloat[]{\includegraphics[height = 5cm]{TFmounted.pdf}} \caption{(a) Picture of a TF used for performing torque differential magnetometry. The two prongs with the non-magnetic electrodes are visible at the top. The base of the quartz TF is embedded in a magnetic base visible at the bottom. (b) Picture depicting a TF cemented onto a glass slide including the magnetic base. Using this mounting technique, the unwanted magnetic field dependence is suppressed.} \label{pictures} \end{figure} \begin{figure}[htb] \centering \includegraphics[width = 8.5cm]{confcomp.pdf} \caption{Resonance frequency shift vs. applied magnetic field for different mounting configurations of the TF. The ``asymmetric'' configuration refers to a TF mounted with a non-magnetic specimen on one prong, and the ``symmetric'' configuration refers to a TF operated without a specimen. For the asymmetrically loaded TF without appropriate mounting of the base, we find a strong W-shaped magnetic field dependence (blue dashed line). In contrast, the symmetric configuration (green dotted line) shows no significant magnetic field dependence. We also show an asymmetrically loaded TF, where we have removed the base (red dash-dotted line). Here, no magnetic field dependence is observable, but the noise in the data is increased due to the lower quality factor of the TF. Mounting the TF as shown in Fig. \ref{pictures}(b) allows for asymmetric loading while no magnetic field dependence is observed (solid black line). The curves are offset for clarity.} \label{confcomp} \end{figure} The TFs employed come in an evacuated casing, which we remove to gain access to the prongs required for mounting the specimen as sketched in Fig. \ref{ctmvstdm}. To this end, we lathe off the top cap of the TF so that the prongs, along with their electrical contacts, remain connected to the base (see Fig. \ref{pictures}(a)).
We find, experimentally, that the casing and the base are magnetic. Note that, as long as the actual electrodes deposited on the prongs are non-magnetic, which is indeed the case, this should not obstruct the experiments. To test this hypothesis, we recorded the resonance frequency of a ``symmetric'' (no specimen attached) and an ``asymmetric'' (loaded with a non-magnetic specimen) TF as a function of the applied magnetic field strength. As shown in Fig. \ref{confcomp}, the asymmetric configuration (blue dashed line) exhibits a W-shaped magnetic field dependence, while the symmetric configuration (green dotted line) shows no significant shift in the resonance frequency on changing the applied magnetic field. \footnote{In principle, the symmetric configuration does exhibit a very weak magnetic field dependence of the resonance frequency, in the range of 1 mHz. This is due to the small but finite asymmetry in any TF. Hence the magnetic field dependence of the resonance frequency can be used as a method to detect asymmetry in TFs.} For comparison, Fig. \ref{confcomp} also shows an asymmetrically loaded TF removed from its magnetic base, which also shows no magnetic field dependence. This behavior can be understood as follows. The mirrored motion of the two prongs in the ``anti-symmetric'' resonance mode of a perfectly symmetric (both prongs identical) TF ensures that the center of mass is at rest. This implies that exciting the anti-symmetric resonance mode does not excite the center of mass motion, and vice versa. Hence this anti-symmetric mode is completely decoupled from the TF's center of mass motion~\cite{Rychenphd}. This decoupling, in a symmetric TF, prevents the resonance frequency of the anti-symmetric mode from being affected by forces experienced by the TF as a whole.
However, the slight asymmetry induced on attaching the specimen to one prong leads to a small, but finite, coupling between the center of mass motion and the anti-symmetric resonance mode~\cite{Gomez}. Thus the resonance frequency depends, although weakly, on the net force (gradient) experienced by the TF in the presence of an applied magnetic field. In experiments carried out at a fixed magnetic field~\cite{Rychenrsi}, this additional force provides no further complication. However, it hinders the analysis of torque differential magnetometry data, where the TF resonance frequency needs to be recorded as a function of the applied magnetic field~\cite{Todorovic,Nicks}. Thus it is desirable to suppress this unwanted magnetic field dependence. To this end, we tested different configurations (including the so-called qPlus configuration~\cite{Giessiblnanotech}) of TFs removed from their magnetic base and glued to a substrate. Since the packaging is part of the design of the TFs~\cite{Bottom}, some of the desirable quality criteria of the TFs are compromised on removal of the base. In particular, the robustness, the reproducibility, and, most importantly, the high quality factor under moderate asymmetric loading are lost. The quality factor becomes sensitive to asymmetry~\cite{Gomez} and drops drastically even under the smallest loads (a few micrograms). An alternative approach for suppressing the unwanted magnetic field dependence is to freeze the center of mass degree of freedom. We achieved this by cementing the TF, including its base, onto a glass slide using a two-component epoxy (WIKO Alu)~\footnote{Simply gluing the TF to a substrate as in Refs. \cite{Rychenrsi,Rychenphd} was not sufficient.} [Fig. \ref{pictures} (b)]. Employing this mounting technique, the unwanted magnetic field dependence diminishes to below the measurement precision (solid black line in Fig. \ref{confcomp}).
More importantly, the quality factor of the TF is practically unchanged under loadings of up to a few tens of micrograms (Table \ref{qfactors}), a mass which cannot be achieved with micrometer-sized cantilevers. We can thereby investigate macroscopic samples and thus extend the TF capabilities demonstrated so far~\cite{Rychenphd}. \begin{table*}[tb] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Specimen mass & 0 $\mu$g & 17 $\mu$g & 35 $\mu$g & 52 $\mu$g & 69 $\mu$g & 87 $\mu$g & 104 $\mu$g \\ \hline Quality factor & 8535 & 8707 & 8433 & 7237 & 7464 & 1521 & 1698 \\ \hline \end{tabular} \caption{Quality factors of TFs mounted as depicted in Fig. \ref{pictures}(b) for various loadings. Up to 70 $\mu$g, only a small decrease in the quality factor is observed, demonstrating the possibility to investigate a wide range of magnetic specimens with this technique. The mass error in the loading is estimated at 5 $\mu$g due to the glue used to mount the specimen.} \label{qfactors} \end{table*} \section{Measurement schemes}\label{measurement} For the analysis of torque differential magnetometry, we need to measure the resonance frequency of the mechanical resonator and its amplitude at resonance. These quantities depend on the capacitance $C$ and the resistance $R$, respectively, of the lumped $LCR$ model representing the motional branch of the BvD equivalent circuit of the TF (see Fig. \ref{ctmvstdm}). The standard method for obtaining these parameters is to measure the full frequency response at every magnetic field value and analyze the result by fitting a Lorentzian [Eq. (\ref{lorentzian})]~\cite{Stipe}. We will refer to this method as {\it Lorentzian fitting}. Nevertheless, this is not the most time-efficient procedure to determine the relevant experimental parameters. To this end, we employed a phase-locked loop and a pure phase detection technique, as detailed in the following.
\begin{figure}[htb] \centering \subfloat[]{\includegraphics[width = 8cm]{concheck1.pdf}} \\ \subfloat[]{\includegraphics[width = 8cm]{concheck2.pdf}} \caption{Comparison of Lorentzian fitting and the PLL method for the determination of the resonance frequency and resistance for the iron wire specimen (see text). Both methods are found to yield the same result within the accuracy of the measurement, while the PLL method is much faster. (a) False color plot of admittance magnitude vs. frequency and applied magnetic field. The white circles denote the resonance frequency measured using the PLL method. (b) Resistance vs. applied magnetic field.} \label{concheck} \end{figure} {\it Phase-locked loop}: We have implemented a phase-locked loop~\cite{Giessiblrmp,Rychenphd} (PLL) using LabVIEW. We begin our experiment by measuring a full admittance spectrum. Using this information, we determine the resonance frequency and the corresponding (experimental) phase. In the subsequent magnetic field sweep, the frequency of the ac drive is adjusted to maintain a constant phase that corresponds to the resonance frequency. Once the PLL has adjusted the drive frequency to the current resonance frequency, the measurement of the admittance magnitude yields information on the resistance $R$: \begin{eqnarray} R & \approx & \frac{1}{|Y(f_r)|} \left(1 + \frac{\left(2 \pi f_r C_s\right)^2}{2\left| Y(f_r) \right|^2} \right). \end{eqnarray} The above approximation has been obtained under the valid assumption $|Y(f_r)| \gg 2 \pi f_r C_s$, and the value of $C_s$ is determined from the initial frequency spectroscopy and Lorentzian fitting. A comparison between the PLL technique and the Lorentzian fitting method is presented in Fig. \ref{concheck} for the iron wire specimen detailed in Sec. \ref{tdmsec} (henceforth simply called the {\it iron wire specimen}). For each magnetic field value, we measured a full frequency sweep of the admittance, which is displayed in panel (a).
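The first-order formula for $R$ above can be checked against the exact inversion of $|Y(f_r)| = |1/R + i 2\pi f_r C_s|$; a Python sketch using the element values from Sec.~\ref{tfresonator}:

```python
import math

# On resonance the motional branch is purely resistive, so
# Y(f_r) = 1/R + i*2*pi*f_r*C_s and |Y(f_r)| determines R.
f_r, C_s, R_true = 32768.0, 1.4e-12, 39e3        # values from the text

y = math.hypot(1.0 / R_true, 2.0 * math.pi * f_r * C_s)  # simulated |Y(f_r)|

# Exact inversion of |Y|^2 = 1/R^2 + (2*pi*f_r*C_s)^2 ...
x = 2.0 * math.pi * f_r * C_s / y
R_exact = 1.0 / (y * math.sqrt(1.0 - x * x))

# ... and the first-order approximation, valid for |Y| >> 2*pi*f_r*C_s.
R_approx = (1.0 / y) * (1.0 + (2.0 * math.pi * f_r * C_s) ** 2
                        / (2.0 * y ** 2))

print(R_exact, R_approx)  # both recover 39 kOhm to high accuracy
```

For these values the shunt term is about 1\% of $1/R$, so the neglected higher-order corrections are at the $10^{-8}$ level.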
Additionally, we determined the TF resonance frequency and resistance using the PLL method, finding excellent quantitative agreement between the two. Comparing the data acquisition speed of the experiment, we find that we require about 10 frequency points for a successful fitting of the Lorentzian lineshape. In contrast, we need only 3 to 6 admittance measurements using our PLL algorithm to obtain the same information. Thus, assuming the same measurement bandwidth, we have improved the measurement speed by a factor of about two compared to the full frequency analysis. \begin{figure}[htb] \centering \subfloat[]{\includegraphics[width = 8.5cm]{pddemo.pdf}} \\ \subfloat[]{\includegraphics[width = 8.5cm]{lowH.pdf}} \caption{(a) Schematic of the PD method, showing the phase response for two resonance frequencies, 32764 Hz and 32765 Hz. If the linewidth is not changed significantly by the external magnetic field, Eq. (\ref{pdeq}) allows tracking the resonance frequency. The black vertical line depicts the fixed drive frequency at which the phase is measured, which is then converted to the resonance frequency using Eq. (\ref{pdeq}). (b) Comparison between the phase-locked loop and the phase detection technique for a torque magnetometry experiment with the iron wire specimen and the TF prepared as depicted in Fig. \ref{pictures}(b). } \label{pd} \end{figure} {\it Phase detection}: By recording the phase response as a function of the applied magnetic field, it is also possible to obtain the quantities of interest within certain constraints. During a magnetic field sweep, only two parameters of the resonator ($C$ and $R$) change. If the resonator is driven at a fixed frequency (close to the resonance frequency), the admittance magnitude and phase measurements yield two equations with two unknown values. In general, these two equations need to be solved numerically.
When assuming a constant resistance (hence constant $\Delta f$) and thus using only the measured phase information, we can give an analytic expression for the frequency shift. Errors are small for a high quality factor resonator, since there is a steep phase change and a weak magnitude change close to the resonance frequency (see Fig. \ref{admcurves}). Using Eq. (\ref{lorentzian}) and the condition $2 \pi f_r C_s R \ll 1$, we obtain \begin{eqnarray}\label{pdeq} f_r & = & f_d + \frac{\Delta f}{2} \tan (\phi), \end{eqnarray} where $f_d$ is the fixed drive frequency, $\phi$ is the measured phase, and $\Delta f$ is determined from the initial full frequency response spectroscopy. Fig. \ref{pd} shows the schematic of the phase detection (PD) scheme and a comparison between the resonance frequency shifts obtained by the PD and PLL methods. We find a very good quantitative agreement between the two methods. Only at relatively large magnetic fields do we observe a small deviation between the two data sets, stemming from our assumption of a fixed linewidth $\Delta f$ in the PD method. Note that the PD technique further reduces our measurement time by a factor of about 3--5, since a single data point is acquired for each magnetic field strength. {\it Comparison between and hybrid of methods}: Clearly, Lorentzian fitting for each magnetic field value is not an efficient method. The PLL method measures the resonance frequency shift by tracking the phase at the point of highest slope, and thus highest sensitivity. The PD technique is the fastest method possible, requiring only one admittance measurement, but is limited to small frequency shifts.~\footnote{The choice of the best method becomes important when the single admittance measurement time is limited from below by the settling time ($\sim Q/f_r$) of a very high quality factor resonator. This can easily happen at low temperature and pressure.} A hybrid of the PD and PLL methods can also be employed.
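The phase-detection relation above is trivially evaluated per data point; a minimal sketch (our own naming, not the authors' code) is:

```python
import math


def resonance_from_phase(f_d, phase_rad, delta_f):
    """Phase-detection estimate of the resonance frequency:
    f_r = f_d + (delta_f / 2) * tan(phase), with a fixed drive frequency
    f_d and a linewidth delta_f taken from the initial full frequency
    spectroscopy (assumed constant during the field sweep)."""
    return f_d + 0.5 * delta_f * math.tan(phase_rad)
```

Since only one admittance (phase) measurement per field value is needed, this is the fastest of the three schemes, at the price of the fixed-linewidth assumption.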
One can use the PD method as long as the frequency shift stays within a certain acceptable range. Once the shift reaches the specified limit, one PLL step can be activated, bringing the drive frequency to the current resonance frequency. After that, the PD method can take over once again. \section{Torque differential magnetometry}\label{tdmsec} In this section, we discuss the proof-of-principle experimental results of torque differential magnetometry performed on a sample with uniaxial shape anisotropy. The specimen is chosen for a direct comparison of the data and the experimental method with Refs. \cite{Stipe} and \cite{Weber}. Our setup consists of a $3.5 \pm 0.5$ mm long and $25 \pm 2.5~\mu$m diameter iron wire (mass $\sim 13 ~\mu$g) attached to a TF as shown in Fig. \ref{pictures}(b). Such a thin magnetic wire is known to have a uniaxial shape anisotropy~\cite{Chikazumi} and can be described within the Stoner-Wohlfarth single domain approximation~\cite{Chikazumi} via the free energy density $F = K_u \sin^2(\theta)$, where $\theta$ is the angle between the magnetization and anisotropy axis, and $K_u (> 0)$ is the anisotropy constant. \begin{figure}[htb] \centering \includegraphics[height = 6cm]{canting.pdf} \caption{Schematic depicting a single prong of the TF (red) loaded with a magnetic specimen (light blue) displaced from equilibrium. The uniaxial anisotropy direction deviates by an angle $\theta$ from the magnetization direction and an angle $\Theta$ from the applied magnetic field. } \label{canting} \end{figure} \begin{figure}[htb] \centering \subfloat[]{\includegraphics[width = 8.5cm]{highH.pdf}} \\ \subfloat[]{\includegraphics[width = 8.5cm]{resistance.pdf}} \caption{Magnetic field dependent resonance frequency shift (a) and resistance (b) for the iron wire specimen [Fig. \ref{pictures}(b)] (triangles), no specimen (stars), and a non-magnetic copper specimen (square symbols). All data are unaveraged and show a single magnetic field sweep.
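The hybrid scheme just described amounts to a simple per-point decision; the following sketch (entirely our own illustration, with hypothetical names and thresholds) combines one PD estimate with an optional PLL re-centering step:

```python
import math


def hybrid_step(f_d, phase_rad, delta_f, max_shift):
    """One step of a hybrid PD/PLL scheme: estimate the resonance
    frequency from the measured phase (phase-detection relation); if the
    inferred shift exceeds max_shift, perform one PLL step, i.e.
    re-center the drive frequency on the current resonance.  Returns the
    resonance estimate and the (possibly updated) drive frequency."""
    f_r = f_d + 0.5 * delta_f * math.tan(phase_rad)
    if abs(f_r - f_d) > max_shift:
        f_d = f_r  # one PLL step: move the drive to resonance
    return f_r, f_d
```

In this way the fast PD estimate is used as long as it is trustworthy, and the drive is only re-centered when the small-shift assumption would otherwise break down.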
(a) The fit of the resonance frequency shift (solid black line) to Eq. (\ref{freqshift}) yields an anisotropy field of $1.08 \pm 0.01$ T, in good agreement with the value expected from the shape anisotropy. (b) The fit of the resistance data to $c_1 [ H_{\textrm{ext}}/ ( H_{\textrm{ext}} + H_k) ]$ (solid black line) yields an almost perfect fit and $\mu_0 H_k = 1.07 \pm 0.03$ T, consistent with the frequency shift data, while a fit to $c_2 [ H_{\textrm{ext}}/ ( H_{\textrm{ext}} + H_k) ]^2$ (red dashed line) yields a poor fit and an inconsistent value of $\mu_0 H_k = 0.28 \pm 0.03$ T.} \label{tdmfig} \end{figure} We consider an applied magnetic field collinear with the anisotropy easy axis under equilibrium orientation of the prong. If the prong is deflected by a small angle $\Theta$ from this direction, the magnetic moment of the specimen makes an angle $\theta = [ H_{\textrm{ext}}/ ( H_{\textrm{ext}} + H_k) ] \Theta$ with the anisotropy axis (Fig. \ref{canting}), where $\mu_0 H_k = 2 K_u/M_s$ is the anisotropy field, with the saturation magnetization $M_s$~\cite{Stipe,Kamra}. Equivalently, the angle between the magnetic moment and the applied magnetic field direction is $\Theta - \theta$. This gives rise to an additional restoring torque~\cite{Stipe,Kamra} $\tau_m = M_s V \mu_0 H_{\textrm{ext}} (\Theta - \theta) = M_s V \mu_0 H_{\textrm{ext}} [H_k/(H_{\textrm{ext}} + H_k)] \Theta$, and an additional effective stiffness of $k_m \ = \ \tau_m/(\Theta L_e^2) \ = \ (M_s V/L_e^2 )\mu_0 H_{\textrm{ext}} [H_k/(H_{\textrm{ext}} + H_k)] $, where $V$ is the volume of the specimen, and $L_e$ is the effective length of the prong~\cite{Morse}.
Under the condition $k_m \ll k_{\textrm{el}}$, corresponding to a weak perturbation of the resonance frequency, the resonance frequency shift becomes: \begin{eqnarray}\label{freqshift} \delta f_r(H_{\textrm{ext}}) & = & \frac{f_{\textrm{el}}M_s V \mu_0}{2 k_{\textrm{el}} L_e^2} \ \frac{|H_{\textrm{ext}}| H_k}{|H_{\textrm{ext}}| + H_k}, \end{eqnarray} where $f_{\textrm{el}}$ and $k_{\textrm{el}}$ are respectively the resonance frequency and effective stiffness of the oscillator at zero applied magnetic field. The absolute value of $H_{\textrm{ext}}$ appears because the magnetization also reverses its direction when the magnetic field direction is reversed. A fit of the frequency shift data to Eq. (\ref{freqshift}) yields the anisotropy field of the specimen as $\mu_0 H_k = 1.08 \pm 0.01$ T [see Fig. \ref{tdmfig}(a)]. A clear hysteresis in the resonance frequency shift vs. magnetic field curve is seen. The low field curve is depicted in Fig. \ref{pd}(b), where hysteretic kinks can be seen close to the zero-field region. Assuming that the anisotropy is purely due to the shape of the specimen, the saturation magnetization becomes $\mu_0 M_s = 2.16 \pm 0.02$ T, in good agreement with the literature value for iron, 2.15 T~\cite{Chikazumi}. The magnetic field dependent resistance is shown in Fig. \ref{tdmfig}(b). The resistance curve does not show a strong hysteresis, consistent with the observation of Stipe and co-workers~\cite{Stipe}. However, we find that the magnetic contribution to the resistance is proportional to the magnetization canting amplitude ($\theta_{max} = [ |H_{\textrm{ext}}|/ ( |H_{\textrm{ext}}| + H_k) ] \Theta_{max}$). This is in contrast with the observation of Stipe {\it et al.}~\cite{Stipe}, who find the magnetic contribution to the dissipation coefficient to be proportional to the square of $\theta_{max}$. We attribute this observation to the slight deviation of our wire from the optimal alignment with respect to the applied magnetic field.
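Extracting $H_k$ from the frequency-shift expression above is a one-parameter-family least-squares problem; a sketch with SciPy (our own illustration on noiseless synthetic data, with the physical prefactor lumped into a single fit parameter) is:

```python
import numpy as np
from scipy.optimize import curve_fit


def freq_shift(h, a, h_k):
    """Frequency-shift model: delta_f = a * |H| * H_k / (|H| + H_k),
    where a lumps the prefactor f_el * M_s * V * mu_0 / (2 * k_el * L_e^2)
    into one fit parameter."""
    return a * np.abs(h) * h_k / (np.abs(h) + h_k)


# synthetic check: recover the anisotropy field from noiseless data
h = np.linspace(-5.0, 5.0, 101)        # applied field values (T), assumed
data = freq_shift(h, 10.0, 1.08)       # "measured" shifts, a = 10, H_k = 1.08 T
popt, _ = curve_fit(freq_shift, h, data, p0=(1.0, 0.5))
```

The field values and prefactor here are assumptions for illustration; with real data one would fit the measured $\delta f_r$ vs. $H_{\textrm{ext}}$ curve in the same way.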
Here, we expect that the linear effects dominate. For observing the quadratic effects, a perfect alignment of the sample with respect to the applied magnetic field is crucial. We notice that our data on the magnetic field dependent dissipation coefficient show little scatter compared to the data reported by Stipe and co-workers~\cite{Stipe}. Weber {\it et al.}~\cite{Weber}, on the other hand, were not able to resolve the dependence of the dissipation coefficient on the applied magnetic field in their measurements. Hence, our technique appears particularly promising for the investigation of magnetic dissipation. The data depicted in Fig. \ref{tdmfig}, recorded in a single full magnetic field sweep without any averaging, underline the high signal-to-noise ratio of our measurement. The plotted data for a single specimen were captured within 15 minutes, although several magnetic field sweeps were recorded to check their reproducibility. A comparison between Fig. \ref{tdmfig} and the corresponding plots in Ref. \cite{Stipe}, in particular the data for the dissipation coefficients, makes the advantage of our measurement scheme evident. \section{Discussion}\label{discussion} We have demonstrated an inexpensive and all-electrical setup for torque differential magnetometry~\cite{Stipe,Weber,Kamra} using piezoelectric quartz tuning forks (TFs), capable of operation over a broad temperature and pressure range. A high signal-to-noise ratio under ambient conditions was achieved at a lock-in effective bandwidth of about 1 Hz. The anisotropy field and saturation magnetization of an iron wire specimen were quantitatively extracted, corroborating the literature values. We also demonstrated the possibility of measuring specimens with a mass of up to $\sim 70~ \mu$g without any significant loss in sensitivity, and a high sensitivity for detecting the magnetic contribution to dissipation.
In the following, we estimate the minimum mass of the same iron wire that could be used in our measurements (leaving all parameters unchanged) with a signal-to-noise ratio of about 1. We find that the slope of the phase vs. frequency curve for our TFs (under ambient conditions) at resonance is about 35 degrees per Hz. At the bandwidth of our measurements, the phase measurement noise was found to be below 1 degree. This gives a frequency sensitivity below 30 mHz. In Fig. \ref{tdmfig}(a), the typical frequency shift between adjacent magnetic field values (separated by 20 mT) is about 200 mHz. This implies that we can detect a signal that is weaker by a factor of about $200/30 \sim 7$, which corresponds to a sample weighing 7 times less than the sample investigated herein ($\sim 13~ \mu$g). Hence, a specimen with mass $\sim 2~\mu$g can easily be measured using the same set of parameters that we employed for our measurement reported herein (15 minutes of measurement time). We notice that the extraction of the anisotropy constant requires fitting the resonance frequency shift data, which need not be separated by 20 mT. Hence, the more meaningful limit for the minimum specimen mass in an anisotropy parameter extraction experiment is obtained by noticing that the maximum frequency shift in Fig. \ref{tdmfig}(a) is of the order of 10 Hz. Since we can detect frequency shifts as low as 30 mHz, as argued above, it should be possible to characterize a specimen $10\,\textrm{Hz}/30\,\textrm{mHz} \sim 100$ times smaller than the one characterized here, setting the limit to about $100$ ng of iron. Torque differential magnetometry (TDM) is a very powerful tool for the characterization of magnetic specimens, and the investigation of magnetic switching, magnetic phase transitions and high-$T_c$ superconductors (see Ref. \cite{Kamra} and references therein).
As compared to superconducting quantum interference device magnetometry, TDM has a faster response time~\cite{Brugger}, enabling the investigation of dynamic phenomena while offering a comparable sensitivity~\cite{Stipe,Brugger}. The simple and inexpensive TDM setup demonstrated herein makes it even more attractive as a magnetometry technique. In the present work, we employed piezoelectric quartz TFs and achieved a high sensitivity under ambient conditions. There is still considerable room for gains in sensitivity by using smaller TFs at low temperature and pressure~\cite{Giessiblnanotech}. Although the present work lays emphasis on TDM, the generic measurement scheme demonstrated here can be adapted for other kinds of measurements which have traditionally been done using cantilevers and optical interferometers in vacuum. In particular, our setup design eliminates unwanted artifacts due to a magnetic TF base that may interfere with magnetic field dependent measurements~\cite{Rychenrsi,Rychenphd}. \begin{acknowledgement} We thank Sibylle Meyer for help with measurements in the cryostat, and Matthias Pernpeintner for fruitful discussions. Financial support from the DFG via SPP 1538 ``Spin Caloric Transport'', Project No. GO 944/4 and the Dutch FOM Foundation is gratefully acknowledged. \end{acknowledgement} \bibliographystyle{epj}
\section{Introduction}\label{sec:introduction} \IEEEPARstart{M}{ultiple} object tracking (MOT) is an essential task for computer vision, and has been applied to many applications, such as industrial surveillance and autonomous driving. Tracking-by-detection has been a dominant paradigm for MOT for a long time, where object trajectories are obtained by associating object detections over a video through appearance features. Object detection~\cite{duan2019centernet,liu2020deep,chen2021novel} and \new{re-identification (reID)~\cite{li2016person,zhou2019person,ye2021deep,gu2022motion}} have achieved significant progress in recent years, but the \emph{tracking-by-detection} paradigm can hardly perform real-time tracking due to its separate, compute-intensive models. Thus, the recent trend in MOT is jointly solving detection and tracking~(JDT), where object detections and appearance features (or motions) are learned simultaneously. Despite many years of effort, JDT usually fails to find accurate object associations in crowd scenes due to missed or false detections. The crowd density map was proposed for object counting~\cite{lempitsky2010learning}, where the sum over a region in the map corresponds to the object count in that region. It predicts the object count without explicitly detecting objects, and thus is a reliable and informative clue in crowd scenes. Using a multi-task learning technique, we propose a novel MOT approach to jointly perform counting, detection and reID, especially for crowd scenes. We use the object count from the crowd density map to constrain the number of objects from the detection task, which can help the detection task find missed detections. In return, we also use object detections to refine the quality of the crowd density map, improving its localization ability. By imposing the mutual object-count constraints between detection and counting, the proposed model can recover missed detections or reject false detections.
With the improvement of object detections, the reID task can also be enhanced, because it can see more variations of the same identity. Besides, our approach can perform online and real-time multi-object tracking. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figure2/countingmot.pdf} \caption{Object detections with counting constraint in a crowd scene of~\emph{MOT20}. For clear visualization, we only show part of the object detections in the scene. Object detections in the top-right are generated by the state-of-the-art method~FairMOT~\cite{zhang2021fairmot}, which jointly produces object detections and reID features. However, FairMOT fails to locate occluded people in extremely crowded regions. By incorporating the crowd density map (bottom-left) as a counting constraint, our proposed CountingMOT finds missed object detections (green boxes in the bottom-right) and can also eliminate false detections (the red box in the bottom-right). }\label{fig:CountingMOT} \vspace{-0.10in} \end{figure} In Fig.~\ref{fig:CountingMOT}, we show a qualitative comparison between the state-of-the-art MOT approach FairMOT~\cite{zhang2021fairmot} and our proposed CountingMOT. FairMOT is a typical JDT method which produces object detections and reID features simultaneously. However, in extremely crowded regions, it fails to locate occluded people and also generates some false detections (see top-right), resulting in failures of data association. By incorporating the crowd density map, an informative clue in crowd scenarios (see bottom-left), into the object detection task, the proposed CountingMOT can find the missed object detections. Also, our method can correct false detections using the object count constraint from the density map. E.g., in the right corner of the scene, there are four people together, and FairMOT misses an object and produces two inaccurate boxes on one person.
Using the counting constraint from the density map (bottom-left), our CountingMOT can recover the missed object, and the false detections are also eliminated. Previous works~\cite{dehghan2017binary, ren2020tracking} have explored crowd density maps for multi-object tracking. They first predict the crowd density maps, and then rely on either a joint objective function or a discriminative model with initialized object detections for tracking. The approach of~\cite{dehghan2017binary} uses appearance, motion and contextual information to solve multi-object tracking in high-density crowd scenes, but it needs initial object detections in the first frame and then predicts the potential location of each object in the following frames by a quadratic objective function. Also, the method doesn't have an effective strategy to detect new objects, and thus usually works well only for structured scenes. The recent method~\cite{ren2020tracking} proposes a \emph{tracking-by-counting} paradigm, which jointly models the detection and counting of multiple people as a network flow problem. It can achieve the globally optimal detections and trajectories over a pre-given sequence of crowd density maps, but building a dense graph over the crowd density maps is time-consuming. Also, the tracking performance depends heavily on the crowd density maps. In contrast to~\cite{dehghan2017binary,ren2020tracking}, our proposed approach is an end-to-end deep learning model, which simultaneously produces object detections, a crowd density map, and reID features, and can perform online and real-time tracking. The counting task and the object detection task can boost each other by using the same object-count constraint within a given area, which makes our approach robust to crowd scenes. Therefore, our contribution is an end-to-end MOT framework that jointly solves counting, object detection and re-identification.
By imposing mutual object-count constraints between detection and counting, the two tasks can be optimized and enhanced at the same time, making the proposed approach robust to crowd scenes. \new{The proposed CountingMOT tries to find a balance between object detection and crowd density map estimation, which can help it to recover missed detections or reject false detections, and thus the MOT performance can be improved.} Our approach is an attempt to bridge the gap between object detection, counting, and re-identification. The experimental results demonstrate the superiority of our approach over state-of-the-art methods on public benchmarks. {The remainder of this paper is organized as follows. Section \ref{text:related} reviews the previous MOT works, including \emph{tracking-by-detection} and \emph{joint detection-and-tracking}. Section \ref{text:tbc} introduces our proposed CountingMOT, and the experiments on public benchmark datasets are conducted in Section \ref{text:experiments}. Finally, Section \ref{text:conclusion} concludes this work.} \section{Related work} \label{text:related} In this section, we briefly review the related works on \emph{tracking-by-detection} MOT approaches, \emph{joint detection-and-tracking} MOT approaches and crowd counting approaches. Comprehensive reviews on MOT can be found in~\cite{ciaparrone2020deep,dendorfer2021motchallenge}. \subsection{MOT approaches using tracking-by-detection} Tracking-by-detection is a standard paradigm for MOT, where the problem is split into two stages: object detection and data association. Here, we generally divide \emph{tracking-by-detection} approaches into two categories, i.e., batch basis for offline scenarios~\cite{yu2017adaptive,schulter2017deep,son2017multi,li2018learning,braso2020learning} and frame-by-frame basis for online applications~\cite{chu2017online,zhu2018online,guo2021online,you2021multi}.
Early tracking-by-detection approaches regard data association as a global optimization problem \new{using batch input}, and various formulations are proposed, such as continuous energy optimization~\cite{milan2015multi}, min-cost network flow~\cite{butt2013multi}, and Conditional Random Field (CRF)~\cite{yang2011learning}. \new{However, the above traditional methods need to manually build a flow graph with hand-crafted features or costs, which is cumbersome for tracking. Recently, some approaches formulate the network flow of MOT into a fully differentiable neural network, which can adaptively learn features or costs for data association. E.g., \cite{schulter2017deep} presents a deep network flow that expresses the optimum of network flow as a differentiable function of pairwise association costs, and it can perform data association in an end-to-end fashion. \cite{braso2020learning}~constructs a flow graph of MOT using CNNs, where the nodes represent object detections and the edges indicate the associations across different frames. \cite{xiang2020end} also formulates the assignment costs as unary and pairwise potentials, and then uses a recurrent neural network to gradually refine tracklet association. Deep flow approaches can improve the tracking performance using powerful neural networks, but their inability to perform real-time tracking has shifted further research~\cite{gao2021crf}.} Using batch detections as input, the MOT methods usually transform data association \new{into an offline energy optimization problem.} Though they can use long-term trajectories to recover missed or occluded detections, the batch processing strategy can hardly be applied to real-time applications. \new{To perform online and real-time tracking}, \cite{bewley2016simple} is an early attempt towards online tracking, where only detections from the previous and the current frames are presented.
MOT is then solved as an assignment problem with the Hungarian algorithm, where the assignment cost matrix is formulated by intersection-over-union (IOU). This method runs very fast, and it also indicates that tracking performance is highly dependent on detection results. To reduce Identity Switches (IDS), \cite{wojke2017simple} further extends \cite{bewley2016simple} by incorporating appearance information, making it a strong MOT tracker at high frame rates. \cite{yu2016poi} also proves that high-performance detection and deep appearance features can lead to state-of-the-art multi-object tracking results. \new{The recent trend in MOT is to leverage the powerful representational ability of deep learning~\cite{ sun2019deep,dai2021learning} to perform online data association.} Using recurrent neural networks (RNNs), \cite{milan2017online}~formulates data association and trajectory estimation into a neural network without tuning tedious hyper-parameters. However, this method cannot run in real time. \cite{sun2019deep} proposes a deep affinity network to infer object affinities across different frames, and it is a real-time tracker with high tracking performance. \new{Single object tracking (SOT) has achieved great advances, and MOT approaches can benefit from the development of SOT~\cite{yuan2020self,cao2021feature,jain2021channel,zheng2021improving}.} \cite{zheng2021improving}~extends the detection network by adding a SOT branch for tracking objects, equipping the MOT task with the powerful discrimination ability of SOT. Based on the siamese tracker, \cite{shuai2021siammot}~proposes a region-based MOT network to simultaneously \new{detect and associate} object instances. Using a graph convolutional network (GCN), \cite{dai2021learning}~models MOT as a proposal generation and trajectory inference problem, \new{but} it still faces the problem of occlusion in crowd scenes.
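The Hungarian assignment over an IOU cost matrix mentioned above (the core of SORT-style association) can be sketched in a few lines; this is our own minimal illustration, not the code of any cited work:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, iou_thresh=0.3):
    """Match previous-frame tracks to current detections by maximizing
    total IOU via the Hungarian algorithm; pairs below iou_thresh are
    discarded (unmatched tracks/detections are handled separately)."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if iou(tracks[r], detections[c]) >= iou_thresh]
```

In a JDT tracker the IOU cost would typically be combined with a reID-feature distance, but the assignment step itself is unchanged.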
The above methods can improve tracking performance using the advances of deep learning, \new{but they are subject to wrong detections in crowd scenes, and detection quality is a key factor for MOT~\cite{bewley2016simple,yu2016poi,shuai2021siammot}}. To handle missed detections or occlusions, \cite{xiao2015collaborative}~adopts a hierarchical tracking system using different priorities to resolve long-term occlusion. \cite{fu2019multi}~integrates the full body and body parts to address ambiguous identity associations for people tracking. \cite{wang2021dynamic}~introduces a dynamic tracking system, which maintains tracking results for each frame by combining global and local search. The above methods can handle crowd scenes well, but they still follow the tracking-by-detection paradigm, which limits the tracking efficiency in real applications. \subsection{MOT approaches with joint detection-and-tracking} The \emph{tracking-by-detection} MOT approaches \new{usually focus on real-time data association, but pay little attention to the detection step and thus are not real-time MOT systems~\cite{zhang2021fairmot}}. Recently, many efforts aim to solve object detection, feature embedding or motion prediction simultaneously~\cite{lu2020retinatrack,zhou2020tracking,bergmann2019tracking,pang2020tubetk, wang2021joint,pang2021quasi,wu2021track,wang2021multiple}. \cite{wang2020towards} proposes a pioneering work that allows object detection and appearance embedding to be learned in a unified model, but the embedding is generated from a positive anchor, which may shift to neighbouring objects in crowd scenes. Further, \cite{lu2020retinatrack}~modifies a one-stage detector to capture the instance-level embedding, but it still suffers from the identity ambiguity problem when two anchors are centered at the same grid. To address this ambiguity, \cite{zhang2021fairmot}~learns appearance embedding in an anchor-free approach, i.e., extracting reID features at the object center.
\cite{bergmann2019tracking} converts an object detector into a MOT tracker by exploiting the bounding box regression branch to predict the new object positions in the next frame, but its performance is limited by the object detector. Also, \cite{zhou2020tracking} extends a classic detector, CenterNet~\cite{duan2019centernet}, to a MOT tracker which directly uses an additional branch to predict object motions, but it lacks the modelling of object appearance. Built on~\cite{zhou2020tracking}, \cite{tokmakov2021learning}~takes pairs of frames as input, allowing it to recover occluded object detections or trajectories using historical information, but it is not applicable to extremely crowded scenes. To realize one-step tracking, \cite{pang2020tubetk}~directly predicts bounding tubes using spatial-temporal information in overlapped video clips. Using graph neural networks, \cite{wang2021joint}~models both detection and data association in a relational graph, and it can improve object detection results using temporal relations. \cite{wu2021track}~proposes two modules to jointly learn appearance embedding and object motions; it shows that tracking clues enhance object detection and in return benefit object tracking. To distinguish similar objects, \cite{wang2021multiple}~presents a correlation tracking model that exploits the temporal context, which can make the trajectory temporally consistent. The above \emph{joint detection-and-tracking} approaches either utilize temporal information to recover trajectories, or focus on learning discriminative embeddings. They may ignore the most important thing in MOT, i.e., object detection, which is the dominant factor for tracking. \begin{figure*}[!htbp] \centering \includegraphics[width=0.90\linewidth]{figure2/pipeline4.png} \caption{The proposed CountingMOT model for joint Counting, Detection, and re-Identification.
The input image is first fed to the backbone for multi-level feature extraction (DLA-34~\cite{zhou2020tracking} in this work). Then, we add three homogeneous branches for simultaneously performing detection, counting and reID, respectively. Also, we create mutual constraints between detection and counting to improve detections in crowd scenes. The reID \new{branch} is used to generate appearance features for data association.}\label{fig::pipeline} \vspace{-0.25in} \end{figure*} \subsection{Counting, detection and tracking using density maps} Object counting aims to estimate the number of objects in an image, and it is different from object detection, which focuses on the individual object and usually suffers from occlusion. The crowd density map is an effective method for object counting, where the object count in a region corresponds to the sum over that region. Recent approaches use deep learning techniques to learn crowd density maps and have proved that they are helpful for locating objects in crowd scenes. Using a crowd density map, \cite{rodriguez2011density} first proposes a ``density-aware'' detection and tracking framework, where the detections are encouraged to be consistent with the crowd density map. However, it only adopts a nearest-neighbour strategy for tracking and doesn't model object appearance, which could fail when two objects stay close to each other. As a pioneering work, \cite{ma2015small} adopts the crowd density map for small object detection, where integer programming is used to recover object locations from sliding windows over the density map. \cite{lian2019density}~proposes a depth-adaptive method to simultaneously estimate head counts and locate head positions. The method uses two independent branches to predict the density map and head locations separately, but it doesn't establish a connection between the two tasks.
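The defining property of a crowd density map, that its sum over any region equals the object count in that region, follows directly from how the ground-truth map is usually built: one normalized Gaussian per annotated object center. A minimal sketch (our own illustration, following the standard construction rather than any specific cited implementation) is:

```python
import numpy as np


def density_map(points, shape, sigma=2.0):
    """Build a crowd density map by placing one normalized Gaussian at
    each annotated object center (x, y).  Each kernel is normalized to
    sum to 1, so the full map sums to the object count, and the sum over
    a region approximates the count inside that region."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (x, y) in points:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
        dmap += g / g.sum()  # each object contributes exactly 1 to the sum
    return dmap
```

This is the property CountingMOT exploits: summing the predicted map over a detection's neighbourhood yields an object count that can be compared against the number of predicted boxes there.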
\cite{sam2020locate}~introduces a multi-column network for crowd counting, and head locations are obtained by a classification task over predefined boxes. In~\cite{ren2018fusing}, a fusion tracker is proposed by combining a crowd density map and a visual object tracker for tracking in crowd scenes. This method only considers single object tracking, and is not applicable to multi-object tracking. \cite{wan2021body}~proposes a joint body-face detector for multi-object tracking, but it may generate mismatches between bodies and faces in crowd scenes. \cite{ren2020tracking}~presents a novel tracking paradigm, where detection, counting, and tracking are jointly formulated as a network flow problem on crowd density maps. Though this method can achieve detection and tracking through a global optimization, it takes considerable time to create the network-flow graph. In contrast to the existing methods, our model jointly formulates counting, object detection and appearance embedding using a multi-task learning scheme. By imposing mutual object-count constraints between object detection and the crowd density map, the two tasks can be simultaneously optimized and enhanced, making the tracking task robust to crowd scenes. Besides, the joint model can perform real-time detection and tracking simultaneously. \section{Joint Counting, Detection and reID model} \label{text:tbc} Our CountingMOT model has three homogeneous branches, and it optimizes object detection, counting and reID features simultaneously in one framework, as shown in Fig.~\ref{fig::pipeline}. Different from other \emph{joint detection-and-tracking} methods, our model uses the counting task (i.e., the crowd density map) to enhance the detection task in crowd scenes. The workflow of our method is as follows.
Multi-scale features of an input image are first extracted by an encoder-decoder backbone~(DLA-34~\cite{zhou2020tracking} in this paper), and are then used to simultaneously generate object detections, a density map and reID features. The object detection branch is not only supervised by the detection loss, but also encouraged to be consistent with the crowd density map. Meanwhile, the predicted density map is constrained by the object count derived from the object detections. These mutual constraints between detection and the crowd density map enhance each other, and work well for multi-object tracking in crowd scenes. Together with the object detections, the reID features are finally used for data association. In the remainder of this section, we introduce each component of our model: the feature extractor, the detection task, the density map estimation task and the reID task, respectively. \subsection{Feature Extractor} Following~\cite{zhang2021fairmot}, we also use DLA (Deep Layer Aggregation) as the backbone to extract multi-scale features. DLA was originally proposed for image classification and later enhanced by~\cite{zhou2019objects} with hierarchical skip connections. To improve feature map resolution, DLA~iteratively aggregates low-resolution features into high-resolution ones. Besides, the original convolution at each upsampling layer is replaced with deformable convolution to dynamically adapt to different object scales. The revised DLA aggregates semantic and spatial information across layers, which has proven more effective for object detection and tracking~\cite{zhou2019objects,zhou2020tracking, zhang2021fairmot}. The DLA parameter settings in this work are the same as in previous works. As shown in Fig.~\ref{fig::pipeline}, for an input image of size $H_\text{in} \times W_\text{in}\times 3$, the output feature map of the feature extractor is of size $H\times W \times 256$.
Here, $H = H_\text{in}/R$, $W = W_\text{in}/R$, where the downsampling factor $R$ is set to 4 in this work. After feature extraction, we add three homogeneous branches to predict object detections, the density map and reID features, respectively. Each branch applies a $3 \times 3$ convolutional layer with $256$ channels to the extracted features, followed by a $1 \times 1$ convolutional layer that produces the final output. \subsection{Detection task} The detection \new{branch} is an anchor-free method, responsible for estimating the object centers. An anchor-free method locates an object through its center, which is convenient for re-identification feature learning~\cite{zhang2021fairmot}. It produces a set of object detections $\{\left(\hat{\textbf{p}}_1,\hat{\textbf{s}}_1 \right), \left(\hat{\textbf{p}}_2, \hat{\textbf{s}}_2\right), ... \}$ for each class $c\in \left\{ 0,...,C-1 \right\}$, where~${{\hat{\textbf{p}}}_{i} = \left(\hat{p}_i^x, \hat{p}_i^y\right)}$ is the object center and ${{\hat{\textbf{s}}}_{i} = \left(\hat{h}_i, \hat{w}_i\right)}$ indicates the object scale (height and width). Specifically, the object centers are produced by a heatmap~$\hat{M}\in {{\left[ 0,1 \right]}^{{{{H}}}\times {{{W}}}\times C}}$, and the object scales are generated by a scale map~$\hat{S}\in {{\left[ 0,1 \right]}^{{{{H}}}\times {{{W}}}\times 2}}$. Thus, the final output of the detection \new{branch} has $C + 2$ channels.
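As a concrete illustration of how object centers can be read off such a heatmap, the sketch below keeps only strict $3 \times 3$ local maxima above a score threshold. The function name \texttt{decode\_centers} and the threshold value are our own illustrative choices, not part of the trained model:

```python
import numpy as np

def decode_centers(M, thresh=0.3):
    """Recover candidate object centers from a single-class heatmap M:
    keep locations that are the maximum of their 3 x 3 neighbourhood
    and exceed a score threshold (thresh=0.3 is illustrative)."""
    H, W = M.shape
    P = np.pad(M, 1, constant_values=-np.inf)
    # 3 x 3 max-pool implemented with shifted views of the padded map
    neigh = np.max([P[dy:dy + H, dx:dx + W]
                    for dy in range(3) for dx in range(3)], axis=0)
    ys, xs = np.where((M == neigh) & (M >= thresh))
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) center coordinates
```

In a full pipeline, the scale map and offset map would then be sampled at these centers to form the final boxes.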
Given a set of GT (Ground Truth) annotated boxes $\left\{ \left( {{{\textbf{p}}}_{i}},{{{\textbf{s}}}_{i}} \right) \right\}_{i=1}^{N}$, we first generate a GT heat map~${M}\in {{\left[ 0,1 \right]}^{{{{H}}}\times {{{W}}}\times C}}$ using a Gaussian kernel \begin{equation} \label{eq1} M=\sum\limits_{i=1}^{N}{\exp \left( -\frac{{{\left( x-p_{i}^{x} \right)}^{2}}+{{\left( y-p_{i}^{y} \right)}^{2}}}{2\sigma _{i}^{2}} \right)}, \end{equation} where $N$ represents the number of objects in the image, $(x, y)$ represents a location on the heat map, and $\sigma_i$ is the standard deviation, determined by the object scale. Then, the loss function for object centers is a pixel-wise regression with a focal loss~\cite{lin2017focal} \begin{equation}\label{eq2} \resizebox{0.44\textwidth}{!}{$ {\mathcal{L}_{\text{center}}}=-\frac{1}{N}\sum\limits_{xyc}{\left\{ \begin{aligned} & {{\left( 1-{{{\hat{M}}}_{xyc}} \right)}^{\alpha }}\text{log}\left( {\hat{M}_{xyc}} \right)\text{ ~~~~~~~~~~~~~~~if }{{M}_{xyc}}=1 \\ & {{\left( 1-{{M}_{xyc}} \right)}^{\beta }}{{\left( {{{\hat{M}}}_{xyc}} \right)}^{\alpha }}\text{log}\left( 1-{\hat{M}_{xyc}} \right)\text{ otherwise} \\ \end{aligned} \right.}$}, \end{equation} where $\alpha$ and $\beta$ are the hyper-parameters of the focal loss, set to $2$ and $4$, respectively. For scale estimation, we directly use an $L_1$ loss between ${{{\hat{\textbf{s}}}}_{i}}$ and ${{\textbf{s}}_{i}}$. Since the downsampling factor $R$ of the final feature map introduces quantization errors for the object centers, we thus \new{add} an additional offset branch to compensate for them. The GT offset is obtained by ${{\textbf{o}}_{i}}=\left( \frac{p_{i}^{x}}{R},\frac{p_{i}^{y}}{R} \right)-\left( \left\lfloor \frac{p_{i}^{x}}{R} \right\rfloor ,\left\lfloor \frac{p_{i}^{y}}{R} \right\rfloor \right)$, and the corresponding prediction is denoted ${\hat{\textbf{o}}_{i}}$.
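The GT heat map of (\ref{eq1}) can be generated with a few lines of NumPy. This is a minimal sketch, not our actual implementation; it follows the summation in the equation literally, whereas overlapping Gaussians are often merged with an element-wise maximum in practice:

```python
import numpy as np

def gaussian_heatmap(centers, sigmas, H, W):
    """Render the GT center heat map of Eq. (1): one Gaussian peak per object.
    centers are (p_x, p_y) positions on the H x W map; sigmas are the
    per-object standard deviations derived from the object scale."""
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    M = np.zeros((H, W))
    for (px, py), sigma in zip(centers, sigmas):
        M += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
    # For well-separated objects each peak stays at 1 exactly at the center.
    return M
```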
Then, the loss for scale and offset can be written as \begin{equation}\label{eq3} {\mathcal{L}_{\text{scale}}}=\sum\limits_{i=1}^{N}{{{\left\| {{{\hat{\textbf{s}}}}_{i}}-{{\textbf{s}}_{i}} \right\|}_{1}}+{{\left\| {{{\hat{\textbf{o}}}}_{i}}-{{\textbf{o}}_{i}} \right\|}_{1}}}. \end{equation} Overall, the total loss for the object detection task is \begin{equation}\label{eq4} \mathcal{L}_{\text{det}} = \mathcal{L}_{\text{center}} + \mathcal{L}_{\text{scale}}. \end{equation} \subsection{Density map estimation task} \new{Crowd density map estimation was first proposed for object counting~\cite{lempitsky2010learning}, where the sum over a region of the map corresponds to the object count in that region. It can provide an informative clue for object localization, since the GT density map is usually generated by blurring object annotations with a Gaussian kernel.} \new{In this work, for density map estimation, we directly} use the extracted features from the backbone to generate a density map $\hat{D}$ of size~${{{{H}}}\times {{{W}} \times 1}}$. Following typical \new{density map generation methods~\cite{li2018csrnet,song2021rethinking,yan2021towards}}, we adopt a scale-adaptive strategy to generate the density map for crowd scenes. All the object centers in an image are first represented by an indicator matrix, where ``1'' corresponds to an object and ``0'' to the background. Then, the GT density map is obtained by blurring each object center ${{\textbf{p}}_{i}}$ with a Gaussian kernel \begin{equation}\label{eq5} D\left( \textbf{p} \right)=\sum\limits_{j=1}^{N}{\delta \left( \textbf{p}-{{\textbf{p}}_{j}} \right)\otimes {{G}_{\sigma _{j}}}\left(\textbf{p} \right)}, \end{equation} where $\delta$ is the delta function, $\otimes$ indicates the convolution operator, and ${{G}_{\sigma _{j}}}$ is a 2D Gaussian kernel with standard deviation ${\sigma _{j}}$. Here, ${\sigma _{j}} = \gamma {{\bar{d}}_{j}}$, where ${{\bar{d}}_{j}}$ is the average distance to the $k$ nearest neighbours.
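The scale-adaptive generation in (\ref{eq5}) can be sketched as follows. The values of $\gamma$ and $k$ below, and the fallback $\sigma$ for a lone object, are illustrative assumptions rather than our actual settings; each kernel is normalized so that the map sums to the object count:

```python
import numpy as np

def density_map(centers, H, W, k=3, gamma=0.3):
    """Sketch of Eq. (5): blur each object center with a Gaussian whose
    sigma is gamma times the mean distance to its k nearest neighbours.
    Each kernel is normalised so every object contributes exactly 1."""
    pts = np.asarray(centers, dtype=np.float64)
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    D = np.zeros((H, W))
    for px, py in pts:
        if len(pts) > 1:
            # sorted distances; drop the zero self-distance, keep k neighbours
            dists = np.sort(np.hypot(pts[:, 0] - px, pts[:, 1] - py))[1:k + 1]
            sigma = gamma * dists.mean()
        else:
            sigma = 2.0  # fallback for a single object (assumption)
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        D += g / g.sum()  # normalise: each object adds exactly one count
    return D
```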
Different from the object heat map, the Gaussian kernel ${{G}_{\sigma _{j}}}$ is normalized to sum to ``1'', so that the sum over the density map stays consistent with the number of objects. Finally, the Mean Square Error (MSE) loss and the Structural Similarity Index (SSIM)~\cite{wang2004image} loss are jointly adopted to measure the difference between $\hat{D}$ and ${D}$ \begin{equation}\label{eq6} {{\mathcal{L}}_{\text{cnt}}}={\left\| \hat{D}-\mu \cdot D \right\|_{2}^{2}} + \left( 1 - \text{SSIM}\left(\hat{D}, ~\mu\cdot D\right) \right), \end{equation} where $\mu$ is an amplification factor used to accelerate convergence and lower the estimation error~\cite{gao2019c}. \subsection{Mutual constraints between detection and counting} Object detection has achieved significant progress in recent years~\cite{zou2019object}, but it usually fails in crowd scenes due to heavy occlusion. The crowd density map is designed for crowd counting, and can provide an informative clue for object detection. By exploiting an object-count constraint, we establish connections between object detection and density map estimation, to jointly improve the tracking performance. \subsubsection{Detection with counting} After obtaining object detections from the detection \new{branch}, we first generate an indicator matrix $U$ to represent the candidate object centers. Similar to the GT density map generation (see~(\ref{eq5})), the candidate detection matrix $U$ is then blurred by a Gaussian kernel to generate a density map $U\otimes {{G}_{\sigma _{j}}}$, where ${\sigma _{j}}$ is again determined by the $k$ nearest neighbours. Intuitively, the density map $U\otimes {{G}_{\sigma _{j}}}$ should be consistent with the density map estimated by the density \new{branch}, as shown in Fig.~\ref{fig::pipeline}. Thus, the additional constraint for the detection task can be formulated as \begin{equation}\label{eq7} {{\mathcal{L}}_{\text{dc}}}={\left\| U\otimes {{G}_{\sigma _{j}}} - \hat{D}\right\|_{2}^{2}}.
\end{equation} Usually, the estimated density map $\hat{D}$ can accurately provide the object count in a region, and thus it can implicitly help the detection task to recover occluded or missed detections. Note that $U$ contains the trainable variables, while $\hat{D}$ is fixed in this constraint. \subsubsection{Counting with detection} The detection task accurately locates objects and thus usually produces few false detections. \new{The crowd density map focuses on counting over the whole scene, so we can use the object detection task to enhance the localization ability of the density map estimation task.} For the predicted density map $\hat{D}$, we first predefine a set of sliding windows which move vertically and horizontally over the density map. Each 2D sliding window \new{is vectorized as a 1D mask vector $\textbf{w} \in {{\left\{ 0,1 \right\}}^{HW}}$}, where ``1'' means that a pixel is within the sliding window, and ``0'' otherwise. Thus, for a specific sliding window $\textbf{w}_k$, its object count can be obtained from the density map $\hat{D}$ \begin{equation}\label{eq8} {{n}_{k}}={{\left( {\textbf{w}_{k}} \right)}^{T}}\hat{\textbf{d}}, \end{equation} where $\hat{\textbf{d}}$ is \new{likewise} the vectorization of the density map $\hat{D}$. The object count can also be computed from the object detections \begin{equation}\label{eq9} {{n}_{k}}={{\left( {\textbf{w}_{k}} \right)}^{T}}\textbf{u}, \end{equation} where $\textbf{u}$ is the vectorization of the object detections $U$. The predicted density map can thus be optimized by minimizing the counting difference between $(\ref{eq8})$ and $(\ref{eq9})$ \begin{equation}\label{eq10} {{\mathcal{L}}_{\text{cd}}}=\frac{1}{K}\sum\limits_{k=1}^{K}{{{\left( {\textbf{w}^{T}_k}\hat{\textbf{d}}-{\textbf{w}^{T}_k}\textbf{u} \right)}^{2}}}, \end{equation} where $K$ is the number of sliding windows, $\hat{\textbf{d}}$ is the trainable variable and $\textbf{u}$ is fixed.
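A minimal sketch of the counting constraint in (\ref{eq10}): window sums are computed with a summed-area table, and the loss vanishes when the predicted density map agrees with the detection map inside every window. The $5 \times 5$ window is illustrative (our experiments use $19 \times 19$), and for simplicity only valid window positions are used rather than all $K = HW$ windows:

```python
import numpy as np

def window_sums(A, win):
    """Sum of A over every win x win sliding window (valid positions),
    computed with a 2-D cumulative sum (summed-area table)."""
    S = np.zeros((A.shape[0] + 1, A.shape[1] + 1))
    S[1:, 1:] = A.cumsum(0).cumsum(1)
    return S[win:, win:] - S[:-win, win:] - S[win:, :-win] + S[:-win, :-win]

def counting_loss(d_hat, u, win=5):
    """Sketch of Eq. (10): mean squared difference between the count inside
    each window of the predicted density map d_hat and the count implied
    by the binary detection map u."""
    diff = window_sums(d_hat, win) - window_sums(u, win)
    return (diff ** 2).mean()
```

During training, the gradient of this loss flows into $\hat{\textbf{d}}$ while $\textbf{u}$ is held fixed, as stated above.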
\new{ The sliding windows are densely sampled from the crowd density map $\hat{D}$ of size ${{{{H}}}\times {{{W}} \times 1}}$, which means that the number of sliding windows $K$ equals the product of $H$ and $W$.} The sliding window $\textbf{w}_{k}$ is an important factor for object detection, and further affects the tracking performance. Its size determines how large a region is used to calculate the difference between the density map and the object detections. Usually, it is set to the average size of the objects in a scene. For a small $\textbf{w}_k$, the sum (object count) within the crowd density map is less than 1, which may cause missed detections in the detection task due to the mutual constraints. Conversely, a large $\textbf{w}_k$ contains too many objects, resulting in inaccurate detections. Please refer to Section~\ref{sec:IV-D} for further analysis. \subsection{ReID task} The reID task learns appearance features to distinguish objects. Similar to the object heat map $\hat{M}$ and the crowd density map $\hat{D}$, the reID branch outputs a feature map $\hat{E}\in {{\mathbb{R}}^{{{{H}}}\times {{{W}}}\times 128}}$, where $\hat{E}_{ij}$ represents the embedding feature centered at $\left(i, j \right)$, and $128$ is the feature dimension. Following the work~\cite{zhang2021fairmot}, the reID task is treated as a classification problem. For a given GT box, its embedding feature is first extracted from the feature map $\hat{E}$ and then converted to a class distribution vector $\textbf{q}^i$ through a softmax layer. The GT class label for the box is represented as a one-hot vector, denoted $\textbf{v}^i$.
The reID loss then can be computed as \begin{equation}\label{eq11} {{\mathcal{L}}_{\text{id}}}=-\sum\limits_{i=1}^{N}{\sum\limits_{l=1}^{L}{{\textbf{v}^{i}}\left( l \right)\text{log}\left( \textbf{q}^i\left( l \right) \right)}}, \end{equation} where $L$ is the number of instance identities, and $N$ again denotes the number of objects in the image. For training, only the embedding features located at object centers are used. To improve the robustness of the reID features, we adopt image transformations \new{including} HSV~(Hue, Saturation, Value) augmentation, rotation, scaling, translation and shearing for data preparation. \subsection{Overall loss for training CountingMOT} For CountingMOT, the object detection, density map estimation and reID tasks can be trained simultaneously using uncertainty weighting~\cite{kendall2018multi} \begin{equation}\label{eq12} \begin{aligned} & {{\mathcal{L}}_{\det \text{-dc}}}={{\mathcal{L}}_{\text{det}}}+{{\mathcal{L}}_{\text{dc}}} \\ & {{\mathcal{L}}_{\text{cnt-cd}}}={{\mathcal{L}}_\text{cnt}}+{{\mathcal{L}}_{\text{cd}}} \\ \end{aligned}, \end{equation} \begin{equation}\label{eq:all} \begin{aligned} {{\mathcal{L}}_{\text{total}}}=\frac{1}{2}&\left( \frac{1}{{{e}^{w_{1}}}}{{\mathcal{L}}_{\det \text{-dc}}}+\frac{1}{{{e}^{w_{2}}}}{{\mathcal{L}}_{\text{cnt-cd}}}+\frac{1}{{{e}^{w_{3}}}}{{\mathcal{L}}_{\text{id}}} \right) \\ & +\left( w_{1}+w_{2}+w_{3} \right) \\ \end{aligned}, \end{equation} where $w_1$, $w_2$ and $w_3$ \new{are} trainable parameters that automatically balance the three tasks. For training, we first generate the heat map, size map, box offsets, density map and one-hot representation for each object identity. Then, the total loss is computed between the GT labels and the predicted outputs. \setlength{\tabcolsep}{8pt} \begin{table*}[!htb] \small \begin{center} \caption{Comparison with the state-of-the-art trackers under the ``private detector'' protocol. The two-stage trackers are labeled by ``*''.
The best results on each dataset are shown in {\bf bold}, and the second best are {\underline {underlined}}.} \label{table:sota} \begin{tabular}{llccccccccc} \toprule Dataset & Tracker & MOTA$\uparrow$ & IDF1$\uparrow$&HOTA$\uparrow$ & MT$\uparrow$ & ML$\downarrow$ &\new{FP$\downarrow$}&\new{FN$\downarrow$}& IDs$\downarrow$ & FPS$\uparrow$\\ \midrule MOT16 & DeepSORT\_2\textsuperscript{*}~\cite{wojke2017simple} & 61.4 & 62.2 &50.1& 32.8\% & 18.2\%&12,852&56,668 & 781 & 6.4\\ &TLR~\cite{wang2021multiple}&\underline{76.6}&74.3&61.0&47.8\%&{\bf 13.3\%}&10,860&\underline{30,756}&979&15.9 \\ &Tracktor++~\cite{bergmann2019tracking}&54.4&52.5&42.3&19.0\%&36.9\%&\textbf{3,280}& 79,149&{682}&1.5 \\ &GSDT~\cite{wang2021joint}&74.5&68.1&56.6&41.2\%&17.3\%&8,913&36,428&1,229& 1.6 \\ &QuasiDense~\cite{pang2021quasi}&69.8&67.1&54.5&41.6\%&19.8\%&9,861 &44,050&1,097&20.3 \\ &TraDeS~\cite{wu2021track}&70.1&64.7&53.2&37.3\%&20.0\%&\underline{8,091}&45,210&1,144&22.3 \\ &TubeTK~\cite{pang2020tubetk}&66.9&62.2&50.8&39.0\%&16.1\%&11,544&47,502&1,236&1.0\\ &FairMOT~\cite{zhang2021fairmot} & {75.7} & \underline{75.3} &61.6& {48.1\%} & \underline{14.4\%}& 13,501 &41,653 & \underline{621} & {\bf 25.9}\\ &GRTU\textsuperscript{*}~\cite{wang2021general}&76.5&{\bf 75.9}&{\bf 62.6}&{\bf 51.5\%}&17.0\%&11,438&30,866&{\bf 584}&0.3 \\ &CTrackerV1~\cite{peng2020chained}&67.6&57.2&48.8&32.9\%&23.1\%&8,934&48,350&1,897&6.8 \\ &MeMOT~\cite{cai2022memot}&69.7&72.6&57.4&44.9\%&16.6\%&14,595&34,595&845& -- \\ & CountingMOT (ours)&{\bf 77.6}&75.2&\underline{62.0}&\underline{50.7\%} & {14.8\%}&12,337&\textbf{27,382} &1,087&\underline{24.9} \\ \midrule MOT17 &TBC~\cite{ren2020tracking}&53.9 & 50.0& 40.9&20.2\%&36.7\%&24,584&232,670 &4,612&6.7 \\ &SST\textsuperscript{*}~\cite{sun2019deep} & 52.4 & 49.5 &39.3& 21.4\% & 30.7\%&25,423& 234,592& 8,431 & 3.9\\ &TLR~\cite{wang2021multiple}&\underline{76.5}&73.6&60.7&47.6\%&{\bf 12.7\%}&29,808&\underline{99,510}&3,369&15.6 \\
&Tracktor++~\cite{bergmann2019tracking}&56.3&55.1&44.8&21.1\%&35.3\%&\textbf{8,866}& 235,449&\underline{1,987}&1.5 \\ &GSDT~\cite{wang2021joint}&66.2&68.7&55.5&40.8\%&18.3\%&26,397&120,666&3,318& 4.9 \\ &QuasiDense~\cite{pang2021quasi}&68.7&66.3&53.9&40.6\%&21.9\%&26,589& 146,643&3,378&20.3 \\ &TraDeS~\cite{wu2021track}&69.1&63.9&52.7&37.3\%&20.0\%&20,892&150,060 &3,555&22.3 \\ &TubeTK~\cite{pang2020tubetk} & 63.0 & 58.6 &48.0& 31.2\% & 19.9\%&27,060& 177,483 & 4,137 & 3.0\\ &CenterTrack~\cite{zhou2020tracking} & 67.8 & 64.7 &52.2 & 36.4\% & 21.5\%&\underline{18,498}& 160,332 & {2,583} & 17.5\\ & FairMOT~\cite{zhang2021fairmot}& {73.7} & 72.3& {59.3} & {43.2\%} & {17.3\%}& 27,507 &117,477 & 3,303 & {\bf 25.9}\\ &GRTU\textsuperscript{*}~\cite{wang2021general}&74.9&{\bf 75.0}&{\bf 62.0}&\underline{49.7\%}&18.9\%&32,007&107,616&{\bf 1,812}&3.6 \\ & CTrackerV1~\cite{peng2020chained} & 66.6 & 57.4 & 49.0 &32.2\% & 24.2\%&22,284&160,491 & 5,529 & 6.8\\ &PermaTrack~\cite{tokmakov2021learning}&73.1&67.2&54.2&42.3\%&19.1\%&28,998&115,104&3,571 &11.9 \\ & CSTrack~\cite{liang2022rethinking} &74.9&{72.3}&41.5&50.4\%&15.5\%&23,847&114,303&3,196&4.5 \\ &MeMOT~\cite{cai2022memot}&69.0&72.5&56.9&43.8\%&18.0\%&37,221&115,248&2,724&--\\ &Trackformer~\cite{meinhardt2022trackformer} &74.1&68.0&57.3&47.3\%&10.4\%&34,602& 108,777&2,829&5.7\\ &CountingMOT (ours)&{\bf 78.0}&\underline{74.8}&\underline{61.7}&{\bf 49.8\%} & \underline{15.4\%}&28,233&\textbf{92,247}&3,453&\underline{24.9} \\ \midrule MOT20 & TBC~\cite{ren2020tracking}&54.4&50.1&--&33.4\%&19.7\%&37,937&195,242&2,580&5.6 \\ &Tracktor++~\cite{bergmann2019tracking}&52.6&52.7&42.1&29.4\%&26.7\%&\textbf{6,930}& 236,680&{\bf 1,648}&1.2 \\ & FairMOT~\cite{zhang2021fairmot}& \underline{61.8} & {67.3} & \underline{54.6}& {\bf 68.8\%} & {\bf 7.6\%}&103,440&\textbf{88,901}&{5,243} & {\bf 13.2}\\ &GSDT~\cite{wang2021joint}&67.1&68.6&53.6&53.1\%&13.2\%&31,913&135,409&3,230& 1.5 \\
&MLT\textsuperscript{*}~\cite{zhang2020multiplex}&48.9&54.6&43.2&30.9\%&22.1\%&45,660& 216,803&{2,187}&3.7 \\ &TransCenter~\cite{xu2021transcenter} &58.5&49.6&43.5&48.6\%&14.9\%&64,217&146,019&4,695&1.0 \\ & CSTrack~\cite{liang2022rethinking} &66.6&\underline{68.6}&54.0&50.4\%&15.5\%&25,404&144,358&3,196&4.5 \\ &MeMOT~\cite{cai2022memot}&66.1&63.7&54.1&57.5\%&14.3\%&47,882 &137,983&\underline{1,938}&--\\ &Trackformer~\cite{meinhardt2022trackformer}&68.6&65.7&54.7&53.6\%& 14.6\%&\underline{20,348}&140,373&2,474&5.7\\ & CountingMOT (ours)&{\bf 70.2}&{\bf 72.4}&{\bf 57.0}&\underline{62.0\%} & \underline{12.1\%}&33,531&\underline{117,886}&{2,795}&\underline{12.6} \\ \bottomrule \end{tabular} \end{center} \vspace{-0.2in} \end{table*} \subsection{Online multi-object tracking} For each frame of a video sequence, the CountingMOT model jointly outputs object detections and their corresponding reID features. To avoid interference from object distractors, we adopt a hierarchical strategy for data association. First, a Kalman Filter is used to predict the location of each previous tracklet in the current frame. Then, the cosine distance on reID features and the Mahalanobis distance on bounding boxes are computed, respectively. \new{Through} a weighting parameter $\lambda$ ($\lambda = 0.98$), a final distance matrix is obtained and used for the preliminary matching. For the remaining unmatched tracklets and detections, we further match them using IoU~(Intersection over Union) with a given threshold $\tau$ ($\tau = 0.5$). After that, the final unmatched tracklets are maintained for 30 frames before termination, while the unmatched detections are initialized as new tracklets. More details can be found in~\cite{wojke2017simple}. \section{Experiments} \label{text:experiments} In this section, we evaluate the CountingMOT model on the public MOT datasets~\emph{MOT16}, \emph{MOT17} and \emph{MOT20}.
Note that \emph{MOT20} contains some extremely crowded scenes, which \new{are} well suited to validating the anti-occlusion ability of MOT trackers. Also, we perform ablation studies to demonstrate the effectiveness of the mutual constraints between detection and counting. \subsection{Datasets and metrics} \label{sec::datasts} Following~\cite{zhang2021fairmot}, we adopt the same experimental setup to train our CountingMOT model. Six additional datasets (some with box and identity annotations) are used for pre-training, and they contain various pedestrian scenes (e.g., streets, stations, malls or the wild). For evaluation, the common CLEAR MOT metrics are adopted~\cite{bernardin2008evaluating}, e.g., MOTA (Multiple Object Tracking Accuracy), IDS (ID Switch), MT (Mostly Tracked Trajectories), ML (Mostly Lost Trajectories), \new{FP (Number of False Positives), FN (Number of False Negatives)}, etc. Also, IDF1~(ID F1 Score) is used to measure the correctly identified detections~\cite{ristani2016performance}. A recent approach~\cite{luiten2021hota} argues that previous metrics overemphasize either detection or association, and thus proposes a new MOT metric, HOTA (Higher Order Tracking Accuracy), which matches at the detection level while considering association over the whole trajectory. \subsection{Implementation details} For training, we use the parameters pre-trained on the COCO dataset to initialize the CountingMOT model. The entire network is trained in the PyTorch framework with a batch size of 12. The Adam optimizer is used for 30 epochs, where the learning rate starts at $10^{-4}$ and decays to $10^{-5}$ at epoch 20. The input image is resized to $1088 \times 608$ (namely $W_\text{in} = 1088$, $H_\text{in} = 608$), and standard transforms including color jittering, rotation and scaling are also adopted. In total, the whole training process takes about 10 hours.
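The step schedule described above can be sketched as a toy helper; whether the decay fires exactly at epoch 20 (rather than after it) is our assumption here:

```python
def learning_rate(epoch, base=1e-4, drop_epoch=20, factor=0.1):
    """Step learning-rate schedule: base rate for the first drop_epoch
    epochs, then multiplied once by factor (1e-4 -> 1e-5)."""
    return base * (factor if epoch >= drop_epoch else 1.0)
```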
For the hyper-parameters, $\mu$ in (\ref{eq6}) is set to $1000$ for fast convergence. The size of the sliding window $\textbf{w}_k$ in (\ref{eq8}) is set to $19 \times 19$. We conduct ablation studies in the experimental part to analyze the effect of these two parameters. \subsection{Evaluation on MOT challenge} We evaluate our CountingMOT model on the datasets from the \emph{MOT challenge}, and compare it with recent state-of-the-art approaches, including two-stage ones (e.g., \cite{sun2019deep, wang2021general}) and JDT ones (e.g., \cite{zhou2020tracking,zhang2021fairmot,cai2022memot, meinhardt2022trackformer}). The tracking results on \emph{MOT16}, \emph{MOT17} and \emph{MOT20} are summarized in Tab.~\ref{table:sota}. Note that all the results are taken directly from the related papers or the MOT leaderboard. On \emph{MOT17}, our CountingMOT tracker performs the best among all the methods in terms of MOTA. The two-stage method GRTU~\cite{wang2021general} has a higher HOTA, since it adopts a recurrent module to associate potential tracks through long-term dependencies. However, this method runs slowly (\new{3.6 FPS}) and cannot be applied to real-time tracking. By introducing the density map as auxiliary information, our CountingMOT model significantly improves on FairMOT~\cite{zhang2021fairmot} (e.g., MOTA is improved from 73.7 to 78.0). \new{Besides, CountingMOT has fewer FP and FN than FairMOT on \emph{MOT16} (FP of 12,337 vs 13,501, FN of 27,382 vs 41,653), which implies that the crowd density map can help to find missed object detections. On \emph{MOT17}, CountingMOT also significantly improves on FairMOT by decreasing FN from $117,477$ to $92,247$, though it introduces slightly more FP than FairMOT ($28,233$ vs $27,507$). } TBC~\cite{ren2020tracking} also adopts the crowd density map to improve object detections, but it splits density map estimation and object detection into two separate steps.
This means that the object detections are highly \new{dependent} on the quality of the density map, and the two cannot be optimized simultaneously, which significantly affects the tracking performance (MOTA of 53.9) and running speed (6.7 FPS). TBC also needs to build a whole graph across all the frames, which makes it impractical for real applications. In Fig.~\ref{fig:ExpMOT17-07}, we also show some qualitative results on the \emph{MOT17-07} test set. From left to right, the frames are 130, 230 and 330, respectively. As observed, FairMOT~\cite{zhang2021fairmot} and CSTrack~\cite{liang2022rethinking}~miss object detections in some occluded situations (marked with red arrows) and thus have relatively worse tracking performance. Trackformer~\cite{meinhardt2022trackformer} can accurately locate occluded persons, but it may miss object detections in the far distance (zoom in for clear visualization). Using the crowd density map as auxiliary information, our tracker recovers more object detections. On the extremely crowded scenes of \emph{MOT20}, our method further shows its superiority: e.g., CountingMOT achieves higher MOTA (70.2) and IDF1 (72.4). In our model, the density map helps the detection task to find occluded or missed detections. In return, the detection task enhances the localization ability of the crowd density map. The two tasks jointly improve the detection results, which also leads to enhanced reID features. Compared with FairMOT~\cite{zhang2021fairmot}, IDF1 is increased while IDs is significantly decreased (IDs of 2,795 vs 5,243). \new{ It is interesting that the FP of CountingMOT is significantly decreased from $103,440$ to $33,531$, while the FN is increased from $88,901$ to $117,886$. The crowd density map focuses on counting over the whole scene, and it can help to locate objects well in sparse scenes. However, in crowd scenes, the quality of the crowd density map degrades, and thus its localization ability is weakened.
In the crowd scenes of \emph{MOT20}, FairMOT introduces many false detections (FP of 103,440), and the estimated crowd density map can reject false detections through the counting constraint~(FP of 33,531 for CountingMOT). However, the crowd density map also causes an increase in FN. Overall, the detection result is improved (MOTA of 70.2 vs 61.8). Also, CountingMOT performs better than the two recent trackers MeMOT~\cite{cai2022memot} and Trackformer~\cite{meinhardt2022trackformer}~(e.g., MOTA of 70.2 vs 66.1 and 68.6).} In Fig.~\ref{fig:ExpMOT20-07}, we also show some qualitative results on the \emph{MOT20-07} test set. CSTrack~\cite{liang2022rethinking} and Trackformer~\cite{meinhardt2022trackformer} focus more on data association, and they work well in sparse scenes (see Fig.~\ref{fig:ExpMOT17-07}). However, in the crowd scene, they lose too many object detections (see the ``Num'' in the figure). Our CountingMOT tracker yields the highest object count, which implicitly indicates that the crowd density map indeed helps to locate occluded persons. Please see details on the MOT challenge (our tracker is denoted ``CountingMOT''). \new{ Through the analysis of Tab.~\ref{table:sota}, our CountingMOT model tries to find a balance between the object detection task and the crowd density map estimation task. In sparse scenes, the crowd density map has a relatively strong localization ability, and it can help the object detection task to decrease FN. In crowd scenes, the localization ability of the crowd density map is weakened, but it can still help the object detection task to reject false detections.
Overall, the joint detection and counting indeed help to improve the MOT performance (in terms of MOTA and IDF1).} \begin{figure*}[!hbtp] \centering \captionsetup[subfigure]{labelformat=empty} \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/Fair/000130_1.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/Fair/000230_1.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/Fair/000330_1.jpg}} \hspace{0.02in} {\small FairMOT~\cite{zhang2021fairmot}~(IJCV 2021)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CSTrack/000130_1.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CSTrack/000230_1.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CSTrack/000330_1.jpg}} \hspace{0.02in} {\small CSTrack~\cite{liang2022rethinking}~(TIP 2022)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/TRCKFRMER_PR/000130.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/TRCKFRMER_PR/000230_1.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/TRCKFRMER_PR/000330.jpg}} \hspace{0.02in} {\small Trackformer~\cite{meinhardt2022trackformer}~(CVPR 2022)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CountingMOT/000130.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CountingMOT/000230.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT17-07/CountingMOT/000330.jpg}} \hspace{0.02in} \\ {\small CountingMOT (ours)} \\ \caption{Qualitative results on the \emph{MOT17-07} test set. As observed, all the trackers except ours miss object detections (marked with red arrows) and thus have relatively worse tracking performance.
Also, our tracker detects more objects.}\label{fig:ExpMOT17-07} \vspace{-0.2in} \end{figure*} \begin{figure*}[!hbtp] \centering \captionsetup[subfigure]{labelformat=empty} \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/Fair/000100.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/Fair/000180.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/Fair/000320.jpg}} \hspace{0.02in} {\small FairMOT~\cite{zhang2021fairmot}~(IJCV 2021)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CSTrack/000100.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CSTrack/000180.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CSTrack/000320.jpg}} \hspace{0.02in} {\small CSTrack~\cite{liang2022rethinking}~(TIP 2022)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/TRCKFRMER_PR/000100.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/TRCKFRMER_PR/000180.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/TRCKFRMER_PR/000320.jpg}} \hspace{0.02in} {\small Trackformer~\cite{meinhardt2022trackformer}~(CVPR 2022)} \\ \vspace{-0.10in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CountingMOT/000100.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CountingMOT/000180.jpg}} \hspace{0.02in} \subfloat{\includegraphics[width=0.30\linewidth]{figure2/MOT20-07/CountingMOT/000320.jpg}} \hspace{0.02in} \\ {\small CountingMOT (ours)} \\ \caption{Qualitative results on the \emph{MOT20-07} test set.
In the crowd scene, CSTrack~\cite{liang2022rethinking} and Trackformer~\cite{meinhardt2022trackformer} lose too many object detections (see the ``Num'' in the figure), while our CountingMOT tracker yields the highest object count, which implicitly indicates that the crowd density map indeed helps to locate occluded persons (zoom in for clear visualization).}\label{fig:ExpMOT20-07} \vspace{-0.2in} \end{figure*} \subsection{Effect of sliding window size $\textbf{w}_{k}$} \label{sec:IV-D} In (\ref{eq8}), the sliding window size $\textbf{w}_k$ determines the region over which objects are counted, and thus affects the final loss ${{\mathcal{L}}_{\text{cnt-cd}}}$. In other words, the sliding window size determines how much prior detection information is used for crowd density estimation. A large sliding window contains more object counts, but it weakens the localization constraint on the crowd density map. By contrast, a small sliding window improves the localization ability, but it ignores the counting ability of the crowd density map. For this experiment, we train all the models on half of the training set and validate on the other half (denoted the ``validation set''). In Tab.~\ref{table:window}, we summarize the results on the validation set of \emph{MOT17} for different sliding window sizes $\textbf{w}_k$. As observed, MOTA continuously increases as the sliding window size grows from $7$ to $19$. However, MOTA drops when the sliding window size becomes larger than $19$. A small sliding window focuses more on individual detections and thus ignores the overall understanding of the scene. A large sliding window introduces more detection prior, but the counting result is degraded. According to the ablation study, we choose $\textbf{w}_k=19$ in our work. \begin{table} \begin{center} \setlength{\tabcolsep}{5pt} \caption{Comparison of different sliding window sizes $w_k$ on the validation set of \emph{MOT17}.
The best results are shown in {\bf bold}.} \label{table:window} \begin{tabular}{lccccc} \toprule Sliding window size & MOTA$\uparrow$ & IDF1$\uparrow$ & MT$\uparrow$& ML$\downarrow$ & IDS $\downarrow$\\ \midrule $w_k = 7$ & 63.8 & 65.2 & 40.30\% &24.8\% & 432\\ $w_k = 9$ & 64.7 & 69.3 &43.63\%& 17.27\% & 406\\ $w_k = 11$ & 67.8 & {70.3} &45.15\%&16.97\%& 303\\ $w_k = 13$ & 69.0 & 68.2 &45.45\%&16.97\%& 328\\ $w_k = 15$ & 68.3 & {70.2} & 44.85\% & 16.67\%&312\\ $w_k = 17$ &{69.5} & 75.6 &46.06\% & 16.97\% &{\bf 280}\\ {$w_k = 19$} & {\bf 71.8} & {\bf 76.3}&{\bf 49.55\%} &{\bf 13.27\%} & 309\\ $w_k = 21$ & 70.8 & 74.8 & {45.76\%} & {15.15\%}&345\\ $w_k = 23$ &68.8 & {74.3} & 44.85\% & 17.23\%& 361\\ \bottomrule \end{tabular} \end{center} \vspace{-0.1in} \end{table} \subsection{Effect of amplification factor $\mu$} In (\ref{eq6}), the amplification factor $\mu$ affects the quality of the crowd density map. A small $\mu$ prevents the counting task from converging, while a large $\mu$ degrades the accuracy of the detection result. In Tab.~\ref{table:amplication}, we report an ablation study of the amplification factor $\mu$ on the validation set. As observed, when $\mu =1$, the CountingMOT model achieves relatively poor tracking results (MOTA of 62.7). Inspecting the predicted crowd density map, we find that it is all zeros, which indicates that the counting task is hardly trained when $\mu$ is small. As $\mu$ increases, the tracking performance improves steadily (e.g., MOTA rises from 62.7 to 71.8). However, MOTA drops when $\mu$ becomes too large, because a large $\mu$ emphasizes counting while undermining object detection. Thus, we choose $\mu = 1000$ in our work, which strikes a good balance between counting and detection. \begin{table} \begin{center} \setlength{\tabcolsep}{5pt} \caption{Effect of the amplification factor $\mu$ on the validation set of \emph{MOT17}.
The best results are shown in {\bf bold}.} \label{table:amplication} \begin{tabular}{lccccc} \toprule Amplification factor & MOTA$\uparrow$ & IDF1$\uparrow$ & MT$\uparrow$& ML$\downarrow$ & IDS $\downarrow$\\ \midrule $\mu = 1$ & 62.7 & 65.2 & 40.30\% &24.8\% & 438\\ $\mu = 10$ & 66.5 & 69.4 &38.94\%& 17.11\% & 451\\ $\mu = 100$ & 67.9 & {71.3} &41.59\%&17.11\%& 414\\ $\mu = 1000$& {\bf 71.8} & {\bf 76.3}&{\bf 49.55\%} &{\bf 13.27\%} & 309\\ $\mu = 2000$ & 67.7 & {72.7} & 41.59\% & 16.52\%&428\\ $\mu = 4000$ &{67.5} & 70.5 &41.30\% & 16.81\% &{446}\\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Effect of the mutual constraints} The mutual constraints build connections between the crowd density map and object detection. The crowd density map usually has strong counting ability in crowded scenes, and can thus provide an informative clue for object detection. In return, object detection can improve the localization ability of the crowd density map. We therefore evaluate the effects of ${{\mathcal{L}}_{\text{dc}}}$ and ${{\mathcal{L}}_{\text{cd}}}$ on the validation set, and also validate the effectiveness of the counting task, i.e., ${{\mathcal{L}}_{\text{cnt}}}$. In Tab.~\ref{table:mutual}, we report the tracking results of different variants. When ${{\mathcal{L}}_{\text{cnt}}}$ is removed, the CountingMOT model degrades into FairMOT, which does not borrow prior information from counting; accordingly, FairMOT achieves relatively poor tracking results. When only the counting task ${{\mathcal{L}}_{\text{cnt}}}$ is added to FairMOT (without ${{\mathcal{L}}_{\text{dc}}}$ and ${{\mathcal{L}}_{\text{cd}}}$), MOTA and IDF1 improve (e.g., MOTA of 69.1 vs 69.9), which indicates that the counting task implicitly improves the tracking performance. When only ${{\mathcal{L}}_{\text{dc}}}$ is removed from CountingMOT (denoted ``w/o ${{\mathcal{L}}_{\text{dc}}}$"), the model also slightly improves MOTA.
In contrast, when only ${{\mathcal{L}}_{\text{cd}}}$ is removed, the model achieves the highest MT and lowest ML, which shows the benefit of using the crowd density map to recover missed detections. Overall, the full set of mutual constraints achieves the best results, making CountingMOT robust to crowded scenes. \begin{table} \begin{center} \setlength{\tabcolsep}{5pt} \caption{Effect of mutual constraints ${{\mathcal{L}}_{\text{dc}}}$ and ${{\mathcal{L}}_{\text{cd}}}$ on the validation set of \emph{MOT17}. The best results are shown in {\bf bold}.} \label{table:mutual} \begin{tabular}{lccccc} \toprule Mutual constraints & MOTA$\uparrow$ & IDF1$\uparrow$ & MT$\uparrow$& ML$\downarrow$ & IDS $\downarrow$\\ \midrule w/o ${{\mathcal{L}}_{\text{cnt}}}$ & 69.1 & 72.8 &42.18\%& 15.63\% & {\bf 299}\\ w/o ${{\mathcal{L}}_{\text{dc}}}$, ${{\mathcal{L}}_{\text{cd}}}$ & 69.9 & 72.9&43.07\%& 16.22\% & 325\\ w/o ${{\mathcal{L}}_{\text{dc}}}$ & 70.6 & 72.6 &48.67\%& 14.74\% & 410\\ w/o ${{\mathcal{L}}_{\text{cd}}}$ & 71.1 & {73.8} &{\bf 51.92\%}&{\bf 12.98\%}& 468\\ ${{\mathcal{L}}_{\text{dc}}}$ + ${{\mathcal{L}}_{\text{cd}}}$ & {\bf 71.8} & {\bf 76.3}&{49.55\%} &{13.27\%} & 309\\ \midrule w/o ${{\mathcal{L}}_{\text{id}}}$ (two-stage) & 71.0 & 73.4 &43.07\%& 15.04\% & {389}\\ \new{w/o ${{\mathcal{L}}_{\text{dc}}}$, ${{\mathcal{L}}_{\text{cd}}}$ + TBC} & \new{70.0} & \new{74.3} &\new{42.18\%}& \new{15.93\%} & \new{334}\\ \bottomrule \end{tabular} \end{center} \vspace{-0.3in} \end{table} To evaluate the effect of the reID \new{branch}, we first train a variant of CountingMOT with only the detection and counting tasks (denoted ``w/o ${{\mathcal{L}}_{\text{id}}}$"). Then, we use the ROI-Align strategy~\cite{voigtlaender2019mots} to extract the feature of each bounding box from the backbone features. Finally, we use a fully connected layer to classify these bounding boxes and obtain reID features.
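For concreteness, the two-stage reID variant just described can be sketched in a few lines of Python. This is our own illustrative sketch, not the paper's implementation: real ROI-Align uses bilinear sampling at fractional coordinates, and the array shapes, function names, and the random projection standing in for the trained fully connected layer are all assumptions.

```python
import numpy as np

def roi_avg_pool(feat, box):
    """Average-pool a backbone feature map over one bounding box.

    feat: (C, H, W) array; box: (x1, y1, x2, y2) in feature-map
    coordinates.  Real ROI-Align uses bilinear sampling; plain
    averaging keeps this sketch short.
    """
    x1, y1, x2, y2 = box
    crop = feat[:, y1:y2, x1:x2]                          # (C, h, w) region
    return crop.reshape(feat.shape[0], -1).mean(axis=1)   # (C,) pooled vector

def extract_reid_features(feat, boxes, proj):
    """Map each detected box to a reID embedding with a linear layer."""
    pooled = np.stack([roi_avg_pool(feat, b) for b in boxes])  # (n, C)
    return pooled @ proj                                   # (n, D) embeddings

rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 32, 32))    # toy (C, H, W) backbone output
boxes = [(2, 2, 10, 12), (15, 5, 25, 20)]   # two detected boxes
proj = rng.standard_normal((64, 128))       # stand-in for the trained FC layer
emb = extract_reid_features(feat, boxes, proj)  # one 128-d embedding per box
```

In the joint (one-stage) model the embedding head is trained together with detection and counting, which is what the comparison in the table probes.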
This experiment is also conducted on the validation set of \emph{MOT17}, and the result is reported in Tab.~\ref{table:mutual} (the ``w/o ${{\mathcal{L}}_{\text{id}}}$'' row). As observed, the two-stage model ``w/o ${{\mathcal{L}}_{\text{id}}}$" performs worse than our original CountingMOT (IDF1 of 73.4 vs 76.3), which indicates that jointly optimizing the reID task improves the tracking performance. \new{TBC [10] also adopts a crowd density map to improve object detections, but it splits density map estimation and object detection into two separate steps. This means that object detections depend heavily on the quality of the crowd density map, and the two cannot be optimized simultaneously, which hurts the tracking performance. Here, we also compare TBC with CountingMOT on the validation set of \emph{MOT17}. For a fair comparison, we use CountingMOT without the mutual constraints ${{\mathcal{L}}_{\text{dc}}}$, ${{\mathcal{L}}_{\text{cd}}}$ to generate the input (detections, density maps and reID features) for TBC. As observed in Tab.~\ref{table:mutual}, in terms of MOTA and IDF1, TBC works better than CountingMOT without ${{\mathcal{L}}_{\text{dc}}}$ and ${{\mathcal{L}}_{\text{cd}}}$, because TBC can use information across the whole video to perform data association.
When ${{\mathcal{L}}_{\text{dc}}}$ and ${{\mathcal{L}}_{\text{cd}}}$ are added, CountingMOT can optimize object detection and the crowd density map simultaneously, and thus achieves better results than TBC (e.g., MOTA of 71.8 vs 70.0).} \begin{figure}[htbp] \centering \includegraphics[width={0.40\textwidth}]{figure2/parameter.pdf} \caption{Training curves for $w_1$, $w_2$, $w_3$.}\label{fig:para} \vspace{-0.1in} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width={0.40\textwidth}]{figure2/loss.pdf} \vspace{-0.1in} \caption{Training \new{loss curves} for the detection, counting and reID tasks.}\label{fig:loss} \vspace{-0.2in} \end{figure} \subsection{Effect of the training parameters} In (\ref{eq:all}), $w_1$, $w_2$ and $w_3$ are trainable parameters that automatically balance the detection, counting and reID tasks. For initialization, we set $w_1$, $w_2$ and $w_3$ to $-2$, $-1$ and $-1$, respectively. In Fig.~\ref{fig:para}, we show the training curves for the three parameters on the training set; they converge by the 20th epoch. In Fig.~\ref{fig:loss}, the corresponding detection, counting and reID losses also converge at the same epoch. Meanwhile, we show the loss curves for the three tasks (dotted lines in Fig.~\ref{fig:loss}) when the training parameters are fixed ($w_1=-2$, $w_2=-1$ and $w_3=-1$). As observed, the dotted lines converge at relatively high loss values, which indicates that the uncertainty weights are effective for multi-task training. On the validation set, automatically adjusting the three parameters also performs better than using fixed training parameters (MOTA of 71.8 vs 70.2, IDF1 of 76.3 vs 72.3). The parameter $w_1$ controls the weight of the detection task, and its initial value may affect the tracking performance. To analyze the effect of $w_1$, we initialize $w_2 = -1$, $w_3 = -1$, and change $w_1$ from $-5$ to $-1$ for different training models.
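Since (\ref{eq:all}) is not reproduced in this excerpt, the sketch below assumes the standard Kendall-style uncertainty weighting used by FairMOT-type trackers, with the paper's initialization $w_1=-2$, $w_2=w_3=-1$; the $\tfrac12$ factor and the toy loss values are our assumptions and the exact constants of the paper's loss may differ.

```python
import math

def uncertainty_weighted_loss(losses, w):
    """Kendall-style uncertainty weighting (assumed form of the total
    loss): total = 1/2 * sum_i (exp(-w_i) * L_i + w_i).
    The trainable w_i act as log-variances: a task whose loss stays
    large pushes its w_i up and is automatically down-weighted."""
    return 0.5 * sum(math.exp(-wi) * li + wi for li, wi in zip(losses, w))

# Initialization used in the paper: w1, w2, w3 = -2, -1, -1.
w = [-2.0, -1.0, -1.0]
losses = [1.5, 0.8, 0.6]   # toy detection / counting / reID losses
total = uncertainty_weighted_loss(losses, w)

# d(total)/d(w_i) = (1 - exp(-w_i) * L_i) / 2, so gradient descent
# drives each task toward exp(-w_i) * L_i = 1, i.e. equal contributions.
grads = [0.5 * (1.0 - math.exp(-wi) * li) for li, wi in zip(losses, w)]
```

The fixed-point condition in the last comment is one way to see why the learned weights outperform fixed ones: the balance adapts as the per-task losses shrink at different rates.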
The results of different parameter settings are reported in Tab.~\ref{table:para_w}, and we find that a smaller $w_1$ (a larger weight for detection) does not improve the tracking performance. The reason is that the detection task is also constrained by the counting task. For the reID weight $w_3$, according to our experiments, the tracking performance changes little as it varies from $-5$ to $-1$. \begin{table}[!htb] \begin{center} \setlength{\tabcolsep}{4pt} \caption{Tracking results of different $w_1$ on the validation set of \emph{MOT17}. } \label{table:para_w} \begin{tabular}{lccccc} \toprule $w_2 = -1$, $w_3 = -1$ & MOTA$\uparrow$ & IDF1$\uparrow$ & MT$\uparrow$& ML$\downarrow$ & IDS $\downarrow$\\ \midrule $w_1 = -5$ & {60.2} & { 69.3}&{41.89\%} &{16.22}\% & 482\\ $w_1 = -4$ & 68.4 & {71.9} &{43.66\%}&{16.22\%}& 476\\ $w_1 = -3$ & 71.3 & 74.1 &48.67\%& 13.57\% & 374\\ $w_1 = -2$ & {\bf 71.8} & {\bf 76.3}&{49.55\%} &{\bf 13.27\%} & {\bf 309}\\ $w_1 = -1$ & 70.7 & 72.7 &{\bf 51.33\%}& 15.93\% & {391}\\ \bottomrule \end{tabular} \end{center} \vspace{-0.3in} \end{table} \subsection{Evaluation of crowd counting performance} \new{ To analyze the counting performance, we evaluate different models on the validation sets of \emph{MOT17} and \emph{MOT20} using MAE~(Mean Absolute Error) and SSIM~(Structural Similarity Index Measure). The validation sets are generated by uniformly splitting the original training sets (see Section~\ref{sec:IV-D}). The counting results are reported in Tab.~\ref{table:counting}. ``CountingModel" shares the same backbone with FairMOT, but only has a counting branch. ``FairMOT+Counting" is created by adding the counting task to FairMOT. Note that all models in Tab.~\ref{table:counting} are first pre-trained on the extra datasets (see Section~\ref{sec::datasts}), and then fine-tuned on the validation sets. As observed in Tab.~\ref{table:counting}, for \emph{MOT17}, CountingMOT achieves the lowest MAE and the highest SSIM.
For the crowded scene \emph{MOT20}, CountingMOT has the highest SSIM, which implies that the estimated density map has better localization ability. In summary, our CountingMOT model improves over FairMOT in counting accuracy, and it also improves the localization ability of the crowd density map in crowded scenes (as reflected by SSIM).} \begin{table} \begin{center} \setlength{\tabcolsep}{5pt} \caption{Crowd counting performance of different models on the validation sets of \emph{MOT17} and \emph{MOT20}.} \label{table:counting} \begin{tabular}{|l|cc|cc|} \hline \multirow{2}{*}{Models} &\multicolumn{2}{c|}{MOT17}&\multicolumn{2}{c|}{MOT20} \\ \cline{2-5} &MAE&SSIM&MAE&SSIM \\ \hline FairMOT&20.56&-&17.08&- \\ CountingModel&18.04&{0.84}&\textbf{16.11}&0.70 \\ FairMOT+Counting&18.36&0.82&16.80&0.68 \\ CountingMOT&\textbf{17.55}&\textbf{0.87}&16.27& \textbf{0.72} \\ \hline \end{tabular} \end{center} \vspace{-0.3in} \end{table} \section{Conclusion} \label{text:conclusion} In this paper, we propose a multi-task model that jointly performs crowd counting, detection and re-identification. Unlike existing \emph{tracking-by-detection} MOT methods, our CountingMOT model introduces an informative crowd density map to help recover missed or occluded object detections, which makes our model robust to crowded scenes. In turn, the detection task improves the quality of the crowd density map, and the mutual constraints between detection and counting jointly improve the tracking performance. Our approach is an attempt to bridge the gap between counting, detection and re-identification. Experimental results show that our model achieves state-of-the-art results on \emph{MOT16} and \emph{MOT17}, and the comparison on \emph{MOT20} indicates that CountingMOT handles multi-object tracking well in crowded scenes. \new{In the future, we will further improve the reID task with unsupervised or semi-supervised learning~\cite{zhou2019person,chen2019semisupervised,gu2022motion}.
For real applications, we will also consider long-term association by jointly using correlation filtering~\cite{yuan2020self,cao2021feature,jain2021channel}.} \ifCLASSOPTIONcaptionsoff \newpage \fi \footnotesize \bibliographystyle{IEEEtran}
\section{Introduction and statement of results} \subsection{Overview} The study of logarithmically correlated Gaussian fields (LCGFs) has its roots in two-dimensional quantum field theory and the study of Gaussian multiplicative chaos \cite{K85, KPZ88}. The last decade saw rapid development, driven both by these motivations \cite{S07,DS11,RV14} and by the fact that these Gaussian fields arise as natural limits in a variety of contexts, including random matrix theory \cite{BMP21,CN18,CFLW19}, planar random walks and Brownian motion \cite{DPRZ04,J20} and number theory \cite{FHK12, ABR20}. Within the theory of LCGFs, the study of extremes plays an important role and highlights the link with the study of Branching Random Walks (BRW); this link was already made in the seminal work \cite{K85}, and played an important role in the study of the maxima of LCGFs, see \cite{BZ12, BDZ16b, BL18} and the lecture notes \cite{B20,Z16}. A general framework for the study of the maximum of (discretely indexed) LCGFs is put forward in \cite{DRZ17}. Building on the approach of \cite{BDZ16}, it provided an axiomatic framework that ensures that the maximum of such a field, properly centered, converges in distribution as the size of the box on which it is defined increases. Checking the hypotheses of that framework is often a non-trivial task, and a recent success is the evaluation of the limit for the 4D membrane model, see \cite{S20}. In all of the above-mentioned examples, the covariance of the LCGF is uniformly controlled by the logarithm. We introduce notation to explain this point. Consider the lattice $\mathbb{Z}^d$ with nearest neighbor edges, and let $E(\mathbb{Z}^d)$ denote the collection of edges.
For $w\in\mathbb{Z}^d$ let $V_N(w)=(w+[0,N-1]^d)\cap\mathbb{Z}^d$ be the lattice cube of sidelength $N$ with lower left corner $w$, and for $\delta>0$ set \begin{equation} \label{eq:VNd} V^\delta_N(w):=\left\{v\in V_N(w)\colon\mathsf{d}(v,\partial^+ V_N(w))\ge\delta N\right\}, \end{equation} where $\partial^+ V_N(w)$ is the outer boundary of $V_N(w)$, that is, those points $v\in \mathbb{Z}^d\setminus V_N(w)$ at distance $1$ from $V_N(w)$. For $N\in\mathbb{N}$ and $w\in\mathbb{Z}^d$, we consider a centered Gaussian field $\varphi^{N,w}=(\varphi^{N,w}_v)_{v\in V_N(w)}$ on $V_N(w)$ with variance and covariance \begin{equation} \label{eq-cov} \Cov^{N,w}(v,u)=\mathbb{E}\varphi^{N,w}_v\varphi^{N,w}_u, \quad v,u\in V_N(w), \quad \Var^{N,w}(u)=\Cov^{N,w}(u,u), \end{equation} and law $\mathbb{P}^{N,w}$. Throughout, when $w=0$ we omit it from the notation. We say that $(\varphi^{N}_v)_{v\in V_N}$ is a LCGF if for any $\delta>0$ there exists $C_\delta<\infty$ so that \begin{equation} \label{e:LCGF}\left|\Cov^N(v,u) -\log N+\log_+|u-v|\right|\le C_\delta, \quad u,v\in V_N^\delta, \end{equation} where $\log_+ x:=\max(\log x,0)$. Our goal in this paper is to extend the theory of the extrema of LCGFs in the direction of Gaussian fields defined in a \textit{random environment}, for which \eqref{e:LCGF} does not hold uniformly. We describe below in Theorem \ref{t:mainthm} a general result in that direction, but as the result is rather technical, we first present two main applications. The first application is the Gaussian free field with i.i.d. uniformly elliptic conductances, i.e. the Gaussian field whose covariance corresponds to random walk among i.i.d. bounded random conductances. Throughout, we consider $\mathbb{Z}^2$ as a graph with nearest-neighbour edges, and write $E(G)$ for the edges of a graph $G$. \begin{theorem} \label{t:random_conductances} Fix $0<\Lambda^-\le\Lambda^+$ and let $\mathbf{P}$ be an i.i.d. probability measure on $[\Lambda^-,\Lambda^+]^{E(\mathbb{Z}^2)}$.
For each sample ${\mathbf{a}}$ from $\mathbf{P}$, construct the family of Gaussian fields $\varphi^{{\mathbf{a}},N}$ on $V_N$ with law \begin{align*} &\mathbb{P}^{{\mathbf{a}},N}(\mathrm{d}\varphi^{{\mathbf{a}},N})\\ &=\frac{1}{Z_{\mathbf{a}}^{V_N}}\exp\left(-\frac12\sum_{\{u,v\}\in E(\mathbb{Z}^2)}{\mathbf{a}}(\{u,v\})(\varphi^{{\mathbf{a}},N}_u-\varphi^{{\mathbf{a}},N}_v)^2\right)\prod_{v\in V_N}\,\mathrm{d}\varphi^{{\mathbf{a}},N}_v\prod_{v\notin V_N}\delta_0(\mathrm{d}\varphi^{{\mathbf{a}},N}_v). \end{align*} Then there is a deterministic constant ${\overline{\A}}\in[\Lambda^-,\Lambda^+]$ such that for $\mathbf{P}$-almost every ${\mathbf{a}}$, the sequence of random variables \[\max_{v\in V_N}\varphi^{{\mathbf{a}},N}_v-\sqrt{\frac{2}{\pi{\overline{\A}}}}\left(\log N-\frac{3}{8}\log\log N\right)\] converges in law as $N\to\infty$ to a deterministic limit. \end{theorem} Theorem \ref{t:random_conductances} resolves a version of \cite[Problem 1.23]{B11}, where the conductances $\kappa$ in the notation there are i.i.d. Our second application concerns the Gaussian free field on the infinite cluster of supercritical Bernoulli percolation with parameter sufficiently close to $1$. Recall, see \cite{K80}, that for every $p> 1/2$, under the law $\mathbf{P}_p$ of Bernoulli bond percolation on $\mathbb{Z}^2$, there is almost surely a unique infinite cluster $\mathcal{C}_\infty$ of open edges. \begin{theorem}\label{t:perc_unit_cond} There is $1/2\le p_*<1$ with the following property.
For $p> p_*$, construct a family of Gaussian fields $\varphi^{\mathcal{C}_\infty,N}$ on $V_N\cap \mathcal{C}_\infty$ with law \begin{align*} &\mathbb{P}^{\mathcal{C}_\infty,N}(\mathrm{d}\varphi^{\mathcal{C}_\infty,N})\\ &=\frac{1}{Z_{\mathcal{C}_\infty}^{V_N}}\exp\left(-\frac12\sum_{\{u,v\}\in E(\mathcal{C}_\infty)}(\varphi^{\mathcal{C}_\infty,N}_u-\varphi^{\mathcal{C}_\infty,N}_v)^2\right)\prod_{v\in V_N\cap\mathcal{C}_\infty}\,\mathrm{d}\varphi^{\mathcal{C}_\infty,N}_v\prod_{v\notin V_N\cap\mathcal{C}_\infty}\delta_0(\mathrm{d}\varphi^{\mathcal{C}_\infty,N}_v). \end{align*} Then there is $ a_p $ depending on $p$ only such that for $\mathbf{P}_p$-almost every realization of $\mathcal{C}_\infty$, the sequence of random variables \[\max_{v\in V_N\cap\mathcal{C}_\infty}\varphi^{\mathcal{C}_\infty,N}_v-\sqrt{\frac{2}{\pi a_p }}\left(\log N-\frac{3}{8}\log\log N\right)\] converges in law as $N\to\infty$ to a deterministic limit. \end{theorem} As in the case of LCGFs, we also have a description of the limits in Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond} as randomly shifted Gumbel distributions. Before proceeding, we explain the emergence of the constants ${\overline{\A}}$ and $ a_p $ in Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond}, as well as the intuition for why one should expect these results. Recall that the covariance of the Gaussian fields $\varphi^{{\mathbf{a}},N}$ in the theorems is given by the Green function of a random walk on the random-conductance graph given by ${\mathbf{a}}$, killed when hitting $\partial^+ V_N$. It is natural then to ask for the scaling limit (on the infinite lattice) of the random walk, and homogenization theory provides that scaling limit.
More precisely, building on the Kipnis--Varadhan theory, it is proved in \cite{DMFGW88} that the random walk converges (under the so-called \textit{averaged} law, that is, under the law which is the semidirect product of $\mathbf{P}$ (or $\mathbf{P}_p$) with the law of the random walk) to a Brownian motion of diffusivity ${\overline{\A}}$ (or $ a_p $). Convergence to Brownian motion in the random conductance model of Theorem \ref{t:random_conductances} under the \textit{quenched} law (that is, for $\mathbf{P}$-almost every ${\mathbf{a}}$) appears (with some restrictions) in \cite{Ko85}, and in full generality in \cite{SS04}, where the percolation case is also handled, but only in dimension $d\geq 4$; the analogous result for percolation clusters in all dimensions $d\geq 2$ was settled in \cite{BB07, MP07}, building on earlier heat kernel estimates in \cite{B04} and \cite{MR04}. See \cite{B11} for a review. With the invariance principle (and related heat kernel estimates) in place, it is natural to expect the scaled Green function of the random walk to be related to that of Brownian motion. Indeed, the (quenched) invariance principle suffices to prove that macroscopic averages of the Green function behave like macroscopic averages of the covariance of a LCGF (cf. \cite{ADS20}); in particular, this makes the statements in Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond} plausible. However, for our needs much more \textit{quantitative} estimates are required. For these, we rely on recently developed versions of homogenization theory, such as those of \cite{GNO15, AKM17, AKM19}. We note that convergence at the level of the associated Gaussian fields is proved (in the continuous setup) in \cite{CR22}. Further details are provided below in Section \ref{sec-4}.
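As a purely numerical illustration of the objects in Theorem \ref{t:random_conductances} (this is not part of the paper's argument), one can assemble the weighted Dirichlet Laplacian of a small box with i.i.d. conductances, invert it to obtain the Green function, i.e. the covariance of $\varphi^{{\mathbf{a}},N}$, and sample the field; the conductance range and all function names below are our own choices.

```python
import numpy as np

def dirichlet_laplacian(N, rng, lo=0.5, hi=2.0):
    """Weighted graph Laplacian of the N x N box with zero boundary
    data and i.i.d. edge conductances in [lo, hi] -- a toy stand-in
    for a sample from the i.i.d. measure on conductances."""
    idx = {(i, j): i * N + j for i in range(N) for j in range(N)}
    L = np.zeros((N * N, N * N))
    for (i, j), k in idx.items():
        for di, dj in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            nb = (i + di, j + dj)
            if nb in idx:
                if (di, dj) in ((1, 0), (0, 1)):   # count each inner edge once
                    a = rng.uniform(lo, hi)        # conductance of this edge
                    m = idx[nb]
                    L[k, k] += a
                    L[m, m] += a
                    L[k, m] -= a
                    L[m, k] -= a
            else:                                  # edge to the zero boundary
                L[k, k] += rng.uniform(lo, hi)
    return L

rng = np.random.default_rng(1)
N = 16
L = dirichlet_laplacian(N, rng)
G = np.linalg.inv(L)          # Green function = covariance of the field
G = 0.5 * (G + G.T)           # symmetrize against round-off
field = np.linalg.cholesky(G) @ rng.standard_normal(N * N)
M = field.max()               # the maximum studied in the theorem
```

The sketch only exhibits qualitative features (the variance is larger in the bulk than near the corner, consistent with log-correlation); extracting the homogenized diffusivity that fixes the centering constant requires much larger boxes.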
Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond} are both consequences of Theorem \ref{t:percolation_cluster} below, which utilizes quantitative homogenization in checking the general hypotheses of our main result, Theorem \ref{t:mainthm}. We note that the latter is expected to be useful in other contexts as well, and we list some examples in Section \ref{sec-applications}. \subsection{Maxima of Gaussian fields under non-uniform log-correlation assumptions} We next turn to the general result alluded to above, and begin by introducing notation and assumptions. For $L\le N$ set \begin{equation} \label{eq:calW} \mathcal{W}_{N,L}(w)=\left\{w'\in\mathbb{Z}^d: w'-w\in L\mathbb{Z}^d,V_L(w')\subset V_N(w)\right\}. \end{equation} For any Polish space $\mathcal{X}$, we let $\mathcal{M}_1(\mathcal{X})$ denote the space of probability measures on $\mathcal{X}$, equipped with the topology of weak convergence. For $w\in \mathbb{Z}^d$, we let $\tau_w$ denote the shift by $w$ on $\mathbb{Z}^d$ (i.e. $\tau_w(v)=v+w$), and also the shift on $\mathbb{R}^{\mathbb{Z}^d}$ (i.e. $\tau_w F(\cdot)=F(\tau_w(\cdot))$). For $N\in\mathbb{N}$ and $w\in\mathbb{Z}^d$, we consider a centered Gaussian field $\varphi^{N,w}=(\varphi^{N,w}_v)_{v\in V_N(w)}$ on $V_N(w)$ with law $\mathbb{P}^{N,w}$. \begin{assumption} \label{as:main} There exist functions $\mathsf{T},\mathsf{R}^{(1)},\mathsf{R}^{(2)}\colon\mathbb{Z}^d\to[0,\infty)$ so that the variances and covariances $\Var^{N,w}(\cdot)$, $\Cov^{N,w}(\cdot,\cdot)$ of $\varphi^{N,w}$ satisfy the following.
\begin{enumerate}[label=(A.\arabic*)] \item\label{a:logupp} \textbf{(Logarithmic upper bounds on the covariances)} For all $N$, all $w\in\mathbb{Z}^d$ and for all $u,v\in V_N(w)$, \[\Var^{N,w}(v)\le\log N+\mathsf{T}_v\] and \[\Var^{N,w}(v)-\Cov^{N,w}(u,v)\le \log_+|u-v|+\mathsf{T}_u+\mathsf{T}_v.\] \item\label{a:logbd} \textbf{(Logarithmic bounds on the covariances away from the boundary)} For every $\delta>0$ there is an increasing function $\alpha_{\delta}\colon[0,\infty)\to[0,\infty)$ such that for all $N$, all $w\in\mathbb{Z}^d$ and for all $u,v\in V^\delta_N(w)$, \[\left|\Cov^{N,w}(u,v)-\log N+\log_+|u-v|\right|\le \alpha_{\delta}(\mathsf{R}^{(1)}_u)+\alpha_{\delta}(\mathsf{R}^{(1)}_v).\] \item\label{a:micro} \textbf{(Approximation on a microscopic scale near the diagonal)} There are a continuous function $f\colon(0,1)^d\to\mathbb{R}$ which is bounded from above and a function $g\colon\mathbb{Z}^d\times\mathbb{Z}^d\to\mathbb{R}$ such that the following holds. For all $L,\delta,\varepsilon>0$ there is $N_{A.3}$ such that for $N\ge N_{A.3}$, $w\in\mathbb{Z}^d$ and for all $u,v\in V^\delta_N(w)$ with $|u-v|_\infty\le L$ and $\mathsf{R}^{(1)}_u\vee\mathsf{R}^{(1)}_v\le \delta N$, $\mathsf{R}^{(2)}_w\le N$ we have \[\left|\Cov^{N,w}(u,v)-\log N-f\left(\frac{v-w}{N}\right)-g(u,v)\right|\le\varepsilon.\] \item\label{a:macro} \textbf{(Approximation on a macroscopic scale off the diagonal)} There is a continuous function $h\colon(0,1)^d\times(0,1)^d\setminus\{(x,x)\colon x\in(0,1)^d\}\to\mathbb{R}$ such that the following holds. For all $L,\delta,\varepsilon>0$ there is $N_{A.4}$ such that for $N\ge N_{A.4}$, $w\in\mathbb{Z}^d$ and for all $u,v\in V^\delta_N(w)$ with $|u-v|_\infty\ge {N}/{L}$ and $\mathsf{R}^{(1)}_u\vee\mathsf{R}^{(1)}_v\le (\delta N)\wedge({N}/{L})$, $\mathsf{R}^{(2)}_w\le N$ we have \[\left|\Cov^{N,w}(u,v)-h\left(\frac{u-w}{N},\frac{v-w}{N}\right)\right|\le\varepsilon.\] \end{enumerate} Additionally, the following holds for $w=0$. 
\begin{enumerate}[label=(B.\arabic*)] \item\label{a:sparseT} \textbf{(Quantitative sparsity of large values of $\mathsf{T}$)} There are $\varepsilon>0$, $C>0$ such that the following holds. For every $L,N\in\mathbb{N}$, for any $T\ge 1$ and for any $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{L}\right\rfloor}$ we have that \[\left|\left\{v\in V_{\left\lfloor\frac{N}{L}\right\rfloor}(w')\colon \mathsf{T}_v\ge T\right\}\right|\le C\left(\frac{N}{L}\right)^d{\mathrm{e}}^{-(d+\varepsilon)T}.\] \item\label{a:sparseR} \textbf{(Qualitative sparsity of large values of $\mathsf{R}^{(1)}$ and $\mathsf{R}^{(2)}$)} We have that \begin{align*} \limsup_{R\to\infty}\sup_{L\in\mathbb{N}}\limsup_{N\to\infty}\left(\frac{L}{N}\right)^d\max_{w'\in \mathcal{W}_{N,\left\lfloor \frac{N}{L}\right\rfloor}}\left|\left\{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w'): \mathsf{R}^{(1)}_{v}> R\right\}\right|&=0,\\ \sup_{L\in\mathbb{N}}\limsup_{L'\to\infty}\limsup_{N\to\infty}\left(\frac{LL'}{N}\right)^d\max_{w'\in \mathcal{W}_{N,\left\lfloor \frac{N}{L}\right\rfloor}}\left|\left\{w''\in \mathcal{W}_{\left\lfloor \frac{N}{L}\right\rfloor,L'}(w'): \mathsf{R}^{(2)}_{w''}> L'\right\}\right|&=0. \end{align*} \item\label{a:lln} \textbf{(Law of large numbers for the local behaviour)} For every $L'\in\mathbb{N}$ there is a probability measure $\nu_{L'}\in \mathcal{M}_1(\mathcal{M}_1(\mathbb{R}^{V_{L'}(0)}))$ such that for any $L\in\mathbb{N}$ and every $a\in V_L$, setting $w_{a,N}'=a\left\lfloor \frac{N}{L}\right\rfloor$ and \[\mu_{N,a}=\frac{1}{\left|\mathcal{W}_{\left\lfloor \frac{N}{L}\right\rfloor,L'}\right|}\sum_{w''\in\mathcal{W}_{\left\lfloor \frac{N}{L}\right\rfloor,L'}(w_{a,N}')}\delta_{\mathbb{P}^{L',w''}(\tau_{w''}(\cdot))},\] the sequence $\mu_{N,a}$ converges weakly to $\nu_{L'}$.
\end{enumerate} \end{assumption} In Assumption \ref{as:main}, one thinks of the variables $\mathsf{R}^{(1)}_v$ as the scale at which covariance estimates for $\Cov^{N,w}(v,\cdot)$ hold, $\mathsf{R}^{(2)}_w$ as the scale at which homogenization applies for random walk started at $w$, and $\mathsf{T}_v$ as the local maximal error for estimates on the covariance $\Cov^{N,w}(v,\cdot)$. \begin{theorem}\label{t:mainthm} Let Assumption \ref{as:main} hold. Set $M_{N}:=\max_{v\in V_N}\varphi^{N}_v$, and define the sequence \[m_N:=\sqrt{2d}\log N-\frac{3}{2\sqrt{2d}}\log\log N.\] Then $M_{N}-m_N$ converges in distribution to a limit that is given as a randomly shifted Gumbel distribution. That is, there exists a positive number $\beta^*$ and a random variable $Z$ so that, for any $t\in \mathbb{R}$, \[ \lim_{N\to\infty}\mathbb{P}^{N}(M_{N}-m_N\leq t)= \mathbb{E}\left( {\mathrm{e}}^{- \beta^*Z {\mathrm{e}}^{-\sqrt{2d}t}}\right).\] Moreover, $Z$ is the weak limit of the sequence $Z_N$ defined by \[Z_N=\sum_{v\in V_N}(\sqrt{2d}\log N-\varphi^N_v){\mathrm{e}}^{-2d\log N+\sqrt{2d}\varphi^N_v}.\] \end{theorem} As discussed in \cite{DRZ17}, $Z_N$ resembles the derivative martingale which occurs in the study of various branching processes. In our setting, $Z_N$ is not necessarily a martingale. Nonetheless we expect that many of the properties of $Z_N$ and its limit in the case of the two-dimensional Gaussian free field carry over to our setting. In particular, in analogy to \cite{BL16}, there should actually exist a random measure $\mathcal{Z}$ on $[0,1]^d$ with $Z=\mathcal{Z}([0,1]^d)$ that encodes the locations of near-maximizers of the fields and has a number of other interesting properties. However, we do not pursue this description in the current paper. We next explain in more detail the role of the different parts of Assumption \ref{as:main}. 
First, one should compare our assumptions to those in \cite{DRZ17}, which are similar to those in part A of Assumption \ref{as:main} (we kept similar notation to allow for easy comparison). In fact, as we explain in Remark \ref{r:gen_of_DRZ} below, one can check that Theorem \ref{t:mainthm} is a strict generalization of the main results of \cite{DRZ17}. The assumptions in part A are on $\varphi^{N,w}$ for any $w\in\mathbb{Z}^d$, i.e. on boxes of any size and position. One can think of them as providing an implicit definition of $\mathsf{T}_\cdot$, $\mathsf{R}^{(1)}_\cdot$ and $\mathsf{R}^{(2)}_\cdot$. The assumptions in part B are only on $\varphi^N=\varphi^{N,0}$, i.e. on our domain of interest $V_N=V_N(0)$. They encode the fact that on $V_N$ the random scales $\mathsf{R}^{(1)}_\cdot$ and $\mathsf{R}^{(2)}_\cdot$ and the error term $\mathsf{T}_\cdot$ are well-behaved. In Assumption \ref{a:logupp} we state upper bounds on the variances in terms of the random field $\mathsf{T}_\cdot$. It is clear that if there were many points where the variances are atypically large, then these could influence the maximum of the field in a non-negligible way. So we accompany Assumption \ref{a:logupp} with Assumption \ref{a:sparseT}, which provides a quantitative tail bound on the number of points in $V_N$ where $\mathsf{T}_\cdot$ is large. This assumption is close to optimal, as the following heuristic back-of-the-envelope calculation shows. Suppose there are $\kappa_T N^d$ points in $V_N$ with variance $\ge \log N+T$, where $1\ll T\ll\log N$, $1\ll |\log\kappa_T|\ll\log N$, and such that the field at these points is still logarithmically correlated. Then the maximum of the field over those points should be at least of order \begin{align*} \sqrt{\frac{\log N+T}{\log(\kappa_T^{1/d}N)}}m_{\kappa_T^{1/d}N}&=\sqrt{2d}\sqrt{(\log N+T)\log(\kappa_T^{1/d}N)}-\frac{3}{2\sqrt{2d}}\sqrt{\frac{\log N+T}{\log(\kappa_T^{1/d}N)}}\log\log(\kappa_T^{1/d}N)\\ &=m_N+\sqrt{\frac{d}{2}}(T+\log\kappa_T^{1/d})+o(1).
\end{align*} Therefore, if the maximum of the entire field is supposed to be of order $m_N$, we need $T+\log\kappa_T^{1/d}$ to be bounded above for $T$ large enough; that is, we need $\kappa_T\le C{\mathrm{e}}^{-dT}$ for $T$ large enough. Our assumption \ref{a:sparseT} essentially implies that $\kappa_T\le C{\mathrm{e}}^{-(d+\varepsilon)T}$ for some $\varepsilon>0$, and so we see that it is close to optimal. Assumption \ref{a:logbd} provides error bounds with error $O(1)$ away from the boundary, and Assumptions \ref{a:micro} and \ref{a:macro} provide even sharper estimates, with error $o(1)$, on microscopic and macroscopic scales. These three assumptions are all in terms of the random scales $\mathsf{R}^{(1)}_\cdot$ and $\mathsf{R}^{(2)}_\cdot$. For the conclusion of Theorem \ref{t:mainthm} we need these bounds on most points of $V_N$, and here the qualitative assumption \ref{a:sparseR} is sufficient. While assumption \ref{a:micro} ensures that the covariances of the field are locally described by a function $g\colon\mathbb{Z}^d\times\mathbb{Z}^d\to\mathbb{R}$, it requires nothing of $g$. We still need to make sure that $g$ behaves roughly the same on the points of $V_N$. Our rather weak, qualitative way to do so is assumption \ref{a:lln}, which ensures that the law of the field on small boxes satisfies a law of large numbers. \begin{remark}\label{r:variable_wN} As mentioned above, the assumptions in part A are on $\varphi^{N,w}$ for any $w\in\mathbb{Z}^d$, while the assumptions in part B are only on $\varphi^N=\varphi^{N,0}$, i.e. on our domain of interest $V_N=V_N(0)$. It is also possible to prove a variant of Theorem \ref{t:mainthm}, where instead of considering $V_N(0)$ one considers $V_N(w_N)$ for some sequence $(w_N)_{N\in\mathbb{N}}$ of points in $\mathbb{Z}^d$.
In this case one needs to modify the assumptions in part B so that they hold for $\mathcal{W}_{N,\left\lfloor{N}/{L}\right\rfloor}(w_N)$ instead of $\mathcal{W}_{N,\left\lfloor{N}/{L}\right\rfloor}$, and one also needs to make the additional assumption that for $N$ large enough $\mathsf{R}^{(2)}_{w_N}\le N$ (if $w_N=0$ for all $N$, that is trivially true, so we did not list this in Assumption \ref{as:main}). The proof of this slightly more general statement is exactly the same as the one we give below. \end{remark} \begin{remark}\label{r:gen_of_DRZ} As mentioned above, Theorem \ref{t:mainthm} is a generalization of the main result of \cite{DRZ17}. To see this, it suffices to check that any Gaussian field satisfying the assumptions of \cite{DRZ17} also satisfies Assumption \ref{as:main} for a suitable choice of $\mathsf{T},\mathsf{R}^{(1)},\mathsf{R}^{(2)}$. Indeed we can choose $\mathsf{T}_v=\alpha_0$ with the $\alpha_0$ from \cite[Assumption (A.0)]{DRZ17}, $\mathsf{R}^{(1)}_v=\mathsf{R}^{(2)}_v=1$, and $\alpha_\delta(\cdot)={\alpha^{(\delta)}}/{2}$ with the $\alpha^{(\delta)}$ from \cite[Assumption (A.1)]{DRZ17}. With these choices $\mathsf{T},\mathsf{R}^{(1)},\mathsf{R}^{(2)}$ are all bounded, so that \ref{a:sparseT} and \ref{a:sparseR} hold trivially. Assumption \ref{a:lln} is immediate since the Gaussian laws $\mathbb{P}^{L',w''}(\tau_{w''}(\cdot))$ do not depend, in the setup of \cite{DRZ17}, on $w''$. Next, assumptions \ref{a:logupp}, \ref{a:logbd} and \ref{a:macro} all follow from straightforward calculations which we omit. For \ref{a:micro} things are less obvious, so let us give some details: To avoid clashes in notation, rename the functions $f$ and $g$ from \cite[Assumption (A.2)]{DRZ17} to $\tilde f$, $\tilde g$. 
The function $\tilde g$ must be translation-invariant (to see this, use \cite[Assumption (A.2)]{DRZ17} with $x=\bar x:=\left(\frac12,\ldots,\frac12\right)\in\mathbb{R}^d$ and $(u,v)=(\bar u,\bar v)$ arbitrary, and then also with $x=\bar x+\frac{\bar u}{N}$ and $(u,v)=(0,\bar v-\bar u)$). Thus $\tilde g(u,v)$ is actually a function of $u-v$. The function $\tilde f$ must be bounded above, because otherwise \cite[Assumption (A.2)]{DRZ17} with $(u,v)=(0,0)$ would contradict \cite[Assumption (A.0)]{DRZ17}. These considerations show that if we choose $f=\tilde f$ and $g=\tilde g$ then \ref{a:micro} is satisfied as well. \end{remark} \begin{remark} In Assumption \ref{a:micro} the condition that $f$ is bounded from above can be relaxed. In fact, if we set \[\Gamma_\delta:=\sup_{x\in[\delta,1-\delta]^d}f(x)\] then the condition \[\Gamma_\delta\le\frac{1-\varepsilon}{d}|\log \delta|\] for some $\varepsilon>0$ is sufficient. To show this slightly stronger result, one can no longer take the limit $\delta\to0$ before $T\to\infty$ in the results below, but needs to choose $\delta$ as a suitable function of $T$. However, in all applications we are aware of, $f$ will be bounded from above, and so we chose to give the detailed proof only in that case. \end{remark} \begin{remark} In all examples that we have in mind, Assumption \ref{a:lln} is checked using an appropriate ergodic theorem, and provides information concerning averages of local functions. This forces us, in various places in the proof and in particular in applications to the percolation problem, to approximate general translation-invariant functions by local functions. One could replace assumption \ref{a:lln} by a stronger ergodicity assumption involving the law of the field jointly with $\mathsf{T}_\cdot$, $\mathsf{R}^{(1)}_\cdot$ and $\mathsf{R}^{(2)}_\cdot$. This stronger assumption would still be satisfied in our examples of interest, and some proofs (in particular step 4 in the proof of Lemma \ref{l:right_tail}) could be shortened slightly.
However, we found it worthwhile to showcase that we only require a very weak ergodicity assumption like \ref{a:lln}, and so we chose the current formulation. \end{remark} \subsection{Applications to log-correlated fields in random environment}\label{sec-applications} After our discussion of Theorem \ref{t:mainthm}, we now explain how it can be used. We already mentioned that our main application is to establish Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond}. These theorems follow from Theorem \ref{t:mainthm} once we show that for almost every realization of the environment the corresponding Gaussian measures satisfy Assumption \ref{as:main}. In fact, we prove that this is the case for a common generalization of the setting of both Theorem \ref{t:random_conductances} and Theorem \ref{t:perc_unit_cond}, where we allow conductances in $\{0\}\cup[\Lambda^-,\Lambda^+]$ under the assumption that $p:=\mathbf{P}({\mathbf{a}}(e)>0)>p_c=1/2$. We also allow boxes with locations varying with $N$, i.e. instead of considering the fields on $V_N(0)$ we consider them on $V_N(w_N)$ for some deterministic sequence $(w_N)_{N\in\mathbb{N}}$. This is the content of the following theorem, which is the second main result of the present paper. \begin{theorem}\label{t:percolation_cluster} Let $1/2<p\le1$ and $0<\Lambda^-\le\Lambda^+$. Let $\mathbf{P}$ be an i.i.d. Borel probability measure on $(\{0\}\cup[\Lambda^-,\Lambda^+])^{E(\mathbb{Z}^2)}$, with $\mathbf{P}({\mathbf{a}}(e)>0)=p$. For $\mathbf{P}$-almost every sample ${\mathbf{a}}$, let $\mathcal{C}_\infty$ denote the (unique) infinite cluster of edges $e$ where ${\mathbf{a}}(e)>0$, see \cite{K80}.
With $\tilde V_N(w)=V_N(w)\cap \mathcal{C}_\infty$, construct a family of Gaussian fields $\varphi^{{\mathbf{a}},N,w}$ on $\tilde V_N(w)$ for $N\in\mathbb{N}$ with law \begin{align*} &\mathbb{P}^{{\mathbf{a}},N}(\mathrm{d}\varphi^{{\mathbf{a}},N,w})\\ &=\frac{1}{Z_{\mathbf{a}}^{V_N(w)}}\exp\Big(-\frac12\sum_{\{u,v\}\in E(\mathbb{Z}^2)}{\mathbf{a}}(\{u,v\})(\varphi^{{\mathbf{a}},N,w}_u-\varphi^{{\mathbf{a}},N,w}_v)^2\Big)\prod_{v\in \tilde V_N(w)}\,\mathrm{d}\varphi^{{\mathbf{a}},N,w}_v\!\!\! \prod_{v\notin \tilde V_N(w)}\delta_0(\mathrm{d}\varphi^{{\mathbf{a}},N,w}_v) \end{align*} and extend this to a family of Gaussian fields $\varphi'^{{\mathbf{a}},N,w}$ on $V_N(w)$ by setting $\varphi'^{{\mathbf{a}},N,w}_v=\varphi^{{\mathbf{a}},N,w}_{v^*}$, where $v^*\in\mathbb{Z}^2$ is the point in $\mathcal{C}_\infty$ closest to $v$ (with ties broken by taking the lexicographically first point). Then there is a deterministic constant ${\overline{\A}}$ with the following property. For $\mathbf{P}$-almost every ${\mathbf{a}}$ the collection of Gaussian fields $\sqrt{2\pi{\overline{\A}}}\varphi'^{{\mathbf{a}},N,w}$ on $V_N(w)$ satisfies assumptions \ref{a:logbd}, \ref{a:micro}, \ref{a:macro}, \ref{a:sparseR} and \ref{a:lln} of Theorem \ref{t:mainthm}. Furthermore, there is a constant $\frac12\le p_{\Lambda^+/\Lambda^-}<1$ which depends on ${\Lambda^+}/{\Lambda^-}$ only, such that if $p>p_{\Lambda^+/\Lambda^-}$ then the collection of Gaussian fields also satisfies, for $\mathbf{P}$-almost every ${\mathbf{a}}$, assumptions \ref{a:logupp} and \ref{a:sparseT}.
In particular, if $p>p_{\Lambda^+/\Lambda^-}$, then for $\mathbf{P}$-almost every ${\mathbf{a}}$ the fields $\sqrt{2\pi{\overline{\A}}}\varphi'^{{\mathbf{a}},N,w}$ satisfy all assumptions of Theorem \ref{t:mainthm}, and consequently \[\max_{v\in V_N}\varphi'^{{\mathbf{a}},N}_v-\sqrt{\frac{1}{2\pi{\overline{\A}}}}m_N=\max_{v\in V_N\cap\mathcal{C}_\infty}\varphi^{{\mathbf{a}},N}_v-\sqrt{\frac{1}{2\pi{\overline{\A}}}}m_N\] converges in distribution to a randomly shifted Gumbel distribution. \end{theorem} In view of Remark \ref{r:variable_wN}, Theorem \ref{t:percolation_cluster} also holds with $V_N=V_N(0)$ replaced by $V_N(w_N)$ for a deterministic sequence $(w_N)_{N\in\mathbb{N}}$ (but note that then the $\mathbf{P}$-null set of environments ${\mathbf{a}}$ for which the theorem fails will in general depend on $(w_N)_{N\in\mathbb{N}}$). In particular, our proof of Theorem \ref{t:percolation_cluster} does not use in any way that the boxes $V_N(0)$ are nested, but rather uses Borel-Cantelli-type arguments to show that almost surely the assumptions in part B of Assumption \ref{as:main} are satisfied. Our results for percolation clusters are restricted to the highly supercritical regime where $p$ is close to 1. A natural question is what happens for other $p>1/2$. Of course, Theorem \ref{t:mainthm} implies that we can take the constant $p_*$ in Theorem \ref{t:perc_unit_cond} to be equal to the constant $p_1$ in Theorem \ref{t:percolation_cluster}. This raises the following question. \begin{question}\label{q:nearcritical} How does $p_{\Lambda^+/\Lambda^-}$ depend on ${\Lambda^+}/{\Lambda^-}$? In particular, can one take $p_{\Lambda^+/\Lambda^-}=1/2$, at least if ${\Lambda^+}={\Lambda^-}$? If that is not possible, can one nonetheless take $p_*=1/2$ in Theorem \ref{t:perc_unit_cond}? \end{question} We think that this question is very interesting, but possibly challenging.
Our proof of Theorem \ref{t:percolation_cluster} relies on the fact that $p_{\Lambda^+/\Lambda^-}$ is close to 1 (as explained in Remark \ref{r:nearcritical} below). So a positive answer to question \ref{q:nearcritical} would require some new ideas. While in this work we focus on the application in Theorem \ref{t:percolation_cluster}, the general result in Theorem \ref{t:mainthm} should also be applicable in various other settings. \begin{enumerate} \item In Theorem \ref{t:percolation_cluster} we assumed that the conductances ${\mathbf{a}}$ are independent. It should be possible to relax this assumption and to replace it, for example, by a sufficiently strong mixing assumption. However, we certainly need a quantitative assumption, and mere ergodicity is not sufficient. In fact, in \cite{ADS20} error bounds on the covariances with error $o(\log N)$ were shown under the assumption of ergodicity alone, and this is best possible. One example where such an extension is relevant might be the case where one replaces the percolation model by the two-dimensional interlacement model of \cite{CPV16}. Another interesting example with a non-i.i.d. environment is given by the gradient interface models introduced in \cite{BK07,BS11}. These are non-Gaussian, but can be represented as a mixture of Gaussian fields with ergodic random conductances. Our framework might be useful to study the maximum of the latter fields, see for example \cite[Problem 1.23]{B11}. \item As mentioned above, another important example of a LCGF is the membrane model in the critical dimension $d=4$, see \cite{S20} for the convergence in law of its centered maximum. One can define disordered versions of this model in various ways. A natural one is to take i.i.d.
positive random variables $\mathbf{b}$ for each vertex of $\mathbb{Z}^4$, and then consider the probability measure \begin{align*} &\mathbb{P}^{\mathbf{b},N}(\mathrm{d}\varphi^{\mathbf{b},N})\\ &=\frac{1}{Z_\mathbf{b}^{V_N}}\exp\left(-\frac12\sum_{v\in \mathbb{Z}^4}\mathbf{b}(v)(\Delta\varphi^{\mathbf{b},N}_v)^2\right)\prod_{v\in V_N}\,\mathrm{d}\varphi^{\mathbf{b},N}_v\prod_{v\notin V_N}\delta_0(\mathrm{d}\varphi^{\mathbf{b},N}_v). \end{align*} Checking that Theorem \ref{t:mainthm} applies in that setting would require the development of a suitable quantitative homogenization theory for fourth-order elliptic equations, and bypassing the various comparison arguments in which we invoke the maximum principle. Nonetheless, we do expect the analogue of Theorem \ref{t:percolation_cluster} to hold in this context. Alternatively, one could also take i.i.d. conductances ${\mathbf{a}}$ on the edges of $\mathbb{Z}^4$, and define a Gaussian field on $V_N$ as the preimage of discrete white noise (i.e. i.i.d. standard Gaussians) under the operator $-\nabla\cdot{\mathbf{a}}\nabla$. This is the approach used in \cite{CR22}. Again, we would expect that the analogue of Theorem \ref{t:percolation_cluster} holds in this context. \item The disordered GFF in Theorem \ref{t:percolation_cluster} and the disordered membrane model both involve only finite-range interactions in the Hamiltonian, and so have some domain Markov property. However, this is not used in Theorem \ref{t:mainthm}. In particular, the theorem should also be applicable when considering independent random conductances ${\mathbf{a}}(u,v)$ for non-neighboring $u,v$, provided that these conductances decay sufficiently fast as $|u-v|\to\infty$. \item As in \cite{DRZ17}, we focused in this paper on lattice models. One expects similar results in the continuum, where one uses mollifications of the continuous GFF defined using a disordered reversible second order operator.
The quantitative homogenization results for such operators are available in \cite{AKM19}, while the convergence in law of the maximum in the non-disordered case was derived in \cite{A16}. \end{enumerate} \subsection{Outline of the proofs} The proof of Theorem \ref{t:mainthm} builds on the argument of \cite{DRZ17}. We recall the latter. Starting with a LCGF $\varphi_v$ satisfying a uniform version of Assumption \ref{as:main}, one first proves the tightness of the maximum (shifted by $m_N$) and the following crucial property: local maxima of the field that are within a bounded distance from $m_N$ must be macroscopically separated. Equipped with this fact, one constructs approximating fields $\xi_v$ so that the variance of $\xi_v$ matches that of $\varphi_v$, and the covariances match at microscopic and macroscopic scales (while remaining within a bounded distance at mesoscopic scales). This is achieved by dividing $V_N$ into macroscopic boxes (of side length $N/J$) and microscopic boxes (of side length $J'$). One then constructs three Gaussian fields, denoted $\xi_\cdot^{ \rm mac}$, $\xi_\cdot^{\rm meso}$, and $\xi_\cdot^{ \rm mic}$, called the macroscopic, mesoscopic and microscopic fields, which are independent of each other and have the following properties: \begin{itemize} \item The field $\xi_\cdot^{ \rm mac}$ is piecewise constant on macroscopic boxes, and its covariance matches the function $h$ in assumption \ref{a:macro}. \item The field $\xi_\cdot^{ \rm meso}$ is a modified branching random walk (see \eqref{eq-MBRW} below for a definition) with covariance adjusted to match the mesoscopic scale of the LCGF, up to an additive error. \item The field $\xi_\cdot^{ \rm mic}$ consists of independent copies (between microscopic boxes) of a field whose covariance matches the local covariance of the field determined by the functions $f,g$ in assumption \ref{a:micro}.
\item The field $\xi_\cdot= \xi_\cdot^{ \rm mac}+\xi_\cdot^{\rm meso}+\xi_\cdot^{ \rm mic}$ matches the covariance of $\varphi_\cdot$ at microscopic and macroscopic scales. \end{itemize} (The actual construction decomposes $J=KL$ and $J'=K'L'$, in order to obtain good matching of covariances, but we overlook this detail in this high-level description.) We now describe how the proof is adapted to the non-homogeneous setup of the field $\varphi^N_\cdot$. First, one controls the maximum of the field over the ``bad points'', i.e. those vertices $v$ with large values of $\mathsf{T}_v$, $\mathsf{R}^{(1)}_v$ or which lie in boxes with large $\mathsf{R}^{(2)}_w$, and shows that those do not contribute, see Lemma \ref{l:upper_right_tail_badset}, whose proof (together with some auxiliary results) takes up Section \ref{subsec-badset}; the statement and its proof slightly sharpen the corresponding ones in \cite{DRZ17}. Having controlled the bad points, one controls the tails of the distribution of the maximum following the recipe of \cite{DRZ17}, culminating with the proof of tightness of the centered maximum (Theorem \ref{t:tightness}) and the macroscopic separation between local maxima (Theorem \ref{t:geom_nearmax}). The fact that the good points now form a cube with small ``holes'', and thus have a rather complicated geometry, causes some additional difficulty, but generally the arguments are very similar to \cite{DRZ17}. The main point of departure from the analysis of \cite{DRZ17} comes in constructing our approximating fields, see \eqref{eq-approxfield}. The local inhomogeneity of $\varphi^N_\cdot$ and the need to match variances forces one to introduce local correction terms in the form of additive independent Gaussians; using the construction of \cite{DRZ17} would have led us to a lack of uniform control on these corrections. Instead, we use corrections at both the microscopic and macroscopic levels, for which we do have uniform control.
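To illustrate the role of these correction terms, here is their simplest schematic version (the notation $\sigma_v,g_v$ is purely illustrative and does not match the actual construction in \eqref{eq-approxfield}, where the corrections are split between the microscopic and macroscopic levels): one sets
\[\xi_v=\xi^{\rm mac}_v+\xi^{\rm meso}_v+\xi^{\rm mic}_v+\sigma_v g_v,\qquad \sigma_v^2:=\Var\varphi^N_v-\Var\bigl(\xi^{\rm mac}_v+\xi^{\rm meso}_v+\xi^{\rm mic}_v\bigr),\]
where the $g_v$ are standard Gaussians, independent of each other and of the three fields above. This matches the variances exactly, and the point of the construction is to guarantee that the $\sigma_v^2$ are nonnegative and uniformly bounded.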
The main technical step is then to prove matching asymptotics for the right tail of the maximum over the good points (Lemma \ref{l:right_tail}). Here one needs extra care to compare the maximum over the good points with the maximum over all points (to which one can apply Assumption \ref{a:lln}). Having established these asymptotics, Theorem \ref{t:mainthm} follows by a coupling argument similar to the one in \cite{DRZ17}. The details of the construction are provided in Section \ref{sec-3}. \medskip The proof of Theorem \ref{t:percolation_cluster} (which implies Theorems \ref{t:random_conductances} and \ref{t:perc_unit_cond}) consists of checking the hypotheses of Theorem \ref{t:mainthm}. A major tool in doing that is quantitative homogenization theory on the percolation cluster, which we use in the version developed in \cite{AD18,DG21} and review in Section \ref{sec-4.1}. This theory provides us with asymptotics for the (full-space) Green's function (Theorem \ref{t:green_asympt}) as well as estimates for the difference between the solution of a discrete Dirichlet problem and its continuous counterpart (Theorem \ref{t:homog_dirich}). Both these results hold on lengthscales beyond some random lengthscale (and these two random lengthscales will reoccur in $\mathsf{R}^{(1)}$ and $\mathsf{R}^{(2)}$, respectively). While in Theorem \ref{t:homog_dirich} the error term is only estimated in an $L^2$-sense, we can upgrade this to pointwise control by also using the large-scale $C^{0,1}$-regularity of ${\mathbf{a}}$-harmonic functions of \cite{AD18}. Assumptions \ref{a:logbd}, \ref{a:macro}, \ref{a:micro} then follow from these results with some work, relying mainly on the maximum principle with well-chosen comparison functions. Regarding Assumption \ref{a:sparseR}, the results from \cite{AD18,DG21} come with a stretched-exponential tail bound on the random scales. 
This tail bound allows us to control the first moment of the number of points with large $\mathsf{R}^{(1)}$ or $\mathsf{R}^{(2)}$. To establish \ref{a:sparseR}, however, such an annealed estimate is not good enough. Instead we also need an estimate on the second moment of the number of points with large $\mathsf{R}^{(1)}$ or $\mathsf{R}^{(2)}$. That is, we need to quantify that the random scales arising from \cite{AD18,DG21} are decorrelated on large scales. This issue has not been addressed in the previous literature. One way to establish such decorrelation would be to go through \cite{AD18,DG21}, keeping track of the dependence of the various scales there as they arise. Fortunately for us, there is an easier way, and we can explicitly write down local events that approximate the events that $\mathsf{R}^{(1)}$ or $\mathsf{R}^{(2)}$ are large. For Theorem \ref{t:homog_dirich} defining such an event is easy; for Theorem \ref{t:green_asympt} this is trickier (see Lemma \ref{l:green_asympt_improved}). Finally, Assumption \ref{a:lln} follows directly from the translation-invariance of $\mathbf{P}$. All this holds for any $p>p_c=1/2$. Extra work is needed to verify Assumptions \ref{a:logupp} and \ref{a:sparseT}. One part requires us to study the boundary behavior of the Green's function; this is handled by reflections and an a-priori estimate in half-spaces, see Lemma \ref{l:green_asympt_halfspace}. More importantly, we need to check that points with exceptionally high variance (such as points in the bulk and at the end of long ``pipes'' connected to the percolation cluster) are sparse enough so that \ref{a:logupp} and \ref{a:sparseT} still hold. This we can only do if $p$ is sufficiently close to 1.
The argument builds on a combination of isoperimetric control on the percolation cluster, large deviation estimates for the chemical distance on the cluster (going back to \cite{AP96}), a-priori Peierls-like large deviation estimates for the size of the cluster (all recalled in Section \ref{sec-4.3}) and a multiscale argument due to \cite{BK05}, see Lemma \ref{l:tail_bound_green} and its proof. Once these are established, it is routine to verify Assumptions \ref{a:logupp} and \ref{a:sparseT}, and complete the proof of Theorem \ref{t:percolation_cluster}. \subsection{Additional notation and conventions} In all the following, $c$ and $C$ will denote generic constants whose precise value may change from line to line. If we want to emphasize that the value of $C$ depends on other quantities, we add them as an index to $C$. We use $|\cdot|_p$ to denote the $\ell^p$ norm ($p\in [1,\infty]$) on $\mathbb{R}^d$ or $\mathbb{Z}^d$, and use $|\cdot|=|\cdot|_2$. For a set $A$ we denote by $|A|$ its cardinality. \section{Properties of the maximum} We want to deduce estimates on the maximum of $\varphi$ by Gaussian comparison inequalities. To do so, let us introduce the objects that we will use for comparison, following \cite{DRZ17}. Recall \eqref{eq:calW} and define \begin{align} \label{eq:calV} \mathcal{V}_{N,L}(w)&=\left\{V_L(w')\colon w'\in\mathcal{W}_{N,L}(w)\right\}, \quad \tilde\mathcal{V}_{N,L}(w)=\left\{V_L(w')\colon w'\in V_N(w)\right\}. \end{align} Assume that $N=2^n$ is a power of two, and that $w\in\mathbb{Z}^d$. Take a collection of i.i.d. Gaussians $X_{j,Q_j}$ of variance $\log2$, where $j\in\{0,\ldots,n\}$ and $Q_j\in\mathcal{V}_{N,2^j}(w)$. Then the branching random walk $\theta^{N,w}=(\theta^{N,w}_v)_{v\in V_N(w)}$ is defined as \begin{equation}\label{eq-BRW}\theta^{N,w}_v=\sum_{j=0}^n\sum_{\substack{Q_j\in \mathcal{V}_{N,2^j}(w)\\v\in Q_j}}X_{j,Q_j}.\end{equation} Similarly, take a collection of i.i.d.
Gaussians $Y_{j,Q_j}$ of variance $2^{-jd} \log2$, where $j\in\{0,\ldots,n\}$ and $Q_j\in\tilde\mathcal{V}_{N,2^j}(w)$, and define the modified branching random walk $\tilde\theta^{N,w}=(\tilde\theta^{N,w}_v)_{v\in V_N(w)}$ as \begin{equation} \label{eq-MBRW}\tilde\theta^{N,w}_v=\sum_{j=0}^n\sum_{\substack{Q_j\in \tilde\mathcal{V}_{N,2^j}(w)\\v\in Q_j+N\mathbb{Z}^d}}Y_{j,Q_j}.\end{equation} It is well known (see e.g. \cite[Lemma 2.3]{DRZ17}) that for the BRW we have the estimates \begin{align*} \left|\mathbb{E}(\theta^{N,w}_v)^2-\log N\right|&\le C,\\ \mathbb{E}\theta^{N,w}_v\theta^{N,w}_u-\log N+\log_+|u-v|&\le C, \end{align*} and that for the MBRW we have \[\left|\mathbb{E}\tilde\theta^{N,w}_v\tilde\theta^{N,w}_u-\log N+\log_+|u-v|_{\sim,N}\right|\le C,\] where $|x|_{\sim,N}=\inf_{z\in N\mathbb{Z}^d}|x+z|$ is the quotient norm. This implies in particular that for any $\delta<\frac12$ and any $u,v\in V_N^\delta(w)$ we have \begin{equation}\label{e:log_corr_MBRW} \left|\mathbb{E}\tilde\theta^{N,w}_v\tilde\theta^{N,w}_u-\log N+\log_+|u-v|\right|\le C_\delta. \end{equation} \subsection{Tail bounds on the ``bad set''} \label{subsec-badset} For the study of the maximum we will need to argue that the maximum of $\varphi^{N}$ is unlikely to occur at points that are exceptional in the sense that e.g. $\mathsf{T}_\cdot$ is large, $\mathsf{R}^{(1)}_\cdot$ is large, or the points are too close to the boundary of $V_N$. Thus, in this subsection we give upper bounds on the maximum of $\varphi^{N}$ on such exceptional sets. The main technical result is the following.
\begin{lemma}\label{l:upper_right_tail_badset} Under Assumptions \ref{a:logupp} and \ref{a:sparseT} there are $C,c>0$ such that for $J,N\in\mathbb{N}$ with $J\le\frac{N}{2}$, $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{J}\right\rfloor}$, $z\in\mathbb{R}$, $T$ sufficiently large, and $A\subset V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')$, \begin{equation}\label{e:upper_right_tail_badset} \mathbb{P}^{N}\bigg(\max_{\substack{v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w'):\\\mathsf{T}_v\ge T\textrm{, or }v\in A}}\varphi^{N}_v\ge m_N+z \bigg)\le C\!\Big({\mathrm{e}}^{dT}\frac{|A|}{N^d}\Big(1+T+\log\Big(\frac{N^d}{|A|}\Big)\Big)^{19/8}+{\mathrm{e}}^{-cT}\!\Big)(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}. \end{equation} \end{lemma} \noindent In particular, choosing $J=1$ and $w'=0$ yields an estimate on all of $V_N$. To see why this estimate is useful, note that, for example, under assumptions \ref{a:logbd} and \ref{a:sparseR} the set $A_{N,R,\delta}$ of points $v$ in $V_N$ where $\mathsf{d}(v,\partial^+ V_N)\le \delta N$ or $\mathsf{R}^{(1)}_v\ge R$ satisfies \[\lim_{\substack{\delta\to0\\ R\to\infty}}\frac{|A_{N,R,\delta}|}{N^d}=0\] (where here and in the following we write $\limsup_{\substack{\delta\to0\\ R\to\infty}}$ to denote that we take $R\to\infty$ and $\delta\to0$ while the order of the limits does not matter). So Lemma \ref{l:upper_right_tail_badset} (with $J=1$) implies that under assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:sparseT} and \ref{a:sparseR} we have \[\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\frac{{\mathrm{e}}^{\sqrt{2d}z}}{z\vee 1}\mathbb{P}^{N}\bigg(\max_{\substack{v\in V_N\\\mathsf{T}_v\ge T\textrm{, or }\mathsf{R}^{(1)}_v\ge R\text{, or }\mathsf{d}(v,\partial^+ V_N)\le \delta N}}\varphi^{N}_v\ge m_N+z\bigg)=0.\] That is, the maximum of $\varphi^N_v$ over the exceptional points in $V_N$ is indeed unlikely to be large, with a quantitative estimate on the right tail.
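For the reader's convenience, let us spell out how the two terms of \eqref{e:upper_right_tail_badset} are handled in the last display: for each fixed $T$ we have the elementary limit
\[\lim_{x\downarrow0}\,{\mathrm{e}}^{dT}x\Big(1+T+\log\frac{1}{x}\Big)^{19/8}=0,\]
which, applied with $x={|A_{N,R,\delta}|}/{N^d}$, shows that the first term vanishes as $\delta\to0$ and $R\to\infty$; the remaining term $C{\mathrm{e}}^{-cT}$ then vanishes as $T\to\infty$.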
We will use this and similar estimates repeatedly in the following. Combined with a lower bound on the maximum (that we will show in the next section), this estimate allows us to conclude that the maximum is unlikely to occur at an exceptional point. We will later also need a tail bound for the maximum of the fields on the microscopic cubes $V_{J'}(w'')$ contained in $V_{\left\lfloor{N}/{J}\right\rfloor}(w')$. Of course, such an estimate cannot hold for each microscopic cube, as we have no control over $\mathsf{R}^{(1)}$ and $\mathsf{T}$ in all cubes. However, the estimate does hold when averaging over all cubes. \begin{lemma}\label{l:upper_right_tail_badset_micro} Under Assumptions \ref{a:logupp} and \ref{a:sparseT} there are $C,c>0$ such that for $J,J',N\in\mathbb{N}$ with $J\le{N}/{2}$, $J'\le{N}/({2J})$, $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, $z\in\mathbb{R}$, $T$ sufficiently large, and $A\subset V_{\left\lfloor{N}/{J}\right\rfloor}(w')$ we have \begin{equation}\label{e:upper_right_tail_badset_micro} \begin{split} &\sum_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')} \mathbb{P}^{J',w''}\bigg(\max_{\substack{v\in V_{J'}(w''):\\\mathsf{T}_v\ge T\textrm{, or }v\in A}}\varphi^{J',w''}_v\ge m_{J'}+z\bigg)\\ &\quad\le C\left(\frac{N}{JJ'}\right)^d\left({\mathrm{e}}^{dT}\frac{|A|}{N^d}\left(1+T+\log\left(\frac{N^d}{|A|}\right)\right)^{19/8}+{\mathrm{e}}^{-cT}\right)(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}. \end{split} \end{equation} \end{lemma} The main step in the proofs of Lemma \ref{l:upper_right_tail_badset} and Lemma \ref{l:upper_right_tail_badset_micro} is an estimate on the right tail of the maximum on subsets where $\mathsf{T}_\cdot$ is bounded. The same argument implies an estimate for subsets where $\mathsf{R}^{(1)}_\cdot$ is bounded, and the following lemma collects both these estimates. \begin{lemma}\label{l:upper_right_tail_sparse} Under Assumption \ref{a:logupp} let $N\ge 2$, $w\in\mathbb{Z}^d$ and suppose that $A\subset V_N(w)$.
Let $T=\max_{v\in A}\mathsf{T}_v$. Then for any $z\in\mathbb{R}$, \begin{equation}\label{e:upper_right_tail_sparse} \mathbb{P}^{N,w}\left(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\right)\le C{\mathrm{e}}^{dT}\frac{|A|}{N^d}\left(1+T+\log\left(\frac{N^d}{|A|}\right)\right)^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}. \end{equation} \end{lemma} This estimate should be optimal (except possibly regarding the exponent of the logarithmic correction term). Similar estimates can be found in various places in the literature, e.g. in \cite[Proposition 1.1]{DRZ17} (but with exponent $1/2$ instead of $1$ for $\frac{|A|}{N^d}$), or in \cite[Lemma 10.4]{B20} (but only for the case of the Gaussian free field). We need the almost optimal dependence on $\frac{|A|}{N^d}$, as it is directly related to the exponents admissible in \ref{a:sparseT}. For the proof we proceed similarly to \cite{DRZ17}, and use Gaussian comparison inequalities to reduce the problem to an estimate for the BRW, for which explicit calculations are possible. \begin{proof} Note that by Assumption \ref{a:logupp} we have for any $u,v\in A$ that \begin{align} \Var\varphi^{N,w}_v&\le\log N+T\label{e:upper_right_tail_sparse_5},\\ \Var\varphi^{N,w}_v-\mathbb{E}\varphi^{N,w}_v\varphi^{N,w}_u&\le \log_+|u-v|+2T.\label{e:upper_right_tail_sparse_6} \end{align} Let $N'=2^{n'}\ge N$ be an integer to be chosen shortly. We can define an upscaling map $\Psi^{w,w'}_{N,N'}\colon V_N(w)\to V_{N'}(w')$ by \begin{equation}\label{e:upscaling} \Psi^{w,w'}_{N,N'}(v)=w'+\left\lfloor\frac{N'}{N}\right\rfloor(v-w). \end{equation} Proceeding as in the proof of \cite[Lemma 2.5]{DRZ17}, we see that there is $N'=2^{n'}$ such that $\log N'-\log N\le T+C$ and such that \[\mathbb{P}^{N,w}\left(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\right)\le 2\mathbb{P}^{N',w}\left(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_N+z\right),\] for any $z\in\mathbb{R}$\footnote{In \cite[Lemma 2.5]{DRZ17} $N'$ is not a power of 2, but $\frac{N'}{N}$ is.
This does not matter for the argument.}. This reduces the proof to the study of $\theta^{N',w}$. As in \cite[Section 3.4]{BDZ16} we can associate to $\theta^{N',w}$ a $2^d$-ary branching Brownian motion such that $\theta^{N',w}_v$ is the value at time $n'$ of the branch corresponding to $v$. Let $a$ be the smallest integer such that $|A|\le2^{da}$. We claim that \begin{equation}\label{e:upper_right_tail_sparse_1} \mathbb{P}^{N',w}\left(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_{N'}+z\right)\le \frac{C}{2^{d(n'-a)}}(1+(n'-a))^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}. \end{equation} Once we have shown this, the lemma easily follows. Indeed, we have that $m_{N'}\le m_N+\sqrt{2d}(T+C)$ and so \eqref{e:upper_right_tail_sparse_1} implies that \begin{align*} &\mathbb{P}^{N,w}\left(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\right)\le 2\mathbb{P}^{N',w}\left(\max_{v\in\Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_N+z\right)\\ &\le\frac{C}{2^{d(n'-a)}}(1+(n'-a))^{19/8}((z-(m_{N'}-m_N))\vee 1){\mathrm{e}}^{-\sqrt{2d}(z-(m_{N'}-m_N))}\\ &\le\frac{C|A|}{N'^d}\left(1+\log\left(\frac{N'^d}{|A|}\right)\right)^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}{\mathrm{e}}^{2dT}, \end{align*} which implies \eqref{e:upper_right_tail_sparse}. It remains to prove \eqref{e:upper_right_tail_sparse_1}. We can assume that $a\ge1$. As $2^a\le2N$, while $2^{n'}\ge2^9N$, we know that $n'>a+8$. For later use we note that for $1\le a\le n'-8$ we have \begin{equation}\label{e:upper_right_tail_sparse_3} \frac{\log n'-\log a}{n'-a}\le\frac{\log8}{7}. \end{equation} If $z\le-\sqrt{\frac{d}{2}}(n'-a)\log2$ then the right-hand side of \eqref{e:upper_right_tail_sparse_1} is at least one (if $C$ is large enough), and so we can assume that $z>-\sqrt{\frac{d}{2}}(n'-a)\log2$.
Let $\beta=z+\sqrt{\frac{d}{2}}(n'-a)\log2>0$, and consider the event \[G(\beta)=\bigcup_{v\in \Psi^{w,w}_{N,N'}(A)}\bigcup_{0\le t\le a}\left\{\theta^{N',w}_v(t)\ge\beta+1+\frac{t m_{2^{a}}}{a}+10\log_+(t\wedge(a-t))\right\}.\] By \cite[Lemma 3.7]{BDZ16} (or rather its obvious extension to $d$ dimensions), we have that \begin{align} \mathbb{P}(G(\beta))&\le C(\beta\vee1){\mathrm{e}}^{-\sqrt{2d}\beta} \le C\left((z\vee 1)+\frac{\sqrt{d}(n'-a)\log2}{\sqrt{2}}\right){\mathrm{e}}^{-\sqrt{2d}z}\frac{1}{2^{d(n'-a)}}\nonumber\\ &\le C(z\vee 1)(1+(n'-a)){\mathrm{e}}^{-\sqrt{2d}z}\frac{1}{2^{d(n'-a)}}, \label{eq-190222} \end{align} which can be absorbed into the right side of \eqref{e:upper_right_tail_sparse_1}. So it remains to consider the case that $G(\beta)$ does not occur. We have that \begin{equation}\label{e:upper_right_tail_sparse_2} \begin{split} &\mathbb{P}^{N',w}\left(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_{N'}+z, G(\beta)^\complement\right) \\ &\leq \sum_{v\in \Psi^{w,w}_{N,N'}(A)}\mathbb{P}^{N',w}\Big(\theta^{N',w}_v(n')\ge m_{N'}+z, \\ &\qquad \qquad \qquad \qquad\qquad \theta^{N',w}_v(j)\le\beta+1+\frac{j m_{2^{a}}}{a}+10\log_+(j\wedge(a-j))\ \forall j\in\{1,\ldots,a\}\Big)\,. \end{split} \end{equation} Denote by $\chi$ the density of \[\mathbb{P}^{N',w}\left(\theta^{N',w}_v(j)\le\beta+1+\frac{jm_{2^{a}}}{a}+10\log_+(j\wedge(a-j))\ \forall j\in\{1,\ldots,a\},\theta^{N',w}_v(a)-m_{2^{a}}\in\cdot\right)\] with respect to one-dimensional Lebesgue measure. A calculation similar to those in the proofs of \cite[Lemma 3.7]{BDZ16} and \cite[Lemma 2.4]{BDZ16b} shows that \begin{equation}\label{e:upper_right_tail_sparse_4} \begin{split} \chi(x)&\le\frac{C}{2^{da}}(\beta+1)(\beta+1-x)\exp\left(-\left(\sqrt{2d}-C'\frac{\log a}{a}\right)x-\frac{x^2}{2(\log 2)a}\right)\\ &\le\frac{C}{2^{da}}(\beta+1)(\beta+1-x){\mathrm{e}}^{-\sqrt{2d}x}.
\end{split} \end{equation} We can now rewrite \eqref{e:upper_right_tail_sparse_2} by conditioning on the value of $\theta^{N',w}_v(a)$, and using that $\theta^{N',w}_v(n')-\theta^{N',w}_v(a)$ is Gaussian with variance $(n'-a)\log2$ and independent of $\theta^{N',w}_v$ for times less than $a$. We obtain \begin{align*} &\mathbb{P}^{N',w}\bigg(\max_{v\in\Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_{N'}+z, G(\beta)^\complement\bigg)\\ &\le\sum_{v\in \Psi^{w,w}_{N,N'}(A)}\int_0^\infty\chi(\beta+1-x)\mathbb{P}^{N',w}\left(\theta^{N',w}_v(n')-\theta^{N',w}_v(a)\ge m_{N'}+z-(m_{2^{a}}+\beta+1-x)\right)\,\mathrm{d} x. \end{align*} Note that \begin{align*} m_{N'}-m_{2^{a}}-\beta+z&=\sqrt{2d}(n'-a)\log2-\frac{3}{2\sqrt{2d}}(\log n'-\log a)-\sqrt{\frac{d}{2}}(n'-a)\log2\\ &=\sqrt{\frac{d}{2}}(n'-a)\log2-\frac{3}{2\sqrt{2d}}(\log n'-\log a), \end{align*} and that, using \eqref{e:upper_right_tail_sparse_3}, this last quantity is bounded below by 1 for every $d\ge1$. In particular, $m_{N'}+z-(m_{2^{a}}+\beta+1-x)\ge x>0$ for any $x>0$.
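To see the lower bound by 1 explicitly, note that \eqref{e:upper_right_tail_sparse_3} and $n'-a\ge8$ give
\[\sqrt{\frac{d}{2}}(n'-a)\log2-\frac{3}{2\sqrt{2d}}(\log n'-\log a)\ge(n'-a)\left(\sqrt{\frac{d}{2}}\log2-\frac{3\log8}{14\sqrt{2d}}\right)\ge8\left(\sqrt{\tfrac12}\log2-\frac{3\log8}{14\sqrt{2}}\right)>1,\]
since the expression in parentheses is increasing in $d$ and already positive (numerically $\approx0.175$) at $d=1$.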
Applying now \eqref{e:upper_right_tail_sparse_4} and standard Gaussian tail estimates, we see that \begin{align*} &\mathbb{P}^{N',w}\bigg(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_{N'}+z, G(\beta)^\complement\bigg)\\ &\le2^{da}\int_0^\infty\frac{C}{2^{da}}(\beta+1)x\exp\left(-\sqrt{2d}(\beta+1-x)\right)\\ &\qquad\qquad \qquad \times \frac{\sqrt{(n'-a)\log2}}{m_{N'}+z-(m_{2^{a}}+\beta+1-x)}\exp\left(-\frac{(m_{N'}+z-(m_{2^{a}}+\beta+1-x))^2}{2(n'-a)\log2}\right)\,\mathrm{d} x\\ &\le \int_0^\infty \frac{C\sqrt{n'-a}((n'-a)+z+1)x}{x} \exp\Big(-\sqrt{2d}\big(z+\sqrt{\frac{d}{2}}(n'-a)\log2+1-x\big)-\\ &\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad \qquad \frac{\big(\sqrt{\frac{d}{2}}(n'-a)\log2-\frac{3}{2\sqrt{2d}}(\log n'-\log a)-1+x\big)^2}{2(n'-a)\log2}\Big)\,\mathrm{d} x\\ &\le C(1\vee z)(n'-a)^{3/2}{\mathrm{e}}^{-\sqrt{2d}z}\\ &\qquad\times\int_0^\infty\exp\Big(\sqrt{\frac{d}{2}}x-\frac{5d}{4}(n'-a)\log2+\frac{3(\log n'-\log a)}{4\sqrt{2d}(n'-a)\log2}x-\frac{x^2}{2(n'-a)\log2}\Big)\,\mathrm{d} x, \end{align*} where we have absorbed various bounded error terms into the constant $C$. If we replace the integral over $[0,\infty)$ with one over $\mathbb{R}$, we recognize the integral of a Gaussian function. We find, after some computation, \begin{align*} &\mathbb{P}^{N',w}\bigg(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_{N'}+z, G(\beta)^\complement\bigg)\\ &\le C(1\vee z)(n'-a)^2{\mathrm{e}}^{-\sqrt{2d}z}\exp\left(-d(n'-a)\log2+\frac38(\log n'-\log a)\right)\\ &\le\frac{C}{2^{d(n'-a)}}(1\vee z)(n'-a)^{19/8}{\mathrm{e}}^{-\sqrt{2d}z}, \end{align*} where we used ${n'}/{a}\le2(n'-a)$ in the last step. Together with \eqref{eq-190222}, this completes the proof of \eqref{e:upper_right_tail_sparse_1} and therefore yields \eqref{e:upper_right_tail_sparse}. \end{proof} As a consequence of Lemma \ref{l:upper_right_tail_sparse}, we can give an estimate of the right tail of the maximum on those points where $\mathsf{T}_\cdot$ is large.
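For the reader's convenience, we sketch the Gaussian computation behind that step. Writing $\sigma^2=(n'-a)\log2$ and $b=\sqrt{\frac{d}{2}}+\frac{3(\log n'-\log a)}{4\sqrt{2d}\,\sigma^2}$, completing the square gives
\[\int_{\mathbb{R}}\exp\Big(bx-\frac{x^2}{2\sigma^2}\Big)\,\mathrm{d} x=\sqrt{2\pi\sigma^2}\,{\mathrm{e}}^{b^2\sigma^2/2},\qquad \frac{b^2\sigma^2}{2}=\frac{d}{4}\sigma^2+\frac38(\log n'-\log a)+O\Big(\frac{(\log n')^2}{n'-a}\Big),\]
so the prefactor $\sqrt{2\pi\sigma^2}\le C\sqrt{n'-a}$ accounts for the extra factor of $(n'-a)^{1/2}$, while $-\frac{5d}{4}\sigma^2+\frac{d}{4}\sigma^2=-d(n'-a)\log2$ produces the exponential decay in the bound.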
\begin{lemma}\label{l:tail_max_bad_points} Under Assumptions \ref{a:logupp} and \ref{a:sparseT} there are $C,c>0$ such that for $J,N\in\mathbb{N}$ with $J\le\frac{N}{2}$, any $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{J}\right\rfloor}$, $z\in\mathbb{R}$, and any $T$ sufficiently large \begin{equation}\label{e:tail_max_bad_points} \mathbb{P}^{N}\bigg(\max_{\substack{v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')\\\mathsf{T}_v\ge T}}\varphi^{N}_v\ge m_N+z\bigg)\le C(z\vee1){\mathrm{e}}^{-\sqrt{2d}z-cT}. \end{equation} \end{lemma} \begin{proof} We can apply a union bound to estimate \[ \mathbb{P}^{N}\bigg(\max_{\substack{v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')\\\mathsf{T}_v\ge T}}\varphi^{N}_v\ge m_N+z\bigg) \le\sum_{t=T}^\infty\mathbb{P}^{N}\bigg(\max_{\substack{v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')\\t\le \mathsf{T}_v<t+1}}\varphi^{N}_v\ge m_N+z\bigg). \] By Assumption \ref{a:sparseT}, there is $\varepsilon>0$ such that for $t$ sufficiently large the set of points $v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')$ where $t\le \mathsf{T}_v<t+1$ has cardinality at most $C\left(\frac{N}{J}\right)^d{\mathrm{e}}^{-(d+\varepsilon)t}$. So an application of Lemma \ref{l:upper_right_tail_sparse} implies for $T$ sufficiently large that \begin{align*} \mathbb{P}^{N}\bigg(\max_{\substack{v\in V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')\\\mathsf{T}_v\geq T}}\varphi^{N}_v\ge m_N+z\bigg) &\le C\sum_{t=T}^\infty\frac{{\mathrm{e}}^{d(t+1)}}{{\mathrm{e}}^{(d+\varepsilon)t}}\left(1+t+(d+\varepsilon)t\right)^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}\\ &\le C_\varepsilon(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}\sum_{t=T}^\infty {\mathrm{e}}^{-\varepsilon t/2}. \end{align*} The lemma follows. \end{proof} Lemma \ref{l:upper_right_tail_badset} is now a straightforward consequence of Lemma \ref{l:upper_right_tail_sparse} and Lemma \ref{l:tail_max_bad_points}.
\begin{proof}[Proof of Lemma \ref{l:upper_right_tail_badset}] First use Lemma \ref{l:tail_max_bad_points} to estimate the maximum over the points $v$ where $\mathsf{T}_v\ge T$, and then use Lemma \ref{l:upper_right_tail_sparse} to estimate the maximum over the points $v\in A$ where $\mathsf{T}_v\le T$. \end{proof} For the proof of Lemma \ref{l:upper_right_tail_badset_micro} we need a version of Lemma \ref{l:tail_max_bad_points} where one takes the average over microscopic cubes. \begin{lemma}\label{l:tail_max_bad_points_micro} Under Assumptions \ref{a:logupp} and \ref{a:sparseT} there are $C,c>0$ such that for any $J,J',N\in\mathbb{N}$ with $J\le\frac{N}{2}$, any $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{J}\right\rfloor}$, $z\in\mathbb{R}$, and any $T$ sufficiently large, \begin{equation}\label{e:tail_max_bad_points_micro} \sum_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')} \mathbb{P}^{J',w''}\bigg(\max_{\substack{v\in V_{J'}(w'')\\\mathsf{T}_v\ge T}}\varphi^{J',w''}_v\ge m_{J'}+z\bigg)\le C\left(\frac{N}{JJ'}\right)^d(z\vee1){\mathrm{e}}^{-\sqrt{2d}z-cT}.
\end{equation} \end{lemma} \begin{proof} As in the proof of Lemma \ref{l:tail_max_bad_points} we can apply a union bound and then Lemma \ref{l:upper_right_tail_sparse} to obtain that the left-hand side of \eqref{e:tail_max_bad_points_micro} is bounded above by \begin{equation}\label{e:tail_max_bad_points_micro1} \begin{split} &\sum_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')}\sum_{t=T}^\infty\mathbb{P}^{J',w''}\bigg(\max_{\substack{v\in V_{J'}(w'')\\t\le \mathsf{T}_v<t+1}}\varphi^{J',w''}_v\ge m_{J'}+z\bigg)\\ &\quad\le C\sum_{t=T}^\infty\sum_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')} {\mathrm{e}}^{d(t+1)}\frac{|A_{t,w''}|}{J'^d}\Big(1+t+\log\big(\frac{J'^d}{|A_{t,w''}|}\big)\Big)^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z}, \end{split} \end{equation} where \[A_{t,w''}=\left\{v\in V_{J'}(w'')\colon t\le \mathsf{T}_v<t+1\right\}.\] We have \[\sum_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')}\frac{|A_{t,w''}|}{J'^d}\le C\left(\frac{N}{JJ'}\right)^d{\mathrm{e}}^{-(d+\varepsilon)t}\] by Assumption \ref{a:sparseT}, and trivially $0\le{|A_{t,w''}|}/{J'^d}\le1$ for any $w''$. Now we can apply Jensen's inequality to the function $x\mapsto x(1+t-\log x)^{19/8}$ (which is concave on the interval $(0,1]$ for any $t\ge1$) on the right-hand side of \eqref{e:tail_max_bad_points_micro1} to see that the left-hand side of \eqref{e:tail_max_bad_points_micro} is bounded above by \[ C\left(\frac{N}{JJ'}\right)^d\sum_{t=T}^\infty\frac{{\mathrm{e}}^{d(t+1)}}{{\mathrm{e}}^{(d+\varepsilon)t}}\left(1+t+(d+\varepsilon)t\right)^{19/8}(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z},\] which immediately implies the result.
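The concavity used in Jensen's step can be checked directly. Writing $f(x)=x\,u(x)^{19/8}$ with $u(x)=1+t+\log\frac1x$ (which is the quantity appearing in the display, with $x={|A_{t,w''}|}/{J'^d}$), a short computation gives
\[f''(x)=-\frac{19}{8x}\,u(x)^{3/8}\Big(u(x)-\frac{11}{8}\Big)\le0\qquad\text{on }(0,1],\]
since $u(x)\ge1+t\ge2>\frac{11}{8}$ there.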
\end{proof} \begin{proof}[Proof of Lemma \ref{l:upper_right_tail_badset_micro}] As in the proof of Lemma \ref{l:upper_right_tail_badset}, we first use Lemma \ref{l:tail_max_bad_points_micro} to estimate the maximum over the points $v$ where $\mathsf{T}_v\ge T$, and then use Lemma \ref{l:upper_right_tail_sparse} (combined with another application of Jensen's inequality) to estimate the maximum over the points $v\in A$ where $\mathsf{T}_v\le T$. \end{proof} \subsection{Further bounds on the tail of the maximum} \label{subsec-further} Lemma \ref{l:upper_right_tail_badset} allows us to show that with probability tending to 1 as $R\to\infty$ and $T\to\infty$, the maximum of $\varphi^{N,w_N}$ does not occur at points $v$ where $\mathsf{T}_v\ge T$ or $\mathsf{R}^{(1)}_v\ge R$. Thus, for later arguments we will be able to restrict attention to those points where $\mathsf{T}_v\le T$ and $\mathsf{R}^{(1)}_v\le R$ for some $R$ and $T$, taking the limits $R\to\infty$ and $T\to\infty$ at the very end. With this in mind, we next state and prove some estimates on the maximum restricted to the points where $\mathsf{T}_v\le T$ or $\mathsf{R}^{(1)}_v\le R$. For reasons that will become clear later, we need these estimates not just on $V_N$, but on all the macroscopic subcubes in $\mathcal{V}_{N,\left\lfloor{N}/{J}\right\rfloor}$. We begin with an upper bound on the right tail, which is just a small variant of Lemma \ref{l:upper_right_tail_sparse}. This time we do not care about the precise dependence on $T$, but need to keep an additional factor $\exp(-z^2/(C\log N))$ on the right-hand side. \begin{lemma}\label{l:upper_right_tail} Let $N\in\mathbb{N}$, $w\in\mathbb{Z}^d$ and suppose that $A\subset V_N(w)$. Under Assumption \ref{a:logupp} let $T=\max_{v\in A}\mathsf{T}_v$.
Then for any $z\in\mathbb{R}$ we have \begin{equation}\label{e:upper_right_tail} \mathbb{P}^{N,w}\Big(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\Big)\le C(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z-\frac{z^2}{C(\log N+T\vee1)}+C(T\vee1)}. \end{equation} If instead of \ref{a:logupp} we have \ref{a:logbd}, then for $\delta>0$ and $R=\max_{v\in A}\mathsf{R}^{(1)}_v$ we have \begin{equation}\label{e:upper_right_tail_R} \mathbb{P}^{N,w}\Big(\max_{v\in A\cap V_N^\delta(w)}\varphi^{N,w}_v\ge m_N+z\Big)\le C(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z-\frac{z^2}{C(\log N+\alpha_{\delta}(R)\vee1)}+C(\alpha_{\delta}(R)\vee1)}. \end{equation} \end{lemma} Note that in the lemma $N$ and $w$ are again arbitrary. So we can apply the lemma to $V_{\left\lfloor{N}/{J}\right\rfloor}(w')$ and some subset $A$ of it to conclude an upper bound on the maximum over that (macroscopic) subbox of $V_N$. The analogous remark holds for Lemmas \ref{l:lower_right_tail} and \ref{l:upper_left_tail} below. \begin{proof} We begin with the proof of \eqref{e:upper_right_tail}. As in the proof of Lemma \ref{l:upper_right_tail_sparse} we find an integer $N'=2^{n'}$ such that $T-C\le\log N'-\log N\le T+C$ and such that \[\mathbb{P}^{N,w}\Big(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\Big)\le 2\mathbb{P}^{N',w}\Big(\max_{v\in \Psi^{w,w}_{N,N'}(A)}\theta^{N',w}_v\ge m_N+z\Big).\] Obviously $\Psi^{w,w}_{N,N'}(A)\subset V_{N'}(w)$ and so we see that \begin{equation}\label{e:upper_right_tail1} \mathbb{P}^{N,w}\Big(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\Big)\le2\mathbb{P}^{N',w}\Big(\max_{v\in V_{N'}(w)}\theta^{N',w}_v\ge m_N+z\Big), \end{equation} and so we only need a right tail bound for the maximum of a BRW.
Such an estimate can be shown as in the two-dimensional case in \cite[Lemma 3.8]{BDZ16}, and we find \[\mathbb{P}^{N',w}\Big(\max_{v\in V_{N'}(w)}\theta^{N',w}_v\ge m_{N'}+z\Big)\le C(z\vee 1){\mathrm{e}}^{-\sqrt{2d}z-\frac{z^2}{C\log N'}}.\] Combining this with \eqref{e:upper_right_tail1} and using $m_{N'}\le m_N+\sqrt{2d}T+C$ we can now calculate that \begin{align*} &\mathbb{P}^{N,w}\Big(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\Big) \le 2\mathbb{P}^{N',w}\Big(\max_{v\in V_{N'}(w)}\theta^{N',w}_v\ge m_{N'}+z-\sqrt{2d}T-C\Big)\\ &\le C((z-\sqrt{2d}T-C)\vee 1)\exp\Big(-\sqrt{2d}(z-\sqrt{2d}T-C)-\frac{(z-\sqrt{2d}T-C)^2}{C\log N'}\Big)\\ &\le C(z\vee 1)\exp\Big(-\sqrt{2d}z-\frac{z^2}{C\log N'}+C\big(1+T+\frac{T^2}{\log N'}\big)\Big), \end{align*} and \eqref{e:upper_right_tail} follows when taking into account that $T-C\le\log N'-\log N\le T+C$. The proof of \eqref{e:upper_right_tail_R} is analogous. In fact, we used \ref{a:logupp} only via the bounds \eqref{e:upper_right_tail_sparse_5} and \eqref{e:upper_right_tail_sparse_6}, and under Assumption \ref{a:logbd} those bounds hold on $A\cap V_N^\delta(w)$ with $\alpha_\delta(R)$ in place of $T$. \end{proof} These upper bounds on the right tail were straightforward. We state next the complementary lower bound on the right tail, which will be slightly more difficult to show. The difference is that for the upper bound it is only helpful to consider the maximum over a subset, while for the lower bound this might be harmful. However, we will assume that ${|A|}/{N^d}$ is large enough. By the pigeonhole principle we can then find subcubes of $V_N(w)$ which contain many points of $A$, and this is enough to proceed with a Gaussian comparison argument as in \cite{DRZ17}. \begin{lemma}\label{l:lower_right_tail} Under Assumption \ref{a:logbd} there are constants $\gamma$, $c$ and $C$ (depending on $d$ only) with the following property: Let $N\in\mathbb{N}$, $w\in\mathbb{Z}^d$ and $A\subset V_N(w)$ with $|A|\ge(1-2^{-(d+1)})N^d$.
Let $R=\max_{v\in A}\mathsf{R}^{(1)}_v$ and suppose that $N\ge\gamma\exp\left(\alpha_{(d2^{d+3})^{-1}}(R)\right)$. Then for any $z\in[1,\sqrt{\log N}-C(\alpha_{(d2^{d+3})^{-1}}(R)\vee 1)]$ we have \begin{equation}\label{e:lower_right_tail} \mathbb{P}^{N,w}\Big(\max_{v\in A}\varphi^{N,w}_v\ge m_N+z\Big)\ge cz{\mathrm{e}}^{-\sqrt{2d}z-C\alpha_{(d2^{d+3})^{-1}}(R)}. \end{equation} \end{lemma} \begin{proof} As the function $\alpha_{\delta}$ is increasing, we know that $\alpha_{\delta}(\mathsf{R}^{(1)}_v)\le\alpha_{\delta}(R)$ for any $v\in A$. Fix $\delta={1}/{(d2^{d+3})}$. For any $u,v\in A\cap V_N^\delta(w)$ we know that \begin{equation}\label{e:lower_right_tail1} \left|\mathbb{E}\varphi^{N,w}_v\varphi^{N,w}_u-\log N+\log_+|u-v|\right|\le \alpha_{\delta}(R).\end{equation} By a comparison argument similar to that in Lemma \ref{l:upper_right_tail} (see \cite[Lemma 2.6]{DZ14} for details), there is a constant $C_0$ (depending on $d$ only) with the following property: there is $N'=2^{n'}$ with $\log N-\log N'\le \alpha_{\delta}(R)+C_0$, such that if $w'\in V_N(w)$ and $\Psi^{w,w'}_{N,N'}$ is the upscaling map from the proof of Lemma \ref{l:upper_right_tail_sparse}, then \begin{equation}\label{e:lower_right_tail2} \mathbb{P}^{N,w}\Big(\max_{v\in \Psi^{w,w'}_{N,N'}(A\cap V^{2\delta}_{N'}(w'))}\varphi^{N,w}_v\ge m_N+z\Big)\ge \frac12\mathbb{P}^{N',w'}\Big(\max_{v\in A\cap V^{2\delta}_{N'}(w')}\tilde\theta^{N',w'}_v\ge m_N+z\Big). \end{equation} Here we used that $\Psi^{w,w'}_{N,N'}(V^{2\delta}_{N'}(w'))\subset V^\delta_N(w)$, and so \eqref{e:lower_right_tail1} applies to any $v\in\Psi^{w,w'}_{N,N'}(A\cap V^{2\delta}_{N'}(w'))$. Choosing $\gamma\ge{\mathrm{e}}^{C_0}$ ensures that we find such an $N'$ with $N'\ge1$, for which \eqref{e:lower_right_tail2} holds. We want to choose $w'$ such that the right-hand side of \eqref{e:lower_right_tail2} is as large as possible.
For that purpose note that \[\Big|A\cap\bigcup_{w'\in\mathcal{W}_{N,N'}(w)}V_{N'}(w')\Big|\ge\Big(1-\frac{1}{2^{d+1}}\Big)N^d+\Big(N'\Big\lfloor\frac{N}{N'}\Big\rfloor\Big)^d-N^d\ge \frac{N^d}{2^{d+1}},\] while $|\mathcal{W}_{N,N'}(w)|=\left\lfloor\frac{N}{N'}\right\rfloor^d\le\frac{N^d}{N'^d}$. Thus, by the pigeonhole principle there is some $w'$ such that \[\left|A\cap V_{N'}(w')\right|\ge\frac{1}{2^{d+1}}(N')^d.\] Our choice of $\delta={1}/{(d2^{d+3})}$ ensures that $V_{N'}^{2\delta}(w')$ contains at least $\big(1-\frac{1}{2^{d+2}}\big)(N')^d$ points for any $N'\ge1$. Thus \[\left|A\cap V_{N'}^{2\delta}(w')\right|\ge\frac{1}{2^{d+1}}(N')^d-\frac{1}{2^{d+2}}(N')^d=\frac{1}{2^{d+2}}(N')^d.\] Let $A':=A\cap V_{N'}^{2\delta}(w')$. We have just seen that $|A'|\ge(N')^d/2^{d+2}$, and because of \eqref{e:lower_right_tail2} we only need to give a lower bound for the right tail of $\max_{v\in A'}\tilde\theta^{N',w'}_v$. To be precise, we claim that for any $y\in[1,\sqrt{\log N'}]$, \begin{equation}\label{e:lower_right_tail3} \mathbb{P}^{N',w'}\left(\max_{v\in A'}\tilde\theta^{N',w'}_v\ge m_{N'}+y\right)\ge cy{\mathrm{e}}^{-\sqrt{2d}y}. \end{equation} If instead of $\max_{v\in A'}$ we had $\max_{v\in V_{N'}(w')}$, this would be analogous to the proof of \cite[Lemma 3.7]{DZ14}. Our $A'$ contains a positive fraction of the points in $V_{N'}(w')$, and so we can hope that \eqref{e:lower_right_tail3} still holds. Indeed, the proof of \cite[Lemma 3.7]{DZ14} uses a second-moment method, and for that purpose considering fewer events only helps, as long as the first moment is not reduced too much. The details follow. As in \cite[Section 4.2]{BDZ16}, we associate to $\tilde\theta^{N',w'}$ a family of Brownian motions $\tilde\theta^{N',w'}(t)$ such that $\tilde\theta^{N',w'}_v=\tilde\theta^{N',w'}_v(n')$ and $\mathbb{E}^{N',w'} \tilde\theta^{N',w'}_v\tilde\theta^{N',w'}_u=\mathbb{E}^{N',w'} \tilde\theta^{N',w'}_v(n')\tilde\theta^{N',w'}_u(n')$.
Consider the events \[\mathcal{E}_{F,v}^{N'}(y)=\left\{\tilde\theta^{N',w'}_v(t)\le y+\frac{tm_{N'}}{n'}\ \forall t\in [1,n'], \tilde\theta^{N',w'}_v(n')\in [m_{N'}+y,m_{N'}+y+1]\right\}\] and the random variable \[\Lambda_F(y)=\sum_{v\in A'}\mathbbm{1}_{\mathcal{E}_{F,v}^{N'}(y)}.\] A calculation analogous to the proof of \cite[Lemma 3.7]{DZ14} shows that \[\mathbb{P}^{N',w'}\left(\mathcal{E}_{F,v}^{N'}(y)\right)\ge \frac{c}{2^{dn'}}y{\mathrm{e}}^{-\sqrt{2d}y},\] and thus \[ \mathbb{E}^{N',w'}\Lambda_F(y)\ge cy{\mathrm{e}}^{-\sqrt{2d}y}. \] The second moment of $\Lambda_F(y)$ is bounded by that of $\sum_{v\in V_{N'}(w')}\mathbbm{1}_{\mathcal{E}_{F,v}^{N'}(y)}$, and the latter can be bounded again like in the proof of \cite[Lemma 3.7]{DZ14}. We obtain that \[\mathbb{E}^{N',w'}\Lambda_F(y)^2\le Cy\exp\left(-\frac{m_{N'}}{\log N'}y\right)\le Cy{\mathrm{e}}^{-\sqrt{2d}y}.\] From the Paley-Zygmund inequality we can thus conclude \[\mathbb{P}^{N',w'}\left(\max_{v\in A\cap V^{2\delta}_{N'}(w')}\tilde\theta^{N',w'}_v\ge m_{N'}+y\right)\ge\mathbb{P}^{N',w'}(\Lambda_F(y)>0)\ge\frac{\left(\mathbb{E}^{N',w'}\Lambda_F(y)\right)^2}{\mathbb{E}^{N',w'}\Lambda_F(y)^2}\ge cy{\mathrm{e}}^{-\sqrt{2d}y}.\] This shows \eqref{e:lower_right_tail3}.
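Explicitly, the last inequality combines the two moment bounds:
\[\frac{\left(\mathbb{E}^{N',w'}\Lambda_F(y)\right)^2}{\mathbb{E}^{N',w'}\Lambda_F(y)^2}\ge\frac{\big(cy{\mathrm{e}}^{-\sqrt{2d}y}\big)^2}{Cy{\mathrm{e}}^{-\sqrt{2d}y}}=\frac{c^2}{C}\,y\,{\mathrm{e}}^{-\sqrt{2d}y},\]
so restricting the sum to $A'$ costs only a constant factor in the lower bound.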
We now recall \eqref{e:lower_right_tail2} and use that $m_N-m_{N'}\le \sqrt{2d}\alpha_{\delta}(R)+C$ to see that for any $z\in[1,\sqrt{\log N'}-\sqrt{2d}\alpha_{\delta}(R)-C]$ we have \begin{align*} &\mathbb{P}^{N,w}\Big(\max_{v\in \Psi^{w,w'}_{N,N'}(A\cap V^{2\delta}_{N'}(w'))}\varphi^{N,w}_v\ge m_N+z\Big)\\ &\ge \frac12\mathbb{P}^{N',w'}\Big(\max_{v\in A\cap V^{2\delta}_{N'}(w')}\tilde\theta^{N',w'}_v\ge m_{N'}+z+\sqrt{2d}\alpha_{\delta}(R)+C\Big)\\ &\ge c(z+\sqrt{2d}\alpha_{\delta}(R))\exp\left(-\sqrt{2d}\big(z+\sqrt{2d}\alpha_{\delta}(R)\big)\right) \ge cz{\mathrm{e}}^{-\sqrt{2d}z-C\alpha_{\delta}(R)}, \end{align*} which implies \eqref{e:lower_right_tail}. \end{proof} We can use this lower bound on the right tail directly to deduce an upper bound on the left tail. Again, this result follows from Gaussian comparison arguments together with the pigeonhole principle. \begin{lemma}\label{l:upper_left_tail} Under Assumption \ref{a:logbd} there are constants $\gamma'$, $c$ and $C$ (depending on $d$ only) with the following property: Let $N\in\mathbb{N}$, $w\in\mathbb{Z}^d$ and $A\subset V_N(w)$ with $|A|\ge(1-2^{-(d+3)})N^d$. Let $R=\max_{v\in A}\mathsf{R}^{(1)}_v$ and suppose that $N\ge\gamma'\exp\left(\alpha_{(d2^{d+2})^{-1}}(R)\right)$. Then for any $z\in[1,2\sqrt{2d}\log N-C(\alpha_{(d2^{d+2})^{-1}}(R)\vee 1)]$ we have \begin{equation}\label{e:upper_left_tail} \mathbb{P}^{N,w}\left(\max_{v\in A}\varphi^{N,w}_v\le m_N-z\right)\le C{\mathrm{e}}^{-cz+C\alpha_{(d2^{d+2})^{-1}}(R)}. \end{equation} \end{lemma} \begin{proof} We use a comparison argument very similar to that in the proof of Lemma \ref{l:lower_right_tail}. Note that the comparison argument for the analogous result in \cite[Lemma 2.1]{DRZ17} is set up somewhat differently.
As in Lemma \ref{l:lower_right_tail} we fix $\delta={1}/{(d2^{d+2})}$ and observe that for any $u,v\in A\cap V_N^\delta(w)$ we have the correlation bound \eqref{e:lower_right_tail1}. We now compare $\varphi^{N,w}_{ \Psi^{w,w'}_{N,N'}(v)}$ with $\tilde\theta^{N',w'}_v+a_v X$, where $X$ is a standard Gaussian independent of everything else and the $a_v$ are constants chosen so that the variance of $\varphi^{N,w}_{ \Psi^{w,w'}_{N,N'}(v)}$ equals that of $\tilde\theta^{N',w'}_v+a_v X$. Slepian's lemma then implies that there is a choice of $N'=2^{n'}$ with $C_0-1\le\log N-\log N'\le \alpha_{\delta}(R)+C_0$ such that for any $w'\in V_N(w)$, \begin{equation}\label{e:upper_left_tail1} \mathbb{P}^{N,w}\Big(\max_{v\in \Psi^{w,w'}_{N,N'}(A\cap V^{2\delta}_{N'}(w'))}\!\!\varphi^{N,w}_v\le m_N-z\Big) \le \frac12\mathbb{P}^{N',w'}\Big(\max_{v\in A\cap V^{2\delta}_{N'}(w')} \!\!\tilde\theta^{N',w'}_v\le m_N-\frac z2\Big)+\mathbb{P}\big(X\le-\frac z2\big). \end{equation} We choose $\gamma'={\mathrm{e}}^{C_0}$, which ensures that a choice of $N'$ with $N'\ge1$ is possible. The second summand in \eqref{e:upper_left_tail1} is obviously bounded by $C{\mathrm{e}}^{-cz}$, and so we focus on the first one. We are free to choose $w'$ in \eqref{e:upper_left_tail1}. For that purpose we use again the pigeonhole principle, however this time we need to be slightly more careful in the estimates. Because $\frac{N}{N'}\ge{\mathrm{e}}^{C_0-1}$, we can control $\left\lfloor\frac{N}{N'}\right\rfloor$ by $\frac{N}{N'}$ and $C_0$, and in fact, by making $C_0$ larger, if necessary, we can ensure that \[\Big|\bigcup_{w'\in\mathcal{W}_{N,N'}(w)}V_{N'}(w')\Big|=(N')^d\Big\lfloor\frac{N}{N'}\Big\rfloor^d\ge\big(1-\frac{1}{2^{d+3}}\big)N^d.\] Recall that we assumed $|A|\ge(1-2^{-(d+3)})N^d$.
By the pigeonhole principle, there is now $w'\in\mathcal{W}_{N,N'}(w)$ with \[\Big|A\cap V_{N'}(w')\Big|\ge\big(1-\frac{1}{2^{d+2}}\big)(N')^d,\] and hence also \[\Big|A\cap V^{2\delta}_{N'}(w')\Big|\ge\big(1-\frac{1}{2^{d+1}}\big)(N')^d.\] We fix this choice of $w'$ for the remainder of the proof. Defining $A'=A\cap V_{N'}^{2\delta}(w')$, we claim that for any $y\in[0,2\sqrt{2d}\log N']$, \begin{equation}\label{e:upper_left_tail2} \mathbb{P}^{N',w'}\left(\max_{v\in A'}\tilde\theta^{N',w'}_v\le m_{N'}-y\right)\le C{\mathrm{e}}^{-cy}. \end{equation} To prove this, we proceed similarly to the proof of \cite[Lemma 2.8]{DRZ17} (where the analogous result with $\max_{v\in V_{N'}}$ instead of $\max_{v\in A'}$ was shown). That is, we pick an integer $N''=2^{n''}$ to be fixed later, and consider the boxes $V_{N''}(w'')$ for $w''\in\mathcal{W}_{N',2N''}(w')$. This is a collection of $2^{d(n'-n''-1)}$ boxes of side length $N''$ and with pairwise distance at least $N''$. We can now compare the maximum of $\tilde\theta^{N',w'}$ with the maxima of the (independent) MBRWs $\tilde\theta^{N'',w''}$ for $w''\in\mathcal{W}_{N',2N''}(w')$. As in the proof of \cite[Lemma 2.8]{DRZ17} we see that (for $X$ again a standard Gaussian independent of everything else) \begin{equation}\label{e:upper_left_tail4} \begin{split} &\mathbb{P}^{N',w'}\left(\max_{v\in A'}\tilde\theta^{N',w'}_v\le m_{N'}-y\right)\\ &\le\prod_{w''\in\mathcal{W}_{N',2N''}(w')}\mathbb{P}^{N'',w''}\left(\max_{v\in A'\cap V_{N''}(w'')}\tilde\theta^{N'',w''}_v\le m_{N'}-\frac y2\right)+\mathbb{P}\left(\sqrt{\log N'-\log N''}X\le -\frac y2\right).
\end{split} \end{equation} As $|A'|\ge\left(1-\frac{1}{2^{d+1}}\right)(N')^d$, we know that \[\Big|A'\cap\bigcup_{w''\in\mathcal{W}_{N',2N''}(w')}V_{N''}(w'')\Big|\ge\frac{1}{2^{d+1}}(N')^d=\frac12\Big|\bigcup_{w''\in\mathcal{W}_{N',2N''}(w')}V_{N''}(w'')\Big|.\] Therefore, by the pigeonhole principle, for at least $\frac13$ of the possible $w''$ we must have \begin{equation}\label{e:upper_left_tail3} \left|A'\cap V_{N''}(w'')\right|\ge\frac14(N'')^d. \end{equation} (Indeed, if \eqref{e:upper_left_tail3} held for a fraction $f$ of the $w''$, the left-hand side of the previous display would be at most $\big(f+\frac{1-f}{4}\big)$ times the right-hand side, forcing $f+\frac{1-f}{4}\ge\frac12$, i.e.\ $f\ge\frac13$.) Denote by $\mathcal{S}$ the collection of $w''$ for which \eqref{e:upper_left_tail3} holds. We have just argued that $|\mathcal{S}|\ge\frac13 2^{d(n'-n''-1)}\ge c\left(\frac{N'}{N''}\right)^d$. An argument analogous to that in the proof of Lemma \ref{l:lower_right_tail} shows that there is a universal constant $c_0$ such that \[\mathbb{P}^{N'',w''}\left(\max_{v\in A'\cap V_{N''}(w'')}\tilde\theta^{N'',w''}_v\ge m_{N''}\right)\ge c_0\] whenever $\left|A'\cap V_{N''}(w'')\right|\ge c\left|V_{N''}(w'')\right|$ and so in particular whenever $w''\in\mathcal{S}$. If we choose $N''$ as the smallest power of 2 larger than $N'\exp\left(-\frac{y}{2\sqrt{2d}}\right)\ge1$ then $m_{N'}-m_{N''}\le\frac y2$ and so \begin{align*} \mathbb{P}^{N'',w''}\left(\max_{v\in A'\cap V_{N''}(w'')}\tilde\theta^{N'',w''}_v\le m_{N'}-\frac y2\right)&\le \mathbb{P}^{N'',w''}\left(\max_{v\in A'\cap V_{N''}(w'')}\tilde\theta^{N'',w''}_v\le m_{N''}\right)\le 1-c_0. \end{align*} This allows us to estimate the first term in \eqref{e:upper_left_tail4}. For the second term we just note that $\log N'-\log N''$ is of order $y$, and so that term is exponentially small in $y$. In summary, we see that \begin{align*} \mathbb{P}^{N',w'}\left(\max_{v\in A'}\tilde\theta^{N',w'}_v\le m_{N'}-y\right)&\le(1-c_0)^{|\mathcal{S}|}+C{\mathrm{e}}^{-cy} \le C{\mathrm{e}}^{-c{\mathrm{e}}^{cy}}+C{\mathrm{e}}^{-cy}\le C{\mathrm{e}}^{-cy}. \end{align*} This establishes \eqref{e:upper_left_tail2}. We can now complete the proof of the lemma.
Using \eqref{e:upper_left_tail2} to estimate the first summand in \eqref{e:upper_left_tail1}, we obtain for any $z$ such that $z-\sqrt{2d}\alpha_{\delta}(R)\le2\sqrt{2d}\log N'$ \begin{align*} &\mathbb{P}^{N,w}\Big(\max_{v\in \Psi^{w,w'}_{N,N'}(A\cap V^{2\delta}_{N'}(w'))}\varphi^{N,w}_v\le m_N-z\Big)\\ &\le \frac12\mathbb{P}^{N',w'}\Big(\max_{v\in A\cap V^{2\delta}_{N'}(w')}\tilde\theta^{N',w'}_v\le m_{N'}+\sqrt{2d}\alpha_{\delta}(R)-\frac z2\Big)+C{\mathrm{e}}^{-cz}\\ &\le C{\mathrm{e}}^{-c((z-\sqrt{2d}\alpha_{\delta}(R))\vee0)}+C{\mathrm{e}}^{-cz} \le C{\mathrm{e}}^{-cz+C\alpha_{\delta}(R)}, \end{align*} which implies \eqref{e:upper_left_tail}. \end{proof} As the next theorem shows, the tail bounds of this subsection imply the tightness of the maximum of $\varphi^{N}$. \begin{theorem} \label{t:tightness} Suppose that Assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:sparseT}, \ref{a:sparseR} hold. Then for each relatively open subset $U\subset[0,1]^d$, the sequence of random variables $\max_{\substack{v\in V_N\\v/(N-1)\in U}}\varphi^{N}_v-m_N$ is tight. In particular, $\max_{v\in V_N}\varphi^{N}_v-m_N$ is tight. \end{theorem} \begin{proof} By Lemma \ref{l:upper_right_tail_badset} there is a large $T$ such that for any sufficiently large $N$ we have the bound \[\mathbb{P}^{N}\Big(\max_{\substack{v\in V_N\\v/(N-1)\in U}}\varphi^{N}_v\ge m_N+z\Big)\le C(z\vee1){\mathrm{e}}^{-\sqrt{2d}z-cT}+C(z\vee1){\mathrm{e}}^{-\sqrt{2d}z+CT}.\] Taking the limit $N\to\infty$ and then $z\to\infty$, we obtain the tightness of the upper tail. For the lower tail we note that because $U$ is open, for every $J$ sufficiently large and for every $N\ge J$ at least one of the cubes $V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')$ for $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{J}\right\rfloor}$ is such that $\frac{1}{N-1}V_{\left\lfloor\frac{N}{J}\right\rfloor}(w')\subset U$. We fix such a choice of $J$, and denote the corresponding sequence of points by $w'_N$.
By Assumption \ref{a:sparseR} there exists $R>0$ such that for all $N$ sufficiently large at most $\frac{1}{2^{d+3}}\left(\left\lfloor\frac{N}{J}\right\rfloor\right)^d$ of the points in $V_{\left\lfloor\frac{N}{J}\right\rfloor}(w'_N)$ satisfy $\mathsf{R}^{(1)}_\cdot\ge R$. Applying Lemma \ref{l:upper_left_tail} to the cube $V_{\left\lfloor\frac{N}{J}\right\rfloor}(w'_N)$ and the set of points where $\mathsf{R}^{(1)}_\cdot<R$, we obtain the tightness of the lower tail as well. \end{proof} \subsection{Geometry of the near-maximizers} For the following arguments it is important to understand the set of near-maximizers, i.e.\ those vertices $v$ where $\varphi^{N}_v$ is of the order of $m_N$. It turns out that, if we restrict attention to those points where $\mathsf{T}$ (or $\mathsf{R}^{(1)}$) stays bounded, then any two near-maximizers are either microscopically close (i.e.\ at distance of order 1 from each other) or macroscopically far apart (i.e.\ at distance of order $N$ from each other). Let us make this precise. \begin{lemma}\label{l:geom_nearmax} Under Assumption \ref{a:logbd} for $\delta>0$ there is a constant $c$ such that for any sequence $(A_N)_{N\in\mathbb{N}}$ of subsets of $\mathbb{Z}^d$ with $A_N\subset V_N$, if $R:=\sup_{N\in\mathbb{N}}\max_{v\in A_N}\mathsf{R}^{(1)}_v<\infty$, we have \begin{equation}\label{e:geom_nearmax} \limsup_{L\to\infty}\limsup_{N\to\infty}\mathbb{P}^{N}\Big(\exists u,v\in A_N\cap V_N^\delta\colon L\le |u-v|\le \frac NL,\varphi^{N,w_N}_u\wedge\varphi^{N,w_N}_v\ge m_N-c\log\log L\Big)=0. \end{equation} \end{lemma} Before we begin the proof, we point out that Lemma \ref{l:geom_nearmax} can be combined with the tail bounds of the previous subsection to yield the analogous statement on the geometry of the near-maximizers over all of $V_N$ (the analogue of \cite[Theorem 1.1]{DZ14}). Although not used in the present paper, this might be of independent interest, and so we state it separately.
\begin{theorem}\label{t:geom_nearmax} Suppose that Assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:sparseT}, \ref{a:sparseR} hold. Then there is a constant $c$ such that \begin{equation}\label{e:geom_nearmax2} \limsup_{L\to\infty}\limsup_{N\to\infty}\mathbb{P}^{N}\left(\exists u,v\in V_N\colon L\le |u-v|\le \frac NL,\varphi^{N}_u\wedge\varphi^{N}_v\ge m_N-c\log\log L\right)=0. \end{equation} \end{theorem} \begin{proof} In view of Lemma \ref{l:geom_nearmax}, it suffices to show that for each fixed $L$ \[ \limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\limsup_{N\to\infty}\mathbb{P}^{N}\Big(\max_{\substack{v\in V_N\\v\notin V^\delta_N\text{ or }\mathsf{T}_v\ge T\text{ or }\mathsf{R}^{(1)}_v\ge R}}\varphi^{N}_v\ge m_N-c\log\log L\Big)=0. \] But this follows immediately from Lemma \ref{l:upper_right_tail_badset}. \end{proof} We can now turn to the proof of Lemma \ref{l:geom_nearmax}. For this we could proceed similarly to \cite{DZ14} or \cite{DRZ17}. The main difference would be that we restrict $u,v$ to a smaller set, and this only helps. However, there is an even faster way. Namely, we will use comparison inequalities to deduce our result directly from the one in \cite[Lemma 3.3]{DRZ17}. \begin{proof}[Proof of Lemma \ref{l:geom_nearmax}] We use Slepian's inequality to compare the maxima of the two Gaussian processes \[\Big\{\varphi^{N}_u+\varphi^{N}_v+a_{u,v}X\Big| u,v\in A_N\cap V_N^\delta\colon L\le |u-v|\le \frac NL\Big\}\] and \[\Big\{\tilde\theta^{N'}_{\Psi_{N,N'}^{0,0}(u)}+\tilde\theta^{N'}_{\Psi_{N,N'}^{0,0}(v)}\Big| u,v\in A_N\cap V_N^\delta\colon L\le |u-v|\le \frac NL\Big\}.\] Here $\Psi$ is as in \eqref{e:upscaling}, $X$ is a standard Gaussian independent of everything else and $a_{u,v}$ is chosen in such a way that the variances match.
As in previous arguments, we find that there is a choice of $N'$ with $\log N'-\log N\le 4\alpha_{\delta}(R)+C$ such that the assumptions of Slepian's inequality are satisfied, and we conclude that for any $z\in\mathbb{R}$ \begin{align*} &\mathbb{P}^{N}\Big(\max\Big\{\varphi^{N}_u+\varphi^{N}_v\Big| u,v\in A_N\cap V_N^\delta, L\le |u-v|\le \frac NL\Big\}\ge 2m_N-z\Big)\\ &\le\mathbb{P}^{N'}\Big(\max\Big\{\tilde\theta^{N'}_{\Psi_{N,N'}^{0,0}(u)}+\tilde\theta^{N'}_{\Psi_{N,N'}^{0,0}(v)}\Big| u,v\in A_N\cap V_N^\delta, L\le |u-v|\le \frac NL\Big\}\ge 2m_N-z\Big). \end{align*} The left-hand side here is clearly an upper bound for the probability in \eqref{e:geom_nearmax}. On the other hand, we can make the right-hand side larger by loosening the restrictions on $u,v$. Thereby we see that \begin{align*} &\mathbb{P}^{N}\Big(\exists u,v\in A_N\cap V_N^\delta\colon L\le |u-v|\le \frac NL,\varphi^{N}_u\wedge\varphi^{N}_v\ge m_N-c_0\log\log L\Big)\\ &\le\mathbb{P}^{N'}\Big(\max\Big\{\tilde\theta^{N'}_u+\tilde\theta^{N'}_v\Big| u,v\in V_{N'}, L\le |u-v|\le \frac{N'}{L}\Big\}\ge 2m_N-2c_0\log\log L\Big)\\ &\le\mathbb{P}^{N'}\Big(\max\Big\{\tilde\theta^{N'}_u+\tilde\theta^{N'}_v\Big| u,v\in V_{N'}, L\le |u-v|\le \frac{N'}{L}\Big\}\ge 2m_{N'}-C\alpha_{\delta}(R)-2c_0\log\log L\Big). \end{align*} If the maximum of $\tilde\theta^{N'}_u+\tilde\theta^{N'}_v$ exceeds $2m_{N'}-2C_0$ for some constant $C_0$, then either there is a pair of points $u,v$ where $\tilde\theta^{N'}_\cdot$ is at least $m_{N'}-3C_0$, or there must exist one point $v$ where $\tilde\theta^{N'}_\cdot$ is at least $m_{N'}+C_0$. 
Thus, \begin{equation}\label{e:geom_nearmax4} \begin{split} &\mathbb{P}^{N}\Big(\exists u,v\in A_N\cap V_N^\delta\colon L\le |u-v|\le \frac NL,\varphi^{N}_u\wedge\varphi^{N}_v\ge m_N-c_0\log\log L\Big)\\ &\le\mathbb{P}^{N'}\Big(\exists u,v\in V_{N'}\colon L\le |u-v|\le \frac{N'}{L},\tilde\theta^{N'}_u\wedge\tilde\theta^{N'}_v\ge m_{N'}-3C\alpha_{\delta}(R)-6c_0\log\log L\Big)\\ &\quad+\mathbb{P}^{N'}\Big(\exists v\in V_{N'}\colon\tilde\theta^{N'}_v\ge m_{N'}+C\alpha_{\delta}(R)+2c_0\log\log L\Big). \end{split} \end{equation} The second summand here can be controlled using bounds on the right tail of a MBRW (as implied for example by Lemma \ref{l:upper_right_tail}, or by \cite[Lemma 2.7]{DRZ17}). Taking the limit $N\to\infty$ and then $L\to\infty$, this summand vanishes (for any $c_0$). For the first summand in \eqref{e:geom_nearmax4}, we can apply \cite[Lemma 3.3]{DRZ17} to see that for a sufficiently small $c_0$ it also vanishes in the limit $N\to\infty$ and then $L\to\infty$. This completes the proof. \end{proof} \section{Convergence of the maximum of the Gaussian field: proof of Theorem \ref{t:mainthm}} \label{sec-3} As in \cite{DRZ17}, the proof of convergence of the maximum of $\varphi^N_\cdot$ is built on constructing an easier-to-analyze Gaussian field, and using comparison theorems for Gaussian processes to relate the two. This section is devoted to the construction of the approximating field and to a proof of Theorem \ref{t:mainthm}. After introducing in Section \ref{sec-3.1} a quick comparison with processes augmented with independent Gaussians, we provide in Section \ref{sec-approx} the construction of the approximating fields. Section \ref{sec-3.3} then provides the proof of Theorem \ref{t:mainthm}.
\subsection{Preliminary results} \label{sec-3.1} In the next subsection we will construct an approximation to $\varphi^N$ for which we can control the behaviour of the maximum, and show that in a suitable limit the maxima of $\varphi^N$ and the approximation are close. In this section we lay the groundwork by showing that various modifications do not significantly change the maximum of a log-correlated Gaussian field. The two results are similar to results in \cite{DRZ17}, and the proofs are straightforward adaptations of the proofs there. As in \cite{DRZ17} we use the L\'evy metric \[\mathsf{d}(\nu_1,\nu_2)=\inf\left\{\delta>0\middle|\nu_1(U^\delta)\le\nu_2(U)+\delta\ \forall U\subset \mathbb{R}\text{ open}\right\}\] on probability measures on $\mathbb{R}$ (where $U^\delta=\{x\in\mathbb{R}\colon\mathsf{d}(x,U)<\delta\}$), as well as the one-sided variant \[\mathsf{d}^{\le}(\nu_1,\nu_2)=\inf\left\{\delta>0\middle|\nu_1((x,\infty))\le\nu_2((x-\delta,\infty))+\delta\ \forall x\in\mathbb{R}\right\}.\] It is well-known that $\mathsf{d}(\cdot,\cdot)$ induces the topology of weak convergence. Furthermore, $\mathsf{d}^{\le}$ measures approximate stochastic domination (in the sense that $\mathsf{d}^{\le}(\nu_1,\nu_2)=0$ if and only if $\nu_2$ stochastically dominates $\nu_1$). Clearly $\mathsf{d}^{\le}$ is not a metric; however, symmetrizing it yields a metric that also induces the topology of weak convergence. Given $L,L'\in\mathbb{N}$ and $\sigma,\sigma'\in\mathbb{R}$, consider standard Gaussians $X_B$ for $B\in\mathcal{V}_{N,\left\lfloor{N}/{L}\right\rfloor}$ and $X_{B'}$ for $B'\in\mathcal{V}_{N,L'}$, such that they are all independent. We define a variant $\tilde\varphi^{N}$ of $\varphi^{N}$ by setting $\tilde\varphi^{N}_v=\varphi^{N}_v+\sigma X_B+\sigma'X_{B'}$ for $v\in B\cap B'\cap V_N$. As we next show, the law of the maximum of $\tilde \varphi^N$ is (up to a deterministic shift) close to the law of the maximum of $\varphi^{N}$ itself.
\begin{lemma}\label{l:approx_fields} Under Assumption \ref{a:logbd} let $\delta>0$. Let $(A_N)_{N\in\mathbb{N}}$ be a sequence of subsets of $\mathbb{Z}^d$ with $A_N\subset V_N$ and $|A_N|\ge(1-2^{-(d+3)})N^d$, and assume that $R:=\limsup_{N\to\infty}\max_{v\in A_N}\mathsf{R}^{(1)}_v<\infty$. Then \begin{equation}\label{e:approx_fields} \lim_{L,L'\to\infty}\lim_{N\to\infty}\mathsf{d}\Big(\max_{v\in A_N\cap V_N^\delta}\varphi^{N}_v,\max_{v\in A_N\cap V_N^\delta}\tilde\varphi^{N}_v-(\sigma^2+\sigma'^2)\sqrt{\frac{d}{2}}\Big)=0. \end{equation} \end{lemma} \begin{proof} The proof of \cite[Lemma 3.1]{DRZ17} carries over with very minor changes, as it does not use the geometry of the domain other than via the application of Lemma \ref{l:geom_nearmax}. So we only mention the most important steps. We define yet another variant of $\varphi^{N}$. Namely, let $\varphi'^{N}$ be an independent copy of $\varphi^{N}$ (realized on the same probability space), and define \[\hat\varphi^{N}_v=\varphi^{N}_v+\sqrt{\frac{\sigma^2+\sigma'^2}{\log N}}\varphi'^{N}_v.\] Clearly, $\hat\varphi^{N}$ is equal in distribution to $\sqrt{1+\frac{\sigma^2+\sigma'^2}{\log N}}\varphi^{N}$. By Lemma \ref{l:upper_right_tail} we know that $\mathcal{E}:=\left\{\max_{v\in A_N\cap V_N^\delta}|\varphi^{N}_v|\le2\sqrt{2d}\log N\right\}$ occurs with probability tending to 1 as $N\to\infty$.
On the event $\mathcal{E}$ we can Taylor-expand the square root and find that \[\Big|\max_{v\in A_N\cap V_N^\delta}\sqrt{1+\frac{\sigma^2+\sigma'^2}{\log N}}\varphi^{N}_v-\max_{v\in A_N\cap V_N^\delta}\varphi^{N}_v-(\sigma^2+\sigma'^2)\sqrt{\frac{d}{2}}\Big|\le \frac{C}{\log N}\] which implies that \[\lim_{N\to\infty}\mathsf{d}\Big(\max_{v\in A_N\cap V_N^\delta}\varphi^{N}_v,\max_{v\in A_N\cap V_N^\delta}\hat\varphi^{N}_v-(\sigma^2+\sigma'^2)\sqrt{\frac{d}{2}}\Big)=0.\] Thus we only have to show that \begin{equation}\label{e:approx_fields1} \lim_{L,L'\to\infty}\lim_{N\to\infty}\mathsf{d}\Big(\max_{v\in A_N\cap V_N^\delta}\tilde\varphi^{N}_v,\max_{v\in A_N\cap V_N^\delta}\hat\varphi^{N}_v\Big)=0. \end{equation} For that purpose let $\kappa>0$, and let\footnote{Note that in the corresponding definition in the proof of \cite[Proposition 3.9]{DRZ17}, which uses $\kappa=\delta$, the second intersection was omitted; this is a mistake there.} \begin{equation}\label{e:approx_fields3} V_N^{\delta,\kappa}=V_N^\delta\cap\Big(\bigcup_{w'\in\mathcal{V}_{N,\left\lfloor{N}/{L}\right\rfloor}}V_{\left\lfloor\frac{N}{L}\right\rfloor}^\kappa(w')\Big)\cap\Big(\bigcup_{w''\in\mathcal{V}_{N,L'}}V_{L'}^\kappa(w'')\Big). \end{equation} Then $|V_N^\delta\setminus V_N^{\delta,\kappa}|\le C_d\kappa N^d$, and so by Lemma \ref{e:upper_right_tail_sparse} we have that the probability of the event \[\Big\{\max_{v\in A_N\cap V_N^\delta}\tilde\varphi^{N}_v\neq \max_{v\in A_N\cap V_N^{\delta,\kappa}}\tilde\varphi^{N}_v\Big\}\cup\Big\{\max_{v\in A_N\cap V_N^\delta}\hat\varphi^{N}_v\neq \max_{v\in A_N\cap V_N^{\delta,\kappa}}\hat\varphi^{N}_v\Big\}\] vanishes in the limit $N\to\infty$ and then $\kappa\to0$. Therefore, \eqref{e:approx_fields1} follows once we show that \begin{equation}\label{e:approx_fields2} \lim_{L,L'\to\infty}\lim_{\kappa\to0}\lim_{N\to\infty}\mathsf{d}\Big(\max_{v\in A_N\cap V_N^{\delta,\kappa}}\tilde\varphi^{N}_v,\max_{v\in A_N\cap V_N^{\delta,\kappa}}\hat\varphi^{N}_v\Big)=0.
\end{equation} To see \eqref{e:approx_fields2}, we can proceed exactly as in the proof of \cite[Proposition 3.9]{DRZ17}, by constructing a coupling between the two fields. The crucial point is that we can apply Lemma \ref{l:upper_left_tail} and Lemma \ref{l:geom_nearmax} not just to $\varphi^N$, but also to $\tilde\varphi^N$ and $\hat\varphi^N$. The former lemma ensures the lower tightness of the maxima, and the latter lemma is used to ensure that near-maximizers that share the macroscopic box $B$ also share the microscopic box $B'$. We omit further details.\end{proof} \begin{lemma} Under Assumption \ref{a:logbd} let $\delta>0$, $R>0$. Then there is a function $\iota_{t}\colon(0,\infty)\to(0,\infty)$ with $\lim_{\varepsilon\to0}\iota_{t}(\varepsilon)=0$, which depends only on $t>0$, with the following property. Let $(A_N)_{N\in\mathbb{N}}$ be a sequence of subsets of $\mathbb{Z}^d$ with $A_N\subset V_N$ and suppose that \[\limsup_{N\to\infty}\max_{v\in A_N}\mathsf{R}^{(1)}_v\le R.\] Let $\bar\varphi^{N}$ be another sequence of Gaussian fields on $V_N$, and suppose that there is $\varepsilon>0$ such that for all $N$ and all $u,v\in A_N$ \begin{align*} \left|\Var \varphi^{N}_v-\Var\bar\varphi^{N}_v\right|\le\varepsilon,\qquad \mathbb{E}\bar\varphi^{N}_u\bar\varphi^{N}_v-\mathbb{E}\varphi^{N}_u\varphi^{N}_v\le\varepsilon. \end{align*} Then \[\limsup_{N\to\infty}\mathsf{d}^{\le}\left(\max_{v\in A_N\cap V_N^\delta}\varphi^{N}_v-m_N,\max_{v\in A_N\cap V_N^\delta}\bar\varphi^{N}_v-m_N\right)\le\iota_{\alpha_\delta(R)}(\varepsilon).\] \end{lemma} \begin{proof} The proof is analogous to the proof of \cite[Lemma 3.2]{DRZ17}. \end{proof} \subsection{An approximating field} \label{sec-approx} Following the approach in \cite[Section 4.1]{DRZ17}, we construct an approximation $\xi^{N,J,J',R,\delta}$ to $\varphi^{N}$ that consists of rescaled versions of the field on macroscopic and microscopic scales, with a modified branching random walk in between.
Our construction differs from that in \cite{DRZ17} in one important aspect: for the approach of \cite{DRZ17} to work, it is necessary that the variances of $\varphi^{N}$ and the approximating field agree at all vertices. In \cite{DRZ17} the correction terms have variance up to $C\alpha_\delta(R)$ (in our notation), and are controlled uniformly. In our setting, these correction terms blow up as $R\to\infty$, which leads to a loss of control of the tail behaviour of the field. Instead, we estimate the relevant variances more precisely, and this allows us to use correction terms that are independent of $R$ and $T$. However, this comes at the cost of having to use two sets of correction terms, one at macroscopic and one at microscopic scale. In this and the next section we take particular care to use subscripts and superscripts to indicate the variables on which a given object depends. For example, our approximating field $\xi^{N,J,J',R,\delta}$ will depend on $N,J,J',R\in\mathbb{N}$ and $\delta>0$, but not on other quantities. In order to construct $\xi^{N,J,J',R,\delta}$, let $J,J',R\in\mathbb{N}$, and $\delta>0$. We subdivide $V_N$ into macroscopic boxes of sidelength $\left\lfloor{N}/{J}\right\rfloor$, and then we subdivide these into microscopic boxes of sidelength $J'$. Of course, in general $N$ will not be divisible by $JJ'$, and so there will be some points left over. However, there will be only $o(N^d)$ of them, and hence they will not be relevant for the maximum (using Lemma \ref{l:upper_right_tail_badset}).\footnote{In \cite{DRZ17} it is directly assumed that $N$ is a multiple of $JJ'$. We cannot do so here, as this would result in shifting the boxes in $V_N$ in a way that is incompatible with Assumptions \ref{a:micro}, \ref{a:macro} and in particular \ref{a:lln}.} To make this precise, let $N^*=JJ'\left\lfloor{N}/{JJ'}\right\rfloor$ be the largest multiple of $JJ'$ that is not larger than $N$.
We consider the macroscopic cubes $V_{\left\lfloor{N}/{J}\right\rfloor}(w')$ for $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, and in each of them we consider the microscopic cubes $V_{J'}(w'')$ for $w''\in\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}(w')$. In this way, the number of microscopic cubes along each side of each macroscopic cube is exactly \[\left\lfloor\frac{1}{J'}\left\lfloor\frac{N}{J}\right\rfloor\right\rfloor=\left\lfloor\frac{N}{JJ'}\right\rfloor=\frac{N^*}{JJ'}\] where the first equality is an easy exercise in arithmetic. We will define $\xi^{N,J,J',R,\delta}$ on the union of these microscopic cubes and then extend it by 0 to $V_N$. The reader is referred to Figure \ref{fig:outer} for a pictorial description of the various boxes entering the construction. \begin{figure}[h] \begin{tikzpicture}[scale=0.7] \fill[fill=gray!70] (0,0) -- (0,10) -- (0.5,10) -- (0.5,0) -- cycle; \fill[fill=gray!70] (0,10) -- (10,10) -- (10,9.5) -- (0,9.5) -- cycle; \fill[fill=gray!70] (0,0.5) -- (10,0.5) -- (10,0) -- (0,0) -- cycle; \fill[fill=gray!70] (10,0) -- (10,10) -- (9.5,10) -- (9.5,0) -- cycle; \fill[fill=gray!70] (9,0) -- (9,10) -- (10,10) -- (10,0) -- cycle; \fill[fill=gray!70] (0,9) -- (0,10) -- (10,10) -- (10,9) -- cycle; \fill[fill=gray!70] (0,1.35) -- (9,1.35) -- (9,1.575) -- (0,1.575) -- cycle; \fill[fill=gray!70] (0,2.85) -- (9,2.85) -- (9,3.075) -- (0,3.075) -- cycle; \fill[fill=gray!70] (0,4.35) -- (9,4.35) -- (9,4.575) -- (0,4.575) -- cycle; \fill[fill=gray!70] (0,5.85) -- (9,5.85) -- (9,6.075) -- (0,6.075) -- cycle; \fill[fill=gray!70] (0,7.35) -- (9,7.35) -- (9,7.575) -- (0,7.575) -- cycle; \fill[fill=gray!70] (0,8.85) -- (9,8.85) -- (9,9.075) -- (0,9.075) -- cycle; \fill[fill=gray!70] (1.35,0) -- (1.35,9) -- (1.575,9) -- (1.575,0) -- cycle; \fill[fill=gray!70] (2.85,0) -- (2.85,9) -- (3.075,9) -- (3.075,0) -- cycle; \fill[fill=gray!70] (4.35,0) -- (4.35,9) -- (4.575,9) -- (4.575,0) -- cycle (4.425,0); \fill[fill=gray!70] (5.85,0) -- (5.85,9) --
(6.075,9) -- (6.075,0) -- cycle; \fill[fill=gray!70] (7.35,0) -- (7.35,9) -- (7.575,9) -- (7.575,0) -- cycle; \fill[fill=gray!70] (8.85,0) -- (8.85,9) -- (9.075,9) -- (9.075,0) -- cycle (8.925,0); \draw (0,0) rectangle (10,10); \draw[dashed] (0.5,0.5) rectangle (9.5,9.5); \draw(0,0) rectangle (1.5,1.5); \draw(0,1.5) rectangle (1.5,3); \draw(0,3) rectangle (1.5,4.5); \draw(0,4.5) rectangle (1.5,6); \draw(0,6) rectangle (1.5,7.5); \draw(0,7.5) rectangle (1.5,9); \draw(1.5,0) rectangle (3,1.5); \draw(1.5,1.5) rectangle (3,3); \draw(1.5,3) rectangle (3,4.5); \draw(1.5,4.5) rectangle (3,6); \draw(1.5,6) rectangle (3,7.5); \draw(1.5,7.5) rectangle (3,9); \draw(3,0) rectangle (4.5,1.5); \draw(3,1.5) rectangle (4.5,3); \draw(3,3) rectangle (4.5,4.5); \draw(3,4.5) rectangle (4.5,6); \draw(3,6) rectangle (4.5,7.5); \draw(3,7.5) rectangle (4.5,9); \draw(4.5,0) rectangle (6,1.5); \draw(4.5,1.5) rectangle (6,3); \draw(4.5,3) rectangle (6,4.5); \draw(4.5,4.5) rectangle (6,6); \draw(4.5,6) rectangle (6,7.5); \draw(4.5,7.5) rectangle (6,9); \draw(6,0) rectangle (7.5,1.5); \draw(6,1.5) rectangle (7.5,3); \draw(6,3) rectangle (7.5,4.5); \draw(6,4.5) rectangle (7.5,6); \draw(6,6) rectangle (7.5,7.5); \draw(6,7.5) rectangle (7.5,9); \draw(7.5,0) rectangle (9,1.5); \draw(7.5,1.5) rectangle (9,3); \draw(7.5,3) rectangle (9,4.5); \draw(7.5,4.5) rectangle (9,6); \draw(7.5,6) rectangle (9,7.5); \draw(7.5,7.5) rectangle (9,9); \draw[|<->|] (0,10.2) -- (10,10.2); \node at (5,10.5) {\small $N$}; \draw[|<->|] (0,-0.2) -- (9,-0.2); \node at (5,-0.5) {\small $J\lfloor N/J\rfloor$}; \draw[|<->|] (-0.2,0) -- (-0.2,1.5); \node at (-1,0.75) {\small $\lfloor N/J\rfloor$}; \draw[|<->|] (10.2,0) -- (10.2,0.5); \node at (10.8,0.25) {\small $\delta N$}; \draw (14.5,4.5) circle(3.6); \draw [thick] plot [smooth, tension=1] coordinates { (16.5,2.5) (13,0.5) (7.5,1.5)}; \draw [thick] plot [smooth, tension=1] coordinates { (12.5,6.5) (8,5.5) (6,3)}; \fill[fill=gray!70] (12.5,2.5) -- (12.5,6.5) -- (12.6,6.5) 
-- (12.6,2.5) -- cycle; \fill[fill=gray!70] (12.72,2.5) -- (12.72,6.5) -- (12.78,6.5) -- (12.78,2.5) -- cycle; \fill[fill=gray!70] (12.97,2.5) -- (12.97,6.5) -- (13.03,6.5) -- (13.03,2.5) -- cycle; \fill[fill=gray!70] (13.22,2.5) -- (13.22,6.5) -- (13.28,6.5) -- (13.28,2.5) -- cycle; \fill[fill=gray!70] (13.47,2.5) -- (13.47,6.5) -- (13.53,6.5) -- (13.53,2.5) -- cycle; \fill[fill=gray!70] (13.72,2.5) -- (13.72,6.5) -- (13.78,6.5) -- (13.78,2.5) -- cycle; \fill[fill=gray!70] (13.97,2.5) -- (13.97,6.5) -- (14.03,6.5) -- (14.03,2.5) -- cycle; \fill[fill=gray!70] (14.22,2.5) -- (14.22,6.5) -- (14.28,6.5) -- (14.28,2.5) -- cycle; \fill[fill=gray!70] (14.47,2.5) -- (14.47,6.5) -- (14.53,6.5) -- (14.53,2.5) -- cycle; \fill[fill=gray!70] (14.72,2.5) -- (14.72,6.5) -- (14.78,6.5) -- (14.78,2.5) -- cycle; \fill[fill=gray!70] (14.97,2.5) -- (14.97,6.5) -- (15.03,6.5) -- (15.03,2.5) -- cycle; \fill[fill=gray!70] (15.22,2.5) -- (15.22,6.5) -- (15.28,6.5) -- (15.28,2.5) -- cycle; \fill[fill=gray!70] (15.47,2.5) -- (15.47,6.5) -- (15.53,6.5) -- (15.53,2.5) -- cycle; \fill[fill=gray!70] (15.72,2.5) -- (15.72,6.5) -- (15.78,6.5) -- (15.78,2.5) -- cycle; \fill[fill=gray!70] (15.97,2.5) -- (15.97,6.5) -- (16.03,6.5) -- (16.03,2.5) -- cycle; \fill[fill=gray!70] (16.22,2.5) -- (16.22,6.5) -- (16.28,6.5) -- (16.28,2.5) -- cycle; \fill[fill=gray!70] (16.22,2.5) -- (16.22,6.5) -- (16.5,6.5) -- (16.5,2.5) -- cycle (16.72,2.5); \fill[fill=gray!70] (12.5,2.5) -- (16.5,2.5) -- (16.5,2.6) -- (12.5,2.6) -- cycle; \fill[fill=gray!70] (12.5,2.72) -- (16.5,2.72) -- (16.5,2.78) -- (12.5,2.78) -- cycle; \fill[fill=gray!70] (12.5,2.97) -- (16.5,2.97) -- (16.5,3.03) -- (12.5,3.03) -- cycle; \fill[fill=gray!70] (12.5,3.22) -- (16.5,3.22) -- (16.5,3.28) -- (12.5,3.28) -- cycle; \fill[fill=gray!70] (12.5,3.47) -- (16.5,3.47) -- (16.5,3.53) -- (12.5,3.53) -- cycle; \fill[fill=gray!70] (12.5,3.72) -- (16.5,3.72) -- (16.5,3.78) -- (12.5,3.78) -- cycle; \fill[fill=gray!70] (12.5,3.97) -- (16.5,3.97) -- 
(16.5,4.03) -- (12.5,4.03) -- cycle; \fill[fill=gray!70] (12.5,4.22) -- (16.5,4.22) -- (16.5,4.28) -- (12.5,4.28) -- cycle; \fill[fill=gray!70] (12.5,4.47) -- (16.5,4.47) -- (16.5,4.53) -- (12.5,4.53) -- cycle; \fill[fill=gray!70] (12.5,4.72) -- (16.5,4.72) -- (16.5,4.78) -- (12.5,4.78) -- cycle; \fill[fill=gray!70] (12.5,4.97) -- (16.5,4.97) -- (16.5,5.03) -- (12.5,5.03) -- cycle; \fill[fill=gray!70] (12.5,5.22) -- (16.5,5.22) -- (16.5,5.28) -- (12.5,5.28) -- cycle; \fill[fill=gray!70] (12.5,5.47) -- (16.5,5.47) -- (16.5,5.53) -- (12.5,5.53) -- cycle; \fill[fill=gray!70] (12.5,5.72) -- (16.5,5.72) -- (16.5,5.78) -- (12.5,5.78) -- cycle; \fill[fill=gray!70] (12.5,5.97) -- (16.5,5.97) -- (16.5,6.03) -- (12.5,6.03) -- cycle; \fill[fill=gray!70] (12.5,6.22) -- (16.5,6.22) -- (16.5,6.28) -- (12.5,6.28) -- cycle; \fill[fill=gray!70] (12.5,6.22) -- (16.5,6.22) -- (16.5,6.5) -- (12.5,6.5) -- cycle; \fill[fill=black] (14,4.5)--(14,4.75)--(14.25,4.75)--(14.25,4.5)-- cycle; \fill[fill=black] (15.5,4)--(15.5,4.25)--(15.75,4.25)--(15.75,4)-- cycle; \draw(12.5,2.5) rectangle (16.5,6.5); \draw (12.3,5) -- (12.3,5.25); \draw[->|] (12.3,4.8) -- (12.3,5); \draw[->|] (12.3,5.45) -- (12.3,5.25); \node at (11.9,5.1) {\small $J'$}; \draw[|<->|] (12.5,6.7) -- (16.5,6.7); \node at (14.5,7) { $\lfloor N/J\rfloor$}; \draw[|<->|] (12.5,2.3) -- (16.22,2.3); \node at (14.4,2) {\small $N^*/J$}; \draw (16.7,2.5) -- (16.7,2.6); \draw[->|] (16.7,2.3) -- (16.7,2.5); \draw[->|] (16.7,2.8) -- (16.7,2.6); \node at (17.15,3) {\tiny $\delta\lfloor N/J\rfloor$}; \draw (18,9) circle(1.8); \draw [thick] plot [smooth, tension=1] coordinates { (17,10) (16,8) (15,5)}; \draw [thick] plot [smooth, tension=1] coordinates { (19,8) (17,6) (15.25,4.75)}; \fill[fill=black] (18,8.5)--(18.1,8.5)--(18.1,8.6)--(18,8.6)-- cycle (18,8.5); \draw[|<->|] (17,10.1) -- (19,10.1); \node at (18,10.4) {\small $J'$}; \draw (19.2,8) -- (19.2,8.1); \draw[->|] (19.2,8.3) -- (19.2,8.1); \draw[->|] (19.2,7.8) -- (19.2,8); \node at 
(19.4,8.5) {\tiny $\delta J'$}; \fill[fill=gray!70] (17,9.9) -- (19,9.9) -- (19,10) -- (17,10) -- cycle (17,9.9); \fill[fill=gray!70] (17,10) -- (17,8) -- (17.1,8) -- (17.1,10) -- cycle (17,10); \fill[fill=gray!70] (17,8) -- (19,8) -- (19,8.1) -- (17,8.1) -- cycle (17,8); \fill[fill=gray!70] (19,8) -- (19,10) -- (18.9,10) -- (18.9,8) -- cycle (19,8); \draw(17,8) rectangle (19,10); \end{tikzpicture} \caption{Block partitions and good points (see \eqref{eqdef-good}). The gray areas are various $\delta$-neighborhoods of the boundary, at various scales, together with rounding effects (the strips on the top and right sides of the $N$-box and the $\lfloor N/J\rfloor$-boxes). The solid black boxes depict regions with $\mathsf{R}^{(2)}_{w'}>\lfloor N/J\rfloor$ (larger box) or vertices with $\mathsf{R}^{(1)}_v>R$ (smaller black box). The points that remain after removing all the gray and black points are the good points.} \label{fig:outer} \end{figure} \medskip Our definition of $\xi^{N,J,J',R,\delta}$ will be as the sum of fields on macro-, meso- and microscopic scale, and some independent Gaussians that serve as correction terms. We introduce all these objects in the following. We begin with the macroscopic scale, which will be an upscaling of $\varphi^{J}$. That is, given $\varphi^{J}$ distributed according to $\mathbb{P}^{J}$, we let \[\xi^{N,J,\mathrm{mac}}_v=\varphi^{J}_{w'^*}\] for all $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$ and all $v\in V_{\left\lfloor{N}/{J}\right\rfloor}(w')$, where $w'^*\in V_J$ is the preimage of $w'$ under $\Phi^{0,0}_{J,N}$ (the upscaling map $\Phi$ was defined in \eqref{e:upscaling}). For the microscopic scale we patch together the fields $\varphi^{J',w''}$.
That is, given a random field $\varphi^{J',w''}$ distributed according to $\mathbb{P}^{J',w''}$ for $w''\in\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}}\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}$ (and where we take the fields on different microscopic boxes to be independent), we define \[\xi^{N,J,J',\mathrm{mic}}_v=\varphi^{J',w''}_v\] for all $w''\in\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}}\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}$ and all $v\in V_{J'}(w'')$. Next, for the mesoscopic scale we patch together independent copies of an upscaled MBRW. This is easy to do if it happens that ${N^*}/{JJ'}$ is a power of 2. For the general case we need to make some small adjustments\footnote{This detail is overlooked in \cite{DRZ17}.}. For that purpose let $i=\left\lceil \log_2\left({N^*}/{JJ'}\right)\right\rceil$, and $I=2^i$, so that $I$ is the smallest power of 2 not smaller than ${N^*}/{JJ'}$. Now let $\tilde\theta^{I}$ be distributed as a MBRW on $V_I$. Then $\Var\tilde\theta^{I}_v=\log I$ for all $v\in V_I$. We want the variance to be exactly $\log({N^*}/{JJ'})$, so define the correction factor \[s_{{N}/{J},J'}=\frac{\log\frac{N^*}{JJ'}}{\log I}\] and note that \[1\ge s_{{N}/{J},J'}\ge1-\frac{\log 2}{\log\frac{N^*}{JJ'}}.\] Then $s_{{N}/{J},J'}\tilde\theta^{I}$, restricted to $V_{{N^*}/{JJ'}}$, looks like a MBRW on that domain in the sense that we have the estimate \begin{equation}\label{e:log_corr_MBRW_corrfact} \Big|s_{N/J,J'}\mathbb{E}\tilde\theta^{I}_v\tilde\theta^{I}_u-\log \frac{N^*}{JJ'}+\log_+|u-v|_{\sim,I}\Big|\le C_\delta \end{equation} for $u,v\in V^\delta_{{N^*}/{JJ'}}$ (as follows easily from \eqref{e:log_corr_MBRW}).
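For completeness, the stated bounds on the correction factor follow directly from the choice of $I$: since $I$ is the smallest power of $2$ with $I\ge\frac{N^*}{JJ'}$, we have $\frac{N^*}{JJ'}\le I<\frac{2N^*}{JJ'}$, and hence \[1\ge s_{{N}/{J},J'}=\frac{\log\frac{N^*}{JJ'}}{\log I}\ge\frac{\log\frac{N^*}{JJ'}}{\log\frac{N^*}{JJ'}+\log 2}\ge1-\frac{\log 2}{\log\frac{N^*}{JJ'}},\] where the last inequality uses $\frac{1}{1+x}\ge1-x$ for $x\ge0$.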
We now take independent copies $(\tilde\theta^{I,(w')})$ of $\tilde\theta^{I}$, indexed by $w'\in \mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, and define \[\xi^{N,J,J',\mathrm{mes}}_{v}=s_{{N}/{J},J'}\tilde\theta^{I,(w')}_{(w''-w')/J'}\] for all $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, $w''\in\mathcal{W}_{\left\lfloor\frac{N}{J}\right\rfloor,J'}(w')$, and $v\in V_{J'}(w'')$. To motivate the definition of the correction terms, note that Assumption \ref{a:micro} implies that for $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, $w''\in\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}(w')$, and $v\in V_{J'}(w'')$ we have, where $\approx$ means equality up to an error that tends to $0$ as $N\to\infty$ followed by $J,J'\to\infty$, \[ \Var\varphi^{N}_v\approx\log N+g(v,v)+f\left(\frac{v}{N}\right)\] and \[ \Var\left(\xi^{N,J,\mathrm{mac}}_v+\xi^{N,J,J',\mathrm{mes}}_v+\xi^{N,J,J',\mathrm{mic}}_v\right)\approx\log N+f\left(\frac{w'^*}{J}\right)+g(v,v)+g(w'^*,w'^*)+f\left(\frac{v-w''}{J'}\right)\] (where $w'^*$ is the preimage of $w'$ under $\Phi^{0,0}_{J,N}$). By the continuity of $f$ we know that $f\left({v}/{N}\right)\approx f\left({w'^*}/{J}\right)$, and so the difference of the two variances is approximately $f\left(\frac{v-w''}{J'}\right)+g(w'^*,w'^*)$. These are the terms that we still need to account for. In order to account for $f\left(\frac{v-w''}{J'}\right)$ note that by Assumption \ref{a:micro} the function $f$ is bounded from above on $(0,1)^d$. Let \begin{equation} \label{eq-Gamma} \Gamma:=\sup_{x\in(0,1)^d}f(x)<\infty\end{equation} and define $b^{J',\mathrm{mic}}\colon V_{J'}(0)\to[0,\infty)$ by setting \[(b^{J',\mathrm{mic}}_{v})^2=\Gamma-f\left(\frac{v}{J'}\right).\] In order to account for $g(w'^*,w'^*)$, we first observe that we have an upper bound for $g$.
Indeed, let \[\Gamma_\delta:=\sup_{x\in(\delta,1-\delta)^d}f(x)\le\Gamma.\] If Assumptions \ref{a:logbd} and \ref{a:micro} hold, we must have $g(v,v)+f(x)\le\alpha_\delta(\mathsf{R}^{(1)}_v)$ for any $v\in\mathbb{Z}^d$ and any $x\in(\delta,1-\delta)^d$, as otherwise Assumption \ref{a:micro} would contradict Assumption \ref{a:logbd} on large cubes $V_N(w)$ with $\frac{v-w}{N}\approx x$. Taking the supremum over $x$, this means that $g(v,v)\le\alpha_\delta(\mathsf{R}^{(1)}_v)-\Gamma_\delta$. Now let \begin{equation} \label{eq-Gammaprime}\Gamma'_{R,\delta}:=\alpha_\delta(R)-\Gamma_\delta\end{equation} and define $\hat b^{J,R,\delta,\mathrm{mac}}\colon V_J\to[0,\infty)$ by setting \begin{equation}\label{e:def_hatb} (\hat b^{J,R,\delta,\mathrm{mac}}_{\hat w})^2=\begin{cases}\Gamma'_{R,\delta}-g(\hat w,\hat w)&\mathsf{R}^{(1)}_{\hat w}\le R\\0&\text{else}\end{cases} \end{equation} and then $b^{J,R,\delta,\mathrm{mac}}\colon \mathcal{W}_{N,\left\lfloor\frac{N}{J}\right\rfloor}\to[0,\infty)$ as \[b^{J,R,\delta,\mathrm{mac}}_{w'}=\hat b^{J,R,\delta,\mathrm{mac}}_{w'^*}\] where, once again, $w'^*\in V_J$ is the preimage of $w'$ under $\Phi^{0,0}_{J,N}$. \medskip Now we can finally define our approximating field $\xi^{N,J,J',R,\delta}$. We take two collections of independent standard Gaussians. The first, $X^{\mathrm{mic}}_{w''}$, is indexed by $w''\in\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}}\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}(w')$, and the second, $X^{\mathrm{mac}}_{w'}$, is indexed by $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$.
We then define for $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$, $w''\in\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}(w')$, and $v\in V_{J'}(w'')$: \begin{align} \xi^{N,J,J',R,\delta,\mathrm{coarse}}_v&=\xi^{N,J,\mathrm{mac}}_v+b^{J,R,\delta,\mathrm{mac}}_{w'}X^{\mathrm{mac}}_{w'},\nonumber\\ \xi^{N,J,J',\mathrm{fine}}_v&=\xi^{N,J,J',\mathrm{mes}}_v+\xi^{N,J,J',\mathrm{mic}}_v+b^{J',\mathrm{mic}}_{v-w''}X^{\mathrm{mic}}_{w''},\label{eq-approxfield}\\ \xi^{N,J,J',R,\delta}_v&=\xi^{N,J,J',R,\delta,\mathrm{coarse}}_v+\xi^{N,J,J',\mathrm{fine}}_v.\nonumber \end{align} \medskip We will shortly show that $\xi^{N,J,J',R,\delta}$ is indeed a good approximation to $\varphi^{N}$. Before doing so, we introduce some more notation. We let $J=KL$ and $J'=K'L'$ for integers $K,L,K',L'$ that we will later send to infinity in the order $K',L',K,L$. As in \cite{DRZ17}, we abbreviate \[\limsup_{(L,K,L',K')\Rightarrow\infty}:=\limsup_{L\to\infty}\limsup_{K\to\infty}\limsup_{L'\to\infty}\limsup_{K'\to\infty} .\] Recall that by definition $N^*$ is a multiple of $KLK'L'$. Let also $R,T$ be integers. 
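As a consistency check, note that the correction terms are designed precisely so that the variances match up to a constant: for $w'\in\mathcal{W}_{N,\left\lfloor{N}/{J}\right\rfloor}$ with $\mathsf{R}^{(1)}_{w'^*}\le R$, $w''\in\mathcal{W}_{\left\lfloor{N}/{J}\right\rfloor,J'}(w')$ and $v\in V_{J'}(w'')$, the heuristic computation above yields \begin{align*} \Var\xi^{N,J,J',R,\delta}_v&\approx\log N+f\Big(\frac{w'^*}{J}\Big)+g(v,v)+g(w'^*,w'^*)+f\Big(\frac{v-w''}{J'}\Big)\\ &\qquad+\Big(\Gamma-f\Big(\frac{v-w''}{J'}\Big)\Big)+\Big(\Gamma'_{R,\delta}-g(w'^*,w'^*)\Big)\\ &\approx\Var\varphi^{N}_v+\Gamma+\Gamma'_{R,\delta}, \end{align*} which explains the deterministic shift by $\sqrt{{d}/{2}}\,(\Gamma+\Gamma'_{R,\delta})$ in Lemma \ref{l:comparison_maxima} below.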
We define a set of good points (similarly to \eqref{e:approx_fields3}) by setting \begin{equation} \label{eqdef-good} \begin{split} &V_N^{K,L,K',L',R,\delta}\\ &=V_N^\delta\\ &\quad\cap\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{L}\right\rfloor}}V_{\left\lfloor\frac{N}{L}\right\rfloor}^\delta(w')\cap\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}V_{\left\lfloor\frac{N}{KL}\right\rfloor}^\delta(w')\\ &\quad\cap\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}\ \bigcup_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,L'}(w')}V_{L'}^\delta(w'')\cap \bigcup_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}\ \bigcup_{w''\in\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,K'L'}(w')}V_{K'L'}^\delta(w'')\\ &\quad\cap\bigcup_{\substack{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}\\\mathsf{R}^{(2)}_{w'}\le \left\lfloor\frac{N}{KL}\right\rfloor}}V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w')\cap\bigcup_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}\ \bigcup_{\substack{w''\in\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,K'L'}(w')\\\mathsf{R}^{(2)}_{w''}\le K'L'}}V_{K'L'}(w'')\\ &\quad\cap\bigcup_{\substack{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}\\\mathsf{R}^{(1)}_{w'^*}\le R}}V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w')\\ &\quad\cap\left\{v\in V_N:\mathsf{R}^{(1)}_v\le R\right\} \end{split} \end{equation} and \[V_N^{K,L,K',L',R,T,\delta}=V_N^{K,L,K',L',R,\delta}\cap\left\{v\in V_N\colon\mathsf{T}_v\le T\right\}.\] These definitions look admittedly quite complicated, so let us give some explanations: \begin{itemize} \item In the definition of $V_N^{K,L,K',L',R,\delta}$ we begin with $V_N^\delta$. In the second and third lines we exclude those points which are too close to the boundary of the relevant boxes on any of the four scales. The reason is that in order to apply Assumptions \ref{a:logbd}, \ref{a:micro} and \ref{a:macro}, we need to stay away from the boundary of the corresponding box.
\item In the fourth line we exclude those points which lie in a macroscopic or a microscopic box for which the random scale $\mathsf{R}^{(2)}$ exceeds the sidelength of the corresponding box. On the remaining macroscopic and microscopic boxes $\mathsf{R}^{(2)}$ is bounded by the sidelength, so we can now use Assumptions \ref{a:micro} and \ref{a:macro}. \item In the fifth line we exclude those points that lie in a macroscopic box for which $\mathsf{R}^{(1)}$ at the corresponding point in the reference configuration $V_J$ is too large. The reason is that on these points we do not control $b^{J,R,\delta,\mathrm{mac}}$. \item Finally, in the sixth line we exclude those points where $\mathsf{R}^{(1)}$ is too large. For $V_N^{K,L,K',L',R,T,\delta}$ we also exclude those points where $\mathsf{T}_v$ is too large. This is because the assumptions in part A of \ref{as:main} are only useful together with upper bounds on $\mathsf{R}^{(1)}$ and $\mathsf{T}$. \end{itemize} We refer to the points that are left after all these steps as good or typical points. We expect that most points are good, and we will quantify this now. For later use we define for $w'\in\mathbb{Z}^d$ \begin{align*} V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,\delta}(w')&:=V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w')\cap V_N^{K,L,K',L',R,\delta},\\ V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w')&:=V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w')\cap V_N^{K,L,K',L',R,T,\delta}.
\end{align*} Now, Assumptions \ref{a:sparseT} and \ref{a:sparseR} imply that most points are good in the sense that we have \begin{align} \liminf_{\substack{\delta\to0\\ R\to\infty}}\inf_{K,L\in\mathbb{N}}\liminf_{L'\to\infty}\liminf_{K'\to\infty}\liminf_{N\to\infty}\min_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}\frac{|V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,\delta}(w')|}{(N/KL)^d}=1,\label{e:few_bad_pointsR}\\ \liminf_{T\to\infty}\liminf_{\substack{\delta\to0\\ R\to\infty}}\inf_{K,L\in\mathbb{N}}\liminf_{L'\to\infty}\liminf_{K'\to\infty}\liminf_{N\to\infty}\min_{w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}}\frac{|V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w')|}{(N/KL)^d}=1.\label{e:few_bad_pointsT} \end{align} \subsection{Convergence in distribution of the maximum} \label{sec-3.3} We can now begin comparing the maxima of $\varphi^{N}$ and $\xi^{N,J,J',R,\delta}$. To do so, we first show that their covariances on the good set $V_N^{K,L,K',L',R,T,\delta}$ are close. Recall the constants $\Gamma$ and $\Gamma'_{R,\delta}$, see \eqref{eq-Gamma} and \eqref{eq-Gammaprime}. The precise statement is as follows. \begin{lemma}\label{l:approx_variances} Under Assumptions \ref{a:logbd}, \ref{a:micro} and \ref{a:macro} let $K,L,K',L',R,T$ be integers, $\delta>0$, and set $J=KL$, $J'=K'L'$. Then there are a constant $\Gamma''_{R,\delta}$ and constants $\varepsilon'_{N,K,L,K',L',R,T,\delta}$ with \[\limsup_{(L,K,L',K')\Rightarrow\infty}\limsup_{N\to\infty}\varepsilon'_{N,K,L,K',L',R,T,\delta}=0\] such that the following property holds. Whenever $KL$ and $K'L'$ are large enough (depending on $R$), the following estimates hold for all $N$ large enough and $u,v\in V_N^{K,L,K',L',R,T,\delta}$.
\begin{itemize} \item[(i)] If $u,v\in V^\delta_{L'}(\hat w'')$ for some $\hat w''\in\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,L'}(w')$ and some $w'\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}$, then \[\left|\mathbb{E}\varphi^{N}_v\varphi^{N}_u-\mathbb{E}\xi^{N,KL,K'L',R,\delta}_v\xi^{N,KL,K'L',R,\delta}_u-\Gamma- \Gamma'_{R,\delta}\right|\le\varepsilon'_{N,K,L,K',L',R,T,\delta}.\] \item[(ii)] If $u\in V^\delta_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_u)$, $v\in V^\delta_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_v)$ for some $w'_u,w'_v\in\mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}$ with $w'_u\neq w'_v$, then \[\left|\mathbb{E}\varphi^{N}_v\varphi^{N}_u-\mathbb{E}\xi^{N,KL,K'L',R,\delta}_v\xi^{N,KL,K'L',R,\delta}_u\right|\le\varepsilon'_{N,K,L,K',L',R,T,\delta}.\] \item[(iii)] In any case, \[\left|\mathbb{E}\varphi^{N}_v\varphi^{N}_u-\mathbb{E}\xi^{N,KL,K'L',R,\delta}_v\xi^{N,KL,K'L',R,\delta}_u\right|\le\Gamma''_{R,\delta}.\] \end{itemize} \end{lemma} Note that in Lemma \ref{l:approx_variances} we do not assume Assumption \ref{a:logupp}, i.e.\ we do not use that on the set $V_N^{K,L,K',L',R,T,\delta}$ we have a bound on $\mathsf{T}$. We will use this fact only later, in the proof of Lemma \ref{l:comparison_maxima}. On the other hand, it is crucial for the proof that on $V_N^{K,L,K',L',R,T,\delta}$ we have upper bounds on $\mathsf{R}^{(1)}$ and $\mathsf{R}^{(2)}$. \begin{proof} This is shown just as in \cite[Lemma 4.1]{DRZ17}. The idea for part (i) is that if $u,v\in V^\delta_{L'}(\hat w'')$ then they must lie in the same microscopic subbox $V^\delta_{K'L'}(w'')$. In particular the contributions from $\xi^{N,J,\mathrm{mac}}$, $b^{J,R,\delta,\mathrm{mac}}X^{\mathrm{mac}}$ and $\xi^{N,J,J',\mathrm{mes}}$ are the same for $u$ and $v$, and the nontrivial part is to estimate the covariance of $\xi^{N,J,J',\mathrm{mic}}$.
We must have $\mathsf{R}^{(1)}_v\le R$, $\mathsf{R}^{(2)}_{\hat w''}\le K'L'$, as otherwise $u,v$ would not be in the set of good points $V_N^{K,L,K',L',R,T,\delta}$. So, as soon as $K'L'\ge R$, we can apply Assumption \ref{a:micro} to the cube $V^\delta_{K'L'}(\hat w'')$, and easily obtain the claimed estimate. The argument for part (ii) is similar. This time we have to estimate the covariance of $\xi^{N,J,\mathrm{mac}}$. This field was defined using $\varphi^{KL}$. For $KL$ large enough we have $\mathsf{R}^{(2)}_0\le KL$. Therefore we can apply Assumption \ref{a:macro} to $V_{KL}$ once $KL$ is large enough, and again obtain the conclusion. Finally, for part (iii) we use that on $V_N^{K,L,K',L',R,T,\delta}$ we have an upper bound on $\mathsf{R}^{(1)}$, so that we can apply Assumption \ref{a:logbd} with a uniform upper bound for $\mathsf{R}^{(1)}$. It turns out that we can choose $\Gamma''_{R,\delta}=4\alpha_\delta(R)+\Gamma+\Gamma'_{R,\delta}$. \end{proof} Using the results of the previous subsection, Lemma \ref{l:approx_variances} implies that the maxima of $\varphi^{N}$ and of the approximating field are close as well. \begin{lemma}\label{l:comparison_maxima} Under Assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:micro}, \ref{a:macro}, \ref{a:sparseT} and \ref{a:sparseR} we have \begin{align*} &\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\limsup_{(L,K,L',K')\Rightarrow\infty}\limsup_{N\to\infty}\\ &\qquad \mathsf{d}\Big(\max_{v\in V_N}\varphi^{N}_v,\max_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',R,\delta}_v-\sqrt{\frac{d}{2}}(\Gamma+\Gamma'_{R,\delta})\Big)=0. \end{align*} \end{lemma} \begin{proof} By Assumptions \ref{a:sparseT} and \ref{a:sparseR}, \eqref{e:few_bad_pointsR} and Lemma \ref{l:upper_right_tail_badset} we have an upper bound on the right tail of the maximum of $\varphi^{N}$ on the set $V_N\setminus V_N^{K,L,K',L',R,T,\delta}$.
On the other hand, by Lemma \ref{l:upper_left_tail} and Assumption \ref{a:sparseR} we have a lower bound on the maximum on $V_N^{K,L,K',L',R,T,\delta}$. Together, these two facts imply that in the limit $R\to\infty$, $\delta\to0$, $T\to\infty$ the probability that the maxima of $\varphi^{N}$ on $V_N$ and on $V_N^{K,L,K',L',R,T,\delta}$ differ vanishes. Thus we may assume that the maximum of $\varphi^{N}$ is attained at a point of $V_N^{K,L,K',L',R,T,\delta}$, and we can restrict our attention to this set. On it, we can argue just as in \cite[Lemma 4.2]{DRZ17}, using Lemma \ref{l:approx_fields}, to show that the two maxima are close in distribution. \end{proof} By Lemma \ref{l:comparison_maxima} we now only have to investigate the maximum of the random variable \[\max_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',R,\delta}_v-m_N-\sqrt{\frac{d}{2}}(\Gamma+\Gamma'_{R,\delta}).\] For that purpose, the main ingredient is a precise asymptotic estimate for the right tail of the fine field $\xi^{N,KL,K'L',\mathrm{fine}}$, where we recall that \[\xi^{N,J,J',\mathrm{fine}}_v=\xi^{N,J,J',\mathrm{mes}}_v+\xi^{N,J,J',\mathrm{mic}}_v+b^{J',\mathrm{mic}}_{v-w''}X^{\mathrm{mic}}_{w''}.\] Note that this field does not depend on $R,T,\delta$. \begin{lemma}\label{l:right_tail} Under Assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:micro}, \ref{a:macro}, \ref{a:sparseT}, \ref{a:sparseR} and \ref{a:lln} there are constants $\Gamma^\pm$ such that for any $R,T$ sufficiently large and $\delta$ sufficiently small there is a parameter $\varepsilon''_{R,T,\delta}$ such that \[\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\varepsilon''_{R,T,\delta}=0\] and such that the following holds.
For any $K',L'$ sufficiently large there are constants $\beta_{K'L'}$ that depend on $K'L'$ only and satisfy $\Gamma^-\le\beta_{K'L'}\le\Gamma^+$, such that for any $K,L,N$ sufficiently large and every $a\in V_{KL}$, setting $w_{a,KL,N}'=a\left\lfloor \frac{N}{KL}\right\rfloor$, we have \begin{equation}\label{e:right_tail} \begin{split} &\limsup_{z\to\infty}\limsup_{K'L'\to\infty}\limsup_{N\to\infty}\\ &\quad\Bigg|\frac{{\mathrm{e}}^{\sqrt{2d}z}}{z}\mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w_{a,KL,N}')}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)-\beta_{K'L'}\Bigg|\le\varepsilon ''_{R,T,\delta}. \end{split} \end{equation} \end{lemma} \begin{proof} This is the most technical proof of the whole paper. It is similar to the proof of \cite[Proposition 4.3]{DRZ17}, although extra care is needed because of the additional parameters $R$ and $T$, and because the distribution of $\xi^{N,J,J',\mathrm{mic}}$ on different microscopic boxes is not the same; Assumption \ref{a:lln} only ensures some averaging of these distributions. The proof of \cite[Proposition 4.3]{DRZ17} is outlined in the appendix there, while making extensive reference to \cite{BDZ16} for various intermediate steps. We proceed similarly, giving details where new ideas are required, and otherwise referring the reader to \cite{BDZ16,DRZ17} for various calculations. \emph{Step 1: Preliminaries}\\ We begin with the observation that by Assumption \ref{a:logbd} and Lemma \ref{l:approx_variances} (i) we have for $K,L,K',L'$ large enough that \begin{equation}\label{e:right_tail1} \left|\mathbb{E}\xi^{N,KL,K'L',\mathrm{fine}}_v\xi^{N,KL,K'L',\mathrm{fine}}_u-\log N+\log_+|u-v|\right|\le \alpha_{\delta}(R)+\Gamma+1 \end{equation} for all $u,v\in V_N^{K,L,K',L',R,T,\delta}$.
Thus the field $\xi^{N,KL,K'L',\mathrm{fine}}$ satisfies a bound like in Assumption \ref{a:logbd}, just with $\alpha_{\delta}(\cdot)+\Gamma+1$ instead of $\alpha_{\delta}(\cdot)$, and so Lemma \ref{l:upper_right_tail} implies an upper bound on the right tail of the form \begin{equation}\label{e:right_tail2} \mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)\le C_{\alpha_\delta(R)}z{\mathrm{e}}^{-\sqrt{2d}z} \end{equation} for all $z\ge 1$. Similarly, \eqref{e:right_tail1} together with Lemma \ref{l:lower_right_tail} and Assumptions \ref{a:sparseT} and \ref{a:sparseR} imply that for $R,T$ sufficiently large and $\delta$ sufficiently small we have a lower bound on the right tail of the form \begin{equation}\label{e:right_tail3} \mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)\ge c_{\alpha_\delta(R)}z{\mathrm{e}}^{-\sqrt{2d}z} \end{equation} for all $\sqrt{\log N}-C_{\alpha_\delta(R)}\ge z\ge 1$. Now \eqref{e:right_tail2} and \eqref{e:right_tail3} applied for some fixed $R,\delta$ clearly imply that if the constant $\beta_{K'L'}$ in \eqref{e:right_tail} exists, then necessarily $\Gamma^-\le\beta_{K'L'}\le\Gamma^+$. So it remains to show the existence of $\beta_{K'L'}$. \emph{Step 2: Definition of barrier events}\\ By definition, the field $\xi^{N,KL,K'L',\mathrm{fine}}$ on the macroscopic box $V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_{a,KL,N})$ is defined only on the microscopic boxes $V_{K'L'}(w'')$ for $w''\in\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,K'L'}(w'_{a,KL,N})$, and there are exactly $\left(\frac{N^*}{KLK'L'}\right)^d$ of these microscopic boxes in the macroscopic box $V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_{a,KL,N})$.
As in \cite{DRZ17}, we switch notation to make the connection to (M)BRWs more obvious. We let $\bar N:=\frac{N^*}{KL}$, $J':=K'L'$, and let $\bar n:=\log_2\bar N$, $j':=\log_2 J'$. Furthermore, we let $\Xi_{\bar N,J'}:=\mathcal{W}_{\left\lfloor\frac{N}{KL}\right\rfloor,K'L'}(w'_{a,KL,N})$. For $w''\in\Xi_{\bar N,J'}$ we write \begin{align*} X^{N,KL,K'L'}_{w''}&:=\xi^{N,KL,K'L',\mathrm{mes}}_{w''}\\ Y^{N,K,L,K',L',R,T,\delta}_{w''}&:=\max_{v\in V_{J'}(w'')\cap V^{K,L,K',L',R,T,\delta}_N}\left(\xi^{N,KL,K'L',\mathrm{mic}}_{v}+b^{K'L',\mathrm{mic}}_vX_{w''}\right) \end{align*} where we use the convention that the maximum of the empty set is $-\infty$. With these definitions we have \begin{equation}\label{e:right_tail4} \max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v=\max_{w''\in \Xi_{\bar N,J'}}\left(X^{N,KL,K'L'}_{w''}+Y^{N,K,L,K',L',R,T,\delta}_{w''}\right), \end{equation} and so it suffices to study the behaviour of the right-hand side. Recall that $\xi^{N,KL,K'L',\mathrm{mes}}_{w''}$ was defined as the value of a certain MBRW together with a small correction factor, that is \[X^{N,KL,K'L'}_{w''}=s_{\bar N,J'}\tilde\theta^{2^{\lceil \bar n-j'\rceil}}_{w''}\] where \[s_{\bar N,J'}=\frac{\log 2^{\bar n-j'}}{\log 2^{\lceil\bar n-j'\rceil}}=\frac{\bar n-j'}{\lceil\bar n-j'\rceil}.\] For each $w''$, $\tilde\theta^{2^{\lceil \bar n-j'\rceil}}_{w''}$ is distributed as a centered Gaussian with variance $\lceil\bar n-j'\rceil\log 2$. As in the proof of Lemma \ref{l:upper_right_tail_sparse}, we can interpret $X^{N,KL,K'L'}_{w''}$ as the value at time $\bar n-j'$ of a $2d$-ary branching Brownian motion that branches at times $0,s_{\bar N,J'},2s_{\bar N,J'},\ldots,\bar n-j' -s_{\bar N,J'}$.
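As a quick sanity check, let us note the following computation (ours; it uses only the displayed definition of $s_{\bar N,J'}$ together with the variance of $\tilde\theta^{2^{\lceil \bar n-j'\rceil}}_{w''}$): writing $t:=\bar n-j'$ and $\delta_0:=\lceil t\rceil-t\in[0,1)$, we have \[\operatorname{Var}\left(s_{\bar N,J'}\tilde\theta^{2^{\lceil t\rceil}}_{w''}\right)=\frac{t^2}{\lceil t\rceil^2}\cdot\lceil t\rceil\log 2=\frac{t^2}{t+\delta_0}\log 2=\left(t-\delta_0+\frac{\delta_0^2}{t+\delta_0}\right)\log 2=t\log 2+O(1),\] so up to an additive $O(1)$ coming from the rounding in $\lceil\bar n-j'\rceil$, the correction factor indeed produces the variance of a Brownian particle run for time $\bar n-j'$ at speed $\log 2$.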
For that branching Brownian motion, we can define the barrier events \begin{align*} \mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)&=\Bigg\{X^{N,KL,K'L'}_{w''}(t)\le z+\frac{t m_{\bar N}}{\bar n}\ \forall0\le t\le \bar n-j',\\ &\qquad \qquad\qquad\qquad X^{N,KL,K'L'}_{w''}(\bar n-j')+Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge m_{\bar N}+z\Bigg\},\\ \mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z)&=\Bigg\{X^{N,KL,K'L'}_{w''}(\bar n-j')+Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge m_{\bar N}+z,\\ &\!\!\!\!\!\! \!\!\!\!\!\!\!\! \!\!X^{N,KL,K'L'}_{w''}(t)\le z+\frac{t m_{\bar N}}{\bar n}+10\log_+(t\vee(\bar n-j'-t))+z^{1/20}\ \forall0\le t\le \bar n-j'\Bigg\},\\ \mathcal{E}^{N,K,L,K',L',R,T,\delta}_G(z)&=\\ &\!\!\!\!\!\! \!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\! \!\! \bigcup_{w''\in\Xi_{\bar N,J'}}\bigcup_{0\le t\le \bar n-j'}\left\{X^{N,KL,K'L'}_{w''}(t)\ge z+\frac{t m_{\bar N}}{\bar n}+10\log_+(t\vee(\bar n-j'-t))+z^{1/20}\right\}, \end{align*} and the random variables \begin{align*} \Lambda^{N,K,L,K',L',R,T,\delta}_{E}(z)&=\sum_{w''\in\Xi_{\bar N,J'}}\mathbbm{1}_{\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)},\\ \Lambda^{N,K,L,K',L',R,T,\delta}_{F}(z)&=\sum_{w''\in\Xi_{\bar N,J'}}\mathbbm{1}_{\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z)}. \end{align*} \emph{Step 3: First and second moment argument}\\ Our goal in this step is to show that the probability that the maximum of $\xi^{N,KL,K'L',\mathrm{fine}}$ is at least $m_{\bar N}+z$ is asymptotically the same as the expectation of $\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$. To do so, we need to compare the various barrier events, and show that $\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$ is concentrated around its expectation. For the latter we will use a second moment argument.
We begin by observing that by the same argument as in \cite{BDZ16} and \cite{DRZ17} the event $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_G(z)$ is negligible in the sense that \begin{equation}\label{e:right_tail6} \mathbb{P}^{N}\left(\mathcal{E}^{N,K,L,K',L',R,T,\delta}_G(z)\right)\le C{\mathrm{e}}^{-\sqrt{2d}z} \end{equation} for some absolute constant $C$. Next we study $\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$ and $\Lambda^{N,K,L,K',L',R,T,\delta}_F(z)$. We have the trivial estimate $\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)\le\Lambda^{N,K,L,K',L',R,T,\delta}_F(z)$, and we claim that this is asymptotically sharp in the sense that\footnote{Note that the corresponding result in \cite{DRZ17}, Equation (59), contains several typos: The $\limsup$s there should be $\liminf$s, and the expectation in the numerator and denominator is missing.} \begin{equation}\label{e:right_tail7} \lim_{z\to\infty}\liminf_{J'\to\infty}\liminf_{\bar N\to\infty}\frac{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_F(z)}=1. \end{equation} For this, the proof from \cite{DRZ17} does not carry over directly, as the $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)$ for different $w''$ do not have the same probability (and the same holds for the $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z)$). But this is only a minor problem: for each fixed $w''$ it could be that $V_{J'}(w'')\cap V^{K,L,K',L',R,T,\delta}_N=\varnothing$, and in that case $Y^{N,K,L,K',L',R,T,\delta}_{w''}=-\infty$, so that both $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)$ and $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z)$ have probability 0. In the generic case that $V_{J'}(w'')\cap V^{K,L,K',L',R,T,\delta}_N\neq\varnothing$, both events $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)$ and $\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z)$ have positive probability.
We can compare them using the same argument as in \cite[Lemma 4.10]{BDZ16}, which uses as its only information on $Y^{N,K,L,K',L',R,T,\delta}_{w''}$ the Gaussian bound on the right tail that is implied by Lemma \ref{l:upper_right_tail}. In this way we find \begin{align*} &\limsup_{z\to\infty}\limsup_{J'\to\infty}\limsup_{\bar N\to\infty}\max_{\substack{w''\in\Xi_{\bar N,J'}\\ V_{J'}(w'')\cap V^{K,L,K',L',R,T,\delta}_N\neq\varnothing}}\\ &\qquad\qquad\qquad\frac{\mathbb{P}^{N}(\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{F,w''}(z))-\mathbb{P}^{N}(\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z))}{\mathbb{P}^{N}(\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z))}=0.\end{align*} Summing this over all $w''$, we indeed obtain \eqref{e:right_tail7}. Next, we need an exponential lower bound on $\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$. Such a lower bound follows from \eqref{e:right_tail3} together with \eqref{e:right_tail6} and \eqref{e:right_tail7}, and we find that for $R,T$ sufficiently large and $\delta$ sufficiently small \begin{equation}\label{e:right_tail9} \mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)\ge c'_{\alpha_\delta(R)}z{\mathrm{e}}^{-\sqrt{2d}z} \end{equation} for all $\sqrt{\log N}-C_{\alpha_\delta(R)}\ge z\ge 1$. The bounds derived so far can now be combined to give upper bounds on the tail probability in question. Indeed from \eqref{e:right_tail6}, \eqref{e:right_tail7}, \eqref{e:right_tail9} and Markov's inequality we obtain for $R,T$ sufficiently large and $\delta$ sufficiently small \begin{equation}\label{e:right_tail10} \limsup_{z\to\infty}\limsup_{J'\to\infty}\limsup_{\bar N\to\infty}\frac{\mathbb{P}^{N}\left(\max_{w''\in \Xi_{\bar N,J'}}\left(X^{N,KL,K'L'}_{w''}+Y^{N,K,L,K',L',R,T,\delta}_{w''}\right)\ge m_{\bar N}+z\right)}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}\le1. \end{equation} For the corresponding lower bound, we need to bound the second moment of $\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$.
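Before doing so, let us record why controlling the ratio of the second to the first moment is exactly what is needed here (a standard observation, not specific to our setting): for any random variable $\Lambda$ with values in the nonnegative integers, Markov's inequality and the Paley-Zygmund inequality give \[\frac{(\mathbb{E}\Lambda)^2}{\mathbb{E}\left[\Lambda^2\right]}\le\mathbb{P}(\Lambda\ge1)\le\mathbb{E}\Lambda,\qquad\text{that is}\qquad\frac{\mathbb{E}\Lambda}{\mathbb{E}\left[\Lambda^2\right]/\mathbb{E}\Lambda}\le\mathbb{P}(\Lambda\ge1)\le\mathbb{E}\Lambda,\] so that as soon as $\mathbb{E}\left[\Lambda^2\right]/\mathbb{E}\Lambda$ tends to $1$, the probability $\mathbb{P}(\Lambda\ge1)$ is asymptotically equal to $\mathbb{E}\Lambda$. We now turn to the second moment bound itself.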
Bounding the second moment is a very lengthy calculation, but fortunately it uses as its only information on $Y^{N,K,L,K',L',R,T,\delta}_{w''}$ the Gaussian bound on the right tail. This means that the calculation in \cite[Lemma 4.11]{BDZ16} carries over directly to our setting, and we find \begin{equation}\label{e:right_tail8} \lim_{z\to\infty}\limsup_{J'\to\infty}\limsup_{\bar N\to\infty}\frac{\mathbb{E}\left[\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)^2\right]}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}=1 \end{equation} where we note that the fraction is trivially bounded below by 1. Now \eqref{e:right_tail8} and the Paley-Zygmund inequality imply that \begin{equation}\label{e:right_tail11} \liminf_{z\to\infty}\liminf_{J'\to\infty}\liminf_{\bar N\to\infty}\frac{\mathbb{P}^{N}\left(\max_{w''\in \Xi_{\bar N,J'}}\left(X^{N,KL,K'L'}_{w''}+Y^{N,K,L,K',L',R,T,\delta}_{w''}\right)\ge m_{\bar N}+z\right)}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}\ge1. \end{equation} Now \eqref{e:right_tail10} and \eqref{e:right_tail11} are matching upper and lower bounds. Taking them together and recalling \eqref{e:right_tail4}, we obtain the desired asymptotics of the right tail, namely \begin{align}\label{e:right_tail12} &\limsup_{z\to\infty}\limsup_{J'\to\infty}\limsup_{\bar N\to\infty}\Bigg|\frac{\mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}-1\Bigg|\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad=0 \end{align} for any $R,T$ large and $\delta$ small. \emph{Step 4: Asymptotics for the first moment}\\ After the hard work of the previous step, it remains to derive the asymptotics, as $z\to\infty$, of $\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)$.
This step is somewhat different from the corresponding argument in \cite{DRZ17}, as the $Y^{N,K,L,K',L',R,T,\delta}_{w''}$ for different $w''$ are not identically distributed. Instead we will employ Assumption \ref{a:lln} to control the average. In order to do so, we first need to get rid of the restriction to points in $V^{K,L,K',L',R,T,\delta}_N$ in the definition of $Y^{N,K,L,K',L',R,T,\delta}_{w''}$. Denote by $\chi_{N,KL,K'L',w''}$ the density of \[\mathbb{P}^{N}\left(X^{N,KL,K'L'}_{w''}(t)\le z+\frac{t m_{\bar N}}{\bar n}\ \forall0\le t\le \bar n-j',\,X^{N,KL,K'L'}_{w''}(\bar n-j')-\frac{(\bar n-j') m_{\bar N}}{\bar n}\in\cdot\right)\] with respect to one-dimensional Lebesgue measure. Then we have \begin{equation}\label{e:right_tail13} \begin{split} &\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)=\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{N}\left(\mathcal{E}^{N,K,L,K',L',R,T,\delta}_{E,w''}(z)\right)\\ &=\sum_{w''\in\Xi_{\bar N,J'}}\int_{-\infty}^z\chi_{N,KL,K'L',w''}(y)\mathbb{P}^{N}\left( Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\right)\,\mathrm{d} y. \end{split} \end{equation} The main contribution to the integrals on the right-hand side should come from $y$ of the order of $-j'^{1/2}$. Indeed, for an interval $I\subset\mathbb{R}$ we define \[\lambda^{N,K,L,K',L',R,T,\delta}_{I,w''}(z):= \int_{I}\chi_{N,KL,K'L',w''}(y)\mathbb{P}^{N}\left( Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\right)\,\mathrm{d} y.\] Now consider any sequence $(x_{w'',\bar N})_{\bar N\in\mathbb{N},w''\in \Xi_{\bar N,J'}}$ with $|x_{w'',\bar N}|\le j'^{1/5}$, and let $I_{j'}=[-j',-j'^{2/5}]$. Then we claim as in \cite{DRZ17} that \begin{equation}\label{e:right_tail14} \liminf_{z\to\infty}\liminf_{J'\to\infty}\liminf_{\bar N\to\infty}\frac{\sum_{w''\in\Xi_{\bar N,J'}}\lambda^{N,K,L,K',L',R,T,\delta}_{x_{w'',\bar N}+I_{j'},w''}(z)}{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}=1.
\end{equation} Note that the fraction on the left-hand side is trivially bounded above by 1. The actual estimate \eqref{e:right_tail14} follows just as in \cite{DRZ17}: using the Gaussian upper bound on the right tail of the random variables $Y^{N,K,L,K',L',R,T,\delta}_{w''}$, one bounds the integrand on $(-\infty,0]\setminus (x_{w'',\bar N}+I_{j'})$ by terms that are negligible in comparison to the lower bound in \eqref{e:right_tail9}. Next, we want to argue that the numerator on the left-hand side of \eqref{e:right_tail14} depends only weakly on $R,T,\delta$. To that end, define \[\bar Y^{N,KL,K'L'}_{w''}:=\max_{v\in V_{J'}(w'')}\left(\xi^{N,KL,K'L',\mathrm{mic}}_{v}+b^{K'L',\mathrm{mic}}_vX_{w''}\right)\] and, for an interval $I\subset\mathbb{R}$, \[\bar\lambda^{N,KL,K'L'}_{I,w''}(z):= \int_{I}\chi_{N,KL,K'L',w''}(y)\mathbb{P}^{N}\left( \bar Y^{N,KL,K'L'}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\right)\,\mathrm{d} y.\] Note that $\bar\lambda^{N,KL,K'L'}_{I,w''}(z)$ is independent of $R,T,\delta$. Our claim then is that for $(x_{w'',\bar N})$ as in \eqref{e:right_tail14} we have \begin{equation}\label{e:right_tail15} \liminf_{z\to\infty}\liminf_{L'\to\infty}\liminf_{K'\to\infty}\liminf_{\bar N\to\infty}\frac{\sum_{w''\in\Xi_{\bar N,J'}}\lambda^{N,K,L,K',L',R,T,\delta}_{I_{j'},w''}(z)}{\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)}\ge 1-\varepsilon''_{R,T,\delta} \end{equation} where $\varepsilon''_{R,T,\delta}$ is a parameter that depends on $R,T,\delta$ only and satisfies \[\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\varepsilon''_{R,T,\delta}=0.\] Again we note that the fraction in \eqref{e:right_tail15} is trivially bounded above by 1.
To see \eqref{e:right_tail15} itself, it suffices to show that \begin{equation}\label{e:right_tail16}\liminf_{z\to\infty}\liminf_{J'\to\infty}\liminf_{\bar N\to\infty}\inf_{y\in I_{j'}}\frac{\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{N}\Big( Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\Big)}{\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{N}\Big( \bar Y^{N,KL,K'L'}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\Big)}\ge 1-\varepsilon''_{R,T,\delta} \end{equation} as then \eqref{e:right_tail15} follows by integrating over $y$. To that end, let \[z':=\frac{j' m_{\bar N}}{\bar n}+z-y-m_{J'}\] and note that by Lemma \ref{l:upper_right_tail_badset_micro}, \begin{equation}\label{e:right_tail17} \begin{split} &\sum_{w''\in\Xi_{\bar N,J'}}\bigg(\mathbb{P}^{N}\Big( \bar Y^{N,KL,K'L'}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\Big)- \mathbb{P}^{N}\Big( Y^{N,K,L,K',L',R,T,\delta}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\Big)\bigg)\\ &\le\sum_{w''\in\Xi_{\bar N,J'}}\quad\mathbb{P}^{N}\Big(\max_{v\in V_{J'}(w'')\setminus V^{K,L,K',L',R,T,\delta}_N}\big(\xi^{N,KL,K'L',\mathrm{mic}}_{v}+b^{K'L',\mathrm{mic}}_vX_{w''}\big)\ge m_{J'}+z'\Big)\\ &\le C\gamma^{K,L,K',L',R,\delta}_{w'_{a,KL,N}}\left(\frac{N}{JJ'}\right)^dz'{\mathrm{e}}^{-\sqrt{2d}z'} \end{split} \end{equation} where \[\begin{split} &\gamma^{K,L,K',L',R,\delta}_{w'_{a,KL,N}}:=\\ &\left({\mathrm{e}}^{d(T+\Gamma)}\frac{\Big|V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_{a,KL,N})\setminus V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,\delta}(w'_{a,KL,N})\Big|}{N^d}\bigg(1+T+\log\Big(\frac{N^d}{\Big|V_{\left\lfloor\frac{N}{KL}\right\rfloor}(w'_{a,KL,N})\setminus V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,\delta}(w'_{a,KL,N})\Big|}\Big)\bigg)^{19/8}+{\mathrm{e}}^{-c(T-\Gamma)}\right). \end{split}\] By \eqref{e:few_bad_pointsR}, we have \[\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\sup_{K,L\in\mathbb{N}}\limsup_{L'\to\infty}\limsup_{K'\to\infty}\limsup_{N\to\infty}\max_{a\in V_{KL}}\gamma^{K,L,K',L',R,\delta}_{w'_{a,KL,N}}=0.\] These estimates show that the difference between denominator and numerator in \eqref{e:right_tail15} is small. On the other hand, we have a lower bound for the denominator. Indeed we can pick some $R_0$ large such that at least $\frac34$ of the points $v$ in $\bigcup_{w''\in\Xi_{\bar N,J'}}V_{J'}(w'')$ satisfy $\mathsf{R}^{(1)}_v\le R_0$. Then by the pigeonhole principle we can apply the lower bound on the right tail, Lemma \ref{l:lower_right_tail}, with $R=R_0+\Gamma$ on at least half of the cubes $V_{J'}(w'')$, and summing over these we find \[\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{N}\left( \bar Y^{N,KL,K'L'}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\right)\ge c\left(\frac{N}{JJ'}\right)^dz'{\mathrm{e}}^{-\sqrt{2d}z'}.\] Together with \eqref{e:right_tail17}, we find \[\frac{\sum_{w''\in\Xi_{\bar N,J'}}\lambda^{N,K,L,K',L',R,T,\delta}_{I_{j'},w''}(z)}{\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)}\ge 1-C\gamma^{K,L,K',L',R,\delta}_{w'_{a,KL,N}}\] and from this estimate we immediately obtain \eqref{e:right_tail16} and then also \eqref{e:right_tail15}. Combining \eqref{e:right_tail15} with \eqref{e:right_tail14}, we have shown that \begin{equation}\label{e:right_tail19} \limsup_{z\to\infty}\limsup_{L'\to\infty}\limsup_{K'\to\infty}\limsup_{\bar N\to\infty}\left|\frac{\mathbb{E}\Lambda^{N,K,L,K',L',R,T,\delta}_E(z)}{\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)}-1\right|\le\varepsilon''_{R,T,\delta}. \end{equation} So to show the existence of $\beta_{K'L'}$, it suffices to show that $\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)$ has a limit that depends only on $K'L'$ and $z$.
Recall that \[\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)= \sum_{w''\in\Xi_{\bar N,J'}}\int_{I_{j'}}\chi_{N,KL,K'L',w''}(y)\mathbb{P}^{N}\left( \bar Y^{N,KL,K'L'}_{w''}\ge \frac{j' m_{\bar N}}{\bar n}+z-y\right)\,\mathrm{d} y.\] Now we have \[\frac{j' m_{\bar N}}{\bar n}=\sqrt{2d}\,j'\log 2+O\left(j'\frac{\log\bar n}{\bar n}\right)\] and therefore, by shifting the domain of integration by $O\left(j'\frac{\log\bar n}{\bar n}\right)$, using the asymptotics for $\chi_{N,KL,K'L',w''}(y)$ from \cite[Equation (67)]{DRZ17} to estimate the change in that term, and noting that $\chi_{N,KL,K'L',w''}$ does not depend on $w''$ (the law of the MBRW is translation invariant), we see that \begin{align*} &\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)\\ &=\left(1+O\left(j'^3\frac{\log\bar n}{\bar n}\right)\right)\int_{I_{j'}}\chi_{N,KL,K'L',w''}(y)\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{N}\left( \bar Y^{N,KL,K'L'}_{w''}\ge \sqrt{2d}\,j'\log 2+z-y\right)\,\mathrm{d} y\\ &=\left(1+O\left(j'^3\frac{\log\bar n}{\bar n}\right)\right)\int_{I_{j'}}\chi_{N,KL,K'L',w''}(y)\\ & \qquad\qquad \qquad\qquad \qquad \sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{J',w''}\left(\max_{v\in V_{J'}(w'')}\left(\varphi^{J',w''}_v+b^{K'L',\mathrm{mic}}_vX_{w''}\right)\ge \sqrt{2d}\,j'\log 2+z-y\right)\,\mathrm{d} y. \end{align*} The right-hand side is now finally in a form to which we can apply our ergodicity Assumption \ref{a:lln}.
By that assumption and dominated convergence we know for every fixed $y\le0$ that \begin{align*} &\lim_{N\to\infty}\left(\frac{KLK'L'}{N}\right)^d\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{J',w''}\left(\max_{v\in V_{J'}(w'')}\left(\varphi^{J',w''}_v+b^{K'L',\mathrm{mic}}_vX_{w''}\right)\ge \sqrt{2d}\,j'\log 2+z-y\right)\\ &\quad=\lim_{N\to\infty}\int_{\mathbb{R}}\frac{{\mathrm{e}}^{-x^2/2}}{\sqrt{2\pi}}\left(\frac{KLK'L'}{N}\right)^d\sum_{w''\in\Xi_{\bar N,J'}}\mathbb{P}^{J',w''}\left(\max_{v\in V_{J'}(w'')}\left(\varphi^{J',w''}_v+b^{K'L',\mathrm{mic}}_v x\right) \ge \sqrt{2d}\,j'\log 2+z-y\right)\,\mathrm{d} x\\ &\quad=\int_{\mathbb{R}}\int\frac{{\mathrm{e}}^{-x^2/2}}{\sqrt{2\pi}}\mathbb{P}\left(\max_{v\in V_{J'}}\left(\varphi_v+b^{K'L',\mathrm{mic}}_v x\right)\ge \sqrt{2d}\,j'\log 2+z-y\right)\,\nu_{J'}(\mathrm{d}\mathbb{P})\,\mathrm{d} x. \end{align*} Using dominated convergence a second time and also the asymptotics from \cite[Equation (67)]{DRZ17}, we obtain \begin{equation}\label{e:right_tail18} \begin{split} &\lim_{N\to\infty}\sum_{w''\in\Xi_{\bar N,J'}}\bar\lambda^{N,KL,K'L'}_{I_{j'},w''}(z)\\ &\quad=\int_{I_{j'}}\frac{z(z-y)}{\sqrt{2\pi\log2}\,{\mathrm{e}}^{\sqrt{2d}y}}\int_{\mathbb{R}}\int\frac{{\mathrm{e}}^{-x^2/2}}{\sqrt{2\pi}}\mathbb{P}\left(\max_{v\in V_{J'}}\left(\varphi_v+b^{K'L',\mathrm{mic}}_v x\right)\ge \sqrt{2d}\,j'\log 2+z-y\right)\,\nu_{J'}(\mathrm{d}\mathbb{P})\,\mathrm{d} x\,\mathrm{d} y\\ &\quad=\int_{I_{j'}}\frac{z(z-y)}{\sqrt{2\pi\log2}\,{\mathrm{e}}^{\sqrt{2d}y}}\int\mathbb{P}\left(\max_{v\in V_{J'}}\left(\varphi_v+b^{K'L',\mathrm{mic}}_vX\right)\ge \sqrt{2d}\,j'\log 2+z-y\right)\nu_{J'}(\mathrm{d}\mathbb{P})\,\mathrm{d} y.
\end{split} \end{equation} Denoting the right-hand side of \eqref{e:right_tail18} by $\Lambda^{*,K'L'}(z)$ and recalling \eqref{e:right_tail19} and \eqref{e:right_tail12}, we have thus shown that there is a quantity $\Lambda^{*,K'L'}(z)$, depending on $K'L'$ and $z$ only, such that \begin{equation}\label{e:right_tail20} \begin{split} &\limsup_{z\to\infty}\limsup_{L'\to\infty}\limsup_{K'\to\infty}\limsup_{\bar N\to\infty}\\ &\qquad\qquad \Bigg|\frac{\mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)}{\Lambda^{*,K'L'}(z)}-1\Bigg|\le\varepsilon''_{R,T,\delta}. \end{split} \end{equation} Now the only remaining task is to determine the asymptotics of $\Lambda^{*,K'L'}(z)$ for large $z$. This can be done just as in \cite{DRZ17}, using \eqref{e:right_tail14} as well as the asymptotics from \cite[Equation (67)]{DRZ17}. We omit the details. \end{proof} Using Lemma \ref{l:right_tail}, we can complete the proof of Theorem \ref{t:mainthm} just as in \cite{DRZ17}. Let us mention the most important steps. As a first step, we note that with little additional effort we can deduce from Lemma \ref{l:right_tail} a seemingly stronger version. \begin{lemma}\label{l:right_tail_unif} In the setting of Lemma \ref{l:right_tail} there is a function $\gamma\colon\mathbb{N}^3\times(0,1)\to\mathbb{R}$ such that for all $R,T,\delta$ we have \[\lim_{J'\to\infty}\gamma(J',R,T,\delta)=\infty\] and such that \begin{equation}\label{e:right_tail_unif} \begin{split} &\limsup_{z'\to\infty}\limsup_{K'L'\to\infty}\limsup_{N\to\infty}\sup_{z'\le z\le\gamma(K'L',R,T,\delta)}\\ &\quad\left|\frac{{\mathrm{e}}^{\sqrt{2d}z}}{z}\mathbb{P}^{N}\bigg(\max_{v\in V_{\left\lfloor\frac{N}{KL}\right\rfloor}^{K,L,K',L',R,T,\delta}(w'_{a,KL,N})}\xi^{N,KL,K'L',\mathrm{fine}}_v\ge m_{\left\lfloor\frac{N}{KL}\right\rfloor}+z\bigg)-\beta_{K'L'}\right|\le2\varepsilon ''_{R,T,\delta}.
\end{split} \end{equation} \end{lemma} The corresponding result in \cite{DRZ17} is stated there without proof, so let us quickly give an outline of the proof here. \begin{proof} Abbreviate the term in absolute values in \eqref{e:right_tail_unif} by $F_{N,KL,K'L',R,T,\delta}(z)$. The probability in \eqref{e:right_tail_unif} is a non-increasing function of $z$, while the prefactor ${{\mathrm{e}}^{\sqrt{2d}z}}/{z}$ grows only exponentially fast. These facts combined imply that we can control $F$ on short intervals by its values at the endpoints of such an interval. This means we can discretize the problem in $z$. More precisely, there is $\eta>0$ depending only on $\varepsilon ''_{R,T,\delta}$ and $\Gamma^\pm$ such that \eqref{e:right_tail_unif} follows from \begin{equation}\label{e:right_tail_unif1} \limsup_{z'\to\infty}\limsup_{K'L'\to\infty}\limsup_{N\to\infty}\sup_{\substack{z'-\eta\le z\le\gamma(K'L',R,T,\delta)+\eta\\z\in\eta\mathbb{Z}}}\left|F_{N,KL,K'L',R,T,\delta}(z)\right|\le\frac32\varepsilon ''_{R,T,\delta}. \end{equation} To see \eqref{e:right_tail_unif1}, note that from Lemma \ref{l:right_tail} we know that for each fixed sufficiently large $z$ we have \[ \limsup_{K'L'\to\infty}\limsup_{N\to\infty}\quad\left|F_{N,KL,K'L',R,T,\delta}(z)\right|\le\frac54\varepsilon ''_{R,T,\delta}. \] So for each sufficiently large $z$ there is $J'_z<\infty$ such that for any $K'L'\ge J'_z$ we have \[\limsup_{N\to\infty}\quad\left|F_{N,KL,K'L',R,T,\delta}(z)\right|\le\frac32\varepsilon ''_{R,T,\delta}.\] Now we can define \[\gamma(K'L',R,T,\delta)=\sup\left\{z\in\eta\mathbb{Z}\colon K'L'\ge J'_z\right\}.\] With this choice of $\gamma$ we trivially have \eqref{e:right_tail_unif1}, and moreover it is clear that $\lim_{J'\to\infty}\gamma(J',R,T,\delta)=\infty$. \end{proof} We construct a field that is independent of $N$ as follows: our starting point is Lemma \ref{l:right_tail_unif}, and we use the objects from that lemma (and from Lemma \ref{l:right_tail}).
Let $\hat\Xi_{KL}=\frac{1}{KL}\mathbb{Z}^d\cap[0,1]^d$. Let $\tilde\gamma(J):=\log\log\log(J)$.\footnote{In \cite{DRZ17} the choice $\tilde\gamma=\gamma$ was made. However, the two functions play different roles, and so we prefer to keep them separate here.} For each $x\in\hat\Xi_{KL}$, consider a Bernoulli random variable $\rho_{KL,K'L',R,T,\delta,x}$ with \[\mathbb{P}(\rho_{KL,K'L',R,T,\delta,x}=1)=\beta_{K'L'}\tilde\gamma(KL){\mathrm{e}}^{-\sqrt{2d}\tilde\gamma(KL)}\] and a random variable $Y_{KL,K'L',R,T,\delta,x}$ such that \[\mathbb{P}(Y_{KL,K'L',R,T,\delta,x}\ge z)=\frac{\tilde\gamma(KL)+z}{\tilde\gamma(KL)}{\mathrm{e}}^{-\sqrt{2d}z}.\] (Note that the right-hand side is indeed a valid tail, i.e. non-increasing in $z$, as soon as $\tilde\gamma(KL)\ge1/\sqrt{2d}$, and that with this choice \[\mathbb{P}\big(\rho_{KL,K'L',R,T,\delta,x}=1,\,\tilde\gamma(KL)+Y_{KL,K'L',R,T,\delta,x}\ge w\big)=\beta_{K'L'}w{\mathrm{e}}^{-\sqrt{2d}w}\] for $w\ge\tilde\gamma(KL)$, matching the right-tail asymptotics from Lemma \ref{l:right_tail_unif}.) The random field $Z_{KL,K'L',R,T,\delta}$ will be defined as a downscaled version of $\xi^{N,J,J',R,\delta,\mathrm{coarse}}$ (which by the downscaling becomes independent of $N$). To be precise, consider $\varphi^{KL}$ distributed according to $\mathbb{P}^{KL}$, and define the random field $Z_{KL,K'L',R,T,\delta}$ by \[Z_{KL,K'L',R,T,\delta,x}=\varphi^{KL}_{KLx}+\hat b^{KL,R,\delta,\mathrm{mac}}_{KLx}X^{\mathrm{mac}}_{KLx}\] where $\hat b^{KL,R,\delta,\mathrm{mac}}$ is as in \eqref{e:def_hatb} and $X^{\mathrm{mac}}$ is a collection of independent standard Gaussians. We can assume that $(\rho_{KL,K'L',R,T,\delta,x})_{x\in\hat\Xi_{KL}}$, $(Y_{KL,K'L',R,T,\delta,x})_{x\in\hat\Xi_{KL}}$ and $Z_{KL,K'L',R,T,\delta}$ are all independent.
This allows us to consider the random field $\hat\xi^{K,L,K',L',R,T,\delta}$ on $\hat\Xi_{KL}$, where \[\hat\xi^{K,L,K',L',R,T,\delta}_x=Z_{KL,K'L',R,T,\delta,x}-\sqrt{2d}\log(KL)+\rho_{KL,K'L',R,T,\delta,x}\left(Y_{KL,K'L',R,T,\delta,x}+\tilde\gamma(KL)\right).\] \begin{lemma}\label{l:coupling_max} Under Assumptions \ref{a:logupp}, \ref{a:logbd}, \ref{a:micro}, \ref{a:macro}, \ref{a:sparseT}, \ref{a:sparseR} and \ref{a:lln} we have that \begin{equation}\label{e:coupling_max} \begin{split} &\limsup_{T\to\infty}\limsup_{\substack{\delta\to0\\ R\to\infty}}\limsup_{(L,K,L',K')\Rightarrow\infty}\limsup_{N\to\infty}\\ &\quad\mathsf{d}\Big(\max_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',R,\delta}_v,\max_{x\in \hat\Xi_{KL}}\hat\xi^{K,L,K',L',R,T,\delta}_x\Big)=0 \end{split} \end{equation} \end{lemma} \begin{proof} This is shown similarly to \cite[Theorem 4.5]{DRZ17}, so we just describe the most important steps. Let $\varepsilon>0$ be given. Our goal is to construct a coupling between the two fields such that on an event of probability $1-\varepsilon$ their maxima are at most $\varepsilon$ apart. As a first step, we choose $T,R$ large and $\delta$ small and fix them for the moment. Now we proceed just like in the proof of \cite[Theorem 4.5]{DRZ17}. It is obvious from the definition of $\hat\xi^{K,L,K',L',R,T,\delta}$ how to couple it to $\xi^{N,KL,K'L',R,\delta,\mathrm{coarse}}$. To construct the coupling for the other random variables, the main observation is that with high probability the maximum of $\xi^{N,KL,K'L',R,T,\delta}$ is attained at a point where $\xi^{N,KL,K'L',\mathrm{fine}}$ is exceptionally large, and hence in the regime where the right-tail asymptotics from Lemma \ref{l:right_tail_unif} are sharp. To make this precise, let $v_{\mathrm{max}}=\argmax_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',R,T,\delta}_v$.
Note that by Lemmas \ref{l:upper_right_tail} and \ref{l:upper_left_tail} the maximum of $\xi^{N,KL,K'L',R,T,\delta}$ over $V_N^{K,L,K',L',R,T,\delta}$ is tight around $m_N$, and similarly the maximum of $\xi^{N,KL,K'L',R,\delta,\mathrm{coarse}}$ over $V_N^{K,L,K',L',R,T,\delta}$ is tight around $m_J$. We have \[\lim_{J\to\infty}\lim_{N\to\infty}m_N-m_{\left\lfloor\frac{N}{J} \right\rfloor}-m_J-\tilde\gamma(J)=\infty,\] and so we have \begin{equation}\label{e:coupling_max1} \xi^{N,KL,K'L',\mathrm{fine}}_{v_{\mathrm{max}}}\ge m_{\left\lfloor\frac{N}{KL} \right\rfloor}+\tilde\gamma(KL) \end{equation} on an event whose probability tends to 1 in the limit $N\to\infty$ and then $(L,K,L',K')\Rightarrow\infty$. So for our purposes we can assume that \eqref{e:coupling_max1} occurs. By similar arguments we can assume that \begin{equation}\label{e:coupling_max2} \max_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',\mathrm{fine}}_v\le m_{\left\lfloor\frac{N}{KL} \right\rfloor}+KL. \end{equation} For $w'\in \mathcal{W}_{N,\left\lfloor\frac{N}{KL}\right\rfloor}$ define \[M^{N,K,L,K',L',R,T,\delta,\mathrm{fine}}_{w'}=\max_{v\in V_N^{K,L,K',L',R,T,\delta}(w')}\xi^{N,KL,K'L',\mathrm{fine}}_v.\] Note that Lemma \ref{l:right_tail_unif} gives us good estimates for $\mathbb{P}(M^{N,K,L,K',L',R,T,\delta,\mathrm{fine}}_{w'}\ge m_{\left\lfloor{N}/{KL}\right\rfloor}+z)$ when $z\in[z',\gamma(K'L',R,T,\delta)]$. In view of \eqref{e:coupling_max1} and \eqref{e:coupling_max2} we thus want to arrange our parameters such that \begin{equation}\label{e:coupling_max3} [\tilde\gamma(KL),KL]\subset[z',\gamma(K'L',R,T,\delta)]. \end{equation} But this is not a problem: We first choose $z'$ large enough that the error in \eqref{e:right_tail_unif} is small, then $KL$ large enough in terms of $z'$, and finally $K'L'$ large enough in terms of $KL$. Now it is clear how to construct the coupling.
Our goal is that $\rho_{KL,K'L',R,T,\delta}=1$ if and only if the corresponding $M^{N,K,L,K',L',R,T,\delta,\mathrm{fine}}$ exceeds $m_{\left\lfloor{N}/{KL} \right\rfloor}+\tilde\gamma(KL)$, and in that case $Y_{KL,K'L',R,T,\delta}$ should equal $M^{N,K,L,K',L',R,T,\delta,\mathrm{fine}}-m_{\left\lfloor{N}/{KL} \right\rfloor}-\tilde\gamma(KL)$. We cannot achieve this exactly, but \eqref{e:coupling_max2}, \eqref{e:coupling_max3} and Lemma \ref{l:right_tail_unif} imply that this is possible up to errors that are bounded by $C\varepsilon''_{R,T,\delta}$ in the limit $N\to\infty$ and then $(L,K,L',K')\Rightarrow\infty$ (see \cite{DRZ17} for more details). Note that by \eqref{e:coupling_max1} there is at least one $x$ with $\rho_{KL,K'L',R,T,\delta,x}=1$, and so our coupling indeed ensures that $\max_{v\in V_N^{K,L,K',L',R,T,\delta}}\xi^{N,KL,K'L',R,\delta}_v$ and $\max_{x\in \hat\Xi_{KL}}\hat\xi^{K,L,K',L',R,T,\delta}_x$ are close. All errors vanish in the limit $N\to\infty$, then $(L,K,L',K')\Rightarrow\infty$, then $R\to\infty, \delta\to0$ and finally $T\to\infty$. This allows us to conclude the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{t:mainthm}] The convergence in distribution of the recentered maximum follows directly from Lemma \ref{l:comparison_maxima} and Lemma \ref{l:coupling_max}. The characterisation of the limit law as a randomly shifted Gumbel distribution requires more work. However, the argument is completely analogous to the proof of \cite[Theorem 1.4]{DRZ17}, and so we omit the details. \end{proof} \section{Estimates for Green's functions on percolation clusters, and proof of Theorem \ref{t:percolation_cluster}} \label{sec-4} This section is devoted to the proof of Theorem \ref{t:percolation_cluster}, based on quantitative homogenization results and a-priori estimates from percolation theory, and is structured as follows.
Section \ref{sec-4.1} collects results from the quantitative homogenization literature, and in particular from \cite{AD18,DG21}, and introduces some useful modifications with respect to locality of various parameters and uniformity as a function of the parameter $p$. The next three sections are devoted to the proof of Theorem \ref{t:percolation_cluster}, by checking the assumptions of Theorem \ref{t:mainthm}. In Section \ref{sec-4.2}, which deals with the ``easy to check'' assumptions of Theorem \ref{t:mainthm}, we slightly modify the results from Section \ref{sec-4.1} and verify all assumptions of Theorem \ref{t:mainthm} except for \ref{a:logupp} and \ref{a:sparseT}. These arguments are valid for all $p>p_c=1/2$. Section \ref{sec-4.3} is devoted to the statements and proofs of some large deviation results for the percolation cluster $\mathcal{C}_\infty$, for $p$ close to $1$. The verification of the remaining Assumptions \ref{a:logupp} and \ref{a:sparseT} for $p$ sufficiently close to 1 is carried out in Section \ref{sec-4.4}, and is based on the results in Section \ref{sec-4.3}. \subsection{Homogenization on percolation clusters} \label{sec-4.1} In this section we introduce notation and recall various existing results on the structure of supercritical clusters, and on the large-scale behaviour of the graph Laplacian on the cluster. We take various results on the homogenization of the percolation cluster from \cite{AD18,DG21}. Most results have the character that there is a random scale $\mathsf{R}_v$ indexed by $v\in\mathbb{Z}^2$ such that on length-scales $\ge\mathsf{R}_v$ around $v$ some desirable property holds. Where possible we also state an estimate showing that the event $\{\mathsf{R}\le R\}$ is asymptotically a local event. This is not done in the works \cite{AD18,DG21} which we cite. However, in most cases it is very easy to do so, so we state and prove the (asymptotic) locality right away.
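As a quick illustration of the supercritical regime discussed in this section (it plays no role in any proof, and the box size, random seed and bond density $p=0.9$ are arbitrary choices), the following sketch samples i.i.d. bond percolation on a finite box and measures the largest open cluster with a union--find structure; for $p$ well above $p_c=1/2$ this cluster occupies most of the box, in line with the uniqueness and density statements quoted below.

```python
import random
from collections import Counter

random.seed(0)
N, p = 60, 0.9                      # box side and bond density (illustrative)

parent = list(range(N * N))

def find(x):
    # union-find with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    rx, ry = find(x), find(y)
    if rx != ry:
        parent[rx] = ry

# open each nearest-neighbour bond independently with probability p
for i in range(N):
    for j in range(N):
        v = i * N + j
        if i + 1 < N and random.random() < p:
            union(v, v + N)         # vertical bond
        if j + 1 < N and random.random() < p:
            union(v, v + 1)         # horizontal bond

sizes = Counter(find(v) for v in range(N * N))
frac = max(sizes.values()) / (N * N)   # fraction occupied by the largest cluster
```

For $p=0.9$ the largest open cluster typically covers well over half of the box, a finite-volume shadow of the unique infinite cluster $\mathcal{C}_\infty$ and of the density bound of Lemma \ref{l:density_cluster}.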
A notable exception is Theorem \ref{t:green_asympt}, where there is no direct way to obtain locality of the relevant random scale, and so we postpone discussing the locality in that theorem to the next section. In this section we only consider dimension $d=2$. We denote by $\nabla$ and $\Delta$ the lattice gradient and Laplacian, and by $\bar\nabla$ and $\bar\Delta$ their continuous counterparts. We also denote by $L^q$ discrete $q$-norms, and by $\bar L^q$ the (standard) continuous $q$-norms. Thereby we hopefully avoid any risk of confusion. Recall that $V_N(w)$ denotes the cube of sidelength $N$ and lower left corner $w$, and that $\partial^+ V_N(w)$ denotes its outer boundary. In this section we will frequently encounter cubes with given center. Thus we define $Q_N(w)=w+[-N+1,N-1]^2\cap \mathbb{Z}^2$. In the following we denote the sidelength of a cube $Q:=V_N(w)$ by $\ell(Q):=N$ (note that $\ell(Q_N(w))=2N-1$ with our definitions). Let $0<\Lambda^-\le\Lambda^+$. We let $\mathcal{F}$ be the Borel $\sigma$-algebra on $(\{0\}\cup[\Lambda^-,\Lambda^+])^{E(\mathbb{Z}^2)}$, and we let $\mathbf{P}$ be an i.i.d. probability measure on $\left((\{0\}\cup[\Lambda^-,\Lambda^+])^{E(\mathbb{Z}^2)},\mathcal{F}\right)$. In other words, for each edge $e\in E(\mathbb{Z}^2)$ we consider an i.i.d. random variable ${\mathbf{a}}(e)$ supported in $\{0\}\cup[\Lambda^-,\Lambda^+]$, and denote by $\mathbf{P}$ the joint law of the ${\mathbf{a}}(e)$. For $A\subset E(\mathbb{Z}^2)$, we let $\mathcal{F}_A$ be the $\sigma$-algebra generated by $\{{\mathbf{a}}(e)\colon e\in A\}$. When $A$ is equal to the edges of the subgraph induced by $Q_N(w)$, we write, slightly abusing notation, $\mathcal{F}_{Q_N(w)}$ for the corresponding $\sigma$-algebra. Let $p=\mathbf{P}({\mathbf{a}}(e)>0)$. We always assume $p>1/2$, the critical threshold for bond percolation on $\mathbb{Z}^2$ \cite{K80}. It is well-known that $\mathbf{P}$-almost surely there is a unique infinite cluster $\mathcal{C}_\infty$, i.e.
an infinite connected subgraph of $\mathbb{Z}^2$ such that ${\mathbf{a}}(e)>0$ for each $e\in E(\mathcal{C}_\infty)$. We denote by $\mathsf{d}_{\mathcal{C}_\infty}$ the graph distance on $\mathcal{C}_\infty$ (as an unweighted graph). Unless indicated otherwise, we regard $p,\Lambda^-,\Lambda^+$ as fixed and so do not make explicit how various constants depend on these quantities, except that in various locations, with $\Lambda^-,\Lambda^+$ fixed, we specify uniformity with respect to $p$ in a neighborhood of $1$. We let $\Delta_{{\mathbf{a}}}$ be the graph Laplacian on $\mathcal{C}_\infty$, i.e. \[\Delta_{{\mathbf{a}}}U(u)=\sum_{v\colon\{u,v\}\in E(\mathcal{C}_\infty)}{\mathbf{a}}(\{u,v\})(U(v)-U(u)).\] If $V_N(w)\subset\mathbb{Z}^2$ is a box, we can define a Green's function $G^{\mathbf{a}}_{V_N(w)}\colon\mathcal{C}_\infty\times\mathcal{C}_\infty\to\mathbb{R}$ as follows: if $v\notin\mathcal{V}_N(w)$, then $G^{\mathbf{a}}_{V_N(w)}(\cdot,v)=0$, while if $v\in\mathcal{V}_N(w)$ then $G^{\mathbf{a}}_{V_N(w)}(\cdot,v)$ is the unique function which is 0 on $\mathcal{C}_\infty\setminus V_N(w)$ and such that $-\Delta_{\mathbf{a}} G^{\mathbf{a}}_{V_N(w)}(\cdot,v)=\delta_v$ on $V_N(w)\cap \mathcal{C}_\infty$. Note that $G^{\mathbf{a}}_{V_N(w)}$ is also the Green's function for the (variable speed, continuous time) random walk on $\mathcal{C}_\infty$ that is killed when exiting $V_N(w)$. We also want to define a Green's function $G^{\mathbf{a}}$ on all of $\mathcal{C}_\infty$. This is only possible when we fix some normalization. We let $G^{\mathbf{a}}\colon\mathcal{C}_\infty\times\mathcal{C}_\infty\to\mathbb{R}$ be a function such that $-\Delta_{\mathbf{a}} G^{\mathbf{a}}(\cdot,y)=\delta_y$, $G^{\mathbf{a}}(y,y)=0$ and $\lim_{|x|\to\infty}\frac{1}{|x|}G^{\mathbf{a}}(x,y)=0$ for all $x,y$. There is a unique such function $\mathbf{P}$-almost surely. The function $G^{\mathbf{a}}$ is also the potential kernel for random walk on $\mathcal{C}_\infty$.
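On a finite box the Dirichlet Green's function defined above is obtained by simply inverting (minus) the conductance Laplacian. The following sketch is purely illustrative (all conductances are taken positive, i.e. the caricature $p=1$, and the box size, seed and conductance range are arbitrary choices): it assembles $-\Delta_{\mathbf{a}}$ with zero Dirichlet boundary data, solves $-\Delta_{\mathbf{a}}G(\cdot,v)=\delta_v$, and checks two defining properties, positivity and the maximum at the source.

```python
import random
import numpy as np

random.seed(1)
n = 15                                    # interior side length (illustrative)
lam_lo, lam_hi = 1.0, 2.0                 # conductances in [Lambda^-, Lambda^+]

idx = {(i, j): i * n + j for i in range(n) for j in range(n)}
A = np.zeros((n * n, n * n))              # matrix of -Delta_a with Dirichlet boundary

def add_edge(k, l, a_e):
    # symmetric contribution of one edge of conductance a_e;
    # l = None encodes a neighbour outside the box (Dirichlet zero)
    A[k, k] += a_e
    if l is not None:
        A[l, l] += a_e
        A[k, l] -= a_e
        A[l, k] -= a_e

for i in range(n):
    for j in range(n):
        k = idx[(i, j)]
        add_edge(k, idx.get((i + 1, j)), random.uniform(lam_lo, lam_hi))
        add_edge(k, idx.get((i, j + 1)), random.uniform(lam_lo, lam_hi))
        if i == 0:                        # extra boundary edges on two sides
            add_edge(k, None, random.uniform(lam_lo, lam_hi))
        if j == 0:
            add_edge(k, None, random.uniform(lam_lo, lam_hi))

v = idx[(n // 2, n // 2)]                 # source vertex
rhs = np.zeros(n * n)
rhs[v] = 1.0
G = np.linalg.solve(A, rhs)               # the column G(., v) of the Green's function
```

The residual of $-\Delta_{\mathbf{a}}G(\cdot,v)=\delta_v$ vanishes up to numerical error, $G$ is strictly positive, and $G(\cdot,v)$ is maximal at $v$, as the interpretation through expected occupation times of the killed walk suggests.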
From the theory of stochastic homogenization it is known that there is a deterministic constant ${\overline{\A}}$ such that the operator $-\Delta_{\mathbf{a}}$ homogenizes to the operator $-{\overline{\A}}\bar\Delta$, i.e. a scalar multiple of the standard continuous Laplacian. In the following we rely in particular on the quantitative homogenization results from \cite{AD18,DG21}. Let us quote the results that are the most important for us. We begin with an estimate for $G^{\mathbf{a}}$ that follows easily from \cite[Theorem 1.2]{DG21}. Here and in the rest of the chapter, we use the phrase ``there is a uniform $s>0$'' to mean that $s>0$ is bounded below uniformly in $p$ in a neighborhood of $1$, for fixed ${\Lambda^+}/{\Lambda^-}$. \begin{theorem}\label{t:green_asympt} For each $p>1/2$ and each ${\Lambda^+}/{\Lambda^-}\ge1$ there are random variables $\mathsf{K}_v\in\mathbb{R}$ and $\mathsf{R}^{\mathrm{Green}}_v\in\mathbb{Z}$ indexed by $v\in\mathcal{C}_\infty$, such that \[\left|G^{\mathbf{a}}(u,v)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-\mathsf{K}_v\right|\le\frac{1}{{\overline{\A}}|u-v|^{3/4}}\] whenever $u,v\in\mathcal{C}_\infty$ are such that $|u-v|\ge \mathsf{R}^{\mathrm{Green}}_v$. In addition $\mathsf{R}^{\mathrm{Green}}_v$ satisfies the tail bound \[\mathbf{P}(\mathsf{R}^{\mathrm{Green}}_v\le R)\ge 1- C{\mathrm{e}}^{-R^{s}/C}\] with some constant $C>0$ and some uniform $s>0$ independent of $v\in \mathcal{C}_\infty$. \end{theorem} Note that here (unlike the following statements) we do not make any claim about the locality of $\{\mathsf{R}^{\mathrm{Green}}_v\le R\}$. The problem is that $G^{\mathbf{a}}$ is by definition a global object depending on all of $\mathcal{C}_\infty$ and not just on $\mathcal{C}_\infty$ intersected with some finite box. \begin{proof} In \cite[Theorem 1.2]{DG21}, the same statement is given, with $1/({\overline{\A}}|u-v|^{3/4})$ replaced by ${C_\varepsilon}/{|u-v|^{1-\varepsilon}}$ for any $\varepsilon>0$.
Our version follows by taking some $\varepsilon<1/4$ and increasing $\mathsf{R}^{\mathrm{Green}}_v$ by a bounded factor. The fact that $s$ is bounded below uniformly as claimed is not made explicit in \cite{AD18,DG21}, however it follows from tracking their proofs (cf. also the discussion below \cite[Remark 1.1]{AD18}). In particular, the quantitative estimates in \cite{AD18} improve as $p-p_c$ increases, and thus it should even be possible to take $s$ non-decreasing in $p\in\left(\frac12,1\right]$. \end{proof} In order to state the other results we need, we have to introduce more notation to describe the large-scale structure of $\mathcal{C}_\infty$. In \cite{AD18,DG21} the lattice is partitioned into triadic cubes in such a way that all the cubes $Q$ are well-connected (this means in particular that they contain a unique large crossing cluster, denoted $\mathcal{C}_*(Q)$). The details of the construction can be found in \cite[Section 2.1]{DG21}, but are not important for our purposes here. What is important is that almost surely there is a partition $\mathcal{P}$ of $\mathbb{Z}^2$ into well-connected cubes that satisfies \[\bigcup_{Q\in\mathcal{P}}\mathcal{C}_*(Q)\subset \mathcal{C}_\infty\] (note that in general we do not have equality here). We also need that this partition is local and comes with tail bounds on the size of the cubes.
More precisely, we have that for each triadic cube $Q=Q_{3^\ell}(3^j)$ the event $Q\in\mathcal{P}$ is $\mathcal{F}_{Q_{3^{\ell+1}}(3^j)}$-measurable. As a consequence of this, while $\mathcal{C}_\infty$ is of course a global object, we can approximate it by local objects as follows: \begin{lemma}\label{l:local_approx_cluster} For each $p>1/2$ there is $C>0$ such that for any $R\in\mathbb{N}$ there are events $\mathcal{E}_v^{R,\mathrm{Clust}}\in\mathcal{F}_{Q_{9R}(v)}$ indexed by $v\in\mathbb{Z}^2$ such that \[\mathbf{P}(\mathcal{E}_v^{R,\mathrm{Clust}})\ge 1-C{\mathrm{e}}^{-R/C}\] and such that on the event $\mathcal{E}_v^{R,\mathrm{Clust}}$ we have \[\mathcal{C}_*(Q_{3R}(v))\cap Q_R(v)=\mathcal{C}_\infty\cap Q_R(v).\] \end{lemma} We emphasize that $\mathcal{C}_*(Q_{3R}(v))$ only depends on the bonds in $Q_{3R}(v)$, and so is a genuinely local object. \begin{proof} This follows from the results in \cite[Section 2]{AD18}, in particular Equation (2.18) there. \end{proof} Already in the statement of Theorem \ref{t:percolation_cluster} we needed to consider points not in $\mathcal{C}_\infty$ and their projection to $\mathcal{C}_\infty$. Let us recall this here: For $v\in\mathbb{Z}^2$ we denote by $v^*$ the point in $\mathcal{C}_\infty$ closest to $v$ in Euclidean distance, where in case of a tie we take the lexicographically first point. The following lemma provides us with a tail bound on the distance between $v$ and $v^*$. For $v\in\mathbb{Z}^2$, define the random variable \begin{equation} \label{eq-Rvdist} \mathsf{R}^{\mathrm{Dist}}_v=|v-v^*|_\infty\in\mathbb{N}. \end{equation} \begin{lemma}\label{l:dist_to_cluster} For each $p>1/2$ there is $C>0$ so that, for any $R\in\mathbb{N}$, we have \begin{equation}\label{e:dist_to_cluster} \mathbf{P}(\mathsf{R}^{\mathrm{Dist}}_v\le R,\mathcal{E}_v^{R,\mathrm{Clust}})\ge 1-C{\mathrm{e}}^{-R/C}.
\end{equation} Moreover, the event $\{\mathsf{R}^{\mathrm{Dist}}_v\le R\}\cap \mathcal{E}_v^{R,\mathrm{Clust}}$ is $\mathcal{F}_{Q_{9R}(v)}$-measurable. \end{lemma} \begin{proof} The tail bound \eqref{e:dist_to_cluster} follows from Lemma \ref{l:local_approx_cluster} and \cite[Lemma 2.7]{AD18}. The locality follows from the fact that on the event $\mathcal{E}_v^{R,\mathrm{Clust}}$, the clusters $\mathcal{C}_\infty$ and $\mathcal{C}_*(Q_{3R}(v))$ agree on $Q_R(v)$, so that $v^*$ is also the point in $\mathcal{C}_*(Q_{3R}(v))$ closest to $v$. \end{proof} We also have good upper bounds on the size of the largest cube in $\mathcal{P}$ intersecting a given cube. We denote by $Q^{\mathcal{P}}(v)$ the unique cube in $\mathcal{P}$ containing $v$. \begin{lemma}\label{l:maximal_size_P} For each $p>1/2$ and any $\varepsilon>0$ there are random variables $\mathsf{R}^{\mathrm{Part},\varepsilon}_w\in\mathbb{N}$ indexed by $w\in\mathbb{Z}^2$ such that the following holds: Let $w\in\mathbb{Z}^2$, $R\in\mathbb{N}$ with $R\ge\mathsf{R}^{\mathrm{Part},\varepsilon}_w$. Then \begin{equation}\label{e:maximal_size_P} \max_{v\in V_R(w)}\ell(Q^{\mathcal{P}}(v))\le R^{\varepsilon}. \end{equation} In addition, there are constants $C>0$ and uniform $s>0$ so that for any $w\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R'\ge R$ there is an event $\mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w\in\mathcal{F}_{Q_{4R'}(w)}$ such that \begin{equation}\label{e:maximal_size_P1} \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w)\ge 1-C{\mathrm{e}}^{-R^{s}/C} \end{equation} and \begin{equation}\label{e:maximal_size_P2} \mathbf{P}(\mathsf{R}^{\mathrm{Part},\varepsilon}_w> R,\mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w)\le C{\mathrm{e}}^{-R'^{s}/C}. \end{equation} \end{lemma} Note that the exponent on the right-hand side of \eqref{e:maximal_size_P2} is $R'$, in contrast with \eqref{e:maximal_size_P1}.
The estimates \eqref{e:maximal_size_P1} and \eqref{e:maximal_size_P2} combined formalize the intuition that the event $\mathsf{R}^{\mathrm{Part},\varepsilon}_w> R$ is approximately local in the sense that if it occurs then either the local event $\mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w$ or an event with very small probability occurs. Similar statements will occur many times in the next lemmas. \begin{proof} The event that \eqref{e:maximal_size_P} holds for some fixed $R$ is local. We will use this fact to prove both the existence of $\mathsf{R}^{\mathrm{Part},\varepsilon}_w$ and that of $\mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w$. We will use this argument many times in the following, so we spell out the details here. Define the event \[\mathcal{E}^{\rho,\mathrm{Part},\varepsilon}_w:=\left\{\max_{v\in V_\rho(w)}\ell(Q^{\mathcal{P}}(v))\le \rho^{\varepsilon}\right\}.\] For a fixed $\rho$ the event $\mathcal{E}^{\rho,\mathrm{Part},\varepsilon}_w$ only depends on the events $Q\in\mathcal{P}$ for the cubes $Q$ intersecting $V_\rho(w)$ and of diameter $\le \rho^\varepsilon$. Thus the event that \eqref{e:maximal_size_P} holds is in $\mathcal{F}_{Q_{\rho+3\rho^\varepsilon}(w)}\subset \mathcal{F}_{Q_{4\rho}(w)}$. Moreover, from \cite[Proposition 2]{DG21} it follows that \begin{equation}\label{e:maximal_size_P3} \mathbf{P}(\mathcal{E}^{\rho,\mathrm{Part},\varepsilon}_w)\ge 1-C{\mathrm{e}}^{-\rho^{s}/C}. \end{equation} We now set \[\mathsf{R}^{\mathrm{Part},\varepsilon}_w=\inf\left\{R\in\mathbb{N}\colon \mathcal{E}^{\rho,\mathrm{Part},\varepsilon}_w\text{ occurs for all }\rho\ge R\right\}\] so that \eqref{e:maximal_size_P} clearly holds, and further set \begin{align*} \mathcal{E}^{R,R',\mathrm{Part},\varepsilon}_w&=\left\{\mathsf{R}^{\mathrm{Part},\varepsilon}_w\le R\text{ or }\mathsf{R}^{\mathrm{Part},\varepsilon}_w>R'\right\} =\bigcap_{R<\rho\le R'}\mathcal{E}^{\rho,\mathrm{Part},\varepsilon}_w.
\end{align*} This event is obviously in $\mathcal{F}_{Q_{4R'}(w)}$, and the estimates \eqref{e:maximal_size_P1} and \eqref{e:maximal_size_P2} follow directly from \eqref{e:maximal_size_P3}. Regarding the uniformity of $s$, see the comment in the proof of Theorem \ref{t:green_asympt}. \end{proof} Next, we need a lower bound on the density of the cluster in a large cube. \begin{lemma}\label{l:density_cluster} For each $p>1/2$ there are $C>0$ and random variables $\mathsf{R}^{\mathrm{Dens}}_v\in\mathbb{N}$ indexed by $v\in\mathbb{Z}^2$ such that for any $v\in\mathbb{Z}^2$, $R\in\mathbb{N}$ with $R\ge\mathsf{R}^{\mathrm{Dens}}_v$, \[\left|\mathcal{C}_\infty\cap Q_R(v)\right|\ge\frac{R^2}{C}.\] In addition, for any $v\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R\le R'$ there is an event $\mathcal{E}^{R,R',\mathrm{Dens}}_v\in\mathcal{F}_{Q_{9R'}(v)}$ such that \[ \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Dens}}_v)\ge 1-C{\mathrm{e}}^{-R/C} \] and \[ \mathbf{P}(\mathsf{R}^{\mathrm{Dens}}_v> R,\mathcal{E}^{R,R',\mathrm{Dens}}_v)\le C{\mathrm{e}}^{-{R'}/C}. \] \end{lemma} \begin{proof} From \cite[Theorems 2,3]{DS88} we have that there is a constant $C>0$ such that \begin{equation} \label{eq-DSlater} \mathbf{P}\left(\left|\mathcal{C}_\infty\cap Q_R(v)\right|<\frac{R^2}{C}\right)\le C{\mathrm{e}}^{-R/C}. \end{equation} The argument to deduce the lemma from this is similar, but not completely the same as in Lemma \ref{l:maximal_size_P}. Namely, the event \[\mathcal{E}^{R,\mathrm{Dens}}_v:=\left\{\left|\mathcal{C}_\infty\cap Q_R(v)\right|\ge\frac{R^2}{C}\right\}\] only depends on $\mathcal{C}_\infty\cap Q_R(v)$, and so $\mathcal{E}^{R,\mathrm{Dens}}_v\cap \mathcal{E}^{R,\mathrm{Clust}}_v$ is $\mathcal{F}_{Q_{9R}(v)}$-measurable.
We can now define \begin{align*} \mathsf{R}^{\mathrm{Dens}}_v&=\inf\left\{R\in\mathbb{N}\colon\mathcal{E}^{\rho,\mathrm{Dens}}_v\cap \mathcal{E}^{\rho,\mathrm{Clust}}_v\text{ occurs for all }\rho\ge R\right\},\\ \mathcal{E}^{R,R',\mathrm{Dens}}_v&=\bigcap_{R<\rho\le R'}\mathcal{E}^{\rho,\mathrm{Dens}}_v\cap \mathcal{E}^{\rho,\mathrm{Clust}}_v, \end{align*} and directly obtain the claimed result from \eqref{eq-DSlater} and Lemma \ref{l:local_approx_cluster}. \end{proof} Given a function $f\colon\mathcal{C}_\infty\to\mathbb{R}$, we can make use of the partition $\mathcal{P}$ to define an interpolated version $[f]_{\mathcal{P}}\colon\mathbb{R}^2\to\mathbb{R}$ as follows. We first define $[f]_{\mathcal{P}}$ on $\mathbb{Z}^2$ by setting $[f]_{\mathcal{P}}(v)=f(z(Q^{\mathcal{P}}(v)))$ for $v\in\mathbb{Z}^2$, where $Q^{\mathcal{P}}(v)$ is the unique cube in $\mathcal{P}$ containing $v$ and $z(Q^{\mathcal{P}}(v))$ is the lattice point closest to its center (in case of ties we take the lexicographically first). Then we extend to $\mathbb{R}^2$ by letting $[f]_{\mathcal{P}}$ be piecewise constant on each cube $v+\left[-\frac12,\frac12\right)^2$. We also fix a mollifier $\eta\in C_c^\infty(\mathbb{R}^2)$ such that $\int\eta=1$. Throughout, write $\hat V_N(w)=V_N(w)\cup \partial^+ V_N(w)$. We are now ready to state an estimate for the homogenization error for the Dirichlet problem. \begin{theorem}\label{t:homog_dirich} For each $p>1/2$ and any ${\Lambda^+}/{\Lambda^-}\ge1$ there are $C>0$, $\varepsilon_{\mathrm{Homog}}\in(0,1)$, and random variables $\mathsf{R}^{\mathrm{Homog}}_w\in\mathbb{N}$ indexed by $w\in\mathbb{Z}^2$ such that the following holds.
Let $w\in\mathbb{Z}^2$, $N\ge \mathsf{R}^{\mathrm{Homog}}_w$, let $F\colon\mathcal{C}_\infty\cap \hat V_N(w)\to\mathbb{R}$, and let $U\colon\mathcal{C}_\infty\cap \hat V_N(w) \to\mathbb{R}$ be the unique solution of the discrete elliptic equation \begin{alignat*}{2} \begin{aligned} -\Delta_{\mathbf{a}} U&=0 && \text{in }\mathcal{C}_\infty\cap V_N(w),\\ U&=F && \text{on }\mathcal{C}_\infty\cap\partial^+ V_N(w). \end{aligned} \end{alignat*} Furthermore let $\bar U\colon \hat V_N(w) \to\mathbb{R}$ be the unique solution of the continuous elliptic equation \begin{alignat*}{2} \begin{aligned} -{\overline{\A}}\bar\Delta \bar U&=0 && \text{in }w+(-1,N)^2,\\ \bar U&=[F]_{\mathcal{P}}*\eta && \text{on }\partial(w+(-1,N)^2). \end{aligned} \end{alignat*} Then we have the estimate \begin{equation}\label{e:homog_dirich} \|U-\bar U\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}\le CN^{2-\varepsilon_{\mathrm{Homog}}}\|\nabla F\|_{ L^\infty(\hat V_N(w))}. \end{equation} In addition, there is a uniform $s>0$ so that for any $w\in\mathbb{Z}^2$ and any $R,R'$ with $R\le R'$ there is an event $\mathcal{E}^{R,R',\mathrm{Homog}}_w\in\mathcal{F}_{Q_{9R'}(w)}$ such that, \[ \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Homog}}_w)\ge 1-C{\mathrm{e}}^{-R^s/C} \] and \[ \mathbf{P}(\mathsf{R}^{\mathrm{Homog}}_w> R,\mathcal{E}^{R,R',\mathrm{Homog}}_w)\le C{\mathrm{e}}^{-R'^s/C}. \] \end{theorem} We expect that the $L^2$-norms of $U$ and $\bar U$ are both at most of order $N^2\|\nabla F\|_{ L^\infty(\hat V_N(w)) }$, so \eqref{e:homog_dirich} effectively means a gain of $N^{-\varepsilon_{\mathrm{Homog}}}$ over the trivial estimate. \begin{proof} This is essentially \cite[Theorem 1]{AD18}. However, the result there is phrased differently. The result, as stated above, follows immediately from \cite[Theorem 3.2]{DG21}. There the parabolic version of the theorem is given, but the elliptic version follows by taking all functions constant in time. The locality follows again as in Lemma \ref{l:density_cluster}. 
Regarding the uniformity of $s$, see the proof of Theorem \ref{t:green_asympt}. \end{proof} We also need various functional inequalities and elliptic regularity estimates for $-\Delta_{\mathbf{a}}$-harmonic functions, most of which hold true beyond some random lengthscale. We collect them in the following theorem. We denote by $(U)_A$ the average of $U$ over a set $A$, and remark for future use that for $A\subset A'$ we have \begin{equation}\label{e:monotonicity_L2} \|U-(U)_A\|_{L^2(A)}\le \|U-(U)_{A'}\|_{L^2(A)}\le \|U-(U)_{A'}\|_{L^2(A')}. \end{equation} \begin{theorem}\label{t:regularity} For each $p>1/2$ and each ${\Lambda^+}/{\Lambda^-}\ge1$ there are $C>0$ and random variables $\mathsf{R}^{\mathrm{Regul}}_v\in\mathbb{N}$ indexed by $v\in\mathbb{Z}^2$, such that the following holds: Let $v\in\mathbb{Z}^2$, $R\ge \rho\ge\mathsf{R}^{\mathrm{Regul}}_v$. Let $U\colon\mathcal{C}_\infty\cap Q_{R+1}(v)\to\mathbb{R}$ satisfy $-\Delta_{\mathbf{a}} U=0$ in $\mathcal{C}_\infty\cap Q_R(v)$. Then we have the elliptic regularity estimate \begin{equation}\label{e:regularity} \left\|U-(U)_{\mathcal{C}_\infty\cap Q_{\rho}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\rho}(v))}\le \frac{C\rho^2}{R^2}\left\|U-(U)_{\mathcal{C}_\infty\cap Q_{R}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{R}(v))}. \end{equation} In addition, for any $v\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R\le R'$ there is an event $\mathcal{E}^{R,R',\mathrm{Regul}}_v\in\mathcal{F}_{Q_{9R'}(v)}$ such that, with $s>0$ uniform, \[ \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Regul}}_v)\ge 1-C{\mathrm{e}}^{-R^s/C} \] and \[ \mathbf{P}(\mathsf{R}^{\mathrm{Regul}}_v> R,\mathcal{E}^{R,R',\mathrm{Regul}}_v)\le C{\mathrm{e}}^{-R'^s/C}. \] \end{theorem} \begin{proof} This result can be extracted from \cite{AD18} and \cite{DG21}. Indeed, for \eqref{e:regularity} we can first assume that $R\ge 2\rho$, as otherwise the result follows trivially from \eqref{e:monotonicity_L2}.
We can also assume that $(U)_{\mathcal{C}_\infty\cap Q_R(v)}=0$, as otherwise we replace $U$ with $U-(U)_{\mathcal{C}_\infty\cap Q_R(v)}$. Then \eqref{e:regularity} follows from \cite[Theorem 2 (iii)]{AD18} by taking $k=0$. Namely, the only bounded $-\Delta_{\mathbf{a}}$-harmonic functions are constants, and so the optimal $\phi$ on the left-hand side in \cite[Theorem 2 (iii)]{AD18} is indeed $(U)_{\mathcal{C}_\infty\cap Q_{\rho}(v)}$. Once more, the locality follows as in Lemma \ref{l:density_cluster}, and the uniformity of $s$ as in Theorem \ref{t:green_asympt}. \end{proof} \subsection{Proof of Theorem \ref{t:percolation_cluster}, first part} \label{sec-4.2} We can now begin with the proof of Theorem \ref{t:percolation_cluster}. As mentioned above, Assumptions \ref{a:logupp} and \ref{a:sparseT} are much harder to establish than the other five assumptions. So in this subsection we only consider the five ``easier'' assumptions, and postpone the discussion of Assumptions \ref{a:logupp} and \ref{a:sparseT} to the following sections. We will use the results collected in the previous section as a toolbox, but we cannot directly use them in our setting. As a first step, we establish improved versions of Theorems \ref{t:green_asympt} and \ref{t:homog_dirich}. For Theorem \ref{t:green_asympt} the improvement consists in showing that the event $\{\mathsf{R}^{\mathrm{Green}}_v>R\}$ can be approximated by a local event. For Theorem \ref{t:homog_dirich} this fact is rather obvious. There a different improvement is necessary. Namely we claim that (at least under sufficiently strong assumptions on $F$) we can get rid of the interpolation $[F]_{\mathcal{P}}*\eta$. We also use the regularity results from Theorem \ref{t:regularity} to replace the $L^2$-estimate in \eqref{e:homog_dirich} by a pointwise estimate, provided we stay far enough from the boundary. We begin with a version of Theorem \ref{t:green_asympt}.
Recall that for $v\in \mathbb{Z}^2$, we denote by $v^*\in\mathbb{Z}^2$ the point in $\mathcal{C}_\infty$ closest to $v$ (with ties broken lexicographically). \begin{lemma}\label{l:green_asympt_improved} For each $p>1/2$ and each ${\Lambda^+}/{\Lambda^-}\ge1$ there are $C>0$ and random variables $\mathsf{K}'_v\in\mathbb{R}$ and $\mathsf{R}^{\mathrm{Green}'}_v\in\mathbb{N}$ indexed by $v\in\mathbb{Z}^2$ with the following property: if $u,v\in\mathbb{Z}^2$ satisfy $|u-v|\ge \mathsf{R}^{\mathrm{Green}'}_v$ and we also have $u\in\mathcal{C}_\infty$ or $|u-v|\ge \mathsf{R}^{\mathrm{Green}'}_u$, then \begin{equation}\label{e:green_asympt_improved} \left|G^{\mathbf{a}}(u^*,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-\mathsf{K}'_v\right|\le\frac{1}{{\overline{\A}}|u-v|^{1/2}}. \end{equation} In addition, for any $v\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R\le {R'}/{2}$ there is an event $\mathcal{E}^{R,R',\mathrm{Green}'}_v\in\mathcal{F}_{Q_{9R'}(v)}$ such that, with $s>0$ uniform, \begin{equation}\label{e:green_asympt_improved1} \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Green}'}_v)\ge 1-C{\mathrm{e}}^{-R^{s}/C} \end{equation} and \begin{equation}\label{e:green_asympt_improved2} \mathbf{P}(\mathsf{R}^{\mathrm{Green}'}_v> R,\mathcal{E}^{R,R',\mathrm{Green}'}_v)\le C{\mathrm{e}}^{-R'^{s}/C}. \end{equation} \end{lemma} In other words, the bad event $\mathsf{R}^{\mathrm{Green}'}_v>R$ implies that either the local bad event $ (\mathcal{E}^{R,R',\mathrm{Green}'}_v)^\complement$ or a very unlikely event occurs. \begin{proof} We take \begin{align*} \mathsf{R}^{\mathrm{Green'}}_v=\mathsf{R}^{\mathrm{Green}}_{v^*}\vee(\mathsf{R}^{\mathrm{Dist}}_v)^4\vee C', \qquad \mathsf{K}'_v=\mathsf{K}_{v^*}, \end{align*} with the objects on the right-hand sides of the equalities as in Theorem \ref{t:green_asympt} and \eqref{eq-Rvdist}, and where $C'$ is a constant to be chosen shortly. We also set $s$ as the minimum of the exponents $s$ in the quoted results.
Let us first show that with this choice we have \eqref{e:green_asympt_improved}. From Theorem \ref{t:green_asympt} we know that \begin{equation}\label{e:green_asympt_improved3} \left|G^{\mathbf{a}}(u^*,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u^*-v^*|-\mathsf{K}_{v^*}\right|\le\frac{1}{|u^*-v^*|^{3/4}}. \end{equation} The assumption $|u-v|\ge\mathsf{R}^{\mathrm{Green'}}_v$ implies that \[|v-v^*|_\infty=\mathsf{R}^{\mathrm{Dist}}_v\le(\mathsf{R}^{\mathrm{Green'}}_v)^{1/4}\le|u-v|^{1/4}.\] Similarly, if $|u-v|\ge\mathsf{R}^{\mathrm{Green'}}_u$ holds then \[|u-u^*|_\infty\le|u-v|^{1/4},\] and the same is trivially true if $u\in\mathcal{C}_\infty$. So under the given assumptions on $u,v$ we have in any case that \[\left|\frac{|u^*-v^*|}{|u-v|}-1\right|\le\frac{2\sqrt{2}}{|u-v|^{3/4}}\] and hence \eqref{e:green_asympt_improved3} implies \[\left|G^{\mathbf{a}}(u^*,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-\mathsf{K}'_v\right|\le\frac{C}{|u-v|^{3/4}}.\] If we choose $C'$ large enough, this implies \eqref{e:green_asympt_improved}. Regarding $\mathcal{E}^{R,R',\mathrm{Green'}}_v$, a first attempt might be to define \[\mathcal{E}^{R,R',\mathrm{Green'}}_v=\left\{\mathsf{R}^{\mathrm{Green'}}_v\le R\text{ or }\mathsf{R}^{\mathrm{Green'}}_v> R'\right\}\cap \mathcal{E}_v^{R',\mathrm{Clust}}\cap \left\{\mathsf{R}^{\mathrm{Dist}}_v\le R^{1/4}\right\}\cap \mathcal{E}_v^{R^{1/4},\mathrm{Clust}}.\] With this definition \eqref{e:green_asympt_improved1} and \eqref{e:green_asympt_improved2} would easily follow. However, the event is not local. The problem is that $\left\{\mathsf{R}^{\mathrm{Green}}_{v^*}\le R\right\}$ depends on all of $\mathcal{C}_\infty$, not just on $\mathcal{C}_\infty$ intersected with some large box. So we need to do another approximation step.
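Before doing so, let us spell out, for the reader's convenience, the elementary computation behind the last two displays; only the triangle inequality and a first-order expansion of the logarithm enter. Since $|x|\le\sqrt{2}|x|_\infty$ for $x\in\mathbb{R}^2$, the bounds on $|u-u^*|_\infty$ and $|v-v^*|_\infty$ give \[\bigl||u^*-v^*|-|u-v|\bigr|\le|u-u^*|+|v-v^*|\le2\sqrt{2}\,|u-v|^{1/4},\] and therefore, once $|u-v|$ exceeds a suitable absolute constant, \[\bigl|\log|u^*-v^*|-\log|u-v|\bigr|=\left|\log\left(1+\frac{|u^*-v^*|-|u-v|}{|u-v|}\right)\right|\le\frac{C}{|u-v|^{3/4}},\] which is the reason for requiring $\mathsf{R}^{\mathrm{Green'}}_v\ge C'$ in the definition above.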
We let $B_{r}(v)=\{u\in\mathbb{Z}^2\colon |u-v|\le r\}$ be the intersection of $\mathbb{Z}^2$ with a (Euclidean) ball of radius $r$ centered at $v$, and define the event $\mathcal{E}^{R,R',\mathrm{Green'}}_v$ as follows: we let $\mathcal{E}^{R,R',\mathrm{Green'}}_v$ be the certain event if $R\le C''$ for some $C''$ to be determined later, and if $R>C''$ we set \begin{align}\label{e:green_asympt_improved4} &\mathcal{E}^{R,R',\mathrm{Green'}}_v\nonumber\\ &=\left\{\exists K\text{ s.t. }\left|G^{\mathbf{a}}_{B_{R'}(v)}(u,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-K\right|\le\frac{1}{{\overline{\A}}|u-v|^{1/2}}\ \forall u\in B_{R'}(v)\text{ with }|u-v|\ge R\right\}\nonumber\\ &\qquad\cap\mathcal{E}_v^{R',\mathrm{Clust}}\cap \left\{\mathsf{R}^{\mathrm{Dist}}_{v}\le R^{1/4}\right\}\cap \mathcal{E}_v^{R^{1/4},\mathrm{Clust}}. \end{align} By the same argument as in the proof of Lemma \ref{l:dist_to_cluster} we have $\mathcal{E}^{R,R',\mathrm{Green'}}_v\in\mathcal{F}_{Q_{9R'}(v)}$. To see \eqref{e:green_asympt_improved1}, observe that we know \[\mathbf{P}\left(\mathcal{E}_v^{R',\mathrm{Clust}}\cap \left\{\mathsf{R}^{\mathrm{Dist}}_{v}\le R^{1/4}\right\}\cap \mathcal{E}_v^{R^{1/4},\mathrm{Clust}}\right)\ge 1-C{\mathrm{e}}^{-R^s/C}\] from Lemmas \ref{l:local_approx_cluster} and \ref{l:dist_to_cluster}. So it suffices to show that the first event on the right-hand side of \eqref{e:green_asympt_improved4} also occurs with probability at least $1-C{\mathrm{e}}^{-R^s/C}$. In fact, we will show that this event occurs whenever $\mathsf{R}^{\mathrm{Green}}_{v^*}\le {R}/{2}$ and $(\mathsf{R}^{\mathrm{Dist}}_v)^4\le R$ (which by Theorem \ref{t:green_asympt} and Lemma \ref{l:local_approx_cluster} is sufficient).
Indeed, if $\mathsf{R}^{\mathrm{Green}}_{v^*}\le {R}/{2}$, then there is a constant $K=K_{v^*}$ such that \[\left|G^{\mathbf{a}}(u,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v^*|-K\right|\le\frac{1}{|u-v^*|^{3/4}}\ \forall u\in \mathcal{C}_\infty\text{ with }|u-v^*|\ge \frac{R}{2}.\] By the same argument as earlier in the proof, we know that if $u\in\mathcal{C}_\infty$ and $|u-v|\ge \mathsf{R}^{\mathrm{Green}'}_v$ then $|v-v^*|\le|u-v|^{1/4}$, and so also \[\left|G^{\mathbf{a}}(u,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-K\right|\le\frac{C}{|u-v|^{3/4}}\ \forall u\in \mathcal{C}_\infty\text{ with }|u-v|\ge R.\] In particular, on $B_{R'}(v)\setminus B_{R'-1}(v)$, the oscillation of $G^{\mathbf{a}}(\cdot,v^*)$ is at most ${C}/{R'^{3/4}}$. Now we can employ the maximum principle on the domain $B_{R'}(v)$ with comparison function $G^{\mathbf{a}}(\cdot,v^*)-c_\pm$ for suitable $c_\pm$ to conclude that \[\left|G^{\mathbf{a}}_{B_{R'}(v)}(u,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u-v|-K'\right|\le\frac{1}{|u-v|^{3/4}}+\frac{C}{R'^{3/4}}\le \frac{C}{|u-v|^{3/4}}\] for some $K'$ (which can be taken as $K'=(\log R')/(2\pi{\overline{\A}})$) whenever $u\in B_{R'}(v)$ with $|u-v|\ge R$. As soon as $C''$ is large enough, the right-hand side here is bounded by ${1}/({{\overline{\A}}|u-v|^{1/2}})$, and so indeed $\mathcal{E}^{R,R',\mathrm{Green'}}_v$ occurs. Regarding \eqref{e:green_asympt_improved2} it suffices to check that if $\mathcal{E}^{R,R',\mathrm{Green'}}_v$ occurs and $\mathsf{R}^{\mathrm{Green'}}_v> R$, then already $\mathsf{R}^{\mathrm{Green'}}_v> R'$. This claim follows again from the maximum principle on $B_{R'}(v)$ by a very similar argument. \end{proof} In Theorem \ref{t:homog_dirich} the locality of the relevant event is much easier to show. Here the main challenge is to show a pointwise estimate instead of just an $L^2$ error estimate. Recall that $\bar\nabla$ and $\bar \Delta$ denote, respectively, the continuous gradient and Laplacian operators.
\begin{lemma}\label{l:homog_dirich_improv} For each $p>1/2$ and each ${\Lambda^+}/{\Lambda^-}\ge1$ there are $C>0$, $\varepsilon_{\mathrm{Homog'}}>0$ and random variables $\mathsf{R}^{\mathrm{Homog'}}_w\in\mathbb{N}$ and $\mathsf{R}^{\mathrm{Regul'}}_v\in\mathbb{N}$ indexed by $w,v\in\mathbb{Z}^2$ respectively, such that the following holds: Let $w\in\mathbb{Z}^2$, $N\ge \mathsf{R}^{\mathrm{Homog'}}_w$, let $F\in C^{1,1}(w+(-1,N)^2,\mathbb{R})$, and let $U\colon\mathcal{C}_\infty\cap \hat V_N(w)\to\mathbb{R}$ be the unique solution of the discrete elliptic equation \begin{alignat*}{2} \begin{aligned} -\Delta_{\mathbf{a}} U&=0 && \text{in }\mathcal{C}_\infty\cap V_N(w),\\ U&=F && \text{on }\mathcal{C}_\infty\cap\partial^+ V_N(w). \end{aligned} \end{alignat*} Furthermore let $\hat U\colon w+(-1,N)^2\to\mathbb{R}$ be the unique solution of the continuous elliptic equation \begin{alignat*}{2} \begin{aligned} -{\overline{\A}}\bar\Delta \hat U&=0 && \text{in }w+(-1,N)^2,\\ \hat U&=F && \text{on }\partial(w+(-1,N)^2). \end{aligned} \end{alignat*} Then for every $\delta>0$ there are constants $C_\delta$, $N_\delta$ such that if $N\ge N_\delta$ then for any $v\in V_N^\delta(w)$ with $\mathsf{R}^{\mathrm{Regul'}}_v\le\delta N$ we have the estimate \begin{equation}\label{e:homog_dirich_improv} |U(v^*)-\hat U(v^*)|\le C_\delta\left( N^{1-\varepsilon_{\mathrm{Homog'}}}(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}+(\mathsf{R}^{\mathrm{Regul'}}_v)^2\right)\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}.
\end{equation} In addition, for any $w,v\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R\le {R'}/{2}$ there are events $\mathcal{E}^{R,R',\mathrm{Homog'}}_w\in\mathcal{F}_{Q_{9R'}(w)}$ and $\mathcal{E}^{R,R',\mathrm{Regul'}}_v\in\mathcal{F}_{Q_{9R'}(v)}$ such that, with uniform $s>0$, \begin{align} \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Homog'}}_w)&\ge 1-C{\mathrm{e}}^{-R^{s}/C}\label{e:homog_dirich_improv1},\\ \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Regul'}}_v)&\ge 1-C{\mathrm{e}}^{-R^{s}/C}\label{e:homog_dirich_improv2}, \end{align} and \begin{align} \mathbf{P}(\mathsf{R}^{\mathrm{Homog'}}_w>R,\mathcal{E}^{R,R',\mathrm{Homog'}}_w)&\le C{\mathrm{e}}^{-R^{s}/C},\label{e:homog_dirich_improv3}\\ \mathbf{P}(\mathsf{R}^{\mathrm{Regul'}}_v>R,\mathcal{E}^{R,R',\mathrm{Regul'}}_v)&\le C{\mathrm{e}}^{-R^{s}/C}.\label{e:homog_dirich_improv4} \end{align} \end{lemma} \begin{proof} In order to improve the $L^2$-estimate from Theorem \ref{t:homog_dirich} into a pointwise estimate, we will use the regularity of $\hat U$ (which follows from standard Schauder estimates) and of $U$ (which follows from Theorem \ref{t:regularity}). More precisely, we use these regularity results to show that if $U(v)$ and $\hat U(v)$ were far apart, then the averages of $U$ and of $\hat U$ over $Q_S(v)$ for some mesoscopic length-scale $S$ would still be far apart, in contradiction to the $L^2$-estimate. \emph{Step 1: Preliminaries and $L^2$-estimate}\\ We let $\varepsilon_{\mathrm{Homog'}}={\varepsilon_{\mathrm{Homog}}}/{2}$ with the $\varepsilon_{\mathrm{Homog}}$ from Theorem \ref{t:homog_dirich}.
We take \begin{align*} \mathsf{R}^{\mathrm{Homog'}}_w&=\mathsf{R}^{\mathrm{Homog}}_w\vee \mathsf{R}^{\mathrm{Part},\varepsilon_{\mathrm{Homog}}}_w, \qquad \mathsf{R}^{\mathrm{Regul'}}_v=\mathsf{R}^{\mathrm{Regul}}_{v^*}\vee\mathsf{R}^{\mathrm{Dens}}_{v^*}\vee\mathsf{R}^{\mathrm{Dist}}_v, \end{align*} where the random scales on the right-hand sides are those from Lemma \ref{l:dist_to_cluster}, Lemma \ref{l:maximal_size_P}, Lemma \ref{l:density_cluster}, Theorem \ref{t:homog_dirich} and Theorem \ref{t:regularity}. We take $s$ as the minimum of the exponents $s$ in the quoted results. We also define \begin{align*} \mathcal{E}^{R,R',\mathrm{Homog'}}_w&=\mathcal{E}^{R,R',\mathrm{Homog}}_w\cap \mathcal{E}^{R,R',\mathrm{Part},\varepsilon_{\mathrm{Homog}}}_w,\\ \mathcal{E}^{R,R',\mathrm{Regul'}}_v&=\mathcal{E}^{R,R',\mathrm{Regul}}_v\cap \mathcal{E}^{R,R',\mathrm{Dens}}_{v}\cap\left\{(\mathsf{R}^{\mathrm{Dist}}_v)^4\le R\right\}. \end{align*} With these definitions, \eqref{e:homog_dirich_improv1} and \eqref{e:homog_dirich_improv3} and the fact that $\mathcal{E}^{R,R',\mathrm{Homog'}}_w\in\mathcal{F}_{w+[-9R',9R']^2}$ follow directly from Theorem \ref{t:homog_dirich} and Lemma \ref{l:maximal_size_P}. Similarly, \eqref{e:homog_dirich_improv2} and \eqref{e:homog_dirich_improv4} and $\mathcal{E}^{R,R',\mathrm{Regul'}}_v\in\mathcal{F}_{Q_{9R'}(v)}$ follow from Theorem \ref{t:regularity}, Lemma \ref{l:density_cluster} and Lemma \ref{l:dist_to_cluster}. The main challenge of the proof is to show \eqref{e:homog_dirich_improv}. For that purpose, we can assume that $v=v^*$. In this first step, we control the difference between $U$ and $\hat U$ in $L^2$. We claim that \begin{equation}\label{e:homog_dirich_improv5} \|U-\hat U\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}\le CN^{2-\varepsilon_{\mathrm{Homog}}}\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}.
\end{equation} With $\bar U$ as in the statement of Theorem \ref{t:homog_dirich}, we know that \begin{equation}\label{e:homog_dirich_improv6} \|U-\bar U\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}\le CN^{2-\varepsilon_{\mathrm{Homog}}}\|\nabla F\|_{ L^\infty(\hat V_N(w))}\le CN^{2-\varepsilon_{\mathrm{Homog}}}\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}, \end{equation} and so it suffices to control $\|\bar U-\hat U\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}$. This can be done using the maximum principle for the Laplacian. Indeed, $\bar U-\hat U$ is a solution of the (continuous) elliptic equation \begin{alignat*}{2} \begin{aligned} -{\overline{\A}}\bar\Delta (\bar U-\hat U)&=0 && \text{in }w+(-1,N)^2\\ \bar U-\hat U&=[F]_{\mathcal{P}}*\eta -F && \text{on }\partial(w+(-1,N)^2) \end{aligned} \end{alignat*} and so the (continuous) maximum principle implies that \begin{align*} \|\bar U-\hat U\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}&\le N\|\bar U-\hat U\|_{L^\infty(\mathcal{C}_\infty\cap V_N(w))}\\ &\le N\sup_{x\in\partial(w+(-1,N)^2)}\left|([F]_{\mathcal{P}}*\eta)(x) -F(x)\right|\\ &\le CN\max_{\substack{Q\in\mathcal{P}\\ Q\cap V_N(w)\neq\varnothing}}\ell(Q)\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \end{align*} From Lemma \ref{l:maximal_size_P} we know that the maximum on the right-hand side is at most $N^{\varepsilon_{\mathrm{Homog'}}}$, which is much less than $N^{1-\varepsilon_{\mathrm{Homog'}}}$. So, together with \eqref{e:homog_dirich_improv6}, we have shown \eqref{e:homog_dirich_improv5}. \emph{Step 2: Regularity of $U$ and $\hat U$}\\ Our task now is to improve the $L^2$-estimate \eqref{e:homog_dirich_improv5} to a pointwise estimate. For that purpose we use the knowledge that both $U$ and $\hat U$ are solutions of elliptic equations, and so we expect them to be regular on small scales, meaning that if they were far apart at some point $u$, then the $L^2$-norm of their difference would also be large.
To make this rigorous, recall that $(U)_A$ denotes the average of $U$ on a set $A$. We claim that for any $S$ with $\mathsf{R}^{\mathrm{Regul'}}_v\le S\le \delta N$ we have \begin{align} \left\|U-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Regul'}}_v}(v))}&\le C_\delta \mathsf{R}^{\mathrm{Regul'}}_v S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)},\label{e:regularity_U}\\ \left\|\hat U-(\hat U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Regul'}}_v}(v))}&\le C_\delta \mathsf{R}^{\mathrm{Regul'}}_v S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}.\label{e:regularity_hatU} \end{align} We begin by establishing \eqref{e:regularity_hatU}. This follows easily from standard elliptic regularity theory. Indeed, by the maximum principle we have \begin{align*} \left\|\hat U-(\hat U)_{w+(-1,N)^2}\right\|_{\bar L^2(w+(-1,N)^2)}&\le N\left(\sup_{x\in(w+[-1,N]^2)}\hat U(x)-\inf_{x\in w+[-1,N]^2}\hat U(x)\right)\\ &\le N\left(\sup_{x\in\partial(w+[-1,N]^2)}F(x)-\inf_{x\in\partial(w+[-1,N]^2)}F(x)\right)\\ &\le CN^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)} \end{align*} and by interior Schauder estimates for $-\bar\Delta$ we then have \[ \left\|\bar\nabla\hat U\right\|_{\bar L^\infty(w+(\delta N,(1-\delta)N)^2)}\le\frac{C_\delta}{N^2}\left\|\hat U-(\hat U)_{w+[-1,N]^2}\right\|_{\bar L^2(w+[-1,N]^2)}\le C_\delta \|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \] This means in particular that \[\sup_{x\in Q_S(v)}\hat U(x)-\inf_{x\in Q_S(v)}\hat U(x)\le C_\delta S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)},\] which easily implies \eqref{e:regularity_hatU}. For \eqref{e:regularity_U} we cannot proceed like this, because we do not have Schauder estimates for $-\Delta_{\mathbf{a}}$. Instead we will make use of Theorem \ref{t:regularity}, which can be thought of as a large-scale $C^{0,1}$-estimate.
But as we do not actually control the gradient of $U$, but only $L^2$-norms on various scales, the proof of \eqref{e:regularity_U} will be quite technical. The following argument is similar to the well-known proof that Campanato spaces embed into H\"{o}lder spaces. Note first that by the (discrete) maximum principle we have \begin{align*} \left\|U-(U)_{\mathcal{C}_\infty\cap V_N(w)}\right\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}&\le N\left(\sup_{u\in\mathcal{C}_\infty\cap V_N(w)}U(u)-\inf_{u\in\mathcal{C}_\infty\cap V_N(w)}U(u)\right)\\ &\le N\left(\sup_{x\in\partial(w+[-1,N]^2)}F(x)-\inf_{x\in\partial(w+[-1,N]^2)}F(x)\right)\\ &\le CN^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)} \end{align*} and so in particular, by Theorem \ref{t:regularity} and \eqref{e:monotonicity_L2}, \begin{equation}\label{e:homog_dirich_improv7} \begin{split} \left\|U-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_S(v))}&\le\frac{CS^2}{(\delta N)^2}\left\|U-(U)_{\mathcal{C}_\infty\cap Q_{\delta N}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\delta N}(v))}\\ &\le\frac{CS^2}{(\delta N)^2}\left\|U-(U)_{\mathcal{C}_\infty\cap V_N(w)}\right\|_{L^2(\mathcal{C}_\infty\cap V_N(w))}\\ &\le C_\delta S^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \end{split} \end{equation} If now $\mathsf{R}^{\mathrm{Regul'}}_v\ge {S}/{2}$, then \eqref{e:regularity_U} follows from another application of \eqref{e:monotonicity_L2}. So in the following we assume $\mathsf{R}^{\mathrm{Regul'}}_v\le{S}/{2}$. Let $k_0=\left\lfloor\log_2\left(\frac{S}{\mathsf{R}^{\mathrm{Regul'}}_v}\right)\right\rfloor$ and note that $k_0\ge1$. Pick some $k\in\{0,1,\ldots,k_0\}$.
Applying Theorem \ref{t:regularity} with length-scales $S$ and $2^{-k}S$ and using \eqref{e:homog_dirich_improv7} we find \begin{equation}\label{e:homog_dirich_improv8} \begin{split} \left\|U-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k}S}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k}S}(v))}&\le C\frac{2^{-2k}S^2}{(\delta N)^2}N^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}\\ &=C_\delta2^{-2k}S^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \end{split} \end{equation} Let now $k\in\{0,1,\ldots,k_0-1\}$. Using \eqref{e:homog_dirich_improv8} for $k$ and $k+1$ as well as the lower bound on the cluster density from Lemma \ref{l:density_cluster} and \eqref{e:monotonicity_L2}, we can now estimate that \begin{align*} &\left|(U)_{\mathcal{C}_\infty\cap Q_{2^{-k}S}(v)}-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k-1}S}(v)}\right|\\ &\le\frac{1}{2^{-k}S}\left\|(U)_{\mathcal{C}_\infty\cap Q_{2^{-k}S}(v)}-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k-1}S}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k-1}S}(v))}\\ &\le\frac{1}{2^{-k}S}\Big(\left\|U-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k}S}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k}S}(v))} +\left\|U-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k-1}S}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k-1}S}(v))}\Big)\\ &\le C_\delta 2^{-k}S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \end{align*} We can sum this estimate over $k\in\{0,1,\ldots,k_0-1\}$ and obtain that \begin{equation}\label{e:homog_dirich_improv9} \left|(U)_{\mathcal{C}_\infty\cap Q_{2^{-k_0}S}(v)}-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right|\le C_\delta S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}.
\end{equation} Next, we combine \eqref{e:homog_dirich_improv9} with \eqref{e:homog_dirich_improv8} (for $k=k_0$) and use that $\mathsf{R}^{\mathrm{Regul'}}_v\le 2^{-k_0}S\le2\mathsf{R}^{\mathrm{Regul'}}_v$ to see that \begin{align*} &\left\|U-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Regul'}}_v}(v))}\\ &\le\left\|U-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k_0}S}(v))}\\ &\le\left\|U-(U)_{\mathcal{C}_\infty\cap Q_{2^{-k_0}S}(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{2^{-k_0}S}(v))} +2^{-k_0}S\left|(U)_{\mathcal{C}_\infty\cap Q_{2^{-k_0}S}(v)}-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right|\\ &\le C_\delta2^{-2k_0}S^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}+C_\delta 2^{-k_0}S^2\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}\\ &\le C_\delta \mathsf{R}^{\mathrm{Regul'}}_v S \|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}, \end{align*} which is \eqref{e:regularity_U}. \emph{Step 3: Pointwise estimate}\\ Using the results from the previous steps, it is now easy to finish the proof of \eqref{e:homog_dirich_improv}. Let $v\in V_N^\delta(w)$. Consider $S$ with $\mathsf{R}^{\mathrm{Regul'}}_v\le S\le \delta N$ (which we will choose shortly). 
Then \eqref{e:regularity_U} and \eqref{e:regularity_hatU} imply that \begin{align} \left|U(v)-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right|&\le \left\|U-(U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Regul'}}_v}(v))}\le C_\delta \mathsf{R}^{\mathrm{Regul'}}_v S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)},\label{e:homog_dirich_improv10}\\ \left|\hat U(v)-(\hat U)_{\mathcal{C}_\infty\cap Q_S(v)}\right|&\le \left\|\hat U-(\hat U)_{\mathcal{C}_\infty\cap Q_S(v)}\right\|_{L^2(\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Regul'}}_v}(v))}\le C_\delta \mathsf{R}^{\mathrm{Regul'}}_v S\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}.\label{e:homog_dirich_improv11} \end{align} Furthermore, Lemma \ref{l:density_cluster}, the Cauchy-Schwarz inequality and \eqref{e:homog_dirich_improv5} imply that \begin{equation}\label{e:homog_dirich_improv12} \begin{split} \left|(U)_{\mathcal{C}_\infty\cap Q_S(v)}-(\hat U)_{\mathcal{C}_\infty\cap Q_S(v)}\right|&\le \frac{C}{S^2}\|U-\hat U\|_{L^1(\mathcal{C}_\infty\cap Q_S(v))}\\ &\le \frac{C}{S}\|U-\hat U\|_{L^2(\mathcal{C}_\infty\cap Q_S(v))}\\ &\le \frac{CN^{2-\varepsilon_{\mathrm{Homog}}}}{S}\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)}. \end{split} \end{equation} Combining \eqref{e:homog_dirich_improv10}, \eqref{e:homog_dirich_improv11} and \eqref{e:homog_dirich_improv12} we see that \[\left|U(v)-\hat U(v)\right|\le C_\delta\left(\mathsf{R}^{\mathrm{Regul'}}_v S+ \frac{N^{2-\varepsilon_{\mathrm{Homog}}}}{S}\right)\|\bar\nabla F\|_{\bar L^\infty(w+(-1,N)^2)},\] and all that remains is to optimize $S$. Choosing \[S=\sqrt{\frac{N^{2-\varepsilon_{\mathrm{Homog}}}}{\mathsf{R}^{\mathrm{Regul'}}_v}}\vee \mathsf{R}^{\mathrm{Regul'}}_v=\frac{N^{1-\varepsilon_{\mathrm{Homog'}}}}{(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}}\vee \mathsf{R}^{\mathrm{Regul'}}_v\] we certainly have $S\ge \mathsf{R}^{\mathrm{Regul'}}_v$, and for $N\ge N_\delta:=\delta^{-1/\varepsilon_{\mathrm{Homog'}}}$ we also have $S\le \delta N$.
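Let us briefly verify that this choice of $S$ indeed yields \eqref{e:homog_dirich_improv}; this is a two-line check using that $\varepsilon_{\mathrm{Homog'}}={\varepsilon_{\mathrm{Homog}}}/{2}$ and that $S\ge{N^{1-\varepsilon_{\mathrm{Homog'}}}}/{(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}}$. We have \[\mathsf{R}^{\mathrm{Regul'}}_v S\le N^{1-\varepsilon_{\mathrm{Homog'}}}(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}+(\mathsf{R}^{\mathrm{Regul'}}_v)^2\] as well as \[\frac{N^{2-\varepsilon_{\mathrm{Homog}}}}{S}\le N^{2-\varepsilon_{\mathrm{Homog}}}\frac{(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}}{N^{1-\varepsilon_{\mathrm{Homog'}}}}=N^{1-\varepsilon_{\mathrm{Homog'}}}(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2},\] so the sum of the two error terms is at most $2\bigl(N^{1-\varepsilon_{\mathrm{Homog'}}}(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}+(\mathsf{R}^{\mathrm{Regul'}}_v)^2\bigr)$, i.e. the right-hand side of \eqref{e:homog_dirich_improv} up to the constant.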
Now with this choice of $S$ we finally obtain \eqref{e:homog_dirich_improv}. \end{proof} Lemma \ref{l:green_asympt_improved} and Lemma \ref{l:homog_dirich_improv} now allow us to begin with the proof of Theorem \ref{t:percolation_cluster}. Namely, we prove that for $p>1/2$ the Gaussian free field on a percolation cluster satisfies all assumptions of Theorem \ref{t:mainthm} except \ref{a:logupp} and \ref{a:sparseT}. Before we begin with the actual proof, let us introduce notation for the relevant continuous Green's functions: We let \[\bar G^{\overline{\A}}(x,y)=-\frac{1}{2\pi{\overline{\A}}}\log|x-y|\] be the Green's function of $-{\overline{\A}}\bar\Delta$ on $\mathbb{R}^2$. Furthermore, for $Q\subset\mathbb{R}^2$ open and bounded, we let $\bar G^{\overline{\A}}_Q$ be the Green's function of $-{\overline{\A}}\bar\Delta$ on $Q$. That is, for each $y\in Q$ the function $\bar G^{\overline{\A}}_Q(\cdot,y)$ is the unique solution of \begin{alignat*}{2} \begin{aligned} -{\overline{\A}}\bar\Delta \bar G^{\overline{\A}}_Q(\cdot,y)&=\bar\delta_y && \text{in }Q,\\ \bar G^{\overline{\A}}_Q(\cdot,y)&=0 &&\text{on }\partial Q, \end{aligned} \end{alignat*} where $\bar\delta$ denotes the Dirac delta distribution. The idea of the proof of the first part of Theorem \ref{t:percolation_cluster} is now relatively straightforward: Lemma \ref{l:green_asympt_improved} gives us precise asymptotics for $G^{\mathbf{a}}$, and Lemma \ref{l:homog_dirich_improv} allows us to compare the difference of $G^{\mathbf{a}}$ and $G^{\mathbf{a}}_{V_N(w)}$ with the difference of $\bar G^{\overline{\A}}$ and $\bar G^{\overline{\A}}_{V_N(w)}$. So, combining the two lemmas, we can get very good estimates on $G^{\mathbf{a}}_{V_N(w)}$. \begin{proof}[Proof of Theorem \ref{t:percolation_cluster}, first part]$ $ \emph{Step 1: Preliminaries}\\ To prove Theorem \ref{t:percolation_cluster} we need to check whether $\sqrt{2\pi{\overline{\A}}}\varphi'^{{\mathbf{a}},N,w}$ satisfies Assumption \ref{as:main}.
Let us begin by relating this field to the Green's functions of the previous lemmas. Recall that the field $\varphi^{{\mathbf{a}},N,w}$ is defined a priori only on $\mathcal{C}_\infty\cap V_N(w)$, and that we extend it to $V_N(w)$ by setting $\varphi'^{{\mathbf{a}},N,w}_v=\varphi^{{\mathbf{a}},N,w}_{v^*}$. Now for $u,v\in V_N(w)$ we have \begin{equation}\label{e:estgreen} \mathbb{E}\left[\sqrt{2\pi{\overline{\A}}}\varphi'^{{\mathbf{a}},N,w}_{u}\sqrt{2\pi{\overline{\A}}}\varphi'^{{\mathbf{a}},N,w}_{v}\right]=2\pi{\overline{\A}} G^{\mathbf{a}}_{V_N(w)}(u^*,v^*), \end{equation} and so all the items in Assumption \ref{as:main} correspond to various statements on $2\pi{\overline{\A}} G^{\mathbf{a}}_{V_N(w)}$. We can assume that the $\varepsilon_{\mathrm{Regul'}}$ from Lemma \ref{l:homog_dirich_improv} is at most $1/3$. We make the choices \begin{align*} \mathsf{R}^{(1)}_v&=\mathsf{R}^{\mathrm{Green}'}_v\vee(\mathsf{R}^{\mathrm{Regul}'}_v)^{1/\varepsilon_{\mathrm{Regul'}}}, \qquad \mathsf{R}^{(2)}_w=\mathsf{R}^{\mathrm{Homog}'}_w, \end{align*} where the random scales on the right-hand sides are as in Lemmas \ref{l:green_asympt_improved} and \ref{l:homog_dirich_improv}. \emph{Step 2: Verification of \ref{a:micro} and \ref{a:macro}}\\ Let now $w\in\mathbb{Z}^2$. We define the function $H^{\mathbf{a}}_{V_N(w)}\colon (\mathcal{C}_\infty\cap V_N(w))\times(\mathcal{C}_\infty\cap V_N(w))\to\mathbb{R}$ by \[H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)=G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\mathsf{K}'_v-G^{\mathbf{a}}(u^*,v^*).\] This function can be thought of as the regular part of $G^{\mathbf{a}}_{V_N(w)}$ on $\mathcal{C}_\infty\cap V_N(w)$.
Similarly, we define its continuous analogue $\bar H^{\overline{\A}}_{w+(-1,N)^2}\colon (w+(-1,N)^2)\times(w+(-1,N)^2)\to\mathbb{R}$ by \[\bar H^{\overline{\A}}_{w+(-1,N)^2}(x,y)=\bar G^{\overline{\A}}_{w+(-1,N)^2}(x,y)-\bar G^{\overline{\A}}(x,y).\] Our main claim is that if $u,v\in V_N^\delta(w)$ and $\mathsf{R}^{(1)}_u\vee\mathsf{R}^{(1)}_v\le\delta N$, $\mathsf{R}^{(2)}_w\le N$ then \begin{equation}\label{e:estgreen1} \left|H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\bar H^{\overline{\A}}_{w+(-1,N)^2}(u^*,v^*)\right|\le\frac{C_\delta}{N^{\varepsilon_{\mathrm{Homog'}}/2}}. \end{equation} To see this, note that $\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)$ is harmonic on $w+(-1,N)^2$. Furthermore, $\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)$ is Lipschitz-continuous. Indeed, by interior Schauder estimates we have \[\left\|\bar\nabla\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)\right\|_{\bar L^\infty(w+(\delta N,(1-\delta)N)^2)}\le \frac{C_\delta}{N}\] while near the boundary we can estimate the gradient of $\bar G^{\overline{\A}}(\cdot,v^*)$ and $\bar G^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)$ separately (for the latter there are no issues near the boundary because of the Schwarz reflection principle) and find that \begin{align*} & \left\|\bar\nabla\bar G^{\overline{\A}}(\cdot,v^*)\right\|_{\bar L^\infty((w+(-1,N)^2)\setminus(w+(\delta N,(1-\delta)N)^2))}+\!\left\|\bar\nabla\bar G^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)\right\|_{\bar L^\infty((w+(-1,N)^2)\setminus(w+(\delta N,(1-\delta)N)^2))}\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \le \frac{C_\delta}{N}.\end{align*} Combining the last two estimates, we indeed obtain the global Lipschitz estimate \begin{equation}\label{e:estgreen2} \left\|\bar\nabla\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)\right\|_{\bar L^\infty(w+(-1,N)^2)}\le \frac{C_\delta}{N}.
\end{equation} Now let $\tilde H^{\mathbf{a}}_{V_N(w)}\colon (\mathcal{C}_\infty\cap V_N(w))\times(\mathcal{C}_\infty\cap V_N(w))\to\mathbb{R}$ be such that $\tilde H^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)$ is the unique solution of \begin{alignat*}{2} \begin{aligned} -\Delta_{\mathbf{a}} \tilde H^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)&=0 && \text{in }\mathcal{C}_\infty\cap V_N(w),\\ \tilde H^{\mathbf{a}}_{V_N(w)}(\cdot,v^*) &=\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*) && \text{on }\mathcal{C}_\infty\setminus V_N(w). \end{aligned} \end{alignat*} From Lemma \ref{l:homog_dirich_improv} we obtain that \begin{align}\label{e:estgreen3} &\left|\tilde H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\bar H^{\overline{\A}}_{w+(-1,N)^2}(u^*,v^*)\right|\\ &\nonumber \qquad\le C_\delta\left( N^{1-\varepsilon_{\mathrm{Homog'}}}(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/2}+(\mathsf{R}^{\mathrm{Regul'}}_v)^2\right)\left\|\bar\nabla\bar H^{\overline{\A}}_{w+(-1,N)^2}(\cdot,v^*)\right\|_{\bar L^\infty(w+(-1,N)^2)}. \end{align} We can estimate the right-hand side here further. On the one hand, the Lipschitz norm is bounded by \eqref{e:estgreen2}; on the other hand, our assumption $\delta N\ge\mathsf{R}^{(1)}_v\ge(\mathsf{R}^{\mathrm{Regul'}}_v)^{1/\varepsilon_{\mathrm{Regul'}}}$ implies that $\mathsf{R}^{\mathrm{Regul'}}_v\le C_\delta N^{\varepsilon_{\mathrm{Regul'}}}$. So from \eqref{e:estgreen3} we obtain \begin{equation}\label{e:estgreen4} \left|\tilde H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\bar H^{\overline{\A}}_{w+(-1,N)^2}(u^*,v^*)\right|\le\frac{C_\delta}{N^{\varepsilon_{\mathrm{Homog'}}/2}}. \end{equation} Moreover, $H^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)$ and $\tilde H^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)$ are close.
Indeed, both are $-\Delta_{\mathbf{a}}$-harmonic functions on $\mathcal{C}_\infty\cap V_N(w)$, and by Lemma \ref{l:green_asympt_improved} and our assumption $\delta N\ge\mathsf{R}^{\mathrm{Green'}}_v$ we have \[\sup_{u'^*\in\mathcal{C}_\infty\cap\partial(w+[-1,N]^2)}\left|H^{\mathbf{a}}_{V_N(w)}(u'^*,v^*)-\tilde H^{\mathbf{a}}_{V_N(w)}(u'^*,v^*)\right|\le\frac{1}{(\delta N)^{1/2}}\le \frac{C_\delta}{N^{1/2}}\] so that by the discrete maximum principle we also have \begin{equation}\label{e:estgreen5} \left|H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\tilde H^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\right|\le\frac{C_\delta}{N^{1/2}}. \end{equation} Combining \eqref{e:estgreen4} and \eqref{e:estgreen5}, we have established \eqref{e:estgreen1}. Using this estimate, we can now verify \ref{a:micro} and \ref{a:macro}. We begin with \ref{a:micro}. The functions $\bar G^{\overline{\A}}_{w+(-1,N)^2}$ and $\bar H^{\overline{\A}}_{w+(-1,N)^2}$ satisfy the scaling relations \begin{align} \bar G^{\overline{\A}}_{w+(-1,N)^2}(x,y)&=\bar G^{\overline{\A}}_{(0,1)^2}\left(\frac{x-w+e}{N+1},\frac{y-w+e}{N+1}\right)\label{e:estgreen7G}\\ \bar H^{\overline{\A}}_{w+(-1,N)^2}(x,y)&=\bar H^{\overline{\A}}_{(0,1)^2}\left(\frac{x-w+e}{N+1},\frac{y-w+e}{N+1}\right)+\frac{1}{2\pi{\overline{\A}}}\log (N+1)\label{e:estgreen7} \end{align} where $e:=(1,1)\in\mathbb{Z}^2$. Furthermore, by standard elliptic regularity theory, $\bar H^{\overline{\A}}_{(0,1)^2}$ is a smooth function on $(0,1)^2\times(0,1)^2\setminus\{(x,x)\colon x\in(0,1)^2\}$ that has a continuous extension onto the diagonal. So \[f(x):=2\pi{\overline{\A}}\lim_{\substack{x'\to x\\x''\to x}}\bar H^{\overline{\A}}_{(0,1)^2}(x',x'')\] is a well-defined continuous function on $(0,1)^2$. The function $f$ is also bounded above (as can be seen from the maximum principle and the fact that on $\partial(0,1)^2$ we have $\bar H^{\overline{\A}}_{(0,1)^2}(\cdot,x)=-\bar G^{\overline{\A}}(\cdot,x)$, which is bounded by $\frac{1}{2\pi{\overline{\A}}}\log\sqrt{2}$).
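For completeness, here is the computation behind the last parenthetical claim; only the explicit formula for $\bar G^{\overline{\A}}$ enters. For $x\in(0,1)^2$ and $y\in\partial(0,1)^2$ the Dirichlet Green's function vanishes, so that \[\bar H^{\overline{\A}}_{(0,1)^2}(y,x)=-\bar G^{\overline{\A}}(y,x)=\frac{1}{2\pi{\overline{\A}}}\log|y-x|\le\frac{1}{2\pi{\overline{\A}}}\log\sqrt{2},\] as $|y-x|\le\sqrt{2}$, the diameter of the unit square. Since $\bar H^{\overline{\A}}_{(0,1)^2}(\cdot,x)$ is harmonic in $(0,1)^2$, the maximum principle propagates this upper bound to the interior, and in the limit we obtain $f\le\log\sqrt{2}$.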
We also choose $g(u,v)=2\pi{\overline{\A}}\left(G^{\mathbf{a}}(u^*,v^*)+\mathsf{K}'_v\right)$. With these definitions, \eqref{e:estgreen1} can be rewritten as \[\Big|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}g(u,v)-\frac{1}{2\pi{\overline{\A}}}\log (N+1)-\bar H^{\overline{\A}}_{(0,1)^2}\left(\frac{u^*-w+e}{N+1},\frac{v^*-w+e}{N+1}\right)\Big|\le\!\frac{C_\delta}{N^{\varepsilon_{\mathrm{Homog'}}/2}}\] and hence also \begin{equation}\label{e:estgreen12} \begin{split} &\left|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}g(u,v)-\frac{1}{2\pi{\overline{\A}}}\log N-\frac{1}{2\pi{\overline{\A}}}f\left(\frac{v-w}{N}\right)\right|\\ &\;\le\frac{C_\delta}{N^{\varepsilon_{\mathrm{Homog'}}/2}}+\left|\bar H^{\overline{\A}}_{(0,1)^2}\left(\frac{u^*-w+e}{N+1},\frac{v^*-w+e}{N+1}\right)-\bar H^{\overline{\A}}_{(0,1)^2}\left(\frac{v-w}{N},\frac{v-w}{N}\right)\right|+\!\frac{1}{2\pi{\overline{\A}}}\log\frac{N+1}{N}. \end{split} \end{equation} Under the assumption $|u-v|_\infty\le L$ and $|u-u^*|^4\vee|v-v^*|^4\le\mathsf{R}^{(1)}_u\vee\mathsf{R}^{(1)}_v\le\delta N$ we have that $|u^*-v|\vee|v^*-v|\le L+N^{1/4}$, and the pairs $\left(\frac{u^*-w+e}{N+1},\frac{v^*-w+e}{N+1}\right)$ and $\left(\frac{v-w}{N},\frac{v-w}{N}\right)$ are elements of $\left(\frac{\delta}{2},1-\frac{\delta}{2}\right)^2\times\left(\frac{\delta}{2},1-\frac{\delta}{2}\right)^2$. The function $\bar H^{\overline{\A}}_{(0,1)^2}$ is uniformly continuous on that set, and therefore the second summand on the right-hand side of \eqref{e:estgreen12} is bounded by a function of $\delta$ and ${L}/{N}$ that goes to 0 as $N\to\infty$. From this we immediately obtain the desired estimate for $2\pi{\overline{\A}} G^{\mathbf{a}}_{V_N(w)}$, i.e. \ref{a:micro}. The argument for \ref{a:macro} is similar. We choose $h(x,y)=2\pi{\overline{\A}}\bar G^{\overline{\A}}_{(0,1)^2}(x,y)$.
The estimate \eqref{e:estgreen1} and the scaling relation \eqref{e:estgreen7G} imply that \begin{equation}\label{e:estgreen6} \begin{split} &\left|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}h\left(\frac{u-w}{N},\frac{v-w}{N}\right)\right|\\ &\quad\le\frac{C_\delta}{N^{\varepsilon_{\mathrm{Homog'}}/2}}+\left|G^{\mathbf{a}}(u^*,v^*)-\mathsf{K}'_v-\bar G^{\overline{\A}}(u^*,v^*)\right|\\ &\qquad+\frac{1}{2\pi{\overline{\A}}}\left|h\left(\frac{u^*-w+e}{N+1},\frac{v^*-w+e}{N+1}\right)-h\left(\frac{u-w}{N},\frac{v-w}{N}\right)\right|. \end{split} \end{equation} Under the assumption $\mathsf{R}^{(1)}_u\vee\mathsf{R}^{(1)}_v\le\delta N$ we know that $|u^*-u|\vee|v^*-v|\le N^{1/4}$, and so by Lemma \ref{l:green_asympt_improved} the second term on the right-hand side of \eqref{e:estgreen6} is bounded by $|u-v|^{-1/2}$ for $|u-v|\ge \mathsf{R}^{(1)}_v$. The third term on the right-hand side is small because we know $|u^*-v^*|\ge{N}/({2L})$, and $h$ is uniformly continuous away from the diagonal and the boundary of $(0,1)^2$. Thus we obtain \ref{a:macro}. \emph{Step 3: Verification of \ref{a:logbd}}\\ This step is different from the previous one in that we make no use of \eqref{e:estgreen1}. Instead we will argue directly using the maximum principle. To do so, we begin with a rather crude estimate. Namely, we claim that for any $R$ and $v,u,u'$ we have \begin{equation}\label{e:estgreen8} 0\le G^{\mathbf{a}}_{Q_R(v)}(u^*,u'^*)\le\frac{\mathsf{d}_{\mathcal{C}_\infty}(u^*,\partial Q_R(v))\wedge \mathsf{d}_{\mathcal{C}_\infty}(u'^*,\partial Q_R(v))}{\Lambda^-} \end{equation} where we recall that $\mathsf{d}_{\mathcal{C}_\infty}$ denotes the graph distance on $\mathcal{C}_\infty$. Indeed, the lower bound is trivial. For the upper bound, it suffices (by symmetry) to prove the upper bound $\frac{\mathsf{d}_{\mathcal{C}_\infty}(u^*, \partial Q_R(v))}{\Lambda^-}$. Moreover, it suffices (by the maximum principle) to consider the case $u=u'=u^*$.
It is helpful to consider $G^{\mathbf{a}}_{Q_R(v)}(\cdot,u)$ as the voltage distribution that arises when we let a unit current flow from $u$ through the electrical network given by the conductances ${\mathbf{a}}$. Then for each edge the current through it is at most 1, and so the voltage drop along that edge is at most its resistance. In other words, for each edge the difference of $G^{\mathbf{a}}_{Q_R(v)}(\cdot,u)$ at the two endpoints is at most ${1}/{\Lambda^-}$. If we use this estimate along each of the edges of a path $\gamma$ consisting of $\mathsf{d}_{\mathcal{C}_\infty}(u,\partial Q_R(v))$ edges connecting $u$ to a vertex in $\mathcal{C}_\infty\cap \partial Q_R(v)$, we directly obtain \[G^{\mathbf{a}}_{Q_R(v)}(u,u)\le\frac{\mathsf{d}_{\mathcal{C}_\infty}(u,\partial Q_R(v))}{\Lambda^-}.\] This establishes \eqref{e:estgreen8}. Note that if $v\in \mathcal{C}_\infty\cap Q_R(w)$, then $\mathsf{d}_{\mathcal{C}_\infty}(v,\partial Q_R(w))\le 4R^2$. Thus, the bound in \eqref{e:estgreen8} is at worst quadratic. This motivates the choice $\alpha_\delta(R)=C'_\delta (R^2+1)$ (for a constant $C'_\delta$ that depends on $\delta$ and will be chosen later). To verify \ref{a:logbd} we have to check that for $u,v\in V_N^\delta(w)$ we have \begin{equation}\label{e:estgreen9} \left|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}\log N+\frac{1}{2\pi{\overline{\A}}}\log_+|u-v|\right|\le C_\delta\left((\mathsf{R}^{(1)}_u)^2+(\mathsf{R}^{(1)}_v)^2+1\right). \end{equation} As the estimate \eqref{e:estgreen9} is symmetric in $u,v$, we can assume $\mathsf{R}^{(1)}_u\le\mathsf{R}^{(1)}_v$. If $\mathsf{R}^{(1)}_v>\delta N$ then both the upper and lower bound in \eqref{e:estgreen9} follow from \eqref{e:estgreen8} (provided that $C_\delta$ is large enough). So we can assume that $\mathsf{R}^{(1)}_v\le\delta N$. Lemma \ref{l:green_asympt_improved} then implies that $G^{\mathbf{a}}(\cdot,v^*)$ has oscillation bounded by $C_\delta$ on $\mathcal{C}_\infty\cap V_N(w)$.
Hence the maximum principle on $\mathcal{C}_\infty\cap V_N(w)$ with comparison functions $G^{\mathbf{a}}(\cdot,v^*)+\frac{1}{2\pi{\overline{\A}}}\log N-\mathsf{K}'_v\pm C_\delta$ implies that \begin{equation}\label{e:estgreen10} \left|G^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)-\frac{1}{2\pi{\overline{\A}}}\log N-G^{\mathbf{a}}(\cdot,v^*)\right|\le C_\delta \end{equation} on $\mathcal{C}_\infty\cap V_N(w)$. If now $|u^*-v|>\mathsf{R}^{(1)}_v=\mathsf{R}^{(1)}_v\vee\mathsf{R}^{(1)}_u$, then \eqref{e:estgreen9} directly follows from \eqref{e:estgreen10} and Lemma \ref{l:green_asympt_improved}. If on the other hand $|u^*-v|\le\mathsf{R}^{(1)}_v$, we use the maximum principle once again: the estimate \eqref{e:estgreen10} implies that \[\left|G^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)-\frac{1}{2\pi{\overline{\A}}}\log \frac{N}{\mathsf{R}^{(1)}_v}\right|\le C_\delta\] on $\mathcal{C}_\infty\cap Q_{\mathsf{R}^{(1)}_v}(v)$, and so the maximum principle on $\mathcal{C}_\infty\cap Q_{\mathsf{R}^{(1)}_v}(v)$ with comparison function $G^{\mathbf{a}}_{Q_{\mathsf{R}^{(1)}_v}(v)}$ implies that \[\left|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-G^{\mathbf{a}}_{Q_{\mathsf{R}^{(1)}_v}(v)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}\log \frac{N}{\mathsf{R}^{(1)}_v}\right|\le C_\delta.\] Remembering \eqref{e:estgreen8}, this implies that \[\left|G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}\log N\right|\le C_\delta+\frac{1}{2\pi{\overline{\A}}}\log \mathsf{R}^{(1)}_v.\] We also know that \[\log_+|u-v|\le\log_+\left((\mathsf{R}^{(1)}_u)^{1/4}+|u^*-v|\right)\le C(\mathsf{R}^{(1)}_u+\mathsf{R}^{(1)}_v)\] and so we obtain \eqref{e:estgreen9} also in this case. \emph{Step 4: Verification of \ref{a:sparseR}}\\ In order to verify Assumption \ref{a:sparseR}, we need an upper bound on the number of points where $\mathsf{R}^{(1)}$ or $\mathsf{R}^{(2)}$ are large.
In view of Lemmas \ref{l:green_asympt_improved} and \ref{l:homog_dirich_improv}, our strategy is to write the events $\{\mathsf{R}^{(i)}_v\ge R\}$ as the union of a local event and an event with very small probability. The former events can be controlled using a concentration bound for sums of independent indicators, while the occurrence of the latter events can be controlled with a first moment bound. We give details, beginning with $\mathsf{R}^{(1)}$. For $R\le R'$ we define the event \[\mathcal{E}^{R,R',(1)}_v=\mathcal{E}^{R,R',\mathrm{Green'}}_v\cap \mathcal{E}^{R,R',\mathrm{Regul'}}_v.\] From Lemmas \ref{l:green_asympt_improved} and \ref{l:homog_dirich_improv} we conclude that there are $C$ and $s>0$ such that \[ \mathbf{P}(\mathcal{E}^{R,R',(1)}_v)\ge 1-C{\mathrm{e}}^{-R^{s}/C} \] and \[ \mathbf{P}(\mathsf{R}^{(1)}_v> R,\mathcal{E}^{R,R',(1)}_v)\le C{\mathrm{e}}^{-R'^{s}/C} \] and that moreover $\mathcal{E}^{R,R',(1)}_v\in\mathcal{F}_{Q_{9R'}(v)}$. So we know that if $\mathsf{R}^{(1)}_v>R$ then either the local event $(\mathcal{E}^{R,R',(1)}_v)^\complement$ or a very unlikely event occurs. Now, given $R$, we choose $R'=\sqrt{{N}/{L}}$.
Then for $w'\in \mathcal{W}_{N,\left\lfloor \frac{N}{L}\right\rfloor}(w_N)$ we have \begin{align*} \left|\left\{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w'): \mathsf{R}^{(1)}_{v}> R\right\}\right|&=\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{\mathsf{R}^{(1)}_v>R}\\ &\le\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}+\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{\mathsf{R}^{(1)}_v>R,\mathcal{E}^{R,R',(1)}_v}, \end{align*} and thus, for a constant $C'$ to be fixed later, \begin{equation}\label{e:estgreen11} \begin{split} &\mathbf{P}\left(\left|\left\{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w'): \mathsf{R}^{(1)}_{v}> R\right\}\right|\ge C'\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\right)\\ &\le \mathbf{P}\bigg(\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')} \mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}\ge \frac{C'}{2}\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\bigg)\\ &\qquad \qquad +\mathbf{P}\bigg(\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{\mathsf{R}^{(1)}_v>R,\mathcal{E}^{R,R',(1)}_v}\ge \frac{C'}{2}\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\bigg). \end{split} \end{equation} We can bound the second summand here very crudely with a union bound and obtain \begin{align*} &\mathbf{P}\bigg(\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{\mathsf{R}^{(1)}_v>R,\mathcal{E}^{R,R',(1)}_v}\ge\frac{C'}{2}\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\bigg)\\ &\le \mathbf{P}\left(\exists v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')\colon\mathsf{R}^{(1)}_v>R,\mathcal{E}^{R,R',(1)}_v\right) \le C\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-(N/L)^{s/2}/C}\le C{\mathrm{e}}^{-(N/L)^{s/2}/C}. \end{align*} For the first summand, we use that the events $\mathcal{E}^{R,R',(1)}_v$ and $\mathcal{E}^{R,R',(1)}_{v'}$ are independent when $|v-v'|\ge18R'$.
Thus we can partition $V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')$ into $(18R')^2=324{N}/{L}$ classes $(A_i)_{i=1}^{(18R')^2}$ of at most $C\left(\frac{N}{18R'L}\right)^2\le C\frac{N}{L}$ elements such that for each $i$ the events corresponding to $v\in A_i$ are independent. Using now, for example, Hoeffding's inequality on each $A_i$, we find \begin{align*} &\mathbf{P}\bigg(\sum_{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w')}\mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}\ge\frac{C'}{2}\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\bigg)\\ &\le\sum_{i=1}^{324N/L}\mathbf{P}\left(\sum_{v\in A_i}\mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}\ge\frac{C'}{648}\frac{N}{L}{\mathrm{e}}^{-R^s/C'}\right)\\ &\le C\frac{N}{L}\exp\left(-\frac{2}{|A_i|}\left(\frac{C'}{648}\frac{N}{L}{\mathrm{e}}^{-R^s/C'}-\mathbb{E}\left(\sum_{v\in A_i}\mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}\right)\right)^2\right)\\ &\le C\frac{N}{L}\exp\left(-\frac{2L}{CN}\left(\frac{C'}{648}\frac{N}{L}{\mathrm{e}}^{-R^s/C'}-C\frac{N}{L}{\mathrm{e}}^{-R^s/C}\right)^2\right) \le C\frac{N}{L}\exp\left(-C\frac{N}{L}{\mathrm{e}}^{-R^s/C}\right), \end{align*} where the last step holds provided we have chosen $C'$ large enough.
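For the reader's convenience, the version of Hoeffding's inequality used here is the following standard one: if $X_1,\dots,X_n$ are independent random variables with values in $[0,1]$ and $S=\sum_{i=1}^nX_i$, then for every $t\ge0$ \[\mathbf{P}\left(S\ge\mathbb{E}S+t\right)\le\exp\left(-\frac{2t^2}{n}\right).\] It is applied to each class $A_i$ separately, with $n=|A_i|$ and $X_v=\mathbbm{1}_{(\mathcal{E}^{R,R',(1)}_v)^\complement}$ for $v\in A_i$.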
Inserting these results into \eqref{e:estgreen11}, we have shown that \begin{align*} &\mathbf{P}\left(\left|\left\{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w'): \mathsf{R}^{(1)}_{v}> R\right\}\right|\ge C'\left(\frac{N}{L}\right)^2{\mathrm{e}}^{-R^s/C'}\right)\\ &\qquad \qquad \le C\left(\frac{N}{L}\exp\left(-C\frac{N}{L}{\mathrm{e}}^{-R^s/C}\right)+{\mathrm{e}}^{-(N/L)^{s/2}/C}\right).\end{align*} As the right-hand side is exponentially small in $N$, it follows from the Borel-Cantelli lemma that for each fixed $R$ and $L$ \[\mathbf{P}\left(\limsup_{N\to\infty}\left(\frac{L}{N}\right)^2\left|\left\{v\in V_{\left\lfloor \frac{N}{L}\right\rfloor}(w'): \mathsf{R}^{(1)}_{v}> R\right\}\right|\ge C'{\mathrm{e}}^{-R^s/C'}\right)=0\] and this in turn easily implies the assertion on $\mathsf{R}^{(1)}$ in Assumption \ref{a:sparseR}. The assertion on $\mathsf{R}^{(2)}$ is shown very similarly. We define \[\mathcal{E}^{R,R',(2)}_w=\mathcal{E}^{R,R',\mathrm{Homog'}}_w\] and use that $\mathsf{R}^{(2)}_w\ge L'$ implies that either $\mathcal{E}^{L',R',(2)}_w$ does not hold or the event $\left\{\mathsf{R}^{(2)}_w\ge L'\right\}\cap\mathcal{E}^{L',R',(2)}_w$ occurs. The latter event is so unlikely that with the choice $R'=\sqrt{\frac{N}{L}}$ we can control its occurrence with a union bound, while the former event is local, so that upon partitioning into $CR'^2$ classes we gain independence and can argue using Hoeffding's inequality as before. \emph{Step 5: Verification of \ref{a:lln}}\\ When we consider $\delta_{\mathbb{P}^{L',w''}}$ as a random variable with values in $\mathcal{M}_1(V_{L'}(w''))$, then it is $\mathcal{F}_{Q_{L'}(w'')}$-measurable and its law is invariant under translations of ${\mathbf{a}}$. So Assumption \ref{a:lln} follows from an appropriate ergodic theorem for random variables with values in the Polish space $\mathcal{M}_1(\mathbb{R}^{V_{L'}(0)})$.
\end{proof} \subsection{Large deviation results for highly supercritical percolation clusters} \label{sec-4.3} All previous results were valid for any $p>\frac12$, and we did not care about the dependence on $p$. For the quantitative estimate in \ref{a:sparseT} we cannot be so careless. As a preparation for the second part of the proof of Theorem \ref{t:percolation_cluster}, we collect here three large deviation results for $\mathcal{C}_\infty$. We no longer care about whether the relevant events are (asymptotically) local; however, we are now interested in the decay rates as $p$ gets close to 1. The first result complements Lemma \ref{l:density_cluster}; however, we now want a $p$-independent lower bound on the density of the cluster. Of course this is only possible if $p$ is close enough to 1. \begin{lemma}\label{l:density_cluster_highlysc} For any $\kappa>0$ there is $p^{\mathrm{Dens}}_\kappa<1$ with the following property. For $p^{\mathrm{Dens}}_\kappa\le p\le1$ there is $C>0$ such that for any $v\in\mathbb{Z}^2$, $R\in\mathbb{N}$ we have \[\mathbf{P}\left(\left|\mathcal{C}_\infty\cap Q_R(v)\right|\ge\frac{R^2}{2}\right)\ge 1-C{\mathrm{e}}^{-\kappa R}.\] \end{lemma} \begin{proof} This is essentially a contour argument. An even stronger result can be found in \cite{DP96}. \end{proof} The second result is an estimate for the isoperimetric constant of sufficiently large subsets of $\mathcal{C}_\infty$. For $A\subset\mathcal{C}_\infty$ we denote by \[\partial^+A=\{u\in\mathcal{C}_\infty\setminus A\colon\exists v\in A \,\text{such that}\, (u,v)\in E(\mathcal{C}_\infty)\}\] the outer (vertex) boundary of $A$ in $\mathcal{C}_\infty$. Our result is then the following. \begin{lemma}\label{l:isoperimetry} For any $\kappa>0$ there is $p^{\mathrm{Iso}}_\kappa<1$ with the following property.
For $p^{\mathrm{Iso}}_\kappa\le p\le1$ there is $C>0$ such that for any $v\in\mathbb{Z}^2$, $R\in\mathbb{N}$ we have \[\mathbf{P}\left(\exists A\subset\mathcal{C}_\infty\colon v\in A,|A|\ge R^2,|\partial^+A|\le |A|^{1/2}\right)\le C{\mathrm{e}}^{-\kappa R}.\] \end{lemma} \begin{proof} This is once again a straightforward Peierls' argument. The same argument (with the $p$-dependence not made explicit) can be found for example in \cite[Proof of Theorem 2.4]{BM03}. \end{proof} The third result is a large-deviation result for the graph distance (or chemical distance), which essentially follows from \cite{AP96}. For this specific version we use a result of \cite{GM07}. \begin{lemma}\label{l:chemical_distance} For any $\kappa>0$ there is $p^{\mathrm{Chem}}_\kappa<1$ with the following property. For $p^{\mathrm{Chem}}_\kappa\le p\le1$ there is $C>0$ such that for any $v\in\mathbb{Z}^2$, $R\in\mathbb{N}$ we have \begin{equation}\label{e:chemical_distance1} \mathbf{P}\left(\max_{u,u'\in \mathcal{C}_\infty\cap Q_R(v)}\mathsf{d}_{\mathcal{C}_\infty}(u,u')\le 24R\right)\ge 1-C{\mathrm{e}}^{-\kappa R}. \end{equation} \end{lemma} \begin{proof} Applying \cite[Theorem 1.4]{GM07}, we have for all $p$ sufficiently close to 1 the large deviation estimate \begin{equation}\label{e:chemical_distance2} \mathbf{P}(\mathsf{d}_{\mathcal{C}_\infty}(u,u')\ge 2|u-u'|)\le Ce^{-\kappa|u-u'|} \end{equation} for any $u,u'\in\mathcal{C}_\infty$. This bound is useful for us when $|u-u'|\ge cR$. To get an estimate also for $u,u'$ which are closer to each other, we proceed as follows. We can assume that $\mathcal{C}_\infty\cap Q_R(v)$ is non-empty, as otherwise the left-hand side of \eqref{e:chemical_distance1} is trivially equal to 1. The infinite cluster then has to cross the annulus $Q_{3R}(v)\setminus Q_{2R}(v)$, so we can take some $\bar u\in \mathcal{C}_\infty\cap (Q_{3R}(v)\setminus Q_{2R}(v))$.
If there are $u,u'\in \mathcal{C}_\infty\cap Q_R(v)$ with $\mathsf{d}_{\mathcal{C}_\infty}(u,u')\ge24R$, then one of $\mathsf{d}_{\mathcal{C}_\infty}(u,\bar u)$ and $\mathsf{d}_{\mathcal{C}_\infty}(u',\bar u)$ must be at least $12R$, while $R\le|u-\bar u|\le6R$ and $R\le|u'-\bar u|\le6R$. Thus, using \eqref{e:chemical_distance2} and a union bound, \begin{align*} &\mathbf{P}\left(\exists u,u'\in \mathcal{C}_\infty\cap Q_R(v)\colon \mathsf{d}_{\mathcal{C}_\infty}(u,u')> 24R\right)\\ &\quad\le\mathbf{P}\left(\exists u\in\mathcal{C}_\infty\cap Q_R(v),\bar u\in \mathcal{C}_\infty\cap (Q_{3R}(v)\setminus Q_{2R}(v))\colon \mathsf{d}_{\mathcal{C}_\infty}(u,\bar u)\ge 12R \right)\\ &\quad\le\sum_{u\in\mathcal{C}_\infty\cap Q_{R}(v)}\sum_{\bar u\in \mathcal{C}_\infty\cap (Q_{3R}(v)\setminus Q_{2R}(v))}\mathbf{P}(\mathsf{d}_{\mathcal{C}_\infty}(u,\bar u)\ge 12R) \le CR^4{\mathrm{e}}^{-\kappa R}, \end{align*} which implies \eqref{e:chemical_distance1}. \end{proof} \subsection{Proof of Theorem \ref{t:percolation_cluster}, second part} \label{sec-4.4} In this section we will complete the proof of Theorem \ref{t:percolation_cluster} by showing that for $p$ sufficiently close to 1, Assumptions \ref{a:logupp} and \ref{a:sparseT} are also satisfied for a suitable choice of $\mathsf{T}_\cdot$. In the previous section the dependence of various quantities on $p$ and ${\Lambda^+}/{\Lambda^-}$ was unimportant for us, as all relevant results anyhow held for all supercritical $p$. This will be different in this section, and we change our convention on notation slightly: the generic constant $C$ might still depend on $p$ or ${\Lambda^+}/{\Lambda^-}$, but for all other named constants we now indicate explicitly their dependence on $p$ and ${\Lambda^+}/{\Lambda^-}$. The Assumptions \ref{a:logupp} and \ref{a:sparseT} are at first glance very similar to Assumptions \ref{a:logbd} and \ref{a:sparseR}.
However, establishing them is substantially more challenging, mainly because of two new difficulties. The first is that we now require estimates not just in $V_N^\delta$ for some $\delta>0$, but up to the boundary. Near the boundary, the full-space Green's function $G^{\mathbf{a}}$ is no longer a useful comparison function to be used for the maximum principle, as it was used e.g. in the proof of Lemma \ref{l:green_asympt_improved}. Instead we will need to use half-space Green's functions as comparison functions. Therefore, as a first step we prove asymptotics for these, by using Lemma \ref{l:green_asympt_improved} together with the reflection principle. The other difficulty is even more serious. Namely \ref{a:sparseT} is a quantitative upper bound on the number of points $v\in V_N(w)$ where $\mathsf{T}_v$ is large. Our argument for \ref{a:sparseR} was also quantitative, but the resulting bounds are too weak to be useful here. In more detail, for \ref{a:logbd} we have used Lemma \ref{l:green_asympt_improved} together with the maximum principle to obtain estimates for $G^{\mathbf{a}}(\cdot,v)$ on lengthscales at least $\mathsf{R}^{(1)}_v$, and used the deterministic estimate \eqref{e:estgreen8} to control $G^{\mathbf{a}}(\cdot,v)$ in the box $Q_{\mathsf{R}^{(1)}_v}(v)$. This has led to bounds on $G^{\mathbf{a}}(u,v)$ with error term of order $(\mathsf{R}^{(1)}_v)^2$. In order to establish \ref{a:logupp}, we would thus need to choose $\mathsf{T}_v=C(\mathsf{R}^{(1)}_v)^2$. In view of the tail-bound \begin{equation}\label{e:estgreenquant} \mathbf{P}(\mathsf{R}^{(1)}_v\ge R)\le C{\mathrm{e}}^{-R^s/C} \end{equation} we would then obtain the tail-bound \[\mathbf{P}(\mathsf{T}_v\ge T)\le C{\mathrm{e}}^{-T^{s/2}/C}\] where $s>0$ is some small exponent. This only gives us a chance to also establish \ref{a:sparseT} if we are able to take $s=2$ (or even better $s>2$). So a natural question is whether \eqref{e:estgreenquant} holds for some $s\ge2$.
Unfortunately, the answer to that question is no, and the best possible $s$ one can hope for is $s=1$. The problem (which is explained well in \cite[Section 1.4]{DG21}) is that one needs to change only $CR$ bonds to (almost) disconnect a box $Q_R(v)$ from its complement, which makes the environment very irregular on length-scale $R$ around $v$. Therefore, the probability that $\mathsf{R}^{(1)}_v>R$ should be at least the probability of this particular bad event, which is at least $c{\mathrm{e}}^{-R/C}$. Thus, with the approach that worked well for Assumptions \ref{a:logbd} and \ref{a:sparseR}, we cannot hope to also show \ref{a:logupp} and \ref{a:sparseT}. However, we remark that if $p=1$ (i.e. we are in the setting of uniformly bounded conductances), then things are much easier. Namely, instead of \eqref{e:estgreen8} we could use the upper bound \[ 0\le G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\le\frac{\log N}{2\pi\Lambda^-} \] (which follows from Rayleigh monotonicity), which would allow us to take $\mathsf{T}_v=c\mathsf{R}^{(1)}_v$. With this choice of $\mathsf{T}_v$, \eqref{e:estgreenquant} would easily be strong enough to obtain exponential (and not just stretched-exponential) tails for $\mathsf{T}_v$, and so we would have a good chance to establish \ref{a:sparseT}. If $p<1$, though, then \eqref{e:estgreen8} is clearly best-possible as a deterministic estimate. Our idea to establish \ref{a:logupp} and \ref{a:sparseT} now is that we do not actually need a bound like \eqref{e:estgreen8} that holds for all environments, but only a bound that holds for most environments. In other words, we need to understand the upper tail of the random variable $G^{\mathbf{a}}_{Q_N}(v^*,v^*)$, and in fact we need to show that it is exponentially unlikely in $T$ that this random variable exceeds its typical value ${\log N}/{(2\pi{\overline{\A}})}$ by more than $T$. Such a quantitative tail estimate appears to be new, and previously only qualitative estimates were known (see e.g.
\cite{A15} and the references therein, although of course stronger quantitative bounds could be deduced from \cite{DG21}). Turning now to the actual proof, we begin with the asymptotics for the Green's function in half-spaces. Let us first introduce some notation. For $w\in\mathbb{Z}^2$ and $\vec e\in\{\vec e_1,-\vec e_1,\vec e_2,-\vec e_2\}$ let \[Q^{\vec e}(w)=\{v\in\mathbb{Z}^2\colon (v-w)\cdot \vec e\ge 0\}\] be a half-space, and let $G^{\mathbf{a}}_{Q^{\vec e}(w)}\colon\mathcal{C}_\infty\times\mathcal{C}_\infty\to\mathbb{R}$ be the Green's function on $Q^{\vec e}(w)$. That is, if $v\notin Q^{\vec e}(w)$, then $G^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v)=0$, while if $v\in Q^{\vec e}(w)$ then $G^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v)$ is 0 on $\mathcal{C}_\infty\setminus Q^{\vec e}(w)$ and satisfies $-\Delta_{\mathbf{a}} G^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v)=\delta_v$ on $Q^{\vec e}(w)\cap \mathcal{C}_\infty$ and grows sublinearly at infinity. Equivalently, $G^{\mathbf{a}}_{Q^{\vec e}(w)}$ is the Green's function for random walk on $\mathcal{C}_\infty$ killed when exiting $Q^{\vec e}(w)$. It follows from the general homogenization results in \cite{DG21} (or already from a qualitative invariance principle for the random walk on $\mathcal{C}_\infty$), that $\mathbf{P}$-almost surely there is a unique such $G^{\mathbf{a}}_{Q^{\vec e}(w)}$, and it is non-negative on $\mathcal{C}_\infty\times\mathcal{C}_\infty$. We also denote by \[v_{w,\vec e}:=v-2((v-w)\cdot \vec e)\vec e\] the mirror point of $v$ under reflection at $\partial Q^{\vec e}(w)$. Then we have the following asymptotics for $G^{\mathbf{a}}_{Q^{\vec e}(w)}$ beyond some random scale. 
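For concreteness, if $\vec e=\vec e_1$ and we write $v=(v_1,v_2)$ and $w=(w_1,w_2)$, then \[v_{w,\vec e_1}=(v_1,v_2)-2(v_1-w_1)\vec e_1=(2w_1-v_1,v_2),\] i.e. $v_{w,\vec e}$ is indeed the reflection of $v$ across the line bounding the half-space $Q^{\vec e_1}(w)$, and $|v-v_{w,\vec e}|=2(v-w)\cdot\vec e$ for $v\in Q^{\vec e}(w)$.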
\begin{lemma}\label{l:green_asympt_halfspace} For each $p>1/2$ and any ${\Lambda^+}/{\Lambda^-}\ge1$ there are $C>0$ and random variables $\mathsf{R}^{\mathrm{Green}''}_v\in\mathbb{N}$ indexed by $v\in\mathbb{Z}^2$ with the following property: if $w\in\mathbb{Z}^2$, $\vec e\in\{\vec e_1,-\vec e_1,\vec e_2,-\vec e_2\}$ and $u,v\in Q^{\vec e}(w)$ satisfy $|u-v|\ge\mathsf{R}^{\mathrm{Green}''}_v$, $(v-w)\cdot \vec e\ge \mathsf{R}^{\mathrm{Green}''}_v$ and we also have $u\in\mathcal{C}_\infty$ or $(u-w)\cdot \vec e\ge \mathsf{R}^{\mathrm{Green}''}_u$, then \begin{equation}\label{e:green_asympt_halfspace} \left|G^{\mathbf{a}}_{Q^{\vec e}(w)}(u^*,v^*)-\frac{1}{2\pi{\overline{\A}}}\log\left(\frac{|u^*-v_{w,\vec e}|}{|u^*-v|}\right)\right|\le\frac{4}{{\overline{\A}}}. \end{equation} In addition, for any $v\in\mathbb{Z}^2$ and any $R,R'\in\mathbb{N}$ with $R\le {R'}/{2}$ there is an event $\mathcal{E}^{R,R',\mathrm{Green}''}_v\in\mathcal{F}_{Q_{10R'}(v)}$ such that, with $s>0$ uniform, \begin{equation}\label{e:green_asympt_halfspace1} \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Green}''}_v)\ge 1-C{\mathrm{e}}^{-R^{s}/C} \end{equation} and \begin{equation}\label{e:green_asympt_halfspace2} \mathbf{P}(\mathsf{R}^{\mathrm{Green}''}_v> R,\mathcal{E}^{R,R',\mathrm{Green}''}_v)\le C{\mathrm{e}}^{-R'^{s}/C}. \end{equation} \end{lemma} The heat kernel asymptotics in \cite{DG21} actually imply even sharper asymptotics for $G^{\mathbf{a}}_{Q^{\vec e}(w)}$ (with error term $o(1)$ instead of $O(1)$). However, the estimate stated in Lemma \ref{l:green_asympt_halfspace} is sufficient for our purposes, and it has the advantage that it follows relatively easily from Lemma \ref{l:green_asympt_improved}. \begin{proof} Our strategy is to use Lemma \ref{l:green_asympt_improved} in combination with the reflection principle. For that purpose we need to know that $\mathsf{R}^{\mathrm{Green}'}$ is small not just at $v$ but also at the relevant mirror point.
So we define $\mathsf{R}^{\mathrm{Green}''}$ in such a way that it controls $\mathsf{R}^{\mathrm{Green}'}$ at $v$ as well as at the mirror points. To be precise, define \[\mathsf{R}^{\mathrm{Green}''}_v=\mathsf{R}^{\mathrm{Green}'}_v\vee\min\left\{R\colon \mathsf{R}^{\mathrm{Green}'}_{v_{w',\vec e'}}\le\frac{|v-v_{w',\vec e'}|}{2}\ \forall w',\vec e'\text{ with }R\le \frac{|v-v_{w',\vec e'}|}{2}\right\}\] and \[\mathcal{E}^{R,R',\mathrm{Green}''}_v=\mathcal{E}^{R,R',\mathrm{Green}'}_v\cap\bigcap_{\substack{w',\vec e'\\(w'-v)\cdot \vec e'\in\mathbb{N}\\ R\le|w'-v|\le R'/2}}\mathcal{E}^{|v-v_{w',\vec e'}|/2,R',\mathrm{Green}'}_{v_{w',\vec e'}}\] and take $s$ as in Lemma \ref{l:green_asympt_improved}. With these definitions, we have \begin{align*} \mathbf{P}(\mathcal{E}^{R,R',\mathrm{Green}''}_v)&\ge 1-\mathbf{P}( (\mathcal{E}^{R,R',\mathrm{Green}'}_v)^\complement)-\sum_{\substack{w',\vec e'\\(w'-v)\cdot \vec e'\in\mathbb{N}\\ R\le|w'-v|\le R'/2}}\mathbf{P}( (\mathcal{E}^{|v-v_{w',\vec e'}|/2,R',\mathrm{Green}'}_{v_{w',\vec e'}})^\complement)\\ &\ge1-Ce^{-R^s/C}-\sum_{\substack{w',\vec e'\\(w'-v)\cdot \vec e'\in\mathbb{N}\\ R\le|w'-v|\le R'/2}}Ce^{-(|v-v_{w',\vec e'}|)^s/C} \ge1-Ce^{-R^s/C} \end{align*} and thus \eqref{e:green_asympt_halfspace1}. The argument for \eqref{e:green_asympt_halfspace2} is analogous. Moreover, since $\mathcal{E}^{R,R',\mathrm{Green}'}_v\in\mathcal{F}_{Q_{9R'}(v)}$ and $\mathcal{E}^{|v-v_{w',\vec e'}|/2,R',\mathrm{Green}'}_{v_{w',\vec e'}}\in\mathcal{F}_{Q_{9R'}(v_{w',\vec e'})}\subset \mathcal{F}_{Q_{10R'}(v)}$, we see that $\mathcal{E}^{R,R',\mathrm{Green}''}_v\in\mathcal{F}_{Q_{10R'}(v)}$. It remains to show \eqref{e:green_asympt_halfspace}.
For this purpose consider $H^{\mathbf{a}}_{Q^{\vec e}(w)}\colon (\mathcal{C}_\infty\cap Q^{\vec e}(w))\times (\mathcal{C}_\infty\cap Q^{\vec e}(w))\to\mathbb{R}$ defined by \[H^{\mathbf{a}}_{Q^{\vec e}(w)}(u'^*,v'^*)=G^{\mathbf{a}}(u'^*,v'^*)-G^{\mathbf{a}}(u'^*,(v'_{w,\vec e})^*)-\mathsf{K}'_{v'}+\mathsf{K}'_{v'_{w,\vec e}}.\] Consider some $u'\in \mathcal{C}_\infty\cap \partial Q^{\vec e}(w)$. Then $|u'-v|\ge(v-w)\cdot\vec e\ge \mathsf{R}^{\mathrm{Green}''}_v\ge\mathsf{R}^{\mathrm{Green}'}_v$, and so by Lemma \ref{l:green_asympt_improved} we know that \[\left|G^{\mathbf{a}}(u'^*,v^*)+\frac{1}{2\pi{\overline{\A}}}\log|u'^*-v|-\mathsf{K}'_v\right|\le\frac{1}{{\overline{\A}}|u'^*-v|^{1/2}}\le\frac{1}{{\overline{\A}}(\mathsf{R}^{\mathrm{Green}''}_v)^{1/2}}.\] Similarly, $|u'-v_{w,\vec e}|\ge\mathsf{R}^{\mathrm{Green}'}_{v_{w,\vec e}}$, and so by another application of \eqref{e:green_asympt_improved} we have \[\left|G^{\mathbf{a}}(u'^*,(v_{w,\vec e})^*)+\frac{1}{2\pi{\overline{\A}}}\log|u'^*-v_{w,\vec e}|-\mathsf{K}'_{v_{w,\vec e}}\right|\le\frac{1}{{\overline{\A}}|u'^*-v_{w,\vec e}|^{1/2}}\le\frac{1}{{\overline{\A}}(\mathsf{R}^{\mathrm{Green}''}_v)^{1/2}}.\] Combining the last two estimates and using that $|u'-v_{w,\vec e}|=|u'-v|$ by symmetry, we find that \[\left|H^{\mathbf{a}}_{Q^{\vec e}(w)}(u'^*,v^*)\right|\le \frac{2}{{\overline{\A}}(\mathsf{R}^{\mathrm{Green}''}_v)^{1/2}}\le\frac{2}{{\overline{\A}}}.\] In other words, $H^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v^*)$ grows sublinearly at infinity and is bounded by $\frac{2}{{\overline{\A}}}$ on $\mathcal{C}_\infty\cap\partial Q^{\vec e}(w)$. This means that $G^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v^*)-H^{\mathbf{a}}_{Q^{\vec e}(w)}(\cdot,v^*)$ is a $-\Delta_{\mathbf{a}}$-harmonic function on $\mathcal{C}_\infty\cap Q^{\vec e}(w)$ that is bounded by $\frac{2}{{\overline{\A}}}$ on $\mathcal{C}_\infty\cap\partial Q^{\vec e}(w)$ and grows sublinearly at infinity.
Thus it must be bounded by $\frac{2}{{\overline{\A}}}$ everywhere. We conclude that \[\left|G^{\mathbf{a}}_{Q^{\vec e}(w)}(u^*,v^*)-G^{\mathbf{a}}(u^*,v^*)+G^{\mathbf{a}}(u^*,(v_{w,\vec e})^*)+\mathsf{K}'_{v}-\mathsf{K}'_{v_{w,\vec e}}\right|\le\frac{2}{{\overline{\A}}}.\] We can now insert the asymptotics \eqref{e:green_asympt_improved} from Lemma \ref{l:green_asympt_improved} and directly obtain \eqref{e:green_asympt_halfspace}. \end{proof} As mentioned in the beginning of the section, we need a replacement for the deterministic upper bound on the Green's function in a (small) box given by \eqref{e:estgreen8}. The following lemma provides such an estimate. The random variable $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ defined here is genuinely local, and so this time (as already for Lemma \ref{l:dist_to_cluster}) we do not need an estimate regarding its approximate locality. \begin{lemma}\label{l:tail_bound_green} For any $\kappa>0$ there is $p^{\mathrm{Tail}}_\kappa<1$ with the following property. For $p^{\mathrm{Tail}}_\kappa\le p\le1$, for any $0<\Lambda^-\le\Lambda^+$ and for any $0<\varepsilon<1$ there are $C>0$ and random variables $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v\in\mathbb{R}$ indexed by $v\in\mathbb{Z}^2$ such that if $R\in\mathbb{N}$ is such that $R^\varepsilon\le \mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ then \begin{equation}\label{e:tail_bound_green} \max_{u,u'\in Q_R(v)}G^{\mathbf{a}}_{Q_R(v)}(u^*,u'^*)\le \frac{\mathsf{T}^{\mathrm{Tail},\varepsilon}_v}{2\pi\Lambda^+}.
\end{equation} Furthermore, for all $T\in\mathbb{R}$ we have the tail bound \begin{equation}\label{e:tail_bound_green1} \mathbf{P}(\mathsf{T}^{\mathrm{Tail},\varepsilon}_v\le T,\mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v)\ge 1-C{\mathrm{e}}^{-\kappa T\Lambda^-/\Lambda^+}, \end{equation} and the event $\{\mathsf{T}^{\mathrm{Tail},\varepsilon}_v\le T\}\cap\mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v$ is $\mathcal{F}_{Q_{9T^{1/\varepsilon}}(v)}$-measurable. \end{lemma} Note that (by Rayleigh monotonicity) we have ${\overline{\A}}\le\Lambda^+$. So this lemma will allow us to control $2\pi{\overline{\A}} G_{Q_R(v)}(u^*,u'^*)$ by $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$. \begin{proof} For any $\tau\in\mathbb{N}_\varepsilon:=\{n^{\varepsilon}\colon n\in\mathbb{N}\}$ consider the event \[\mathcal{E}^{\tau,\mathrm{Tail},\varepsilon}_v=\left\{\max_{R\le \tau^{1/\varepsilon}}\max_{u,u'\in Q_R(v)}G^{\mathbf{a}}_{Q_R(v)}(u^*,u'^*)\le \frac{\tau}{2\pi\Lambda^+}\right\}.\] This event only depends on $\mathcal{C}_\infty\cap Q_{\tau^{1/\varepsilon}}(v)$, and so $\mathcal{E}^{\tau,\mathrm{Tail},\varepsilon}_v\cap \mathcal{E}^{\tau^{1/\varepsilon},\mathrm{Clust}}_v$ is $\mathcal{F}_{Q_{9\tau^{1/\varepsilon}}(v)}$-measurable.
We can now define \[\mathsf{T}^{\mathrm{Tail},\varepsilon}_v=\inf\left\{\tau\in\mathbb{N}_\varepsilon\colon \mathcal{E}^{\tau,\mathrm{Tail},\varepsilon}_v\right\}.\] From Lemma \ref{l:local_approx_cluster} we know that \[\mathbf{P}(\mathcal{E}^{\tau^{1/\varepsilon},\mathrm{Clust}}_v)\ge 1-C{\mathrm{e}}^{-\tau^{1/\varepsilon}/C}.\] Now, if we also knew that \begin{equation}\label{e:tail_bound_green3} \mathbf{P}(\mathcal{E}^{\tau,\mathrm{Tail},\varepsilon}_v)\ge 1-C{\mathrm{e}}^{-\kappa\tau\Lambda^-/(2\Lambda^+)}, \end{equation} then we would have \[\mathbf{P}\left(\mathcal{E}^{\tau,\mathrm{Tail},\varepsilon}_v\cap \mathcal{E}^{\tau^{1/\varepsilon},\mathrm{Clust}}_v\right)\ge 1-C{\mathrm{e}}^{-\tau^{1/\varepsilon}/C}-C{\mathrm{e}}^{-3\kappa\tau\Lambda^-/(4\Lambda^+)}\ge1-C{\mathrm{e}}^{-\kappa\tau\Lambda^-/\Lambda^+}\] for a constant $C$ depending on ${\Lambda^+}/{\Lambda^-}$ and $\varepsilon$, from which \eqref{e:tail_bound_green1} easily follows. So it remains to prove \eqref{e:tail_bound_green3}. Note first that \[\max_{u,u'\in Q_R(v)}G^{\mathbf{a}}_{Q_R(v)}(u^*,u'^*)\le \max_{u\in \mathcal{C}_\infty\cap Q_R(v)}G^{\mathbf{a}}_{Q_R(v)}(u,u),\] and so \eqref{e:tail_bound_green3} follows from a union bound if we show \begin{equation}\label{e:tail_bound_green4} \mathbf{P}\left(G^{\mathbf{a}}_{Q_R(v)}(u,u)>\frac{\tau}{2\pi\Lambda^+}\right)\le C{\mathrm{e}}^{-\kappa\tau\Lambda^-/(4\Lambda^+)}\quad\forall R\le\tau^{1/\varepsilon},u\in \mathcal{C}_\infty \cap Q_R(v). \end{equation} Our proof strategy for \eqref{e:tail_bound_green4} follows closely the approach in \cite{BK05}, where the super-level sets of $G^{\mathbf{a}}_{Q_R(v)}(\cdot,u)$ are studied and their size is related to their isoperimetry. We use this exact strategy on larger scales (once the level sets have grown to size $\ge c\tau^2$). On smaller scales we use another argument, namely that the difference of the Green's function $G^{\mathbf{a}}_{Q_R(v)}(\cdot,u)$ at two different points is bounded in terms of their chemical distance.
In this manner we can efficiently find a large super-level set at height close to $G^{\mathbf{a}}_{Q_R(v)}(u,u)$. To make this precise, fix for the moment some $u\in\mathcal{C}_\infty\cap Q_R(v)$. As in \cite{BK05} and as in the argument that showed \eqref{e:estgreen8}, it is convenient to use the viewpoint of electrical network theory. That is, we consider $G^{\mathbf{a}}_{Q_R(v)}(\cdot,u)$ equivalently as the voltage distribution that arises when a unit current flows from $u$ through the electrical network given by the conductances ${\mathbf{a}}$. Then the voltage drop along each edge is at most ${1}/{\Lambda^-}$. Moreover, for each $A\subset\mathcal{C}_\infty$ with $u\in A\subset Q_R(v)$ the total current flowing out of $A$ is equal to 1. Let $n=|\mathcal{C}_\infty\cap Q_R(v)|$. For $k\in\mathbb{N}$ let \[\theta(k):=\sup_{\substack{B\subset \mathcal{C}_\infty\\B\text{ connected}\\|B|=k}}\min_{\bar u\in B}G^{\mathbf{a}}_{Q_R(v)}(\bar u,u)\] and let $A_k$ be the optimizer (with ties broken in some deterministic manner). By the maximum principle, $G^{\mathbf{a}}_{Q_R(v)}(\bar u,u)\le \theta(k)$ for all $\bar u\in\mathcal{C}_\infty\setminus A_k$. Trivially, $\theta(k)$ is non-increasing in $k$, and $\theta(k)=0$ for $k\ge n+1$. Our goal will be to bound the decrements of $\theta(k)$, assuming some good events, which we define next. Let \begin{align*} \mathcal{E}^{\tau,(I)}_u&=\left\{|\mathcal{C}_\infty\cap Q_{\tau\Lambda^-/(100\pi\Lambda^+)}(u)|\ge\frac{\tau^2(\Lambda^-)^2}{20000\pi^2(\Lambda^+)^2}\right\},\\ \mathcal{E}^{\tau,(II)}_u&=\left\{\max_{u'\in \mathcal{C}_\infty\cap Q_{\tau\Lambda^-/(100\pi\Lambda^+)}(u)}\mathsf{d}_{\mathcal{C}_\infty}(u,u')\le \frac{\tau\Lambda^-}{4\pi\Lambda^+}\right\},\\ \mathcal{E}^{\tau,(III)}_u&=\left\{\exists A\subset\mathcal{C}_\infty\colon u\in A,|A|\ge \frac{\tau^2(\Lambda^-)^2}{20000\pi^2(\Lambda^+)^2},|\partial^+A|\le |A|^{1/2}\right\}^\complement.
\end{align*} According to Lemma \ref{l:density_cluster_highlysc}, Lemma \ref{l:isoperimetry} and Lemma \ref{l:chemical_distance} we can make the probability of each of these three events larger than $1-C{\mathrm{e}}^{-\kappa\tau\Lambda^-/(4\Lambda^+)}$, if we choose $p$ close enough to 1 (depending on $\kappa$ only). On the event $\mathcal{E}^{\tau,(I)}_u\cap \mathcal{E}^{\tau,(II)}_u$ there are at least ${\tau^2(\Lambda^-)^2}/{(20000\pi^2(\Lambda^+)^2)}$ points in $\mathcal{C}_\infty$ at graph distance at most ${\tau\Lambda^-}/{(4\pi\Lambda^+)}$ from $u$. Call the set of these points $A$. As the voltage drop along each edge in $\mathcal{C}_\infty$ is at most ${1}/{\Lambda^-}$, the voltage difference between $u$ and any point in $A$ is at most $({1}/{\Lambda^-})\cdot({\tau\Lambda^-}/{(4\pi\Lambda^+)})= {\tau}/{(4\pi\Lambda^+)}$. In other words, \[\min_{\bar u\in A}G^{\mathbf{a}}_{Q_R(v)}(\bar u,u)\ge G^{\mathbf{a}}_{Q_R(v)}(u,u)-\frac{\tau}{4\pi\Lambda^+}.\] As $A$ is also clearly connected, we have shown that on $\mathcal{E}^{\tau,(I)}_u\cap \mathcal{E}^{\tau,(II)}_u$ we have \begin{equation}\label{e:tail_bound_green5} \theta\left(\left\lceil\frac{\tau^2(\Lambda^-)^2}{20000\pi^2(\Lambda^+)^2}\right\rceil\right)\ge G^{\mathbf{a}}_{Q_R(v)}(u,u)-\frac{\tau}{4\pi\Lambda^+}. \end{equation} On the other hand, assume for the moment that $\mathcal{E}^{\tau,(III)}_u$ holds. We can now directly reuse the argument from \cite{BK05} on large scales. Namely, let $k\ge{\tau^2(\Lambda^-)^2}/{(20000\pi^2(\Lambda^+)^2)}$. If $\theta(k)>0$ then $A_k\subset Q_R(v)$, so the current flowing out of $A_k$ is exactly 1. On the other hand, on $\mathcal{E}^{\tau,(III)}_u$ there are at least $\sqrt{k}$ vertices in $\partial^+A_k$. So by the pigeon-hole principle, for at least ${\sqrt{k}}/{2}$ of the vertices in $\partial^+A_k$ the total current flowing from $A_k$ into each of them is at most ${2}/{\sqrt{k}}$. Let $\tilde A_k$ be the set of these vertices.
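The two electrical facts used above (the total current out of any set containing the source equals $1$, and each edge carries current at most $1$, so the per-edge voltage drop is at most $1/\Lambda^-$) can be verified numerically. A minimal sketch, assuming a small grid with i.i.d.\ conductances standing in for the percolation cluster (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5                                  # internal vertices form an m x m grid
lam_minus, lam_plus = 1.0, 3.0
idx = {(i, j): i * m + j for i in range(m) for j in range(m)}

cond = {}                              # one conductance per undirected edge
def a_of(p, q):
    key = (p, q) if p <= q else (q, p)
    if key not in cond:
        cond[key] = rng.uniform(lam_minus, lam_plus)
    return cond[key]

# Dirichlet Laplacian: vertices outside the grid are grounded (voltage 0)
L = np.zeros((m * m, m * m))
for (i, j), k in idx.items():
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nb = (i + di, j + dj)
        L[k, k] += a_of((i, j), nb)
        if nb in idx:
            L[k, idx[nb]] -= a_of((i, j), nb)

u = (m // 2, m // 2)
V = np.linalg.solve(L, np.eye(m * m)[idx[u]])   # V = G(., u): unit current from u
volt = lambda p: V[idx[p]] if p in idx else 0.0

# every edge carries current at most 1, hence voltage drop at most 1/lam_minus
drops = [abs(volt(p) - volt(q)) for (p, q) in cond]
currents = [a_of(p, q) * abs(volt(p) - volt(q)) for (p, q) in cond]
assert max(currents) <= 1 + 1e-9 and max(drops) <= 1 / lam_minus + 1e-9

# Kirchhoff's law: unit current flows out of any set A containing u
A = {p for p in idx if abs(p[0] - u[0]) + abs(p[1] - u[1]) <= 1}
out = sum(a_of(p, (p[0] + di, p[1] + dj)) * (volt(p) - volt((p[0] + di, p[1] + dj)))
          for p in A for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]
          if (p[0] + di, p[1] + dj) not in A)
print(round(out, 6))  # -> 1.0
```

The edge-current bound holds because the harmonic unit flow is acyclic, so it decomposes into source-to-boundary paths of total weight $1$.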
For each vertex in $\tilde A_k$ the voltage drop from $A_k$ can be at most ${2}/{(\sqrt{k}\Lambda^-)}$. This means that \[\min_{\bar u\in A_k\cup \tilde A_k}G^{\mathbf{a}}_{Q_R(v)}(\bar u,u)\ge \theta(k)-\frac{2}{\sqrt{k}\Lambda^-}\] and as $A_k\cup\tilde A_k$ is clearly connected and has at least $k+{\sqrt{k}}/{2}$ elements, we have shown that on $\mathcal{E}^{\tau,(III)}_u$, if $k\ge{\tau^2(\Lambda^-)^2}/{(20000\pi^2(\Lambda^+)^2)}$ and $\theta(k)>0$ then \begin{equation}\label{e:tail_bound_green6} \theta\left(\left\lceil k+\frac{\sqrt{k}}{2}\right\rceil\right)\ge\theta(k)-\frac{2}{\sqrt{k}\Lambda^-}. \end{equation} The same estimate also holds trivially if $\theta(k)=0$, so we can remove that condition. We can now iterate \eqref{e:tail_bound_green6} $2\sqrt{k}$ times to obtain that for such $k$, $\theta(2k)\ge \theta(k)-{C}/{\Lambda^-}$. We also know that $\theta(n+1)=0$, and so we see that on $\mathcal{E}^{\tau,(III)}_u$ we have \begin{equation}\label{e:tail_bound_green7} \begin{split} \theta\left(\left\lceil\frac{\tau^2(\Lambda^-)^2}{20000\pi^2(\Lambda^+)^2}\right\rceil\right)&\le C\frac{\log n}{\Lambda^-} \le C\frac{\log R^2}{\Lambda^-} \le C\frac{\log \tau}{\varepsilon\Lambda^-}, \end{split} \end{equation} where $C$ is an absolute constant. Now we are almost done. Combining \eqref{e:tail_bound_green5} and \eqref{e:tail_bound_green7}, we have shown that on the event $\mathcal{E}^{\tau,(I)}_u\cap \mathcal{E}^{\tau,(II)}_u\cap \mathcal{E}^{\tau,(III)}_u$ we have \[G^{\mathbf{a}}_{Q_R(v)}(u,u)\le \frac{\tau}{4\pi\Lambda^+}+C\frac{\log \tau}{\varepsilon\Lambda^-}.\] For $\tau$ large enough the right-hand side here is bounded by ${\tau}/{(2\pi\Lambda^+)}$, and so we have shown \eqref{e:tail_bound_green4} when $\tau\ge C'$ for some constant $C'$ (which depends on $\Lambda^+/{(\varepsilon\Lambda^-)}$, but not on $\kappa$ or $p$). But this is enough, as for $\tau< C'$ the estimate \eqref{e:tail_bound_green4} trivially holds if we choose the constant there large enough.
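The telescoping behind \eqref{e:tail_bound_green7} can be sanity-checked numerically: iterating $k\mapsto\lceil k+\sqrt{k}/2\rceil$ while accumulating decrements of $2/(\sqrt{k}\Lambda^-)$ costs $O(1/\Lambda^-)$ per doubling of $k$, so the total drop between the initial scale and $n$ is $O(\log n/\Lambda^-)$. A small sketch (the values of $k_0$ and $\Lambda^-$ are arbitrary illustrative choices):

```python
import math

def total_decrement(k0, n, lam_minus=1.0):
    """Worst-case total drop of theta between scales k0 and n, following
    the recursion theta(ceil(k + sqrt(k)/2)) >= theta(k) - 2/(sqrt(k)*lam_minus)."""
    k, drop = k0, 0.0
    while k <= n:
        drop += 2.0 / (math.sqrt(k) * lam_minus)
        k = math.ceil(k + math.sqrt(k) / 2)
    return drop

# the accumulated drop grows logarithmically in n (about 4*log(n/k0) here)
for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, round(total_decrement(100, n), 2))
```

Comparing the printed values across decades shows a roughly constant increment per factor of $10$, matching the integral approximation $\int 4\,\mathrm{d}k/k$.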
\end{proof} We can now turn to showing Assumptions \ref{a:logupp} and \ref{a:sparseT}. In this proof we will have to make the $p$-dependence of various quantities explicit. \begin{proof}[Proof of Theorem \ref{t:percolation_cluster}, second part]$ $ \emph{Step 1: Preliminaries}\\ In principle we would like to take $\mathsf{T}_v=\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$. However, there are several other rare events that we need to account for: On the one hand, if $\mathsf{R}^{\mathrm{Green}''}_v\ge T^{1/\varepsilon}$, then $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ no longer controls $G^{\mathbf{a}}_{Q_{\mathsf{R}^{\mathrm{Green}''}_v}}(v,v)$. On the other hand, we also need control over $\mathsf{R}^{\mathrm{Dist}}_v$, which $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ does not provide. So the actual definition of $\mathsf{T}_v$ is a little involved. The exponent $s$ in Lemma \ref{l:green_asympt_halfspace} is uniformly bounded away from $0$ for $p$ bounded away from $1/2$. So there is $s_*>0$ such that $s\ge s_*$ for any $p\ge3/4$, say. We fix such an $s_*$, and choose $\varepsilon={s_*}/{2}$. We also take $\kappa={3\Lambda^+}/{\Lambda^-}$, and define $p_{\Lambda^+/\Lambda^-}=p^{\mathrm{Tail}}_\kappa\vee\frac34$. This means that for $p\ge p_{\Lambda^+/\Lambda^-}$ the decay rate in \eqref{e:tail_bound_green1} is ${\mathrm{e}}^{-3T}$.
We now define \begin{align*} \tilde\mathsf{T}_v&=2\pi\Lambda^+\max_{u,u'\in\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)}G_{Q_{\mathsf{R}^{\mathrm{Green}''}_v}}(u^*,u'^*)\vee(\mathsf{R}^{\mathrm{Dist}}_v)^{\varepsilon},\\ \mathsf{T}_v&=\tilde\mathsf{T}_v+C',\\ \mathcal{E}^{T,T'}_v&=\left\{\mathsf{T}^{\mathrm{Tail},\varepsilon}_v\le T\right\}\cap\mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v\cap\mathcal{E}^{T^{1/\varepsilon},T'^{1/\varepsilon},\mathrm{Green}''}_v\cap\left\{\mathsf{R}^{\mathrm{Dist}}_v\le T^{1/\varepsilon}\right\} \end{align*} where the objects here are as in Lemma \ref{l:local_approx_cluster}, Lemma \ref{l:dist_to_cluster}, Lemma \ref{l:green_asympt_halfspace} and Lemma \ref{l:tail_bound_green}, and $C'$ is a constant that depends on ${\Lambda^+}/{\Lambda^-}$ only and will be fixed later. The point of these definitions is that we want $\mathsf{T}_v$ to control the Green's function on lengthscale $\mathsf{R}^{\mathrm{Green}''}_v$. As on lengthscales larger than that we have the estimates from Lemma \ref{l:green_asympt_halfspace}, we will be able to control the Green's function on all scales. Furthermore, $\mathcal{E}^{T,T'}_v$ is supposed to provide a local event that approximates $\{\mathsf{T}_v\le T\}$. Of the two terms to approximate in the definition of $\tilde\mathsf{T}_v$, the tricky one is the first one. There we use that if $\mathsf{R}^{\mathrm{Green}''}_v\le T^{1/\varepsilon}$ then the term is controlled by $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ (which is local once we intersect with $\mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v$), while if $\mathsf{R}^{\mathrm{Green}''}_v> T^{1/\varepsilon}$ we use that $\mathsf{R}^{\mathrm{Green}''}_v$ itself is asymptotically local. \emph{Step 2: Verification of \ref{a:logupp}}\\ We want to show that with $\mathsf{T}_v$ as defined above we have \ref{a:logupp}. The argument will be based on the maximum principle (similar to the argument that led to \ref{a:logbd}).
In view of \eqref{e:estgreen} we have to prove that \begin{align} 2\pi{\overline{\A}} G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)&\le \log N+\tilde\mathsf{T}_v+C\label{e:estgreenT1}\\ 2\pi{\overline{\A}} \left(G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)-G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\right)&\le \log_+|u-v|+\tilde\mathsf{T}_u+\tilde\mathsf{T}_v+C\label{e:estgreenT2} \end{align} hold true for all $u,v\in V_N(w)$ (then \ref{a:logupp} follows once we take the constant $C'$ sufficiently large). We will begin by showing a stronger version of \eqref{e:estgreenT1}, which we will also use in the proof of \eqref{e:estgreenT2}. Thus, we claim that \begin{equation}\label{e:estgreenT3} 2\pi{\overline{\A}} G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)\le \log \left(1+\frac{\mathsf{d}(v,\partial^+ V_N(w))}{\mathsf{R}^{\mathrm{Green}''}_v}\right)+\tilde\mathsf{T}_v+C. \end{equation} To see this, we use the maximum principle in a similar manner as for \ref{a:logbd}. We can assume without loss of generality that the side of $V_N(w)$ closest to $v$ is the bottom side. The maximum principle on $V_N(w)$ implies that \[G^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)\le G^{\mathbf{a}}_{Q^{\vec e_1}(w)}(\cdot,v^*)\] on $\mathcal{C}_\infty\cap V_N(w)$. We know that for $u\in \mathcal{C}_\infty\cap\partial Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)$ we have $|u^*-v|=\mathsf{R}^{\mathrm{Green}''}_v$ and $|u^*-v_{w,\vec e_1}|\le \mathsf{R}^{\mathrm{Green}''}_v+2\mathsf{d}(v,\partial^+ V_N(w))$, and so Lemma \ref{l:green_asympt_halfspace} implies that \begin{equation}\label{e:estgreenT4} G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\le \frac{4}{{\overline{\A}}}+\frac{1}{2\pi{\overline{\A}}}\log\left(1+\frac{2\mathsf{d}(v,\partial^+ V_N(w))}{\mathsf{R}^{\mathrm{Green}''}_v}\right)\le \frac{C}{{\overline{\A}}}+\frac{1}{2\pi{\overline{\A}}}\log \left(1+\frac{\mathsf{d}(v,\partial^+ V_N(w))}{\mathsf{R}^{\mathrm{Green}''}_v}\right) \end{equation} for $u^*\in \mathcal{C}_\infty\cap Q^{\vec e_1}(w)\cap \partial Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)$. 
We can now apply the maximum principle once again, namely on $\mathcal{C}_\infty\cap Q^{\vec e_1}(w)\cap Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)$ with comparison function $G^{\mathbf{a}}_{Q^{\vec e_1}(w)\cap Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)}$ and conclude from \eqref{e:estgreenT4} that \[G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)\le \frac{C}{{\overline{\A}}}+\frac{1}{2\pi{\overline{\A}}}\log \left(1+\frac{\mathsf{d}(v,\partial^+ V_N(w))}{\mathsf{R}^{\mathrm{Green}''}_v}\right)+G_{Q_{\mathsf{R}^{\mathrm{Green}''}_v}}(v^*,v^*).\] Because ${\overline{\A}}\le\Lambda^+$, this implies \eqref{e:estgreenT3} (if $C'$ in the definition of $\mathsf{T}_v$ is taken sufficiently large). Having shown \eqref{e:estgreenT3} (and in particular also \eqref{e:estgreenT1}), we now turn to \eqref{e:estgreenT2}. The argument here is similar. We first note that if $\mathsf{R}^{\mathrm{Green}''}_v>\mathsf{d}(v,\partial^+ V_N(w))$, then \eqref{e:estgreenT2} follows from \eqref{e:estgreenT3} and the trivial lower bound $G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\ge0$. So we can assume that $\mathsf{R}^{\mathrm{Green}''}_v\le\mathsf{d}(v,\partial^+ V_N(w))$. In that case we need a nontrivial lower bound for $G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)$. We claim that \begin{equation}\label{e:estgreenT5} G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\ge\frac{1}{2\pi{\overline{\A}}}\log \left(\frac{\mathsf{d}(v,\partial^+ V_N(w))}{|u^*-v|\vee\mathsf{R}^{\mathrm{Green}''}_v}\right)-\frac{C}{{\overline{\A}}}. \end{equation} To see this, we use once again the maximum principle.
Note that by Lemma \ref{l:green_asympt_halfspace} we have $G^{\mathbf{a}}_{Q^{\vec e_1}(w)}(\cdot,v^*)\le \frac{C}{{\overline{\A}}}$ on $\mathcal{C}_\infty\setminus Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)\supset \mathcal{C}_\infty\cap\partial Q_N(w)$, and so the maximum principle on $\mathcal{C}_\infty\cap V_N(w)$ implies that \begin{equation}\label{e:estgreenT6} G^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)\ge G^{\mathbf{a}}_{Q^{\vec e_1}(w)}(\cdot,v^*)-\frac{C}{{\overline{\A}}} \end{equation} on $\mathcal{C}_\infty\cap V_N(w)$. If $|u^*-v|\ge\mathsf{R}^{\mathrm{Green}''}_v$ this directly implies \eqref{e:estgreenT5}, while if $|u^*-v|\le\mathsf{R}^{\mathrm{Green}''}_v$ we obtain from \eqref{e:estgreenT6} and Lemma \ref{l:green_asympt_halfspace} that \[G^{\mathbf{a}}_{V_N(w)}(\cdot,v^*)\ge \frac{1}{2\pi{\overline{\A}}}\log \left(\frac{\mathsf{d}(v,\partial^+ V_N(w))}{\mathsf{R}^{\mathrm{Green}''}_v}\right)-\frac{C}{{\overline{\A}}}\] on $\mathcal{C}_\infty\cap\partial Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)$, from which \eqref{e:estgreenT5} follows by an application of the maximum principle on $\mathcal{C}_\infty\cap Q_{\mathsf{R}^{\mathrm{Green}''}_v}(v)$. We can now combine \eqref{e:estgreenT3} and \eqref{e:estgreenT5} into \begin{equation}\label{e:estgreenT7} G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)-G^{\mathbf{a}}_{V_N(w)}(u^*,v^*)\le \frac{1}{2\pi{\overline{\A}}}\log\left(\frac{|u^*-v|\vee \mathsf{R}^{\mathrm{Green}''}_v}{\mathsf{R}^{\mathrm{Green}''}_v}\right)+\tilde\mathsf{T}_v+\frac{C}{{\overline{\A}}}. \end{equation} This already looks similar to \eqref{e:estgreenT2}. In fact, if $|u^*-v|\le\mathsf{R}^{\mathrm{Green}''}_v$ it directly implies \eqref{e:estgreenT2}, so we can assume the opposite. Then we still need to make sure that $|u-v|$ is not much smaller than $|u^*-v|$.
For that purpose we estimate that $|u^*-u|=\mathsf{R}^{\mathrm{Dist}}_u\le(\tilde\mathsf{T}_u)^{1/\varepsilon}$ and therefore \[\log|u^*-v|\le \log\left(|u-v|+(\tilde\mathsf{T}_u)^{1/\varepsilon}\right)\le\log|u-v|+\frac{1}{\varepsilon}\log \tilde\mathsf{T}_u.\] Inserting this into \eqref{e:estgreenT7}, we finally obtain \eqref{e:estgreenT2}. \emph{Step 3: Verification of \ref{a:sparseT}}\\ It remains to show that our choice of $\mathsf{T}_v$ also satisfies \ref{a:sparseT}. As $\mathsf{T}_v$ and $\tilde\mathsf{T}_v$ only differ by a constant, it suffices to check that $\tilde\mathsf{T}_v$ satisfies \ref{a:sparseT}. This is similar to the proof of \ref{a:sparseR}, so we will be brief here. We claim that \begin{align} \mathbf{P}(\mathcal{E}^{T,T'}_v)&\ge 1-C{\mathrm{e}}^{-3T},\label{e:estgreenT8}\\ \mathbf{P}(\tilde\mathsf{T}_v> T,\mathcal{E}^{T,T'}_v)&\le C{\mathrm{e}}^{-3T'}.\label{e:estgreenT9} \end{align} Indeed, Lemma \ref{l:tail_bound_green} and our choice of $\kappa$ imply that \[\mathbf{P}(\mathsf{T}^{\mathrm{Tail},\varepsilon}_v\le T,\mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v)\ge 1-C{\mathrm{e}}^{-\kappa T\Lambda^-/\Lambda^+}.\] Moreover, Lemma \ref{l:green_asympt_halfspace} implies that \[\mathbf{P}(\mathcal{E}^{T^{1/\varepsilon},T'^{1/\varepsilon},\mathrm{Green}''}_v)\ge 1-C{\mathrm{e}}^{-T^2/C},\] and Lemma \ref{l:dist_to_cluster} implies a similar estimate for $\left\{\mathsf{R}^{\mathrm{Dist}}_v\le T^{1/\varepsilon}\right\}\cap \mathcal{E}^{T^{1/\varepsilon},\mathrm{Clust}}_v$. Combining these results, we find \[\mathbf{P}(\mathcal{E}^{T,T'}_v)\ge 1-C{\mathrm{e}}^{-3T}-C{\mathrm{e}}^{-T^2/C},\] which implies \eqref{e:estgreenT8} if we choose the constant there sufficiently large (depending on $\frac{\Lambda^+}{\Lambda^-}$ and $p$).
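The last absorption step uses that the Gaussian-type term is eventually dominated by the exponential one: for $T\ge 3C$ one has $T^2/C\ge 3T$, so ${\mathrm{e}}^{-T^2/C}\le {\mathrm{e}}^{-3T}$ and the two error terms combine (after enlarging the constant) into a single $C{\mathrm{e}}^{-3T}$. A one-line numeric sanity check of this elementary comparison (the value of $C$ is an arbitrary illustration):

```python
import math

C = 2.0
# for T >= 3C the Gaussian-type tail is dominated by the exponential one,
# since then T**2 / C >= 3 * T
for T in range(int(3 * C), 50):
    assert math.exp(-T**2 / C) <= math.exp(-3 * T)
print("dominated for all T >=", int(3 * C))
```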
Regarding \eqref{e:estgreenT9}, observe that if $\tilde\mathsf{T}_v>T$ and $\mathcal{E}^{T,T'}_v$ occurs, then by definition of $\mathsf{T}^{\mathrm{Tail},\varepsilon}_v$ we must have $\mathsf{R}^{\mathrm{Green}''}_v\ge T^{1/\varepsilon}$. But $\mathcal{E}^{T,T'}_v$ also implies that $\mathcal{E}^{T^{1/\varepsilon},T'^{1/\varepsilon},\mathrm{Green}''}_v$ occurs, and according to Lemma \ref{l:green_asympt_halfspace} we have \[\mathbf{P}(\mathsf{R}^{\mathrm{Green}''}_v\ge T^{1/\varepsilon},\mathcal{E}^{T^{1/\varepsilon},T'^{1/\varepsilon},\mathrm{Green}''}_v)\le C{\mathrm{e}}^{-T^2/C}.\] From this we obtain \eqref{e:estgreenT9}. Moreover, $\mathcal{E}^{T,T'}_v$ is $\mathcal{F}_{Q_{10T'^{1/\varepsilon}}(v)}$-measurable. Now the proof of \ref{a:sparseT} proceeds just like the proof of \ref{a:sparseR}. We choose $T'=\left(\frac{N}{L}\right)^{1/(2\varepsilon)}$. If $\tilde\mathsf{T}_v> T$ then either the complement of the local event $\mathcal{E}^{T,T'}_v$ or a very rare event occurs. The latter events can be controlled by a union bound, and the former events using Hoeffding's inequality and the locality. \end{proof} \begin{remark}\label{r:nearcritical} Lemma \ref{l:tail_bound_green} is the main step in the verification of \ref{a:logupp} and \ref{a:sparseT}. In view of Question \ref{q:nearcritical}, let us explain why this proof cannot work for $p$ close to $1/2$. For convenience, we only consider the (easiest) case where $\Lambda^+=\Lambda^-$, and recall that we denoted the corresponding ${\overline{\A}}$ by $a_p$ in that case.
Roughly speaking, we would need to prove that for a generic $v\in V_N(w)$ we have \[\mathbf{P}\left(\frac{1}{2\pi a_p}G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)\ge\frac{1}{2\pi a_p}(\log N+T)\right)\le C_p{\mathrm{e}}^{-(2+\varepsilon)T}\] for some $\varepsilon>0$ or equivalently, \begin{equation}\label{e:near_crit} \mathbf{P}\left(\frac{1}{2\pi a_p}G^{\mathbf{a}}_{V_N(w)}(v^*,v^*)\ge\frac{\log N}{2\pi a_p}+T\right)\le C_p{\mathrm{e}}^{-ca_pT} \end{equation} for a constant $c>4\pi$. There is a competition here: As $p\searrow1/2$, bad environments such as long pipes (which contribute variance of order of the length of the pipe) become more likely, so, for fixed $T$, the left-hand side should be increasing. On the other hand, $a_p$ goes to 0 as $p\searrow1/2$, so the right-hand side is also increasing as $p\searrow1/2$. In fact, critical scaling theory and numerical estimates suggest that $a_p\sim\left(p-1/2\right)^t$ as $p\searrow1/2$, where $t\approx\frac43\cdot0.9826\approx1.310$, see \cite{G99}. This means that as $p\searrow 1/2$, we have a good (albeit non-rigorous) prediction of the behavior of the right-hand side of \eqref{e:near_crit}. Unfortunately, it is unclear to us even heuristically how the left-hand side of \eqref{e:near_crit} behaves as $p\searrow1/2$, and whether a variant of our proof of Lemma \ref{l:tail_bound_green} could be made to work for $p$ close to $1/2$. If one wanted to extend our proof, one would need to understand the large-deviation behavior of the isoperimetric profile and of the volume of balls with respect to the chemical distance for $p$ close to $1/2$, and both tasks seem difficult. In fact, it might well be that our estimate of the effective resistance by the chemical distance is already too crude, and so new ideas might be necessary. 
\end{remark} \subsubsection*{Acknowledgements} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 692452). Florian Schweiger is also supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities. \small
\section{Introduction} We consider the following inhomogeneous incompressible three dimensional Navier-Stokes equations: \begin{equation}\label{1.1} \left\{\begin{array}{l} \displaystyle \pa_t\rho+\div(\rho v)=0,\qquad (t,x)\in\mathop{\mathbb R\kern 0pt}\nolimits^+\times\mathop{\mathbb R\kern 0pt}\nolimits^3,\\ \displaystyle\rho(\pa_t v+v\cdot\nabla v)-\Delta v+\nabla\pi=0,\\ \displaystyle \div v=0,\\ \displaystyle (\rho, v)|_{t=0}=(\rho_0, v_0), \end{array}\right. \end{equation} where $\rho\in\mathop{\mathbb R\kern 0pt}\nolimits^+,$ $v\in\mathop{\mathbb R\kern 0pt}\nolimits^3$ and $\pi\in\mathop{\mathbb R\kern 0pt}\nolimits$ stand for the density, velocity field and pressure of the fluid respectively. System \eqref{1.1} describes an incompressible fluid with variable density. Basic examples are mixtures of incompressible non-reactant flows, blood flows, models of rivers, fluids containing a melted substance, etc. The existence and uniqueness issues for the solutions of \eqref{1.1} have been studied by numerous mathematicians. We cite here, among many others, \cite{AKM,Lions96, Simon} for the construction of weak solutions of \eqref{1.1}, and the works \cite{AP,AKM,CK,D1,LS} for well-posedness results for strong solutions. Recently further progress has been made: the smallness condition on the density fluctuation was successfully removed in e.g. \cite{AGZ2, CHW}, a small jump of the density across some interface was permitted in e.g. \cite{DM12, HPZ}, and in the energy framework, \cite{PZZ} considered positive densities which are only assumed to be bounded from above and below. In \cite{Lions96} P.-L. Lions proposed the following density patch problem: if the initial density is $\rho_0=\mathbf{1}_D$ for some smooth domain $D$, does the boundary regularity of $D$ persist under the time evolution? In \cite{LZ, LZ2}, the first author and P. Zhang considered this density patch problem in space dimension two, away from vacuum.
More precisely, in \cite{LZ} the initial density $\rho_0$ is taken of the following form \begin{equation}\label{rho0} {\rho_0}={(1-\eta)\bf 1}_{\Om_0}+ {\bf 1}_{\Om_0^c}, \quad 1-\eta\in\mathop{\mathbb R\kern 0pt}\nolimits^+, \end{equation} where $|\eta|$ is a sufficiently small constant, and $\Omega_0$ is some simply connected bounded domain in $\mathop{\mathbb R\kern 0pt}\nolimits^2$ of class $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2)$, $k\geqslant 1,$ $p\in ]2,4[$. Let $X_0 \in W^{k+1,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2)$ be the divergence-free tangential vector field of $\partial\Omega_0$ \footnote{Let $g_0\in W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2)$ be such that $\pa \Omega_0=g_0^{-1}(0)$ and $\nabla g_0$ does not vanish on $\pa \Omega_0$. Then we can choose $X_0=\nabla^\bot g_0$.} and denote by $\p_{X_0}f\buildrel\hbox{\footnotesize def}\over = X_0\cdot\nabla f=\div(fX_0)$ the derivative of $f$ in the direction of $X_0$. Then they proved the following result on the persistence of boundary regularity: \begin{thm}\label{thm1.1} {\sl Suppose that the initial velocity $v_0$ of \eqref{1.1} satisfies the following conormal regularities for some $\epsilon\in ]0,1[$ \begin{equation}\label{LZv0} \begin{split} v_0\in W^{1,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2) \quad\hbox{with}\quad \p_{X_0}^\ell v_0\in W^{1-\f{\ell}k\epsilon,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2), \quad \ell=1,\cdots, k. \end{split}\end{equation} Then the Cauchy problem \eqref{1.1}-\eqref{rho0}-\eqref{LZv0} has a unique global solution $(\r, v)$ so that $\r(t,x)={(1-\eta)\bf 1}_{\Om(t)}+{\bf 1}_{\Om(t)^c}$ for some simply connected $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2)$ domain $\Om(t)$.
} \end{thm} The smallness condition on $|\eta|$ was successfully removed in \cite{LZ2}, with the initial velocity $v_0$, together with its conormal derivatives $\partial_{X_0}^\ell v_0$, taken as $$v_0\in L^2\cap\dot{B}^{s_0}_{2,1}, \quad \partial_{X_0}^\ell v_0\in L^2\cap\dot{B}^{s_\ell}_{2,1}\quad\mbox{for}\ \ell=1,\cdots,k, $$ where $s_0\in ]0,1[,$ $ s_\ell\buildrel\hbox{\footnotesize def}\over = s_0-\epsilon_1{ \ell}/{k}$ for some fixed $\epsilon_1\in ]0,s_0[,$ and $p\in \bigl]2,{2}/{(1-s_k)}\bigr[.$ The authors there made thorough use of time-weighted energy estimates to propagate the boundary and velocity regularity as in Theorem \ref{thm1.1}, with $v\in C(\mathop{\mathbb R\kern 0pt}\nolimits^+;\dot B^{s_0}_{2,1})$. The purpose of this paper is to extend Theorem \ref{thm1.1} to the three dimensional case, for the density patch problem with small jump \eqref{rho0}, but not in the finite-energy framework. One first notices that the boundary of a two dimensional domain is a one dimensional curve, whose tangent space is spanned by a single tangent vector (e.g. by $X_0$ given above). The boundary of a three dimensional domain, however, is a two dimensional surface, whose tangent space has dimension two; hence, in order to propagate boundary regularity of arbitrarily high order, we need to consider differentiation in several directions. Recall that conormal (or striated) distribution spaces have been successfully used by J.-Y. Chemin \cite{Chemin91, Chemin93} for studying the evolution of the boundary regularity of the two-dimensional vortex patch problem for the Euler equations (see also \cite{BC93}). The subsequent works \cite{D97,Hmidi05} considered the viscous case, and the works \cite{GR,ZQ} extended the result to the three dimensional case. We here follow \cite{GR} to select ``good'' tangent vector directions to work with.
We first adopt the following definition of admissible systems introduced in \cite{GR}: \begin{defi}[Definition 3.1 of \cite{GR}]\label{def1.1} {\sl Any system $W=\{W_1,\cdots,W_N\}$ composed of $N$ continuous vector fields is said to be \emph{admissible} if the function $$[W]^{-1}\buildrel\hbox{\footnotesize def}\over = \Bigl( \frac{2}{N(N-1)}\sum_{\mu<\nu}|W_\mu\wedge W_\nu|^2 \Bigr)^{-\frac 14}$$ is bounded. Here, for any two vector fields $X=(X^1, X^2, X^3)$, $Y=(Y^1, Y^2, Y^3)$, the notation $X\wedge Y$ is defined as follows \begin{equation*} X\wedge Y=\left( X^2 Y^3-X^3 Y^2, X^3Y^1-X^1 Y^3, X^1Y^2-X^2 Y^1 \right)^T. \end{equation*} } \end{defi} Now we can select five divergence-free tangential vector fields for the two dimensional boundary according to the following result from \cite{GR}: \footnote{Proposition 3.2 in \cite{GR} concerns the framework of H\"older spaces; however, the proof also works in the framework of Sobolev spaces.} \begin{prop}[Proposition 3.2 of \cite{GR}]\label{prop1.1} For any $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$ two dimensional compact submanifold $\Sigma$ of $\mathop{\mathbb R\kern 0pt}\nolimits^3$, we can find an admissible system consisting of five $W^{k+1,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$, divergence-free vector fields tangent to $\Sigma$. \end{prop} Before we state the density patch problem in three space dimensions, let us introduce the following notations, which will be used freely in what follows. For any system $W=\{W_1,\cdots,W_N\}$ composed of $N$ continuous vector fields, and for any multi-index $\alpha=(\alpha_1,\cdots,\alpha_m)$ of length $m$ with $\alpha_1,\cdots,\alpha_m\in\{1,\cdots,N\}$, we denote \begin{equation*}\label{1.3} \pa_W^\alpha=\pa_W^{(\alpha_1,\cdots,\alpha_m)} =\pa_{W_{\alpha_1}}\cdots\pa_{W_{\alpha_m}}. \end{equation*} We emphasize that the order of differentiation is important.
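Pointwise, admissibility simply requires that at every point at least one pair of the fields spans a non-degenerate parallelogram, so that $[W]^{-1}$ stays bounded. A minimal numerical sketch of Definition \ref{def1.1} at a single point (the vectors below are illustrative, not an actual tangent frame of some $\Sigma$):

```python
import numpy as np

def wedge(X, Y):
    """X wedge Y as in Definition 1.1 (componentwise, the cross product in R^3)."""
    return np.array([X[1] * Y[2] - X[2] * Y[1],
                     X[2] * Y[0] - X[0] * Y[2],
                     X[0] * Y[1] - X[1] * Y[0]])

def bracket_inv(Ws):
    """Evaluate [W]^{-1} at one point for a system Ws of N vectors in R^3."""
    N = len(Ws)
    s = sum(np.dot(wedge(Ws[i], Ws[j]), wedge(Ws[i], Ws[j]))
            for i in range(N) for j in range(i + 1, N))
    return (2.0 / (N * (N - 1)) * s) ** (-0.25)

# even if three of the five fields vanish at this point, one non-degenerate
# pair keeps [W]^{-1} finite
Ws = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
      np.zeros(3), np.zeros(3), np.zeros(3)]
print(bracket_inv(Ws))  # = (2/20)**(-1/4) = 0.1**(-0.25)
```

This is why five fields suffice on a compact surface: by a covering argument one can always arrange that some pair is uniformly non-degenerate on each chart.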
We furthermore denote \begin{equation*}\label{1.4} \pa_W^m=\bigl(\pa_W^\alpha\bigr)_\alpha \end{equation*} to be an $N^m$ dimensional vector, where $\alpha$ ranges over all multi-indices of length $m$ with entries taking integer values between $1$ and $N$. Now we can state the density patch problem in $\mathop{\mathbb R\kern 0pt}\nolimits^3$. Let us take the initial density $\rho_0$ of the form \eqref{rho0}, where $\Omega_0$ is a simply connected bounded domain of $\mathop{\mathbb R\kern 0pt}\nolimits^3$ such that its boundary $\partial\Omega_0$ is a $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3),$ $k\geqslant 1,$ $p\in ]3,\infty[,$ two dimensional compact submanifold of $\mathop{\mathbb R\kern 0pt}\nolimits^3$. By Proposition \ref{prop1.1}, we can find an admissible system as follows \begin{equation}\label{W0} \begin{split} &W_0=\{X_{1,0},X_{2,0},X_{3,0},X_{4,0},X_{5,0}\}, \\ &\quad\hbox{with } X_{i,0}\in W^{k+1,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3), \,k\geq 1, \,p\in ]3,\infty[, \quad\div X_{i,0}=0, \quad i=1,\cdots,5. \end{split} \end{equation} We assume the following smallness condition on the initial velocity \begin{equation}\label{HPZv0} \begin{split} & \|v_0 \|_{\dot{B}^{-1+\frac{3}{p_1}}_{p_1,r}} \leq c_0, \hbox{ for some } 1<p_1<3 \hbox{ and } 1<r<\min\{\frac{2p_1}{3(p_1-1)},\, \frac{4p}{2p-3},\,3 \}, \end{split} \end{equation} as well as the following (conormal) regularities on $v_0$ for some $\varepsilon\in ]0,1[$ \begin{equation}\label{v0} \begin{split} &v_0\in W^{1,p} \cap L^3, \quad\p_{W_0}^\ell v_0\in W^{1-\f{\ell}k\varepsilon,p},\quad \ell=1,\cdots, k. \end{split} \end{equation} Under this smallness condition on the initial data, we have the following theorem from \cite{HPZ}, which guarantees the existence of global weak solutions to \eqref{1.1}. \begin{thm}[Theorem 1.1 of \cite{HPZ}]\label{thm2.1} {\sl Let the initial data $(\rho_0, v_0)$ satisfy \eqref{rho0} and \eqref{HPZv0}.
If $|\eta|, c_0$ are sufficiently small, then \eqref{1.1} has a global weak solution $(\rho, v)$, and there exists a positive constant $C$ such that \begin{equation}\label{vLr}\begin{split} \|(\Delta v, \nabla\pi)\|_{L^r(\mathop{\mathbb R\kern 0pt}\nolimits^+;L^{\frac{3r}{3r-2}})}&+\|\nabla v\|_{L^{2r}(\mathop{\mathbb R\kern 0pt}\nolimits^+;L^{\frac{3r}{2r-1}})}\leq C c_0. \end{split}\end{equation} } \end{thm} In the following, we consider the propagation of boundary regularity for the density patch problem. The main result of this paper is stated as follows: \begin{thm}\label{thm1.2} {\sl Let the initial data $(\rho_0, v_0)$ satisfy \eqref{rho0}-\eqref{W0}-\eqref{HPZv0}-\eqref{v0}, with $|\eta|, c_0$ sufficiently small. Then \eqref{1.1} has a unique global solution $(\r, v)$ so that $\r(t,x)={(1-\eta)\bf 1}_{\Om(t)}+{\bf 1}_{\Om(t)^c}$ for some simply connected $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$ bounded domain $\Om(t).$ } \end{thm} \begin{rmk} \begin{itemize} \item[(i)]According to Theorem 1.1 of \cite{HPZ}, the smallness condition on the initial data \eqref{rho0} and \eqref{HPZv0} can be replaced by the following more general one \begin{equation*} \begin{split} \bigl( \|\rho_0-1\|_{L^\infty} +\|v_0^{\rm h}\|_{\dot{B}^{-1+\frac{3}{p_1}}_{p_1,r}}\bigr)\cdot \exp\bigl(C_r\|v_0^3\|^{2r}_{\dot{B}^{-1+\frac{3}{p_1}}_{p_1,r}}\bigr)\leq c_0. \end{split} \end{equation*} Here $C_r$ is some positive constant and $v_0^{\rm h}=(v_0^1, v_0^2)$ denotes the initial horizontal velocity. \item[(ii)]We mention that the initial vorticity $\omega_0=c\mathbf{1}_{\Omega_0}$ (i.e. a vortex patch) with $c$ sufficiently small implies the assumptions \eqref{HPZv0} and \eqref{v0} on $v_0$. Indeed, noting that $\Omega_0$ is bounded, we have $\|\omega_0\|_{L^q}=c|\Omega_0|^{\frac1q},\ \forall q\in]1,\infty[$.
Hence we can take the divergence-free initial velocity $v_0=(-\Delta)^{-1}\nabla\times\omega_0$ such that $\|\nabla v_0\|_{L^q} \leq C\frac{q^2}{q-1}\|\omega_0\|_{L^q}$ and $v_0\in L^s,\ \forall s\in]\frac32,\infty]$. In particular, the smallness condition \eqref{HPZv0} holds true with $p_1=2$ $$\|v_0\|_{\dot B^{\frac 12}_{2,1}} \leq C\|v_0\|_{W^{1,2}}\leq Cc,$$ provided $c$ is small enough. On the other hand, thanks to $\p_{W_0}^\ell \omega_0\equiv 0$, the conormal regularities $\p_{W_0}^\ell v_0\in W^{1-\f{\ell}k\varepsilon,p},\ \ell=0,\cdots, k$ were shown in Remark 1.1 of \cite{LZ}. \end{itemize}\end{rmk} Recall from \cite{LZ} that in the two dimensional case, if the velocity of the flow satisfies $v\in L^1_{\rm loc}\,(\mathop{\mathbb R\kern 0pt}\nolimits^+, W^{2,p})$, then proving the persistence of the $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2)$ regularity of the domain $\Omega_0$ is equivalent to showing that the tangential vector field $X$, which is transported by the flow from $X_0$, satisfies the following (conormal) regularities: $$ X,\,\partial_X X, \cdots,\partial_X^{k-1}X\in W^{2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^2), \ \mbox{where}\ \div X=0 \hbox{ and } \partial_X f\buildrel\hbox{\footnotesize def}\over = X\cdot\nabla f=\div(fX). $$ Here we consider the system $W(t)=\{X_1(t),\cdots,X_5(t)\}$ satisfying for any $1\leq i\leq 5$, \begin{equation}\label{W} \left\{\begin{array}{l} \displaystyle \pa_t X_i+v\cdot\nabla X_i=X_i\cdot\nabla v,\\ \displaystyle X_i(0,x)=X_{i,0}(x), \end{array}\right. \end{equation} and we claim that \\ \noindent\textbf{Claim:} Theorem \ref{thm1.2} follows, if the following fact holds true for any $1\leq i\leq 5$, \begin{equation}\label{claim} v\in L^1_{\rm loc}\,(\mathop{\mathbb R\kern 0pt}\nolimits^+, W^{2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)), \quad\pa_W^{\ell-1}X_i\in L^\infty_{\rm loc}\,(\mathop{\mathbb R\kern 0pt}\nolimits^+; W^{2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)),\quad \forall \ell\in \{1,\cdots, k\}.
\end{equation} The rest of this section is devoted to proving this claim, and the next section is devoted to showing \eqref{claim}. The procedure of the proofs is similar to the one in \cite{LZ}, but we pay attention to the differences caused by the change of the space dimension. Assume that \eqref{claim} is correct. Then the velocity field $v\in L^1_{{\rm loc}\,}(\mathop{\mathbb R\kern 0pt}\nolimits^+;{\rm Lip}\,)$, and the uniqueness of the weak solutions in Theorem \ref{thm2.1} can be proved by using the Lagrangian formulation of \eqref{1.1} as in \cite{DM12, HPZ}. We omit the details here. Let $\psi(t,x)$ be the flow associated with the velocity field $v$, that is, \begin{equation}\label{psi} \left\{\begin{array}{l} \displaystyle \pa_t\psi(t,x)=v(t,\psi(t,x)),\\ \displaystyle \psi(0,x)=x, \end{array}\right. \end{equation} then in view of \eqref{claim}, we have \begin{equation*} \psi(t,\cdot)-Id\in L^\infty_{{\rm loc}\,}(\mathop{\mathbb R\kern 0pt}\nolimits^+;W^{2,p}), \end{equation*} and hence $\Omega(t)\buildrel\hbox{\footnotesize def}\over =\psi(t,\Omega_0)$ is a $W^{2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$ domain. Moreover, the first equation of \eqref{1.1} gives \begin{equation}\label{rho} \rho(t,x)={(1-\eta)\bf 1}_{\Om(t)}+{\bf 1}_{\Om(t)^c}, \quad 1-\eta\in \mathop{\mathbb R\kern 0pt}\nolimits^+. \end{equation} On the other hand, there exists a finite number of charts $\{\mathbf{V}^\beta\}_{\beta=1}^m$ covering the two dimensional $W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$ compact submanifold $\pa\Omega_0$ such that we can parametrize any one of them, say $\mathbf{V}^1$, as follows \begin{equation*}\label{1.6} \phi_1: \mathbf{U}^1\rightarrow \mathbf{V}^1, \quad\hbox{via}\quad (r,s)\mapsto\phi_1(r,s), \quad \phi_1\in W^{k+2,p}(\mathbf{U}^1), \end{equation*} where $\mathbf{U}^1$ is an open set in $\mathop{\mathbb R\kern 0pt}\nolimits^2$ and $\mathbf{V}^1$ is a $\partial\Omega_0$-open set in $\mathop{\mathbb R\kern 0pt}\nolimits^3$.
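Since $\div v=0$, the flow $\psi(t,\cdot)$ defined by \eqref{psi} is volume preserving, which is the mechanism behind \eqref{rho}: the patch is simply transported. A minimal numerical sketch (with a hypothetical smooth divergence-free field in 2D for simplicity; RK4 time stepping and the finite-difference Jacobian are our own illustrative choices):

```python
import numpy as np

def v(t, x):
    # a hypothetical smooth, divergence-free velocity field (rigid rotation)
    return np.array([-x[1], x[0]])

def flow(x0, T, steps=1000):
    """Approximate psi(T, x0) for d/dt psi = v(t, psi), psi(0) = x0, via RK4."""
    x = np.array(x0, dtype=float)
    h = T / steps
    for n in range(steps):
        t = n * h
        k1 = v(t, x)
        k2 = v(t + h / 2, x + h / 2 * k1)
        k3 = v(t + h / 2, x + h / 2 * k2)
        k4 = v(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# div v = 0, so the flow preserves volume: the Jacobian determinant is 1
eps, T = 1e-5, 1.0
x0 = np.array([1.0, 0.5])
J = np.column_stack([(flow(x0 + eps * e, T) - flow(x0 - eps * e, T)) / (2 * eps)
                     for e in np.eye(2)])
print(np.linalg.det(J))  # close to 1
```

For this linear field the flow is a rotation, so the Jacobian is an exact rotation matrix up to the integration error.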
In order to show that $\pa\Omega(t)=\psi(t,\pa\Omega_0)\in W^{k+2,p}$, $k\geq 1$, it suffices to show (without loss of generality) \begin{equation*} \pa_r^{k_1}\pa_s^{k_2}\psi(t,\phi_1(r,s))\in L^\infty_{\rm loc}\,(W^{2,p}(\mathbf{U}^1)),\ \forall k_1+k_2=k. \end{equation*} Hence we only need to verify that \begin{equation}\label{YZ} (\pa_{Y_0}^{k_1}\pa_{Z_0}^{k_2}\psi)(t,\cdot)\in L^\infty_{\rm loc}\,(W^{2,p}(\mathbf{V}^1)),\ \forall k_1+k_2=k, \end{equation} where the tangent vector fields $Y_0, Z_0\in W^{k+1,p}(\mathbf{V}^1;\mathop{\mathbb R\kern 0pt}\nolimits^3)$ are defined by $$Y_0(\phi_1(r,s))=\pa_r\phi_1(r,s), \quad Z_0(\phi_1(r,s))=\pa_s\phi_1(r,s). $$ Indeed, a direct calculation gives for any $(r,s)\in \mathbf{U}^1$, \begin{align*} &\pa_r\psi(t,\phi_1(r,s)) =Y_0(\phi_1(r,s))\frac{\pa\psi(t,\phi_1(r,s))}{\pa x} =(\pa_{Y_0}\psi)(t,\phi_1(r,s)), \\ & \pa_s\psi(t,\phi_1(r,s)) =Z_0(\phi_1(r,s))\frac{\pa\psi(t,\phi_1(r,s))}{\pa x} =(\pa_{Z_0}\psi)(t,\phi_1(r,s)), \end{align*} and hence by an induction argument we obtain \begin{equation*}\label{5.6} \pa_r^{k_1}\pa_s^{k_2}\psi(t,\phi_1(r,s)) =(\pa_{Y_0}^{k_1}\pa_{Z_0}^{k_2}\psi)(t,\phi_1(r,s)), \hbox{ with }\phi_1\in W^{k+2,p}(\mathbf{U}^1). \end{equation*} Since the initial system $W_0$ given in \eqref{W0} is admissible, for any $(\widetilde r, \widetilde s)\in\mathbf{U}^1$, there exists a $\partial\Omega_0$-open set $\mathbf{V}_0\subset \mathbf{V}^1$ containing $\phi_1(\widetilde r,\widetilde s)$ such that (without loss of generality) \begin{equation*}\label{5.9} \inf_{x\in\mathbf{V}_0 }|X_{1,0}\wedge X_{2,0}|(x) >0.
\end{equation*} Thus we can decompose $Y_0$, $Z_0$ as linear combinations of $X_{1,0}$ and $X_{2,0}$ on $\mathbf{V}_0$, namely \begin{equation*}\label{5.10} Y_0=c_1 X_{1,0}+c_2 X_{2,0},\quad Z_0=d_1 X_{1,0}+d_2 X_{2,0}, \end{equation*} where the coefficients are defined by \begin{equation*}\label{5.11}\begin{split} &c_i=\frac{(Y_0,X_{i,0})|X_{j,0}|^2-(Y_0,X_{j,0})(X_{1,0},X_{2,0})}{|X_{1,0}\wedge X_{2,0}|^2},\\ &d_i=\frac{(Z_0,X_{i,0})|X_{j,0}|^2-(Z_0,X_{j,0})(X_{1,0},X_{2,0})}{|X_{1,0}\wedge X_{2,0}|^2}, \quad i,j=1,2,\, i\neq j. \end{split}\end{equation*} Furthermore, in view of the fact that $X_{1,0}, X_{2,0}, Y_0, Z_0\in W^{k+1,p}(\mathbf{V}^1)$, $k\geq 1$, $p>3$, we know that the coefficients $c_i, d_i$, $i=1,2$ belong to $W^{k+1,p}(\mathbf{V}_0)$, $k\geq 1$. Hence, without loss of generality, proving \eqref{YZ} reduces to proving \begin{equation*}\label{5.14} (\pa_{c_1 X_{1,0}+c_2 X_{2,0}}^{k_1}\pa_{d_1 X_{1,0}+d_2 X_{2,0}}^{k_2}\psi)(t,\cdot)\in L^\infty_{\rm loc}\,(W^{2,p}(\mathbf{V}_0)),\ \forall k_1+k_2=k, \end{equation*} and it suffices to show \begin{equation}\label{5.15} \pa_{(X_{1,0},X_{2,0})}^{k}\psi\in L^\infty_{\rm loc}\,(W^{2,p}(\mathbf{V}_0)). \end{equation} To do this, let us recall the definition \eqref{psi} of the flow $\psi$; then the vector field $X_i(t,x)$ defined by \eqref{W} can be written as \begin{equation}\label{5.17} X_i(t,x)=(\pa_{X_{i,0}}\psi)(t,\psi^{-1}(t,x)), \quad i=1,2. \end{equation} Hence for any function $f(t,x)\equiv g(t,\psi^{-1}(t,x))$, there holds \begin{equation*}\label{5.18}\begin{split} (\pa_{X_i}&f)(t,x)=(\pa_{X_{i,0}}\psi)(t,\psi^{-1}(t,x)) \cdot\nabla\psi^{-1}(t,x)\cdot(\nabla g )(t,\psi^{-1}(t,x))\\ &=X_{i,0}(t,\psi^{-1}(t,x))\cdot(\nabla\psi)(t,\psi^{-1}(t,x)) \cdot\nabla\psi^{-1}(t,x)\cdot(\nabla g )(t,\psi^{-1}(t,x))\\ &=X_{i,0}(t,\psi^{-1}(t,x))\cdot(\nabla g )(t,\psi^{-1}(t,x))\\ &=(\pa_{X_{i,0}} g)(t,\psi^{-1}(t,x)), \quad i=1,2.
\end{split}\end{equation*} Applying the above formula repeatedly to \eqref{5.17} yields \begin{equation*}\label{5.19}\begin{split} (\pa_{(X_1,X_2)}^{\alpha(k-1)}X_{\alpha_k})(t,x) = (\pa_{(X_{1,0},X_{2,0})}^{\alpha(k)}\psi)\,(t,\psi^{-1}(t,x)), \end{split}\end{equation*} for any multi-index $\alpha(k)=(\alpha_1,\cdots,\alpha_k)$ with $\alpha_1,\cdots,\alpha_k\in\{1,2\}$, where $\alpha(k-1)$ denotes $(\alpha_1,\cdots,\alpha_{k-1})$. Hence in order to prove \eqref{5.15}, we only need to show \begin{equation}\label{5.16} \pa_{(X_{1},X_{2})}^{k-1}(X_{1},X_{2})\in L^\infty_{\rm loc}\,\bigl(W^{2,p}(\mathbf{V}(t))\bigr), \quad \mathbf{V}(t)\buildrel\hbox{\footnotesize def}\over =\psi(t,\mathbf{V}_0). \end{equation} As \eqref{claim} guarantees \eqref{5.16}, we conclude that $\pa\Omega(t)\in W^{k+2,p}(\mathop{\mathbb R\kern 0pt}\nolimits^3)$ for any $t\in\mathop{\mathbb R\kern 0pt}\nolimits^+$, and hence Theorem \ref{thm1.2} follows. This completes the proof of the claim. \section{The proof of \eqref{claim}}\label{sec3} As in \cite{LZ}, in order to prove \eqref{claim}, we consider \begin{equation}\label{J0} \begin{split} J_0(t)\buildrel\hbox{\footnotesize def}\over = 1&+\|(\partial_t v,\Delta v,\nabla\pi)\|_{L^{r_0}_t(L^p)} +\|\nabla v\|_{L^{\sigma_r}_t(L^\infty)\cap L^{s_0}_t(L^p)} +\|v\|_{L^\infty_t(L^3)\cap L^{\sigma_s}_t(L^\infty)} , \end{split} \end{equation} and the following inductively defined quantities $J_\ell(t)$, $ \ell=1,\cdots,k,$ \begin{equation}\label{Jell} \begin{split} J_{\ell}(t)\buildrel\hbox{\footnotesize def}\over = &J_{\ell-1}(t) + \|(\partial_t \partial_W^{\ell } v,\Delta\partial_W^{\ell} v, \nabla\partial_W^{\ell }\pi)\|_{L^{r_{\ell }}_t(L^p)} +\|\nabla\partial_W^{\ell } v\|_{L^{r_{\ell }}_t(L^\infty) \cap L^{s_{\ell }}_t(L^p)} \\ &+\|\partial_W^{\ell } v\|_{L^{s_{\ell }}_t(L^\infty)\cap L^\infty_t(L^p)} + \|\partial_t \partial_W^{\ell-1}W\|_{L^{s_{\ell }}_t(W^{1,p})} + \|\partial_W^{\ell-1} W\|_{L^\infty_t(W^{2,p})}, \end{split}
\end{equation} where $r_\ell, s_\ell, \sigma_r, \sigma_s$ can be taken freely as long as \begin{equation}\label{r} r_\ell \in \bigl]1,\f{2k}{k+\ell\e}\bigr[, \quad s_\ell\in \bigl]2,\f{2k}{\ell\e}\bigr[, \quad\sigma_r\in \bigl]\f{2p}{p+3}, \frac {2p}{3}\bigr[, \,\sigma_s\in\bigl]\frac{4p-6}{p}, \infty[, \quad \ell=0,\cdots,k. \end{equation} Our aim in this section is to show the following estimates \begin{equation}\label{bound} J_\ell(t)\leq {\mathcal H}_\ell(t)\buildrel\hbox{\footnotesize def}\over = C_0\underbrace{\exp\cdots\exp}_{\ell\hbox{ times }}(C_0 t), \quad \forall \ell=0,\cdots,k, \quad \forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+, \end{equation} where $C_0$ denotes some positive constant which depends only on the initial data and may vary from line to line in what follows. It is obvious that \eqref{claim} can be deduced from the above estimates \eqref{bound}. The rest of this section is devoted to the proof of \eqref{bound}. In particular, Propositions \ref{prop3.1}, \ref{prop3.2} and \ref{prop4.1} below give \eqref{bound} for the cases $\ell=0$, $\ell=1$ and $\ell\geq 2$ respectively. \smallbreak Let us first notice the following fact. \begin{prop}\label{prop2.1} {\sl Assume the same hypotheses as in Theorem \ref{thm2.1} and $v_0\in L^3$. Then for the global weak solution given in Theorem \ref{thm2.1}, there exists a positive constant $C_0$ such that \begin{equation}\label{vL3} \|v(t)\|_{L^3}^3+\|\nabla |v|^{\f32}\|_{L^2_t(L^2)}^2\leq C_0(\|v_0\|_{L^3}^3+1), \quad\forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+. \end{equation} } \end{prop} \begin{proof} It suffices to prove \eqref{vL3} for smooth solutions of the equation \eqref{1.1} \footnote{Indeed, we can take approximate smooth solutions $(\rho_n, v_n, \pi_n)$ of the equation \eqref{1.1} with the mollified initial data $(1+S_n(\rho_0-1), S_nv_0)$ such that \eqref{vL3} holds uniformly in $n$. Then a passage to the limit implies Proposition \ref{prop2.1}.}.
Taking the $L^2$ inner product of the momentum equation in \eqref{1.1} with $v|v|$ gives \begin{equation}\label{2.6} \f13\f{d}{dt}\|\rho^{\f13}v(t)\|_{L^3}^3 -\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\Delta v\cdot v|v|dx +\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\nabla\pi\cdot v|v|dx=0. \end{equation} Integrating by parts, we have \begin{equation}\label{2.7} \begin{split} -\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\Delta v\cdot v|v|dx &=\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} |\nabla v|^2 |v|dx +\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} \nabla |v|\cdot\nabla v\cdot v dx \\ &\geq \frac12\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} \nabla|v|\cdot \nabla|v|^2 dx =\|\frac 23\nabla |v|^{\f32}\|_{L^2}^2. \end{split} \end{equation} Applying the three-dimensional interpolation inequality $$\|f\|_{L^q(\mathop{\mathbb R\kern 0pt}\nolimits^3)} \leq C\|f\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits^3)}^{\f3q-\frac 12}\|\nabla f\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits^3)}^{\frac 32-\f3q} \quad \mbox{for any}\quad q\in[2,6],$$ and Young's inequality, we obtain for $r\in [1,3]$ \begin{equation}\label{2.8} \begin{split} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\nabla\pi\cdot v|v|dx &\leq\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}\||v|^{\f32}\|_{L^{2r}}^{\f43}\\ &\leq\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}\||v|^{\f32}\|_{L^2}^{2\cdot\f{3-r}{3r}} \|\nabla |v|^{\f32}\|_{L^2}^{2\cdot\f{r-1}{r}}\\ &\leq\frac 19\|\nabla |v|^{\f32}\|_{L^2}^2 +C\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}^{r}\||v|^{\f32}\|_{L^2}^2+C\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}^{r}. \end{split}\end{equation} Substituting \eqref{2.7}, \eqref{2.8} into \eqref{2.6}, we get \begin{equation}\label{2.9} \f13\f{d}{dt}\|\rho^{\f13}v(t)\|_{L^3}^3+\frac 13\|\nabla |v|^{\f32}\|_{L^2}^2\leq C\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}^{r}\|v\|_{L^3}^3+C\|\nabla\pi\|_{L^{\f{3r}{3r-2}}}^{r}. \end{equation} Then we use Gronwall's inequality, the estimate \eqref{vLr} and \eqref{rho} to achieve \eqref{vL3}.
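For the reader's convenience, we record the pointwise identities behind the lower bound in \eqref{2.7} (on the set where $v\neq0$; the constant $\frac23$ comes from $|\nabla|v|^{\frac32}|^2=\frac94\,|v|\,|\nabla|v||^2$):

```latex
\begin{align*}
\partial_j|v| = \frac{v_i\,\partial_j v_i}{|v|}
  \quad\Longrightarrow\quad
  \nabla|v|\cdot\nabla v\cdot v &= |v|\,|\nabla|v||^2
  \quad\hbox{and}\quad |\nabla v|^2\,|v| \ge |\nabla|v||^2\,|v|,\\
\Longrightarrow\quad
\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\bigl(|\nabla v|^2|v|
  + \nabla|v|\cdot\nabla v\cdot v\bigr)\,dx
  &\ge 2\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} |v|\,|\nabla|v||^2\,dx
  = \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} \nabla|v|\cdot\nabla|v|^2\,dx\\
  &\ge \frac12\int_{\mathop{\mathbb R\kern 0pt}\nolimits^3} \nabla|v|\cdot\nabla|v|^2\,dx
  = \Bigl\|\frac23\,\nabla|v|^{\frac32}\Bigr\|_{L^2}^2 .
\end{align*}
```

Here the first inequality is the Cauchy--Schwarz inequality in the component index $i$ (a diamagnetic-type bound), and the last equality follows from $\nabla|v|\cdot\nabla|v|^2=2|v|\,|\nabla|v||^2$.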
\end{proof} We will also frequently use the following two lemmas. \begin{lem}\label{lem3.1} {\sl Let $p\in [\frac 32,\infty[$, $r\in ]1,2[$, $s\in ]2, \infty]$ and $q\in \bigl]\f{4p}{2p+3}, \frac{4p}{3}\bigr[.$ Let $v_0\in W^{1,p}$ and $ v_L(t)\buildrel\hbox{\footnotesize def}\over = e^{t\D}v_0.$ Then there exists some positive constant $C$ such that \begin{equation}\label{3.1} \|\D v_L\|_{L^{r}_t(L^p)}+\bigl\|\nabla v_L\bigr\|_{L^{s}_t(L^p)}+\|\nabla v_L\|_{L^q_t(L^{2p})}\leq C\|v_0\|_{W^{1,p}}, \quad \forall t\in \mathop{\mathbb R\kern 0pt}\nolimits^+. \end{equation}} \end{lem} \begin{lem}\label{lem3.2} {\sl Let $p\in [\frac 32,\infty[$ and $r\in ]1,2[$, then there exists some positive constant $C$ such that \begin{equation}\label{3.2} \begin{split} \Bigl\| \int^t_0 \Delta e^{(t-t')\Delta}& f(t')\,dt' \Bigr\|_{L^r_T (L^p)}+ \Bigl\| \int^t_0 \nabla e^{(t-t')\Delta} f(t')\,dt' \Bigr\|_{L^{\frac{2r}{2-r}}_T (L^p)}\\ &+\Bigl\| \int^t_0 \nabla e^{(t-t')\Delta} f(t')\,dt' \Bigr\|_{L^{q}_T (L^{2p})} \leq C\|f\|_{L^r_T(L^p)}, \quad\forall T\in \mathop{\mathbb R\kern 0pt}\nolimits^+, \end{split} \end{equation} for $q$ given by $\f1q=\f1r- \bigl(\frac 12-\f{3}{4p}\bigr).$} \end{lem} \begin{proof} Lemma \ref{lem3.1} and Lemma \ref{lem3.2} can be proved exactly along the same lines as the proofs of Lemma 4.1 and Lemma 4.2 in \cite{LZ}, and Lemma 7.3 in \cite{LPG}. Hence we just sketch their proofs for the reader's convenience. The fact that $v_0\in W^{1,p}$ ensures $$ \nabla^2 v_0\in \dot B^{-1}_{p,\infty}\cap \dot B^{-2}_{p,\infty} \hookrightarrow \dot B^{-1-\sigma}_{p,1}, \quad \nabla v_0\in \dot B^{0}_{p,\infty}\cap \dot B^{-1}_{p,\infty} \hookrightarrow \dot B^{-\sigma}_{p,1} \hookrightarrow \dot B^{-\sigma-\frac{3}{2p}}_{2p,1}, \quad \forall \sigma\in ]0,1[.
$$ Hence by the characterisation of the Besov spaces with negative index, we know that for all $\sigma\in ]0,1[$, \begin{align*} \|t^{\frac 12+\frac \sigma2-\frac 1r} e^{t\Delta}\nabla^2 v_0\|_{L^r(\mathop{\mathbb R\kern 0pt}\nolimits^+;L^p)} &+ \|t^{\frac \sigma2-\frac 1s} e^{t\Delta}\nabla v_0\|_{L^s(\mathop{\mathbb R\kern 0pt}\nolimits^+;L^p)} \\ &+\|t^{\frac \sigma2+\frac{3}{4p}-\frac 1q}e^{t\Delta}\nabla v_0\|_{L^q(\mathop{\mathbb R\kern 0pt}\nolimits^+;L^{2p})} \leq C\|\nabla v_0\|_{W^{1,p}}. \end{align*} This together with the fact $\|e^{t\Delta }\nabla v_0\|_{L^p}\leq C\|\nabla v_0\|_{L^p}$ ensures \eqref{3.1}. To prove \eqref{3.2} it suffices to write $$ e^{(t-t')\Delta} f(t')=\frac{1}{\sqrt{t-t'}^3}K(\frac{\cdot}{\sqrt{t-t'}})\ast f(t', \cdot), $$ with $K$ denoting the inverse Fourier transform of $e^{-|\xi|^2}$, and then to apply Young's inequality, first in the space variable and then in the time variable. \end{proof} Now we come to the proof of \eqref{bound}, and we first consider $J_0(t)$. \begin{prop}\label{prop3.1} {\sl Assume the hypothesis in Theorem \ref{thm1.2}. Then the estimate \eqref{bound} holds true for $\ell=0$: \begin{equation}\label{3.4} \begin{split} J_0(t)\leq C_0, \quad \forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+. \end{split} \end{equation}} \end{prop} \begin{proof} We follow the idea in the proof of Proposition 4.2 in \cite{LZ}. We first introduce $a\buildrel\hbox{\footnotesize def}\over = 1/\rho-1$ so that the system \eqref{1.1} translates into the system for the unknowns $(a, v, \nabla\pi)$ \begin{equation}\label{2.1} \quad\left\{\begin{array}{lll} \displaystyle \pa_t a + v \cdot \nabla a=0 \qquad (t,x)\in\mathop{\mathbb R\kern 0pt}\nolimits^+\times\mathop{\mathbb R\kern 0pt}\nolimits^3, \\ \displaystyle \pa_t v + v \cdot \nabla v+ (1+a)(\nabla\pi-\D v)=0, \\ \displaystyle \mathop{\rm div}\nolimits\, v = 0, \\ \displaystyle (a, v)|_{t=0}=(a_0, v_{0}). \end{array}\right.
\end{equation} For any $\lambda>0$ and any function $g(t)$, we denote \begin{equation*} \label{3.5} g_\lambda(t)\buildrel\hbox{\footnotesize def}\over = g(t)\exp\bigl(-\lambda\int_0^t V(t')\,dt'\bigr), \quad \hbox{ with }V(t)\buildrel\hbox{\footnotesize def}\over = \|v(t)\|_{L^{2p}}^{\f{4p}{2p-3}}\geq 0. \end{equation*} Then by virtue of \eqref{2.1}, $v_\lambda$ satisfies \begin{equation}\label{vlambda} \partial_t v_\lambda+ \lambda V(t)v_\lambda=(\partial_t v)_\lambda =\Delta v_\lambda +(-v\cdot\nabla v_\lambda +a\Delta v_\lambda-(1+a)\nabla\pi_\lambda), \end{equation} and also \begin{equation} \label{3.6} v_\lambda(t)=e^{t\Delta}v_{0,\lambda} +\int^t_0 e^{-\lambda\int_{t'}^t V(t'')\,dt''} e^{(t-t')\Delta} \Bigl( -v\cdot\nabla v_\lambda +a\Delta v_\lambda-(1+a)\nabla\pi_\lambda \Bigr)(t')\,dt'. \end{equation} Taking the space divergence of \eqref{vlambda} gives $$ \D\pi_\lambda=-\mathop{\rm div}\nolimits(v\cdot\nabla v_\lambda)+\mathop{\rm div}\nolimits\bigl(a(\D v_\lambda-\nabla\pi_\lambda)\bigr), $$ from which and the fact that $\|a\|_{L^\infty}=\frac{|\eta|}{1-|\eta|}$ is sufficiently small, we infer \begin{equation}\label{3.7} \|\nabla\pi_\lambda(t)\|_{L^p}\leq C\bigl(\|v\cdot\nabla v_\lambda(t)\|_{L^p}+|\eta|\|\D v_\lambda(t)\|_{L^p}\bigr).
\end{equation} In view of \eqref{3.6}, we get, by applying Lemma \ref{lem3.1}, Lemma \ref{lem3.2} and \eqref{3.7}, that \begin{eqnarray*} \begin{split} \|\D v_\lambda&\|_{L^{r_0}_t(L^p)} +\|\nabla v_\lambda\|_{L^{\f{2r_0}{2-r_0}}_t(L^p)} +\|\nabla v_\lambda\|_{L^{q_0}_t(L^{2p})} \\ \leq &\|\D v_L \|_{L^{r_0}_t(L^p)} +\|\nabla v_L \|_{L^{\f{2r_0}{2-r_0}}_t(L^p)} +\|\nabla v_L \|_{L^{q_0}_t(L^{2p})} \\ &+C\biggl(\int_0^te^{-\lambda r_0\int_{t'}^t V(t'')\,dt''}\Bigl(\|v\cdot\nabla v_\lambda(t')\|_{L^p}^{r_0}+\|a(t')\|_{L^\infty}^{r_0}\|\D v_\lambda(t')\|_{L^p}^{r_0}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+\bigl(1+\|a(t')\|_{L^\infty}^{r_0}\bigr)\|\nabla\pi(t')\|_{L^p}^{r_0}\Bigr)\,dt'\biggr)^{\f{1}{r_0}}\\ \leq& C\biggl(\|v_0\|_{W^{1,p}}+|\eta|\|\D v_\lambda\|_{L^{r_0}_t(L^p)} +\Bigl(\int_0^te^{-\lambda r_0\int_{t'}^t V(t'')\,dt''}\|v\cdot\nabla v_\lambda(t')\|_{L^p}^{r_0}\,dt'\Bigr)^{\f{1}{r_0}}\biggr), \end{split} \end{eqnarray*} where $v_L=e^{t\Delta}v_0$, $\f{1}{q_0}=\f{1}{r_0}- \bigl(\frac 12-\f{3}{4p}\bigr)$ and $r_0$ is defined in \eqref{r}. Since \begin{eqnarray*} \begin{split} \Bigl(\int_0^t&e^{-\lambda r_0\int_{t'}^t V(t'')\,dt''}\|v\cdot\nabla v_\lambda(t')\|_{L^p}^{r_0}\,dt'\Bigr)^{\f{1}{r_0}}\\ \leq &\Bigl(\int_0^te^{-\f{4p }{2p-3}\lambda \int_{t'}^t V(t'')\,dt''}\|v(t')\|_{L^{2p}}^{\f{4p}{2p-3}}\,dt'\Bigr)^{\f{2p-3}{4p}}\|\nabla v_\lambda\|_{L^{q_0}_t(L^{2p})} \leq \f{C}{\lambda ^{\f{2p-3}{4p}}}\|\nabla v_\lambda\|_{L^{q_0}_t(L^{2p})}, \end{split} \end{eqnarray*} we can take $|\eta|$ small enough and $\lambda$ large enough to achieve \begin{equation} \label{3.9} \|\D v_\lambda\|_{L^{r_0}_t(L^p)} +\|\nabla v_\lambda\|_{L^{\f{2r_0}{2-r_0}}_t(L^p)} +\|\nabla v_\lambda\|_{L^{q_0}_t(L^{2p})} \leq C\|v_0\|_{W^{1,p}} .
\end{equation} By use of the estimates \eqref{vLr} and \eqref{vL3}, we deduce from the interpolation inequality that \begin{equation}\label{3.12} \bigl(\int^t_0 V(t')\,dt'\bigr)^{\frac{2p-3}{4p}} \equiv\|v\|_{L_t^{\f{4p}{2p-3}}(L^{2p})} \leq C\|v\|_{L_t^{\infty}(L^3)}^{1-\theta}\|\Delta v\|_{L_t^r(L^{\f{3r}{3r-2}})}^\theta \leq C_0, \end{equation} where $\theta=\f{(2p-3)r}{4p}\in]0,1[$. Thus we deduce from \eqref{3.9} that \begin{equation}\label{3.13} \begin{split} \|\D v\|_{L^{r_0}_t(L^p)} +&\|\nabla v\|_{L^{\f{2r_0}{2-r_0}}_t(L^p)}+\|\nabla v\|_{L^{q_0}_t(L^{2p})}\\ \leq& C\|v_0\|_{W^{1,p}}\cdot e^{\lambda \int_0^t V(t')\,dt'}\leq C\|v_0\|_{W^{1,p}}\cdot e^{\lambda C_0}\leq C_0. \end{split} \end{equation} This together with \eqref{3.7} and \eqref{3.12} ensures that \begin{equation}\label{3.14} \begin{split} \|\nabla\pi\|_{L^{r_0}_t(L^p)}\leq C\Bigl(\|v\|_{L^{\f{4p}{2p-3}}_t(L^{2p})}\|\nabla v\|_{L^{q_0}_t(L^{2p})}+|\eta|\|\D v\|_{L^{r_0}_t(L^p)}\Bigr)\leq C_0, \end{split} \end{equation} and hence we deduce from the velocity equation in \eqref{2.1} that \begin{equation}\label{3.14b} \begin{split} \|\partial_t v\|_{L^{r_0}_t(L^p)} \leq C_0. \end{split} \end{equation} Moreover, for any $p>3,\ r_0\in]1,2[$, there holds \begin{equation}\label{3.15} \|\nabla v\|_{L_t^{\f{2pr_0}{2p+3r_0-pr_0}}(L^\infty)}\leq C\|\nabla v\|_{L^{\f{2r_0}{2-r_0}}_t(L^p)}^{1-\f3p}\|\D v\|_{L^{r_0}_t(L^p)}^{\f3p}\leq C_0. \end{equation} It is easy to observe that when $r_0$ varies from 1 to 2, we have $\sigma_r\buildrel\hbox{\footnotesize def}\over =\f{2pr_0}{2p+3r_0-pr_0}\in\bigl]\f{2p}{p+3},\f{2p}{3} \bigr[$. Similarly we deduce \begin{equation}\label{3.16} \|v\|_{L_t^{\f{2p-3}{p}\cdot\f{2r_0}{2-r_0}}(L^\infty)}\leq C\|v\|_{L_t^{\infty}(L^3)}^{\f{p-3}{2p-3}}\|\nabla v\|_{L_t^{\f{2r_0}{2-r_0}}(L^p)}^{\f{p}{2p-3}}\leq C_0, \end{equation} and $\sigma_s\buildrel\hbox{\footnotesize def}\over = \f{r_0(4p-6)}{p(2-r_0)} \in\bigl]\f{4p-6}{p},\infty \bigr[$. 
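The exponents $\sigma_r$ and $\sigma_s$ obtained above can be double-checked by elementary arithmetic. For instance, for \eqref{3.15}, the interpolation $\|\nabla v\|_{L^\infty}\leq C\|\nabla v\|_{L^p}^{1-\frac3p}\|\Delta v\|_{L^p}^{\frac3p}$ (valid since $p>3$) together with H\"older's inequality in time forces

```latex
\begin{align*}
\frac{1}{\sigma_r}
 &= \Bigl(1-\frac3p\Bigr)\cdot\frac{2-r_0}{2r_0} + \frac3p\cdot\frac{1}{r_0}
  = \frac{(2-r_0)(p-3)+6}{2p\,r_0}
  = \frac{2p+3r_0-p\,r_0}{2p\,r_0}\,.
\end{align*}
```

That is, $\sigma_r=\frac{2p\,r_0}{2p+3r_0-p\,r_0}$, which tends to $\frac{2p}{p+3}$ as $r_0\to1$ and to $\frac{2p}{3}$ as $r_0\to2$, in agreement with the range stated in \eqref{r}; the range of $\sigma_s$ follows from the same kind of computation applied to \eqref{3.16}.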
In view of \eqref{J0}, we deduce \eqref{3.4} from the estimates \eqref{vL3}, \eqref{3.13}, \eqref{3.14}, \eqref{3.14b}, \eqref{3.15}, \eqref{3.16}. Hence the proposition follows. \end{proof} Next we shall prove the estimate \eqref{bound} for $\ell=1$. \begin{prop}\label{prop3.2} {\sl Assume the hypothesis in Theorem \ref{thm1.2}. Then the estimate \eqref{bound} holds true for $\ell=1$: \begin{equation}\label{3.22} \begin{split} J_{1}(t) \leq{\mathcal H}_1(t), \quad\forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+. \end{split} \end{equation} } \end{prop} \begin{proof} This proposition can be proved exactly along the same lines as the proof of Proposition 5.1 in \cite{LZ}. We sketch the proof here for the reader's convenience. Firstly, we know from the initial density assumptions \eqref{rho0} and \eqref{W0} that $$ \partial_{W_0}^\ell \rho_0=\partial_{W_0}^\ell a_0\equiv 0, \hbox{ with }a_0=\rho_0^{-1}-1, \quad \ell=1,\cdots,k. $$ On the other hand, the equation \eqref{W} ensures that the operators $\partial_{X_i}$ and $(\partial_t+v\cdot\nabla)$ commute, and hence in view of \eqref{1.1}, $\partial_{X_i}^\ell \rho, \partial_{X_i}^\ell a$ satisfy the free transport equations $$ (\partial_t+v\cdot\nabla) (\partial_{X_i}^\ell\rho)=0, \quad (\partial_t+v\cdot\nabla) (\partial_{X_i}^\ell a)=0,\quad \forall\ell=1,\cdots,k, \quad \forall i=1,\cdots,5. $$ Therefore we derive that \begin{equation}\label{bound:a} \partial_{W}^\ell \rho\equiv0, \quad \partial_W^\ell a\equiv 0, \quad \forall \ell=1,\cdots,k.
\end{equation} We next deduce from the equation \eqref{W} and Proposition \ref{prop3.1} that \begin{equation}\label{X:W1p} \begin{split} \| X_i(t)\|_{W^{1,p}} \leq \|X_{i,0}\|_{W^{1,p}} \exp\Bigl(C\int_0^t\|\nabla v(t')\|_{W^{1,p}}\,dt'\Bigr) \leq {\mathcal H}_1(t), \quad \forall i=1,\cdots,5, \end{split} \end{equation} and that \begin{equation}\label{X:W2p} \begin{split} \| X_i(t)\|_{W^{2,p}} &\leq\Bigl( \|X_{i,0}\|_{W^{2,p}}+\|\Delta \partial_{X_i} v\|_{L^1_t(L^p)} \Bigr) \exp\Bigl(C\int_0^t\|\nabla v(t')\|_{W^{1,p}}\,dt'\Bigr) \\ &\leq {\mathcal H}_1(t)(1+\|\Delta \partial_{X_i} v\|_{L^1_t(L^p)} ), \quad \forall i=1,\cdots,5. \end{split} \end{equation} We apply the operator $\partial_{X_i}$, $1\leq i\leq 5$, to the velocity equation in \eqref{2.1} to get the equation for $\partial_{X_i} v$: \begin{align}\label{4.13} \partial_t (\partial_{X_i} v) &+v\cdot\nabla(\partial_{X_i} v) +(1+a)(\nabla\partial_{X_i} \pi-\Delta \partial_{X_i} v)\notag \\ &=F_1(v,\pi,(i))\buildrel\hbox{\footnotesize def}\over = (1+a)\bigl(\nabla X_i\cdot\nabla \pi-\Delta X_i\cdot\nabla v -2\nabla X_i:\nabla^2 v\bigr), \end{align} and hence, similarly to \eqref{3.6}, we write \begin{align*} (\partial_{X_i} v)(t)= &e^{t\Delta}(\partial_{X_{i,0}} v_0) \\ &+\int^t_0 e^{(t-t')\Delta} \Bigl( -v\cdot\nabla(\partial_{X_i} v) +a\Delta (\partial_{X_i} v) -(1+a)\nabla(\partial_{X_i} \pi)+F_1(v,\pi,(i)) \Bigr)(t')\,dt'. \end{align*} Furthermore, a direct calculation shows that $(\partial_{X_i} \pi)$ satisfies (noticing that $\div X_{i}=0$) \begin{align*} \div((1+a)\nabla(\partial_{X_i} \pi)) &=\div\Bigl( -\partial_t X_i\cdot\nabla v-\partial_t v\cdot\nabla X_i-v\cdot\nabla(\partial_{X_i} v) \\ &\qquad +\Delta X_i\cdot\nabla v+2\nabla X_i:\nabla^2 v +\Delta v\cdot\nabla X_i+a\Delta(\partial_{X_i} v)+F_1(v,\pi,(i))\Bigr).
\end{align*} By use of Proposition \ref{prop3.1} and \eqref{X:W1p} we get the following estimate \begin{align}\label{bound:pi1} \|(v\cdot\nabla(\partial_{X_i}v), &\nabla (\partial_{X_i} \pi), F_1(v,\pi,(i)))\|_{L^{r_1}_t(L^p)} \leq {\mathcal H}_1(t) +C|\eta|\|\Delta (\partial_{X_i} v)\|_{L^{r_1}_t(L^p)}\notag \\ &\quad +C\Bigl(\int_0^t \bigl(\|\nabla v(t')\|_{W^{1,p}}^{r_1} + \|(v\otimes\nabla v(t'), \nabla\pi(t'))\|_{L^p}^{r_1} \bigr) \|\nabla X_i(t')\|_{W^{1,p}}^{r_1}\Bigr)^{\f1{r_1}}, \end{align} with $r_1$ defined in \eqref{r}. Therefore, noticing that $(\partial_{X_{i,0}} v_0 )\in W^{1-\frac \varepsilon k, p}$, we use Lemma \ref{lem3.1} and Lemma \ref{lem3.2} to achieve \begin{align*} &\|(\partial_t (\partial_{X_i} v), \Delta (\partial_{X_i} v), \nabla (\partial_{X_i} \pi))\|_{L^{r_1}_t(L^p)} +\|\nabla (\partial_{X_i} v)\|_{L^{\frac{2r_1}{2-r_1}}_t(L^p)} \\ &\leq {\mathcal H}_1(t) +C\Bigl(\int_0^t \bigl(\|\nabla v(t')\|_{W^{1,p}}^{r_1} + \|(v\otimes\nabla v(t'), \nabla\pi(t'))\|_{L^p}^{r_1} \bigr) \|\nabla X_i(t')\|_{W^{1,p}}^{r_1}\Bigr)^{\f1{r_1}}. \end{align*} We substitute the above estimate into \eqref{X:W2p} and use Proposition \ref{prop3.1} and Gronwall's inequality to arrive at $$ \|X_i\|_{L^\infty_t(W^{2,p})}\leq {\mathcal H}_1(t), \quad \forall i=1,\cdots,5. $$ By the definition \eqref{Jell}, the estimate \eqref{3.22} follows correspondingly. \end{proof} Before we consider the estimate \eqref{bound} for the case $\ell\geq 2$, we state the following commutator estimates, which can be viewed as generalisations of Lemma 6.1 in \cite{LZ}. \begin{lem}\label{lem4.1} {\sl For any $\ell\in\{1,\ldots,k\}$ and any system $W=\{X_1, \cdots, X_N\}$, let $\alpha(\ell)=(\alpha_1,\ldots,\alpha_\ell)$ be a multi-index of length $\ell$ with indices taking value in $\{1,\ldots,N\}$, and we denote $\widehat\alpha(i)=(\alpha_{\ell-i+1},\ldots,\alpha_\ell)$ for $i=0,\cdots,\ell$.
\\ Then there exists a positive constant $C$ such that $\forall\,r_\ell \in \bigl]1,\f{2k}{k+\ell\e}\bigr[$, $s_\ell\in\bigl]2,\f{2k}{\ell\e}\bigr[$, $\forall\,X\in W$, $\forall 1\leq i\leq\ell$, \begin{equation}\label{4.3}\begin{split} &\bigl\|\partial_W^{\alpha(i)}\nabla \partial_W^{\widehat\alpha(\ell-i)} X-\nabla \partial_W^{\alpha(\ell)} X\|_{L^\infty_t(W^{1,p})}+\bigl\|\partial_W^{\alpha(i)}\nabla^2\partial_W^{\widehat\alpha(\ell-i)} X-\nabla^2 \partial_W^{\alpha(\ell)} X\bigr\|_{L^\infty_t(L^p)}\\ &\qquad\qquad+\bigl\|\partial_W^{\alpha(i)}\partial_t\partial_W^{\widehat\alpha(\ell-i)} X-\partial_t\partial_W^{\alpha(\ell)} X\bigr\|_{L^{s_\ell}_t(W^{1,p})} \leq CJ_{\ell}^{\ell+1}, \end{split} \end{equation} and when $i\neq \ell$, one has \begin{equation} \label{4.4}\begin{split} &\bigl\|\partial_W^{\alpha(i)} \nabla \partial_W^{\widehat\alpha(\ell-i)} v-\nabla\partial_W^{\alpha(\ell)} v\bigr\| _{L^{r_{\ell-1}}_t(L^\infty)\cap L^{s_{\ell-1}}_t(L^p)}\\ &+\bigl\|\partial_W^{\alpha(i)} \nabla^2 \partial_W^{\widehat\alpha(\ell-i)} v-\nabla^2\partial_W^{\alpha(\ell)} v\bigr\| _{L^{r_{\ell-1}}_t(L^p)} +\bigl\|\partial_W^{\alpha(i)} \nabla \partial_W^{\widehat\alpha(\ell-i)} \pi-\nabla\partial_W^{\alpha(\ell)} \pi\bigr\|_{L^{r_{\ell-1}}_t(L^p) }\\ &+\bigl\|\partial_W^{\alpha(i)} \partial_t\partial_W^{\widehat\alpha(\ell-i)} v-\partial_t\partial_W^{\alpha(\ell)} v\| _{L^{r_{\ell-1}}_t(L^p) } \leq C J_{\ell-1}^{\ell+1} , \end{split} \end{equation} and when $i=\ell$, there holds \begin{equation}\label{4.5}\begin{split} &\bigl\|\partial_W^{\alpha(\ell)}\nabla v-\nabla\partial_W^{\alpha(\ell)} v +\partial_W^{\alpha(\ell-1)}\nabla X_{\alpha_\ell}\cdot\nabla v \bigr\|_{L^{r_{\ell-1}}_t(L^\infty)\cap L^{s_{\ell-1}}_t(L^p)} \leq C J_{\ell-1}^{\ell+1}, \\ &\bigl\|\partial_W^{\alpha(\ell)} \Delta v - \Delta\partial_W^{\alpha(\ell)} v +\partial_W^{\alpha(\ell-1)} \Delta X_{\alpha_\ell}\cdot\nabla v +2\partial_W^{\alpha(\ell-1)}\nabla X_{\alpha_\ell}:\nabla^2 
v\bigr\|_{L^{r_{\ell-1}}_t(L^p)} \leq C J_{\ell-1}^{\ell+1}, \\ &\bigl\|\partial_W^{\alpha(\ell)} \nabla \pi-\nabla\partial_W^{\alpha(\ell)} \pi +\partial_W^{\alpha(\ell-1)}\nabla X_{\alpha_\ell}\cdot\nabla\pi\bigr\| _{L^{r_{\ell-1}}_t(L^p) } \leq C J_{\ell-1}^{\ell+1},\\ &\bigl\|\partial_W^{\alpha(\ell)} \partial_t v-\partial_t\partial_W^{\alpha(\ell)} v +\partial_W^{\alpha(\ell-1)}\partial_t X_{\alpha_\ell}\cdot\nabla v\bigr\| _{L^{r_{\ell-1}}_t(L^p) } \leq C J_{\ell-1}^{\ell+1}. \end{split} \end{equation} Moreover, it follows from \eqref{Jell}, \eqref{4.3}, \eqref{4.4} and \eqref{4.5} that for any $0\leq i\leq\ell$, \begin{equation}\label{4.6}\begin{split} &\|\partial_W^{\alpha(i)}\nabla \partial_W^{\widehat\alpha(\ell-i)} X \|_{L^\infty_t(W^{1,p})} +\|\partial_W^{\alpha(i)}\nabla^2\partial_W^{\widehat\alpha(\ell-i)} X \|_{L^\infty_t(L^p)}\\ &\qquad\qquad+\|\partial_W^{\alpha(i)}\partial_t\partial_W^{\widehat\alpha(\ell-i)} X \|_{L^{s_{\ell+1}}_t(W^{1,p})} \leq C J_{\ell+1}^{\ell+1}, \end{split} \end{equation} and \begin{equation}\label{4.7} \begin{split} &\|\partial_W^{\alpha(i)} \nabla \partial_W^{\widehat\alpha(\ell-i)} v \| _{L^{r_{\ell }}_t(L^\infty)\cap L^{s_{\ell}}_t(L^p)} +\|\partial_W^{\alpha(i)} \nabla^2 \partial_W^{\widehat\alpha(\ell-i)} v \|_{L^{r_{\ell }}_t(L^p)} \\ &\qquad\qquad\qquad+\|\partial_W^{\alpha(i)} \nabla \partial_W^{\widehat\alpha(\ell-i)} \pi \| _{L^{r_{\ell }}_t(L^p) }+\|\partial_W^{\alpha(i)} \partial_t\partial_W^{\widehat\alpha(\ell-i)} v \| _{L^{r_{\ell }}_t(L^p) } \leq C J_{\ell}^{\ell+1}. \end{split} \end{equation} } \end{lem} \begin{proof} The calculation is similar to Lemma 6.1 of \cite{LZ}, so we just sketch it here. 
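Let us first record the elementary commutation formula between $\partial_Y=Y\cdot\nabla$ and a partial derivative, which generates all the lower-order terms of the form $\nabla X\cdot\nabla v$, $\nabla X\cdot\nabla\pi$ appearing in the statement above:

```latex
\begin{align*}
\partial_j(\partial_Y f)
  &= \partial_j(Y_k\,\partial_k f)
   = (\partial_j Y_k)\,\partial_k f + Y_k\,\partial_k(\partial_j f),\\
\Longrightarrow\quad
\partial_Y(\nabla f) - \nabla(\partial_Y f)
  &= -\,\nabla Y\cdot\nabla f,
  \qquad\hbox{where}\quad (\nabla Y\cdot\nabla f)_j
  \buildrel\hbox{\footnotesize def}\over = (\partial_j Y_k)\,\partial_k f .
\end{align*}
```

In particular $\|\partial_Y\nabla f-\nabla\partial_Y f\|_{L^p}\leq \|\nabla Y\|_{L^\infty}\|\nabla f\|_{L^p}$, and iterating this identity is what produces the product estimates in the proof below.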
Firstly, for any $X,Y\in W$, it is easy to observe that \begin{eqnarray*} \begin{split} &\|\p_Y\nabla X-\nabla\p_YX\|_{L^\infty_t(W^{1,p})}+\|\p_Y\nabla^2X-\nabla^2\p_YX\|_{L^\infty_t(L^p)}\\ &\qquad\qquad+\|\p_Y\p_tX-\p_t\p_YX\|_{L^{s_1}_t(W^{1,p})}\\ &\leq C\|\nabla X\|_{L^\infty_t(W^{1,p})}\bigl(\|\nabla Y\|_{L^\infty_t(W^{1,p})}+\|\p_tY\|_{L^{s_1}_t(W^{1,p})}\bigr)\leq CJ_1^2. \end{split} \end{eqnarray*} This shows that \eqref{4.3} holds for $\ell=1.$ It is also easy to see that \eqref{4.4} and \eqref{4.5} hold trivially for $\ell=1.$ Hence Lemma \ref{lem4.1} holds for $k=1.$ Let us now assume that \eqref{4.3}-\eqref{4.7} hold for $\ell\leq j-1$ with $j\leq k.$ We are going to prove that \eqref{4.3}-\eqref{4.5} also hold for $\ell=j,$ which will imply immediately \eqref{4.6}-\eqref{4.7} for $\ell=j.$ In the following, for $n\leq m$ and for the $m$-length multi-index $(l_1,\cdots,l_m)$ such that $$ (l_1,\cdots,l_m)\in L^n_m=\{(l_1,\cdots,l_m)\,|\, l_1<\cdots<l_n,\,l_{n+1}<\cdots<l_{m},\, \{l_1,\cdots,l_m\}=\{1,\cdots,m\}\}, $$ we denote $\alpha^l(n)=(\alpha_{l_1}, \cdots, \alpha_{l_n})$ and $\widehat\alpha^l(m-n)=(\alpha_{l_{n+1}}, \cdots, \alpha_{l_m})$. 
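To illustrate the notation (this small example is added here for the reader's convenience), take $m=2$: then $L^0_2=\{(1,2)\}$, $L^1_2=\{(1,2),(2,1)\}$ and $L^2_2=\{(1,2)\}$, and the sum over $L^n_m$ simply encodes the Leibniz rule for the derivations $\partial_{X_i}$:

```latex
\begin{align*}
\partial_{X_{\alpha_1}}\partial_{X_{\alpha_2}}(fg)
 &= \sum_{n=0}^{2}\,\sum_{(l_1,l_2)\in L^n_2}
    \bigl(\partial_W^{\alpha^l(n)} f\bigr)\,
    \bigl(\partial_W^{\widehat\alpha^l(2-n)} g\bigr)\\
 &= f\,\partial_{X_{\alpha_1}}\partial_{X_{\alpha_2}}g
  + (\partial_{X_{\alpha_1}}f)(\partial_{X_{\alpha_2}}g)
  + (\partial_{X_{\alpha_2}}f)(\partial_{X_{\alpha_1}}g)
  + (\partial_{X_{\alpha_1}}\partial_{X_{\alpha_2}}f)\,g .
\end{align*}
```

The general case is the analogous shuffle expansion of $\partial_W^{\alpha(m)}$ applied to a product, with one increasing block of indices acting on each factor.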
For any positive integer $i\leq j-1$, a direct calculation and the induction assumptions give \begin{equation}\label{4.8} \begin{split} &\bigl\|\partial_W^{\alpha(i+1)}\nabla\partial_W^{\widehat\alpha(j-i-1)} X-\nabla\partial_W^{\alpha(j)} X\bigr\|_{L^\infty_t(W^{1,p})}\\ &=\bigl\|\sum_{m=0}^i\partial_W^{\alpha(m)} (\pa_{X_{\alpha_{m+1}}}\nabla -\nabla\partial_{X_{\alpha_{m+1}}})\partial_W^{\widehat\alpha(j-m-1)} X\bigr\|_{L^\infty_t(W^{1,p})}\\ &=\bigl\|\sum_{m=0}^i\partial_W^{\alpha(m)}\bigl(\nabla X_{\alpha_{m+1}}\cdot\nabla\partial_W^{\widehat\alpha(j-m-1)} X\bigr)\bigr\|_{L^\infty_t(W^{1,p})}\\ &\leq C\sum_{m=0}^{i}\sum_{n=0}^{m}\sum_{(l_1,\cdots,l_m)\in L^n_m} \|\partial_W^{\alpha^l(n)}\nabla X_{\alpha_{m+1}}\|_{L^\infty_t(W^{1,p})} \|\pa_W^{\widehat\alpha^l(m-n)}\nabla\partial_W^{\widehat\alpha(j-m-1)} X\|_{L^\infty_t(W^{1,p})} \\ &\leq C\sum_{m=0}^{i}\sum_{n=0}^{m} J_{n+1}^{n+1} J_{j-n }^{j-n} \leq CJ_j^{j+1}. \end{split} \end{equation} We follow the same lines as above to obtain \begin{equation}\label{4.9} \bigl\|\partial_W^{\alpha(i+1)}\pa_t\partial_W^{\widehat\alpha(j-i-1)} X-\pa_t\partial_W^{\alpha(j)} X\bigr\|_{L^{s_j}_t(W^{1,p})}\leq CJ_j^{j+1}, \end{equation} and \begin{equation}\label{4.10}\begin{split} &\bigl\|\partial_W^{\alpha(i+1)}\nabla^2\partial_W^{\widehat\alpha(j-i-1)} X-\nabla^2 \partial_W^{\alpha(j)} X\bigr\|_{L^\infty_t(L^p)}\\ &\leq C\sum_{m=0}^{i}\sum_{n=0}^{m} \sum_{(l_1,\cdots,l_m)\in L^n_m} \Bigl( \|\partial_W^{\alpha^l(n)}\nabla^2 X_{\alpha_{m+1}}\|_{L^\infty_t(L^{p})} \|\pa_W^{\widehat\alpha^l(m-n)}\nabla\partial_W^{\widehat\alpha(j-m-1)}X\|_{L^\infty_t(L^\infty)} \\ &\qquad\qquad\qquad +\|\partial_W^{\alpha^l(n)}\nabla X_{\alpha_{m+1}}\|_{L^\infty_t(L^{\infty})} \|\pa_W^{\widehat\alpha^l(m-n)}\nabla^2\partial_W^{\widehat\alpha(j-m-1)}X\|_{L^\infty_t(L^p)}\Bigr) \\ &\leq C \sum_{m=0}^{i}\sum_{n=0}^{m} J_{n+1}^{n+1} J_{j-n }^{j-n} \leq CJ_j^{j+1}. 
\end{split} \end{equation} The estimates \eqref{4.8}, \eqref{4.9} and \eqref{4.10} show that \eqref{4.3} holds for $\ell=j$. The same arguments used to achieve \eqref{4.3} for $\ell=j$ yield \eqref{4.4} and \eqref{4.5} for $\ell=j$. This completes the proof of Lemma \ref{lem4.1} by induction. \end{proof} Now we come to the estimate \eqref{bound} for the case $\ell\geq 2$: \begin{prop}\label{prop4.1} {\sl Assume the hypothesis in Theorem \ref{thm1.2}. Then the estimate \eqref{bound} holds true for $\ell\geq 2$: \begin{equation} \label{4.2} J_\ell(t)\leq {\mathcal H}_\ell(t), \quad \forall\ \ell =2,\cdots, k, \quad \forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+. \end{equation} } \end{prop} \begin{proof} The proof is similar to the one of Proposition 6.1 in \cite{LZ}, and we sketch it. For $\ell=2,\ldots,k$, and any multi-index $\alpha(\ell)=(\alpha_1,\ldots,\alpha_{\ell})$, we apply the operator $\pa_W^{\alpha(\ell-1)}$ to the equation \eqref{4.13} for $(\partial_{X_{\alpha_\ell}}v)$ to get \begin{equation} \label{4.14}\begin{split} &\partial_t\partial_W^{\alpha(\ell)} v + v\cdot\nabla \partial_W^{\alpha(\ell)} v - (1+a)\bigl(\Delta\partial_W^{\alpha(\ell)} v - \nabla \partial_W^{\alpha(\ell)} \pi\bigr)\buildrel\hbox{\footnotesize def}\over = F_\ell(v,\pi,\alpha(\ell)).
\end{split} \end{equation} Here $F_\ell(v,\pi,\alpha(\ell))$ is defined inductively by \begin{equation*}\label{4.17} F_\ell(v,\pi,\alpha(\ell))=\pa_{X_{\alpha_1}}F_{\ell-1}(v,\pi,\widehat\alpha(\ell-1)) +F_{1}(\pa_W^{\widehat\alpha(\ell-1)}v,\pa_W^{\widehat\alpha(\ell-1)}\pi,(\alpha_1)), \end{equation*} and hence, in view of the definition of $F_1$ in \eqref{4.13}, we arrive at \begin{equation*}\label{4.18} \begin{split} &F_\ell(v,\pi,\alpha(\ell)) =\sum_{i=0}^{\ell-1}\pa_W^{\alpha(\ell-1-i)} F_{1}(\pa_W^{\widehat\alpha(i)}v,\pa_W^{\widehat\alpha(i)}\pi,(\alpha_{\ell-i})) \\ &=(1+a)\sum_{i=0}^{\ell-1}\pa_W^{\alpha(\ell-1-i)} \Bigl( \nabla X_{\alpha_{\ell-i}}\cdot\nabla \pa_W^{\widehat\alpha(i)}\pi - \Delta X_{\alpha_{\ell-i}}\cdot\nabla \pa_W^{\widehat\alpha(i)}v -2\nabla X_{\alpha_{\ell-i}}:\nabla^2 \pa_W^{\widehat\alpha(i)}v \Bigr). \end{split} \end{equation*} By use of Lemma \ref{lem4.1}, we arrive at the following estimate \begin{equation}\label{4.19} \begin{split} \| F_{\ell}(v,\pi,\alpha(\ell))\|_{L^{r_{\ell}}_t(L^p)} &\leq CJ_{\ell-1}^{\ell+1} \\ &+C\Bigl(\int_0^t \bigl(\|\nabla v(t')\|_{W^{1,p}}^{r_\ell} + \| \nabla\pi(t')\|_{L^p}^{r_\ell} \bigr) \|\nabla \partial_W^{\ell-1}W(t')\|_{W^{1,p}}^{r_\ell}\Bigr)^{\f1{r_\ell}}. \end{split} \end{equation} Similarly to the proof of \eqref{bound:pi1}, by an inductive argument (following the method of the proof of Lemma 6.3 in \cite{LZ}), we obtain \begin{equation}\label{4.20}\begin{split} \| \nabla\p_W^\ell&\pi\|_{L^{r_{\ell}}_t(L^p)} \leq C J_{\ell-1}^{\ell+2} +|\eta|\|\D\p_W^\ell v\|_{L^{r_\ell}_t(L^p)}\\ &\quad+\Bigl(\int_0^t \bigl(\|\nabla v(t')\|_{W^{1,p}}^{r_\ell} + \|(v\otimes\nabla v(t'), \nabla\pi(t'))\|_{L^p}^{r_\ell} \bigr) \|\nabla \partial_W^{\ell-1}W(t')\|_{W^{1,p}}^{r_\ell}\Bigr)^{\f1{r_\ell}}.
\end{split} \end{equation} Now, noticing that $(\partial_{X_{i,0}}^\ell v_0 )\in W^{1-\frac{\ell}{k}\varepsilon, p}$, we make use of Lemmas \ref{lem3.1} and \ref{lem3.2}, the estimates \eqref{4.19} and \eqref{4.20}, and Proposition \ref{prop3.1} to achieve \eqref{4.2} by an induction argument. \end{proof} \smallskip \noindent {\bf Acknowledgments.} This work was done when we were visiting Morningside Center of the Academy of Mathematics and Systems Sciences, CAS. We would like to thank Professor Ping Zhang for introducing this interesting problem to us. \medskip
\section{PID detectors and performance} The ALICE experiment exploits almost all known techniques for particle identification (PID). The particle identification capabilities of the Inner Tracking System (ITS) and the Time Projection Chamber (TPC) are based on the specific energy loss per unit path length of a particle, which, for a given momentum, depends only on its charge and rest mass. Thus, the simultaneous measurement of track momentum (or rigidity) and signal amplitude in a sensitive detector volume makes it possible to identify particles. The measured mean energy deposit of a track is denoted as d$E$/d$x$ hereafter. In practice, it can be described with parameterizations of the well-known Bethe-Bloch formula. Particle identification via d$E$/d$x$ needs to be supplemented by additional information at momenta where the Bethe-Bloch curves for different particle species cross. Such tracks have momenta high enough to reach the Time-Of-Flight detector (TOF), which is used to measure the particle's velocity and, together with a momentum measurement, to determine its mass. The reach of the hadron identification can be extended by the detection of Cherenkov radiation in the High Momentum Particle Identification Detector (HMPID) and by a statistical analysis of the d$E$/d$x$ measurement on the relativistic rise in the TPC. High-momentum electrons are identified via the detection of transition radiation. In the following section, the relevant PID detectors in the central barrel and their performance are described in order of increasing distance from the beam pipe. \begin{figure}[htbp] \centering \includegraphics[width=1.\textwidth]{FIGURE1_MERGED} \caption{d$E$/d$x$ spectrum for the ITS (left), track velocity $\beta$ vs.
momentum $p$ for the TOF (middle), and signal amplitude for electrons and pions in the TRD (right).} \label{figAllDetectorsITSTOF} \end{figure} \smallskip {\bf Inner Tracking System (ITS).} Silicon Drift Detectors (SDD) and Silicon Strip Detectors (SSD), which form the two intermediate and the two outer layers of the ITS, respectively, provide analogue read-out of up to four samples for a truncated-mean calculation of the d$E$/d$x$. A resolution of $\sigma_{dE/dx} \approx 10-15\% $ is thus achieved (see fig.~\ref{figAllDetectorsITSTOF} left). The particle identification in the ITS combined with stand-alone tracking makes it possible to identify pions down to a minimum momentum of $p_{\rm t} \approx 100 \; \mathrm{MeV/}c$, which reduces the systematic error of yield and $\langle p_{\rm t}\rangle$ measurements due to the extrapolation to $p_{\rm t} = 0$. \smallskip {\bf Time Projection Chamber (TPC).} The ALICE TPC with its 557$\,$568 readout channels \cite{Alme:2010ke} provides up to 159 ionization samples in a gas mixture of Ne and CO$_{2}$ (90\%/10\%). A truncated mean is used to suppress the Landau tail, which results in a Gaussian distribution with a resolution of $\sigma_{dE/dx} \approx 5\% $. Figure \ref{figDeDx} (left) shows the measured d$E$/d$x$ versus the rigidity $R = p/z$, where $p$ corresponds to the track momentum and $z$ to the charge number. The lines show a parameterization of the Bethe-Bloch curve. The large dynamic range allows the detection of particles with an average energy loss of up to 26 times the minimum-ionizing value and thereby provides a clear identification of (anti-)nuclei. \smallskip {\bf Transition Radiation Detector (TRD).} Electron identification for momenta above 1 GeV/$c$ is achieved by the detection of transition radiation (see fig.~\ref{figAllDetectorsITSTOF} right).
Transition radiation is produced by relativistic charged particles when they cross the many interfaces between media of different dielectric constants in the radiator, and it is detected in the high-$Z$ gas mixture of Xe and CO$_{2}$ (85\%/15\%). \smallskip {\bf Time-Of-Flight Detector (TOF).} The TOF detector is composed of 1638 multi-gap resistive plate chambers which provide an intrinsic resolution of approximately $80~{\rm ps}$. The overall time resolution for particle identification also depends on the start-time ($t_0$) uncertainty of the event. This results in a resolution of $\sigma_{TOF} = \sqrt{\sigma_{intr}^{2} + \sigma_{t0}^{2}} \approx 86~{\rm ps}$ for Pb--Pb collisions and $\sigma_{TOF} \approx 120~{\rm ps}$ for pp collisions. Hence a $2\sigma$-separation between protons and kaons up to 5 GeV/$c$ can be achieved in the high-multiplicity environment (see fig.~\ref{figAllDetectorsITSTOF} middle). \smallskip {\bf High Momentum Particle Identification Detector (HMPID).} The HMPID is a proximity-focusing Ring Imaging Cherenkov detector with a liquid C$_{6}$F$_{14}$ radiator. It provides a separation of kaons and protons up to 5 GeV/$c$. \section{Single track and statistical particle identification} In regions of clear separation, an identification of individual tracks is feasible, e.g. by assigning to a track the particle type whose expected response value lies closest to the measured one. In practice, for detectors with a Gaussian response function this distance is usually specified in multiples of the resolution (so-called n$\sigma$-cuts). These methods are also used {\it indirectly}, especially for the removal of background in invariant mass analyses. The use of 3$\sigma$-cuts leads in many cases to a significant rejection of background without loss of efficiency. For the direct extraction of spectra, statistical unfolding methods can be applied in regions of only limited separation.
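The time-of-flight separation quoted for the TOF can be illustrated with a back-of-the-envelope sketch. The flight path of $3.7$~m and the rest masses used below are assumptions introduced for illustration (they are not taken from the text); only $\sigma_{TOF} \approx 86$~ps is quoted above.

```python
import math

# Minimal sketch of an n-sigma kaon-proton TOF separation estimate.
# ASSUMPTIONS: flight path L = 3.7 m and the rest masses below are
# illustrative values, not taken from the text.
C = 0.299792458                          # speed of light in m/ns
M_KAON, M_PROTON = 0.493677, 0.938272    # rest masses in GeV/c^2

def time_of_flight(p, m, path_m=3.7):
    """Expected time of flight in ns for momentum p (GeV/c) and mass m."""
    beta = p / math.sqrt(p * p + m * m)
    return path_m / (beta * C)

def n_sigma_separation(p, sigma_tof_ps=86.0):
    """Kaon-proton time difference in units of the TOF resolution."""
    dt_ps = abs(time_of_flight(p, M_PROTON) - time_of_flight(p, M_KAON)) * 1e3
    return dt_ps / sigma_tof_ps

for p in (2.0, 3.0, 5.0):
    print(f"p = {p} GeV/c: {n_sigma_separation(p):.1f} sigma")
```

As expected, the separation shrinks with momentum, since both species approach $\beta \to 1$.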
The $p_{\rm t}$-reach of identified hadron spectra is further extended using the relativistic rise in the ionization measurement of the TPC. \section{Topological particle identification} Weak decays of strange particles with a sufficiently long lifetime and $\gamma$-conversions can be identified via their characteristic decay topology \cite{Aamodt:2011zz}. This can be used to perform precise cross-checks between independent techniques. Spectra of kaons are obtained from five independent techniques: via the measurement of d$E$/d$x$, time-of-flight, Cherenkov radiation, as well as the V$^{0}$-type decay K$^{0}_{s} \rightarrow \pi^{+}\pi^{-}$, and the kink topology K$^{+} \rightarrow \mu^{+} \nu_{\mu}$. Figure \ref{figSpectra} illustrates the good agreement which is found for charged and neutral kaons. \begin{figure}[htbp] \centering \subfigure{ \label{figPionsPerDetector} \includegraphics[width=0.44\textwidth]{PionsPerDetector} } \subfigure{ \label{figKaonComp} \includegraphics[width=0.44\textwidth]{KaonComp} } \caption{Combined pion spectra of the different analyses for several centralities (left) and comparison of charged and neutral kaons in Pb--Pb collisions (right).} \label{figSpectra} \end{figure} \vspace{-10mm} \section{Physics Example I: Spectra of $\pi^{\pm}$, ${\rm K^{\pm}}$, ${\rm p}$, and $\bar{{\rm p}}$} The spectra of $\pi^{\pm}$, K$^{\pm}$, ${\rm p}$, and $\bar{{\rm p}}$ over a wide $p_{\rm t}$-range are extracted based on the methods outlined above \cite{Spectra900,ProceedingsMichele,ProceedingsMarek}. Different strategies are followed by the individual detector projects: in the ITS and stand-alone TOF analyses, the distribution of the response function for tracks within $|y| <$ 0.5 is sliced in bins of $p_{\rm t}$ and fitted with a superposition of Gaussian-like functions in order to extract the yields of the different species; this statistical approach allows a maximal $p_{\rm t}$-coverage of the individual detectors.
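The statistical approach of fitting a measured response distribution with a superposition of per-species shapes can be sketched with a toy example; the templates, bin ranges and yields below are all invented for illustration and do not correspond to any detector.

```python
import numpy as np

# Toy statistical yield extraction: a measured distribution is modeled as a
# superposition of known per-species response templates (two Gaussians with
# invented means and widths); the yields are recovered by least squares.
edges = np.linspace(-5.0, 5.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss_template(mu, sigma):
    t = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    return t / t.sum()              # normalized template (unit integral)

t_a = gauss_template(-1.0, 1.0)     # response template of species A
t_b = gauss_template(+1.5, 1.0)     # response template of species B

true_yields = np.array([8000.0, 2000.0])
measured = true_yields[0] * t_a + true_yields[1] * t_b

# Solve measured = T @ yields for the per-species yields
T = np.column_stack([t_a, t_b])
fitted, *_ = np.linalg.lstsq(T, measured, rcond=None)
print(fitted)   # close to the true yields [8000, 2000]
```

With overlapping templates the individual tracks cannot be classified, yet the total yields per species are still recoverable, which is the essence of the statistical approach.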
The combined TPC-TOF analysis is based on a $3\sigma$-cut in the TPC at low momenta and in the TOF towards intermediate momenta, where track-by-track PID is still possible. This makes it possible to histogram the distance of closest approach to the primary vertex for each particle type. The raw yields of primary particles and of secondary particles from weak decays or interactions with the detector material are then directly extracted via a fit of the corresponding Monte Carlo templates. Figure \ref{figSpectra} shows the combined pion spectrum of the different analyses. \section{Physics Example II: Observation of the ${\bf^{4}\overline{\bf{He}}}$-nucleus} \begin{figure}[htbp] \centering \subfigure{ \label{figUntriggeredDeDx} \includegraphics[width=0.53\textwidth]{untriggeredDeDx} } \subfigure{ \label{figTriggeredDeDx} \includegraphics[width=0.4\textwidth]{AntiAlphaPlot} } \caption{d$E$/d$x$ spectrum of the ALICE TPC of 2.2 million Pb--Pb events (left) and of the pre-selected tracks (by the offline trigger) in the full statistics (right). } \label{figDeDx} \end{figure} \noindent The ALICE experiment also observed four candidates of the ${^{4}\overline{\rm{He}}}$-nucleus, the measurement of which was recently published by the STAR collaboration \cite{Agakishiev:2011ib}. In total, 17.8 million nuclear collisions recorded in the heavy-ion run of November 2010 were analyzed with an offline trigger selecting all ${^{3}\overline{\rm{He}}}$-nuclei or heavier. Figure \ref{figDeDx} (right) shows the d$E$/d$x$ versus rigidity distribution for negative particles in the region where the bands of ${^{3}\overline{\rm{He}}}$ and ${^{4}\overline{\rm{He}}}$ are clearly visible. Below a rigidity of $p/z \approx 2.2 \; {\rm GeV}/c$ two candidates are clearly identified based on the d$E$/d$x$ information alone. Above this rigidity, the d$E$/d$x$ identification of the candidate tracks must be combined with the mass determined with the TOF system following \begin{equation} \label{eqMassTOF} m^{2} / z^{2} = R^{2} / (\gamma^{2} - 1) \; .
\end{equation} \noindent The inset in figure \ref{figDeDx} (right) shows the ${m^{2} \over z^{2}}$ distribution for all tracks within a 2$\sigma$-band around the expected d$E$/d$x$ for ${^{4}\overline{\rm{He}}}$. The four anti-alpha candidates are highlighted in red in both the ${m^{2} \over z^{2}}$ and the d$E$/d$x$ versus rigidity plot. The d$E$/d$x$ cut selects particles such that only tracks with $z = 2$ are contained in the sample, which removes the ambiguity with deuterons (see eq. \ref{eqMassTOF}). \vspace{-5mm} \section*{References}
\section{Introduction} \label{sec:intro} The setting-up of robust numerical methods to solve complex systems of partial differential equations has become a key issue in applied mathematics and engineering, driven by the increasing use of numerical simulation in both research and industry. In industry in particular, virtual testing has become a short-term aim, with the objective of replacing expensive experimental studies and validations by numerical simulations, even for the certification of large structures such as planes and bridges. Thus, one key point of the numerical methods to develop is the \emph{verification} of computations, which makes it possible to guarantee that the computed solution is sufficiently close to that of the original continuum mechanics model. This topic of numerical analysis has been the subject of many studies over the last decades. Three main classes of error estimators have been developed, based either on equilibrium residuals \cite{babuska:78:resEq}, flux projection \cite{zienkiewicz:87:zz1} or error in constitutive law \cite{ladeveze:75:erreur}. An overview of those various methods can be found in \cite{ladeveze:04:mastCalculations}. Another key point of numerical methods is their ability to quickly provide solutions to large (nonlinear) systems. The most classical answer to this issue is to use domain decomposition methods in order to take advantage of the parallel hardware architecture of recent clusters and grids. In engineering, non-overlapping domain decomposition methods are mostly employed, such as the well-known FETI \cite{FARHAT:1994:ADV} or BDD \cite{mandel:93:ddm}. An overview of the main approaches related to non-overlapping domain decomposition can be found in \cite{GOSSELET.2007.1}. We aim to provide fully integrated adaptive strategies to compute large structural mechanics problems with certified quality. To do so, our current approach is to explore bidirectional interactions between domain decomposition and a posteriori error estimation.
Our developments are based both on the error in constitutive relation, used to measure the quality of our results and to drive mesh refinement, and on a generic vision of non-overlapping domain decomposition methods, which enables high-performance computing. This paper focuses on the estimation of the global error in constitutive relation in order (among others) to study how it is influenced by the convergence error of the domain decomposition solver, which is linked to the non-satisfaction of the interface equations (continuity of displacements and balance of forces). To do so we propose a strategy to build, in parallel and during the iterations, displacement and stress fields which are kinematically admissible (KA) and statically admissible (SA) on the whole structure. We face two main difficulties. First, since before convergence the interface fields do not possess the classical properties of discretized fields (continuity of displacements and weak equilibrium), the recovery of admissible displacements and stresses requires some preprocessing. Second, since the computation of statically admissible fields is an operation which cannot be conducted independently on each element (in some methods it can even be a large-bandwidth operation), classical recovery methods \cite{ladeveze:leguillon:81:erdc:sarecovery, ladeveze:rougeot:1997:erdc:sarecovery, pares:diez:huerta:2006:fluxfreesubdomain, gallimard:2008:erdc:sarecovery, mointinhoDeAlmeida:maunder:2009:sarecoverypu, ladeveze:chamoin:florentin:2009:erdc:sarecovery} would require inter-subdomain communications. Our generic method to build continuous displacement and balanced traction fields for both primal and dual approaches of non-overlapping domain decomposition is presented throughout this paper. It will be shown that the properties of the preconditioners involved in domain decomposition solvers make this reconstruction virtually costless, and that an error estimator can then be computed in a fully parallel way.
This paper is organized as follows. Section \ref{sec:basics} recalls the general framework related to our upcoming developments, mainly the estimation of the error in constitutive equation and the use of domain decomposition methods. Section \ref{sec:errordd} shows how the problem of error estimation in a substructured context can be brought back to the computation of nodal displacement and traction fields which are admissible in a discrete sense. Sections \ref{sec:bddrecovery} and \ref{sec:fetirecovery} describe how to obtain these fields without inter-subdomain exchanges when using classical primal (BDD) and dual (FETI) domain decomposition methods with good preconditioners. Section \ref{sec:numeric} presents numerical assessments, first to validate the parallel recovery procedure, then to show that a good estimation can be obtained well before the iterative domain decomposition solver has converged. Finally, Section \ref{sec:conclusions} concludes this paper. \section{Framework of the study} \label{sec:basics} \subsection{Reference mechanical problem} \label{sec:crebasicsrefpb} Let us consider the static equilibrium of a structure which occupies the open domain $\Omega\subset\mathbb{R}^d$ and which is submitted to given body forces $f$, to given traction forces $g$ on $\partial_f\Omega$ and to given displacements $u_0$ on the complementary part $\partial_u\Omega\neq\emptyset$. We assume that the structure undergoes small perturbations and that the material is linear elastic, characterized by Hooke's tensor $\hooke$. Let $u$ be the unknown displacement field, $\varepsilon(u)$ the symmetric part of its gradient, and $\sigma$ the Cauchy stress tensor.
\begin{figure}[ht]\centering \includegraphics[width=.5\textwidth]{continuum-domain2}\caption{Domain $\Omega$, subdomain $\omega$ and boundaries}\label{fig:patate} \end{figure} Let $\omega \subset \Omega$ be an open subset of $\Omega$, $\partial_f \omega=\partial \omega \cap \partial_f \Omega$, $\partial_u \omega=\partial \omega \cap \partial_u \Omega$ and $\Gamma=\partial\omega\setminus(\partial_u \omega\cup\partial_f \omega)$ (see Figure \ref{fig:patate}). We introduce two affine subspaces and one positive form: \begin{itemize} \item Subspace of kinematically admissible fields \begin{equation}\label{eq:KA} \KA{\omega}=\left\{ u\in \left(\mathrm{H}^1(\omega)\right)^d,\ \trace (u) = u_0 \text{ on }\partial_u\omega \right\} \end{equation} where $\trace$ is the trace operator. \item Subspace of statically admissible fields \begin{multline}\label{eq:SA} \SA{\omega} =\Bigg\lbrace \tau\in \left(\mathrm{L}^2(\omega)\right)^{d\times d}, \tau \text{ symmetric}, \ \forall u^*\in \KAoo{\omega},\ \\ \int_\omega \tau:\varepsilon(u^*) d\omega = \int_\omega f.u^* d\omega + \int_{\partial_f\omega} g.u^* dS \Bigg\rbrace \end{multline} \begin{equation*} \text{where } \KAoo{\omega}=\left\{ u\in \left(\mathrm{H}^1(\omega)\right)^d,\ \trace (u) = 0 \text{ on }\partial_u\omega\cup\Gamma \right\} \end{equation*} \item Measure of the non-verification of the constitutive equation \cite{ladeveze:75:erreur} \begin{equation}\label{eq:ecr} \ecr{u,\sigma}{\omega} = \enernorm{\sigma-\hooke:\strain{u}}{\omega} \end{equation} where ${\enernorm{x}{\omega}}=\displaystyle \sqrt{\int_\omega \left( x: {\hooke}^{-1} :x \right)d\omega}$ \end{itemize} The mechanical problem set on $\Omega$ can be formulated as: \begin{center} Find $\left(u_{ex},\sigma_{ex}\right)\in\KA{\Omega}\times\SA{\Omega}$ such that $\ecr{u_{ex},\sigma_{ex}}{\Omega}=0$ \end{center} \subsection{Finite element approximation for the global problem} Let $\Omega_h$ be a tessellation of $\bar{\Omega}$ to which we associate a finite
dimensional subspace $\KAh{\Omega}$ of $\KA{\Omega}$. The classical finite element displacement approximation consists in searching \begin{equation} \begin{aligned} u_h&\in\KAh{\Omega}\\ \sigma_h&=\hooke:\varepsilon(u_h) \\ \int_\Omega \sigma_h:\varepsilon(u_h^*) d\Omega &= \int_\Omega f.u_h^* d\Omega + \int_{\partial_f\Omega} g.u_h^* dS, \qquad \forall u_h^*\in \KAhoo{\Omega} \end{aligned} \end{equation} After introducing the $d\times N_{dof}$ matrix $\shapev$ of shape functions which form a basis of $\KAh{\Omega}$ and the vector of nodal unknowns $\ensuremath{\mathbf{u}}$ (of size $N_{dof}$, number of degrees of freedom) so that $u_h=\shapev \ensuremath{\mathbf{u}}$, the classical finite element method leads to the well-known linear system: \begin{equation}\label{eq:globalFE} \ensuremath{\mathbf{K}} \ensuremath{\mathbf{u}} = \ensuremath{\mathbf{f}} \end{equation} where $\ensuremath{\mathbf{K}}$ is the (symmetric positive definite) stiffness matrix of domain $\Omega_h$ and $\ensuremath{\mathbf{f}}$ is the vector of generalized forces. \subsection{A posteriori error estimator} The finite element approximation $(u_h,\sigma_h)$ satisfies $u_h\in\KA{\Omega}$ and $\ecr{u_h,\sigma_h}{\Omega}=0$ but $\sigma_h\notin\SA{\Omega}$. The error in constitutive relation consists in deducing from $(u_h,\sigma_h)$ an admissible displacement-stress pair $(\admiss{u}_h,\admiss{\sigma}_h)\in\KA{\Omega}\times\SA{\Omega}$ in order to measure the residual on the constitutive equation \eqref{eq:ecr} $\ecr{\admiss{u}_h,\admiss{\sigma}_h}{\Omega}\geqslant 0$. 
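As a toy illustration of the residual measure \eqref{eq:ecr}, the following one-dimensional scalar sketch evaluates the energy norm of the constitutive mismatch $\sigma - E\,u'$ for a bar of modulus $E$; the modulus and the fields are invented for illustration.

```python
import numpy as np

# Minimal 1D sketch of the error in constitutive relation: for a bar with
# modulus E, the measure reads  e_CR^2 = \int (sigma - E u')^2 / E dx,
# evaluated here by midpoint quadrature on a uniform grid. All data invented.
E = 210e9                       # elastic modulus (assumed value)
x = np.linspace(0.0, 1.0, 1001)
xm = 0.5 * (x[:-1] + x[1:])     # midpoint quadrature abscissae
dx = np.diff(x)

def ecr(strain, stress):
    """Energy norm of the constitutive residual stress - E*strain."""
    residual = stress - E * strain
    return np.sqrt(np.sum(residual**2 / E * dx))

strain = np.cos(np.pi * xm)               # some strain field
stress_exact = E * strain                 # satisfies the constitutive law
stress_perturbed = stress_exact + 1e6 * np.sin(np.pi * xm)

print(ecr(strain, stress_exact))          # 0: the pair is exactly compatible
print(ecr(strain, stress_perturbed))      # > 0: measures the violation
```

The measure vanishes exactly when the pair satisfies the constitutive law, which is the defining property used to characterize the exact solution above.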
Using the well-known Prager-Synge theorem, it can be proved that \begin{equation*} \strainnorm{\varepsilon(u_{ex})-\varepsilon(\admiss{u}_h)}{\structure}^2+\enernorm{\sigma_{ex}-\admiss{\sigma}_h}{\structure}^2 = \left(\ecr{\admiss{u}_h,\admiss{\sigma}_h}{\structure}\right)^2 \end{equation*} Hence, the evaluation of the error in constitutive relation $\ecr{\admiss{u}_h,\admiss{\sigma}_h}{\structure}$ for any admissible pair ($\admiss{u}_h$, $\admiss{\sigma}_h$) provides a guaranteed upper bound of the global error \begin{equation} \label{eq:crebasicspragersynge} \strainnorm{\varepsilon(u_{ex})-\varepsilon(\admiss{u}_h)}{\structure}\leqslant \ecr{\admiss{u}_h,\admiss{\sigma}_h}{\structure} \end{equation} Since $\KAh{\Omega}$ is a subspace of $\KA{\Omega}$, the construction of an admissible displacement field $\admiss{u}_h$ is straightforward: it can be taken equal to $u_h$. On the other hand, as $\sigma_h$ is not statically admissible, the construction of an admissible stress field $\admiss{\sigma}_h\in\SA{\Omega}$ is a crucial point which has already been widely studied in the literature. A first solution is to use a dual formulation of the reference problem \cite{beckers:1998} to compute $\admiss{\sigma}$ from scratch. Unfortunately, building a subspace of $\SA{\Omega}$ is a complex task and most authors prefer to post-process a statically admissible field from the field $\sigma_h$ obtained by a displacement formulation. Classical methods are the element equilibration techniques \cite{ladeveze:leguillon:81:erdc:sarecovery,ladeveze:rougeot:1997:erdc:sarecovery}, which have been improved by the use of the concept of partition of unity, leading to \cite{gallimard:2008:erdc:sarecovery,mointinhoDeAlmeida:maunder:2009:sarecoverypu,ladeveze:chamoin:florentin:2009:erdc:sarecovery}, and the flux-free method \cite{pares:diez:huerta:2006:fluxfreesubdomain}.
In most cases they involve the computation of efforts on ``star-patches'', the sets of elements sharing one node, for each node of the mesh. Though rather simple, these computations are numerous and thus expensive. In the following, we denote by $\mathcal{F}_h$ the algorithm which has been chosen to build an admissible stress field $\hat{\sigma}_h$. Whatever the choice, the algorithm takes as input not only the finite element stress field $\sigma_h$ but also the continuous representation of the imposed forces $(f, g)$. \begin{equation*} \hat{\sigma}_h=\mathcal{F}_h(\sigma_h, f, g) \in \SA{\Omega} \end{equation*} The algorithm we have used for our applications is the one proposed in \cite{ladeveze:rougeot:1997:erdc:sarecovery}, using a polynomial basis of degree three higher when solving the local problems on elements \cite{babuska:94:vpeena}. \subsection{Substructured formulation} Let us consider a decomposition of domain $\Omega$ into open subsets $(\Omega\ensuremath{^{(s)}})_{1\leqslant s\leqslant N_{sd}}$ ($N_{sd}$ is the number of subdomains) so that $\Omega\ensuremath{^{(s)}}\cap\Omega^{(s')}=\emptyset$ for $s\neq s'$ and $\bar{\Omega}=\cup_s \bar{\Omega}\ensuremath{^{(s)}}$. Let $u\ensuremath{^\square}=(u\ensuremath{^{(s)}})_s$; we define the global assembling operator $\assemg$: \begin{equation} \begin{aligned} u= \assemg (u\ensuremath{^\square}) \Leftrightarrow u_{|\Omega\ensuremath{^{(s)}}}=u\ensuremath{^{(s)}} \end{aligned} \end{equation} In order to reformulate the mechanical problem on the substructured configuration, we need to specify the conditions that should be satisfied at the boundary between subdomains $\Gamma^{(ss')}=\partial\Omega\ensuremath{^{(s)}}\cap\partial\Omega^{(s')}$.
We have the fundamental properties: \begin{equation}\label{eq:KAss} \assemg (u\ensuremath{^\square}) \in \KA{\Omega} \Leftrightarrow \left\lbrace \begin{array}{l} u\ensuremath{^{(s)}}\in \KA{\Omega\ensuremath{^{(s)}}},\ \forall s \\ \trace(u\ensuremath{^{(s)}})=\trace(u^{(s')}) \text{ on } \Gamma^{(ss')},\ \forall(s,s')\end{array}\right. \end{equation} \begin{equation}\label{eq:SAss} \assemg (\sigma\ensuremath{^\square}) \in \SA{\Omega} \Leftrightarrow \left\lbrace \begin{array}{l} \sigma\ensuremath{^{(s)}}\in \SA{\Omega\ensuremath{^{(s)}}},\ \forall s \\ \sigma\ensuremath{^{(s)}}.n\ensuremath{^{(s)}}+\sigma^{(s')}.n^{(s')}=0 \text{ on }\Gamma^{(ss')},\ \forall(s,s')\end{array}\right. \end{equation} In other words, in order to be admissible on the whole domain $\Omega$, not only do the fields need to be admissible in a local sense (independently on each $\Omega\ensuremath{^{(s)}}$), but they also need to satisfy the interface conditions, namely displacement continuity and traction balance (action-reaction principle). \subsection{Finite element approximation for the substructured problem} We assume that the tessellation of $\bar{\Omega}$ and the substructuring are conforming so that (i) each element only belongs to one subdomain and (ii) nodes match on the interfaces. Each degree of freedom is either located inside a subdomain (subscript $i$) or on its boundary $\Gamma\ensuremath{^{(s)}}=\cup_{s'}\Gamma^{(ss')}$ (subscript $b$) where it is shared with at least one neighboring subdomain. Let $\ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}}$ be the vector of unknown efforts imposed on the interface of subdomain $\Omega\ensuremath{^{(s)}}_h$ by its neighbors.
The finite element problem \eqref{eq:globalFE} can be written highlighting the contributions of subdomains: \begin{equation}\label{eq:inter_bool} \forall s,\ \ensuremath{\mathbf{K}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}\ensuremath{^{(s)}}= \ensuremath{\mathbf{f}}\ensuremath{^{(s)}} + {\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T\ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}} \text{ with } \left\{\begin{array}{l} \sum\limits_s \ensuremath{\mathbf{A}}\ensuremath{^{(s)}} \ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}} = \mathbf{0}\\ \sum\limits_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}} \ensuremath{\mathbf{u}}_b\ensuremath{^{(s)}} = \mathbf{0} \end{array}\right. \end{equation} where $\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}$ is the discrete trace operator ($\ensuremath{\mathbf{u}}\ensuremath{^{(s)}}_b=\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}\ensuremath{^{(s)}}$) and where $\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}$ and $\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}$ are assembling operators so that $\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}$ enables to formulate the mechanical equilibrium of interfaces \eqref{eq:SAss} and $\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}$ enables to formulate the continuity of displacements \eqref{eq:KAss} (in the case of two subdomains, we have $\sum_s\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\ensuremath{\boldsymbol{\lambda}}_b=\ensuremath{\boldsymbol{\lambda}}_b^{(1)}+\ensuremath{\boldsymbol{\lambda}}_b^{(2)}=\mathbf{0}$ and $\sum_s\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}_b\ensuremath{^{(s)}}=\ensuremath{\mathbf{u}}_b^{(1)}-\ensuremath{\mathbf{u}}_b^{(2)}=\mathbf{0}$, see Fig.~\ref{fig:omegef:2} for a less trivial example and \cite{GOSSELET.2007.1} for a more extensive description of all operators).
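The two-subdomain case described above can be sketched with signed boolean assembling operators; a minimal numpy illustration (interface of two matching dofs, all values invented) which also exhibits the orthogonality $\sum_s \underline{\mathbf{A}}^{(s)}{\mathbf{A}^{(s)}}^T=\mathbf{0}$:

```python
import numpy as np

# Two subdomains sharing an interface of two matching dofs (toy setup).
# A   assembles interface tractions:     sum_s A_s @ lam_s = lam_1 + lam_2
# A_  assembles interface displacements: sum_s A__s @ u_s  = u_1 - u_2
I = np.eye(2)
A  = [I, I]          # traction assembling operators A^(1), A^(2)
A_ = [I, -I]         # signed displacement assembling operators

lam = [np.array([3.0, -1.0]), np.array([-3.0, 1.0])]   # balanced tractions
u_b = [np.array([0.5, 2.0]),  np.array([0.5, 2.0])]    # continuous traces

balance    = sum(a @ l for a, l in zip(A, lam))    # action-reaction: zero
continuity = sum(a @ u for a, u in zip(A_, u_b))   # matching traces: zero
orthogonality = sum(a_ @ a.T for a_, a in zip(A_, A))

print(balance, continuity)   # both zero vectors
print(orthogonality)         # 2x2 zero matrix
```

The signed operator pair encodes both interface conditions at once, and their mutual orthogonality is what the sketch verifies numerically.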
One fundamental property of assembling operators is their orthogonality \begin{equation}\label{ortho_assem} \sum_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}{\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T=\mathbf{0} \end{equation} Note that the equilibrium of subdomain $\Omega\ensuremath{^{(s)}}$ can also be written as: \begin{equation}\label{eq:equi_sd} \begin{pmatrix} \ensuremath{\mathbf{K}}_{ii}\ensuremath{^{(s)}} & \ensuremath{\mathbf{K}}_{ib}\ensuremath{^{(s)}} \\ \ensuremath{\mathbf{K}}_{bi}\ensuremath{^{(s)}} & \ensuremath{\mathbf{K}}_{bb}\ensuremath{^{(s)}} \end{pmatrix} \begin{pmatrix} \ensuremath{\mathbf{u}}_{i}\ensuremath{^{(s)}} \\ \ensuremath{\mathbf{u}}_{b}\ensuremath{^{(s)}} \end{pmatrix} = \begin{pmatrix} \ensuremath{\mathbf{f}}_{i}\ensuremath{^{(s)}} \\ \ensuremath{\mathbf{f}}_{b}\ensuremath{^{(s)}} \end{pmatrix} +\begin{pmatrix} \mathbf{0}_{i}\ensuremath{^{(s)}} \\ \ensuremath{\boldsymbol{\lambda}}_{b}\ensuremath{^{(s)}} \end{pmatrix} \end{equation} or in an equivalent condensed form: \begin{equation}\label{eq:equi_sd_cond} \ensuremath{\mathbf{S}}\ensuremath{^{(s)}} \ensuremath{\mathbf{u}}_{b}\ensuremath{^{(s)}} = \ensuremath{\mathbf{b}}\ensuremath{^{(s)}} + \ensuremath{\boldsymbol{\lambda}}_{b}\ensuremath{^{(s)}} \end{equation} with \begin{equation}\label{eq:oper_sd_cond} \begin{aligned} \ensuremath{\mathbf{S}}\ensuremath{^{(s)}} & =\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{bb}-\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{bi}{\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ii}}^{-1}\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ib}\\ \ensuremath{\mathbf{b}}\ensuremath{^{(s)}} & =\ensuremath{\mathbf{f}}_b\ensuremath{^{(s)}} -\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{bi}{\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ii}}^{-1}\ensuremath{\mathbf{f}}_i\ensuremath{^{(s)}}\\ \end{aligned} \end{equation} where $\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}$ is the Schur complement and $\ensuremath{\mathbf{b}}\ensuremath{^{(s)}}$ is the condensed right-hand
side. \section{A posteriori error estimator in substructured context}\label{sec:errordd} The key point for the efficient evaluation of the error in constitutive relation in a substructured context (without overlapping) is to define admissible pairs ($\admiss{u}\ensuremath{^{(s)}}_h$,$\admiss{\sigma}\ensuremath{^{(s)}}_h$) $\in \KA{\Omega\ensuremath{^{(s)}}} \times\SA{\Omega\ensuremath{^{(s)}}}$ on each subdomain so that the associated assembled pair is admissible for the reference problem $(\assemg (\admiss{u}_h\ensuremath{^\square}),\assemg (\admiss{\sigma}_h\ensuremath{^\square}))\in\KA{\Omega}\times\SA{\Omega}$. Due to the absence of overlap, the additive structure of the associated error in constitutive relation leads to a fully parallel evaluation of the a posteriori error estimator: \begin{equation*} \left(\ecr{\assemg (\admiss{u}_h\ensuremath{^\square}),\assemg (\admiss{\sigma}_h\ensuremath{^\square})}{\structure}\right)^2 = \sum_s \left(\ecr{\admiss{u}\ensuremath{^{(s)}}_h,\admiss{\sigma}\ensuremath{^{(s)}}_h}{\Omega\ensuremath{^{(s)}}}\right)^2 \end{equation*} The application of a classical recovery strategy to compute admissible fields raises two difficulties in a substructured context. First, the star-patches cannot be employed on the boundary nodes without assuming communication between subdomains. Though these exchanges would remain limited, we propose an alternative strategy to achieve full parallelism without impairing the properties of the error in constitutive relation. Second, in order to solve the substructured problem \eqref{eq:inter_bool}, parallel strategies consist in using iterative solvers which are based on the loosening of at least one of the interface conditions, which is only verified (up to a certain precision) once the solver has converged. Thus recovery strategies need to be adapted so that the local fields $(\admiss{u}_h\ensuremath{^{(s)}},\admiss{\sigma}_h\ensuremath{^{(s)}})$ satisfy the interface conditions.
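The static condensation \eqref{eq:equi_sd_cond}--\eqref{eq:oper_sd_cond} can be sketched numerically; the stiffness matrix below is invented and the interface reaction $\ensuremath{\boldsymbol{\lambda}}_b$ is set to zero, so the sketch only checks that the condensed solve reproduces the boundary part of the full solve.

```python
import numpy as np

# Toy static condensation: with internal (i) and boundary (b) dofs, the Schur
# complement S = K_bb - K_bi K_ii^{-1} K_ib and the condensed right-hand side
# b = f_b - K_bi K_ii^{-1} f_i reproduce the boundary solution of the full
# system (here with no interface reaction, lambda_b = 0). All data invented.
rng = np.random.default_rng(1)
n, i, b = 6, slice(0, 4), slice(4, 6)
M = rng.standard_normal((n, n))
K = M.T @ M + n * np.eye(n)        # symmetric positive definite stiffness
f = rng.standard_normal(n)

Kii, Kib, Kbi, Kbb = K[i, i], K[i, b], K[b, i], K[b, b]
S      = Kbb - Kbi @ np.linalg.solve(Kii, Kib)   # Schur complement
b_cond = f[b] - Kbi @ np.linalg.solve(Kii, f[i]) # condensed right-hand side

u_full = np.linalg.solve(K, f)
u_b    = np.linalg.solve(S, b_cond)
print(np.allclose(u_b, u_full[b]))
```

In the substructured setting, one such condensed problem is carried by each subdomain, and only the boundary unknowns are exchanged between neighbors.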
The aim of this section is to prove that the determination of the admissible pair $(\assemg(\admiss{u}_h\ensuremath{^\square}),\assemg (\admiss{\sigma}_h\ensuremath{^\square}))$ can be brought back to the determination of nodal interface fields $(\ensuremath{\admiss{\dep}}_b\ensuremath{^{(s)}},\ensuremath{\admiss{\lam}}_b\ensuremath{^{(s)}})_s$ which satisfy specific interface conditions. The construction of these nodal fields depends on the chosen domain decomposition strategy and is discussed in the following sections. \subsection{Kinematically admissible fields} In order to ensure interface Condition \eqref{eq:KAss} when building $\admiss{u}\ensuremath{^{(s)}}_h \in \KA{\Omega\ensuremath{^{(s)}}}$ so that $\assemg (\admiss{u}_h\ensuremath{^\square}) \in \KA{\Omega}$, we introduce continuous interface displacement fields $\hat{u}_{bh}\ensuremath{^{(s)}}$ from which we shall deduce internal displacement fields: \begin{equation*} \begin{aligned} \hat{u}\ensuremath{^{(s)}}_{bh}&= \hat{u}^{(s')}_{bh} \ , \ \forall (s,s')\\ \hat{u}\ensuremath{^{(s)}}_{h|\Gamma^{(ss')}} &= \hat{u}\ensuremath{^{(s)}}_{bh}, \ \forall s \end{aligned} \end{equation*} Since discretizations are matching on the interface, the first condition can directly be imposed on finite element nodal quantities: \begin{equation*}{\ensuremath{\admiss{\dep}}_b\ensuremath{^{(s)}}} = {\ensuremath{\admiss{\dep}}^{(s')}_b}\ , \ \forall (s,s')\end{equation*} In order to deduce the internal fields, one finite element problem is solved independently on each subdomain with imposed Dirichlet conditions on the interface: \begin{equation*} \begin{aligned} {\ensuremath{\admiss{\dep}}_i\ensuremath{^{(s)}}}&={\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ii}}^{-1}\left(\ensuremath{\mathbf{f}}_i\ensuremath{^{(s)}}-\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ib}{\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T\ensuremath{\admiss{\dep}}_b\ensuremath{^{(s)}}\right)\\ \hat{u}_h\ensuremath{^{(s)}}&=\shapev\ensuremath{^{(s)}} 
\ensuremath{\admiss{\dep}}\ensuremath{^{(s)}}= \begin{pmatrix}{\shapev}_{_i}\ensuremath{^{(s)}}& {\shapev}_{_b}\ensuremath{^{(s)}}\end{pmatrix}\begin{pmatrix} \ensuremath{\admiss{\dep}}_i\ensuremath{^{(s)}} \\\ensuremath{\admiss{\dep}}_b\ensuremath{^{(s)}} \end{pmatrix}\\ \admiss{u}_h&=\assemg (\admiss{u}_h\ensuremath{^\square}) \in \KA{\Omega} \end{aligned} \end{equation*} \subsection{Statically admissible fields} In order to ensure interface Condition \eqref{eq:SAss} when building $\admiss{\sigma}\ensuremath{^{(s)}}_h \in \SA{\Omega\ensuremath{^{(s)}}}$ so that $\assemg (\admiss{\sigma}_h\ensuremath{^\square}) \in \SA{\Omega}$, we introduce for each subdomain the continuous balanced interface traction fields $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ defined on $\Gamma\ensuremath{^{(s)}}$ which satisfy: \begin{equation}\label{eq:Fadm} \begin{aligned} \admiss{\sigma}_h\ensuremath{^{(s)}}.n\ensuremath{^{(s)}} &= \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}} \text{ on }\Gamma\ensuremath{^{(s)}}\\ \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}} + \ensuremath{\admiss{F}}_{bh}^{(s')} &=0 \text{ on }\Gamma^{(ss')}\\ \int_{\Omega\ensuremath{^{(s)}}} f .\rho d\Omega + \int_{\partial_f\Omega\ensuremath{^{(s)}}} g\ensuremath{^{(s)}} .\rho dS + \int_{\Gamma\ensuremath{^{(s)}}} \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}} .\rho dS & =0\quad \forall\rho \in \mathrm{RKA}^0(\Omega\ensuremath{^{(s)}}) \end{aligned} \end{equation} where $\mathrm{RKA}^0(\Omega\ensuremath{^{(s)}})$ is the set of rigid body motions which are compatible with the Dirichlet conditions imposed on $\partial_u\Omega\ensuremath{^{(s)}}$: \begin{equation*} \mathrm{RKA}^0(\Omega\ensuremath{^{(s)}})=\left\{ \rho\in\mathrm{H}^1(\Omega\ensuremath{^{(s)}}),\ \rho=0 \text{ on }\partial_u\Omega\ensuremath{^{(s)}},\ \varepsilon(\rho)=0 \right\} \end{equation*} The last condition of \eqref{eq:Fadm} expresses Fredholm's alternative, which ensures the well-posedness of the static problem on domain
$\Omega\ensuremath{^{(s)}}$. To build these traction fields in a simple way, we associate them with the finite element nodal reaction field ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$: \begin{equation}\label{eq:salambda1} \int_{\Gamma^{(ss')}} \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}\cdot{\shapef_j\ensuremath{^{(s)}}}_{|\Gamma^{(ss')}}dS = {\ensuremath{\admiss{\lam}}}_{b,j}\ensuremath{^{(s)}} \end{equation} where $j$ denotes a node of the interface, $\shapef_j\ensuremath{^{(s)}}$ its associated shape function and ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_{b,j}$ the corresponding nodal component of ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$. This equation imposes that the discrete field ${\ensuremath{\admiss{\lam}}}_{b}\ensuremath{^{(s)}}$ and the continuous field $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ develop the same virtual work in any finite element displacement field. The conditions on $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ have these discrete counterparts on ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$: \begin{equation}\label{eq:salambda2} \begin{aligned} \sum\limits_s \ensuremath{\mathbf{A}}\ensuremath{^{(s)}} \ensuremath{\admiss{\lam}}_b\ensuremath{^{(s)}} &= \mathbf{0}\\ {\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}}^T\left({\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T{\ensuremath{\admiss{\lam}}}_{b}\ensuremath{^{(s)}}+\ensuremath{\mathbf{f}}\ensuremath{^{(s)}}\right)&=\mathbf{0} \end{aligned} \end{equation} where $\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}$ is a basis of $\ker(\ensuremath{\mathbf{K}}\ensuremath{^{(s)}})$. As said earlier, the first equation corresponds to the equilibrium between subdomains. The second equation corresponds to the balance of the subdomain with respect to virtual rigid body motions (since this kind of displacement field is exactly represented in the finite element approximation, the discrete condition is equivalent to the continuous one).
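Before any field recovery is attempted, these two discrete conditions can be checked numerically. Below is a small NumPy sketch on a 1D bar split into two subdomains, one clamped and one floating; all operators, names and load values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def is_admissible(A_list, lam_list, R_list, t_list, f_list, tol=1e-12):
    # Discrete counterparts of the interface conditions:
    #   sum_s A^(s) lam^(s) = 0          (equilibrium between subdomains)
    #   R^(s)^T (t^(s)^T lam^(s) + f^(s)) = 0   (balance w.r.t. rigid modes)
    interface_ok = np.allclose(
        sum(A @ lam for A, lam in zip(A_list, lam_list)), 0.0, atol=tol)
    rigid_ok = all(
        np.allclose(R.T @ (t.T @ lam + f), 0.0, atol=tol)
        for R, t, lam, f in zip(R_list, t_list, lam_list, f_list))
    return interface_ok and rigid_ok

# 1D bar split in two: subdomain 1 is clamped (no rigid modes),
# subdomain 2 floats and carries an end load p at its free node.
p = 2.0
A1, A2 = np.array([[1.0]]), np.array([[1.0]])  # one shared interface dof
t1 = np.array([[0.0, 1.0]])                    # interface dof of subdomain 1
t2 = np.array([[1.0, 0.0]])                    # interface dof of subdomain 2
R1 = np.zeros((2, 0))                          # clamped: empty rigid basis
R2 = np.ones((2, 1))                           # floating: translation mode
f1, f2 = np.zeros(2), np.array([0.0, p])
lam1, lam2 = np.array([p]), np.array([-p])     # action/reaction reactions
```

With these reactions the floating subdomain is self-balanced and the interface forces cancel, so the check passes; perturbing one reaction breaks both conditions.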
As a first approach, we define $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ as: \begin{equation} \label{eq:salambda3} \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}} = \shapev{}_{|\Gamma^{(s)}}\ensuremath{^{(s)}} {\ensuremath{\admiss{\F}}}_{b}\ensuremath{^{(s)}} \end{equation} where ${\ensuremath{\admiss{\F}}}_{b}\ensuremath{^{(s)}}$ is the vector of nodal values of $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ and $\shapev{}_{|\Gamma^{(s)}}\ensuremath{^{(s)}}$ refers to the vector of the traces on $\Gamma^{(s)}$ of the finite element shape functions. Vector ${\ensuremath{\admiss{\F}}}_{b}\ensuremath{^{(s)}}$ is then obtained by inverting the (small) ``mass'' matrix of the interface of each subdomain. In the following, we denote by $\mathcal{G}_h$ the previous procedure, which associates a continuous balanced interface force $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ with balanced nodal interface forces ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$: \begin{equation} \label{eq:Fh} \ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}=\mathcal{G}_h({\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b) \end{equation} The traction field $\ensuremath{\admiss{F}}_{bh}\ensuremath{^{(s)}}$ makes it possible to satisfy the interface conditions associated with static admissibility. The next step is to build internal finite element stress fields which match the associated nodal boundary field ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$. This is done by solving one finite element problem on each subdomain with imposed Neumann conditions on the interface.
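In practice, $\mathcal{G}_h$ reduces to one small linear solve per subdomain interface with the interface ``mass'' matrix $M_{ij}=\int_{\Gamma}\varphi_i\varphi_j\,dS$. A sketch for a straight interface discretized with linear (P1) elements; the uniform mesh and the function names are illustrative assumptions:

```python
import numpy as np

def interface_mass_matrix(n_nodes, length):
    # P1 "mass" matrix M_ij = int phi_i phi_j dS on a uniform 1D interface.
    h = length / (n_nodes - 1)
    element_mass = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    M = np.zeros((n_nodes, n_nodes))
    for e in range(n_nodes - 1):
        M[e:e + 2, e:e + 2] += element_mass
    return M

def g_h(M, lam_b):
    # Nodal values F_b of the continuous traction F_bh such that
    # int F_bh . phi_j dS = lam_b[j] for every interface shape function.
    return np.linalg.solve(M, lam_b)
```

For the consistent nodal reactions of a constant traction, this procedure recovers that constant exactly, which is a convenient sanity check.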
\begin{equation}\label{eq:bdd_pi} \begin{aligned} \tilde{\ensuremath{\mathbf{u}}}\ensuremath{^{(s)}} &={\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}}^+\left(\ensuremath{\mathbf{f}}\ensuremath{^{(s)}}+{\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T{\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b\right)\\ \admiss{\sigma}_h\ensuremath{^{(s)}}& =\mathcal{F}_h\left(\hooke:\varepsilon(\shapev\ensuremath{^{(s)}}\tilde{\ensuremath{\mathbf{u}}}\ensuremath{^{(s)}}), f\ensuremath{^{(s)}}, \left\{g\ensuremath{^{(s)}},\mathcal{G}_h({\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b)\right\}\right)\\ \admiss{\sigma}_h&=\assemg (\admiss{\sigma}_h\ensuremath{^\square}) \in \SA{\Omega} \end{aligned} \end{equation} The use of the pseudo-inverse ${\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}}^+$ is due to the potential lack of Dirichlet boundary conditions on the substructure. Displacement field $\tilde{\ensuremath{\mathbf{u}}}\ensuremath{^{(s)}}$ is defined up to a rigid body motion, which need not be determined since only the symmetric gradient of the associated displacement field is required. It has to be noted that the fully parallel procedure $\mathcal{G}_h$ proposed above leads to a different admissible traction field than the one that would have been obtained using the standard patch technique \cite{ladeveze:rougeot:1997:erdc:sarecovery} (referred to in the sequel as the sequential approach). Thus the use of $\mathcal{G}_h$ implies that the parallel error estimation differs from the standard sequential one even when the discrete interface conditions are satisfied. Although there are as yet no theoretical results on the quality of the resulting fields, the examples given in Section~\ref{sec:numeric} show that the sequential and parallel estimators cannot be distinguished (once the interface conditions have sufficiently converged, which happens very quickly).
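The role of the pseudo-inverse in \eqref{eq:bdd_pi} can be illustrated on a free-floating 1D bar: the Neumann load must be self-balanced (Fredholm's alternative), the displacement is defined only up to a rigid translation, but the strains, and hence the recovered stresses, are unique. A NumPy sketch with illustrative data:

```python
import numpy as np

# Free-floating 3-node bar with unit element stiffness: K is singular
# and its kernel is spanned by the rigid translation R = (1, 1, 1).
K = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
R = np.ones(3)
f = np.array([-1.0, 0.0, 1.0])   # self-balanced end loads: R . f = 0

u = np.linalg.pinv(K) @ f        # one particular (minimum-norm) solution
strain = np.diff(u)              # only the symmetric gradient is needed
```

Any other solution $u + c\,R$ gives exactly the same strains, which is why the rigid-body amplitude never has to be determined.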
\section{Recovery of admissible fields in BDD} \label{sec:bddrecovery} In the Balancing domain decomposition \cite{MANDEL:1993:BAL,LETALLEC:1994:DDM}, a unique interface displacement unknown $\ensuremath{\mathbf{u}}_b$ is introduced so that continuity is always ensured: \begin{equation} \ensuremath{\mathbf{u}}_b\ensuremath{^{(s)}}={\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T \ensuremath{\mathbf{u}}_b \Longrightarrow \sum_s\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}_b\ensuremath{^{(s)}} = \mathbf{0} \end{equation} Other quantities can be deduced from $\ensuremath{\mathbf{u}}_b$ and equations (\ref{eq:equi_sd},\ref{eq:equi_sd_cond}): \begin{equation}\label{eq:bdd1} \begin{aligned} \ensuremath{\mathbf{u}}_i\ensuremath{^{(s)}} &= {\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ii}}^{-1}\left(\ensuremath{\mathbf{f}}_i\ensuremath{^{(s)}}-\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}_{ib}{\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T\ensuremath{\mathbf{u}}_b\right)\\ \ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}} &= \ensuremath{\mathbf{S}}\ensuremath{^{(s)}} \ensuremath{\mathbf{u}}_b - \ensuremath{\mathbf{b}}\ensuremath{^{(s)}} \end{aligned} \end{equation} The BDD solver consists in iteratively finding the interface displacement $\ensuremath{\mathbf{u}}_b$ which ensures global equilibrium ($\sum_s\ensuremath{\mathbf{A}}^{(s)}\ensuremath{\boldsymbol{\lambda}}^{(s)}_b=\mathbf{0}$), \begin{equation}\label{eq:primal_sys} \mathbf{0}=\sum_s \ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}} = \left(\sum_s \ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}{\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T \right)\ensuremath{\mathbf{u}}_b - \left(\sum_s \ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\ensuremath{\mathbf{b}}\ensuremath{^{(s)}}\right) \end{equation} \subsection{Recovery of KA fields} In the BDD solver, the kinematic interface conditions are satisfied at every iteration, and
using $\ensuremath{\admiss{\dep}}_b\ensuremath{^{(s)}}={\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}}^T\ensuremath{\mathbf{u}}_b$ makes it possible to build $\admiss{u}\ensuremath{^{(s)}}_h$ so that $\admiss{u}_h=\assemg\left(\admiss{u}_h\ensuremath{^\square}\right)\in\KA{\Omega}$. Note that all associated computations are performed during the standard solution process, so that no extra operation is required. \subsection{Recovery of SA fields} For a given interface displacement $\ensuremath{\mathbf{u}}_b$, we write: \begin{equation*} \llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil=\sum_s\ensuremath{\mathbf{A}}^{(s)}\ensuremath{\boldsymbol{\lambda}}^{(s)}_b=\sum_s\ensuremath{\mathbf{A}}^{(s)} \left( \ensuremath{\mathbf{S}}\ensuremath{^{(s)}} \ensuremath{\mathbf{u}}_b - \ensuremath{\mathbf{b}}\ensuremath{^{(s)}} \right) \end{equation*} Obviously $\llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil$ is zero if and only if $\ensuremath{\mathbf{u}}_b$ is the solution to \eqref{eq:primal_sys}. We then define: \begin{equation}\label{eq:lam_feti} {\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b = \ensuremath{\boldsymbol{\lambda}}\ensuremath{^{(s)}}_b- \left.\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\right.^T\llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil \end{equation} where $(\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}})_s$ are scaled assembling operators so that $\sum_s\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\left.\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\right.^T= \mathbf{I}$.
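With the multiplicity scaling, the correction \eqref{eq:lam_feti} simply removes from each subdomain an equal share of the interface imbalance. A minimal sketch for interface nodes shared by the same set of subdomains (the function name and the reaction values are illustrative assumptions):

```python
import numpy as np

def balance_reactions(lam_list):
    # lam_hat^(s) = lam^(s) - w * sum_s lam^(s), with the multiplicity
    # scaling w = 1/(number of sharing subdomains); valid here because
    # every interface node is shared by the same subdomains.
    residual = sum(lam_list)        # interface imbalance [[lambda]]
    w = 1.0 / len(lam_list)
    return [lam - w * residual for lam in lam_list]
```

Whatever the input reactions, the corrected ones sum to zero at every interface node, which is the first condition of discrete static admissibility.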
The multiplicity scaling is a typical example of such an operator $\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}$: \begin{equation*} \left.\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\right.^T = \left.\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\right.^T\left(\sum_j\ensuremath{\mathbf{A}}^{(j)}\left.\ensuremath{\mathbf{A}}^{(j)}\right.^T\right)^{-1} \end{equation*} which, in the case of two subdomains, gives $\left.\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\right.^T \llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil=\displaystyle \frac{1}{2}\llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil$. In the case of heterogeneous structures, other scaled assembly operators which take the heterogeneity into account are used \cite{RIXEN:1998:SUPERL,KLAWONN:2001:FNN,GOSSELET:2002:DDM}. It is clear that, by definition, $\ensuremath{\admiss{\lam}}\ensuremath{^{(s)}}_b$ is a balanced nodal reaction field: \begin{equation*} \sum_s\ensuremath{\mathbf{A}}^{(s)}{\ensuremath{\admiss{\lam}}}^{(s)}_b=\mathbf{0} \end{equation*} In order to prove that ${\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b$ also satisfies Fredholm's alternative, we note that since $\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}$ is a basis of $\mathrm{ker}(\ensuremath{\mathbf{K}}\ensuremath{^{(s)}})$ and $\ensuremath{\mathbf{K}}_{ii}\ensuremath{^{(s)}}$ is invertible, we have $\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b=\mathbf{0}$ and $\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_i=-\left.\ensuremath{\mathbf{K}}_{ii}\ensuremath{^{(s)}}\right.^{-1}\ensuremath{\mathbf{K}}_{ib}\ensuremath{^{(s)}}\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b$.
Condition \eqref{eq:salambda2} can then be written in an equivalent condensed form: \begin{equation*} \begin{aligned} \left.\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b\right.^T \left({\ensuremath{\admiss{\lam}}}\ensuremath{^{(s)}}_b+\ensuremath{\mathbf{b}}\ensuremath{^{(s)}}\right)&=0\\ \left.\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b\right.^T \left(\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}_b-\ensuremath{\mathbf{b}}\ensuremath{^{(s)}}- \left.\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\right.^T\llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil+\ensuremath{\mathbf{b}}\ensuremath{^{(s)}}\right)&=0\\ \end{aligned} \end{equation*} Using the symmetry of $\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}$ (inherited from the symmetry of $\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}$) to nullify $\left.\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b\right.^T\ensuremath{\mathbf{S}}\ensuremath{^{(s)}}$, the condition becomes: \begin{equation}\label{BDD_cond} \begin{aligned} \left(\ensuremath{\tilde{\passem}}\ensuremath{^{(s)}}\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}_b\right)^T \llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil=0 \end{aligned} \end{equation} which is exactly the balancing condition \cite{mandel:93:ddm} of the iterative BDD solver: the residual $\llceil \ensuremath{\boldsymbol{\lambda}}_b \rrceil=\left(\sum_s\ensuremath{\mathbf{A}}^{(s)}\ensuremath{\boldsymbol{\lambda}}^{(s)}_b\right)$ of the BDD iterative solver \eqref{eq:primal_sys} has to be orthogonal to all local weighted rigid body motions so that the preconditioning step is well posed. We have thus constructed a pair of interface nodal vectors $(\ensuremath{\admiss{\dep}}_b,\ensuremath{\admiss{\lam}}_b)$ which satisfies all the conditions required to build admissible fields.
Note that all the involved operations are already performed during the classical steps of the primal domain decomposition approach with a Neumann-Neumann preconditioner and the associated coarse problem, so that all finite element quantities (even the internal ones) are available at no cost; the only extra operations are due to the use of algorithms $\mathcal{G}_h$ (to compute $\admiss{F}_{bh}$) and $\mathcal{F}_h$ (to compute $\admiss{\sigma}_h$). \section{Recovery of admissible fields in FETI} \label{sec:fetirecovery} In the Finite Element Tearing and Interconnecting domain decomposition \cite{FARHAT:1994:ADV}, a unique interface effort unknown $\ensuremath{\boldsymbol{\lambda}}_b$ is introduced so that interface equilibrium is always ensured: \begin{equation}\label{eq:FETI_global} \ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}}={\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}}^T \ensuremath{\boldsymbol{\lambda}}_b \Longrightarrow \sum_s\ensuremath{\mathbf{A}}\ensuremath{^{(s)}}\ensuremath{\boldsymbol{\lambda}}_b\ensuremath{^{(s)}} = \mathbf{0} \end{equation} Displacements can be deduced from $\ensuremath{\boldsymbol{\lambda}}_b$ if it satisfies Fredholm's alternative on each substructure: \begin{equation}\label{eq:feti1} \begin{aligned} \ensuremath{\mathbf{u}}\ensuremath{^{(s)}} &= {\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}}^{+}\left(\ensuremath{\mathbf{f}}\ensuremath{^{(s)}}+{\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T{\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}}^T\ensuremath{\boldsymbol{\lambda}}_b\right)+\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}\ensuremath{\boldsymbol{\alpha}}\ensuremath{^{(s)}}\\ \mathbf{0}&={\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}}^T\left(\ensuremath{\mathbf{f}}\ensuremath{^{(s)}}+{\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T{\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}}^T\ensuremath{\boldsymbol{\lambda}}_b\right) \end{aligned} \end{equation} where $\ensuremath{\boldsymbol{\alpha}}\ensuremath{^{(s)}}$
is the unknown magnitude of rigid body motions. The FETI solver consists in iteratively finding an interface effort $\ensuremath{\boldsymbol{\lambda}}_b$, under the previous constraint, which ensures the continuity of the interface displacement: \begin{multline} \mathbf{0}=\sum_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{u}}_b\ensuremath{^{(s)}} = \left(\sum_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}{\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}}^+{\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}}^T{\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}}^T\right)\ensuremath{\boldsymbol{\lambda}}_b \\+ \left(\sum_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}{\ensuremath{\mathbf{K}}\ensuremath{^{(s)}}}^+\ensuremath{\mathbf{f}}\ensuremath{^{(s)}}\right)+\left(\sum_s \ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\ensuremath{\mathbf{t}}\ensuremath{^{(s)}}\ensuremath{\mathbf{R}}\ensuremath{^{(s)}}\ensuremath{\boldsymbol{\alpha}}\ensuremath{^{(s)}}\right) \end{multline} \subsection{Recovery of SA fields} In the FETI solver, the nodal interface fields ${\ensuremath{\boldsymbol{\lambda}}}\ensuremath{^{(s)}}_b={\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}}^T\ensuremath{\boldsymbol{\lambda}}_b$ are by construction always balanced at the interface \eqref{eq:FETI_global} and associated with well-posed discrete Neumann problems on each substructure \eqref{eq:feti1}. Hence, we can directly set $\ensuremath{\admiss{\lam}}_b\ensuremath{^{(s)}}={\ensuremath{\boldsymbol{\lambda}}}\ensuremath{^{(s)}}_b$ and apply algorithms $\mathcal{G}_h$ and $\mathcal{F}_h$ to compute $\admiss{\sigma}\ensuremath{^{(s)}}_h \in \SA{\Omega\ensuremath{^{(s)}}}$ with $\admiss{\sigma}_h=\assemg (\admiss{\sigma}_h\ensuremath{^\square}) \in \SA{\Omega}$.
\subsection{Recovery of KA fields} For a given balanced nodal interface traction $\ensuremath{\boldsymbol{\lambda}}_b$, we introduce, in agreement with \eqref{eq:feti1}, the interface displacement gap: \begin{equation*} \llfloor \ensuremath{\mathbf{u}}_b \rrfloor= \sum_s\ensuremath{\mathbf{\underline{A}}}^{(s)}\ensuremath{\mathbf{u}}^{(s)}_b \end{equation*} and we define \begin{equation*} \ensuremath{\admiss{\dep}}\ensuremath{^{(s)}}_b = \ensuremath{\mathbf{u}}\ensuremath{^{(s)}}_b- \left.\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}}\right.^T\llfloor \ensuremath{\mathbf{u}}_b \rrfloor \end{equation*} where $(\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}})_s$ are scaled assembling operators so that $\sum_s\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}} \left.\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}}\right.^T= \mathbf{I}$. Similarly to the BDD case, a typical example of such an operator $\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}}$ is the multiplicity scaling: \begin{equation} \left.\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}}\right.^T = \left.\ensuremath{\mathbf{\underline{A}}}\ensuremath{^{(s)}}\right.^T\left(\sum_j\ensuremath{\mathbf{\underline{A}}}^{(j)}\left.\ensuremath{\mathbf{\underline{A}}}^{(j)}\right.^T\right)^{-1} \end{equation} Note that in the case of two subdomains, we have: $\left.\ensuremath{\tilde{\dassem}}\ensuremath{^{(s)}}\right.^T \llfloor \ensuremath{\mathbf{u}}_b \rrfloor=\displaystyle \frac{1}{2}\llfloor \ensuremath{\mathbf{u}}_b \rrfloor$. The connection between FETI and BDD scaling operators (even in the heterogeneous case) is given in \cite{GOSSELET:2003:IEI}.
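This correction is the exact dual of the BDD treatment of the reactions: with the multiplicity scaling and two subdomains, each corrected interface trace is simply the average of the two traces. A minimal sketch with a signed jump for two subdomains (names and data are illustrative assumptions):

```python
import numpy as np

def continuous_trace(u1, u2):
    # u_hat^(s) = u^(s) -/+ (1/2) * gap, with signed jump [[u]] = u1 - u2:
    # both corrected traces equal the interface average.
    gap = u1 - u2
    return u1 - 0.5 * gap, u2 + 0.5 * gap
```

The corrected traces coincide by construction, so they can serve as the common Dirichlet datum for the local recovery of kinematically admissible fields.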
It is clear that by construction \begin{equation*}\sum_s\ensuremath{\mathbf{\underline{A}}}^{(s)}\ensuremath{\admiss{\dep}}\ensuremath{^{(s)}}_b=\mathbf{0}\end{equation*} Hence the nodal interface displacement $\ensuremath{\admiss{\dep}}\ensuremath{^{(s)}}_b$ can be used to deduce an admissible displacement field $\admiss{u}\ensuremath{^{(s)}}_h$ so that $\admiss{u}_h=\assemg\left(\admiss{u}_h\ensuremath{^\square}\right)\in\KA{\Omega}$. We have thus constructed a pair of interface nodal vectors $(\ensuremath{\admiss{\dep}}_b,\ensuremath{\admiss{\lam}}_b)$ which satisfies all the conditions required to build admissible fields. Note that all the involved operations are already performed during the classical steps of the dual domain decomposition approach (with built-in coarse problem) with the Dirichlet preconditioner, so that all finite element quantities (even the internal ones) are available at no cost: the quantity $\llfloor \ensuremath{\mathbf{u}}_b \rrfloor$ is directly available during the classical solution procedure (without computing any $\ensuremath{\boldsymbol{\alpha}}^{(j)}$), which is based on an initialization/projection algorithm \cite{FARHAT:1994:ADV}, and the displacement field ${\ensuremath{\mathbf{u}}}\ensuremath{^{(s)}}$ can be defined up to an element of the kernel (a rigid body motion) since only its symmetric gradient is used during the computation of the error. The only extra operations are due to the use of algorithms $\mathcal{G}_h$ (to compute $\admiss{F}_{bh}$) and $\mathcal{F}_h$ (to compute $\admiss{\sigma}_h$). \section{Numerical assessment} \label{sec:numeric} In order to assess the performance of our parallel error estimator, we consider the 2D toy problem of the $\Gamma$-shape structure of Figure \ref{fig:gammaStruct}, which has been used in other papers such as \cite{chamoin:ladeveze:2008:nonIntrusiveBounds}. Plane stresses are assumed.
The material behavior is isotropic, linear and elastic, with Young's modulus $E=2000$ MPa and Poisson's ratio $\nu = 0.3$. The structure is clamped at its base (whose length is denoted $L$) and is subjected to traction and shear on its upper-right side, while all the remaining boundaries are traction-free. \begin{figure}[ht] \centering \subfigure[Finite element problem ($h=\frac{L}{4}$)\label{fig:gammaStruct}]{ \includegraphics[height=6.5cm]{portique-mesh}}\qquad \subfigure[Substructuring ($h=\frac{L}{8}$, $N_{sd}=8$)\label{fig:gammaStruc8sd}]{ \includegraphics[height=6.1cm]{portique-dec8sd}} \caption{$\Gamma$-shape structure} \label{fig:gammaStructGen} \end{figure} Several regular meshes have been generated, made of triangular elements of characteristic size $h=\frac{L}{m}$ with $m=2,4,8,16,32$. For each mesh, a sequential (mono-domain) computation is performed, followed by domain decomposition computations obtained by an automatic splitting of the mesh into an increasing number $N_{sd}$ of subdomains ($N_{sd}=2,4,8$ when $m\leqslant 4$ and $N_{sd}=2,4,8,16,32$ when $m\geqslant8$). Figure \ref{fig:gammaStruc8sd} shows such a decomposition for $N_{sd}=8$ and $m=8$. All the computations are performed with the ZeBuLoN finite element code \cite{ZEBUUSER:2001}, using elements of polynomial degree $p=1$. Both BDD and FETI algorithms are used to solve the substructured problems, together with Neumann-Neumann and Dirichlet preconditioners, respectively. Besides, the convergence criterion of the solver, which applies here to the interface traction gap (resp. displacement gap) in the primal (resp. dual) approach, is set to $10^{-6}$.
In each case, in addition to the new parallel error estimator $\ecrpara$, we compute the standard sequential estimator $\ecrseq$ and the true error $\ensuremath{e_h}$ obtained using a reference field $u_{ex}$ computed on a very fine mesh: \begin{equation*} \begin{aligned} \ecrseq &=\ecr{\admiss{u}_h,\admiss{\sigma}_h}{\Omega} \\ \ecrpara &=\sqrt{\sum_s \left(\ecr{\admiss{u}\ensuremath{^{(s)}}_h,\admiss{\sigma}\ensuremath{^{(s)}}_h}{\Omega\ensuremath{^{(s)}}}\right)^2}\\ \ensuremath{e_h}&= \strainnorm{\strain{u_{ex}-\hat{u}_h}}{\Omega} = \sqrt{{\strainnorm{\strain{u_{ex}}}{\Omega}^2} - {\strainnorm{\strain{\hat{u}_{h}}}{\Omega}^2}} \end{aligned} \end{equation*} \subsection{Quality of the parallel error estimator} We first study the quality of the parallel error estimator $\ecrpara$ once convergence of the domain decomposition solver is reached. As said earlier, the proposed technique does not lead to the same statically admissible field because of the special treatment of the interface traction \eqref{eq:Fh}. Our estimator might therefore be sensitive to the substructuring; we thus compare the estimations obtained with meshes of characteristic size $h$ and decompositions into $N_{sd}$ subdomains. Results are given in Figure \ref{fig:error_vs_h} and Table \ref{tab:erel}. \begin{figure}[ht] \centering \includegraphics[width=0.75\textwidth]{figures/DP_error_h2} \caption{Convergence of Error $\ensuremath{e_h}$ and Estimators $\ecrseq$ and $\ecrpara$ (for various $N_{sd}$) vs.
element size $h$ } \label{fig:error_vs_h} \end{figure} \begin{table}[ht] \centering \begin{tabular}{cccccc} \hline $h$ & $L/2$ & $L/4$ & $L/8$ & $L/16$ & $L/32$ \\ \hline $\#$ dofs & 146 & 514 & 1922 & 7426 & 29186 \\ \hline \hline $\ensuremath{e_h}$ & 0.2347 & 0.1493 & 0.0937 & 0.0597 & 0.0386 \\ \hline $\ecrseq$ & 0.5712 & 0.4035 & 0.2662 & 0.1769 & 0.1151 \\ \hline \hline $N_{sd}$ & \multicolumn{5}{c}{$\ecrpara$} \\ \hline 2 &0.5657 &0.4021 &0.2648 &0.1747 &0.1151\\ 4 &0.5768 &0.4007 &0.2648 &0.1747 &0.1151\\ 8 &0.5546 &0.4007 &0.2676 &0.1747 &0.1165\\ 16 & & &0.2690 &0.1761 &0.1165\\ 32 & & &0.2787 &0.1789 &0.1178\\ \hline \end{tabular} \caption{Error $\ensuremath{e_h}$ and Estimators $\ecrseq$ and $\ecrpara$ (for various $N_{sd}$) vs. element size $h$} \label{tab:erel} \end{table} We observe that: \begin{itemize} \item The results obtained by FETI and BDD cannot be distinguished (which is why only FETI results are given in Table \ref{tab:erel}). \item $\ecrpara$ barely depends on the substructuring; the results are quite similar whether they are obtained on a single domain (``sequential'' curve) or on $N_{sd}$ subdomains. Only a slight rise of the estimation can be observed when the number of interface degrees of freedom is not small compared to the number of internal degrees of freedom, which is logical since the description of the interface traction fields is coarser in the parallel approach than in the sequential one. \end{itemize} As a conclusion, the parallel error estimator $\ecrpara$ achieves the same efficiency factor as the standard sequential one, while the CPU time is divided by $N_{sd}$. \subsection{Convergence of the parallel estimator along DD-solver iterations} The previous results enabled us to analyse the quality of the parallel estimator when the interface quantities had converged.
A new feature associated with the use of an iterative solver for the domain decomposition (DD) problem is that the discretization error estimation can be conducted before DD convergence is reached, that is, in the presence of a displacement or traction discontinuity at the interface, as explained in Sections~\ref{sec:bddrecovery} and \ref{sec:fetirecovery}. We then compute the parallel error estimator $\ecrpara$ at each iteration of the DD solver. Convergence curves of $\ecrpara$ during the FETI and BDD iterations are shown in Figure \ref{fig:error_convergence}. The parallel error estimator is plotted as a function of the FETI (resp. BDD) residual, defined (for Iteration $n$) as the normalized displacement (resp. traction) gap at the interface: \begin{equation} r^n=\frac{\llfloor \ensuremath{\mathbf{u}}_b^n \rrfloor_{\Gamma}}{\llfloor \ensuremath{\mathbf{u}}_b^0 \rrfloor_{\Gamma}} \qquad \text{ or }\qquad r^n=\frac{\llceil \ensuremath{\boldsymbol{\lambda}}_b^n \rrceil_{\Gamma}}{\llceil \ensuremath{\boldsymbol{\lambda}}_b^0 \rrceil_{\Gamma}} \end{equation} The classical stopping criterion for the convergence of the DD solver is that this residual be below $10^{-6}$. Because of the similarity between the curves, only the cases $h=L/8$ and $h=L/16$ are shown. \begin{figure}[ht] \centering \begin{tabular}[h]{cc} \psfrag{error estimator}{$\ecrpara$} \includegraphics[width=0.465\textwidth]{D_error_gap_h08} &\psfrag{error estimator}{$\ecrpara$} \includegraphics[width=0.465\textwidth]{D_error_gap_h16}\\ FETI Case: $h=L/8$ & FETI Case: $h=L/16$\\ & \\ \psfrag{error estimator}{$\ecrpara$} \includegraphics[width=0.465\textwidth]{P_error_gap_h08} &\psfrag{error estimator}{$\ecrpara$} \includegraphics[width=0.465\textwidth]{P_error_gap_h16}\\ BDD Case: $h=L/8$ & BDD Case: $h=L/16$\\ \end{tabular} \caption{Convergence of estimator vs.
DD residual} \label{fig:error_convergence} \end{figure} The curves show a rapid convergence of the parallel error estimator along the iterations of the solver, so that $\ecrpara$ can be considered as converged when the FETI residual reaches an order of magnitude of $5\times10^{-3}$ or the BDD residual reaches $5\times10^{-1}$, which corresponds to at most 5 iterations, whereas the solver convergence is achieved in 10 to 20 iterations. Actually, $\ecrpara$ is driven by both the discretization error and the convergence of the solver (interface error). The ``L''-shaped curves show that the impact of the residual of the DD solver is dominant only during the first iterations (when interface fields are very poorly estimated); afterwards $\ecrpara$ stagnates at a value very close to $\ecrseq$, which is associated only with the discretization error. It thus seems possible to stop the iterations of the solver well before convergence while still obtaining an accurate global estimate of the discretization error. Figures \ref{fig:eltmaph2} and \ref{fig:eltmaph8} show maps of the elementary contributions $\ecrparaE$ to the parallel error estimator $\ecrpara$ at different steps of the convergence, for $N_{sd}=8$ with $h_{e}=L/2$ or $h_{e}=L/8$. During the first iterations, the estimator highlights both discretization errors (around the re-entrant angle) and the lack of convergence of the solver (along the interfaces), whereas very quickly the solver (that is, the interfaces) no longer contributes to the estimator. The various examples show that the convergence of the global estimator is due to the convergence of the elementary contributions $\ecrparaE$, which means that, when remeshing procedures are to be carried out, the maps obtained after a few iterations of the solver are sufficient to define correct refinement instructions.
\begin{figure}[ht] \centering \begin{tabular}[h]{ccc} \includegraphics[width=2.9cm]{h02sd8dec.eps} & \includegraphics[width=2.9cm]{h02sd8iter11dual-nb.eps} & \includegraphics[width=1.5cm]{legendeh02-nb.eps} \\ Decomposition & Reference map & Range \vspace{1pt} \\ \includegraphics[width=2.9cm]{h02sd8iter1primal-nb.eps} & \includegraphics[width=2.9cm]{h02sd8iter4primal-nb.eps} & \includegraphics[width=2.9cm]{h02sd8iter5primal-nb.eps}\\ Iteration 1 & Iteration 4 & Iteration 5 \\ & \textbf{BDD solver} & \\ \vspace{1pt} \\ \includegraphics[width=2.9cm]{h02sd8iter1dual-nb.eps} & \includegraphics[width=2.9cm]{h02sd8iter4dual-nb.eps} & \includegraphics[width=2.9cm]{h02sd8iter5primal-nb.eps}\\ Iteration 1 & Iteration 4 & Iteration 5 \\ & \textbf{FETI solver} & \end{tabular} \caption{Maps of $\ecrparaE$ for $h=L/2$ and $N_{sd}=8$ at various iterations} \label{fig:eltmaph2} \end{figure} \begin{figure}[ht] \centering \begin{tabular}[h]{ccc} \includegraphics[width=2.5cm]{h08sd8dec.eps} & \includegraphics[width=2.5cm]{h08sd8iter17dual-nb.eps} & \includegraphics[width=1.5cm]{legendeh08-nb.eps} \\ Decomposition & Reference map & Range \vspace{2pt} \\ \includegraphics[width=3.cm]{h08sd8iter1primal-nb.eps} & \includegraphics[width=3.cm]{h08sd8iter4primal-nb.eps} & \includegraphics[width=3.cm]{h08sd8iter5primal-nb.eps}\\ Iteration 1 & Iteration 4 & Iteration 5 \\ & \textbf{BDD solver} & \\ \vspace{2pt} \\ \includegraphics[width=3.cm]{h08sd8iter1dual-nb.eps} & \includegraphics[width=3.cm]{h08sd8iter4dual-nb.eps} & \includegraphics[width=3.cm]{h08sd8iter5primal-nb.eps}\\ Iteration 1 & Iteration 4 & Iteration 5 \\ & \textbf{FETI solver} & \\ \vspace{2pt} \\ \end{tabular} \caption{Maps of $\ecrparaE$ for $h=L/8$ and $N_{sd}=8$ at various iterations} \label{fig:eltmaph8} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper, we presented a new approach to handle robust model verification based on constitutive relation error in a domain decomposition context. 
The method relies on the construction of fields that are kinematically and statically admissible on the whole structure. We showed that a fully parallel construction is possible even when starting from fields which do not satisfy the interface conditions. The construction is a three-step procedure: first, nodal displacement and traction fields are built so that discrete admissibility conditions are satisfied; second, continuous admissible traction fields are deduced from them; third, these fields are used as input to any classical recovery procedure. The first step is carried out implicitly when good preconditioners are employed within the domain decomposition method, and the second step amounts to the inversion of small, sparse ``mass'' matrices. Our first results show not only that the error estimate does not suffer from the approximations made at the interface to achieve full parallelism, but also that even roughly estimated interface fields yield a good estimate of the discretization error and correct maps of the elementary contributions required by mesh adaptation procedures. Thus, not only are the computational costs divided by the number of processors, but the prior computation of the finite element solution can also be accelerated, since a coarse solution is sufficient (in our case this corresponded to 3 to 5 times fewer solver iterations). Future studies will deal with parallel mesh adaptation.
\section{Background}\label{sec:introduction} Stroke survivors commonly suffer physical deficits that manifest as disturbances to balance and gait \cite{strkDef2013}. Advances in affordable computer power and portable motion-sensing technology \cite{bmbf_Bal3} have led to an increasing role of technology in rehabilitation \cite{roleSensors}, for instance with biofeedback, where physiological or biomechanical information is made available to conscious experience to allow for greater self-awareness of these states, and modification where necessary \cite{bmbf_Book}. \emph{Biomechanical biofeedback (BMBF)} \cite{bmbf_Giggins} based on bodily kinematics or kinetics is the type of biofeedback most directly applicable to neurorehabilitation, specifically balance/mobility as well as lower limb activities and gait \cite{bmbf_Bal1, bmbf_Gait2}. Studies with both healthy and impaired populations indicate that biofeedback-assisted training is superior to regular therapy protocols in improving postural sway \cite{costantini,bmbf_Bal5}, weight shifting and reaction time \cite{bmbf_Bal1}, as well as sit-stand transfers \cite{bmbf_Bal6} and gait kinematics \cite{bmbf_Bal5,bmbf_Gait1}. \emph{Auditory biofeedback (ABF)} involves the real-time conversion of measured bodily information into a sonic representation. By definition, it can thus be seen as a specific case of interactive sonification \cite{soniHandbook}, where data relations are rapidly converted into auditory relations \cite{abf_Bal2,costantini,abf_Gait6}. The supplied auditory information on movement execution serves as continuous or discrete guidance which assists in movement error correction \cite{strkRehab_G14,parseihianDyn,matsubara}. \subsection{Auditory Biofeedback in Movement Training} ABF has been applied to train postural control, with positive results \cite{abf_Bal2, costantini, strkBal_N3, abf_Bal5}.
In a series of studies, Dozza and colleagues explored multidimensional ABF with a system that \emph{continuously} sonified trunk accelerations/sway velocities through the frequency, level and stereo balance of a sound via nonlinear mappings. The biofeedback information provided was similar to that provided by the vestibular system \cite{dozza2011}, and the biofeedback improved balance overall, more so when other key sensory cues were unreliable or absent \cite{abf_Bal2}. Direction specificity of audio biofeedback reduced postural sway and increased the frequency of postural corrections in the direction of the biofeedback \cite{strkBal_N3,dozza2011}. Furthermore, the optimal mapping function for trunk sway to ABF was found to be sigmoid-shaped \cite{abf_Bal5}. Unlike the above continuous mapping paradigm, Costantini et al. \cite{costantini} successfully tested a biofeedback system that projected trunk inclination onto discrete 2D zones in the horizontal plane. Postural deviations triggered auditory warnings of proportional intensity using simple filtered and modulated noise. Though the authors only performed short-term evaluations with unimpaired subjects, they found significant reductions in postural sway in several conditions \cite{costantini}. Engardt et al. \cite{bmbf_Bal4} assessed the effects of ABF during training of \emph{sit-to-stand (STS)} transfers for hemiparetic stroke patients, finding short-term improvements in body weight distribution between the paretic and non-paretic limb. Nicolai et al. \cite{bmbf_Bal6} found significant and sustained improvements in posture and balance post-intervention in patients with progressive supranuclear palsy. Patients received an auditory cue to stand up when an individually calibrated trunk flexion angle threshold was crossed \cite{bmbf_Bal6}. ABF has also shown positive effects in gait training \cite{bmbf_Bal3,bmbf_Gait2,bmbf_Gait1}.
For instance, sonifying ankle rollover patterns as a series of data-driven synthesizers was found to bring about significant differences in cadence and walking velocity among participants \cite{abf_Gait6}. Torres et al. \cite{abf_Gait4} introduced an \emph{inertial measurement unit (IMU)}-based prototype and prescribed a number of movement-sound couplings, such as fixed movement thresholds to trigger discrete auditory feedback or modulate continuous auditory feedback \cite{abf_Gait4}. \subsection{Dynamic Trajectory Tracking} The above systems essentially provide \emph{error-based} feedback \cite{strkRehab_G14}, where the difference between a quantity and a \emph{constant} ``target'' value is sonified over time. However, in the context of a variable target (as in dynamic movement training), there is evidence that error feedback may not be ideal in terms of performance outcomes \cite{parseihianDyn,matsubara}, although available research is inconclusive. Rosati et al. \cite{rosati} showed that error feedback did not improve performance with respect to visual feedback alone, while an auditory representation of the visualized target motion (\emph{task}-related feedback) was more valuable. However, Boyer et al. \cite{boyer} found that both these feedback types could reduce tracking error and increase movement energy in visuo-manual tracking. Parseihian et al. \cite{parseihianDyn} conducted an audio-guided 2D dynamic trajectory-tracking experiment based on the above research, and found that information related to what the user ``needs to do'' resulted in superior tracking performance compared to error feedback. Due to the dynamic nature of the task, they found that pitch and other auditory dimensions that allow rapid comprehension and adjustments on the part of the user were the most suitable \cite{parseihianDyn}.
The timing of task-related feedback presentation may also be critical, specifically whether task information is provided simultaneously with user feedback or slightly in advance (allowing for user anticipation). An example of the former \cite{matsubara} concurrently presented two auditory streams corresponding to the task (reference) and the user's own performance, panned to opposite stereo locations, with the user's goal being to make them sound identical. While the interaction was generally feasible and comprehensible, position- and timing-based user performance errors (relative to target) were found to be significantly worse than with visual feedback. On the other hand, Parseihian et al. \cite{parseihianDyn} found that feedback based on \emph{anticipated distance error} afforded far superior performance to merely instantaneous distance error. Despite its promise, ABF has failed to attain widespread practical adoption \cite{soni_Guide8,soni_Guide7}, partly due to a lack of focus on aesthetics and naturalness in sonic interaction design \cite{soni_Guide8} leading to poor user experience \cite{soni_Guide6}. Most ABF systems reviewed here provide feedback through simple audio manipulations (e.g. pitch, loudness, brightness, spatialization), which generate aesthetically simple feedback signals. These are known to cause auditory fatigue, annoyance and dissatisfaction, making them less likely to be accepted by users \cite{soni_Guide8,vickers2016soniMusic,soni_Guide6,soni_Guide7,vogt2009physiosonic}. Naturalness and clear causality in the iconic gesture-sound mapping of auditory displays have been found to contribute to their perceived usability \cite{soni_Guide11}, in line with the general aesthetics-usability correlation seen in human-computer interaction literature \cite{HCI_Aesth2}.
\subsection{Musical Biofeedback} The recent exploration of \emph{musical biofeedback (MBF)} \cite{music_Bf1,music_Bf12,soni_Guide13} has attempted to address the aesthetic issues of ABF and leverage the universal emotional and sociocultural appeal of music. MBF is a relatively complex signal due to its organization in time and frequency, possibly containing several coordinated instrumental and vocal elements that can provide variety to the feedback signal \cite{music_Bf12}. A general criticism levelled against such aesthetic approaches to sonification is that the interpretation of the underlying data is more difficult \cite{soni_Guide7,soni_Guide6}, and in the case of music entails the learning of a new `sonic grammar' \cite{musiSoniVickers}. It has, however, been argued that there is a cultural or aesthetic baseline in popular music systems, which is accessible to untrained listeners and allows them to appreciate music with minimal cognitive overhead in the absence of formal training \cite{musiSoniVickers}. For instance, listeners are able to recognize music genres within a fraction of a second \cite{mace_genre_2012}. The psychological and therapeutic benefits of music are well known \cite{sihvonen_music-based_2017}, and decades of research in the discipline of neurologic music therapy have established the direct therapeutic benefits of music across multiple dimensions in physical rehabilitation \cite{nmt_1}. A prescribed and widely prevalent MBF approach is to sonify desired movement behaviors as pleasant auditory states and undesired ones as unpleasant states, often simultaneously using musical rhythm to temporally organize motor timing \cite{music_Bf1}.
The design space for possible interactions is conceivably vast and as such, MBF systems to date have ranged widely in scope and complexity, manipulating either pre-recorded musical stimuli or real-time synthesized ones as follows: \paragraph{Pre-recorded Music} Some studies have performed simple manipulations of existing music waveforms, for example by adding noise \cite{music_Bf23}, filtering \cite{music_Bf22,vogt2009physiosonic} or adjusting audio quality \cite{music_Bf18,vogt2009physiosonic} to sonify physiological and biomechanical quantities. These interactions were found to be comprehensible by healthy and impaired populations, and capable of positively altering motor behavior while reducing perceived exertion \cite{music_Bf22}. Others sonified motor timing through music timing, such as the D-Jogger \cite{music_Bf9}, which synchronizes pre-existing music to detected gait patterns by time stretching algorithms, thus providing a sense of rhythmic agency to the user \cite{music_Bf19}. \paragraph{Synthesized Music} Other studies employed real-time synthesis approaches, through which it is easier to exert finer control over sonic parameters of music and more easily craft intimate and engaging musical interactions \cite{fabiani_InteractiveMusicSystems}. Sonification parameters used in these designs include musical pitch \cite{abf_Gait2,music_Bf27,music_Bf29}, tempo \cite{music_Bf12}, brightness \cite{music_Bf27}, track volumes \cite{music_Bf27}, chord arpeggio characteristics \cite{music_Bf13}, musical layer richness \cite{music_Bf14}, synthetic tone additions \cite{music_Bf14} and percussive sample triggering \cite{music_Bf25}. In most cases, the systems only underwent preliminary evaluation such as brief usability tests with convenience-sampled participants. But at the very least, the results indicate that these MBF interactions are feasible, perceptible and comprehensible, as well as potentially pleasurable experiences. 
\subsection{Appraisal of Earlier Work} Combining music with the portability, versatility and movement modification of ABF can enable powerful mediation of human behavior \cite{music_Bf19}, since music can motivate, monitor and modify human movement \cite{music_Bf1}, and is as effective as simple sine sonification while reducing auditory fatigue \cite{music_Bf12}. However, the musical stimuli generated are usually rigid and simplistic, either monophonic instruments \cite{abf_Gait2} or very basic ensembles \cite{music_Bf14,music_Bf28}. The synthesis of stimuli resembling professionally produced music is undoubtedly challenging, and bare-bones aesthetics \cite{music_Bf13,vickers2016soniMusic} lacking the consideration of user preferences \cite{soni_Guide8,music_Fam} can hamper user experience. Systems also tend to be designed for specific applications, making their designs hard to generalize to other scenarios. While some works mention details of individual feedback tailoring to patients \cite{vogt2009physiosonic,abf_Gait4,linnhoffGait}, MBF literature typically does not provide detailed system design specifications, and data mapping configurations seem to be empirically designed and rigid, difficult to alter in real-time or retroactively tune \cite{soniHandbook} as part of user-centered approaches \cite{music_Bf19}. The use of expensive proprietary/custom-built hardware and software \cite{abf_Bal5,strkBal_N3,costantini,abf_Bal8,bmbf_Bal6,vogt2009physiosonic} makes these works difficult for other researchers to replicate and upgrade. Moreover, the use of visual programming environments in many studies \cite{abf_Gait3,music_Bf25,music_Bf9,music_Bf28,music_Bf23}, while excellent for preliminary testing, is computationally less efficient \cite{soni_Guide13} and arguably harder to scale in complexity than low-level programming languages, although technical performance details are seldom reported in research. For example,
most gait ABF systems claim to be `real-time', while few report feedback loop delay values \cite{linnhoffGait}. Lastly, it is unclear whether most systems possess the recommended BMBF supplementary functions \cite{bmbf_Book}, such as visualization, standby modes and time-series logging. The aim of the present work is to address several of these shortcomings of existing MBF systems and to build a clinically applicable MBF framework for balance and gait training using exclusively open-source development tools and a user-centred methodology \cite{higginbottom_participatory_2015}. As identified from the literature review, the system should fulfil a number of requirements: it should 1) be computationally efficient, with adequate temporal performance and wireless sensing range; 2) have an architecture versatile enough to provide multiple types of user-adjustable training interactions within a single interface; and 3) offer interactions that are conceptually and practically applicable as a supplement to conventional stroke rehabilitation protocols. We developed a musical biofeedback system in collaboration with patients and clinical stakeholders. The main contribution lies in the flexibility of the biofeedback mapping architecture, the layered, adjustable and efficient music synthesis, as well as the system's ease of replication due to open-source development tools. \section{Design and Implementation} \label{sec:design} \begin{figure} [thb] \centering \includegraphics[width = 1\linewidth]{"Figures/ContextDiagram".PNG} \caption{High-level system schematic showing the organization of the hardware and software components of the framework, and where the user is situated among them.} \label{fig:contextDiag} \end{figure} Architecturally, we opted for a \emph{distributed} structure \cite{bmbf_Book} with wearable wireless IMUs and remote processing on a laptop.
Sensor interfacing, music generation and biofeedback configuration are controlled by a Windows application that produces a stereo audio signal, which is fed to the patient via headphones or loudspeakers as shown in Fig. \ref{fig:contextDiag}. The source code of the system is licensed under GNU GPL 3.0.\footnote{https://github.com/prithviKantanAAU/mbfFramework\textunderscore v4} The hardware sensing component consists of M5Stack Grey microcontrollers that are programmed in the Arduino IDE. IMU data is transmitted to the software application as OSC \emph{(Open Sound Control)} messages over WiFi-UDP. We wrote the software component of the system in C++ using the JUCE environment\footnote{JUCE Framework - https://juce.com/}, which we chose for its extensive and efficient set of libraries for timer callbacks, OSC, MIDI, graphical elements and file operations. For music synthesis and sonification, we implemented a FAUST\footnote{FAUST Programming Language - https://faust.grame.fr/} script, which was compiled to generate a JUCE-compatible DSP object in C++ and thus directly leverage the vast audio DSP functionality of FAUST. The interface layout is organized into three tabs for sensor interfacing, music control and biofeedback control (Supplementary Material 1). \paragraph{Sensing and Wireless Communication} Up to three M5Stack devices, securely mounted to the patient's lower back or ankles using a silicone housing and elastic straps, connect to a secure WiFi network hosted by the laptop via a bidirectional IP verification process. The sensors each transmit IMU data and battery status at 125 Hz to a predefined remote UDP port. The software constantly checks for new OSC messages received at each port to infer whether that sensor is online. The sensor tab in the software handles sensor assignment to body parts (trunk or either leg) along with bias calibration.
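The liveness inference described above (a sensor is considered online while fresh OSC packets keep arriving at its port) can be sketched as follows. This is an illustrative Python sketch rather than the system's actual C++ implementation; the class name and the 0.5 s grace period are assumptions, not values stated in the paper.

```python
import time

SENSOR_TIMEOUT_S = 0.5  # assumed grace period; the actual value is not given in the paper


class SensorMonitor:
    """Track per-sensor liveness from the arrival times of OSC packets."""

    def __init__(self, sensor_ids):
        # No packet seen yet for any sensor.
        self.last_seen = {sid: None for sid in sensor_ids}

    def on_packet(self, sensor_id, now=None):
        """Record the arrival of an OSC packet from `sensor_id`."""
        self.last_seen[sensor_id] = time.monotonic() if now is None else now

    def is_online(self, sensor_id, now=None):
        """A sensor is online if a packet arrived within the timeout."""
        t = self.last_seen.get(sensor_id)
        if t is None:
            return False
        now = time.monotonic() if now is None else now
        return (now - t) < SENSOR_TIMEOUT_S
```

At a 125 Hz transmission rate, even a short timeout spans dozens of expected packets, so a few dropped UDP datagrams do not falsely mark a sensor as offline.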
\begin{figure} [thb] \centering \includegraphics[width = 0.65\linewidth]{"Figures/CallbackStructure".PNG} \caption{High-level software schematic showing how the different functions (UI update, music sequencing, biofeedback computation and music generation) are organized to run in timed callbacks at different frequencies.} \label{fig:callbackStruct} \end{figure} \paragraph{Software Application Topology} The software can be seen as a multi-instrument music synthesizer (FAUST DSP object) that is `played' by the JUCE component; this `performance' comprises \emph{sequenced music} from song files and \emph{MBF} based on the sensors, which manipulates the sequenced music. The FAUST object has virtual controls to trigger the instruments, modify their properties, configure music mixing processors and apply MBF strategies. The JUCE component handles all this in appropriately timed callback functions as shown in Fig. \ref{fig:callbackStruct}. A single high-resolution timer orchestrates the music sequencing callback (at 1 kHz) and the MBF callback (at 100 Hz). The sensor transmission rate (125 Hz) exceeds the MBF callback frequency to compensate for UDP packet drops. User interface elements are updated independently at 30 Hz, while the real-time audio callback itself is handled by the FAUST object at a sampling rate of 48 kHz with a software buffer of 480 samples (adding 10 ms of software latency). The UI controls modify the states of sequencing and MBF variables using listener callbacks for thread safety. \subsection{Music Generation} The system generates an eight-track stereo instrumental ensemble containing both melodic and percussive elements in a 4/4 time signature. These fulfill musical `roles' corresponding to percussion, melody and harmony in a simplified pop music style. A music piece can be reproduced in various rhythmic grooves and instrument textures to cater to varied tastes, switchable in real-time in the interface.
Other aspects of the music can also be varied on the fly, such as tempo, number of instruments, track balance and mix processing parameters. The generation is MIDI-based with the option to load external song files, although inbuilt music is also provided for specific training settings. The MIDI files are encoded in a custom Type-1 schema for efficiency, and the generation involves two separate processes: sequencing and synthesis. \paragraph{Sequencing} Here, MIDI messages for all tracks are decoded into pitch and velocity information to map to the appropriate FAUST synthesizer controls. Song-specific information related to melody, bass and chord pitch is stored in loadable MIDI \emph{song files}. Instrument choices, rhythmic information and articulation for all tracks are encoded in four-bar \emph{style files} that are dynamically pre-populated at startup, making it easy to add new rhythms and styles to the software. MIDI information is stored as note matrices in program memory. During playback, the sequencing callback at 1 kHz (see middle branch of Fig. \ref{fig:callbackStruct}) increments the sequencer's elapsed MIDI ticks as per the configured tempo, checks MIDI timestamps in the note matrices for new events to be handled and counts them. The event types are identified (note on/off) and the event details (pitch/velocity) are preprocessed and mapped to the respective FAUST controls. The tempo slider controls playback rate by changing the tick increment per callback interval. Polyphonic tracks may have up to four voices (chords), and note frequencies are constrained to specific registers for pitched tracks to reduce sonic disparities among songs in different musical keys. Playback proceeds and the rhythmic pattern loops until all song events have been handled.
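The tick arithmetic implied by this description can be sketched as follows. This is an illustrative Python sketch rather than the system's C++ code, and the MIDI resolution of 960 PPQN is an assumption, since the paper does not state the tick resolution.

```python
PPQN = 960          # assumed ticks per quarter note (resolution not stated in the paper)
CALLBACK_HZ = 1000  # sequencing callback rate from the paper (1 kHz)


def ticks_per_callback(tempo_bpm):
    """MIDI tick increment applied at each sequencing callback."""
    ticks_per_second = PPQN * tempo_bpm / 60.0
    return ticks_per_second / CALLBACK_HZ


class Sequencer:
    """Accumulate fractional ticks and report newly crossed integer ticks.

    Events whose MIDI timestamps fall on the returned ticks would be
    dispatched to the synthesizer controls in the same callback.
    """

    def __init__(self):
        self.elapsed = 0.0  # elapsed MIDI ticks (fractional)

    def advance(self, tempo_bpm):
        before = int(self.elapsed)
        self.elapsed += ticks_per_callback(tempo_bpm)
        # Integer ticks crossed during this callback interval.
        return range(before + 1, int(self.elapsed) + 1)
```

Changing the tempo simply changes the per-callback increment, which is exactly how the tempo slider controls playback rate in the description above.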
\paragraph{Synthesis} Audio generation uses a hybrid approach where percussion instruments are sample-based, while melody instruments are synthesized using custom-built algorithms and/or FAUST library functions where available. Each instrument role can be reproduced in up to three sonic \emph{variants}, and appropriate combinations of variants are used for each musical style preset. For example, the `hi-hat' percussive role can be played using a regular hi-hat, ride cymbal or marimba sample. Synthesis is mainly custom FM or subtractive \cite{thesis}, with basic physical models such as Karplus-Strong and formant-filtered vocal simulations. Pitch parameters influence note frequencies of pitched tracks, while velocity influences level and note articulation properties. Tracks have their own channel compressors and 4-band fully parametric equalizers with pre-defined but modifiable settings for each variant. Temporal parameters like envelope time-constants, reverberation and echo time automatically adapt to the tempo. The tracks are summed and undergo master processing (see Fig. \ref{fig:callbackStruct}) comprising a master equalizer and limiter, akin to standard mixing workflows. \subsection{Movement-Music Interactions} \label{fig:zoneVisualizer} Various MBF interactions are possible for gait and balance training, each employing suitable feedback strategies to convey performance information. The strategies were designed and tested for perceptual salience, with most of them internally employing divergent (one-to-many) mappings between data and audio parameters for perceptual magnification \cite{soniHandbook}. The feedback is 1D or 2D and usually error-based \cite{strkRehab_G14}, where a measured movement parameter is compared to a \emph{target value range}, and feedback is provided based on patient compliance (see Supplementary Material 5 for video links).
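As an illustration of such a divergent mapping, the sketch below drives three audio parameters from a single feedback variable. The parameter names and curves are invented for illustration and are not the system's actual presets; the idea shown is only that one data dimension fans out to several audio dimensions with different response curves, which perceptually magnifies small changes in the data.

```python
def divergent_map(x):
    """Map one feedback variable x in [0, 1] onto several audio parameters.

    Each parameter follows its own curve, so a small change in x is
    reflected in several simultaneous, differently shaped sonic changes.
    All names and curves here are hypothetical.
    """
    x = min(max(x, 0.0), 1.0)  # clamp to the valid feedback range
    return {
        "distortion_amount": x ** 2,               # slow onset, strong at the extreme
        "music_level_db": -12.0 * x,               # linear attenuation of the music
        "lowpass_cutoff_hz": 8000.0 - 7000.0 * x,  # progressively darker sound
    }
```

In the real system the outputs would be written to the corresponding virtual controls of the FAUST DSP object at the 100 Hz MBF callback rate.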
\paragraph{Static Balance} The principle is to reward the maintenance of a target trunk orientation (upright or otherwise) with pleasant music, and provide negative MBF proportional to deviations from this target. Angular trunk tilt from the vertical is projected onto the plane formed by the \textit{mediolateral (ML)} and \textit{anteroposterior (AP)} axes, and allocated to one of six discrete feedback zones such that MBF intensity increases with distance from the configured target trunk orientation. The zones are concentric circular or elliptical ring shapes around the target as described in \cite{costantini}, with two rectangular zones to the extreme left and right. We made it possible to offset the target to a non-upright position and change its size to accommodate differently able patients. Strategies such as music dissonance, disturbance tones and ambulance siren \cite{thesis} may be used as negative feedback, the last of which provides helpful L-R directional cues \cite{strkBal_N3} through stereo panning. \paragraph{Dynamic Balance} Trunk training is common in stroke rehabilitation due to its strong contribution to balance and mobility recovery, and typically involves core stability exercises based on trunk flexion, extension, weight shift and reaching movements \cite{trunkTraining}. We augmented these exercises with the use of MBF as follows. For \textbf{Reaching}, 1D ML or AP trunk angle is sonified as melodic scale degree, allowing the patient to `play' melodies on an otherwise monotonous inbuilt music piece by tilting their trunk forward or sideways as part of a reaching motion. Melodic scale and key can be changed in real-time to mitigate interaction monotony. For \textbf{Trunk Control} (flexion and extension), we extended the static balance interaction by modulating the 2D target to follow a horizontal plane trajectory that the patient must match, in a configurable linear, diagonal, circular, square or rhombic shape.
Frequency and phase of the trajectory progress are music-synced so that rhythmic cues thereof can assist movement planning, and the frequency can be adjusted to a sub-multiple of the music tempo to suit the patient. Feedback can be discrete error-based (zones as in static balance), 2D task-based (ML and AP target position sonified as separate MBF strategies independent of the patient) or continuous anticipated distance error-based as in \cite{parseihianDyn}. In the last, a sum of sigmoid functions \cite{abf_Bal5} is used to map directional feedback \cite{strkBal_N3}. For rapid comprehension \cite{parseihianDyn}, an exemplar MBF configuration uses L-R spatialization for ML axis feedback and melody pitch skewing for AP. \paragraph{STS} As reviewed in \cite{balasubramanian2015analysis}, movements that proceed continuously, without interruptions, characterize well-trained and healthy motor behavior, and serve as a marker of post-stroke motor recovery. This interaction rewards smooth sit-stand transitions with pleasant music, providing negative MBF for detected movement intermittencies by introducing salient disturbances in the music related to \textbf{jerkiness}. Mean-squared jerk captures instantaneous intermittencies with sufficient speed and sensitivity for meaningful smoothness-based feedback, as no segmentation or windowing is required. \textbf{Trunk Flexion-based Cues} based on \cite{bmbf_Bal6} form the second interaction, where a sit/stand action cue is provided based on \textit{trunk flexion angle}. We augmented the principle musically here: a neutral sonic artifact (e.g. bell or wah-wah effect \cite{thesis}) is triggered within the music when separately configurable sit/stand angle thresholds are exceeded. \paragraph{Gait} The purpose of these interactions is to augment rhythmic auditory stimulation training \cite{nmt_1,sihvonen_music-based_2017} with synchronization-related feedback.
The two interactions focus on step duration and phase respectively \cite{thesis}. In the former, the system detects every step taken by the patient and compares its duration to the musical beat interval, providing immediate negative feedback proportional to how much longer or shorter the step was than the beat interval (with a configurable dead-zone). The phase-based interaction is inspired by \cite{music_Bf25,music_Bf28}, where percussive musical events are triggered by footfalls, and the goal is to synchronize these events with the remaining ensemble, to encourage movement-music synchronization. Here, the bass drum and snare drum tracks are muted from the sequenced music and respectively triggered by the left and right foot. By walking in time, the patient is thus rewarded with a well-synchronized ensemble. \subsection{Feedback Calculation and Mapping} \begin{figure} [thb] \centering \includegraphics[width = 0.8\linewidth]{"Figures/1DMapping".PNG} \caption{The 1-D MBF mapping function for a specific movement parameter target range to generate a feedback variable between 0 and 1. MBF intensity increases differently outside this range depending on the order of the gamma function.} \label{fig:1DBMBF} \end{figure} The above interactions require a dedicated functional block to transform IMU signals into meaningful feedback signals (rightmost branch of Fig. \ref{fig:callbackStruct}). The transformation must be flexible enough to be tailored to various patient types. Our framework allows any 1D mapping combination between computable movement parameters \cite{thesis} and the available MBF strategies, with extensive control of mapping parameters (interactive data selection and mapping \cite{soniHandbook}). The 100~Hz mapping frequency yields a perceptually smooth result. The received raw IMU data first undergoes signal conditioning \cite{soniHandbook}: median filtering and smoothing using 6th-order Butterworth lowpass filters.
Filter parameters (median filter length in samples and cutoff frequency in Hz) are user-adjustable with suitable inbuilt defaults for each measured parameter. One of several exercise modes may be chosen (e.g. static balance, gait, etc.), each of which has its own set of movement parameters, MBF strategies and specific user controls. Once computed, the movement parameter is transformed into a \emph{feedback variable}: the parameter value is compared to the target range and the compliance error is normalized to the range 0--1 \cite{soni_Guide8}, followed by a gamma (exponential) function, which modifies the mapping function shape \cite{soni_Guide13} (see Fig. \ref{fig:1DBMBF}). As shown in Figure~\ref{fig:1DBMBF}, a feedback variable value of 1 indicates maximum MBF intensity and vice versa. The interface allows real-time mapping function control \cite{soni_Guide13}, feedback variable quantization and polarity inversion \cite{soniHandbook}. The result is mapped to the chosen MBF control of the FAUST object, which accordingly manipulates the music output to provide biofeedback. For directional MBF strategies, a feedback variable value of 0.5 corresponds to no feedback, while 0 and 1 correspond to the directional extremes of feedback intensity. \subsection{Supplementary Functionality} The system allows real-time data visualization and session logging. We implemented generalized graphical visualizers to monitor measured movement parameters. A progress bar shows song completion. Movement repetition data is also captured for dynamic reaching, STS and gait interactions. A time-series logging functionality is provided to capture training session progress in granular detail (100~Hz resolution) for further analysis. The system configuration state for a session can also be saved for future recall. Lastly, a standby mode option is provided to toggle biofeedback on and off to facilitate effect comparisons in future studies.
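The feedback-variable computation described in the mapping subsection above can be summarized in a short sketch. This is illustrative Python rather than the system's C++ implementation; the argument names and the deviation normalization constant are assumptions introduced for the example.

```python
def feedback_variable(value, target_lo, target_hi, max_dev, gamma=1.0):
    """Map a movement parameter onto a feedback variable in [0, 1].

    Values inside the target range yield 0 (full compliance, no negative
    feedback); outside it, the compliance error is normalized to [0, 1]
    by `max_dev` (an assumed scaling constant) and then shaped by a
    gamma (exponential) function that bends the mapping curve.
    """
    if target_lo <= value <= target_hi:
        return 0.0
    # Deviation beyond the nearer edge of the target range.
    dev = target_lo - value if value < target_lo else value - target_hi
    norm = min(dev / max_dev, 1.0)  # normalized compliance error
    return norm ** gamma
```

A gamma above 1 keeps the feedback gentle near the target range and steep far from it, while a gamma below 1 makes even small deviations salient, which is one way such a shape control can be tuned per patient.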
\section{Evaluation} \label{sec:evaluation} The system and interactions were evaluated through technical tests (Supplementary Material 2) and expert interviews (see Supplementary Materials 3 and 4 for specifics). \subsection{Technical Testing} All tests were carried out on a Dell Inspiron 15 7000 Windows laptop with a 1.8 GHz i7 processor and 16 GB RAM. \emph{Biofeedback loop delay} was measured by comparing onset timestamps between system input (movement) and output (sound) for trunk angle, foot strike and jerk. Means and standard deviations (in parentheses) were \textbf{90 (5) ms} for trunk angle and \textbf{93 (48) ms} for foot strike/jerk. \emph{Computational performance} was measured as \% processor time and logged using performance benchmarking software. We found the computational worst-case scenario to be the \textit{slow Rock style @ 60 BPM, dynamic balance interaction with session logging on}. The measured \% Processor Time was \textbf{28.91 (4.09) (Peak: 40.80)}. Peak CPU usage (Windows Task Manager) was \textbf{11.1 \%} along with 157.0 MB of memory. \emph{Effective indoor sensor range} was assessed in terms of the percentage of MBF callbacks that detected new OSC messages in a short time-frame under different conditions. New OSC data was received in up to 96.35 \% of MBF callbacks for 3 m direct line of sight, 96.10 \% for 7 m direct line of sight, and 82.50 \% for 9 m around a wall corner. \subsection{Expert Interviews} In addition to involving patients and a lead physiotherapist in the design iterations, we assessed the interactions from the perspective of clinicians and patients via structured expert interviews conducted online with five physiotherapists and two music therapists after the third development cycle. Details of interview materials, questionnaires, data analysis and results are in Supplementary Materials 3 and 4, and a short summary is given here.
The participants stated that most of the interactions were applicable to a range of acute and sub-acute patient types depending on the complexity of the training activity. They also expressed that the available software adjustments would be sufficient to tailor the training to different individuals. A recurring comment about the gait interactions was that they are more applicable to cerebellar or lower brainstem strokes than to the more common cortical strokes. The participants estimated that the sensing mechanism would capture relevant movement features in all training activities, and that the auditory feedback would in some cases provide them with extra information not as readily available visually (e.g. jerkiness, gait rhythm). The experts also felt that the interactions could both be integrated into existing clinical protocols and used to enhance patient autonomy when not undergoing training. For example, several suggested that the static balance interactions could be used to provide continuous feedback while performing other tasks. Numerous suggestions to adapt the interactions for other goal-oriented training activities (e.g. upper limb) were also made. While the participants found the majority of musical feedback strategies easily perceptible and comprehensible, they commented on the unpleasant nature of some of them, particularly those involving salient synthetic disturbances to the music. In contrast, strategies that operated on the level of musical structure (e.g. foot strike drum triggering, music stop outside target range, music pitch) were seen more positively. Several also felt that the music sounded quite computerized, although individual patient reactions would vary. In terms of practicality, most felt that the sensor setup was simple, although one stated that there could be hygiene issues with using Velcro straps.
Several brought up patient safety concerns during training, stressing that the therapist needed to be able to operate the system in a hands-free manner. \section{Discussion and Conclusions} \label{sec:discussion} We presented an MBF framework for post-stroke movement rehabilitation, co-developed with stakeholders, that addresses several shortcomings of existing systems by integrating theories of BMBF system design \cite{bmbf_Book}, auditory guidance \cite{soni_Guide8}, music therapy \cite{nmt_1}, musical biofeedback \cite{music_Bf1} and interactive sonification \cite{soniHandbook,soni_Guide13}. Our framework enables multiple interactions catering to conventional protocols for balance and gait training within a single hardware-software architecture with granular real-time system control. Though tests of the final system with real patients are pending due to COVID-19 restrictions, our evaluation found the interactions to be technically sufficient, applicable to a wide demographic and relevant to therapy protocols. The fact that our system was built using free open-source tools and easily available, cheap hardware also makes it accessible to the research community for replication and improvement. The ESP32-OSC-JUCE-FAUST approach yielded a practically feasible prototype. We found the sensing hardware to have sufficient range, and the software performed very efficiently even in its computational worst-case scenario, with overall system loop delays well below the human auditory reaction time \cite{bmbf_Book,linnhoffGait}. This approach makes high-level FAUST functionality available in a low-level programming environment, combining the rapid prototyping advantage of the former with the robustness and efficiency of the latter. The available computational and memory headroom allows future versions to be adapted for mobile platforms and scaled in complexity when real-life tests expose necessary upgrades.
All interactions and most MBF strategies were deemed by the participating experts to be meaningful and easily comprehensible for patients. This supports our argument for the universal comprehensibility of musical meaning without formal training \cite{musiSoniVickers}. The experts pointed out the excessive unpleasantness of some negative MBF strategies, although real-life tests with patients must further investigate this trade-off between feedback salience and pleasantness to inform an optimal MBF design philosophy. As reviewed in \cite{linnhoffGait}, positive feedback may generally be more conducive to long-term motor learning than negative feedback, as it promotes motivation and invokes dopamine prediction error mechanisms as opposed to simply attentive processing of movement errors. Future studies will therefore focus our generic mapping framework on providing positive feedback. Overall, the technical framework can also be used with patient groups other than stroke by merely adding movement parameters and MBF strategies. The present functionality does make an effort to address the aesthetics problem \cite{soni_Guide8,soni_Guide7} and accommodate user preferences \cite{music_Fam}, but the music generation still has significant upgrade potential. From the comments made by the clinicians (not primarily music experts), there seem to be clear aesthetic limitations. The synthesis methods are relatively simple, and the sequencing process is deterministic and predictable with limited temporal variability. Even taking into account subjectivity in music preferences, the system generally produces a computerized-sounding output. Future versions of the system could integrate computational rules \cite{fabiani_InteractiveMusicSystems} for expressive music performance to yield a more vibrant stimulus.
Although the expert interviews did provide insight, they cannot replace real usability tests and clinical effect studies to gauge the therapeutic benefits of interacting with the system, which is the next step. Future work could also include comparisons of our system with systems built using other tools and architectures. Nevertheless, the framework we have proposed and built has the potential to facilitate the future evaluation of various MBF paradigms in stroke rehabilitation. \section{Acknowledgments} We thank Helle Rovsing Møller Jørgensen and the participating patients and therapists. Author PK was primarily responsible for system development and manuscript writing. Authors SD and EGS supervised the MSc project \cite{thesis} and assisted in writing. All authors approved the final manuscript. \appendices \IEEEpeerreviewmaketitle \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} Thanks to the practical realization of hardware aberration correction (AC), the point resolution of high-resolution transmission electron microscopy (HRTEM) is no longer limited by the positive spherical aberration ($C_s$)~\cite{haider1998,scherzer1936,uhlemann1998}, and materials can now be routinely imaged at atomic resolution with state-of-the-art instruments even at low acceleration voltages~\cite{meyer2008,krivanek2010}. Once the strong $C_s$ contribution is removed, other aberrations such as three-fold astigmatism ($A_2$) and coma ($B_2$) become measurable and can be corrected~\cite{uhlemann1998} down to a limit imposed by the measurement accuracy and the adjustment precision of the correcting elements. With advanced techniques, such as electron holography~\cite{gabor1948,lichte1986electron,geiger2008,Linck201377} and exit wave reconstruction from a focal series~\cite{saxton1994focus,op1996wave,kirkland1995super,coene1996maximum,kirkland1997multiple}, direct information on the specimen, not obscured by the inevitable residual aberrations, can be acquired. However, these techniques require special arrangements/procedures during the acquisition of the images, and thus cannot be applied to a single HRTEM image {\em post-situ}. The wave aberrations distort the information transferred from the specimen to the imaging device, such as a CCD camera, making interpretation of the acquired images difficult if not sometimes impossible~\cite{krivanek1995}. In a corrected microscope this problem is to a large extent overcome, as is evident from the nearly aberration-free images of today. In the context of this article, the word {\em nearly} should be emphasized, however. The aberration coefficients can be tuned down to zero only with some accuracy. Thus, in practice, residual aberrations are always present even in corrected instruments~\cite{meyer2011,biskupek2012}.
Moreover, the instruments tend to drift away from the corrected state over time, and one can expect stronger residual aberrations as time passes after tuning the corrector~\cite{Barthel20136}. Residual aberrations can also be of benefit, as in the case of $C_s$: non-zero $C_s$ is required for optimal phase contrast transfer~\cite{uhlemann1998}. \begin{figure} \includegraphics[width=\textwidth]{./fig1.jpg} \caption{{\bf Experimental AC-HRTEM images of graphene and MoSe$_2$ suffering from residual aberrations.} {\bf a}: A defect structure in single-layer graphene, where the graphene lattice does not appear hexagonal due to residual aberrations. This effect can be attributed to residual three-fold astigmatism ($A_2$), which induces a contrast difference between the sublattices of graphene, causing the lattice image to deviate from the hexagonal symmetric pattern expected for graphene. {\bf b}: Mirror-twin-boundaries in single-layer MoSe$_2$. Due to residual aberrations, the lattice has a completely different appearance on the opposite sides of the boundaries. Compare, for example, the two areas marked by the arrows. Also this effect can be explained by alignment of the MoSe$_2$ lattice with residual $A_2$ (panel {\bf c}) and by the swapping of the Mo and dual Se sites in the three-fold symmetric lattice when crossing the boundary (see panel {\bf d} for the structure, where the dark atoms represent Mo and light atoms Se, with two Se atoms always residing on top of each other). The reversal of the lattice results in the accentuation of different sublattices in these regions. The scale bars are 2~nm.} \label{residualA2} \end{figure} On the other hand, residual aberrations such as $A_2$ and $B_2$, which can be present on the order of 100~nm after correction, can lead to undesired artefacts in the images, and the effect of these can be anything from a minor nuisance to complete misinterpretation of the image.
Examples of such effects are presented in Figure~\ref{residualA2}. In the first panel, an experimental aberration-corrected HRTEM (AC-HRTEM) image of a point defect in single-layer graphene can be seen. The presence of residual aberrations in the image is clear from the strongly non-hexagonal appearance of the graphene lattice. This effect can be attributed to the three-fold astigmatism $A_2$, which has three axes of symmetry in the image plane (see Figure~\ref{residualA2}c for a visualization): Depending on the relative orientation of $A_2$ to the graphene lattice, one of the two sublattices of graphene can produce a higher contrast than the other, resulting in the observed non-hexagonal pattern. The atomic structure of the defect can still be deduced from the experimental image, but the image is far from optimal. In the second panel, an experimental AC-HRTEM image of so-called mirror-twin-boundaries in a single-layer MoSe$_2$ sample can be seen. In an aberration-free image the Se sites produce higher contrast as compared to the Mo sites, as there are always two Se (Z=34) atoms at the same location as opposed to single Mo (Z=42) atoms. However, in the recorded image the lattice has a completely different appearance at the different sides of the linear boundaries (compare, for example, the areas denoted by the two arrows). On one side the second sublattice has a much higher contrast, but on the other side the sublattices have almost equal contrast, resulting in the hexagonal appearance of the lattice. This effect can again be attributed to residual $A_2$, which is oriented matching the sublattices of the crystal with trigonal symmetry (see Figure~\ref{residualA2} panel c). On one side of the boundary the Se sites are further enhanced, increasing the contrast difference of the Mo and Se sites. 
However, when crossing the boundary, the Mo and Se sites are interchanged (see Figure~\ref{residualA2} panel d), and now the $A_2$ has the effect of reducing the contrast difference, resulting in nearly equal contrast for the Mo and Se sites. Evaluation of the contrast could lead one to deduce that there are two different materials on the opposite sides of the boundary, which in fact is not the case. We would like to emphasize that these artefacts are not introduced by exceptionally strong residual aberrations. Under our experimental conditions, the $A_2$ coefficient is on the order of 100~nm, which is a typical state of correction, after tuning, in our microscope (FEI Titan 80-300, operated at 80~kV). As mentioned above, numerous methods for acquiring truly aberration-free images in TEM have been proposed and implemented in practice. The common denominator of these methods is that they aim at recovering the object wave function, which can then be manipulated much more freely and interpreted more directly than a simple intensity image~\cite{geiger2008,Linck201377,kirkland1995super,coene1996maximum,kirkland1997multiple,Thust1996249}. In electron holography~\cite{gabor1948,lichte1986electron}, an interference pattern between the object wave and a reference wave is generated with the help of a bi-prism, and direct information on the object wave can be recovered numerically from this pattern. The $C_s$ corrector improves the phase-detection limit of electron holography significantly~\cite{geiger2008}. Another group of methods is in-line holography, {\em i.e.}, exit wave reconstruction~\cite{saxton1994focus,op1996wave,kirkland1995super,coene1996maximum,kirkland1997multiple,saxton1988accurate}, where typically a series of images at equally spaced foci is acquired and subsequently the object wave is deduced iteratively. After the 'raw' electron wave is recovered, the influence of any residual aberrations can be corrected~\cite{Thust1996249,geiger2008,Linck201377}.
A common drawback of these methods, however, is that they all require special arrangements and/or procedures during acquisition of the images, and hence cannot be applied to a single HRTEM image during post-processing. Such a situation can be quite frustrating if the microscopist has managed to capture an elusive target during imaging, but the resulting image is of poor quality. Here, we present a method for numerically removing the effects of a specific group of geometric aberrations (the anti-symmetric ones, as discussed later) {\em after} acquisition in single HRTEM images of weakly scattering objects. The requirement that the investigated object be a weak scatterer is a strong one, and precludes a large number of typically studied materials from the applicability range of the method. However, one important and intensively studied group of materials, namely the recently discovered class of 2D materials with graphene as the prime example, does approximately satisfy this condition at typically used acceleration voltages such as 80~kV~\cite{Lee201239}, making the method potentially useful in a large number of studies. In analogy to the phase reconstruction methods (focal series and off-axis holography), where residual aberrations are removed from the wave function after the reconstruction, we remove the contribution of the residual anti-symmetric aberrations using the fast Fourier transform (FFT); but instead of the full wave function, we work on a single image, which is possible in the case of a weakly scattering object, that is, when the linear imaging theory is valid~\cite{tembook}. Because the weakly scattering objects we investigate are 2D-objects like graphene, the tilt angles are usually small and the effect of object tilt will be neglected to first approximation. We want to make clear that the effect of the symmetric aberrations, such as defocus or $C_s$, cannot be remedied by our method.
This is clear already from the fact that the symmetric aberrations lead to loss of information at certain frequencies (the zero crossing points of the contrast transfer function), which of course cannot be recovered. We find our method to be particularly useful for removing the effects of residual aberrations in distorted AC-HRTEM images of weakly scattering 2D-materials, where information is available on how the image {\em should} look, as demonstrated with the examples of graphene and MoSe$_2$. \section{Methods} First, we present the theoretical background and justification of our method. We follow closely the notation of Ref.~\cite{ishizuka2013phase}. Assuming coherent illumination and excluding the damping envelopes from the treatment, the wave function $\Psi_z({\bf x})$ at the image plane at a distance $z$ from the object along the optical axis can be written as \begin{equation} \Psi_z({\bf x}) = \Psi_0({\bf x}) \otimes p_z({\bf x}) = (1+\phi({\bf x}))\otimes p_z({\bf x}), \end{equation} where $\Psi_0({\bf x})$ is the wave at the exit surface, $\phi({\bf x})$ is the scattered wave, $p_z({\bf x})$ is the lens transfer function and $\otimes$ denotes convolution. The image intensity $i_z({\bf x})$ is the square modulus of the wave at the detector plane, and its Fourier transform $I_z({\bf u})$ can be written as \begin{equation} \label{eq1} \begin{split} I_z({\bf u}) & \equiv \mathcal{F}\{\Psi_z\cdot\Psi^*_z\}\\ & = \mathcal{F}\{ 1 + \phi\otimes p_z + \phi^*\otimes p^*_z + (\phi\otimes p_z)(\phi^*\otimes p^*_z)\}. \end{split} \end{equation} In the case of a weakly scattering object (that is, when the weak phase object approximation is valid), an approximation can be made here~\cite{tembook}. That is, the last term on the second line of the equation can be deemed small, which is in effect a transition from the general non-linear imaging theory to the special case of linear imaging theory.
Fortunately, 2D-materials such as graphene fulfil this condition at typical acceleration voltages such as 80~kV~\cite{Lee201239} and, as demonstrated by simulations and experiments below, the method can be applied to images of such materials. Equation~\ref{eq1} now becomes \begin{equation} I_z({\bf u}) \approx \delta({\bf u}) + \Phi({\bf u})\cdot e^{-i\chi({\bf u})} + \Phi^*(-{\bf u})\cdot e^{i\chi(-{\bf u})}, \end{equation} where $\Phi({\bf u})$ denotes the Fourier transform of $\phi({\bf x})$. The Fourier transform of $p_z$ is written in the explicit form $e^{-i\chi({\bf u})}$. The information about the microscope aberrations is encoded in the wave aberration function $\chi({\bf u})$. Now, $\chi({\bf u})$ can be represented as a power series where each term represents a specific aberration. The terms of the series can be divided into symmetric $\chi_s({\bf u}) = \chi_s(-{\bf u})$ and anti-symmetric $\chi_{as}({\bf u}) = -\chi_{as}(-{\bf u})$ parts. Notably, the defocus and $C_s$ belong to the symmetric group and $A_2$ and $B_2$ to the anti-symmetric group. Thus, $I_z({\bf u})$ can be rewritten as \begin{equation} I_z({\bf u}) = \delta({\bf u}) + \Phi({\bf u})\cdot e^{-i\chi_s({\bf u})}e^{-i\chi_{as}({\bf u})} + \Phi^*(-{\bf u})\cdot e^{i\chi_s({\bf u})}e^{-i\chi_{as}({\bf u})}. \end{equation} From this equation it is evident that the effect of all the {\em anti-symmetric} aberrations can be removed from the image by simply multiplying $I_z({\bf u})$ by $e^{i\chi_{as}({\bf u})}$ and calculating the inverse Fourier transform. That is, a corrected image $i_c({\bf x})$ can be generated by \begin{equation} i_{c}({\bf x}) = |\mathcal{F}^{-1}\{ e^{i\chi_{as}({\bf u})}\cdot \mathcal{F}\{i_z({\bf x})\}\}|. \label{corrector} \end{equation} The problem of correcting the residual aberrations is two-fold, however.
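In practice, equation~\ref{corrector} amounts to a few lines of array code. The following is a minimal numerical sketch, assuming three-fold astigmatism $A_2$ as the only anti-symmetric aberration and adopting one common convention for the aberration-phase prefactor; the convention should be checked against that of the corrector software before use.

```python
# Numerical sketch of equation (\ref{corrector}): multiply the image
# spectrum by exp(+i*chi_as) and transform back. Only the A2 term of
# chi_as is included here; the 2*pi/(3*lambda) prefactor follows one
# common convention and is an assumption, not a universal definition.
import numpy as np

def correct_antisymmetric(image, a2, phi0, wavelength, pixel_size):
    """Remove a three-fold astigmatism (A2) phase from a square intensity image.

    a2 and phi0 are the A2 amplitude (m) and azimuthal orientation (rad);
    wavelength and pixel_size are in meters.
    """
    n = image.shape[0]                      # assumes a square image
    u = np.fft.fftfreq(n, d=pixel_size)     # spatial frequencies (1/m)
    ux, uy = np.meshgrid(u, u, indexing="ij")
    umag, phi = np.hypot(ux, uy), np.arctan2(uy, ux)
    # Anti-symmetric aberration phase: u -> -u shifts phi by pi, which
    # flips the sign of cos(3*(phi - phi0)), so chi_as(-u) = -chi_as(u).
    chi_as = (2 * np.pi / (3 * wavelength)) * a2 \
             * (wavelength * umag) ** 3 * np.cos(3 * (phi - phi0))
    return np.abs(np.fft.ifft2(np.exp(1j * chi_as) * np.fft.fft2(image)))
```

Under this convention, with 80~kV electrons ($\lambda \approx 4.2$~pm), the 0.03~\AA/pixel sampling quoted in the Methods and $A_2 = 100$~nm, the phase shift reaches on the order of $10^4$~rad at the highest spatial frequencies.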
So far we have shown that, in principle, the anti-symmetric aberration contribution in an HRTEM image of a weakly scattering object can be freely manipulated in post-processing. However, in order to do actual correction, the aberration coefficients and orientations need to be determined. This is a challenge shared by all methods aiming to correct aberrations in TEM, and of course one is limited by the accuracy at which the aberrations can be measured. Information on the residual aberrations is offered by the corrector based on a series of tilted-beam images~\cite{uhlemann1998,barthel2007}. Other methods for measuring the residual aberrations, {\em e.g.}, from a focus/tilt series~\cite{meyer2004a,meyer2004b} or even a single HRTEM image~\cite{stenkamp1998a}, have been presented in the literature. The aberrations could, in principle, be removed with the corrector once determined, but, {\em e.g.}, due to drift of the corrector state during the search for an area of interest and image acquisition~\cite{Barthel20136}, and due to the finite precision of the corrector adjustment, residual aberrations in the final images are typically inevitable. Post-acquisition correction by the hardware corrector is of course not possible, and one needs to find new strategies for fine-tuning the existing images if necessary. One alternative is to look at a feature of an image ({\em e.g.}, the pristine graphene lattice) and find the coefficients which result in the expected appearance for an aberration-free image by trial and error. Great care should be taken with this approach, however, as the problem can be underdetermined. For example, in our example case of three-fold $A_2$ astigmatism and MoSe$_2$ (see, {\em e.g.}, the illustration in Figure~\ref{residualA2} panel c), the same sublattice imbalance can be introduced by different combinations of amplitude and relative orientation of the $A_2$ and the MoSe$_2$ lattice.
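A trial-and-error search of this kind can be sketched as a brute-force scan over candidate amplitude/orientation pairs. This is an illustrative sketch, not the procedure used here: the `score` callable is a placeholder assumption that would, for instance, apply the candidate correction to the image and quantify the remaining sublattice asymmetry.

```python
# Illustrative sketch of a trial-and-error coefficient search. The
# `score(image, a2, phi0)` callable is an assumed placeholder returning a
# non-negative asymmetry measure (0 = visible effect fully corrected).
import itertools

def search_a2(image, score, amplitudes, orientations):
    """Return the (amplitude, orientation) pair minimizing the score."""
    return min(itertools.product(amplitudes, orientations),
               key=lambda pair: score(image, pair[0], pair[1]))
```

With a score that vanishes at the true coefficients, the scan recovers them up to the grid spacing; ties are broken toward the first candidate tried.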
A conservative approach in such a situation is to find the orientation where the smallest possible amplitude corrects the visible effect. If the actual orientation of the real residual aberration were different from this, some residual aberration would remain (the new residual aberration is the vector sum of the real aberration and the correcting aberration), but the situation would still be improved and the aberration would not be 'over-corrected'. The situation is further complicated if multiple dominant residual aberrations are present. However, often the third-order anti-symmetric aberrations, that is, $B_2$ and $A_2$, are dominant (assuming defocus and first-order astigmatism are properly tuned). In the special case of 2D-materials, which tend to have a three-fold symmetry, the three-fold symmetric $A_2$ has a distinctive effect on the appearance of the lattice, and thus can be distinguished from the effects of $B_2$. Thus, even if the aberration post-correction is not ideal, a significantly better result can be achieved in suitable situations, as shown below by experiments and simulations. All the experimental images presented here were acquired using an FEI Titan 80-300 microscope equipped with an image-side hexapole aberration corrector~\cite{haider1998}. The microscope was operated at 80~kV and a reduced extraction voltage of 2~kV, in order to reduce the energy spread of the electron beam. The $C_s$ was corrected to $<$20~$\mu$m and the $A_2$ and $B_2$ were corrected to the order of 100~nm (unless stated otherwise). The graphene samples were produced by the chemical vapor deposition method (acquired commercially from Graphenea S. A.) and transferred onto Quantifoil TEM-grids by the procedure described in Ref.~\cite{algara2014dry}. The defect in Figure~\ref{residualA2}a was produced by carbon deposition, as will be described in detail in a separate paper~\cite{lehtinen2014implantation}.
The MoSe$_2$ sample was produced by molecular beam epitaxy and transferred onto Pelco holey silicon nitride TEM grids (a detailed study on this sample will also be presented in a separate article~\cite{lehtinen2014microstructure}). The image simulations were conducted using the QSTEM software package~\cite{qstem}, using $C_s = $~20~$\mu$m and a focus spread of 6~nm. The simulated structure consisted of 149 Mo and 288 Se atoms, and the simulation cell size was 6~nm$\times$6~nm with a sampling of 0.03~\AA/pixel (the panels in Figure~\ref{simulated_correction} are cropped from the top right). \section{Results} To verify our approach in the case of the low-order $A_2$ and $B_2$, we first conducted image simulations, where one has precise control over the aberrations influencing the image. In Figure~\ref{simulated_correction}a, a MoSe$_2$ target with an embedded triangular mirror-domain is simulated with $A_2 = B_2 = 0$. The domain-boundaries are visible, and the lattice inside and outside looks identical except for the reversed order of the darker and brighter appearing sublattices. In order to emulate the effect observed in the experimental image of Figure~\ref{residualA2}, an HRTEM image was simulated with $A_2 = B_2 = 100$~nm, and 30$^\circ$ as the azimuthal orientation of both of the aberrations (Figure~\ref{simulated_correction}b). With these parameters the lattice appears completely different inside and outside the mirror-domain. This image was then processed according to equation~\ref{corrector} using the known $A_2$ and $B_2$ coefficients and orientations, and the resulting image is shown in panel~c of Figure~\ref{simulated_correction}. The aberration-free appearance of the image is restored by the correction procedure, as predicted.
\begin{figure} \includegraphics[width=\textwidth]{./fig2.jpg} \caption{{\bf Simulated verification of the correction procedure.} {\bf a}: A simulated reference HRTEM image of a MoSe$_2$ flake with an embedded mirror domain and zero anti-symmetric aberrations. The same structure simulated with $A_2 = B_2 = 100$~nm/30$^\circ$ ({\bf b}) and $A_2 = B_2 = 10$~$\mu$m/30$^\circ$ ({\bf d}). {\bf c} and {\bf e}: The images of {\bf b} and {\bf d} after applying the correction with the known parameters. The aberration-free appearance of the image was restored by the method in both cases. The scale bar is 1~nm.} \label{simulated_correction} \end{figure} We also tested the procedure in more extreme conditions, that is, with $A_2=B_2=10$ $\mu$m. This does not represent typical experimental conditions, but should be considered as an artificial test with very strong aberrations. The simulated image is shown in Figure~\ref{simulated_correction}d, where the whole image is scrambled by the aberrations and interpretation of the image is practically impossible. However, after applying equation~\ref{corrector} with the known parameters, the image is restored to an aberration-free state. Here, it should be pointed out that close inspection of the corrected image reveals some features which are absent in the reference image (panel a), {\em e.g.}, in the vacuum area. This is due to some information being displaced outside the image frame by the (in this case) large point spread function / image delocalization induced by the large aberrations; this information is then also missing in the corrected image. Generally speaking, one should avoid interpreting the corrected image within a margin at the image edges with a width determined by the point spread function / image delocalization. In order to verify our method also in an experimental situation, we took the AC-HRTEM images of Figure~\ref{residualA2} and applied equation~\ref{corrector}.
In the case of graphene, we iteratively found the amplitude and orientation of $A_2=150$~nm/17$^\circ$ which resulted in the removal of visible asymmetry between the two sublattices of graphene. The corrected experimental image is presented in Figure~\ref{corrected_exps}a, with the original image shown as an inset. The graphene lattice now has the expected hexagonal appearance, and interpretation of the image is made more straightforward. For the case of MoSe$_2$, $A_2=75$~nm/15$^\circ$ was found to restore the expected appearance of the lattice on both sides of the mirror-twin-boundaries in the experimental image. An interesting observation in the case of MoSe$_2$ is that the point defects are actually more visible in one of the domains in the original (non-corrected) image, and also the boundaries are easier to locate due to the different appearances of the lattice on the opposite sides of the boundaries. This bears resemblance to the well known effect of moving away from the 'optimal' Scherzer focus in order to make point defects more visible to the eye (see, {\em e.g.}, Refs.~\cite{meyer2011,lehtinen2014}). No new information is added to the image by the anti-symmetric aberrations, but features are made more evident to the eye. A clear advantage of this approach is that the aberrations can subsequently be corrected without information loss, as the anti-symmetric aberrations introduce only phase shifts and the phase contrast transfer function is not altered. If the presence of anti-symmetric residual aberrations is deemed beneficial in some situation, our method can just as well be used to {\em increase} the effect of these aberrations during post-processing. Another experimental test was conducted by deliberately introducing strong $A_2$ with the corrector of the microscope and acquiring images with such an ill-tuned system ($A_2 = 1.0\pm0.1$~$\mu$m/48$^\circ$, as measured by the corrector software).
In Figure~\ref{corrected_exps2}a and b, two locations in the graphene sample are shown. In panel a it is clear that a tilt grain boundary runs through the center of the image, separating two areas with different lattice orientations. However, the structure of the boundary cannot be analyzed in the image, and the graphene lattice has a completely different appearance on the opposite sides of the boundary (due to the different orientations of the lattices relative to the $A_2$ orientation). In panel b, a point defect in the graphene lattice can be detected, but again its structure remains hidden by $A_2$. The correction method was applied to both images using the $A_2$ value of 1~$\mu$m at an angle of 41.5$^\circ$ (measured by the corrector software) as the starting point, and fine-tuning the parameters based on visual inspection of the image. In both cases the effect of $A_2$ was remedied. In the case of the grain boundary (panel c), a 5-7 dislocation is now visible, as expected for a tilt-grain-boundary in graphene~\cite{grantab2010anomalous,yazyev2010topological,huang2011,kurasch2012}. The image of the point defect (panel d) shows the recognizable pattern of a reconstructed divacancy in graphene~\cite{banhart2010structural} after correction. \begin{figure} \includegraphics[width=.8\textwidth]{./fig3.jpg} \caption{{\bf Experimental verification of the correction method.} {\bf a}: The AC-HRTEM image of Figure~\ref{residualA2}a after application of the correction method with $A_2=150$~nm/17$^\circ$. The hexagonal appearance of the graphene lattice is restored and the structure of the defect is easier to analyze. {\bf b}: The AC-HRTEM image of Figure~\ref{residualA2}b after the correction method was applied with $A_2=75$~nm/15$^\circ$. The MoSe$_2$ lattice has an identical appearance on both sides of the mirror-twin-boundary, as expected for an aberration-free image.
The scale bars are 2~nm.} \label{corrected_exps} \end{figure} \begin{figure} \includegraphics[width=.8\textwidth]{./fig4.jpg} \caption{{\bf An experimental test with an intentionally introduced large $A_2$ aberration.} {\bf a} and {\bf b}: AC-HRTEM images of a tilt grain boundary and a point defect in graphene with $A_2$ set to $1.0\pm0.1$~$\mu$m/48$^\circ$ (as measured by the corrector software). {\bf c} and {\bf d}: The previous two frames after correcting for 0.9~$\mu$m/42.5$^\circ$ of $A_2$ and 1.0~$\mu$m/41.5$^\circ$ of $A_2$, respectively. In both cases images free of $A_2$ are recovered and the imaged structure is easier to analyze. The scale bars are 2~nm.} \label{corrected_exps2} \end{figure} \section{Conclusion} To conclude, we have presented a method for correcting the anti-symmetric sub-group of aberrations in HRTEM images during numerical post-processing in the case of weakly scattering objects, and verified its applicability in the case of the often dominant low-order $A_2$ and $B_2$ aberrations through simulations and experiments with graphene and 2D MoSe$_2$. This procedure can be performed on a single conventional HRTEM image, that is, retrieval of the object wave is not necessary. The requirement of a weakly scattering object imposes a strong limitation on the range of materials where the method is applicable. However, the intensively studied class of 2D materials, with graphene as the prime example, does satisfy this condition at commonly used acceleration voltages such as 80~kV~\cite{Lee201239}. The contribution of the anti-symmetric aberrations is removed by applying the same aberrations with the opposite phase to the Fourier transform of the recorded intensity image and subsequently inverting the Fourier transform, that is, $i_{c}({\bf x}) = |\mathcal{F}^{-1}\{ e^{i\chi_{as}({\bf u})}\cdot \mathcal{F}\{i_z({\bf x})\}\}|$. We have presented the theoretical justification of the approach.
The method was demonstrated on simulated and experimental HRTEM images suffering from residual anti-symmetric aberrations ($A_2$ and $B_2$). By applying the method, images with strongly reduced aberrations could be produced. In fact, the anti-symmetric aberration coefficients can be freely adjusted {\em post-situ} using the presented method. A prerequisite for applying this method is prior knowledge of the aberration coefficients. In the presented examples, the values measured by the corrector software were used, or alternatively the coefficients were found by narrowing down the values leading to the correct appearance of the graphene and MoSe$_2$ lattices through trial and error. The latter approach has an interesting implication: this method can actually be used to estimate the aberration coefficients when there is prior knowledge of how an aberration-free image should look. Great care should be taken, however, as the problem can be underdetermined, and multiple aberration coefficient combinations can lead to the same final result. The method allows some flexibility during the acquisition of images: as the anti-symmetric residual aberrations can be corrected during post-processing, it is not imperative to have the corrector `perfectly' tuned at all times. A reasonably good state of correction is nevertheless important during acquisition, in order to enable the operator of the microscope to recognize the imaged features (compare, {\em e.g.}, Figure~\ref{corrected_exps2} b and d). The loosened requirements for the tuning of the anti-symmetric aberration coefficients can accelerate the corrector tuning procedure. As the time available for the tuning is limited due to, {\em e.g.}, gradual drift of the corrector state~\cite{Barthel20136}, an improved final state of correction can be achieved through the faster tuning procedure. \vspace{.5cm} \noindent{\bf Acknowledgements} \noindent We are very grateful to our group member, senior professor Harald H.
Rose for invaluable advice during this work. We would also like to thank Hannu-Pekka Komsa, Arkady V. Krasheninnikov and Martin Linck for helpful discussions, and Nilesh Vats for sample preparation. O.L., D.G., Z.L. and U.K. gratefully acknowledge the funding by the DFG (German Research Foundation) and the Ministry of Science, Research and the Arts (MWK) of Baden-Wuerttemberg in the framework of the SALVE (Sub Angstrom Low-Voltage Electron Microscopy) project. O.L. acknowledges support from the Finnish Cultural Foundation. M.B.W., M.-W.C. and A.K. acknowledge funding from the European Research Council (grant no. 240076, FLATRONICS: Electronic devices based on nanolayers).
\section*{Introduction} The intricate relations between the individual and collective levels are at the heart of many natural and social sciences. Different disciplines wonder how atoms combine to form solids \cite{cambridge,goodstein}, neurons give rise to consciousness \cite{damasio,changeux} or individuals shape markets and societies \cite{econophysics,smith,latour07}. However, scientific fields assume distinct points of view for defining the "normal", or "equilibrium", aggregated state. Physics looks at the collective level, selecting the configurations that minimize the global free energy \cite{goodstein}. In contrast, economic agents behave in a selfish way, and equilibrium is attained when no agent can increase its own satisfaction \cite{mascolell}. Although similar at first sight, the two approaches lead to radically different outcomes, and the selfish strategy may prove dramatically inefficient in the presence of interactions between agents. We illustrate this effect on an exactly solvable model, similar to Schelling's segregation model \cite{schelling71}. Considering individual agents which prefer a mixed environment, we study the segregated or mixed patterns that emerge at the global level. A "tax" parameter continuously tunes the agents' degree of altruism or cooperativity, i.e., their consideration of the global welfare. At low degrees of cooperativity, segregation occurs and the agents' utilities remain low, in spite of continuous efforts to maximize their satisfaction. As the altruism parameter is increased, a phase transition occurs, driving the system towards a mixed phase of maximal utility. Our approach generalizes the free energy function used in physics, making it possible to predict analytically the stationary states, which so far required numerical simulations \cite{fossett-clark08}. \section*{Model} Our model represents in a schematic way the dynamics of residential moves in a city.
For simplicity, we include one type of agent, but our results can readily be generalized to deal with agents of two "colors", as in the original Schelling model \cite{schelling71} (see below). The city is divided into $Q$ blocks ($Q >> 1$), each block containing $H$ cells or flats (Fig. \ref{config}). We assume that each cell can contain at most one agent, so that the number $n_q$ of agents in a given block $q$ ($q=1,\ldots,Q$) satisfies $n_q \le H$, and we introduce the density of agents $\rho_q=n_q/H$. Each agent has the same utility function $u(\rho_q)$, which describes the degree of satisfaction concerning the density of the block it is living in. The collective utility is defined as the total utility of all the agents in the city: $U(x)=H\sum_q \rho_q u(\rho_q)$, where $x \equiv \{\rho_q\}$ corresponds to the coarse-grained configuration of the city, i.e. the knowledge of the density of each block. For a given $x$, there is a large number of ways to arrange the agents in the different cells. This number of arrangements is quantified by its logarithm $S(x)$, called the entropy of the configuration $x$. The dynamical rule allowing the agents to move from one block to another is the following. At each time step, one picks up at random an agent and a vacant cell, within two different blocks. Then the agent moves in that empty cell with probability: \be P_{xy} = \frac{1}{1+e^{-\mathcal{C}/T}}, \ee where $x$ and $y$ are respectively the configurations before and after the move, and $\mathcal{C}$ is the cost associated to the proposed move. The positive parameter $T$ is a "temperature" which introduces in a standard way \cite{anderson92} some noise on the decision process. It can be interpreted as the effect of features that are not explicitely included in the utility function but still affect the moving decision (urban facilities, friends\ldots). 
We write the cost $\mathcal{C}$ as: \bea \label{def-C} \mathcal{C} &=& \Delta u + \alpha (\Delta U - \Delta u) \eea where $\Delta u$ is the variation of the agent's own utility upon moving and $\Delta U$ is the variation of the total utility of all agents. The parameter $0\leq \alpha\leq 1$ weights the contribution of the other agents' utility variation in the calculation of the cost $\mathcal{C}$, and it can thus be interpreted as a degree of cooperativity (or altruism). For $\alpha =0$, the probability to move only depends on the selfish interest of the chosen agent, while for $\alpha=1$ it only depends on the collective utility. Varying $\alpha$ in a continuous way, one can interpolate between these two limiting behaviors. \section*{Potential function} We wish to find the stationary probability distribution $\Pi(x)$ of the microscopic configurations $x$. If the cost $\mathcal{C}$ can be written as $\mathcal{C}=\Delta V \equiv V(y)-V(x)$, where $V(x)$ is a function of the configuration $x$, then the dynamics satisfies detailed balance \cite{evans} and the distribution $\Pi(x)$ is given by \be \Pi(x) = \frac{1}{Z}\, e^{F(x)/T}, \ee with $F(x)=V(x)+TS(x)$ and $Z$ a normalization constant. The entropy has for large $H$ the standard expression $S(x)=H\sum_q s(\rho_q)$, with \be s(\rho) = -\rho\ln\rho -(1-\rho)\ln(1-\rho). \ee We now need to find the function $V(x)$, if it exists. Given the form (\ref{def-C}) of $\mathcal{C}$, finding such a function $V(x)$ amounts to finding a "linking" function $L(x)$, connecting the individual and collective levels, such that $\Delta u = \Delta L$. By analogy with the entropy, we assume that $L(x)$ can be written as a sum over the blocks, namely $L(x)=H\sum_q \ell(\rho_q)$. Considering a move from a block at density $\rho_1$ to a block at density $\rho_2$, $\Delta L$ reduces in the large $H$ limit to $\ell'(\rho_2)-\ell'(\rho_1)$, where $\ell'$ is the derivative of $\ell$.
The condition $\Delta u = \Delta L$ then leads to the identification $\ell'(\rho)=u(\rho)$, from which the expression of $\ell(\rho)$ follows: \be \ell(\rho) = \int_0^{\rho} u(\rho') d\rho'. \label{link} \ee As a result, the function $F(x)$ can be expressed in the large $H$ limit as $F(x)=H\sum_q f(\rho_q)$, with a block potential $f(\rho)$ given by: \bea \nonumber f(\rho) &=& -T\rho\ln\rho - T(1-\rho)\ln(1-\rho)\\ &+& \alpha\rho u(\rho) + (1-\alpha)\int_{0}^{\rho} u(\rho')d\rho'. \label{f-rho} \eea The probability $\Pi(x)$ is dominated by the configurations $x=\{\rho_q\}$ that maximize the sum $\sum_q f(\rho_q)$ under the constraint of a fixed $\rho_0 = 1/Q \sum_{q=1}^Q \rho_q$. To perform this maximization procedure, we follow standard physics methods used in the study of phase transitions (like liquid-vapor coexistence \cite{callen85}), which can be summarized as follows. If $f(\rho)$ coincides with its concave hull at a given density $\rho_0$, then the state of the city is homogeneous, and all blocks have a density $\rho_0$. Otherwise, a phase separation occurs: some blocks have a density $\rho_1^*<\rho_0$, while the others have a density $\rho_2^*>\rho_0$. Interestingly, the potential $F$ can be calculated for arbitrary utility functions. It generalizes the free energy function used in physics, making it possible to predict analytically the global state of the town. Such an achievement has so far eluded individualistic, Schelling-type models, which had to be solved through numerical simulations \cite{fossett-clark08}. To obtain explicitly the equilibrium configurations, one needs to know the specific form of the utility function. To illustrate the dramatic influence of the cooperativity parameter $\alpha$, we use the asymmetrically peaked utility function \cite{pancs07}, which indicates that agents prefer mixed blocks (Figure \ref{um}).
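The maximization of $\sum_q f(\rho_q)$ at fixed average density is easy to check numerically. The sketch below is a hypothetical illustration, not the paper's code: it assumes the closed form $u(\rho)=2\rho$ for $\rho\le 1/2$ and $u(\rho)=m+2(1-m)(1-\rho)$ otherwise for the peaked utility, evaluates the $T\to 0$ block potential, and tests whether splitting the blocks into two densities beats the homogeneous state at $\rho_0=1/2$.

```python
import numpy as np

def f_block(rho, alpha, m=0.0):
    """T -> 0 block potential f = alpha*rho*u(rho) + (1-alpha)*l(rho),
    with l(rho) = int_0^rho u, for the assumed peaked utility."""
    rho = np.asarray(rho, dtype=float)
    u = np.where(rho <= 0.5, 2 * rho, m + 2 * (1 - m) * (1 - rho))
    l_lo = rho ** 2                                   # integral below 1/2
    l_hi = 0.25 + m * (rho - 0.5) \
        + (1 - m) * (2 * (rho - 0.5) - (rho ** 2 - 0.25))
    l = np.where(rho <= 0.5, l_lo, l_hi)
    return alpha * rho * u + (1 - alpha) * l

def hull_gap(alpha, m=0.0, rho0=0.5, n=201):
    """Best gain from splitting blocks into two densities (rho1, rho2)
    with average rho0, relative to the homogeneous state; a strictly
    positive value signals phase separation (segregation)."""
    rho = np.linspace(0.0, 1.0, n)
    f = f_block(rho, alpha, m)
    f0 = float(f_block(rho0, alpha, m))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if not (rho[i] < rho0 < rho[j]):
                continue
            p = (rho[j] - rho0) / (rho[j] - rho[i])   # fraction at rho[i]
            best = max(best, p * f[i] + (1 - p) * f[j] - f0)
    return best
```

For $m=0$ this reproduces the transition at $\alpha_c=1/3$: the gap is clearly positive for small $\alpha$ (segregated blocks at $0$ and $\rho_s>1/2$) and vanishes once $\alpha$ exceeds $1/3$.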
The overall town density is fixed at $\rho_0 = 1/2$ to avoid the trivial utility frustration resulting from the impossibility to attain the optimal equilibrium ($\rho_q = 1/2$ for all blocks). The qualitative behavior of the system is unchanged for $\rho_0 \neq 1/2$ or for low values of the temperature. In the collective case ($\alpha=1$), the optimal state corresponds to the configuration that maximizes the global utility, which can be immediately guessed from Figure \ref{um}, namely $\rho_q = 1/2$ for all $q$ (Fig.~\ref{config}a). On the contrary, in the selfish case ($\alpha=0$, Fig.~\ref{config}b), maximization of the potential $F(x)$ shows that the town settles in a segregated configuration where a fraction of the blocks are empty and the others have a density $\rho_s > 1/2$. Surprisingly, the city settles in this state of low utility in spite of the agents' continuous efforts to maximize their own satisfaction. To understand this frustrated configuration, note that the collective equilibrium ($\rho_q = 1/2$ for all $q$) is now an {\it unstable} Nash equilibrium at $T > 0$. The instability can be understood by noting that at $T > 0$ there is a positive probability that an agent accepts a slight decrease of its utility, and leaves a block with density $\rho_q = 1/2$. The agents remaining in its former block now have a lower utility and are more likely to leave. This creates an avalanche which empties the block, as each move away further decreases the utility of the remaining agents. Mixed and segregated states are separated by a phase transition at the critical value $\alpha_c = 1/(3-2m)$ (Figure \ref{dp}). This transition differs from standard equilibrium phase transitions known in physics, which are most often driven by the competition between energy and entropy. Here, the transition is driven by a competition between the collective and individual components of the agents' dynamics.
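The contrast between the selfish and cooperative regimes can also be reproduced by direct Monte Carlo simulation of the dynamics. The sketch below is our own illustration, not the code behind the figures; the closed form used for the peaked utility is an assumption. It implements the move rule $P_{xy}=1/(1+e^{-\mathcal{C}/T})$ with cost $\mathcal{C}=\Delta u+\alpha(\Delta U-\Delta u)$.

```python
import numpy as np

def u(rho, m=0.5):
    # Assumed peaked utility: linear up to rho = 1/2, then decreasing to m.
    return np.where(rho <= 0.5, 2.0 * rho, m + 2.0 * (1.0 - m) * (1.0 - rho))

def simulate(Q=20, H=50, rho0=0.5, m=0.5, alpha=0.0, T=0.1,
             steps=100_000, seed=0):
    """Monte Carlo dynamics of the block model; returns block densities."""
    rng = np.random.default_rng(seed)
    n = np.full(Q, int(round(rho0 * H)))        # agents per block
    for _ in range(steps):
        src, dst = rng.choice(Q, size=2, replace=False)
        if n[src] == 0 or n[dst] == H:
            continue                            # need an agent and a vacancy
        r_s, r_d = n[src] / H, n[dst] / H
        # mover's own utility change
        du = u((n[dst] + 1) / H, m) - u(r_s, m)
        # change in total utility U = H * sum(rho * u(rho))
        dU = (H * ((r_s - 1 / H) * u(r_s - 1 / H, m) - r_s * u(r_s, m))
              + H * ((r_d + 1 / H) * u(r_d + 1 / H, m) - r_d * u(r_d, m)))
        cost = du + alpha * (dU - du)
        if rng.random() < 1.0 / (1.0 + np.exp(-cost / T)):
            n[src] -= 1
            n[dst] += 1
    return n / H
```

Running with $\alpha=0$ and small $T$ drives the block densities away from $\rho_0=1/2$ (segregation), while $\alpha=1$ keeps them close to $1/2$; the total number of agents is conserved by construction.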
The unsatisfactory global state of the city can be interpreted, from the economic point of view, as an effect of externalities: by moving to increase its utility, an agent may decrease other agents' utilities, without taking this into account. From a standard interpretation in terms of a Pigouvian tax \cite{pigou}, one expects that $\alpha=1$ is necessary to reach the optimal state, since by definition this value internalizes all the externalities the agent causes to the others when moving. Our results show that the optimal state is maintained down to much lower tax values (for example, $\alpha_c = 1/3$ at $m=0$), a surprising result which deserves further analysis. Another interesting effect is observed for $m>2/3$ (Figure \ref{dp}). Introducing a small tax has no effect on the overall satisfaction, the utility remaining constant until a threshold level is attained at $\alpha_t= (3m-2)/(6-5m)$.\\ Up to now, we have focused on the zero-temperature limit. For low temperatures, the main qualitative conclusions are not modified, as the phase diagram is altered only for extreme values of $\rho_0$ by entropic contributions. At higher temperatures the city tends to become homogeneous, as the effect of "noise" (i.e., of the features that are not described in the model) dominates over the utility associated with the density of the blocks (see Fig.~\ref{ra-T}). \section*{Link to Schelling's original segregation model} There are two main differences between our simple model and Schelling's original model \cite{schelling71}: the existence of agents of {\em two} colors and the definition of the agents' neighborhoods. We now show that these additional features do not introduce any essential effect. Let us start by introducing agents of two "colors" (such as red and green). Simple calculations show that for two species which only care about the density of neighbors of their own color, the block potential (eq.
\ref{f-rho}) becomes: \bea f(\rho_R,\rho_G) &=& -T\rho_R\ln\rho_R -T\rho_G\ln\rho_G\nonumber\\ &-& T(1-\rho_R -\rho_G)\ln(1-\rho_R -\rho_G) \nonumber\\ &+& \alpha\Big[\rho_R \,u_R(\rho_R) + \rho_G \,u_G(\rho_G)\Big]\nonumber\\ &+& (1-\alpha)\Big[\int_{0}^{\rho_R}u_R(\rho')d\rho' + \int_{0}^{\rho_G}u_G(\rho')d\rho'\Big] \nonumber \eea with straightforward notation (for example $u_R(\rho_R)$ represents the utility of a red agent in a block with a density $\rho_R$ of red agents). Finding the equilibrium configurations amounts to finding the set $\{\rho_{qR},\rho_{qG}\}$ which maximizes the potential $F(x)=\sum_q f(\rho_{qR},\rho_{qG})$ with the constraints $\sum_q\rho_{qR}=Q\rho_{0R}$ and $\sum_q\rho_{qG}=Q\rho_{0G}$, where $\rho_{0G}$ and $\rho_{0R}$ represent respectively the overall concentration of green and red agents. Because of the spatial constraints (the densities of red and green agents in each block $q$ must verify $\rho_{qR} + \rho_{qG} \leq 1$), the `two populations' model cannot formally be reduced to two independent `one population' models. However, the stationary states can still be computed. Let us focus once again on the $T \to 0$ limit and suppose for example that $\rho_{0R}=\rho_{0G}=\rho_0/2$. The stationary states depend once again on the values of $\rho_0$, $m$ and $\alpha$. For low values of $\alpha$, it can be shown that the system settles in a segregated state where each block contains only one kind of agent with a density $\rho_0$ (see Figure \ref{2pop}a). For $\alpha \geq \alpha_c$, the system settles in a mixed state where the density of a group in a block is either $0$ or $1/2$ (Figure \ref{2pop}b). We now turn to the difference in agents' neighborhoods. In Schelling's original model, agents' neighbors are defined as their 8 nearest neighbors. Our model considers instead predefined blocks of common neighbors.
First, it should be noted that there is no decisive argument in favor of either neighborhood definition in terms of how realistically it describes real social neighborhoods. Second, we note that introducing blocks allows for an analytical solution for arbitrary utility functions. This contrasts with the nearest-neighbor case, where the best analytical approach solves only a modified model which abandons the individual point of view and is limited to a specific utility function \cite{zhang}. Finally, the simulations presented in Figure \ref{2pop} show that the transition from segregated to mixed states is not affected by the choice of neighborhood definition. We conclude that the block description is well adapted to this kind of simple modelling, which aims at exhibiting stylized facts such as segregation transitions. \section*{Conclusion} Our simple model raises a number of interesting questions about collective or individual points of view. In the purely collective case ($\alpha=1$), the stationary state corresponds to the maximization of the average utility, in analogy to the minimization of energy in physics. In the opposite case ($\alpha=0$), the stationary state strongly differs from the simple collection of individual optima \cite{represent}: the optimization strategy based on purely individual dynamics fails, illustrating the unexpected links between micromotives and macrobehavior \cite{schelling78}. However, the emergent collective state can be efficiently captured by the maximization of the linking function $\ell(\rho)$ given in Eq.~(\ref{link}), up to constraints on the overall town density. This function intimately connects the individual and global points of view. First, it depends only on the global town configuration (given by the $\rho_q$), allowing a relatively simple calculation of the equilibrium.
At the same time, it can be interpreted as the sum of the {\it individual} marginal utilities gained by agents as they progressively fill the city after leaving a reservoir of zero utility. In the stationary state, a maximal value of the potential $L$ is reached. This means that no agent can increase its utility by moving (since $\Delta u = \Delta L$), consistent with the economists' definition of a Nash equilibrium. Equilibrium statistical mechanics has developed powerful tools to link the microscopic and macroscopic levels. These tools are limited to physical systems, where the dynamics is governed by a global quantity such as the total energy. By introducing a link function, analogous to state functions in thermodynamics or potential functions in game theory \cite{monderer96}, we have extended the framework of statistical mechanics to a Schelling-like model. Such an approach paves the way to analytical treatments of a much wider class of systems, where the dynamics is governed by individual strategies. \begin{acknowledgments} We acknowledge interesting discussions with Florence Goffette-Nagot (Groupe d'Analyse et de Th\'eorie Economique, Lyon). \end{acknowledgments}
\section{From non-commutative spacetime to the action} What follows is a short preview of the works in Refs.\ \refcite{ref1}, \refcite{ref2}. For a more comprehensive and in-depth introduction to this framework, see for example Ref.\ \refcite{ref3}. The starting point of our considerations is $\kappa$-Minkowski spacetime, which can be defined in terms of non-commuting coordinates satisfying the $\mathfrak{an}(3)$ algebra $[\hat{x}^0, \hat{\mathbf{x}}^i] = i\hat{\mathbf{x}}^i/\kappa$. Notice that $1/\kappa$ has dimensions of length\footnote{The formal limit $\kappa\rightarrow\infty$ recovers the canonical commutative Minkowski spacetime.}, and that these commutation relations are very different from the Moyal-type non-commutativity $[x^\mu,x^\nu]= i \theta^{\mu\nu}$. The most direct way to build easily manageable fields on this spacetime proceeds along the following lines. We first need plane waves, corresponding to the $AN(3)$ group elements $\hat{e}_k = \exp (i \mathbf{k}_i \hat{\mathbf{x}}^i) \exp (i k_0\hat{x}^0)$. Because of the group structure, addition of momenta is given by the group product, i.e., $\hat{e}_k \hat{e}_l = \hat{e}_{k\oplus l}$. In the same way, inverses of momenta are given by group inverses, i.e., $\hat{e}^{-1}_k = \hat{e}_{S(k)}$. We then employ a\footnote{There are many equivalent ways in which one could choose a suitable Weyl map. For a more in-depth discussion, see Ref.\ \refcite{ref1}.} Weyl map $\mathcal{W}$, which sends group elements $\hat{e}_k$ to canonical plane waves $e_p := \exp(ip_\mu(k) x^\mu)$. The coordinates $x^\mu$ are now canonical commuting coordinates, and the group structure is preserved through the introduction of a $\star$ product defined by $\mathcal{W}(\hat{e}_{k\oplus l}) = e_{p(k) \oplus q(l)}=: e_p \star e_q$. The $\star$ product is in general non-commutative.
Because of this, considering the case of a complex scalar field, one has two possible choices of action, depending on whether one chooses the ordering $\phi\star \phi^\dag$ or $\phi^\dag \star \phi$ (and similarly for the kinetic term; we are still considering a free action). The action we chose is \begin{equation} S = \frac{1}{2} \int_{\mathbb{R}^4} d^4x [(\partial^\mu\phi)^\dag \star (\partial_\mu\phi) + (\partial^\mu\phi) \star (\partial_\mu\phi)^\dag - m^2 (\phi^\dag \star \phi + \phi\star \phi^\dag)]. \label{aba:1} \end{equation} One can show that the field satisfies the Klein-Gordon equation, and that such a field can be expanded in the plane waves described above as \begin{equation} \phi(x)= \int \frac{d^3p}{\sqrt{2\omega_p}} \xi(p) a_\mathbf{p} e^{-i(\omega_pt - \mathbf{p}\mathbf{x})} + \int \frac{d^3p^*}{\sqrt{2|\omega_p^*|}} \xi(p) b^\dag_{\mathbf{p}^*} e^{i(S(\omega_p)t - S(\mathbf{p})\mathbf{x})}. \label{aba:2} \end{equation} Notice the presence of the antipode in the conjugate plane wave\footnote{The ${}^*$ on the momenta of the second wave is related to a second copy of momentum space; for more details, see Ref.\ \refcite{ref1}.}. Because of this feature, and thanks to our choice of action, the CPT transformations of the field are the same as in the non-deformed case, i.e., T$\phi(t,\mathbf{x})$T${}^{-1}=\phi(-t,\mathbf{x})$, P$\phi(t,\mathbf{x})$P${}^{-1}=\phi(t,-\mathbf{x})$, and C$\phi(t,\mathbf{x})$C${}^{-1}=\phi^\dag(t,\mathbf{x})$. Furthermore, the action is manifestly invariant under CPT and $\kappa$-deformed Lorentz transformations. To compute the charges, we first derived the translation charges directly from the Noether theorem, and then built a covariant phase space approach which reproduces the translation charges, allowing us to compute the remaining ones.
As an example, the boost charges are \begin{align*} N_i = - \frac{1}{2} \int d^3p \left\{ S(\omega_p) \left[ \frac{\partial a_\mathbf{p}^\dag}{\partial S(\mathbf{p})^i} a_\mathbf{p} - a_\mathbf{p}^\dag \frac{\partial a_\mathbf{p}}{\partial S(\mathbf{p})^i} \right] + \omega_p \left[ b_{\mathbf{p}} \frac{\partial b_{\mathbf{p}}^\dag}{\partial \mathbf{p}^i} - \frac{\partial b_{\mathbf{p}}}{\partial \mathbf{p}^i} b_{\mathbf{p}}^\dag \right] \right\}. \end{align*} Furthermore, one can show that $[a, a^\dag] = [b, b^\dag]=1$, which then lets us verify that the deformed charges satisfy the canonical Poincar\'e algebra. One can also build the charge conjugation operator explicitly \begin{align} \mathcal{C} = \int \, d^3p \, ( b^\dag_{\mathbf{p}} a_\mathbf{p} + a^\dag_{\mathbf{p}} b_{\mathbf{p}} ). \end{align} The algebra of the creation/annihilation operators then allows us to verify that $[N_i, \text{C}]\neq 0$. More explicitly, one finds that $[N_i, \text{C}]$ is given by \begin{align} & \frac{i}{2} \int d^3p \Bigg\{ S(\omega_p) \left[ \frac{\partial a_{\mathbf{p}}}{\partial S(\mathbf{p})^i} b^\dag_{\mathbf{p}} - a_{\mathbf{p}} \frac{\partial b^\dag_{\mathbf{p}} }{\partial S(\mathbf{p})^i} + \frac{\partial a_{\mathbf{p}}^\dag}{\partial S(\mathbf{p})^i} b_{\mathbf{p}} - a_{\mathbf{p}}^\dag \frac{\partial b _{\mathbf{p}}}{\partial S(\mathbf{p})^i} \right] + \nonumber \\ &\qquad \qquad + \omega_p \left[ \frac{\partial b_{\mathbf{p}}^\dag}{\partial \mathbf{p}^i} a_\mathbf{p} - b_{\mathbf{p}}^\dag \frac{\partial a_\mathbf{p}}{\partial \mathbf{p}^i} + \frac{\partial b_{\mathbf{p}}}{\partial \mathbf{p}^i} a^\dag_{\mathbf{p}} - b_{\mathbf{p}} \frac{\partial a_\mathbf{p}^\dag}{\partial \mathbf{p}^i} \right] \Bigg\}. \end{align} Clearly, in the limit $\kappa\rightarrow\infty$ one recovers the canonical $[N_i, \text{C}]=0$. The fact that $[N_i, \text{C}]\neq 0$ has several striking phenomenological consequences.
In particular, defining $|\mathbf{p}\rangle_a = a_\mathbf{p}^\dag |0\rangle$ and $|\mathbf{p}\rangle_b = b_\mathbf{p}^\dag |0\rangle$, if we have a particle and an antiparticle at rest, then $P_i |\mathbf{p}\rangle_a = P_i |\mathbf{p}\rangle_b = 0$, and $P_0 |\mathbf{p}\rangle_a = M |\mathbf{p}\rangle_a$, $P_0 |\mathbf{p}\rangle_b = M |\mathbf{p}\rangle_b$ (showing that particles and antiparticles have the same mass). Boosting, e.g., in direction $1$, one goes from $|p_1, p_2, p_3\rangle $ to $|p_1\cosh \xi + \omega_p \sinh \xi, p_2,p_3\rangle$, where $\xi$ is the rapidity. Then one can show that $P_1 |M\sinh\xi,0,0\rangle_a = -S(M\sinh\xi)|M\sinh\xi,0,0\rangle_a$, $P_1 |M\sinh\xi,0,0\rangle_b = M\sinh\xi|M\sinh\xi,0,0\rangle_b$, and analogously for $P_0$. Notice that $\text{C}|\mathbf{p}\rangle_b = |\mathbf{p}\rangle_a$ and vice versa. The difference between particles and antiparticles is also reflected in a different Lorentz factor for each of them, respectively $-S(E)/M$ and $E/M$. Hence, in a boosted frame, there is a slight difference between the decay probability density functions $\mathcal{P}$ of particles and antiparticles. At first order in $1/\kappa$ one gets \begin{eqnarray} {\cal P}_{\mbox{\scriptsize part}}(t) & = & \frac{\Gamma E}M\exp \,\left(-\Gamma \,\frac {E}{M}\, t \right), \label{decay}\\ {\cal P}_{\mbox{\scriptsize apart}}(t) & = & \Gamma\left(\frac EM - \frac{\mathbf p^2}{\kappa M}\right) \,\exp\,\left [-\Gamma \, \left(\frac EM - \frac{\mathbf p^2}{\kappa M}\right)\, t\,\right ],\label{decaya} \end{eqnarray} where $\Gamma=1/\tau$. Because the effects of the deformation are governed by the factor $\mathbf{p}^2/(\kappa M)$, experimental signatures might be easiest to see in a $\mu^+ \mu^-$ pair, since muons are the lightest particles with the best measured lifetimes. As a final comment, we note that Greenberg's theorem is not valid in the context of our framework.
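To get a feeling for the size of the effect, note that the first-order fractional difference between the antiparticle and particle decay rates following from Eqs.~(\ref{decay}) and (\ref{decaya}) is $-\mathbf{p}^2/(\kappa E)$. The estimate below is our own back-of-the-envelope illustration; taking $\kappa$ at the Planck mass is an assumption, as are the constant names.

```python
import math

M_MU = 0.1057      # muon mass in GeV (approximate)
KAPPA = 1.22e19    # assumed deformation scale: Planck mass in GeV

def fractional_rate_shift(p):
    """First order in 1/kappa:
    (Gamma_apart - Gamma_part) / Gamma_part = -p^2 / (kappa * E),
    with p the spatial momentum in GeV and E = sqrt(p^2 + M^2)."""
    E = math.sqrt(p * p + M_MU * M_MU)
    return -p * p / (KAPPA * E)
```

For a muon with $p = 10$~GeV the shift is of order $10^{-19}$, far below current lifetime precision if $\kappa$ is Planckian.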
\section*{Acknowledgments} Parts of these works were supported by funds provided by the Polish National Science Center, project number 2019/33/B/ST2/00050.
\section{Introduction} Throughout this article, let \(\dvr\) be a complete discrete valuation ring with uniformiser \(\dvgen\), fraction field \(\dvf\), and residue field \(\resf\). We assume throughout that~\(\dvf\) has characteristic zero. Bivariant \(K\)-theory was introduced by Kasparov (\cites{Kasparov:Invariants_elliptic,Kasparov:K-functors}) as a unification of complex topological \(K\)-theory and \(K\)-homology, with a view towards the Novikov conjecture. It has since been used in the classification of \(C^*\)-algebras, in the Baum-Connes conjecture and in differential topology (\cites{Kasparov:Operator_K_applications,Kasparov:Novikov}). There are several equivalent ways of defining bivariant \(K\)-theory: in the form introduced by Kasparov, it is a category \(KK\) whose objects are separable \(C^*\)-algebras, and whose morphisms \(KK(A,B)\) are built from certain Hilbert \(B\)-modules. The viewpoint that will be most relevant in this article is the approach due to Cuntz, which exhibits the morphism space as a noncommutative analogue of the stable homotopy category (\cites{cuntz1987new}). In analogy with the category of noncommutative motives (see \cite{tabuada2011guided}), bivariant \(K\)-theory is the universal target for functors on the category of separable \(C^*\)-algebras that are homotopy invariant, stable under compact operators and excisive for extensions with completely positive sections. Typical examples of such functors are asymptotic, local and analytic cyclic homology due to Michael Puschnigg (\cite{puschnigg2006asymptotic}) and Ralf Meyer (\cite{Meyer:HLHA}). The source category of bivariant \(K\)-theory has since been enlarged to treat more general topological algebras, such as the Fr\'echet algebra of smooth functions on a manifold, and the Weyl algebra with the fine topology (see \cites{Cuntz:Weyl,Cuntz-Thom:Algebraic_K}). This is done using \textit{classifying maps} of extensions of appropriate topological algebras with continuous linear sections.
The most general class of algebras in which bivariant \(K\)-theory has been studied is the category of complete bornological \(\C\)-algebras. Bivariant \(K\)-theory in this generality is discussed in (\cite{Cuntz-Meyer-Rosenberg}). Away from the topological setting, a purely algebraic version of bivariant \(K\)-theory is developed in (\cite{Cortinas-Thom:Bivariant_K}). Together with its equivariant (see \cites{MR3123759,MR4018774, cortinas2022, arnone2022graded}), graded and Hermitian versions, these algebraic bivariant \(K\)-theories have led to important results in the classification theory of Leavitt path algebras (\cites{cortinas2020homotopy,cortinas2022}). The analytic \(K\)-theoretic invariants we propose are a combination of the tools developed in the operator algebraic bivariant \(K\)-theories and the purely algebraic version. This is justified by the fact that the topological algebras that arise in nonarchimedean geometry are completions or \textit{analytifications} of ordinary \(\dvr\)-algebras (see \cite{ben2022analytification}), which requires us to work in a general enough framework that allows for the passage between (homological) algebra and functional analysis. As in the archimedean case, the right source category to develop such theories is the category of complete, torsionfree bornological \(\dvr\)-algebras. Functional analysis in this context is developed in (\cites{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean, Meyer-Mukherjee:Bornological_tf}). This article follows a series of papers (\cites{Cortinas-Meyer-Mukherjee:NAHA, Meyer-Mukherjee:HA, Meyer-Mukherjee:HL}) that develop variants of periodic cyclic homology that have reasonable formal properties for nonarchimedean topological algebras.
More concretely, the analytic cyclic homology complex is a functor \[\mathbb{HA} \colon \{\text{ Complete, torsionfree bornological } \dvr-\text{algebras}\} \to \overleftarrow{\mathsf{Der}(\mathsf{Ind}(\mathsf{Ban}_\dvf))}\] into the homotopy category of pro-ind-systems of complexes of Banach \(\dvf\)-vector spaces. It satisfies homotopy invariance with respect to the algebra \(\dvr[t]^\updagger\) of overconvergent power series, stability with respect to suitably complete matrix algebras and excision for semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras. One of the main motivations of this article is to find the universal functor \[j \colon \{\text{ Complete, torsionfree bornological } \dvr-\text{algebras}\} \to \mathsf{kk}^{\mathrm{an}}\] satisfying these properties. The existence and universality of such a functor means that \(\mathsf{kk}^{\mathrm{an}}\)-equivalences automatically yield \(\mathbb{HP} (- \otimes \dvf)\) and \(\mathbb{HA}\)-equivalences. This is important because, although analytic cyclic homology depends only on its reduction mod \(\dvgen\) when restricted to a suitable subcategory, we cannot merely work in the bivariant algebraic \(kk\)-category relative to the residue field: \(kk\)-equivalences between \(\resf\)-algebras only yield \(\mathbb{HA}\)-equivalences, and we do not yet know in what generality analytic and periodic cyclic homology agree. On the other hand, the more fundamental theory is of course periodic cyclic homology, and we use its computation for smooth, affinoid dagger algebras to construct Chern characters from homotopy algebraic \(K\)-theory to rigid cohomology. This is the analogue of the Chern character taking values in de Rham cohomology for manifolds. Finally, in future projects we also aim to study the Davis-L\"uck assembly map (\cite{davis1998spaces}) in the nonarchimedean analytic setting.
In the purely algebraic case, recent work (\cite{ellis2022algebraic}) shows that the left hand side of the assembly map is a certain colimit of equivariant algebraic \(kk\)-groups. The assembly map is then a relationship between a variant of topological \(K\)-theory of \emph{completed} group algebras or crossed product algebras of discrete group actions, and equivariant bivariant analytic \(K\)-theory. The article is organised as follows. In Section \ref{sec:background}, we recall relevant background material on bornological functional analysis and topological \(K\)-theory in the nonarchimedean setting. Section \ref{sec:analytic-homotopy} introduces the \textit{overconvergent rigid \(n\)-simplex}, relative to which we define homotopies. This is defined as the simplicial ring \[[n] \mapsto \dvr\gen{\Delta^n}^\updagger \defeq \dvr[x_0,\dotsc,x_n]^\updagger/\gen{\sum_{i=0}^n x_i - 1},\] where \(\dvr[x_0,\dotsc, x_n]^\updagger\) is the Monsky-Washnitzer algebra. We then describe the matrix algebras we seek stability results for. These include the \(\dvgen\)-adic completion \(\mathcal{M}^{\mathrm{cont}}\) of \(\mathbb{M}_\infty\), which is our main focus. In Section \ref{sec:definition-kk}, we define analytic \(kk\)-theory. The objects of this category are complete, torsionfree bornological \(\dvr\)-algebras, and its morphisms are \[\mathsf{kk}^{\mathrm{an}}(A,B) = \varinjlim [\jens^n A, \mathcal{M}_{\infty}^\mathrm{cont}(B)^{\mathcal{S}^n}],\] where \(\jens\) denotes the noncommutative loops coming from the universal \textit{tensor algebra extension}. The bounded algebra homomorphisms in the inductive limit are induced by the classifying maps of the universal extension. As in the topological and algebraic settings, the definition is arranged so that we have homotopy invariance, \(\mathcal{M}^\mathrm{cont}\)-stability and excision for semi-split extensions of complete, torsionfree bornological algebras.
Section \ref{sec:triangulated} shows that \(\mathsf{kk}^{\mathrm{an}}\) is a triangulated category, where the distinguished triangles are isomorphic to diagrams of the form \[ \Omega(B) \to P_f \to A \to B,\] where \(P_f\) is the path algebra relative to a bounded algebra homomorphism \(f \colon A \to B\), and \(\Omega(B)\) is the \textit{loop functor} applied to \(B\). In Section \ref{sec:analytic-K}, we study the relationship between our bivariant \(K\)-theory and various constructions defined previously. These include the \(KV\)-theories studied by Calvo and Hamida (\cites{hamida, calvo}), and the overconvergent version due to Tamme (\cite{tamme:thesis}). The definition of the \(KV\)-spectrum is arrived at by topologising the algebraic \(KV\)-spectrum. More concretely, for a Banach \(\dvr\)-algebra (resp. affinoid dagger algebra) \(A\), the \textit{topological (resp. analytic) \(KV\)-theory spectrum} is defined as \[\mathsf{KV}^{\mathrm{top}}(A) \defeq \mathsf{BGL}^+(A \gen{\Delta^\bullet}) \text{ resp. } \mathsf{KV}^\mathrm{an}(A) \defeq \mathsf{BGL}^+(A\gen{\Delta^\bullet}^\updagger).\] The topological and analytic \(KV\)-theory spectra coincide with the spectrum \(\mathsf{KV}(A/\dvgen A)\) associated with the reduction mod \(\dvgen\). We extend these definitions to nonconnective spectra using the Banach algebraic suspension \(\coma{\Sigma} \defeq \coma{\Gamma}/ \mathcal{M}^{\mathrm{cont}}\). The resulting theory, which we call \textit{overconvergent stabilised analytic \(K\)-theory} \(\tilde{K}^{\mathrm{an}, \updagger}(A) = K^{\mathrm{an}, \updagger}(\mathcal{M}^{\mathrm{cont}}(A))\), is the \(\mathcal{M}^{\mathrm{cont}}\)-stabilisation of a version of Weibel's homotopy algebraic \(K\)-theory, namely the functor \(K^{\mathrm{an}, \updagger}\) on the right hand side.
The functor \(\tilde{K}^{\mathrm{an}, \updagger}\) defined on the category of complete, torsionfree bornological \(\dvr\)-algebras is dagger homotopy invariant and excisive, and satisfies \(\mathcal{M}^{\mathrm{cont}}\)-stability by construction. The universal property of \(\mathsf{kk}^{\mathrm{an}}\) yields a natural map \(\mathsf{kk}^{\mathrm{an}}_n(\dvr, A) \to \tilde{K}_n^{\mathrm{an}, \updagger}(A)\), which we show is an isomorphism for each \(n \in \Z\) in Theorem \ref{thm:kk=KH}. Since the overconvergent analytic \(K\)-theory groups are an inductive limit of \(KV^{\mathrm{an}}\)-groups, they depend only on the reduction mod \(\dvgen\) of the algebra. In particular, since Weibel's homotopy algebraic \(K\)-theory satisfies \(\mathbb{M}_\infty\)-stability, we have \[\tilde{K}^{\mathrm{an}, \updagger}(A^\updagger) \cong K^{\mathrm{an}, \updagger}(A^\updagger) \cong KH(A/\dvgen A),\] whenever \(A^\updagger \subseteq \coma{A}\). Finally, since bivariant analytic cyclic homology satisfies dagger homotopy invariance, excision and \(\mathcal{M}^{\mathrm{cont}}\)-stability, the universal property of \(\mathsf{kk}^{\mathrm{an}}\) yields bivariant Chern characters \[ \mathsf{kk}^{\mathrm{an}}_n(A,B) \to \HA_n(A,B), \] for each \(n\). These specialise for \(A = \dvr\) to \[\tilde{K}^{\mathrm{an},\updagger}_n(B) \overset{\mathrm{ch}_n}\to \HA_n(B),\] for each \(n \in \Z\). Since periodic cyclic homology also satisfies these properties, we also get bivariant Chern characters \(\mathsf{kk}^{\mathrm{an}}_n(A,B) \to \HP_n(A \otimes \dvf,B \otimes \dvf)\), which specialise for \(A = \dvr\) to group homomorphisms \[\tilde{K}^{\mathrm{an},\updagger}_n(B) \overset{\mathrm{ch}_n}\to \HP_n(B \otimes \dvf),\] for \(n \in \Z\). When \(B\) is the dagger completion of a smooth, finite-type \(\dvr\)-algebra, we get Chern characters \(KH_n(B/\dvgen B) \to \HP_n(B \otimes \dvf) \cong \bigoplus_{j \in \Z} H_{\mathrm{rig}}^{n+2j}(B/\dvgen B, \dvf)\).
This is analogous to the \textit{\(p\)-adic Chern character} from the \(p\)-completed, rationalised algebraic \(K\)-theory spectrum to the \(p\)-completed, rationalised periodic cyclic homology spectrum \[ K(A/\dvgen A, \Q_p) \to \HP(A, \Q_p)\] constructed in \cite{antieau2020beilinson}*{Definition 2.14}. \section{Background}\label{sec:background} \subsection{Preliminaries from bornological analysis} As in~\cites{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean, Meyer-Mukherjee:Bornological_tf, Cortinas-Meyer-Mukherjee:NAHA, Meyer-Mukherjee:HA}, we use the framework of bornologies to do nonarchimedean analysis. A \emph{bornology} on a set~\(X\) is a collection of its subsets, which are called \emph{bounded subsets}, such that finite subsets are bounded, and finite unions and subsets of bounded subsets remain bounded. A \emph{bornological \(\dvr\)\nb-module} is a \(\dvr\)\nb-module~\(M\) with a bornology such that every bounded subset is contained in a bounded \(\dvr\)\nb-submodule. We call a \(\dvr\)\nb-module map \(f \colon M \to N\) \emph{bounded} if it maps bounded subsets of~\(M\) to bounded subsets of~\(N\). A bornological \(\dvr\)\nb-algebra is a bornological \(\dvr\)\nb-module with a bounded multiplication map. A \emph{complete} bornological \(\dvr\)\nb-module is a bornological \(\dvr\)\nb-module in which every bounded subset is contained in a bounded, \(\dvgen\)\nb-adically complete \(\dvr\)\nb-submodule. Every bornological \(\dvr\)\nb-module~\(M\) has a completion~\(\comb{M}\) (see \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Proposition~2.14}). \begin{example} The most basic example of a bornology on a \(\dvr\)\nb-module is the \emph{fine bornology}, which consists of those subsets that are contained in a finitely generated \(\dvr\)\nb-submodule. Any fine bornological \(\dvr\)\nb-module is complete. By default, we equip modules over the residue field~\(\resf\) with the fine bornology.
\end{example} \begin{definition}[\cite{Meyer-Mukherjee:Bornological_tf}*{Definition~4.1}] \label{def:bornologically_tf} We call a bornological \(\dvr\)\nb-module~\(M\) (bornologically) \emph{torsionfree} if multiplication by~\(\dvgen\) is a bornological embedding, that is, \(M\) is algebraically torsionfree and \(\dvgen^{-1} \cdot S \defeq \setgiven{x \in M}{\dvgen x \in S}\) is bounded for every bounded subset \(S \subseteq M\). A \(\dvr\)\nb-module with the fine bornology is bornologically torsionfree if and only if it is torsionfree in the purely algebraic sense. For the rest of this article, we briefly write ``torsionfree'' instead of ``bornologically torsionfree''. \end{definition} \begin{lemma}\label{lem:complete-category} The category of complete, bornologically torsionfree \(\dvr\)-modules is complete. \end{lemma} \begin{proof} It suffices to check that this category is closed under kernels and products. Given a map \(f \colon M \to N\) of complete bornological \(\dvr\)-modules, its kernel is closed, and hence a complete bornological \(\dvr\)-submodule of \(M\), by \cite{Meyer-Mukherjee:Bornological_tf}*{Theorem 2.3}. For products, consider a family \((M_i)_{i \in I}\) of complete, bornological \(\dvr\)-modules. Then the product bornology on \(\prod_{i \in I} M_i\) turns it into a complete bornological \(\dvr\)-module. The kernel of a map between bornologically torsionfree \(\dvr\)-modules is a submodule with the subspace bornology, and is hence bornologically torsionfree by \cite{Meyer-Mukherjee:Bornological_tf}*{Lemma 4.2}. Finally, if \((M_i)_{i \in I}\) is a family of bornologically torsionfree \(\dvr\)-modules, then there are bornological embeddings \(M_i \subseteq M_i \otimes \dvf\) for each \(i\) by \cite{Meyer-Mukherjee:Bornological_tf}*{Proposition 4.3}. These embeddings are kernels in the category of bornological \(\dvr\)-modules, and kernels commute with products.
So there is a bornological embedding \(\prod_{i \in I} M_i \subseteq \prod_{i \in I} M_i \otimes \dvf\), from which we conclude that \(\prod_{i \in I} M_i\) is bornologically torsionfree. \end{proof} \begin{definition}[\cites{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean, Meyer-Mukherjee:Bornological_tf}] \label{def:dagger_algebra} We call a bornological \(\dvr\)\nb-algebra~\(D\) \emph{semidagger} if, for every bounded subset \(S \subseteq D\), the \(\dvr\)\nb-submodule \(\sum_{i=0}^\infty \dvgen^i S^{i+1}\) is bounded in~\(D\). A complete, torsionfree, semidagger bornological \(\dvr\)\nb-algebra is called a \emph{dagger algebra}. \end{definition} \begin{example} \label{exa:resf_semidagger} Any \(\resf\)\nb-algebra with the fine bornology is semidagger and complete. \end{example} \begin{example} \label{exa:Banach_algebra} Let~\(B\) be a Banach \(\dvf\)\nb-algebra. We assume the norm of~\(B\) to be submultiplicative. Let \(D\subseteq B\) be the unit ball. Then \(D\cdot D\subseteq D\), and~\(D\) becomes a \(\dvgen\)\nb-adically complete, torsionfree \(\dvr\)\nb-algebra. Conversely, if such an algebra~\(D\) is given, then \(D\hookrightarrow D\otimes \dvf\) and there is a unique norm on \(D\otimes \dvf\) with unit ball~\(D\). Let~\(D\) be the unit ball of a Banach \(\dvf\)\nb-algebra as above. Then we call~\(D\) with the bornology where all subsets are bounded a \emph{Banach \(\dvr\)\nb-algebra}. This bornology makes~\(D\) a dagger algebra. \end{example} \begin{definition}[\cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}] Any bornology on a \(\dvr\)\nb-algebra~\(D\) is contained in a smallest semidagger bornology, namely, the bornology generated by the \(\dvr\)\nb-submodules of the form \(\sum_{i=0}^\infty \dvgen^i S^{i+1}\), where \(S \subseteq D\) is bounded in the original bornology. This is called the \emph{linear growth bornology}. We denote~\(D\) with the linear growth bornology by~\(\ling{D}\).
\end{definition} If~\(D\) is torsionfree, then the completion \(D^\updagger \defeq \comb{\ling{D}}\) is a dagger algebra (see \cite{Meyer-Mukherjee:Bornological_tf}*{Proposition~3.8} or, in slightly different notation, \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Lemma 3.1.12}). \begin{definition}[\cites{Meyer-Mukherjee:HA, Meyer-Mukherjee:HL}]\label{def:fine-mod-p} A bornological \(\dvr\)\nb-module~\(M\) is called \emph{fine mod~\(\dvgen\)} if the quotient bornology on \(M/\dvgen M\) is the fine one. Equivalently, any bounded subset is contained in \(\mathcal{F}+\dvgen M\) for a finitely generated \(\dvr\)\nb-submodule \(\mathcal{F}\subseteq M\). \end{definition} \begin{example} Any nuclear bornological \(\dvr\)-algebra (\cite{Meyer-Mukherjee:HL}*{Definition 3.1}) is fine mod \(\dvgen\). Examples of such algebras include torsionfree \(\dvr\)-algebras with the fine bornology, and any torsionfree \(\dvr\)-module with the bornology where a subset \(S\) is bounded if it is contained in a bounded \(\dvr\)-module \(T\) and there is \((t_n) \in c_0(\N) \cap T\) such that \(S = \setgiven{s = \sum_{n=0}^\infty c_n t_n}{(c_n) \in l^\infty(\N,\dvr), s \text{ converges in } T}\). \end{example} \subsection{Topological \(K\)-theories in the nonarchimedean context}\label{subsec:existing-theories} In this subsection, we recall some pre-existing constructions of topological \(K\)-theory in the context of nonarchimedean Banach algebras, due to Calvo and Hamida (\cites{hamida, calvo}). These are defined by modifying the interval objects of homotopy invariant versions of algebraic \(K\)-theory, namely \(KV\)-theory (\cite{kv}) and \(KH\)-theory (\cite{kh}), taking into account the topology on the algebra. For nonarchimedean Banach algebras, a natural choice of interval object is the algebra of power series \[ \dvr\gen{x_1,\dotsc,x_n} = \setgiven{\sum_{I \in \N^n} c_I x^I}{\lim_I \abs{c_I} = 0} \] convergent on the unit polydisc.
Equipped with the Gauss norm \(\abs{\sum c_I x^I} \defeq \max_I \abs{c_I}\), this is a Banach \(\dvr\)-algebra. One then defines a simplicial ring \[ \dvr\gen{\Delta^\bullet} \defeq [n] \mapsto \dvr\gen{x_0,\dotsc, x_n}/ \gen{\sum_{i=0}^n x_i - 1},\] where the \(0\)-th term is just \(\dvr\). Now for any Banach \(\dvr\)-algebra \(A\), we can form the simplicial ring \(A\gen{\Delta^\bullet} \defeq A \hot \dvr\gen{\Delta^\bullet}\), where \(\hot\) denotes the completed, projective tensor product in the category of Banach \(\dvr\)-modules. Using this, Calvo and Hamida define the \textit{topological \(K\)-theory} of a unital Banach algebra \(A\) as the spectrum \[\mathsf{K}^{\mathrm{top}}(A) \defeq \mathsf{K}(A \gen{\Delta^\bullet}),\] and its homotopy groups \(K_n^{\mathrm{top}}(A)\) for \(n\geq 1\) as the topological \(K\)-theory groups of \(A\). The extension to non-unital algebras involves, as usual, taking the homotopy fibre \(\mathsf{K}^{\mathrm{top}}(A) \defeq \mathsf{fib}(\mathsf{K}^{\mathrm{top}}(\tilde{A}) \to \mathsf{K}^{\mathrm{top}}(\dvr))\) of the unitalisation \(\tilde{A} = A \oplus \dvr\) with the product topology. Hamida's topological \(K\)-theory is homotopy invariant with respect to the closed unit disc \(\mathbb{A}^{1,\mathrm{an}}(1) = \mathrm{Sp}(\dvr\gen{x})\) with a \emph{fixed} radius (say radius \(1\)). The recent work of Kerz-Saito-Tamme \cite{MR4012551} develops a theory for Banach algebras over the fraction field \(\dvf\) that is homotopy invariant with respect to discs of all radii simultaneously. That is, their interval object is \(\mathbb{A}^{1,\mathrm{an}} = \mathrm{colim}_r \mathbb{A}^{1,\mathrm{an}}(r)\), where \(\mathbb{A}^{1,\mathrm{an}}(r) = \mathrm{Sp}(\dvf\gen{x}_r)\) for \(\dvf \gen{x}_r = \setgiven{\sum_{n \in \N}c_n x^n}{\lim \abs{c_n}r^n = 0}\).
For each such radius \(r>0\), they define the simplicial algebras \(A\gen{\Delta^\bullet}_r = A \hot \dvf\gen{\Delta^\bullet}_r\), which assemble into a projective system \[r \mapsto A\gen{\Delta^\bullet}_r\] of simplicial algebras upon varying the radius. Taking their connective algebraic \(K\)-theory yields a pro-spectrum \[k^{\mathrm{an}}(A) \defeq \lim_r \mathsf{K}(A \gen{\Delta^\bullet}_r),\] which they call the \textit{connective analytic \(K\)-theory} of an affinoid algebra \(A\). The extension to nonconnective pro-spectra involves a delooping construction, which the interested reader can find in \cite{MR4012551}*{Section 4.4}. Since our main motivation is to develop bivariant \(K\)-theory for torsionfree \(\dvr\)-algebras, we do not attempt to specialise our theory to that of (\cite{MR4012551}), but rather only construct an integral version of it. \section{Analytic homotopies, stability and extensions}\label{sec:analytic-homotopy} In what follows, let \(\mathsf{Alg}_\dvr^\mathrm{tf}\) denote the category of complete, bornologically torsionfree \(\dvr\)-algebras, whose morphisms are the bounded \(\dvr\)-algebra homomorphisms. \subsection{Analytic homotopies} Consider the overconvergent analytic \(n\)-simplex defined by \[\dvr\gen{\Delta^\bullet}^\updagger \defeq [n] \mapsto \dvr\gen{\Delta^n}^\updagger,\] where \(\dvr \gen{\Delta^n}^\updagger = \dvr[t_0,\dotsc,t_n]^\updagger/ \gen{\sum_{i = 0}^n t_i - 1}\). This is a simplicial object in the category \(\mathsf{Alg}_\dvr^\mathrm{tf}\). \begin{lemma}\label{lem:3} The simplicial ring \(\dvr\gen{\Delta^\bullet}^\updagger\) is weakly contractible. \end{lemma} \begin{proof} Denote by \((d_i)_{i \geq 0}\) the face maps of the simplicial group \(\dvr\gen{\Delta^\bullet}^\updagger\). These are defined as \[d_i(f)(t_0,\dotsc,t_{n-1}) = f(t_0,\dotsc,t_{i-1},0,t_i,\dotsc,t_{n-1})\] for \(f \in \dvr\gen{\Delta^n}^\updagger\).
By the Yoneda lemma, a \(1\)-simplex \(x_0 \in \dvr\gen{\Delta^1}^\updagger\) corresponds to a morphism of simplicial sets \(f_{x_0} \colon \Delta[1] \to \dvr\gen{\Delta^\bullet}^\updagger\) such that \(f_{x_0}(\delta_0) = d_1(x_0) = 1\) and \(f_{x_0}(\delta_1) = d_0(x_0) = 0\) for \(\delta_0\) and \(\delta_1 \in \Hom_{\Delta}([0],[1])\). The required simplicial homotopy is given by \(\dvr\gen{\Delta^\bullet}^\updagger \times \Delta[1] \to \dvr\gen{\Delta^\bullet}^\updagger\), \((g,t) \mapsto f_{x_0}(t)\cdot g\). This shows that the identity on \(\dvr\gen{\Delta^\bullet}^\updagger\) is null-homotopic, as required. \end{proof} Using the overconvergent analytic simplex, we simplicially enrich our category. Let \(A\) be a complete, bornologically torsionfree \(\dvr\)-algebra. We define the simplicial ring \(A\gen{\Delta^\bullet}^\updagger \colon [n] \mapsto A \hot \dvr \gen{\Delta^n}^\updagger\). The \textit{mapping space} bifunctor \(\Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}} \colon {\mathsf{Alg}_\dvr^\mathrm{tf}}^\op \times \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathbb{S}\) is defined by \begin{equation}\label{eq:mapping-space} \Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A,B) \colon [n] \mapsto \Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A, B\gen{\Delta^n}^\updagger), \end{equation} with the evident composition maps for triples of algebras in \(\mathsf{Alg}_\dvr^\mathrm{tf}\). With the following lemma, we conclude that \(\mathsf{Alg}_\dvr^\mathrm{tf}\) is a simplicial category. \begin{lemma}\label{lem:simplicial-category} For \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\), the contravariant representable functor \(\Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(-,A)\) has a left adjoint defined by the functor \[A^{(-)} \colon \mathbb{S} \to {\mathsf{Alg}_\dvr^\mathrm{tf}}^\op, \quad A^X \defeq \lim_{\Delta^n \to X} A\gen{\Delta^n}^\updagger,\] for a simplicial set \(X\). \end{lemma} \begin{proof} This is part of a general construction.
Let \(\mathcal{C}\) be a locally small category with colimits, and \(F \colon \Delta \to \mathcal{C}\) a covariant functor. Then the functor \(R \colon \mathcal{C} \to \mathbb{S}\) defined by \(R(c)[n] \defeq \Hom_{\mathcal{C}}(F([n]), c)\) has a left adjoint, given by the Kan extension of \(F\) along the Yoneda embedding \(y \colon \Delta \to \mathbb{S}\). The resulting object is a coend, which is a colimit and hence exists by hypothesis. In our case, the category \(\mathcal{C} = {\mathsf{Alg}_\dvr^\mathrm{tf}}^\op\), and the functor \(F\) is the contravariant functor \(\Delta^\op \to \mathsf{Alg}_\dvr^\mathrm{tf}\), \([n] \mapsto A\gen{\Delta^n}^\updagger\). A coend in \(\mathcal{C}\) is an end in \(\mathcal{C}^\op\), which exists in our case since \(\mathsf{Alg}_\dvr^\mathrm{tf}\) has all limits by Lemma~\ref{lem:complete-category}. \qedhere \end{proof} \begin{lemma}\label{lem:2} For \(X \in \mathbb{S}\) and \(B \in \mathsf{Alg}_\dvr^\mathrm{tf}\), we have \(B^X \cong \Hom_{\mathbb{S}}(X, B\gen{\Delta^\bullet}^\updagger)\). \end{lemma} \begin{proof} By the adjunction in Lemma \ref{lem:simplicial-category}, we have \[\Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A,B^X) \cong \Hom_{\mathbb{S}}(X, \Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A,B)),\] for \(X \in \mathbb{S}\) and \(A\), \(B \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Then for \(A = t \dvr[t]\) with the fine bornology, bounded algebra homomorphisms \(A \to C\) to a complete, bornologically torsionfree algebra \(C\) are in bijection with bounded \(\dvr\)-linear maps \(\dvr \to C\), which in turn are in bijection with \(C\). This applies to \(B^X\) on the left hand side, and to each of the \(n\)-simplices \(\Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A, B\gen{\Delta^n}^\updagger)\) on the right hand side, yielding the desired result. \end{proof} Now let \(\mathbb{S}_*\) denote the category of pointed simplicial sets.
For \((K,*) \in \mathbb{S}_*\), we define \begin{multline*} A^{(K,*)} \defeq \mathsf{Hom}_{\mathbb{S}_*}((K,*), A\gen{\Delta^\bullet}^\updagger) \\ \cong \ker(\Hom_{\mathbb{S}}(K, A\gen{\Delta^\bullet}^\updagger) \to \Hom_{\mathbb{S}}(*, A\gen{\Delta^\bullet}^\updagger)) \cong \ker(A^K \to A) \end{multline*} for an algebra \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\). We will need the following at several points in the paper: \begin{lemma}\label{lem:4} Let \(K\) be a finite simplicial set, \(*\) a base point of \(K\), and \(A\) a complete, bornologically torsionfree \(\dvr\)-algebra. Then there are natural isomorphisms \[\dvr^K \hot A \cong A^K, \quad \dvr^{(K,*)} \hot A \cong A^{(K,*)}.\] \end{lemma} \begin{proof} We first consider the unpointed part. Here we need to show that the canonical map \[\dvr^K \hot A = (\underset{\Delta^n \to K}\lim \dvr \gen{\Delta^n}^\updagger ) \hot A \to \underset{\Delta^n \to K}\lim (\dvr\gen{\Delta^n}^\updagger \hot A) = A^K\] is an isomorphism, that is, that tensoring with \(A\) preserves these limits. Since \(K\) is a finite simplicial set, it suffices to show that \(A \hot -\) commutes with finite limits, that is, with finite products and kernels. Since the category of complete bornological modules is additive, finite products are finite direct sums, which the completed bornological tensor product preserves. That tensoring with \(A\) preserves kernels follows from \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Proposition 2.4.5}. The pointed part follows from the fact that the extension of complete bornological \(\dvr\)-modules \[\dvr^{(K,*)} \rightarrowtail \dvr^K \twoheadrightarrow \dvr\] splits by a bounded \(\dvr\)-linear section, so that \(\dvr^K \cong \dvr^{(K,*)} \oplus \dvr\). Now tensor with \(A\) and use the unpointed part to conclude that \(A \hot \dvr^{(K,*)} \cong \ker(A^K \to A) \cong A^{(K,*)}\).
\end{proof} Now consider the simplicial subdivision functor \(\mathsf{sd} \colon \mathbb{S} \to \mathbb{S}\) and its accompanying natural transformation \(h \colon \mathsf{sd} \Rightarrow 1\) (see \cite{Goerss-Jardine:Simplicial}*{III.4} for the construction). There is an induced pro-system of simplicial sets \[\mathsf{sd}^\bullet(K) \colon \mathsf{sd}^0(K) \overset{h_K}\leftarrow \mathsf{sd}^1(K) \overset{h_{\mathsf{sd}(K)}}\leftarrow \cdots.\] The functor \(A^{(-)} \colon \mathbb{S}^\op \to \mathsf{Alg}_\dvr^\mathrm{tf}\) extends to one on inductive systems \(\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\) of complete, torsionfree bornological algebras by termwise application. Applied to \(\mathsf{sd}^\bullet(K)\), we get an inductive system \(A^{\mathsf{sd}^\bullet(K)} = \setgiven{A^{\mathsf{sd}^n(K)}}{n \in \N}\) of complete, torsionfree bornological algebras. Fixing \(K\), the functor \((-)^{\mathsf{sd}^\bullet(K)} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\) admits an extension to the category \(\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\). The following carries over from the algebraic setting mutatis mutandis: \begin{lemma}\label{lem:5} For \(A \in \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\), the functor \(A^{\mathsf{sd}^\bullet(-)} \colon \mathbb{S}^\op \to \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\) preserves finite limits. \end{lemma} \begin{proof} The simplicial subdivision functor \(\mathsf{sd} \colon \mathbb{S} \to \mathbb{S}\) is a left adjoint functor, so it preserves all colimits. Furthermore, for \(B \in \mathsf{Alg}_\dvr^\mathrm{tf}\), the functor \(B^{(-)}\) is a right adjoint functor, so it preserves all limits. So it takes colimits in \(\mathbb{S}^\op\) to limits in \(\mathsf{Alg}_\dvr^\mathrm{tf}\).
Now if \(A = (A_i)_{i \in I} \in \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\), then \[A^{\mathsf{sd}^\bullet(\mathrm{colim}_l K_l)} = \setgiven{\lim_l A_i^{\mathsf{sd}^n(K_l)}}{(i,n) \in I \times \N}\] is a limit, since finite limits are computed termwise in \(\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\). \end{proof} Now let \(A\) and \(B\) be inductive systems of complete, bornologically torsionfree \(\dvr\)-algebras. We can define their mapping space \[\Hom_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A,B) \defeq ([n] \mapsto \Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A, B\gen{\Delta^n}^\updagger))\] by extending the mapping space bifunctor defined in Equation~\ref{eq:mapping-space} to inductive systems of algebras. We can also define \[ \HOM_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A,B) \defeq ([n] \mapsto \Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A, B^{\mathsf{sd}^\bullet(\Delta^n)})), \] and the two mapping spaces are related as follows: \begin{proposition}\label{prop:6} Let \(A\) and \(B\) be inductive systems of complete, torsionfree bornological algebras. Then \(\mathsf{Hom}_{\mathbb{S}}(K, \HOM_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A,B)) \cong \Hom_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A,B^{\mathsf{sd}^{\bullet}(K)})\). Furthermore, when \(A\) is a constant inductive system, \(\HOM_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A,B)\) is a fibrant simplicial set (that is, a Kan complex). \end{proposition} \begin{proof} The proofs in \cite{Cortinas-Thom:Bivariant_K}*{Proposition 3.2.2, Theorem 3.2.3} carry over mutatis mutandis. \end{proof} We now introduce the notion of homotopy that is relevant for us. Recall that \[A\gen{\Delta^1}^\updagger = A \hot \dvr[t]^\updagger,\] where \(\dvr[t]^\updagger\) denotes the dagger completion of the polynomial ring in one variable.
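For orientation, we note the standard coefficientwise description of this dagger completion (compare \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}): writing \(\abs{\cdot}\) for the \(\dvgen\)-adic absolute value, \[\dvr[t]^\updagger = \setgiven{\sum_{n=0}^\infty c_n t^n}{\lim_n \abs{c_n} r^n = 0 \text{ for some } r > 1},\] so that, in contrast with the Tate algebra \(\dvr\gen{t}\), the coefficients of an overconvergent power series are required to decay at some geometric rate.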
There is a canonical inclusion homomorphism \(\iota \colon A \to A \gen{\Delta^1}^\updagger\) splitting the evaluation homomorphisms \(\ev_t \colon A \gen{\Delta^1}^\updagger \to A\) at \(t = 0,1\). An \textit{elementary homotopy} between two bounded \(\dvr\)-algebra homomorphisms \(f_0, f_1 \colon A_0 \rightrightarrows A_1\) is a bounded \(\dvr\)-algebra homomorphism \(F \colon A_0 \to A_1\gen{\Delta^1}^\updagger\) satisfying \(\ev_t \circ F = f_t\) for \(t = 0,1\). We say that two morphisms between complete, torsionfree bornological algebras are \textit{homotopic} if they can be connected by a finite chain of elementary homotopies; that is, homotopy is the equivalence relation generated by elementary homotopies. Denote by \([A,B]\) the set of homotopy classes of bounded algebra homomorphisms \(A \to B\). Now let \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\). We define \[A^{\mathcal{S}^1} \defeq A^{(\mathsf{sd}^\bullet(S^1), *)}, \quad A^{\mathcal{S}^{n+1}} \defeq (A^{\mathcal{S}^n})^{\mathcal{S}^1},\] using which we can define the homotopy groups of the mapping space: \begin{theorem}\label{thm:mapping space} There are natural isomorphisms \[[A, B^{\mathcal{S}^1}] \cong \pi_1(\HOM_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A, B)).\] \end{theorem} \begin{proof} To see that the two group structures are isomorphic, we make obvious modifications to the argument of \cite{Cortinas-Thom:Bivariant_K}*{Lemma 3.3.1}. \end{proof} Similarly, the Eckmann-Hilton argument implies that we can define higher homotopy groups as \([A, B^{\mathcal{S}^n}] \cong \pi_n(\HOM_{\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})}^\bullet(A, B))\), which are abelian for \(n \geq 2\). \subsection{Stability}\label{subsec:stability} Let \(X\) and \(Y\) be torsionfree bornological \(\dvr\)-modules, and let \[\gen{\cdot , \cdot} \colon Y \otimes X \to \dvr\] be a surjective \(\dvr\)-linear map. Such a map is automatically bounded.
A pair \((X,Y)\) of torsionfree bornological \(\dvr\)-modules with a choice of surjection as above is called a \textit{matricial pair}. Given a matricial pair, one can define \(\mathcal{M}(X,Y)\) as the \(\dvr\)-module \(X \otimes Y\) with the product \[ (x_1 \otimes y_1) (x_2 \otimes y_2) \defeq \gen{y_1, x_2} x_1 \otimes y_2.\] This multiplication is bounded and automatically turns \(\mathcal{M}(X,Y)\) into a semidagger algebra. Its completion \(\mathcal{M}(X,Y)^\updagger = \comb{\mathcal{M}(X,Y)}\) is therefore a dagger algebra by \cite{Meyer-Mukherjee:Bornological_tf}*{Theorem 5.3}. \begin{remark} In the case of locally convex \(\C\)-algebras, the above definition recovers all reasonable notions of stability. For instance, if we take \(X = Y = \C^n\), then \(\mathcal{M}(X,Y) \cong \mathbb{M}_n(\C)\). When \(X = Y = \bigoplus_{n \in \N} \C\), we get \(\comb{\mathcal{M}(X,Y)} \cong \mathbb{M}_\infty(\C)\). For \(X = Y = l^2(\N)\), \(\comb{\mathcal{M}(X,Y)} \cong \mathcal{L}^1(l^2(\N))\), the algebra of trace class operators on the Hilbert space \(l^2(\N)\). Here \(\comb{\mathcal{M}(X,Y)}\) refers to the completed, projective tensor product \(X \hot Y\), with the appropriate extension of the bilinear form \(Y \otimes X \to \C\). \end{remark} Recall from \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Section 6} that a \textit{homomorphism} between two matricial pairs \((X,Y)\) and \((W,Z)\) is a pair \(f = (f_1,f_2)\) of bounded linear maps \(f_1 \colon X \to W\) and \(f_2 \colon Y \to Z\) such that \(\gen{f_2(y), f_1(x)} = \gen{y,x}\) for all \(x \in X\) and \(y \in Y\). An \textit{elementary homotopy} is a pair \(H = (H_1,H_2)\) of bounded linear maps \(H_1 \colon X \to W[t]\) and \(H_2 \colon Y \to Z\) or \(H_1 \colon X \to W\) and \(H_2 \colon Y \to Z[t]\) such that \[ \begin{tikzcd} Y \otimes X \arrow{r}{H_2 \otimes H_1} \arrow{d}{\gen{\cdot,\cdot}} & Z \otimes W[t] \arrow{d}{\gen{\cdot, \cdot } \otimes \mathrm{id}} \\ \dvr \arrow{r}{\subseteq} & \dvr[t] \end{tikzcd} \] commutes.
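For later use, we record the elementary computations with the pairing that underlie these definitions. Associativity of the product on \(\mathcal{M}(X,Y)\) follows from the bilinearity of the pairing: \[((x_1 \otimes y_1)(x_2 \otimes y_2))(x_3 \otimes y_3) = \gen{y_1,x_2}\gen{y_2,x_3}\, x_1 \otimes y_3 = (x_1 \otimes y_1)((x_2 \otimes y_2)(x_3 \otimes y_3)).\] Similarly, a homomorphism \(f = (f_1, f_2)\) of matricial pairs induces a bounded algebra homomorphism \(\mathcal{M}(X,Y) \to \mathcal{M}(W,Z)\), \(x \otimes y \mapsto f_1(x) \otimes f_2(y)\), because \[(f_1(x_1) \otimes f_2(y_1)) (f_1(x_2) \otimes f_2(y_2)) = \gen{f_2(y_1), f_1(x_2)}\, f_1(x_1) \otimes f_2(y_2) = \gen{y_1, x_2}\, f_1(x_1) \otimes f_2(y_2).\]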
Homomorphisms and homotopies of matricial pairs induce homomorphisms and dagger homotopies of algebras. We are mainly interested in homomorphisms where \(f_1 = f_2\), and we call the corresponding algebra homomorphisms \textit{standard homomorphisms}. Any pair \((x,y) \in X \times Y\) with \(\gen{y,x} = 1\) induces a bounded algebra homomorphism \[\iota \colon \dvr \to \mathcal{M}(X,Y)^\updagger, \quad 1 \mapsto x \otimes y.\] Now suppose that \(A\) is a complete, torsionfree bornological \(\dvr\)-algebra. Then there is a canonical map \[\iota_A \defeq \mathrm{id}_A \otimes \iota \colon A \to A \hot \mathcal{M}(X,Y)^\updagger,\] where the target algebra is bornologically torsionfree by \cite{Meyer-Mukherjee:Bornological_tf}*{Proposition 4.12}. Similarly, when \(A\) is an inductive system of complete, bornologically torsionfree algebras, we can define such a canonical map by applying the map above termwise. To simplify notation in the case where \(X = Y\), we denote the corresponding matrix algebra by \(\mathcal{M}_X^\updagger\). Our main cases of interest are the following matrix stabilisations: \begin{example}\label{ex:triv:matrix} Let \(X = Y = \bigoplus_{n \in \Lambda}\dvr\). This is the free \(\dvr\)-module on an arbitrary set \(\Lambda\), which we equip with the fine bornology. It has the characteristic functions \(\setgiven{\chi_n}{n \in \Lambda}\) as a basis, using which we can define the bilinear form \(\gen{\chi_n,\chi_m} = \delta_{n,m}\). The corresponding matrix algebra is \(\mathbb{M}_\Lambda\), the algebra of finitely supported \(\Lambda \times \Lambda\)-matrices, which we equip with the fine bornology. We will denote this matrix algebra by \(\mathcal{M}_\Lambda^{\mathrm{alg}}\). \end{example} \begin{example}\label{ex:adic-matrix} Again, let \(\Lambda\) be an arbitrary set. Now take \(X = Y = \coma{\bigoplus_{n \in \Lambda}\dvr} \cong c_0(\Lambda, \dvr)\), the Banach \(\dvr\)-module of null sequences \(\Lambda \to \dvr\) with the supremum norm.
This is a bornological module with the bornology where every subset is bounded. The bilinear form above extends to a bounded bilinear form \(c_0(\Lambda, \dvr) \otimes c_0(\Lambda, \dvr) \to \dvr\). The corresponding matrix algebra \(\comb{\mathcal{M}(c_0(\Lambda,\dvr), c_0(\Lambda,\dvr))}\) is isomorphic to \(c_0(\Lambda \times \Lambda,\dvr)\) with the convolution product. We will denote this matrix algebra by \(\mathcal{M}_\Lambda^{\mathrm{cont}}\). \end{example} \begin{example} \label{exa:length-controlled_matrices} Let \(l \colon \Lambda \to \N\) be a proper function, that is, for each \(n\in\N\) the set of \(x\in\Lambda\) with \(l(x) \le n\) is finite. Define~\(\dvr^{(\Lambda)}\) as in Example~\ref{ex:triv:matrix} and give it the bornology that is cofinally generated by the \(\dvr\)\nb-submodules \[ S_m \defeq \sum_{\lambda\in\Lambda} \dvgen^{\floor{l(\lambda)/m}} \chi_\lambda \] for \(m\in\N^*\). The bilinear form in Example~\ref{ex:triv:matrix} remains bounded for this bornology on~\(\dvr^{(\Lambda)}\). So \(\mathcal{M}(\dvr^{(\Lambda)},\dvr^{(\Lambda)})\) with the tensor product bornology from the above bornology is a bornological algebra as well. It is torsionfree and semi-dagger. So its dagger completion is the same as its completion. We denote it by~\(\mathcal{M}_\Lambda^l\). It is isomorphic to the algebra of infinite matrices \((c_{x,y})_{x,y\in\Lambda}\) for which there is \(m\in\N^*\) such that \(c_{x,y} \in \dvgen^{\floor{(l(x)+l(y))/m}}\) for all \(x,y\in\Lambda\); this is the same as asking for \(\lim {}\abs*{c_{x,y} \dvgen^{-\floor{(l(x)+l(y))/m}}} = 0\) because~\(l\) is proper. It makes no difference to replace the exponent of~\(\dvgen\) by \(\floor{l(x)/m}+\floor{l(y)/m}\) or \(\floor{\max \{l(x),l(y)\}/m}\) because we may vary~\(m\). \end{example} \begin{example} \label{exa:filtered_length-controlled_matrices} Let~\(\Lambda\) be a set with a filtration by a directed set~\(I\).
That is, there are subsets \(\Lambda_S\subseteq \Lambda\) for \(S\in I\) with \(\Lambda_S \subseteq \Lambda_T\) for \(S\le T\) and \(\Lambda = \bigcup_{S\in I} \Lambda_S\). Let \(l\colon \Lambda \to \N\) be a function whose restriction to~\(\Lambda_S\) is proper for each \(S\in I\). For \(S\in I\), form the matrix algebra~\(\mathcal{M}_{\Lambda_S}^l\) as in Example~\ref{exa:length-controlled_matrices}. These algebras for \(S\in I\) form an inductive system. Let \(\varinjlim \mathcal{M}_{\Lambda_S}^l\) be its bornological inductive limit. This bornological algebra is also associated to a matricial pair, namely, the pair based on \(\varinjlim \dvr^{(\Lambda_S)}\), where each \(\dvr^{(\Lambda_S)}\) carries the bornology described in Example~\ref{exa:length-controlled_matrices}. \end{example} \begin{lemma}\label{lem:matrix-homotopy} Let \(F_0, F_1 \colon \mathcal{M}_X^\updagger \to \mathcal{M}_Y^\updagger\) be two standard homomorphisms, and let \[\iota_{\mathcal{M}_Y^\updagger} \colon \mathcal{M}_Y^\updagger \to \mathbb{M}_2 \hot \mathcal{M}_Y^\updagger\] be the canonical inclusion. Then \(\iota_{\mathcal{M}_Y^\updagger} \circ F_0\) and \(\iota_{\mathcal{M}_Y^\updagger} \circ F_1\) are dagger homotopic. \end{lemma} \begin{proof} We first observe that \(\mathbb{M}_2 \hot \mathcal{M}_Y^\updagger \cong \mathcal{M}_{Y \hot \dvr^2}^\updagger\). Picking an orthonormal basis \(\delta_{e_1}\) and \(\delta_{e_2}\) relative to the bilinear form in Example \ref{ex:triv:matrix}, we can define a linear homotopy \(x \mapsto (1-t) F_0(x) \otimes \delta_{e_1} + t F_1(x) \otimes \delta_{e_2}\) between \(F_0 \otimes \delta_{e_1}\) and \(F_1 \otimes \delta_{e_2}\). Similarly, we can define a linear homotopy between \(F_1 \otimes \delta_{e_1} \) and \(F_1 \otimes \delta_{e_2}\). Concatenating them, we get a homotopy \(X \to Y \hot \dvr^2\) between the homomorphisms \(F_0 \otimes \delta_{e_1}\) and \(F_1 \otimes \delta_{e_1}\).
This induces the required dagger homotopy between the algebra homomorphisms \(\iota_{\mathcal{M}_Y^\updagger} \circ F_0\) and \(\iota_{\mathcal{M}_Y^\updagger} \circ F_1\). \end{proof} Now let \(Z\) be a torsionfree bornological \(\dvr\)-module with a nondegenerate, symmetric bilinear form \(\gen{\cdot,\cdot}_Z \colon Z \otimes Z \to \dvr\), and \(\mathcal{M}_Z^\updagger\) its associated matrix algebra. Let \(\Gamma^Z \subseteq \mathsf{End}_\dvr(\mathcal{M}_Z^\updagger)\) be the multiplier algebra of \(\mathcal{M}_Z^\updagger\). Explicitly, this consists of pairs \((l,r)\) of right and left module maps \(\mathcal{M}_Z^\updagger \to \mathcal{M}_Z^\updagger\) such that \(x \cdot l(y) = r(x) \cdot y\) for \(x\), \(y \in \mathcal{M}_Z^\updagger\). We equip this with the equibounded bornology induced by \(\mathsf{Hom}_\dvr(\mathcal{M}_Z^\updagger, \mathcal{M}_Z^\updagger)\). Now suppose \(f_1\) and \(f_2\) are bounded linear maps \(Z \to Z\) that satisfy \(\gen{f_1(x),y}_Z = \gen{x,f_2(y)}_Z\); we then call the pair \(f = (f_1,f_2)\) an \textit{adjoint pair}. Such a pair yields a multiplier via \[f_1 \cdot (x \otimes y) \defeq f_1(x) \otimes y, \quad (x \otimes y) \cdot f_2 \defeq x \otimes f_2(y).\] When we additionally have \(f_2 f_1 = 1_Z\), we call the pair an \textit{adjointable isometry}. Such a pair induces a bounded homomorphism \[\mathrm{Ad}_f \colon \mathcal{M}_Z^\updagger \to \mathcal{M}_Z^\updagger , T \mapsto f_1 \cdot T \cdot f_2.\] Note that since we assume the bilinear form \(\gen{\cdot,\cdot}_Z\) to be non-degenerate, the map \(f_2\) in an adjoint pair is unique if it exists. The symmetry assumption on the bilinear form implies that \(f_2^* = f_1\). Now consider the inductive system \[\mathcal{M}_{\infty,Z}^\updagger \defeq (\mathbb{M}_n)_{n \in \N} \otimes \mathcal{M}_Z^\updagger \] of complete bornological \(\dvr\)-algebras, where we equip the algebras \(\mathbb{M}_n\) with the fine bornology.
At each level \(n\), we have \(\mathbb{M}_n \otimes \mathcal{M}_Z^\updagger \cong \mathcal{M}_{Z \otimes \dvr^n}^\updagger\). Tensoring with the identity on \((\mathbb{M}_n)_{n \in \N}\), we get an induced endomorphism \(1_{(\mathbb{M}_n)_n} \otimes \mathrm{Ad}_f \colon \mathcal{M}_{\infty, Z}^\updagger \to \mathcal{M}_{\infty, Z}^\updagger\). \begin{lemma}\label{lem:important-matrix} Let \(f = (f_1,f_2)\) be an adjointable isometry on a torsionfree bornological \(\dvr\)-module \(Z\). Then \(\iota_{\mathcal{M}_Z^\updagger} \circ \mathrm{Ad}_f\) is dagger homotopic to \(\iota_{\mathcal{M}_Z^\updagger}\), and \(1_{(\mathbb{M}_n)_{n \in \N}} \otimes \mathrm{Ad}_f\) is dagger homotopic to the identity on \(\mathcal{M}_{\infty,Z}^\updagger\). \end{lemma} \begin{proof} The map \(\mathrm{Ad}_f\) is the standard homomorphism corresponding to the pair \((f_1,f_2)\). So by Lemma \ref{lem:matrix-homotopy}, \(\iota_{\mathcal{M}_Z^\updagger} \circ \mathrm{Ad}_f \) is dagger homotopic to \(\iota_{\mathcal{M}_Z^\updagger}\). Consequently, \(\iota_{\mathcal{M}_{\infty,Z}^\updagger} \circ (1_{(\mathbb{M}_n)_{n \in \N}} \otimes \mathrm{Ad}_f) \colon \mathcal{M}_{\infty, Z}^\updagger \to \mathbb{M}_2 \otimes \mathcal{M}_{\infty,Z}^\updagger\) is ind-homotopic to \(\iota_{\mathcal{M}_{\infty,Z}^\updagger}\). Iteratively, for each \(n\), we obtain maps \[\iota_{\mathcal{M}_{\infty, Z}^\updagger}\circ (1_{(\mathbb{M}_n)_{n \in \N}} \otimes \mathrm{Ad}_f), \iota_{\mathcal{M}_{\infty, Z}^\updagger} \colon \mathcal{M}_{\infty,Z}^\updagger \rightrightarrows \mathbb{M}_{2n} \otimes \mathcal{M}_{\infty,Z}^\updagger.\] So it suffices to show that if \(f,g \colon A \to (\mathbb{M}_n)_{n \in \N} \otimes B\) are homomorphisms of inductive systems of complete, torsionfree bornological algebras such that \(\iota \circ f \sim \iota \circ g\), then \(f \sim g\). This follows from the same argument as in \cite{Cortinas-Thom:Bivariant_K}*{Lemma 4.1.1}.
\end{proof} To see the relevance of Lemma \ref{lem:important-matrix}, consider a torsionfree bornological \(\dvr\)-module \(Z\) with a bilinear form that satisfies \(Z \oplus Z \cong Z\) and \(Z \hot Z \cong Z\), and the isomorphisms preserve the bilinear forms. This happens whenever the underlying set \(\Lambda\) in Examples \ref{ex:triv:matrix}, \ref{ex:adic-matrix}, \ref{exa:filtered_length-controlled_matrices} is infinite, using any choice of set-theoretic bijections \(\Lambda \sqcup \Lambda \cong \Lambda\) and \(\Lambda \times \Lambda \cong \Lambda\). In the complex case, this happens for any separable Hilbert space. We refer to such bornological \(\dvr\)-modules \(Z\) as \textit{product stable} bornological \(\dvr\)-modules. We then define the \textit{direct sum} \(\oplus \colon \mathcal{M}_Z^\updagger \oplus \mathcal{M}_Z^\updagger \to \mathbb{M}_2(\mathcal{M}_Z^\updagger) \cong \mathcal{M}_Z^\updagger\) and the \textit{tensor product} \(\mathcal{M}_Z^\updagger \hot \mathcal{M}_Z^\updagger \to \mathcal{M}_{Z \hot Z}^\updagger \cong \mathcal{M}_Z^\updagger\) operations on the algebra \(\mathcal{M}_Z^\updagger\). These definitions extend to the ind-algebra \(\mathcal{M}_{\infty, Z}^\updagger\). By Lemma \ref{lem:important-matrix}, these operations are associative and commutative up to homotopy, and the tensor product distributes over the direct sum. Consequently, we get a homotopy semi-ring \((\mathcal{M}_{\infty,Z}^\updagger , \oplus, \otimes)\). For two inductive systems of complete, bornologically torsionfree algebras \(A\) and \(B\), we define \[\{A,B\} \defeq [A, \mathcal{M}_{\infty,Z}^\updagger\hot B],\] where \(Z\) is a product stable bornological \(\dvr\)-module (with a choice of bilinear form). To shorten notation, we will often denote \(\mathcal{M}_{\infty, Z}^\updagger(B) \defeq \mathcal{M}_{\infty, Z}^\updagger \hot B\).
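For instance, let \(Z = \dvr^{(\N)}\) with the bilinear form of Example \ref{ex:triv:matrix}. Any set-theoretic bijections \(\N \sqcup \N \cong \N\) and \(\N \times \N \cong \N\) induce isomorphisms \(Z \oplus Z \cong Z\) and \(Z \otimes Z \cong Z\) of bornological \(\dvr\)-modules, and these match the bilinear forms because \[\gen{\chi_a \otimes \chi_b, \chi_c \otimes \chi_d} = \gen{\chi_a,\chi_c} \gen{\chi_b,\chi_d} = \delta_{a,c}\delta_{b,d}\] for \(a\), \(b\), \(c\), \(d \in \N\). So \(\dvr^{(\N)}\) is product stable.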
For each such choice of matrix stabilisation, we can define a category \(\mathsf{Alg}_{\mathcal{M}_{\infty, Z}^\updagger}^\mathrm{tf}\) whose objects are inductive systems of complete, torsionfree bornological algebras, and whose morphisms are \(\Hom_{\mathsf{Alg}_{\mathcal{M}_{\infty, Z}^\updagger}^\mathrm{tf}}(A,B) \defeq \{A,B\}.\) That this really is a category is shown in the following: \begin{lemma}\label{lem:composition-matrix} For three algebras \(A\), \(B\), \(C \in \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\), we have a well-defined associative composition rule \(\{B,C\} \times \{A,B\} \to \{A,C\}\). The identity is given by the homotopy class of the map \(\iota_A\) above. \end{lemma} \begin{proof} Consider representatives of homotopy classes of maps \(f \colon A \to \mathcal{M}_{\infty, Z}^\updagger(B)\) and \(g \colon B \to \mathcal{M}_{\infty, Z}^\updagger(C)\) in \(\{A,B\}\) and \(\{B,C\}\). Let \(m \colon \mathcal{M}_{\infty, Z}^\updagger \hot \mathcal{M}_{\infty, Z}^\updagger \to \mathcal{M}_{\infty, Z}^\updagger\) be the tensor product of matrices. Then the composition \[A \overset{f}\to \mathcal{M}_{\infty,Z}^\updagger(B) \overset{1 \otimes g}\to \mathcal{M}_{\infty,Z}^\updagger \hot \mathcal{M}_{\infty,Z}^\updagger(C) \overset{m \otimes 1}\to \mathcal{M}_{\infty,Z}^\updagger \hot C,\] represents the composition \([g] \star [f]\) in \(\{A,C\}\). \end{proof} We say that two algebras are \textit{matrix homotopy equivalent} if they are isomorphic in the category \(\mathsf{Alg}_{\mathcal{M}_{\infty, Z}^\updagger}^\mathrm{tf}\). An algebra \(A\) is \textit{matricially stable} if it is isomorphic to \(\mathcal{M}_{\infty, Z}^\updagger(A)\). There is a canonical map \([A,B] \to \{A,B\}\), which is an isomorphism if \(B\) is matricially stable. We end this section by showing that an \(\mathbb{M}_2\)-stable functor acts trivially on inner endomorphisms.
Recall that a functor \(F \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{A}\) into an abelian category is \textit{\(\mathbb{M}_2\)-stable} if the canonical map \(A \to \mathbb{M}_2(A)\) induces an isomorphism \(F(A) \to F(\mathbb{M}_2(A))\). \begin{proposition}\label{prop:M2-inner-endo} Let \(F\) be an \(\mathbb{M}_2\)-stable functor, \(B \in \mathsf{Alg}_\dvr^\mathrm{tf}\) and \(A \subseteq B\) a bornological subalgebra. Suppose there are elements \(x\), \(y \in B\) such that \[y A x \subseteq A, \quad a xy b = ab, \quad [\dvr, x] = [\dvr, y] = 0\] for \(a\), \(b \in A\). Then \(\mathsf{Ad}(x,y) \colon A \to A\), \(a \mapsto y a x\) is a bounded \(\dvr\)-algebra homomorphism, and \(F(\mathsf{Ad}(x,y)) = \mathrm{id}_{F(A)}\). \end{proposition} \begin{proof} Let \(\iota_1\) and \(\iota_2 \colon A \to \mathbb{M}_2(A)\) denote the two corner embeddings into the upper left and lower right corners. Then \(F(\iota_1)\) is an isomorphism by assumption. Furthermore, conjugation by the matrix \(\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\) defines an inner automorphism \(\sigma \colon \mathbb{M}_2(A) \to \mathbb{M}_2 (A)\) such that \(\sigma \circ \iota_2 = \iota_1\). Consequently, \(F(\iota_2)\) is invertible as well. Now consider the map \(\mathsf{Ad}(x \oplus \mathrm{id}, y \oplus \mathrm{id}) \colon \mathbb{M}_2(A) \to \mathbb{M}_2(A)\); it satisfies \(\mathsf{Ad}(x \oplus \mathrm{id}, y \oplus \mathrm{id}) \circ \iota_1 = \iota_1 \circ \mathsf{Ad}(x, y)\) and \(\mathsf{Ad}(x \oplus \mathrm{id}, y \oplus \mathrm{id}) \circ \iota_2 = \iota_2\). Since \(F(\iota_2)\) is invertible, the second identity says that \(F(\mathsf{Ad}(x \oplus \mathrm{id}, y \oplus \mathrm{id})) = \mathrm{id}_{F(\mathbb{M}_2(A))}\). Since \(F(\iota_1)\) is invertible, the first equality says that \(F(\mathsf{Ad}(x,y)) = \mathrm{id}_{F(A)}\).
\end{proof} \subsection{Extensions of complete bornological algebras} Let \(K \overset{f}\rightarrowtail E \overset{g}\twoheadrightarrow Q\) be an extension of inductive systems of complete, bornologically torsionfree \(\dvr\)-algebras. This can be represented by a diagram \((f_\alpha \colon K_\alpha \rightarrowtail E_\alpha)_{\alpha}\) and \((g_\alpha \colon E_\alpha \twoheadrightarrow Q_\alpha)_{\alpha}\) of bounded \(\dvr\)-algebra homomorphisms, where \(f_\alpha = \ker(g_\alpha)\) and \(g_\alpha = \coker(f_\alpha)\). An extension as above is called \textit{semi-split} if \(g\) has a bounded \(\dvr\)-linear section. We can now define several canonical extensions as in \cite{Cortinas-Thom:Bivariant_K}. \begin{example}[Path extension]\label{ex:path-extension} Let \(\Omega \defeq \dvr^{(S^1, *)} \cong \setgiven{f \in \dvr[t]^\updagger}{f(0) = f(1) = 0} \cong t(t-1) \dvr[t]^\updagger\). By definition, this is part of an extension of complete bornological algebras \[ \Omega \rightarrowtail \dvr[t]^\updagger \overset{(\ev_0,\ev_1)}\twoheadrightarrow \dvr \oplus \dvr.\] By Lemma \ref{lem:4}, we can tensor with a complete, bornologically torsionfree algebra \(A\) and get an extension \(\Omega (A) \rightarrowtail A\gen{\Delta^1}^\updagger \overset{(\ev_0, \ev_1)}\twoheadrightarrow A \oplus A\), which we call the \textit{path extension} of \(A\). Here \(\Omega(A) = \Omega \hot A \cong t(t-1) A\gen{\Delta^1}^\updagger\). This is split by the bounded \(\dvr\)-linear section defined by \(A \oplus A \to A\gen{\Delta^1}^\updagger\), \((a_1, a_2) \mapsto (1-t)a_1 + t a_2\). \end{example} Next, we come to the \textit{universal extension}. Let \(F \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{CBorn}_\dvr^\mathrm{tf}\) be the canonical forgetful functor that forgets the algebra structure.
This has a left adjoint given by the \textit{tensor algebra} of a complete, bornologically torsionfree \(\dvr\)-module \(\tilde{T}(M) \defeq \bigoplus_{n \ge 1} M^{\hot n}\), whose multiplication is given by concatenation of pure tensors. The tensor algebra is complete and bornologically torsionfree because \(M\) is so; this uses \cite{Meyer-Mukherjee:Bornological_tf}*{Theorem 4.6 and Proposition 4.12} and that completions and torsionfreeness are hereditary for direct sums. By termwise application, these functors extend to inductive systems of complete bornologically torsionfree \(\dvr\)-algebras and modules. We still denote them by \(\tilde{T}\) and \(F\), and denote their composition by \(\tens = \tilde{T} \circ F \colon \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf}) \to \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\). The free-forgetful adjunction applied to the identity on an algebra \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\) yields a semi-split extension \[\jens (A) \rightarrowtail \tens (A) \overset{\eta_A}\twoheadrightarrow A\] of complete, torsionfree bornological algebras, where \(\jens(A) \defeq \ker(\eta_A)\). The \(\dvr\)-linear splitting is given by the obvious inclusion \(A \to \tens A\) into the first tensor factor. This extension is universal in the following sense: \begin{lemma}\label{lem:universal-extension} Let \(K \rightarrowtail E \twoheadrightarrow A\) be a semi-split extension of complete, bornologically torsionfree \(\dvr\)-algebras. Then there is a morphism of extensions \[ \begin{tikzcd} \jens (A) \arrow{r} \arrow{d}{\gamma_A} & \tens (A) \arrow{r} \arrow{d} & A \arrow{d}{1_A} \\ K \arrow{r} & E \arrow{r} & A, \end{tikzcd} \] where the map \(\gamma_A\) is called the \textit{classifying map} of the extension. Furthermore, the map \(\gamma_A\) is unique up to homotopy. \end{lemma} \begin{proof} The proof of \cite{Cortinas-Thom:Bivariant_K}*{Proposition 4.4.1} works mutatis mutandis.
\end{proof} More generally, let \(K \rightarrowtail E \twoheadrightarrow B\) be any semi-split extension of complete, torsionfree bornological algebras, and let \(f \colon A \to B\) be an algebra homomorphism. Composing \(f\) with a choice of section \(B \to E\), we get a bounded \(\dvr\)-linear map \(A \to E\). Using the universal property of the tensor algebra, we get a bounded \(\dvr\)-algebra homomorphism \(\tens (A) \to E\) extending \(f\). This restricts to an algebra homomorphism \(\gamma_A \colon \jens(A) \to K\). Furthermore, as in Lemma \ref{lem:universal-extension}, different choices of sections produce algebra homomorphisms \(\jens (A) \to K\) that are homotopic to \(\gamma_A\). The following proposition clarifies the functoriality of the classifying map: \begin{proposition}\label{prop:classifying-map-functorial} If \[ \begin{tikzcd} A \arrow{r} \arrow{d}{f} & B \arrow{r} \arrow{d} & C \arrow{d}{g} \\ A' \arrow{r} & B' \arrow{r} & C' \end{tikzcd} \] is a morphism of semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras, then there is a homotopy commuting diagram \[ \begin{tikzcd} \jens(C) \arrow{r}{\gamma_C} \arrow{d}{\jens(g)} & A \arrow{d}{f} \\ \jens(C') \arrow{r}{\gamma_{C'}} & A'. \end{tikzcd} \] \end{proposition} \begin{proof} The proof of \cite{Cortinas-Thom:Bivariant_K}*{Proposition 4.4.2} works mutatis mutandis. \end{proof} \begin{lemma}\label{lem:tensoring-univ-extension} Let \(A\) be a complete, bornologically torsionfree \(\dvr\)-algebra. Then for any complete, torsionfree bornological algebra \(R\), we have a semi-split extension \[ R \hot \jens(A) \rightarrowtail R \hot \tens(A) \twoheadrightarrow R \hot A\] of complete, bornologically torsionfree \(\dvr\)-algebras. \end{lemma} \begin{proof} By Lemma \ref{lem:4}, tensoring by a complete bornologically torsionfree \(\dvr\)-module preserves bornological embeddings.
Applying this to the universal semi-split extension \(\jens(A) \rightarrowtail \tens(A) \twoheadrightarrow A\), we get the required extension that splits by the map \(1_R \otimes \sigma_A \colon R \hot A \to R \hot \tens A\), where \(\sigma_A\) is the section of \(\tens(A) \twoheadrightarrow A\). \end{proof} \begin{corollary}\label{cor:universal-simplicial-extension} Let \(K\) be a finite pointed simplicial set. Then there is a homotopy class of maps \(\jens(A^K) \to \jens(A)^K\), given by the classifying map of the extension \[\jens(A)^K \rightarrowtail \tens(A)^K \twoheadrightarrow A^K.\] These maps are natural in the sense that if \(K \to L\) is a morphism of simplicial sets, then there is a homotopy commuting diagram \[ \begin{tikzcd} \jens(A^K) \arrow{r} \arrow{d} & \jens(A)^K \arrow{d} \\ \jens(A^L) \arrow{r} & \jens(A)^L. \end{tikzcd} \] \end{corollary} \begin{proof} Consider the universal tensor algebra extension \(\jens(A) \rightarrowtail \tens(A) \twoheadrightarrow A\). Tensoring by \(\dvr^K\) viewed as a complete bornological \(\dvr\)-algebra with the fine bornology, we again get an extension \[ \jens(A) \hot \dvr^K \rightarrowtail \tens(A) \hot \dvr^K \twoheadrightarrow A \hot \dvr^K \] of complete, bornologically torsionfree \(\dvr\)-algebras. The result now follows from Lemma \ref{lem:4}. Now suppose \(K \to L\) is a morphism of finite simplicial sets. Then the functoriality of the assignment \(K \mapsto A^K\) gives a morphism of complete bornological \(\dvr\)-algebras \(\dvr^K \to \dvr^L\). Tensoring by \(A\), we get a map \(g \colon A^K \to A^L\) using Lemma \ref{lem:4}. So we have a morphism of extensions \[ \begin{tikzcd} \jens(A^K) \arrow{r} \arrow{d}{f} & \tens(A^K) \arrow{r} \arrow{d} & A^K \arrow{d}{g} \\ \jens(A^L) \arrow{r} & \tens(A^L) \arrow{r} & A^L. \end{tikzcd} \] Now use Proposition \ref{prop:classifying-map-functorial} and Lemma \ref{lem:tensoring-univ-extension}.
\end{proof} So far, we have constructed the \(J\)-functor on the homotopy category of complete, bornologically torsionfree algebras. The following lemma shows that the assignment \(A \mapsto \jens A\) is compatible with matrix stabilisations, relative to any product stable bornological \(\dvr\)-module \(Z\): \begin{lemma}\label{lem:J-functor-matrices} The functor \(\jens \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Alg}_\dvr^\mathrm{tf}\) extends to a functor on the category \(\mathsf{Alg}_{\mathcal{M}_{\infty, Z}^\updagger}^\mathrm{tf}\). \end{lemma} \begin{proof} Any morphism in \(\mathsf{Alg}_{\mathcal{M}_{\infty, Z}^\updagger}^\mathrm{tf}\) can be represented by a bounded \(\dvr\)-algebra homomorphism \(f \colon A \to \mathcal{M}_{\infty, Z}^\updagger(B)\). This induces a map \(\jens(A) \to \jens(\mathcal{M}_{\infty, Z}^\updagger(B))\) (see Proposition \ref{prop:classifying-map-functorial}). Now apply Lemma \ref{lem:tensoring-univ-extension} to the algebra \(R = \mathcal{M}_{\infty, Z}^\updagger\), which provides a map \(\jens(\mathcal{M}_{\infty, Z}^\updagger \hot B) \to \jens(B) \hot \mathcal{M}_{\infty, Z}^\updagger\). The composition of these two maps yields the required representative \(\jens(A) \to \jens(B) \hot \mathcal{M}_{\infty, Z}^\updagger\) of a homotopy class in \([\jens A, \mathcal{M}_{\infty, Z}^\updagger \hot \jens B]\). \end{proof} We now define an extension that will be instrumental in defining the mapping space for bivariant analytic \(kk\)-theory and in proving several of its properties. Let \(A\) be a complete, bornologically torsionfree \(\dvr\)-algebra. Define \(P \defeq \dvr^{(\Delta^1, *)}\) and \(\mathcal{P} \defeq \dvr^{(\mathsf{sd}^\bullet(\Delta^1), *)}\), where \(\mathsf{sd}^\bullet\) is the simplicial subdivision functor discussed before Lemma \ref{lem:5}.
Then \(P \cong \ker(\dvr[t]^\updagger \overset{\ev_0}\to \dvr)\), and we have an extension of complete, bornologically torsionfree algebras \[ \Omega \rightarrowtail P \overset{\ev_1} \twoheadrightarrow \dvr,\] which upon tensoring with \(A\) yields the semi-split extension \[\Omega(A) \rightarrowtail P(A) \twoheadrightarrow A,\] called the \textit{loop extension}. Here \(\Omega(A) = \Omega \hot A\) and \(P(A) = P \hot A\), and we have used Lemma \ref{lem:4} to justify that tensoring by \(A\) is indeed exact. The \(\dvr\)-linear splitting is the one induced by \(A \subseteq A \oplus A \to A \gen{\Delta^1}^\updagger\). By the universal property of the tensor algebra extension, there is a natural map \[\varrho_A \colon \jens (A) \to \Omega(A)\] which is unique up to homotopy. In a similar manner, there is a semi-split extension \[A^{\mathcal{S}^1} \rightarrowtail \mathcal{P}(A) \overset{\ev_1}\twoheadrightarrow A,\] which yields the classifying map \(\jens(A) \to A^{\mathcal{S}^1}\); it can also be obtained as the composition \(\jens(A) \to \Omega(A) \to A^{\mathcal{S}^1}\). Now let \(f \colon A \to B\) be a bounded \(\dvr\)-algebra homomorphism. Then taking the pullback \begin{equation}\label{eq:mapping-path} \begin{tikzcd} \Omega(B) \arrow{r} \arrow{d} & P(B) \times_B A \arrow{d} \arrow{r} & A \arrow{d}{f} \\ \Omega(B) \arrow{r} & P(B) \arrow{r}{\ev_1} & B, \end{tikzcd} \end{equation} we get the \textit{mapping path extension of \(f\)}. Let us now consider the matrix algebra \(\mathcal{M}_{\N}^{\mathrm{alg}}\) from Example \ref{ex:triv:matrix}. It is the matrix algebra corresponding to the matricial pair \(X = Y = \dvr^{(\N)}\). Its elements are the finitely supported functions \(\N \times \N \to \dvr\). It is a dagger algebra with the fine bornology.
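Concretely, the elementary tensors \(e_{n,m} \defeq \chi_n \otimes \chi_m \in \mathcal{M}_{\N}^{\mathrm{alg}}\) multiply like matrix units, \[ e_{n,m} e_{k,l} = \gen{\chi_m, \chi_k}\, \chi_n \otimes \chi_l = \delta_{m,k}\, e_{n,l},\] which identifies \(\mathcal{M}_{\N}^{\mathrm{alg}}\) with the algebra of finitely supported \(\N \times \N\)-matrices over \(\dvr\).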
Consider the algebra \(\Gamma^\dvr\) of functions \(f \colon \N \times \N \to \dvr\) such that the set of values \(\setgiven{f(n,m)}{(n,m) \in \N \times \N}\) is finite, and there is an \(N \in \N\) such that the supports of the functions \(f(n,-) \colon \N \to \dvr\) and \(f(-,n) \colon \N \to \dvr\) have at most \(N\) elements for all \(n \in \N\). This is Karoubi's cone ring over \(\dvr\). Equipped with the fine bornology, it becomes a complete, torsionfree bornological \(\dvr\)-algebra, which we denote simply by \(\Gamma\). Furthermore, \(\Gamma\) contains \(\mathcal{M}_\N^{\mathrm{alg}}\) as a (closed) ideal, and the quotient \(\Sigma = \Gamma /\mathcal{M}_\N^{\mathrm{alg}}\) is again a complete bornological \(\dvr\)-algebra, called the \textit{suspension algebra}. \begin{lemma}\label{lem:suspension-torsionfree} The suspension algebra \(\Sigma = \Gamma/\mathcal{M}_\N^{\mathrm{alg}}\) defined above is bornologically torsionfree. \end{lemma} \begin{proof} Let \(\mathcal{M}_\infty^\Z\) and \(\Gamma^\Z\) denote the algebras of finitely supported functions \(\N \times \N \to \Z\), and Karoubi's cone ring over \(\Z\). They yield an extension \[\mathcal{M}_{\infty}^{\Z} \rightarrowtail \Gamma^\Z \twoheadrightarrow \Sigma^\Z\] of \(\Z\)-algebras as in \cite{Cortinas-Thom:Bivariant_K}*{Equation 31}. This is also a split exact sequence of free \(\Z\)-modules. Tensoring with \(\dvr\) produces an extension of \(\dvr\)-algebras \[\mathcal{M}_\infty^{\dvr} \rightarrowtail \Gamma^\dvr \twoheadrightarrow \Sigma^\dvr\] that splits by a \(\dvr\)-linear map. The fine bornology functor is exact, so applying it yields an extension \[\mathcal{M}^{\mathrm{alg}} \rightarrowtail \Gamma \twoheadrightarrow \Sigma\] of complete, bornological \(\dvr\)-algebras with a bounded \(\dvr\)-linear splitting. The existence of such a splitting implies that \(\Sigma\) is torsionfree, and since it has the fine bornology, it is also bornologically torsionfree.
\end{proof} The extension \(\mathcal{M}^{\mathrm{alg}} \rightarrowtail \Gamma \twoheadrightarrow \Sigma\) is called the \textit{algebraic cone extension} of \(\dvr\). Tensoring it by a complete, torsionfree bornological \(\dvr\)-algebra \(A\), we get an extension \[\mathcal{M}^{\mathrm{alg}}(A) \rightarrowtail \Gamma(A) \twoheadrightarrow \Sigma(A),\] which we call the \textit{algebraic cone extension} of \(A\). Next we consider the matrix algebra \(\mathcal{M}_Z^\updagger\) corresponding to \(Z = \coma{\bigoplus_{n \in \N} \dvr} \cong c_0(\N)\), denoted as before by \(\mathcal{M}^{\mathrm{cont}}\). Recall from Example \ref{ex:adic-matrix} that this is the Banach \(\dvr\)-algebra \(c_0(\N \times \N)\) of functions \(\N \times \N \to \dvr\) vanishing at infinity, with the supremum norm. Now since the quotient in the extension \(\mathcal{M}_\infty^{\dvr} \rightarrowtail \Gamma^\dvr \twoheadrightarrow \Sigma^\dvr\) is torsionfree, \(\dvgen\)-adic completion is exact. Consequently, we get a semi-split extension \[\coma{\mathcal{M}_\infty^\dvr} \rightarrowtail \coma{\Gamma^\dvr} \twoheadrightarrow \coma{\Sigma^\dvr}\] of \(\dvgen\)-adically complete \(\dvr\)-algebras. This is also a bornological extension if we equip the algebras with the bornology where all subsets are bounded, thereby yielding a semi-split extension \(\mathcal{M}^{\mathrm{cont}} \rightarrowtail \coma{\Gamma} \twoheadrightarrow \coma{\Sigma}\) of complete, bornologically torsionfree \(\dvr\)-algebras. Finally, tensoring with a complete, bornologically torsionfree \(\dvr\)-algebra \(A\), we again have an extension of complete, torsionfree bornological \(\dvr\)-algebras \[\mathcal{M}^{\mathrm{cont}}(A) \rightarrowtail \coma{\Gamma}(A) \twoheadrightarrow \coma{\Sigma}(A),\] which we call the \textit{continuous cone extension} of \(A\). 
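To illustrate the cone extension, let \(e_{n,m} \in \Gamma\) denote the matrix with a single nonzero entry \(1\) in position \((n,m)\), and consider the shifts \(S \defeq \sum_{m \in \N} e_{m+1,m}\) and \(T \defeq \sum_{m \in \N} e_{m,m+1}\). Both take only the values \(0\) and \(1\) and have at most one nonzero entry in each row and column, so they belong to Karoubi's cone \(\Gamma\), and \[T S = \sum_{m \in \N} e_{m,m} = 1, \qquad S T = 1 - e_{0,0}.\] Since \(e_{0,0} \in \mathcal{M}_{\N}^{\mathrm{alg}}\), the images of \(S\) and \(T\) in the suspension \(\Sigma\) are mutually inverse, so the unilateral shift becomes invertible in \(\Sigma\).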
\begin{remark}\label{rem:Calkin} The \(\dvgen\)-adically completed suspension extension defined above can be regarded as a nonarchimedean analogue of the \textit{Calkin extension} \(\mathbb{K}(l^2(\N)) \rightarrowtail \mathbb{B}(l^2(\N)) \twoheadrightarrow \mathcal{Q}(l^2(\N))\). A possible alternative, which more closely resembles the \(C^*\)-algebraic version, uses the notion of bounded operators on a \(\dvgen\)-adic Hilbert space \(\dvf(X) = \setgiven{\psi \colon X \to \dvf}{\abs{\psi(x)} \leq 1 \text{ for all but finitely many } x \in X}\). This is studied in \cite{claussnitzer2019aspects}. However, we find the suspension algebra defined above more conducive to the type of stability results we seek and have proved in \cite{Cortinas-Meyer-Mukherjee:NAHA}. It is quite plausible that the \(\dvgen\)-adic Hilbert spaces and the corresponding operator algebras are special cases of matrix algebras arising from matricial pairs as in the Archimedean case. \end{remark} \begin{remark}\label{rem:analytic-cone} The reader may have noticed that we have not defined a version of the cone extension whose kernel is the matrix algebra from Example \ref{exa:filtered_length-controlled_matrices}. This is because dagger completion is \emph{not} an exact functor, so starting with the algebraic cone extension does not work. \end{remark} In light of Remark \ref{rem:analytic-cone}, from now on, we restrict our attention to stabilisations by the Banach algebra \(\mathcal{M}^{\mathrm{cont}}\). We now move on to the \textit{Toeplitz extension}. In the algebraic case, there is a semi-split extension \[\mathcal{M}_\infty^\Z \rightarrowtail \mathcal{T}^\Z \twoheadrightarrow \Z[t,t^{-1}]\] of \(\Z\)-algebras. This induces an extension of \(\dvr\)-algebras \(\mathcal{M}_\infty^\dvr \rightarrowtail \mathcal{T}^\dvr \twoheadrightarrow \dvr[t,t^{-1}]\). With the fine bornology, this becomes an extension of complete, bornologically torsionfree \(\dvr\)-algebras.
Finally, if \(A\) is a complete, bornologically torsionfree \(\dvr\)-algebra, then there is an induced semi-split extension \[\mathcal{M}^{\mathrm{alg}}(A) \rightarrowtail \mathcal{T}(A) \twoheadrightarrow A[t,t^{-1}]\] of complete bornologically torsionfree \(\dvr\)-algebras. Here \(\tans (A) = \tans \otimes A\) and \(A[t,t^{-1}] = A \otimes \dvr[t,t^{-1}]\). The interesting extension is, of course, the \textit{\(\dvgen\)-adically completed Toeplitz extension} \[\mathcal{M}^{\mathrm{cont}}(A) \rightarrowtail \coma{\tans} \hot A \twoheadrightarrow \coma{\dvr[t,t^{-1}]} \hot A,\] which is obtained by taking the \(\dvgen\)-adic completion of the (algebraic) Toeplitz extension, equipping the algebras involved with the bornology where all subsets are bounded, and then tensoring with \(A\). \subsection{Free products and quasi-homomorphisms} We now discuss the free-double construction. Let \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Then the \textit{free product} \(Q(A) \defeq A * A\) of \(A\) with itself is a complete, bornological \(\dvr\)-algebra. Note that this exists for algebras internal to any closed symmetric monoidal category with direct sums that commute with \(\otimes\). That is, for any such category \(\mathcal{C}\), it is defined by the universal property \[\mathsf{Alg}(A * B, D) \cong \mathsf{Alg}(A,D) \times \mathsf{Alg}(B,D)\] for algebras \(A\), \(B\), \(D \in \mathsf{Alg}(\mathcal{C})\). Back to our case, setting \(q(A) \defeq \ker(Q(A) \to A)\), we get a split extension of complete, bornologically torsionfree \(\dvr\)-algebras \(q(A) \rightarrowtail Q(A) \twoheadrightarrow A\), where the splitting is given by each of the two canonical inclusions \(\iota_1, \iota_2 \colon A \rightrightarrows Q(A)\). Being sections, these inclusions satisfy \[\iota_1(a) - \iota_2(a) \in q(A)\] for all \(a \in A\).
That is, the pair \((\iota_1, \iota_2)\) is a \textit{quasi-homomorphism} in the following sense: \begin{definition}\label{def:quasi-homomorphism} Let \(A\), \(B\) and \(D\) be complete, bornologically torsionfree \(\dvr\)-algebras, and let \(B \unlhd D\) be an ideal. A pair of bounded \(\dvr\)-algebra homomorphisms \(f,g \colon A \rightrightarrows D\) is called a \textit{quasi-homomorphism} if \(f(a) -g(a) \in B\) for all \(a \in A\), and the linear map \(f - g\) is bounded. \end{definition} The quasi-homomorphism \((\iota_1, \iota_2)\) above is universal in the following sense: \begin{lemma}\label{lem:quasi-homomorphism-universal} Suppose that \((f,g) \colon A \to D \unrhd B\) is a quasi-homomorphism. Then there is a unique bounded \(\dvr\)-algebra homomorphism \(f * g \colon Q(A) \to D\) such that the following diagram commutes: \[ \begin{tikzcd} A \arrow{r}{\iota_1} \arrow[r, swap, shift right, "\iota_2"] \arrow{d}{=} & Q(A) \arrow{d}{} & q(A) \arrow[l, swap, tail, "\unrhd"] \arrow{d}{} \\ A \arrow{r}{f} \arrow[r, swap, shift right, "g"] & D & B \arrow[l, tail, "\unrhd"]. \end{tikzcd} \] The induced map \(q(A) \to B\) is called the \textit{classifying map} of \((f,g)\). \end{lemma} \begin{proof} The pair \((f,g)\) induces a unique bounded algebra homomorphism \(Q(A) \to D\) by the universal property of free products. To describe this map explicitly, we first observe that a monomial \(a_1 \otimes b_1 \otimes \cdots \otimes a_n \otimes b_n\) is identified with \(\iota_1(a_1)\iota_2(b_1) \cdots \iota_1(a_n)\iota_2(b_n)\), and the image of alternating sums of such monomials under the maps \(\iota_1\) and \(\iota_2\) generates \(A * A\). The required map \(f*g \colon Q(A) \to D\) is defined on each such monomial by \(f*g(\iota_1(a_1)\iota_2(b_1) \cdots \iota_1(a_n)\iota_2(b_n)) \defeq f(a_1)g(b_1)\cdots f(a_n)g(b_n)\). It is bounded because \(f\) and \(g\) are bounded. By construction, this map makes the diagram above commute.
Furthermore, since \(f - g\) and the multiplication map \(D \times B \to B\) are bounded, and the map \(Q(A) \hot A \to q(A)\), \(x \otimes a \mapsto x \cdot (\iota_1(a) - \iota_2(a))\) has a bounded linear section, the restriction \(q(A) \to B\) is bounded. \end{proof} \section{Definition of bivariant analytic \(K\)-theory}\label{sec:definition-kk} We now define bivariant analytic \(K\)-theory. Let \(f \colon A \to B\) be a morphism in \(\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\). Consider the mapping path extension \[B^{\mathcal{S}^1} \rightarrowtail P(B) \times_B A \twoheadrightarrow A\] from Equation \eqref{eq:mapping-path}, obtained by pulling back the extension \(B^{\mathcal{S}^1} \rightarrowtail P(B) \twoheadrightarrow B\) along \(f\). As this is a semi-split extension, the universal property of the tensor algebra extension \(\jens(A) \rightarrowtail \tens(A) \twoheadrightarrow A\) produces a classifying map \(\jens(A) \to B^{\mathcal{S}^1}\). Using the functoriality of \(\jens\) and iterating, we get maps \[\jens^n(A) \to B^{\mathcal{S}^n} \to \mathcal{M}_{\infty,Z}^\updagger (B^{\mathcal{S}^n})\] for each \(n\). Let \([\alpha_n] \in [\jens^n(A), \mathcal{M}_{\infty,Z}^\updagger (B^{\mathcal{S}^n})]\) be a homotopy class represented by a bounded algebra homomorphism \(\jens^n(A) \to \mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^n})\). Here \(Z\) is a complete, torsionfree bornological \(\dvr\)-module with a bilinear form as in Subsection \ref{subsec:stability}.
Using the tensor algebra extension again \[ \begin{tikzcd} \jens^{n+1}(A) \arrow[r, tail] \arrow[d, "\alpha_{n+1}"] & \tens(\jens^n(A)) \arrow[r, two heads] \arrow{d}{} & \jens^n(A) \arrow{d}{\alpha_n} \\ \mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^{n+1}}) \arrow[r, tail] & P(\mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^n})) \arrow[r, two heads] & \mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^n}), \end{tikzcd} \] we get a bounded algebra homomorphism \(\alpha_{n+1}\) that represents a homotopy class in \([\jens^{n+1}(A), \mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^{n+1}})]\). The assignments \([\alpha_n] \mapsto [\alpha_{n+1}]\) are the structure maps of an inductive system of abelian groups. \begin{definition}\label{def:bivariant_K} For a fixed complete, torsionfree bornological \(\dvr\)-module \(Z\) with a bilinear form, the \textit{bivariant analytic \(K\)-theory} groups (relative to \(Z\)) are defined as the colimit \[ \mathsf{kk}^{\mathrm{an}}_Z(A,B) \defeq \mathsf{colim}_n [\jens^n A, \mathcal{M}_{\infty,Z}^\updagger (B^{\mathcal{S}^n})]\] of dagger homotopy classes of bounded \(\dvr\)-algebra homomorphisms. \end{definition} We now define a category whose morphisms are given by \(\mathsf{kk}^{\mathrm{an}}_Z(A,B)\) for inductive systems of complete, torsionfree bornological \(\dvr\)-algebras. Consider the endofunctors \(\jens, (-)^{\mathcal{S}^1} \colon \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf}) \rightrightarrows \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\). Recall that the loop extension \[A^{\mathcal{S}^1} \rightarrowtail \mathcal{P}(A) \overset{\ev_1}\twoheadrightarrow A\] induces the classifying map \(\varrho_A \colon \jens(A) \to A^{\mathcal{S}^1}\). This defines a natural transformation between the two endofunctors considered above.
More concretely, if \(f \colon A \to B\) is a morphism in \(\mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\), we have a homotopy commutative diagram \[ \begin{tikzcd} \jens(A) \arrow{r}{\varrho_A} \arrow{d}{\jens(f)} & A^{\mathcal{S}^1} \arrow{d}{f^{\mathcal{S}^1}} \\ \jens(B) \arrow{r}{\varrho_B} & B^{\mathcal{S}^1}, \end{tikzcd} \] where homotopy commutativity means that \begin{equation}\label{eq:well-defined1} f^{\mathcal{S}^1} \circ \varrho_A = \varrho_B \circ \jens(f) \in [\jens(A), B^{\mathcal{S}^1}]. \end{equation} There is a canonical map \([-,-] \to \{-,-\}\), so the same equality holds under its image in the latter group. Now consider the classifying map \(\gamma_A \colon \jens(A^{\mathcal{S}^1}) \to \jens(A)^{\mathcal{S}^1}\) constructed in Corollary \ref{cor:universal-simplicial-extension}. This yields the equality \begin{equation}\label{eq:well-defined2} \gamma_A \circ \jens(\varrho_A) = - \varrho_{\jens(A)} \colon \jens^2(A) \to \jens(A)^{\mathcal{S}^1} \quad \text{in } [\jens^2(A), \jens(A)^{\mathcal{S}^1}], \end{equation} where the equality of the two maps follows from the uniqueness of classifying maps up to homotopy. To discuss the composition rule in \(\mathsf{kk}^{\mathrm{an}}_Z\), we fix some notation. For an algebra \(A\), define \[ \gamma_A^{1,n} \defeq (\gamma_A)^{\mathcal{S}^{n-1}} \circ \dotsc \circ \gamma_{A^{\mathcal{S}^{n-2}}}^{\mathcal{S}^1} \circ \gamma_{A^{\mathcal{S}^{n-1}}} \colon \jens(A^{\mathcal{S}^{n}}) \to \jens(A)^{\mathcal{S}^{n}}\] and \begin{equation}\label{eqref:composition-kk} \gamma_{A}^{m,n} = \gamma_{\jens^{m-1}(A)}^{1,n} \circ \dotsc \circ \jens^{m-2} \gamma_{\jens A}^{1,n} \circ \jens^{m-1} \gamma_{A}^{1,n} \colon \jens^m(A^{\mathcal{S}^n}) \to \jens^m(A)^{\mathcal{S}^n} \end{equation} for \(m\), \(n \geq 0\). \begin{theorem}\label{lem:composition} Let \(A\), \(B\) and \(C \in \mathsf{Ind}(\mathsf{Alg}_\dvr^\mathrm{tf})\).
There is an associative composition product \[\mathsf{kk}^{\mathrm{an}}_Z(B,C) \times \mathsf{kk}^{\mathrm{an}}_Z(A,B) \to \mathsf{kk}^{\mathrm{an}}_Z(A,C)\] given by extending the composition of algebra homomorphisms. \end{theorem} \begin{proof} Let \([f] \in \mathsf{kk}^{\mathrm{an}}_Z(A,B)\) and \([g] \in \mathsf{kk}^{\mathrm{an}}_Z(B,C)\) be represented by the bounded \(\dvr\)-algebra homomorphisms \(f \colon \jens^n(A) \to \mathcal{M}_{\infty,Z}^\updagger(B^{\mathcal{S}^n})\) and \(g \colon \jens^m(B) \to \mathcal{M}_{\infty,Z}^\updagger(C^{\mathcal{S}^m})\). Their composition \([g] \circ [f]\) is represented by \begin{multline*} \jens^{n+m} (A) \overset{\jens^m(f)}\to \mathcal{M}_{\infty,Z}^\updagger \hot \jens^m (B^{\mathcal{S}^n}) \to \mathcal{M}_{\infty,Z}^\updagger \hot \jens^m(B)^{\mathcal{S}^n} \\ \to \mathcal{M}_{\infty,Z}^\updagger \hot \mathcal{M}_{\infty,Z}^\updagger \hot C^{\mathcal{S}^{n+m}} \to \mathcal{M}_{\infty,Z}^\updagger(C^{\mathcal{S}^{n+m}}). \end{multline*} Here the morphism \(\mathcal{M}_{\infty,Z}^\updagger \hot \jens^m(B^{\mathcal{S}^n}) \to \mathcal{M}_{\infty,Z}^\updagger \hot \jens^m(B)^{\mathcal{S}^n}\) is induced by the map \(\jens^m(B^{\mathcal{S}^n}) \to \jens^m(B)^{\mathcal{S}^n}\) from Equation \eqref{eqref:composition-kk}. Explicitly, the composition is represented by the class \([g^{\mathcal{S}^n} \circ (-1)^{mn}\gamma_B^{m,n}] \star [\jens^m(f)]\). That the definition of the composition does not depend on specific choices of representatives and is associative follows from Equations \eqref{eq:well-defined1} and \eqref{eq:well-defined2} and the naturality of the transformation \(\gamma_A\) discussed in Corollary \ref{cor:universal-simplicial-extension}. \end{proof} \begin{definition}\label{def:bivariant-analytic-category} We define a category \(\mathsf{kk}^{\mathrm{an}}_Z\) whose objects are complete, bornologically torsionfree \(\dvr\)-algebras, and whose morphisms are \(\mathsf{kk}^{\mathrm{an}}_Z(A,B)\) for two such algebras.
\end{definition} There is a canonical functor \(j \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{kk}^{\mathrm{an}}_Z\) which is the identity on objects and sends each morphism \(f \colon A \to B\) to its image under the canonical maps \[\Hom_{\mathsf{Alg}_\dvr^\mathrm{tf}}(A,B) \to [A,B] \to \{A,B\} \to \mathsf{kk}^{\mathrm{an}}_Z(A,B).\] A morphism \(f \colon A \to B\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\) is called a \textit{\(\mathsf{kk}^{\mathrm{an}}_Z\)-equivalence} if \(j(f)\) is invertible in the category \(\mathsf{kk}^{\mathrm{an}}_Z\). In this paper, we will mostly be interested in \(\mathsf{kk}^{\mathrm{an}}_Z\) for \(Z\) as in Example \ref{ex:adic-matrix}. We denote the resulting bivariant analytic \(K\)-theory simply by \(\mathsf{kk}^{\mathrm{an}}\) for the rest of this paper. \subsection{Excision} In this subsection, we prove that \(\mathsf{kk}^{\mathrm{an}}\) satisfies excision for semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras. The proof follows the same approach as \cite{Cuntz:Weyl}*{Section 5} or \cite{Cortinas-Thom:Bivariant_K}*{Section 6.3}, so we shall be brief. We first fix some notation: let \(f \colon A \to B\) be a bounded \(\dvr\)-algebra homomorphism between two such algebras. Consider the path algebra diagram \[ \begin{tikzcd} P(B) \times_B A \arrow{r}{p_f} \arrow{d} & A \arrow{d}{f} \\ P(B) \cong tB\gen{t}^\updagger \arrow{r}{\ev_1} & B \end{tikzcd} \] from Example \ref{ex:path-extension}. To shorten notation, we denote the pullback \(P(B) \times_B A\) by \(P_f\). When we use the path algebra \(\mathcal{P}(B) = B^{\mathsf{sd}^\bullet(\Delta^1)}\), we denote the resulting pullback by \(\mathcal{P}_f\). These two path algebras are \(\mathsf{kk}^{\mathrm{an}}\)-equivalent.
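Spelled out with the conventions above (we note this here for later reference; the verification is immediate), since \(\ker(\ev_1 \colon P(B) \to B) = \Omega(B)\), the pullback \(P_f\) sits in a semi-split extension
\[\Omega(B) \rightarrowtail P_f \overset{p_f}\twoheadrightarrow A,\]
where \(\Omega(B)\) embeds as the pairs \((\omega, 0)\) with \(\ev_1(\omega) = 0\); if \(s\) is a bounded linear section of \(\ev_1\), then \(a \mapsto (s(f(a)), a)\) is a bounded linear section of \(p_f\).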
Excision in the second variable can now be stated as follows: \begin{theorem}\label{thm:excision-1} Let \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\), and let \(A \overset{f}\rightarrowtail B \overset{g}\twoheadrightarrow C\) be a semi-split extension in \(\mathsf{Alg}_\dvr^\mathrm{tf}\). Then there is a natural long exact sequence \[\mathsf{kk}^{\mathrm{an}}(D, \Omega B) \overset{j(\Omega(g))_*}\to \mathsf{kk}^{\mathrm{an}}(D, \Omega C) \overset{\delta}\to \mathsf{kk}^{\mathrm{an}}(D, A) \overset{j(f)_*}\to \mathsf{kk}^{\mathrm{an}}(D, B) \overset{j(g)_*}\to \mathsf{kk}^{\mathrm{an}}(D, C)\] of \(\mathsf{kk}^{\mathrm{an}}\)-groups. \end{theorem} \begin{proof} By adapting the proof of \cite{Cuntz:Weyl}*{Lemma 5.1}, we see that the path extension of \(g\) yields a diagram \(\mathsf{kk}^{\mathrm{an}}(D, P_g) \to \mathsf{kk}^{\mathrm{an}}(D, B) \overset{j(g)_*}\to \mathsf{kk}^{\mathrm{an}}(D, C)\) that is exact in the middle. Since \(g\) is linearly split, there is a \(\mathsf{kk}^{\mathrm{an}}\)-equivalence between \(A\) and \(P_g\) by \cite{Cortinas-Thom:Bivariant_K}*{Lemma 6.3.2}, so that we can identify this diagram with the diagram \[\mathsf{kk}^{\mathrm{an}}(D,P_g) \cong \mathsf{kk}^{\mathrm{an}}(D, A) \overset{j(f)_*}\to \mathsf{kk}^{\mathrm{an}}(D, B) \overset{j(g)_*}\to \mathsf{kk}^{\mathrm{an}}(D,C).\] Applying the middle exactness of the path extension to the inclusion \(\iota_g \colon \Omega C \rightarrowtail P_g\), we again get a sequence \[\mathsf{kk}^{\mathrm{an}}(D, P_{\iota_g}) \to \mathsf{kk}^{\mathrm{an}}(D, \Omega C) \to \mathsf{kk}^{\mathrm{an}}(D,P_g) \cong \mathsf{kk}^{\mathrm{an}}(D,A)\] that continues the sequence above. The map \(\delta\) in the statement of the theorem is the composition \(\mathsf{kk}^{\mathrm{an}}(D, \Omega C) \to \mathsf{kk}^{\mathrm{an}}(D, P_g) \cong \mathsf{kk}^{\mathrm{an}}(D, A)\).
Now apply the analogue of \cite{Cortinas-Thom:Bivariant_K}*{Corollary 6.3.5} to the map \(g \colon B \twoheadrightarrow C\) to get the identification \(\mathsf{kk}^{\mathrm{an}}(D, \Omega B) \cong \mathsf{kk}^{\mathrm{an}}(D, P_{\iota_g})\), which completes the proof. \end{proof} Dually, we have: \begin{theorem}\label{thm:excision-2} Let \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\), and let \(A \overset{f}\rightarrowtail B \overset{g}\twoheadrightarrow C\) be a semi-split extension in \(\mathsf{Alg}_\dvr^\mathrm{tf}\). Then there is a natural long exact sequence \[\mathsf{kk}^{\mathrm{an}}(C, D) \overset{j(g)^*}\to \mathsf{kk}^{\mathrm{an}}(B, D) \overset{j(f)^*}\to \mathsf{kk}^{\mathrm{an}}(A, D) \overset{\delta}\to \mathsf{kk}^{\mathrm{an}}(\Omega C, D) \overset{j\Omega(g)^*}\to \mathsf{kk}^{\mathrm{an}}(\Omega B, D)\] of \(\mathsf{kk}^{\mathrm{an}}\)-groups. Here \(\delta\) is the composite \(\mathsf{kk}^{\mathrm{an}}(A, D) \cong \mathsf{kk}^{\mathrm{an}}(P_g, D) \to \mathsf{kk}^{\mathrm{an}}(\Omega C, D)\). \end{theorem} \begin{proof} We adapt the proof of \cite{Cortinas-Thom:Bivariant_K}*{Theorem 6.3.7} to our setting. Consider a semi-split bornological quotient map \(f \colon A \to B\). Then for an \([\alpha] \in \mathsf{kk}^{\mathrm{an}}(A,D)\) such that \(j(\pi_f)^*([\alpha]) = 0\) in \(\mathsf{kk}^{\mathrm{an}}(P_f, D)\), we can choose an \(n\) such that \(\alpha \circ \jens^n(\pi_f)\) is null-homotopic for a representative \(\alpha \colon \jens^nA \to \mathcal{M}_\infty^{\mathrm{cont}}(D^{\mathcal{S}^n})\).
As a consequence, there is a bounded \(\dvr\)-algebra homomorphism \(\varphi \colon \jens^n(P_f) \to \mathcal{P}(\mathcal{M}_\infty^{\mathrm{cont}}(D^{\mathcal{S}^n})) \cong \mathcal{M}_\infty^{\mathrm{cont}}(\mathcal{P}(D^{\mathcal{S}^n}))\) that is part of the following commuting diagram \[ \begin{tikzcd} \ker(\jens^n(\pi_f)) \arrow{r}{} \arrow{d} & \jens^n(P_f) \arrow{r}{\jens^n(\pi_f)} \arrow{d}{\varphi} & \jens^n(A) \arrow{d}{\alpha} \\ \mathcal{M}_\infty^{\mathrm{cont}}(D^{\mathcal{S}^{n+1}}) \arrow{r}{} & \mathcal{M}_\infty^{\mathrm{cont}}(\mathcal{P}(D^{\mathcal{S}^n})) \arrow{r}{} & \mathcal{M}_\infty^{\mathrm{cont}}(D^{\mathcal{S}^n}). \end{tikzcd} \] Now consider the composite map \[\beta \colon \jens^{n+1}(B) \to \jens^n(\Omega B) \to \ker(\jens^n(\pi_f)) \to \mathcal{M}_\infty^{\mathrm{cont}}(D^{\mathcal{S}^{n+1}}).\] Then \(j(f)^*[\beta] = [\alpha]\) by the uniqueness of the classifying map. This shows that the diagram \[\mathsf{kk}^{\mathrm{an}}(B,D) \overset{j(f)^*}\to \mathsf{kk}^{\mathrm{an}}(A,D) \overset{j(p_{f})^*}\to \mathsf{kk}^{\mathrm{an}}(P_f, D)\] is exact in the middle. The conclusion now follows from \cite{Cortinas-Thom:Bivariant_K}*{Corollary 6.3.3, Corollary 6.3.5}, which adapt to our setting. \end{proof} \subsection{Looping and delooping} Recall the \textit{loop functor} \(\Omega\) defined on objects as \(\Omega(A) \defeq \ker(P(A) \overset{\ev_1}\twoheadrightarrow A)\) and on morphisms \(f \colon A \to B\) using the functoriality of \(\Omega\) and the canonical map \([\Omega(A), \Omega(B)] \to \mathsf{kk}^{\mathrm{an}}(\Omega(A), \Omega(B))\). In this subsection, we promote \(\Omega\) to a functor on \(\mathsf{kk}^{\mathrm{an}}\) and show that it is an equivalence of categories: \begin{proposition}\label{prop:loop-fully-faithful} The functor \(\Omega \colon \mathsf{kk}^{\mathrm{an}} \to \mathsf{kk}^{\mathrm{an}}\) is fully faithful.
That is, \[\Omega \colon \mathsf{kk}^{\mathrm{an}}(A,B) \to \mathsf{kk}^{\mathrm{an}}(\Omega (A), \Omega (B))\] is an isomorphism of abelian groups. \end{proposition} \begin{proof} The same proof as in \cite{Cortinas-Thom:Bivariant_K}*{Lemma 6.3.8, Lemma 6.3.9} adapts to our setting. For clarity, we highlight the map in the other direction, called the \textit{delooping} map: it associates to a class \([\beta] \in \mathsf{kk}^{\mathrm{an}}(A^{\mathcal{S}^1}, B^{\mathcal{S}^1})\) represented by \(\beta \colon \jens^n(A^{\mathcal{S}^1}) \to B^{\mathcal{S}^{n+1}}\) the class in \(\mathsf{kk}^{\mathrm{an}}(A,B)\) represented by \(\jens^{n+1}(A) \to \jens^n(A^{\mathcal{S}^1}) \overset{\beta}\to B^{\mathcal{S}^{n+1}}\). In op.~cit., this map is shown to be well-defined at the level of \(kk\). \end{proof} The following is a consequence of excision: \begin{lemma}\label{lem:noncommutative-loop} Let \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Then the natural map \(\varrho_A \colon \jens A \to \Omega(A)\) induces a \(\mathsf{kk}^{\mathrm{an}}\)-equivalence. \end{lemma} \begin{proof} Since the algebras \(\tens A\) and \(P A\) are contractible, Theorem \ref{thm:excision-1} implies that the boundary maps \(\mathsf{kk}^{\mathrm{an}}(D, \Omega A) \to \mathsf{kk}^{\mathrm{an}}(D, \jens A)\) and \(\mathsf{kk}^{\mathrm{an}}(D, \Omega A) \to \mathsf{kk}^{\mathrm{an}}(D, \Omega A)\), associated with the tensor algebra extension and the path extension respectively, are isomorphisms for all \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\). By the naturality of the exact sequences in Theorem \ref{thm:excision-1}, we get a commuting diagram \[ \begin{tikzcd} \mathsf{kk}^{\mathrm{an}}(D, \Omega A) \arrow{r}{\cong} \arrow{d}{\mathrm{id}} & \mathsf{kk}^{\mathrm{an}}(D, \jens A) \arrow{d}{\varrho_A^*} \\ \mathsf{kk}^{\mathrm{an}}(D,\Omega A) \arrow{r}{\cong} & \mathsf{kk}^{\mathrm{an}}(D, \Omega A), \end{tikzcd} \] which implies the result when we put \(D = \jens A\).
\end{proof} The following description of \(\mathsf{kk}^{\mathrm{an}}\)-classes will be used in the subsequent sections: \begin{lemma}\label{lem:Lemma-6.3.11} Let \(f \colon \jens^n (A) \to B^{\mathcal{S}^n}\) denote a representative of a class in \(\mathsf{kk}^{\mathrm{an}}(A,B)\). The map induced in \(\mathsf{kk}^{\mathrm{an}}(\Omega^n A, \Omega^n B)\) by applying \(\Omega^n\) is given by the following composition: \[\Omega^n(A) \overset{j(\varrho_A^n)^{-1}}\to \jens^n (A) \overset{f}\to B^{\mathcal{S}^n} \cong \Omega^n B.\] \end{lemma} We now show essential surjectivity. Before we get there, we recall the \textit{infinite sum rings} used in the complex operator algebraic case by Cuntz and in the algebraic case by Corti\~nas-Thom: \begin{definition} Let \(A\) be a complete, torsionfree bornological \(\dvr\)-algebra. \begin{itemize} \item A \textit{sum algebra} is a complete, torsionfree bornological \(\dvr\)-algebra \(A\) together with distinguished elements \(\alpha_1\), \(\alpha_2\), \(\beta_1\), \(\beta_2\) satisfying \(\alpha_1 \beta_1 = \alpha_2 \beta_2 = 1\), \(\beta_1 \alpha_1 + \beta_2 \alpha_2 = 1\), and \([\alpha_i, v] = [\beta_i, v] = 0\) for all \(v \in \dvr\), \(i = 1, 2\). We write \(a \oplus b \defeq \beta_1 a \alpha_1 + \beta_2 b \alpha_2\); \item Let \(B \in \mathsf{Alg}_\dvr^\mathrm{tf}\) and let \(\phi, \psi \colon B \rightrightarrows A\) be bounded algebra homomorphisms into a sum algebra. Let \(\phi \oplus \psi\) be the bounded algebra homomorphism \(B \to A\) defined by \(b \mapsto \phi(b) \oplus \psi(b)\); \item An \textit{infinite sum \(\dvr\)-algebra} is a sum algebra \(A\) with a bounded \(\dvr\)-algebra homomorphism \(\phi^\infty \colon A \to A\) satisfying \(\phi^\infty = \mathrm{id}_A \oplus \phi^\infty\). \end{itemize} \end{definition} \begin{lemma}\label{lem:wagorer-sumring} For a unital algebra \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\), the algebras \(\Gamma(A)\) and \(\coma{\Gamma(A)}\) are infinite sum rings.
\end{lemma} \begin{proof} The proof in \cite{Cortinas-Thom:Bivariant_K}*{Lemma 4.8.2} shows that \((\Gamma(\dvr), \phi_\dvr^\infty)\) is an infinite sum \(\dvr\)-algebra. With the fine bornology, this is a complete, bornologically torsionfree algebra, and the homomorphism \(\phi^\infty\) is bounded. The induced map \(\coma{\phi_\dvr^\infty} \colon \coma{\Gamma(\dvr)} \to \coma{\Gamma(\dvr)}\) is a bounded algebra homomorphism in the bornology where all subsets are bounded. It satisfies \(\coma{\phi_\dvr^\infty} = \mathrm{id}_{\coma{\Gamma(\dvr)}} \oplus \coma{\phi_\dvr^\infty}\), and \(\coma{\Gamma(\dvr)}\) is an infinite sum ring whose distinguished elements are the images of the distinguished elements of \(\Gamma(\dvr)\) under the \(\dvgen\)-adic completion map; since this map is a ring homomorphism, it preserves the relations defining such elements. Tensoring with \(A\), we again get a bounded \(\dvr\)-algebra homomorphism \(\phi_A^\infty = \coma{\phi_\dvr^\infty} \hot \mathrm{id}_A \colon \coma{\Gamma} (A) \to \coma{\Gamma}(A)\) which satisfies \(\phi_A^\infty(a) = a \oplus \phi_A^\infty(a)\). \end{proof} \begin{lemma}\label{lem:technical} Let \((A, \phi^\infty)\) be an infinite sum ring and let \(B \unlhd A\) be an ideal such that \(\phi^\infty(B) \subseteq B\). Then \(B\) is \(\mathsf{kk}^{\mathrm{an}}\)-equivalent to \(0\). \end{lemma} \begin{proof} The conditions of Proposition \ref{prop:M2-inner-endo} are satisfied, and we have \(j(\mathrm{id}_B \oplus \phi_{\vert B}^\infty) = j(\mathrm{id}_B) + j(\phi_{\vert B}^\infty)\). Now since \(A\) is an infinite sum ring, we have \(j(\mathrm{id}_B \oplus \phi_{\vert B}^\infty) = j(\phi_{\vert B}^\infty)\), which shows that \(j(\mathrm{id}_B) = 0\) in \(\mathsf{kk}^{\mathrm{an}}\) as required. \end{proof} \begin{proposition}\label{prop:delooping} Let \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Then \(\coma{\Sigma}(A)\) is a delooping of \(A\); that is, there is an equivalence \(\Omega \coma{\Sigma}(A) \cong A\) in \(\mathsf{kk}^{\mathrm{an}}\).
\end{proposition} \begin{proof} Lemma \ref{lem:wagorer-sumring} shows that \(\Gamma(A^+)\) is an infinite sum ring, where \(A^+ = A \oplus \dvr\) denotes the unitalisation. Next, Lemma \ref{lem:technical} shows that \(\coma{\Gamma}(A)\) is \(\mathsf{kk}^{\mathrm{an}}\)-contractible also for nonunital \(A\). Now consider the extension \(\mathcal{M}^{\mathrm{cont}}(A) \rightarrowtail \coma{\Gamma}(A) \twoheadrightarrow \coma{\Sigma}(A)\). By Theorem \ref{thm:excision-2}, we get that the map \[\delta_D \colon \mathsf{kk}^{\mathrm{an}}(A,D) \cong \mathsf{kk}^{\mathrm{an}}(\mathcal{M}_\infty^{\mathrm{cont}}(A), D) \to \mathsf{kk}^{\mathrm{an}}(\Omega \coma{\Sigma}(A), D)\] is an isomorphism for each \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Setting \(D = A\) yields the desired result. \end{proof} We can now define \(\Z\)-graded \(\mathsf{kk}^{\mathrm{an}}\)-groups as follows: \[\mathsf{kk}^{\mathrm{an}}_n(A,B) = \begin{cases} \mathsf{kk}^{\mathrm{an}}(A, \Omega^n(B)) \text{ for } n \geq 0 \\ \mathsf{kk}^{\mathrm{an}}(\Omega^{-n}(A), B)= \mathsf{kk}^{\mathrm{an}}(A, \coma{\Sigma}^{-n}(B)) \text{ for } n \leq 0. \end{cases} \] These are called the \textit{higher analytic \(\mathsf{kk}^{\mathrm{an}}\)-groups}. They express \(\mathsf{kk}^{\mathrm{an}}\) as a \(\Z\)-graded category. \section{The universal property of \(\mathsf{kk}^{\mathrm{an}}\)}\label{sec:triangulated} In this section, we formulate the universal property of \(\mathsf{kk}^{\mathrm{an}}\). Let \((\mathcal{T}, \Omega_\mathcal{T})\) be a triangulated category.
\begin{definition}\label{def:excisive-functor} We say that a functor \(X \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{T}\) is \textit{excisive} if for any semi-split extension \(A \overset{p}\rightarrowtail B \overset{q}\twoheadrightarrow C\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\), there is a map \(\delta \colon \Omega_{\mathcal{T}}X(C) \to X(A)\) satisfying the following: \begin{itemize} \item \(\Omega_\mathcal{T} X(C) \overset{\delta}\to X(A) \overset{X(p)}\to X(B) \overset{X(q)}\to X(C)\) is a distinguished triangle in \(\mathcal{T}\); \item For a morphism of extensions \[ \begin{tikzcd} A \arrow{r} \arrow{d}{f} & B \arrow{r} \arrow{d} & C \arrow{d}{g} \\ A' \arrow{r} & B' \arrow{r} & C' \end{tikzcd} \] the following diagram \[\begin{tikzcd} \Omega_\mathcal{T} X(C) \arrow{r}{\delta} \arrow{d}{\Omega_\mathcal{T} X(g)} & X(A) \arrow{d}{X(f)} \\ \Omega_\mathcal{T} X(C') \arrow{r}{\delta'} & X(A') \end{tikzcd} \] commutes. \end{itemize} \end{definition} We call a functor \(X \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{T}\) \textit{dagger homotopy invariant} if it maps the canonical bounded algebra homomorphism \(A \to A \hot \dvr[t]^\updagger\) to an isomorphism. It is called \textit{matricially stable} (relative to a choice of torsionfree \(\dvr\)-module \(Z\) as in Subsection \ref{subsec:stability}) if it maps the canonical map \(A \to \mathcal{M}_Z^\updagger(A)\) to an isomorphism. Recall that we are primarily interested in the stabilisation relative to \(Z = \coma{\bigoplus_{n \in \N} \dvr}\), which yields the matrix algebra \(\mathcal{M}^{\mathrm{cont}}\). By Propositions \ref{prop:loop-fully-faithful} and \ref{prop:delooping}, the loop functor \(\Omega \colon \mathsf{kk}^{\mathrm{an}} \to \mathsf{kk}^{\mathrm{an}}\) is an autoequivalence.
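We record a standard consequence, which follows by splicing the five-term sequences of Theorems \ref{thm:excision-1} and \ref{thm:excision-2} along the autoequivalence \(\Omega\), as in the algebraic case; we state the second-variable version and omit the routine verification. For every semi-split extension \(A \rightarrowtail B \twoheadrightarrow C\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\) and every \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\), there is a long exact sequence of higher \(\mathsf{kk}^{\mathrm{an}}\)-groups
\[\dotsb \to \mathsf{kk}^{\mathrm{an}}_{n+1}(D, C) \overset{\delta}\to \mathsf{kk}^{\mathrm{an}}_{n}(D, A) \to \mathsf{kk}^{\mathrm{an}}_{n}(D, B) \to \mathsf{kk}^{\mathrm{an}}_{n}(D, C) \overset{\delta}\to \mathsf{kk}^{\mathrm{an}}_{n-1}(D, A) \to \dotsb.\]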
As in the algebraic and complex topological case, we similarly have the following: \begin{theorem}\label{thm:kk-triangulated} The category \(\mathsf{kk}^{\mathrm{an}}\) is a triangulated category whose distinguished triangles are diagrams isomorphic to those of the form \[\Omega(B) \to P_f \to A \overset{f}\to B,\] with auto-equivalence given by the loop functor \(\Omega \colon \mathsf{kk}^{\mathrm{an}} \to \mathsf{kk}^{\mathrm{an}}\). \end{theorem} \begin{proof} The proof of \cite{Cortinas-Thom:Bivariant_K}*{Section 6.5} carries over mutatis mutandis. \end{proof} \begin{example}\label{ex:kk-everything} By construction, the functor \(j \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{kk}^{\mathrm{an}}\) is dagger homotopy invariant and \(\mathcal{M}^{\mathrm{cont}}\)-stable. For excision, let \(A \rightarrowtail B \twoheadrightarrow C\) be a semi-split extension and let \(\gamma \colon \jens C \to A\) be the classifying map. Then using the \(\mathsf{kk}^{\mathrm{an}}\)-inverse of \(\varrho_C \colon \jens C \to \Omega C\) from Lemma \ref{lem:noncommutative-loop}, we get a class \(\gamma \circ \varrho_C^{-1} \in \mathsf{kk}^{\mathrm{an}}(\Omega C, A)\) as required by Definition \ref{def:excisive-functor}. \end{example} Adapting the proof of \cite{Cortinas-Thom:Bivariant_K}*{Theorem 6.6.2}, we have the following: \begin{theorem}\label{thm:kk-initial} Let \(X \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{T}\) be a dagger homotopy invariant, \(\mathcal{M}^\mathrm{cont}\)-stable and excisive functor into a triangulated category. Then there is a unique triangulated functor \(F \colon \mathsf{kk}^{\mathrm{an}} \to \mathcal{T}\) such that the following diagram \[ \begin{tikzcd} \mathsf{Alg}_\dvr^\mathrm{tf} \arrow{r}{X} \arrow{d}{j} & \mathcal{T} \\ \mathsf{kk}^{\mathrm{an}} \arrow[ur, swap, "F"] & \end{tikzcd} \] of functors commutes.
\end{theorem} The following are some important applications: \begin{example}[Chern characters into periodic cyclic homology] We start with the Cuntz-Quillen pro-supercomplex \[\mathbb{HP} \colon \mathsf{Alg}(\mathsf{CBor}_\dvf) \to \mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)})\] from the category of complete bornological \(\dvf\)-algebras into the derived category of the quasi-abelian category of pro-systems of inductive systems of Banach \(\dvf\)-vector spaces. The latter category is a triangulated category that arises as the homotopy category of a model category; this model category structure is studied in \cite{mukherjee2022quillen}. This functor is dagger homotopy invariant (by \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Theorem 4.6.1}), \(\mathcal{M}^{\mathrm{cont}}\)-stable and excisive (by \cite{Meyer:HLHA}*{Theorem 4.34, Section 4.3}). Since tensoring a complete torsionfree bornological \(\dvr\)-algebra with \(\dvf\) is an exact functor, the functor \[\mathsf{Alg}_\dvr^\mathrm{tf} \ni A \mapsto \mathbb{HP}(A \otimes \dvf) \in \mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)})\] still satisfies these properties. In fact, all these properties hold for \emph{bivariant} periodic cyclic homology \(\HP_n(A,B) \defeq \Hom_{\mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)})}(\mathbb{HP}(A), \mathbb{HP}(B)[n])\), so that by Theorem \ref{thm:kk-initial}, we obtain a triangulated functor \(\mathsf{kk}^{\mathrm{an}} \to \mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)})\) and group homomorphisms \[\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(A,B) \to \HP_n(A \otimes \dvf,B \otimes \dvf)\] for all \(n \in \Z\). Setting \(A = \dvr\), we get \(\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(\dvr, B) \to \HP_n(B \otimes \dvf)\).
By \cite{Cortinas-Cuntz-Meyer-Tamme:Nonarchimedean}*{Equation 14}, when \(B\) is the dagger completion of a smooth commutative \(\dvr\)-algebra lifting of a smooth commutative \(\resf\)-algebra, we get Chern characters \(\mathsf{kk}^{\mathrm{an}}_*(\dvr,B) \to \bigoplus_{j \in \Z} H_{\mathrm{rig}}^{* + 2j}(B/\dvgen B, \dvf)\), where the right hand side is periodified rigid cohomology. \end{example} \begin{example}[Analytic Chern characters]\label{ex:analytic-Chern} Now consider the homology theory defined in \cite{Cortinas-Meyer-Mukherjee:NAHA} \[\mathbb{HA} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)}),\] which again satisfies dagger homotopy invariance, excision and \(\mathcal{M}^{\mathrm{cont}}\)-stability. So Theorem \ref{thm:kk-initial} again produces a triangulated functor \(\mathsf{kk}^{\mathrm{an}} \to \mathsf{Der}(\overleftarrow{\mathsf{Ind}(\mathsf{Ban}_\dvf)})\) and group homomorphisms \[\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(A,B) \to \HA_n(A,B),\] for each \(n \in \Z\). We call the group homomorphisms \(\mathrm{ch}_n\) \textit{analytic Chern characters}. Since the right hand side is an \(\dvf\)-vector space, we get induced \(\dvf\)-linear maps \[\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(A,B) \otimes_\Z \dvf \to \HA_n(A,B)\] for each \(n\). When \(A = \dvr\), we get group homomorphisms \(\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(\dvr, B) \to \HA_n(B)\). Now suppose \(B\) is fine mod \(\dvgen\) as in Definition \ref{def:fine-mod-p}; this happens when a bornological algebra is \textit{nuclear} in the sense of \cite{Meyer-Mukherjee:HL}*{Definition 3.1}. We then have \[\mathsf{kk}^{\mathrm{an}}_n(\dvr, B) \otimes_\Z \dvf \to \mathrm{HL}_n(B) \cong \HA_n(B) \cong \HA_n(B/\dvgen B),\] where the right hand side is the analytic cyclic homology defined in \cite{Meyer-Mukherjee:HA}.
In other words, in interesting cases, the image of the analytic Chern character depends only on the reduction mod \(\dvgen\) of the original algebra. In the next section, we will compare \(\mathsf{kk}^{\mathrm{an}}_n(\dvr, B)\) with a version of \textit{analytic \(K\)-theory} defined in \cite{MR4012551} for complete, bornologically torsionfree \(\dvr\)-algebras. \end{example} \begin{example}[Analytic Kasparov product] Let \(B\) be a fixed complete, torsionfree bornological \(\dvr\)-algebra. Then the functor \( - \hot B \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{kk}^{\mathrm{an}}\) is excisive, homotopy invariant and stable. By the universal property of \(j \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{kk}^{\mathrm{an}}\), there is a unique extension to a triangulated functor, namely \(- \hot B \colon \mathsf{kk}^{\mathrm{an}} \to \mathsf{kk}^{\mathrm{an}}\). As a consequence, there is an associative product \[ \mathsf{kk}^{\mathrm{an}}(A_1, B_1) \otimes \mathsf{kk}^{\mathrm{an}}(A_2, B_2) \to \mathsf{kk}^{\mathrm{an}}(A_1 \hot A_2, B_1 \hot B_2) \] for \(A_i\), \(B_j\), \(i,j = 1,2\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\). \end{example} \subsection{Bivariant algebraic and analytic \(K\)-theories as stable \(\infty\)-categories} It is by now well-known folklore that several triangulated categories arise as homotopy categories of stable \(\infty\)-categories. In this subsection, we show that both the bivariant algebraic \(K\)-theory of \cite{Cortinas-Thom:Bivariant_K} and the analytic version under consideration in this article arise in this way. Our approach is along the lines of \cite{land2018}*{Section 3}. Let \(kk_\infty\) and \(\mathsf{kk}^{\mathrm{an}}_\infty\) denote the localisations of the \(\infty\)-categories \(N\mathsf{Alg}\) and \(N\mathsf{Alg}_\dvr^\mathrm{tf}\) of algebras (resp. complete, torsionfree bornological \(\dvr\)-algebras) at the \(kk\)- (resp. \(\mathsf{kk}^{\mathrm{an}}\)-)equivalences. Here \(N\) denotes the nerve of a category.
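Here the localisation is taken in the \(\infty\)-categorical sense; for the reader's convenience, we spell out the universal property in the case of \(\mathsf{kk}^{\mathrm{an}}_\infty\) (the notation \(\mathrm{Fun}_W\) below is introduced only for this remark). The localisation comes with a functor \(N\mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{kk}^{\mathrm{an}}_\infty\) such that, for every \(\infty\)-category \(\mathcal{C}\), restriction along it induces an equivalence
\[\mathrm{Fun}(\mathsf{kk}^{\mathrm{an}}_\infty, \mathcal{C}) \overset{\simeq}\to \mathrm{Fun}_{W}(N\mathsf{Alg}_\dvr^\mathrm{tf}, \mathcal{C}),\]
where \(W\) is the class of \(\mathsf{kk}^{\mathrm{an}}\)-equivalences and \(\mathrm{Fun}_W\) denotes the full subcategory of functors carrying \(W\) to equivalences in \(\mathcal{C}\).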
\begin{proposition}\label{prop:stable-infinity-kk} The \(\infty\)-categories \(kk_\infty\) and \(\mathsf{kk}^{\mathrm{an}}_\infty\) are small, stable \(\infty\)-categories. Their homotopy categories are \(kk\) and \(\mathsf{kk}^{\mathrm{an}}\), respectively. \end{proposition} \begin{proof} We only provide the argument for the category \(\mathsf{Alg}\); the proof remains the same for the category \(\mathsf{Alg}_\dvr^\mathrm{tf}\). Since the category of algebras \(\mathsf{Alg}\) is small, so is \(N\mathsf{Alg}\), and therefore so is the localisation \(kk_\infty\). To show that \(kk_\infty\) is stable, we need to show that it is pointed, has all finite limits, and that the loop functor \(\Omega \colon kk_\infty \to kk_\infty\) is an equivalence. Since the category \(\mathsf{Alg}\) is a simplicial category by the same proof as in Lemma \ref{lem:simplicial-category}, we can view it as a fibration category in the sense of \cite{brown1973abstract}, with weak equivalences given by polynomial homotopy equivalences, and fibrations by Serre fibrations. Furthermore, since \(kk(D,-)\) is excisive, it is a homology theory in the sense of \cite{uuye2013homotopical}*{Definition 1.29}. Consequently, the category \(\mathsf{Alg}\) with \(kk\)-equivalences as weak equivalences and Serre fibrations as fibrations is a fibration category by \cite{uuye2013homotopical}*{Theorem 1.33}. By the universal property of \(kk\), the homotopy category of \(kk_\infty\) is precisely \(kk\). The proof of \cite{land2018}*{Proposition 3.3} now goes through mutatis mutandis. \end{proof} \section{Analytic \(K\)-theories for bornological \(\dvr\)-algebras}\label{sec:analytic-K} In this section, we recall various \(K\)-theoretic constructions in the nonarchimedean setting. Let \(A\) be a unital, complete, torsionfree bornological \(\dvr\)-algebra.
Consider the \textit{continuous path extension} \(\coma{\Omega}(A) \rightarrowtail \coma{P}(A) \twoheadrightarrow A\), where \(\coma{P}(A) = \ker(A\gen{\Delta^1} \overset{\ev_0}\to A)\). Here \(A\gen{\Delta^1} = A \hot \dvr\gen{\Delta^1}\), where \(\dvr\gen{\Delta^n} = \coma{\dvr[t_0,\dotsc,t_n]}/\gen{\sum_{i=0}^n t_i -1}\) and we equip the Tate algebra \(\coma{\dvr[t]}\) with the bornology where all subsets are bounded. This is a semi-split extension by complete, torsionfree bornological \(\dvr\)-algebras. Let \(\mathsf{GL}(A)' \defeq \mathsf{Im}(\mathsf{GL}(\coma{P}(A)) \to \mathsf{GL}(A))\). Dividing out this subgroup, we get \[ KV_1^\mathrm{an}(A) \defeq \mathsf{GL}(A)/ \mathsf{GL}(A)',\] the first \textit{analytic \(KV\)-group} of \(A\). For non-unital algebras \(A\), we use the unitalisation \(\tilde{A} = A \oplus \dvr\), and define \[ KV_1^\mathrm{an}(A) \defeq \ker(KV_1^\mathrm{an}(\tilde{A}) \to KV_1^\mathrm{an}(\dvr)). \] \noindent The following properties of \(\mathrm{KV}_1^\mathrm{an}(A)\) will be used in the remainder of the section: \begin{proposition}\label{prop:KV-properties} Consider \(\mathrm{KV}_1^\mathrm{an}\) as a functor \(\mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Mod}_\Z\). \begin{enumerate} \item\label{KV-quotient} There is a natural surjection \(\mathrm{K}_1(A) \twoheadrightarrow KV_1^\mathrm{an}(A)\); \item\label{KV-split} \(KV_1^\mathrm{an}\) is split exact; \item\label{KV-excision} Suppose \(A \rightarrowtail B \twoheadrightarrow C\) is an extension in \(\mathsf{Alg}_\dvr^\mathrm{tf}\) such that \(\mathsf{GL}(B)' \to \mathsf{GL}(C)'\) is surjective. Then there is a long exact sequence \[\begin{tikzcd} KV_1^\mathrm{an}(A) \arrow{r}{} & KV_1^\mathrm{an}(B) \arrow{r}{} & KV_1^\mathrm{an}(C) \arrow{d}{} \\ K_0(C) & K_0(B) \arrow{l}{} & K_0(A) \arrow{l}{} \end{tikzcd} \] \item\label{KV-homotopy-invariance} \(KV_1^\mathrm{an}\) is additive, \(\coma{\dvr[t]}\)-homotopy invariant and \(\mathcal{M}_\N^{\mathrm{alg}}\)-stable.
\end{enumerate} \end{proposition} \begin{proof} First, let \(A\) be a unital complete, torsionfree bornological \(\dvr\)-algebra. Then for \(a \in A\) and \(i \neq j\), \(1 + t a e_{ij}\) is a path from \(1\) to \(1 + a e_{ij}\). Since elements of the form \(1 + a e_{ij}\) generate the subgroup \(E(A) \cap \mathsf{GL}(A)\), the latter subgroup is contained in \(\mathsf{GL}(A)'\). Consequently, we get a surjection \(K_1(A) = \mathsf{GL}(A)/E(A) \cap \mathsf{GL}(A) \twoheadrightarrow \mathsf{GL}(A)/\mathsf{GL}(A)'\) and the latter is \(KV_1^\mathrm{an}(A)\) by definition. Now suppose \(A\) is non-unital. Then the unital case, together with the fact that \(KV_1^\mathrm{an}(A) = \ker(KV_1^\mathrm{an}(A \oplus \dvr) \to KV_1^\mathrm{an}(\dvr))\), implies part \ref{KV-quotient}. To see \ref{KV-split}, let \(A \rightarrowtail B \twoheadrightarrow C\) be an extension of algebras. Then the hypothesis that \(\mathsf{GL}(B)' \to \mathsf{GL}(C)'\) is onto implies that we get an exact sequence \(KV_1^\mathrm{an}(A) \to KV_1^\mathrm{an}(B) \to KV_1^\mathrm{an}(C)\). Using that the functor \(\mathsf{GL}\) and tensoring with a complete bornological \(\dvr\)-module both preserve kernels, we get a commuting diagram with exact rows and columns: \[ \begin{tikzcd} & 1 \arrow{d}{} & 1 \arrow{d}{} & 1 \arrow{d} \\ 1 \arrow{r}{} & \mathsf{GL}(\coma{\Omega}(A)) \arrow{r}{} \arrow{d}{} & \mathsf{GL}(\coma{\Omega}(B)) \arrow{r}{} \arrow{d}{} & \mathsf{GL}(\coma{\Omega}(C)) \arrow{d}{} \\ 1 \arrow{r}{} & \mathsf{GL}(\coma{P}(A)) \arrow{r}{} \arrow{d}{} & \mathsf{GL}(\coma{P}(B)) \arrow{r}{} \arrow{d}{} & \mathsf{GL}(\coma{P}(C)) \arrow{d}{} \\ 1 \arrow{r}{} & \mathsf{GL}(A) \arrow{r}{} & \mathsf{GL}(B) \arrow{r}{} & \mathsf{GL}(C). \end{tikzcd} \] Now suppose the extension \(A \rightarrowtail B \twoheadrightarrow C\) is split exact. Then the rows in the diagram above are split exact. Furthermore, it follows from the diagram above that \(\mathsf{GL}(A)' = \mathsf{GL}(B)' \cap \mathsf{GL}(A)\).
Consequently, \(KV_1^\mathrm{an}(A) \to KV_1^\mathrm{an}(B)\) is injective, and the sequence \begin{equation}\label{eqref:KV-sequence} KV_1^\mathrm{an}(A) \to KV_1^\mathrm{an}(B) \to KV_1^\mathrm{an}(C) \end{equation} is exact. Part \ref{KV-excision} follows from Part \ref{KV-quotient} and Equation \ref{eqref:KV-sequence}. For the homotopy invariance claim in \ref{KV-homotopy-invariance}, we show that the split surjection \(KV_1^\mathrm{an}(A\gen{\Delta^1}) \to KV_1^\mathrm{an}(A)\) induced by the evaluation homomorphism is an injection. By split exactness, the kernel of this map is \(KV_1^\mathrm{an}(PA) = \mathsf{GL}(\coma{P}A) / \mathsf{GL}(\coma{P}A)'\), so it only remains to show that \(\mathsf{GL}(\coma{P}A) \subseteq \mathsf{GL}(\coma{P}A)'\). For this, take \(\alpha(s) \in \mathsf{GL}(\coma{P}A)\). Then \(\beta(s,t) \defeq \alpha(st) \in \mathsf{GL}(\coma{P}\coma{P}A)\) is the required null-homotopy. The proof of \(\mathcal{M}_\N^\mathrm{alg}\)-stability is similar to that for \(K_1\). \end{proof} The \textit{higher analytic Karoubi-Villamayor groups} can be defined as \[KV_{n+1}^\mathrm{an}(A) \defeq KV_1^\mathrm{an}(\coma{\Omega}^n(A)), \quad n \geq 1.\] Since \(\coma{\Gamma}(A)\) is an infinite sum ring, adapting the proof of \cite{cortinas2011algebraic}*{Proposition 2.3.1} and using Proposition \ref{prop:KV-properties}, we have that \(K_0(\coma{\Gamma}(A)) = \mathrm{KV}_1^\mathrm{an}(\coma{\Gamma}(A)) = 0\). Consequently, the surjection \(K_1(\coma{\Sigma}(A)) \twoheadrightarrow \mathrm{KV}_1^\mathrm{an}(\coma{\Sigma}(A))\) factors through \(K_0(A)\), inducing a surjection \(K_0(A) \twoheadrightarrow \mathrm{KV}_1^\mathrm{an}(\coma{\Sigma} A) \). Now applying part \ref{KV-excision} of Proposition \ref{prop:KV-properties} to the loop extension yields a map \begin{equation}\label{eqref:KV-K0} \mathrm{KV}_1^\mathrm{an}(A) \to K_0(\coma{\Omega} (A)).
\end{equation} Substituting \(\coma{\Sigma}(A)\) for \(A\), and composing with the map \(K_0 (A) \to \mathrm{KV}_1^\mathrm{an}(\coma{\Sigma}(A))\), we get a morphism \(K_0(A) \to K_0(\coma{\Sigma} \coma{\Omega} (A))\). Iterating this construction, we get a nonconnective version of topological \(K\)-theory in our setting: \begin{definition}\label{def:KH-theory} The \textit{analytic \(K\)-theory} of a complete, torsionfree bornological \(\dvr\)-algebra \(A\) is defined as the abelian groups \[K_n^\mathrm{an}(A) \defeq \varinjlim_m K_0(\coma{\Sigma}^m \coma{\Omega}^{n+m}(A)),\] for \(n \in \Z\). \end{definition} We can also express the groups \(K_n^\mathrm{an}(A)\) in terms of the analytic \(KV\)-groups. To see this, we replace \(A\) by \(\coma{\Sigma}(A)\) in Equation \ref{eqref:KV-K0} and compose with the map \(K_0(-) \twoheadrightarrow KV_1^\mathrm{an}(\coma{\Sigma}(-))\) to get \[KV_1^\mathrm{an}(\coma{\Sigma}(A)) \to K_0(\coma{\Sigma}\coma{\Omega}(A)) \to KV_1^\mathrm{an}(\coma{\Sigma}^2 \coma{\Omega}(A)) = KV_2^\mathrm{an}(\coma{\Sigma}^2(A)).\] Iterating, we get an isomorphic inductive system whose colimit is again \[K_n^\mathrm{an}(A) \cong \varinjlim_m KV_1^\mathrm{an}(\coma{\Sigma}^{m+1} \coma{\Omega}^{n+m}(A)) = \varinjlim_m KV_{n+m}^\mathrm{an}( \coma{\Sigma}^m(A))\] for \(n \in \Z\). As in the bornological \(\C\)-algebra case, to obtain stability with respect to nontrivial stabilisations, we need to build it in by hand. Define the \textit{stabilised analytic \(K\)-theory} functors by \[\tilde{K}_n^\mathrm{an}(A) \defeq K_n^\mathrm{an}(\mathcal{M}^{\mathrm{cont}}(A)),\] for \(n \in \Z\). This is a functor on the category \(\mathsf{Alg}_\dvr^\mathrm{tf}\) of complete, torsionfree bornological \(\dvr\)-algebras.
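Explicitly, the structure maps of the inductive system in Definition \ref{def:KH-theory} are the morphisms \[K_0(\coma{\Sigma}^m \coma{\Omega}^{n+m}(A)) \to K_0(\coma{\Sigma}^{m+1} \coma{\Omega}^{n+m+1}(A)),\] obtained by applying the map \(K_0(D) \to K_0(\coma{\Sigma}\coma{\Omega}(D))\) constructed above to \(D = \coma{\Sigma}^m \coma{\Omega}^{n+m}(A)\) and using that \(\coma{\Sigma}\) and \(\coma{\Omega}\) commute up to natural isomorphism.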
\begin{theorem}\label{thm:KH-theory-properties} The functors \(\tilde{K}_n^\mathrm{an}\) on the category of complete, bornologically torsionfree \(\dvr\)-algebras satisfy the following properties: \begin{enumerate} \item Additivity; \item \(\coma{\dvr[t]}\)-homotopy invariance; \item \(\mathcal{M}^{\mathrm{cont}}\)-stability; \item Excision for semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras. \end{enumerate} \end{theorem} The proof of Theorem \ref{thm:KH-theory-properties} will use the same properties for a version of negative \(K\)-theory defined on the category we are working in. This causes no technical difficulties since, just as with \(K_0\), we forget the bornology on the algebra. Specifically, for \(n \geq 0\), we define the functors \[K_{-n}^{\mathrm{stab}} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Mod}_\Z, \quad A \mapsto K_0(\coma{\Sigma}^n(\mathcal{M}^{\mathrm{cont}}(A))),\] which we call the \textit{stabilised negative \(K\)-theory} groups of a bornological algebra. \begin{lemma}\label{lem:nonpostive-K-properties} Let \(A \rightarrowtail B \twoheadrightarrow C\) be an extension of complete, torsionfree bornological \(\dvr\)-algebras. Then for \(n \leq 0\), we have a long exact sequence \[ \begin{tikzcd} K_n^{\mathrm{stab}}(A) \arrow{r}{} & K_n^{\mathrm{stab}}(B) \arrow{r}{} & K_n^{\mathrm{stab}}(C) \arrow{d}{\delta} \\ K_{n-1}^{\mathrm{stab}}(C) & K_{n-1}^{\mathrm{stab}}(B) \arrow{l}{} & K_{n-1}^{\mathrm{stab}}(A) \arrow{l}{} \end{tikzcd} \] \end{lemma} \begin{proof} We first observe that since \(\coma{\Gamma}(\tilde{D})\) is an infinite sum ring for any algebra \(D \in \mathsf{Alg}_\dvr^\mathrm{tf}\), the excision sequence for \(K_0\) and \(K_1\) applied to the cone extension \(\mathcal{M}^{\mathrm{cont}}(\tilde{D}) \rightarrowtail \coma{\Gamma}(\tilde{D}) \twoheadrightarrow \coma{\Sigma}(\tilde{D})\) yields \(K_1(\coma{\Sigma}(\tilde{D})) \cong K_0(\mathcal{M}^{\mathrm{cont}}(\tilde{D}))\).
Now consider the extension \(A \rightarrowtail \tilde{B} \twoheadrightarrow \tilde{C}\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\). Then by \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Proposition 2.4.5}, tensoring with \(\coma{\Sigma}\) and \(\mathcal{M}^{\mathrm{cont}}\) again yields an extension \[\coma{\Sigma}(\mathcal{M}^{\mathrm{cont}}(A)) \rightarrowtail \coma{\Sigma}(\mathcal{M}^{\mathrm{cont}}(\tilde{B})) \twoheadrightarrow \coma{\Sigma}(\mathcal{M}^{\mathrm{cont}}(\tilde{C})).\] This yields a long exact sequence \[ \begin{tikzcd} K_0(\mathcal{M}^{\mathrm{cont}}(A)) \arrow{r}{} & K_0(\mathcal{M}^{\mathrm{cont}}(B)) \oplus K_0(\dvr) \arrow{r}{} & K_0(\mathcal{M}^{\mathrm{cont}}(C)) \oplus K_0(\dvr) \arrow{d}{\delta} \\ K_{-1}^{\mathrm{stab}}(C) \oplus K_{-1}^{\mathrm{stab}}(\dvr) & K_{-1}^{\mathrm{stab}}(B) \oplus K_{-1}^{\mathrm{stab}}(\dvr) \arrow{l}{} & K_{-1}^{\mathrm{stab}}(A), \arrow{l}{} \end{tikzcd} \] where we have used that \(K_0(\mathcal{M}^{\mathrm{cont}}) \cong K_0(\dvr)\) since \(\mathcal{M}^{\mathrm{cont}}\) is \(\dvgen\)-adically complete (see \cite{kbook}*{Lemma II.2.2}), and the \(\mathcal{M}^{\mathrm{triv}}\)-stability of \(K_0\). Splitting off the \(K_*(\dvr)\) summands, we get the result for \(n=0\). The general case follows by iteration. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:KH-theory-properties}] Additivity follows from the fact that \(\coma{\Sigma}\) commutes with finite products, which is a consequence of \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Proposition 2.4.5}. By Proposition \ref{prop:KV-properties}, \(KV_1^\mathrm{an}\) is \(\coma{\dvr[t]}\)-homotopy invariant. Since \(\mathrm{K}_i^\mathrm{an}(A) = \varinjlim_n \mathrm{KV}_{i+n}^\mathrm{an}(\coma{\Sigma}^n(A))\), the homotopy invariance of \(\mathrm{K}_i^\mathrm{an}\) follows for each \(i\). Stability follows by construction since \(\mathcal{M}^{\mathrm{cont}} \hot \mathcal{M}^\mathrm{cont} \cong \mathcal{M}^\mathrm{cont}\).
To see the excision claim, let \(A \rightarrowtail B \twoheadrightarrow C\) be a semi-split extension of complete, torsionfree bornological \(\dvr\)-algebras. Then by \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Proposition 2.4.5}, we see that repeated tensoring with \(\coma{\Sigma}\) and \(\coma{\Omega}\) is exact. The result now follows from the excision of stabilised non-positive \(K\)-theory in Lemma \ref{lem:nonpostive-K-properties}. \end{proof} We shall now express the analytic \(KV\)- and \(K\)-theories as homotopy groups of appropriate spectra. For a unital algebra \(A \in \mathsf{Alg}_\dvr^\mathrm{tf}\), denote by \(\mathsf{K}(A) = \mathsf{BGL}(A)\) its connective algebraic \(K\)-theory spectrum, and by \(\mathsf{KV}^\mathrm{an}(A) \defeq \mathsf{K}(A\gen{\Delta^\bullet})\) the \textit{analytic Karoubi-Villamayor spectrum}. The latter is defined as the spectrum corresponding to the simplicial spectrum \(([n] \mapsto \mathsf{K}(A \gen{\Delta^n}))\), where \(A \gen{\Delta^\bullet} \defeq A \hot \dvr\gen{\Delta^\bullet}\) is the base change with the standard rigid simplices. This is extended as usual to non-unital algebras by taking the homotopy fibre of the map \(\mathsf{KV}^\mathrm{an}(\tilde{A}) \to \mathsf{KV}^\mathrm{an}(\dvr)\). Recall the \textit{nonconnective algebraic \(K\)-theory spectrum} \(\mathbf{K}(A)\) of a unital algebra \(A\): its \(n\)-th space is defined as \[\mathbf{K}(A)_n \defeq \Omega \vert \mathsf{K}(\Sigma^{n+1}(A)) \vert,\] where \(\Omega\) denotes the loop space of a topological space, and \(\vert - \vert\) is the geometric realisation of a simplicial set.
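Recall that, as in the algebraic theory of \cite{cortinas2011algebraic}, the homotopy groups of this spectrum recover nonconnective algebraic \(K\)-theory; in particular, in negative degrees \[\pi_{-n}(\mathbf{K}(A)) \cong K_0(\Sigma^{n}(A)) = K_{-n}(A), \qquad n \geq 0,\] which is the description of negative \(K\)-theory via suspension rings underlying the stabilised negative \(K\)-groups \(K_{-n}^{\mathrm{stab}}\) above.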
The \textit{analytic \(K\)-theory spectrum} \(\mathbf{K}^\mathrm{an}(A)\) of a unital complete, torsionfree bornological \(\dvr\)-algebra \(A\) is defined as the spectrum whose \(n\)-th space is \[\mathbf{K}^\mathrm{an}(A)_n \defeq \Omega \mathsf{K}(\coma{\Sigma}^{n+1}(A\gen{\Delta^\bullet})) = \Omega \mathsf{K}((\coma{\Sigma}^{n+1}A)\gen{\Delta^\bullet}) = \Omega \mathsf{KV}^\mathrm{an}(\coma{\Sigma}^{n+1}(A)).\] As in the purely algebraic case, the homotopy groups of the above spectra and the analytic \(KV\)- and \(K\)-theory groups previously defined coincide: \begin{theorem}\label{thm:Banach-KH} For a complete, torsionfree bornological \(\dvr\)-algebra \(A\), we have \(\pi_n(\mathsf{KV}^\mathrm{an}(A)) \cong KV_n^\mathrm{an}(A)\) for all \(n \geq 1\), and \(\pi_n(\mathbf{K}^\mathrm{an}(A)) \cong K_n^\mathrm{an}(A)\) for all \(n \in \Z\). \end{theorem} \begin{proof} The proofs of \cite{cortinas2011algebraic}*{Proposition 10.2.1} and \cite{cortinas2011algebraic}*{Proposition 10.3.2} work mutatis mutandis. The only modification is to replace polynomial homotopies by \(\coma{\dvr[t]}\)-homotopies.\qedhere \end{proof} As in \cite{tamme:thesis}*{Definition 7.4}, we also define a version of analytic \(K\)-theory that is dagger homotopy invariant, as this is the right notion of homotopy invariance for the analytic cyclic theory. To this end, we again define \(KV_1^{\mathrm{an},\updagger}(A) \defeq \mathsf{GL}(A)/\mathsf{GL}(A)'\) for unital algebras, and by a completely analogous version of Proposition \ref{prop:KV-properties}, we can use split exactness to extend to non-unital algebras. For \(n \geq 1\), we define \(KV_{n+1}^{\mathrm{an}, \updagger}(A) \defeq KV_1^{\mathrm{an},\updagger}(\Omega^n(A))\). Here we have used the path extension \(\Omega(A) \rightarrowtail P(A) \twoheadrightarrow A\) from Example \ref{ex:path-extension}.
We call the functors \(KV_n^{\mathrm{an},\updagger} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Mod}_\Z\) the \textit{overconvergent analytic \(KV\)-theory}. The same construction as for analytic \(K\)-theory, applied to the overconvergent analytic \(KV\)-groups, yields \[K_n^{\mathrm{an},\updagger}(A) \defeq \varinjlim_m KV_{n+m+1}^{\mathrm{an},\updagger}(\coma{\Sigma}^{m+1}(A))\] for \(n \in \Z\). Finally, we define the spectrum \[\mathsf{KV}^{\mathrm{an},\updagger}(A) \defeq \mathsf{BGL}(A \gen{\Delta^\bullet}^\updagger),\] where \(A\) is a unital, complete, torsionfree bornological \(\dvr\)-algebra, and extend to nonunital algebras by \(\mathsf{KV}^{\mathrm{an}, \updagger}(A) = \mathsf{fib}(\mathsf{KV}^{\mathrm{an}, \updagger}(\tilde{A}) \to \mathsf{KV}^{\mathrm{an}, \updagger}(\dvr))\). We call this the \textit{overconvergent analytic \(KV\)-spectrum.} Its homotopy groups will be identified below with the overconvergent analytic \(KV\)-groups \(KV_n^{\mathrm{an}, \updagger}(A)\) of \(A\). Likewise, we define a nonconnective spectrum with \(n\)-th space \[\mathbf{K}^{\mathrm{an}, \updagger}(A)_n = \Omega \mathsf{K}(\coma{\Sigma}^{n+1}(A \gen{\Delta^\bullet}^\updagger)) = \Omega \mathsf{KV}^{\mathrm{an}, \updagger}(\coma{\Sigma}^{n+1}(A)),\] and extend to nonunital algebras by \(\mathbf{K}^{\mathrm{an},\updagger}(A) \defeq \mathsf{fib}(\mathbf{K}^{\mathrm{an},\updagger}(\tilde{A}) \to \mathbf{K}^{\mathrm{an},\updagger}(\dvr))\); we call this the \textit{overconvergent analytic \(K\)-theory spectrum.} \begin{theorem}\label{thm:spectrum-homotopy-overconvergent} For every complete, torsionfree bornological \(\dvr\)-algebra \(A\), we have \(\pi_n(\mathsf{KV}^{\mathrm{an},\updagger}(A)) \cong KV_n^{\mathrm{an},\updagger}(A)\) for all \(n \geq 1\) and \(\pi_n(\mathbf{K}^{\mathrm{an},\updagger}(A)) \cong K_n^{\mathrm{an},\updagger}(A)\) for all \(n \in \Z\). \end{theorem} \begin{proof} Similar to the proofs of \cite{cortinas2011algebraic}*{Proposition 10.2.1} and \cite{cortinas2011algebraic}*{Proposition 10.3.2}.
The only modification is to replace polynomial homotopies by dagger homotopies. \end{proof} In what follows, let \(\tilde{K}_n^{\mathrm{an}, \updagger}(R) \defeq K_n^{\mathrm{an}, \updagger}(\mathcal{M}^{\mathrm{cont}}(R))\) for a complete, torsionfree bornological \(\dvr\)-algebra \(R\). These are called the \textit{stabilised overconvergent analytic \(K\)-groups} of \(R\). \begin{theorem}\label{cor:KH-properties} The stabilised overconvergent analytic \(K\)-groups \(\tilde{K}_n^{\mathrm{an}, \updagger} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Mod}_\Z\) satisfy: \begin{enumerate} \item{\label{dagger-htpy}} Dagger homotopy invariance, that is, \(\tilde{K}_n^{\mathrm{an}, \updagger}(R) \cong \tilde{K}_n^{\mathrm{an}, \updagger}(R \hot \dvr[t]^\updagger)\); \item{\label{matricial-stability}} \(\mathcal{M}^{\mathrm{cont}}\)-matricial stability, that is, \(\tilde{K}_n^{\mathrm{an}, \updagger}(R) \cong \tilde{K}_n^{\mathrm{an}, \updagger}(\mathcal{M}^{\mathrm{cont}}(R))\); \item{\label{excision}} Excision for semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras: that is, for an extension \(I \rightarrowtail E \twoheadrightarrow Q\) of such algebras, we have a natural long exact sequence \[\dotsc \to \tilde{K}_{n+1}^{\mathrm{an}, \updagger}(I) \to \tilde{K}_{n+1}^{\mathrm{an}, \updagger}(E) \to \tilde{K}_{n+1}^{\mathrm{an}, \updagger}(Q) \to \tilde{K}_n^{\mathrm{an}, \updagger}(I) \to \tilde{K}_{n}^{\mathrm{an}, \updagger}(E) \to \tilde{K}_{n}^{\mathrm{an}, \updagger}(Q) \to \dotsc\] of stabilised overconvergent analytic \(K\)-theory groups. \end{enumerate} \end{theorem} \begin{proof} The same proof as for Theorem \ref{thm:KH-theory-properties} works after making obvious modifications. \qedhere \end{proof} Finally, we relate the analytic and overconvergent \(K\)-theories with the \(KV\)- and \(KH\)-theories of the reduction mod \(\dvgen\). \begin{theorem}\label{thm:reduction-mod-p} Let \(A\) be an \(\resf\)-algebra and let \(R\) be a complete, torsionfree bornological \(\dvr\)-algebra that reduces mod \(\dvgen\) to \(A\).
Suppose further that \(R^\updagger \subseteq \coma{R}\). Then \(KV_n^{\mathrm{an},\updagger}(R^\updagger) \cong KV_n^{\mathrm{an}}(\coma{R}) \cong KV_n(A)\) for \(n\geq 1\) and \(K_n^{\mathrm{an},\updagger}(R^\updagger) \cong KH_n(A) \cong K_n^\mathrm{an}(\coma{R})\) for all \(n \in \Z\). \end{theorem} \begin{proof} The overconvergent rigid simplices \(\dvr\gen{\Delta^\bullet}^\updagger \) embed into the rigid simplices \(\dvr\gen{\Delta^\bullet}\), which upon tensoring with the inclusion \(R^\updagger \to \coma{R}\) yields an inclusion \(R^\updagger \gen{\Delta^\bullet}^\updagger \to \coma{R} \gen{\Delta^\bullet}\). Therefore, we have maps of simplicial groups \(\mathsf{GL}(R^\updagger \gen{\Delta^\bullet}^\updagger) \to \mathsf{GL}(\coma{R} \gen{\Delta^\bullet})\). Since \(\pi_n(\mathsf{GL}(A_\bullet)) \cong \varinjlim_m \pi_n(\mathsf{GL}_m(A_\bullet))\) for a simplicial algebra \(A_\bullet\), it suffices to prove that the induced map \[\pi_n(\mathsf{GL}_m(R^\updagger \gen{\Delta^\bullet}^\updagger)) \to \pi_n(\mathsf{GL}_m(\coma{R}\gen{\Delta^\bullet}))\] is an isomorphism for \(n \geq 0\) and any \(m \geq 1\). For the isomorphism claim above, the same argument as in \cite{tamme:thesis}*{Proposition 7.5} carries over. To see the surjectivity of the map in question, take a class in \(\pi_n(\mathsf{GL}_m(\coma{R}\gen{\Delta^\bullet}))\); this is represented by a matrix \(g \in \mathsf{GL}_m(\coma{R}\gen{\Delta^n})\) such that \(d_i(g) = 1\) for \( i = 0,\dotsc, n\), where \(d_i\) are the face maps. By \cite{tamme:thesis}*{Lemma 7.7}, there is a sequence of matrices \((g_N)\) in \(\mathbb{M}_m(R^\updagger\gen{\Delta^n}^\updagger)\) that converges to \(g\), satisfying \(d_i(g_N) = 1\) for \(i = 0,\dotsc, n\). Since \(\mathsf{GL}_m(\coma{R}\gen{\Delta^n}) \subseteq \mathbb{M}_m(\coma{R}\gen{\Delta^n})\) is open, the sequence \((g_N)\) eventually lies in \(\mathsf{GL}_m(\coma{R}\gen{\Delta^n})\).
That is, for a sufficiently large \(N\), \(g_N \in \mathsf{GL}_m(\coma{R}\gen{\Delta^n})\). By \cite{tamme:thesis}*{Lemma 7.8}, it actually lies in \(\mathsf{GL}_m(R^\updagger \gen{\Delta^n}^\updagger)\). By Lemma \ref{lem:3}, \(\coma{R}\gen{\Delta^\bullet}\) and hence \(\mathbb{M}_m(\coma{R}\gen{\Delta^\bullet})\) and \(\mathbb{M}_m(\coma{R}\gen{\Delta^\bullet})^{00} \defeq ([n] \mapsto \setgiven{g \in \mathbb{M}_m(\coma{R}\gen{\Delta^n})}{\norm{g}<1})\) are contractible. Here the norm of a matrix is defined as the maximum of the norms of the entries. Now choosing \(N\) sufficiently large, we have \(g_N g^{-1} - 1 \in \mathbb{M}_m(\coma{R}\gen{\Delta^n})^{00}\). The contractibility of this simplicial set implies that there is an \(h \in \mathbb{M}_m(\coma{R}\gen{\Delta^{n+1}})^{00}\) such that \(d_0(h) = g_N g^{-1} - 1\) and \(d_i(h) = 0\) for all \(i \geq 1\). By the Neumann series, since \(\norm{h}<1\), we have \(1 +h \in \mathsf{GL}_m(\coma{R}\gen{\Delta^{n+1}})\). Furthermore, \(d_0(1 + h) = g_N g^{-1}\) and \(d_i(1+h) = 1\) for \(i \geq 1\). This implies \([g_Ng^{-1}] = [1]\), or that \([g_N] = [g]\), proving surjectivity. To see injectivity, let \(g \in \mathsf{GL}_m(R^\updagger \gen{\Delta^n}^\updagger)\) such that \(d_i(g) = 1\) for \(i = 0,\dotsc, n\). Assume there exists \(h \in \mathsf{GL}_m(\coma{R}\gen{\Delta^{n+1}})\) such that \(d_0(h) = g\) and \(d_i(h) = 1\) for \(i> 0\). Since the simplicial abelian group \(\mathbb{M}_m(R^\updagger\gen{\Delta^\bullet}^\updagger)\) is contractible, there is an \(\tilde{h} \in \mathbb{M}_m(R^\updagger\gen{\Delta^{n+1}}^\updagger)\) such that \(d_0(\tilde{h}) = g\) and \(d_i(\tilde{h}) = 1\) for \( i = 1,\dotsc, n+1\). By the same argument as for surjectivity applied to \(h - \tilde{h}\), there is a sequence \((h_N)\) in \(\mathbb{M}_m(R^\updagger \gen{\Delta^{n+1}}^\updagger)\) converging to \(h - \tilde{h}\) such that \(d_i(h_N) = 0\) for \(i = 0,\dotsc, n+1\) and all \(N\).
Then \(h_N + \tilde{h}\) converges to \(h \in \mathsf{GL}_m(\coma{R}\gen{\Delta^{n+1}})\). Since \(\mathsf{GL}_m(\coma{R}\gen{\Delta^{n+1}}) \subseteq \mathbb{M}_m(\coma{R}\gen{\Delta^{n+1}})\) is open, applying \cite{tamme:thesis}*{Lemma 7.8}, we have \(h_N + \tilde{h} \in \mathsf{GL}_m(R^\updagger \gen{\Delta^{n+1}}^\updagger)\) for sufficiently large \(N\). As \(d_0(h_N + \tilde{h}) = g\) and \(d_i(h_N + \tilde{h}) = 1\) for \(i = 1,\dotsc, n+1\), we have \([g] = [1]\) as required. What we have proved so far is that \(KV_n^{\mathrm{an},\updagger}(R^\updagger) \cong KV_n^{\mathrm{an}}(\coma{R})\) for \(n \geq 1\). The right hand side is isomorphic to \(KV_n(A)\) for \( n\geq 1\) by \cite{calvo}*{Proposition 2.1} (see also \cite{tamme2014karoubi}*{Remark 3.2 (ii)}). The proof uses that we have an extension of simplicial groups \[1 + \dvgen \mathbb{M}_m(\coma{R}\gen{\Delta^\bullet}) \rightarrowtail \mathsf{GL}_m(\coma{R}\gen{\Delta^\bullet}) \twoheadrightarrow \mathsf{GL}_m(A[\Delta^\bullet]),\] and that \(\mathbb{M}_m(\coma{R}\gen{\Delta^\bullet})\) is contractible. The exactness of the inductive limit functor now yields that \(KV_n^\mathrm{an}(\coma{R}) \cong KV_n(A)\) for all \(n \geq 1\). To see the claim about \(K_n^\mathrm{an}\), we replace \(R\) in the argument above by \(\coma{\Sigma}(R)\). Of course, we still have \((\coma{\Sigma} \hot R)^\updagger \subseteq \coma{\Sigma} \hot \coma{R}\). Therefore, \begin{multline*} K_n^{\mathrm{an},\updagger}(R^\updagger) \cong \varinjlim_m KV_{n+m}^{\mathrm{an},\updagger}(\coma{\Sigma}^m(R^\updagger)) \\ \cong \varinjlim_{m} KV_{n+m}^\mathrm{an}(\coma{\Sigma}^m(\coma{R})) \cong \varinjlim_m KV_{n+m}(\Sigma^m (A)) \cong KH_n(A), \end{multline*} as required. \qedhere \end{proof} The hypotheses of Theorem \ref{thm:reduction-mod-p} are satisfied by any Banach \(\dvr\)-algebra and any affinoid dagger algebra.
For noncommutative dagger algebras that are not Banach algebras, one has to check this condition by explicitly computing the dagger completion. This has already been done for monoid algebras and crossed product algebras (see \cite{Meyer-Mukherjee:Bornological_tf}*{Section 6, Proposition 7.5}). \begin{corollary}\label{cor:stab-mod-p} With \(A\) and \(R\) as in Theorem \ref{thm:reduction-mod-p}, we have \(\tilde{K}_n^{\mathrm{an},\updagger} (R^\updagger) \cong K_n^{\mathrm{an},\updagger}(R^\updagger) \cong KH_n(A)\). In particular, for these algebras, the stabilised and unstable overconvergent analytic \(K\)-theories coincide. \end{corollary} \begin{proof} We have \[\tilde{K}_n^{\mathrm{an},\updagger}(R^\updagger) = K_n^{\mathrm{an},\updagger}(\mathcal{M}^{\mathrm{cont}} \hot R^\updagger) \cong KH_n(\mathbb{M}_\infty(A)) \cong KH_n(A),\] where the first isomorphism follows from Theorem \ref{thm:reduction-mod-p}, and the second from the \(\mathbb{M}_\infty\)-stability of homotopy algebraic \(K\)-theory. \qedhere \end{proof} We now have functors \(\tilde{K}_n^{\mathrm{an}, \updagger} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathsf{Mod}_\Z\) which satisfy dagger homotopy invariance, \(\mathcal{M}^{\mathrm{cont}}\)-stability and excision for semi-split extensions of complete, torsionfree bornological \(\dvr\)-algebras. Recall that a functor \(F \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{A}\) from the category of complete, torsionfree bornological \(\dvr\)-algebras to an abelian category is called \textit{half-exact} (or \textit{homological}) if it maps a semi-split extension \(A \rightarrowtail B \twoheadrightarrow C\) to a sequence \(F(A) \to F(B) \to F(C)\) that is exact at \(F(B)\). \begin{theorem}\label{thm:KH-kk} Let \(F \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \mathcal{A}\) be a half-exact, \(\mathcal{M}^{\mathrm{cont}}\)-stable, additive, homotopy invariant functor.
Then there is a unique homological functor \(\tilde{F} \colon \mathsf{kk}^{\mathrm{an}} \to \mathcal{A}\) such that \(\tilde{F} \circ j = F\). \end{theorem} \begin{proof} Consider a semi-split extension \(E \defeq A \rightarrowtail B \overset{f}\twoheadrightarrow C\) in \(\mathsf{Alg}_\dvr^\mathrm{tf}\), and \(\iota \colon \Omega(C) \to P_f\) the canonical inclusion. Then \(F\) sends the canonical map \(l \colon A \to P_f\) to an isomorphism, and for \(\delta_E^F \defeq F(l)^{-1} F(\iota)\), the following sequence is exact \begin{equation}\label{eq:G-extension} F(\Omega B) \to F(\Omega C) \overset{\delta_E^F}\to F(A) \to F(B) \overset{F(f)}\to F(C). \end{equation} The map \(\delta_l^F\) corresponding to the loop extension \(\Omega C \rightarrowtail PC \twoheadrightarrow C\) is the identity map~\(F(\Omega C) \to F(\Omega C)\). By the homotopy invariance of \(F\), if \(B\) is contractible, then \(\delta_E^F\) above is an isomorphism. In particular, since \(\tens C\) is contractible for any \(C \in \mathsf{Alg}_\dvr^\mathrm{tf}\), we get that \(\delta_u^F\) is an isomorphism for the universal extension of \(C\). By Equation \ref{eq:G-extension}, we see that \(F(\varrho)\) is an isomorphism, where \(\varrho \colon \jens C \to \Omega C\) is the canonical map. Composing with the image of a representative \(c_E \colon \jens C \to A\) in \(\mathsf{kk}^{\mathrm{an}}\) of the classifying map, we get that the connecting map is \(\delta_E^F = F(c_E) F(\varrho)^{-1}\) for any semi-split extension \(E\). Since the connecting maps in \(\mathsf{kk}^{\mathrm{an}}\) are \(\delta_E = c_E \circ \varrho^{-1}\), we are forced to define \(\tilde{F}(\delta_E) \defeq F(c_E)\circ F(\varrho)^{-1}\). Now as the proof of Theorem \ref{thm:kk-initial} shows, given a class \(\alpha \in \mathsf{kk}^{\mathrm{an}}(A,B)\), there is a unique way to define \(\tilde{F}(\Omega^n\alpha)\).
Let \(\tau \colon \Omega \coma{\Sigma} \to \coma{\Sigma} \Omega\) be the natural isomorphism, and let \(\delta_c \in \{\Omega \coma{\Sigma}(A), A\}\) be the connecting map for the cone extension \(\mathcal{M}^{\mathrm{cont}}(A) \rightarrowtail \coma{\Gamma}(A) \twoheadrightarrow \coma{\Sigma}(A)\). Then the hypotheses on \(F\) imply that \(\delta_c^F\) is an isomorphism. Furthermore, the class of \(\delta_c \tau\) in \(\mathsf{kk}^{\mathrm{an}}(\coma{\Sigma} A, A)\) is a natural isomorphism. Consequently, we must have \[\tilde{F}(\alpha) = \delta_c^F F(\tau) \tilde{F}(\Omega^n \alpha)F(\tau^{-1})(\delta_c^F)^{-1},\] which yields a functorial assignment. It remains to check that \(\tilde{F} \colon \mathsf{kk}^{\mathrm{an}} \to \mathcal{A}\) is a homological functor. The distinguished triangles in \(\mathsf{kk}^{\mathrm{an}}\) are those of the form \(\Omega A \overset{\Omega f}\to \Omega B \to P_f \to A\) for a bounded algebra homomorphism \(f \colon A \to B\). So it suffices to check that \[F(\Omega A) \overset{F(\Omega f)}\to F(\Omega B) \to F(P_f) \to F(A) \to F(B)\] is exact. This has already been checked everywhere, except at \(F(A)\), which follows from comparing the sequence above with the path sequence at \(\coma{\Sigma}\Omega f\). \end{proof} By Theorem \ref{cor:KH-properties}, \(\tilde{K}_0^{\mathrm{an},\updagger}\) satisfies the hypotheses of Theorem \ref{thm:KH-kk}. Consequently, there is a natural map \begin{equation}\label{eq:main} \mathsf{kk}^{\mathrm{an}}_0(\dvr, A) \to \Hom(\tilde{K}_0^{\mathrm{an},\updagger}(\dvr), \tilde{K}_0^{\mathrm{an},\updagger}(A)). \end{equation} \begin{theorem}\label{thm:kk=KH} The map in Equation \ref{eq:main} is an isomorphism for all complete, torsionfree bornological \(\dvr\)-algebras. \end{theorem} \begin{proof} Let \(A\) be a unital complete, torsionfree bornological \(\dvr\)-algebra. Then any class in \(K_0(A)\) comes from an idempotent \(e \in \mathcal{M}^{\mathrm{triv}}(A) \subseteq \mathcal{M}^{\mathrm{cont}}(A)\).
This yields a bounded \(\dvr\)-algebra homomorphism \(\dvr \to \mathcal{M}^{\mathrm{cont}}(A)\). Since \(\mathsf{kk}^{\mathrm{an}}\) is in particular \(\mathbb{M}_2\)-stable, similar idempotents yield the same map in \(\mathsf{kk}^{\mathrm{an}}_0(\dvr, A)\). Consequently, we obtain a well-defined natural map \[K_0(A) \to \mathsf{kk}^{\mathrm{an}}_0(\dvr, A)\] for a unital algebra \(A\). To extend this natural map to non-unital algebras in \(\mathsf{Alg}_\dvr^\mathrm{tf}\), we apply Theorem \ref{thm:excision-1} to the extension \(A \rightarrowtail \tilde{A} \twoheadrightarrow \dvr\). Replacing \(A\) by \(\mathcal{M}^\mathrm{cont}(A)\), the map \(K_0(A) \to \mathsf{kk}^{\mathrm{an}}_0(\dvr, A)\) induces a map \begin{multline*} \tilde{K}_0^{\mathrm{an},\updagger}(A) = K_0^{\mathrm{an},\updagger}(\mathcal{M}^{\mathrm{cont}}(A)) = \varinjlim_n K_0(\coma{\Sigma}^n \Omega^n(\mathcal{M}^\mathrm{cont}(A))) \\ \longrightarrow \varinjlim_n \mathsf{kk}^{\mathrm{an}}_0(\dvr, \coma{\Sigma}^n \Omega^n(\mathcal{M}^\mathrm{cont}(A))) = \mathsf{kk}^{\mathrm{an}}_0(\dvr, \mathcal{M}^\mathrm{cont}(A)) \cong \mathsf{kk}^{\mathrm{an}}_0(\dvr, A), \end{multline*} which we call \(\alpha\). For the map in the other direction, let \(e \in K_0^{\mathrm{an},\updagger}(\dvr) \cong K_0(\dvr) \cong K_0(\resf)\) be the canonical generator, where the isomorphism is a special case of Proposition \ref{prop:regularity} below. 
By the universal property of \(\mathsf{kk}^{\mathrm{an}}\), there is a natural map \[ \beta \colon \mathsf{kk}^{\mathrm{an}}_0(\dvr, A) \to \Hom( \tilde{K}_0^{\mathrm{an},\updagger}(\dvr), \tilde{K}_0^{\mathrm{an},\updagger}(A)) \cong \tilde{K}_0^{\mathrm{an},\updagger}(A),\] which maps the class of a bounded algebra homomorphism \[\theta \colon \jens^n(\dvr) \to \mathbb{M}_r \hot \mathcal{M}^{\mathrm{cont}}(A^{\mathsf{sd}^p S^n})\] to the image of \(e\) under the map \[\delta_l^{-n} \delta_c^n \tilde{K}_0^{\mathrm{an},\updagger}(\coma{\Sigma}^n \theta) \delta_c^{-n} \delta_u^n \colon \tilde{K}_0^{\mathrm{an},\updagger}(\dvr) \to \tilde{K}_0^{\mathrm{an},\updagger}(A).\] The proof that the two maps \(\alpha\) and \(\beta\) are inverse to each other goes through verbatim from the purely algebraic case (see \cite{Cortinas-Thom:Bivariant_K}*{Theorem 8.2.1}). \end{proof} We end this section by comparing analytic \(K\)-theory with our version of negative \(K\)-theory for \(n \leq 0\), and the analytic Karoubi-Villamayor groups \(KV_n^\mathrm{an}\) for \(n \geq 0\). Recall that a ring \(A\) is called \textit{\(K_n\)-regular} if the canonical map \(A \to A[x_1,\dotsc, x_m]\) induces an isomorphism \[K_n(A) \to K_n(A[x_1,\dotsc,x_m])\] in negative \(K\)-theory for all \(m \geq 1\). Vorst's Theorem says that if a ring is \(K_0\)-regular, it is already \(K_n\)-regular for \(n \leq 0\). Examples of \(K_n\)-regular rings are, of course, regular rings. In characteristic zero, an excellent Noetherian \(k\)-algebra \(A\) that is \(K_{\mathrm{dim}(A) + 1}\)-regular is regular \cites{MR2373359,kerz2021towards}. In positive characteristic, recent results \cites{kerz2021towards,geisser2012conjecture} indicate similar partial converses. \begin{proposition}\label{prop:regularity} Let \(R\) be a Banach \(\dvr\)-algebra such that \(A=R/\dvgen R\) is \(K_n\)-regular for \(n\leq 0\). 
Then \[ K_n^{\mathrm{an},\updagger}(R) = \begin{cases} KV_n(R) \quad \text{ for } n \geq 1; \\ K_0(\coma{\Sigma}^n(R)) \quad \text{ for } n \leq 0. \end{cases} \] \end{proposition} \begin{proof} By Theorem \ref{thm:reduction-mod-p}, we have \[K_n^{\mathrm{an},\updagger}(R) \cong KH_n(A) \cong \begin{cases} KV_n(A) \quad \text{ for } n \geq 1; \\ K_0(\Sigma^n(A)) \quad \text{ for } n \leq 0, \end{cases}\] where the second isomorphism follows from the \(K_n\)-regularity hypothesis. Now since \(\coma{\Sigma}(R) = \coma{\Sigma} \hot R\) is \(\dvgen\)-adically complete, by \cite{kbook}*{Lemma II.2.2}, we have \(K_0(\coma{\Sigma}(R)) \cong K_0(\Sigma(A))\). \end{proof} \section{Chern characters by lifting to \(\dvf\)-vector spaces} We conclude this article by essentially summarising the different relationships between bivariant \(K\)-theory and analytic cyclic homology for \(\resf\)-algebras and torsionfree \(\dvr\)-algebras. \subsection{The Chern character on bivariant algebraic \(K\)-theory:} Recall that the algebraic bivariant \(K\)-theory constructed in \cite{Cortinas-Thom:Bivariant_K} is the universal functor \(j \colon \mathsf{Alg}_l \to kk\) into a triangulated category satisfying polynomial homotopy invariance, \(\mathbb{M}_\infty\)-stability and excision. Here \(l\) is a commutative, unital ring. In particular, it applies to the case we are interested in, namely the residue field \(l = \resf\) of the discrete valuation ring \(\dvr\). The correct target for this is the analytic cyclic homology complex \(\mathbb{HA} \colon \mathsf{Alg}_\resf \to \overleftarrow{\mathsf{Kom}(\mathsf{Ind}(\mathsf{Ban}_\dvf))}\), which satisfies all the aforementioned properties. By the universal property of \(kk\), we get group homomorphisms \[ \mathrm{ch}_n \colon kk_n(A,B) \to \HA_n(A,B) \] for \(A\), \(B \in \mathsf{Alg}_\resf\), \(n \in \Z\). 
When \(A = \resf\), the left hand side is Weibel's homotopy \(K\)-theory \(KH_*(B)\) by \cite{Cortinas-Thom:Bivariant_K}*{Theorem 8.2.1}, while the right hand side is the analytic cyclic homology \(\HA_*(B)\) by \cite{Meyer-Mukherjee:HA}*{Theorem 3.10}. So we get group homomorphisms \(KH_n(B) \to \HA_n(B)\) for each \(n \in \Z\). Since the right hand side consists of \(\dvf\)-vector spaces, we summarily have \(\dvf\)-linear maps \begin{equation}\label{eq:algebraic-chern} KH_n(B) \otimes_\Z \dvf \to \HA_n(B) \end{equation} for each \(n \in \Z\). As already mentioned in Example \ref{ex:analytic-Chern}, by the universal property of \(\mathsf{kk}^{\mathrm{an}}\) and the properties of the functor \(\mathbb{HA} \colon \mathsf{Alg}_\dvr^\mathrm{tf} \to \overleftarrow{\mathsf{Kom}(\mathsf{Ind}(\mathsf{Ban}_\dvf))}\), we get group homomorphisms \[\mathrm{ch}_n \colon \mathsf{kk}^{\mathrm{an}}_n(R,S) \to \HA_n(R,S)\] for each \(n \in \Z\) and \(R\), \(S \in \mathsf{Alg}_\dvr^\mathrm{tf}\). Setting \(R = \dvr\), Theorem \ref{thm:kk=KH} and \cite{Cortinas-Meyer-Mukherjee:NAHA}*{Section 3.1} yield group homomorphisms \[\tilde{K}_n^{\mathrm{an}, \updagger}(S) \to \HA_n(S)\] for each \(n \in \Z\). Since the right hand side is an \(\dvf\)-vector space, we get \(\dvf\)-linear maps \begin{equation}\label{eq:analytic-chern-char} \mathrm{ch}_n \otimes \dvf \colon \tilde{K}_n^{\mathrm{an}, \updagger}(S) \otimes_\Z \dvf \to \HA_n(S) \end{equation} for each \(n \in \Z\). \begin{remark}\label{bootstrap-class} In the complex topological case, we get similar Chern characters \[\mathrm{ch}_n^{\mathrm{top}} \colon K_n^{\mathrm{top}}(A) \otimes_\Z \C \to \mathrm{HL}_n(A)\] from topological \(K\)-theory to local cyclic homology. This map is an isomorphism for separable \(C^*\)-algebras in the \(C^*\)-algebraic bootstrap class (see \cite{Meyer:HLHA}*{Theorem 7.7}). 
This is unlikely to be true in the nonarchimedean case because the left hand side could have nontrivial (and non-isomorphic) groups for each \(n \in \Z\), while the right hand side is \(2\)-periodic by construction. To address this, we take the product periodification \(\tilde{K}^{\mathrm{an}, \updagger}(S)_{\ev} = \prod_{n \in \Z} \tilde{K}_{2n}^{\mathrm{an}, \updagger}(S)\) (resp. \(\tilde{K}^{\mathrm{an}, \updagger}(S)_{\odd} = \prod_{n \in \Z} \tilde{K}_{2n+1}^{\mathrm{an}, \updagger}(S)\)) on the left hand side and get maps \[\tilde{K}^{\mathrm{an}, \updagger}(S)_\ev \to \HA_0(S) \quad \text{ and } \quad \tilde{K}^{\mathrm{an}, \updagger}(S)_{\odd} \to \HA_1(S).\] \end{remark} \subsection{From analytic \(K\)-theory to its reduction mod \(\dvgen\)} The reduction mod \(\dvgen\) of a torsionfree bornological \(\dvr\)-algebra \(\mathsf{Alg}_\dvr^\mathrm{tf} \overset{\otimes_\dvr \resf}\to \mathsf{Alg}_\resf\) induces an obvious functor \(\mathsf{kk}^{\mathrm{an}} \to kk\). On the cyclic homology side, suppose \(A\) is an \(\resf\)-algebra and \(D\) is a dagger algebra that is fine mod \(\dvgen\) and satisfies \(D/\dvgen D \cong A\). When \(A\) is smooth and commutative, there always exists a smooth, commutative \(\dvr\)-algebra lifting \(R\) such that \(R/\dvgen R \cong A\). Taking the dagger completion and equipping it with the compactoid bornology ensures that the quotient bornology on \(A\) is fine. Once we have such a dagger algebra lifting that is fine mod \(\dvgen\), we get a weak equivalence \(\mathbb{HA}(D) \cong \mathbb{HA}(A)\) by \cite{Meyer-Mukherjee:HA}*{Theorem 5.5}, for the model structure constructed in \cite{mukherjee2022quillen}. As a consequence, we get \(\HA_n(R, S) \cong \HA_n(A, B)\) for each \(n \in \Z\), where \(R\) and \(S\) are dagger algebra liftings that are fine mod \(\dvgen\). 
Summarily, we have a diagram \[ \begin{tikzcd} \mathsf{kk}^{\mathrm{an}}_n(R,S) \arrow{r}{\mathrm{ch}_n} \arrow{d}{\otimes_\dvr \resf} & \HA_n(R,S) \arrow{d}{\cong} \\ kk_n(A,B) \arrow{r}{\mathrm{ch}_n} & \HA_n(A,B) \end{tikzcd} \] of abelian groups for each \(n\in \Z\). Setting \(R = \dvr\) and \(A = \resf\), we get \begin{equation}\label{eq:kk-HA-chern} \begin{tikzcd} \tilde{K}_n^{\mathrm{an},\updagger}(S) \arrow{r}{\mathrm{ch}_n} \arrow{d} & \HA_n(S) \arrow{d}{\cong} \\ KH_n(B) \arrow{r}{\mathrm{ch}_n} & \HA_n(B) \end{tikzcd} \end{equation} for each \(n \in \Z\). By Theorem \ref{thm:reduction-mod-p}, if \(S\) is contained in its \(\dvgen\)-adic completion, the left vertical map of \ref{eq:kk-HA-chern} is an isomorphism. \begin{remark} Referring again to Remark \ref{bootstrap-class}, the natural question again arises of when the periodified Chern characters \[\prod_{n \in \Z} KH_{2n}(B) \otimes_\Z \dvf \to \HA_0(B) \quad \text{ and } \prod_{n \in \Z} KH_{2n+1}(B) \otimes_\Z \dvf \to \HA_1(B)\] are isomorphisms for \(\resf\)-algebras \(B\). For \(\resf = \resf_p\), the finite field with \(p\) elements, the higher algebraic \(K\)-theory groups of \(\resf\) are finite torsion groups, so the above isomorphisms hold. More generally, we expect the result to hold for all algebras in the \textit{algebraic bootstrap class}, which is defined as the triangulated subcategory of \(kk\) generated by \(\resf\). Finally, if we consider algebras \(A\) in the algebraic bootstrap class that admit dagger algebra liftings \(D\) that reduce mod \(\dvgen\) to \(A\) with the fine bornology, in light of the diagram \ref{eq:kk-HA-chern}, we should get isomorphisms \begin{multline*} K^{\mathrm{an}, \updagger}(D)_\ev \cong \prod_{n \in \Z} KH_{2n}(A) \to \HA_0(A) \cong \HA_0(D) \\ K^{\mathrm{an}, \updagger}(D)_\odd \cong \prod_{n \in \Z} KH_{2n+1}(A) \to \HA_1(A) \cong \HA_1(D) \end{multline*} of \(\dvf\)-vector spaces. 
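The assertion for \(\resf_p\) above rests on Quillen's computation of the algebraic \(K\)-theory of finite fields, which we recall for convenience: \[K_0(\resf_q) \cong \Z, \qquad K_{2i-1}(\resf_q) \cong \Z/(q^i - 1), \qquad K_{2i}(\resf_q) = 0 \quad \text{for } i \geq 1.\] In particular, all higher \(K\)-groups of \(\resf_q\) are finite, and therefore vanish after tensoring with the characteristic zero field \(\dvf\).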
The algebraic bootstrap class for algebras over arbitrary commutative rings is presently being investigated by Guillermo Corti\~nas. \end{remark} \begin{bibdiv} \begin{biblist} \bibselect{References} \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Cold diatomic molecules are often produced from atomic gases by varying a magnetic field around a zero-energy resonance~\cite{donley02, regal03, review} or by photo\-association~\cite{doyle}. Generally applicable implementations of the former technique are linear ramps of the magnetic field across the resonance~\cite{regal03, str03, cub03, xu03, herbig03, gre03, duerr04prl, muk04, duerr04pra, volz05, thompson05b}, and fast switches to fields close to the resonance~\cite{donley02}. In addition, the long lifetime of cold $^6$Li$_2$ dimers allows them to be created by holding the magnetic field close to resonance, causing thermalisation of the atomic gas into a molecular gas~\cite{jochim03}. In recent experiments, dimers of $^{85}$Rb$_2$~\cite{thompson05} and \mbox{$^{85}$Rb-$^{87}$Rb~\cite{papp06}} were associated from cold atomic gases by applying a magnetic field modulation resonant with the molecular binding energy. This technique eliminates the need for the magnetic field to spend time in the near-resonant, strongly interacting region. It therefore reduces the unwanted effect of heating~\cites{thompson05, hodby05} during the production of molecules. The narrow Fourier spectrum of the pulse accurately targets the molecular state, minimising the coupling to deeper bound states and highly energetic continuum states. In a direct comparison to a linear ramp using the same apparatus, Thompson \textit{et~al.} reported more efficient conversion using resonant association~\cite{thompson05}. This technique has also been used as an accurate probe of molecular binding energy~\cites{thompson05, papp06}. In addition, radio-frequency pulses have been used to associate \mbox{$^{40}$K-$^{87}$Rb} dimers~\cite{ospelkaus06}. In this paper we study the resonant association of molecules from thermal and condensed gases. Our approach precisely accounts for the continuum of states in a gas. 
The transition amplitude from a pair of unbound atoms to the bound molecular state depends on the relative kinetic energy of the atom pair. A resonant continuum energy exists, at which the transition amplitude to the molecular state increases linearly with time. At small modulation amplitudes, the resonant continuum energy is given by the difference between the energy corresponding to the modulation frequency and the molecular binding energy. The distribution of atoms in different continuum states, all contributing to the molecular production, gives the total conversion a dependence on temperature. The width of the thermal distribution increases with the temperature of the gas, weakening the resonant behaviour of the molecular production. The continuum distinguishes the current case from the association of atom pairs held in optical lattices~\cite{bertelsen06}, where the resonant modulation couples the discrete ground state of the tightly confining potential to the molecular bound state. We find damped oscillations in the number of molecules produced in the short-time limit, as observed in Ref.~\cite{thompson05}. The damping is caused by the dephasing of the transition amplitudes from states across the continuum. After the damping out of the initial oscillations, the conversion increases at a rate which displays resonant dependence on the modulation frequency. Maximal conversion is achieved when the frequency and amplitude of the magnetic field modulation are together optimised for the temperature and density of the gas. The modulation amplitude required depends on the sensitivity of the molecular state to the magnetic field. As the modulation amplitude increases, states of lower continuum energy are resonantly coupled, until the zero-energy continuum state is reached. Beyond this point, all continuum states are coupled in a non-resonant manner. 
There remain some modulation amplitudes where, for momenta close to the peak of the thermal distribution, the transition amplitude is large enough to lead to a revival in conversion efficiency. Our calculations of molecular production for binding energies ranging from \mbox{$h \times 5\:\p{kHz}$} to \mbox{$h \times 1\:\p{MHz}$} predict that resonant association can be effective over this range. We also examine short pulses in pure condensates, where the mean-field shift and the excitation of higher modes alter the dynamics. In condensates, the damping of oscillations in conversion due to the dephasing of the transition amplitudes from different continuum states is suppressed. In Sec.~\ref{sec:numerics} we introduce the magnetic field profile used for resonant association, and also set up the notation used in our calculations. We then discuss the dynamics of the transition amplitude for a pair of atoms to a molecule, and the dependence this has on the continuum, in Sec.~\ref{sec:continuum}. In Sec.~\ref{sec:dependence} we examine in turn the effects of altering the duration, frequency and amplitude of the modulation on the efficiency of the molecular production. We also discuss the dependence of the conversion efficiency on the temperature and density of the atomic gas. In each section we discuss the results of Thompson \textit{et al.}~\cite{thompson05}, which formed the original motivation for our studies, and then consider resonant association under a broader range of conditions. We conclude in Sec.~\ref{sec:conclusion}. \section{Magnetic field sequence} \label{sec:numerics} In this section we introduce the magnetic field sequence used in resonant association, as well as the notation used in this paper. Our calculations of molecular production from thermal gases use a two-channel approach~\cite{child74, moerdijk95, drummond98, timmermans99, mies00}. 
In the implementation of Ref.~\cites{goral04}, the two-channel, two-body Hamiltonian for the case of a time-varying magnetic field $B(t)$ is given by \begin{align} H_\p{2B}(B(t))&=|\p{bg}\rangle H_\p{bg}\langle\p{bg}| +W|\p{bg}\rangle\langle\p{cl}|\nonumber\\ &+|\p{cl}\rangle\langle\p{bg}|W +|\p{cl}\rangle H_\p{cl}(B(t)) \langle\p{cl}|\, . \label{eq:h2b} \end{align} Here, $W$ is the interchannel coupling between the entrance and closed channel spin configurations which are indicated by `bg' and `cl', respectively. Choosing the zero of energy to coincide with the dissociation threshold of the entrance channel makes the closed-channel Hamiltonian contain all of the magnetic field dependence of $H_\p{2B}(B(t))$. We make the single-resonance approximation~\cite{goral04}, neglecting all closed-channel states which are far detuned from $E = 0$. The single closed-channel state retained is referred to as the resonance level $\left|\phi_\p{res}\right>$, and is degenerate with the entrance-channel dissociation threshold at the field strength $B_{\p{res}}$, i.e. $E_{\p{res}}(B_{\p{res}})~=~0$. The closed-channel Hamiltonian is then given by \begin{eqnarray} H_{\p{cl}}(B(t)) & = & \left|\phi_{\p{res}}\right>\big[E_{\p{res}}^{\p{av}} + E_{\p{res}}^{\p{mod}}(t)\big]\left<\phi_{\p{res}}\right| . \label{eq:eres} \end{eqnarray} Here $E_{\p{res}}^{\p{av}} = \frac{\partial E_{\p{res}}}{\partial B}(B_{\p{av}} - B_{\p{res}})$, $E_{\p{res}}^{\p{mod}}(t) = \frac{\partial E_{\p{res}}}{\partial B}[B(t) - B_{\p{av}}]$, $B_\p{av}$ is the average magnetic field during the pulse, and $\frac{\partial E_{\p{res}}}{\partial B}$ is the difference in magnetic moment between the entrance and closed channels. The measurable location $B_0$ of the singularity in the scattering length $a$ is shifted from $B_\p{res}$ by the interchannel coupling. 
\begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{pulse_revised_forpaper.eps} \caption{(Color online) A schematic of the magnetic field sequence used in Ref.~\cite{thompson05}. The ramps before and after the resonant pulse were shown not to create molecules. We assume a jump in $B(t)$ at $t_{\p{f}}$ when $B(t_{\p{f}}) \neq B_{\p{av}}$.} \label{fig:pulse} \end{figure} A schematic of the magnetic field sequence used in the experiments of Thompson \textit{et al.}~\cite{thompson05} is shown in \reffig{fig:pulse}, which also summarises our notation. Gases of $^{85}$Rb\: atoms in the $(F = 2, m_\p{F} = -2)$ excited Zeeman state were prepared at 162\,G. Experiments were performed using thermal gases of temperature \mbox{$T = 20$\,-\,80\,nK}, as well as partially and wholly condensed gases. Following a \mbox{5\,ms} ramp to an average field in the range \mbox{$156 - 157$\,G}, a sinusoidal magnetic field pulse was applied for a duration of up to \mbox{38\,ms}. The pulse resonantly associated molecules in the highest excited vibrational bound state, which we term Fesh\-bach molecules. The magnetic field was then ramped back to \mbox{162\,G} in \mbox{5\,ms}, and the gas held until all molecules were lost due to spin relaxation~\cites{thompson05, thompson05b, koehler05}. Absorption imaging before and after this sequence showed the depletion of the atomic gas. In our calculations of molecular conversion from thermal gases we neglect the \mbox{5\,ms} ramps on either side of the pulse, which were found in Ref.~\cite{thompson05} not to lead to molecular production. We have verified that including the ramps in our calculations causes only a negligible difference to the final result. The magnetic field pulse is taken to have the form $B(t) = B_\p{av} + B_\p{mod}\sin(\omega_\p{mod}t)$, where $B_\p{mod}$ and $\omega_\p{mod}$ are the amplitude and angular frequency of the modulation, as shown in \reffig{fig:pulse}. 
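This pulse form, together with the splitting of the resonance-level energy into \(E_{\p{res}}^{\p{av}}\) and \(E_{\p{res}}^{\p{mod}}(t)\) in \refeq{eq:eres}, can be illustrated with a short numerical sketch. The values of \(\partial E_{\p{res}}/\partial B\) and \(B_{\p{res}}\) below are arbitrary placeholders rather than fitted \(^{85}\)Rb parameters; only \(B_\p{av}\), \(B_\p{mod}\) and \(\omega_\p{mod}\) are taken from the pulses considered later.

```python
import math

# Sketch of Eq. (eq:eres): during the pulse, the resonance-level energy
# splits into a static part E_res^av and a modulated part E_res^mod(t).
# dE_dB and B_res are illustrative placeholders, not 85Rb parameters.
dE_dB = 1.0                   # dE_res/dB, difference of magnetic moments (assumed units)
B_res = 155.0                 # field where the resonance level crosses threshold (G)
B_av, B_mod = 156.45, 0.065   # average field and modulation amplitude (G)
w_mod = 2 * math.pi * 6.5e3   # angular modulation frequency (rad/s)

def B(t):
    """Sinusoidal pulse B(t) = B_av + B_mod * sin(w_mod * t)."""
    return B_av + B_mod * math.sin(w_mod * t)

E_res_av = dE_dB * (B_av - B_res)   # static detuning of the resonance level

def E_res_mod(t):
    return dE_dB * (B(t) - B_av)    # oscillating part, zero on time average

# At every instant the two parts recombine to dE_res/dB * (B(t) - B_res).
t = 1.3e-4
assert abs(E_res_av + E_res_mod(t) - dE_dB * (B(t) - B_res)) < 1e-12
```

The decomposition is exact by construction; it is the oscillating part \(E_{\p{res}}^{\p{mod}}(t)\) that acts as the time-dependent drive in the perturbative treatment below.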
$B(t)$ is assumed to satisfy $B(0) = B(t_{\p{f}}) = B_{\p{av}}$. Where $B(t_{\p{f}}) \neq B_{\p{av}}$, a sudden jump at $t_{\p{f}}$ returning to $B_{\p{av}}$ is assumed, as illustrated in \reffig{fig:pulse}. Consequently, we analyze the initial and final states in terms of the eigenstates of $H_\p{2B}^\p{av} = H_\p{2B}(B_\p{av})$. The local density approximation is valid for weakly confining traps such as that used by Thompson \textit{et al.}~\cites{thompson05, cornish00}, and for simplicity we assume a homogeneous gas. The eigenstate describing the continuum state of a free pair of atoms with relative momentum $\mb{p}$ is the dressed scattering state $|\phi_\mb{p}^\p{av}\rangle$ satisfying \mbox{$H_\p{2B}^\p{av}|\phi_\mb{p}^\p{av}\rangle = (p^{2}/m)|\phi_\mb{p}^\p{av}\rangle$}, where $m$ is the atomic mass. The boundary conditions of $|\phi_\mb{p}^\p{av}\rangle$ correspond to an incoming plane wave plus an outgoing spherical wave~\cite{taylor72, goral04}. The Fesh\-bach molecular state at $B_\p{av}$, $|\phi_{\p{b}}^{\p{av}}\big>$\:, satisfies \mbox{$H_\p{2B}^\p{av}|\phi_\p{b}^\p{av}\rangle = E_\p{b}^\p{av}|\phi_\p{b}^\p{av}\rangle$}, where $E_\p{b}^\p{av} = E_\p{b}(B_\p{av})$ is the molecular bound state energy~\cite{goral04}. \section{Dynamics of an atom pair} \label{sec:continuum} In this section we discuss the two-body dynamics of a pair of atoms in the presence of a continuum, due to the magnetic field pulse introduced above. Numerically integrating the Schr\"{o}dinger equation associated with the Hamiltonian of \refeq{eq:h2b} for a particular pulse gives the two-body time evolution operator $U_{\p{2B}}(t_{\p{f}},0)$, linked to $H_{\p{2B}}(B(t))$ by \begin{align} i\hbar\frac{\partial}{\partial t}U_{\p{2B}}(t,t') =H_{\p{2B}}(B(t))U_{\p{2B}}(t,t') \, . 
\label{eq:u2b} \end{align} After the pulse, the wavefunction of a pair of atoms that were initially in the state $|\phi_\mb{p}^\p{av}\rangle$ has an overlap with the Fesh\-bach molecular state given by the transition amplitude \begin{equation} T(p, t_{\p{f}}) = \langle\phi_\p{b}^\p{av}|U_{\p{2B}}(t_{\p{f}},0)|\phi_\mb{p}^\p{av}\rangle. \label{eq:T} \end{equation} This in turn gives the probability density for a transition between a free pair of atoms of relative momentum $\mb{p}$ and a Fesh\-bach molecule to be \begin{equation} \rho(p,t_{\p{f}}) = \left|T(p,t_{\p{f}})\right|^{2} \, . \label{eq:rho} \end{equation} In this paper we consider resonances where a spherically symmetric resonance level is coupled to the entrance channel by spin exchange~\cite{review}. Consequently, the transition amplitude and probability density depend only on the modulus of the momentum. The transition probability density gives the dynamics of only one state in the continuum. This is to be distinguished from the conversion itself, which includes the contributions of all the continuum states. In the limit of short times and a small modulation amplitude, $U_{\p{2B}}(t_{\p{f}},0)$ may be approximated using time-dependent perturbation theory. Treating the oscillating component of the Hamiltonian of \refeq{eq:eres} as a perturbation to $H_\p{2B}^\p{av}$ gives the first order approximation to the two-body evolution operator: \begin{align} U_\p{2B}^{(1)}(t_\p{f},0) & \approx U_{\p{2B}}^\p{av}(t_\p{f}) \nonumber \\ & + \frac{1}{i\hbar}\int_0^{t_\p{f}} dt \: U_\p{2B}^\p{av}(t_\p{f} - t)|\phi_\p{res}\rangle E_\p{res}^\p{mod}(t)\langle \phi_\p{res} | U_{\p{2B}}^{\p{av}}(t) \, . \label{eq:U2B1} \end{align} Here, the two-body evolution operator at $B_\p{av}$ is given by $U_{\p{2B}}^{\p{av}}(t) = \exp(-i H_\p{2B}^\p{av}t/\hbar)$. 
Projecting the estimate of \refeq{eq:U2B1} onto $\langle\phi_{\p{b}}^{\p{av}} |$ on the left and $|\phi_{\mb{p}}^\p{av}\rangle$ on the right gives an approximation to the transition amplitude of \refeq{eq:T}: \begin{align} T^{(1)}(p,t_\p{f}) &= -\frac{1}{2\hbar}\frac{\partial E_{\p{res}}}{\partial B}B_{\p{mod}}e^{-i E_\p{b}^\p{av} t_\p{f}/\hbar} C(p) \, \nonumber \\ &\times\bigg[ e^{i\omega_+t_\p{f}} \frac{\sin(\omega_+t_\p{f})}{\omega_+} - e^{i\omega_-t_\p{f}} \frac{\sin(\omega_-t_\p{f})}{\omega_-} \bigg]\, . \label{eq:T1} \end{align} Here \begin{align} \omega_\pm= \left( E_\p{b}^\p{av}\pm\hbar\omega_\p{mod}-p^2/m \right)/(2\hbar) \,, \label{eq:wpm} \end{align} and $C(p)= \big<\phi_{\p{b}}^{\p{av}}\big|\phi_{\p{res}}\big>\big<\phi_{\p{res}}|\phi_\mb{p}^\p{av}\rangle$ is the product of the overlaps of the resonance state with the bound and scattering states at $B_{\p{av}}$. Since $E_\p{b}^\p{av}$, $\hbar \omega_{\p{mod}}$ and $p^{2}/m$ can all be of the same order of magnitude, it is not in general possible to make the rotating wave approximation and neglect the $\omega_-$ term in \refeq{eq:T1}. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{matrixtime.eps} \caption{(Color online) Evolution of the transition probability density \mbox{$\rho(p,t)$}\: in a $^{85}$Rb\:gas for near-resonant continuum energies. The resonance condition of \refeq{eq:res} is fulfilled at \mbox{$p^{2}/m = h \times 0.73$\,kHz}. The solid lines show the numerical results, and the dashed lines show the perturbative estimate of \refeq{eq:T1}. The transition probability densities shown are labelled with the energy of the continuum state in $h \,\times\,$kHz. The inset shows the evolution of \mbox{$\rho(p,t)$}\: for the \mbox{0.98\,kHz} continuum state at longer times. 
Here, \mbox{$B_{\p{av}} = 156.45$\,G}, \mbox{$B_{\p{mod}} = 0.065$\,G}, \mbox{$E_{\p{b}}^{\p{av}}/h = -5.77$\,kHz}, and \mbox{$\omega_{\p{mod}}/2\pi = 6.5$\,kHz.}} \label{fig:mat_res} \end{figure} As implied by the first-order estimate of \refeq{eq:T1}, the fastest growth in the transition probability density \mbox{$\rho(p,t)$}\: for small $B_\p{mod}$ occurs for the resonant continuum energy $p_\p{res}^2/m$, which satisfies \begin{equation} E_\p{b}^\p{av} + \hbar\omega_{\p{mod}} - \frac{p_{\p{res}}^{2}}{m} = 0 \, . \label{eq:res} \end{equation} This corresponds to the sum of the relative kinetic energy of the atom pair and the molecular binding energy $|E_\p{b}^\p{av}|$ being exactly matched by the modulation frequency. As shown in \reffig{fig:mat_res}, the growth in the transition probability density \mbox{$\rho(p,t)$}\: for \mbox{$p^{2}/(mh) = 0.73$\,kHz} is quadratic. This corresponds to the transition amplitude of \refeq{eq:T} having a linearly increasing amplitude, similar to a resonantly driven harmonic oscillator. At sufficiently short times, \mbox{$\rho(p,t)$}\: grows quadratically for states detuned from the resonant continuum energy. \mbox{Figure~\ref{fig:mat_res}} shows that \mbox{$\rho(p,t)$}\: displays behaviour different to $\rho(p_{\p{res}},t)$ after a time of order \mbox{$\hbar/(|p^{2} - p_{\p{res}}^{2}|/m)$}. The inset of \reffig{fig:mat_res} shows the oscillatory nature of \mbox{$\rho(p,t)$}\:, reproduced at short times by the analytic estimate of \refeq{eq:T1}. The numerical approach, accounting precisely for the continuum of states, yields an envelope in the oscillation amplitude as well as a frequency shift from the estimate of \refeq{eq:T1}. The thermal gases measured in Ref.~\cite{thompson05} had temperatures in the range 20\,-\,80\,nK. The energy $k_\p{B} \times 50$\,nK corresponds to $h \times 1$\,kHz. 
Consequently, the phase difference between the continuum states spread over the thermal distribution becomes significant after times of the order of milliseconds. \section{Experimental parameters affecting the conversion efficiency} \label{sec:dependence} In this section we study the variation of the molecular production with the duration, frequency and amplitude of the magnetic field modulation, and the temperature and density of a thermal or fully condensed gas. We refer to the fraction of atoms converted to molecules as the conversion efficiency. In the limit of small depletion of a thermal atomic gas, this is given by a weighted average of the transition probability density $\rho(p,t_{\p{f}})$ over a Maxwell distribution: \begin{equation} \frac{2N_{\p{mol}}}{N} = 2n(2\pi\hbar)^{3}\left(\frac{\beta}{\pi m}\right)^{3/2} \int d\mb{p} \exp\left(\frac{-\beta p^{2}}{m}\right) \rho(p,t_{\p{f}}) \, . \label{eq:conv} \end{equation} Here, $\beta = 1/k_{\p{B}}T$, $n$ is the density of the atomic gas, $N$ is the initial number of atoms and $N_\p{mol}$ is the final number of molecules. In this limit the conversion efficiency is proportional to the density of the atomic gas. We note that this approach does not lead to saturation of the conversion efficiency, which would require the inclusion of genuinely many-body effects. This requires the solution of non-Markovian Boltzmann-like equations, whose Markov limit has previously been used to study the special case of saturation of molecular production from magnetic field ramps~\cite{williams06}. \subsection{Pulse duration} \label{sec:time} \subsubsection{Thermal gas} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{time.eps} \caption{(Color online) Conversion efficiency from a thermal $^{85}$Rb\: gas as a function of pulse duration, for a density of \mbox{$n = 10^{11}$\,cm$^{-3}$}. 
All other parameters are the same as those of the data represented in \reffig{fig:mat_res}, which has been thermally averaged according to \refeq{eq:conv} to give the conversion efficiency. The inset shows damped oscillations in the conversion efficiency, visible for \mbox{$T = 20$\,nK} but washed out for 50\,nK by the dephasing of the transition amplitudes from different continuum states. The dotted lines show the results given by thermally averaging the perturbative estimate of \refeq{eq:T1}. } \label{fig:time} \end{figure} We first consider the conversion efficiency from a thermal gas. The averaging of \refeq{eq:conv} gives a contribution from \mbox{$\rho(p,t)$}\: for each $p$, weighted according to the thermal distribution. \mbox{Figure~\ref{fig:time}} shows the resulting conversion efficiency for gases of 20, 50 and \mbox{80\,nK} as a function of pulse duration. The resonance condition of \refeq{eq:res} is fulfilled at a continuum energy of \mbox{$h\, \times\, 0.73$\,kHz}, which corresponds to \mbox{37\,nK}. Of the gas temperatures quoted in Ref.~\cite{thompson05}, \mbox{20\,nK} gives the highest conversion efficiency because the most atom pairs have energies close to the resonant continuum energy. In the experiments of Ref.~\cite{thompson05}, damped oscillations in the conversion efficiency as a function of time were observed over the first few milliseconds. In our calculations, damped oscillations are visible over approximately \mbox{2\,ms} for a temperature of \mbox{20\,nK}. We have verified for several values of $B_{\p{av}}$ and $\omega_{\p{mod}}$ that the frequency of the damped oscillations, $f_\p{conv}$, for the thermal gas case is close to $p_{\p{res}}^{2}/(mh)$. This is the value of $\omega_{+}/2\pi$ for \mbox{$p = 0$} in \refeq{eq:wpm}, corresponding to the detuning of the zero momentum state from the resonant continuum energy. 
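The numbers quoted above can be checked directly; the following sketch uses only the parameters of \reffig{fig:mat_res} and the exact SI values of \(h\) and \(k_\p{B}\).

```python
# Resonance condition, Eq. (eq:res), for the parameters of Fig. 2:
# E_b^av/h = -5.77 kHz and omega_mod/2pi = 6.5 kHz.
E_b_over_h = -5.77e3   # molecular bound-state energy over h (Hz)
f_mod = 6.5e3          # modulation frequency (Hz)

# In frequency units, eq:res gives p_res^2/(m h) = f_mod - |E_b^av|/h.
E_res_cont = E_b_over_h + f_mod
assert E_res_cont == 730.0          # h x 0.73 kHz, as quoted in the text

# Thermal energy scale: k_B x 50 nK in frequency units is about 1 kHz.
k_B = 1.380649e-23     # J/K (exact SI value)
h = 6.62607015e-34     # J s (exact SI value)
print(round(k_B * 50e-9 / h))       # 1042, i.e. roughly h x 1 kHz
```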
Increasing the temperature causes a negative shift in the frequency of the damped oscillations, together with faster damping. For 50 and \mbox{80\,nK} gases our calculations do not predict damped oscillations large enough to be observed. The main cause of damping is the variation in the oscillation frequency of \mbox{$\rho(p,t)$}\: with $p$, as shown in \reffig{fig:mat_res} and discussed in Sec.~\ref{sec:continuum}. A wider thermal distribution corresponds to a wider spread in momentum of the atom pairs contributing to the conversion efficiency, and so the initial coherence in \mbox{$\rho(p,t)$}\: across the distribution is destroyed more quickly. \subsubsection{Condensed gas} The critical temperature reported in Ref.~\cite{thompson05} is \mbox{14\,nK}, with average densities of order \mbox{$10^{11}\,$cm$^{-3}$}. For a density of \mbox{$10^{11}\,$cm$^{-3}$}, which we use in our thermal gas calculations, and a magnetic field of \mbox{156.45\,G}, the dilute gas parameter $\sqrt{n a^{3}}$ is 0.02. For such gases, which are close to condensation or partially condensed, there will be a mean field shift in the frequency of the oscillations in conversion efficiency. We have analyzed this effect for the case of a pure, homogeneous condensate. For our studies of condensed gases we use the cumulant approach~\cites{koehler03pra, koehler03prl, koehler02pra}. In this approach, the atomic mean field $\Psi(t)$ is given by a non-Markovian, nonlinear Schr\"odinger equation~\cite{koehler03pra}: \begin{align} i\hbar \frac{\partial}{\partial t}\Psi(t) = H_\p{1B}\Psi(t) - \Psi^{*}(t) \int_{0}^\infty d\tau \, \Psi^2(\tau) \frac{\partial}{\partial \tau} h(t, \tau) \, . \label{nlse} \end{align} The coupling function $h(t, \tau)$ contains the exact two-body dynamics, and is given by \begin{align} h(t, \tau) = (2\pi\hbar)^3\big<0|V(t)U_{\p{2B}}(t,\tau)|0\big>\theta(t-\tau) \, . 
\label{httau} \end{align} Here, the evolution operator $U_\p{2B}(t, \tau)$ is defined in \refeq{eq:u2b}, $|0\big>$ is the plane wave of zero momentum, $V(t)$ is the diatomic potential, and \mbox{$\theta(t - \tau)$} is the step function, yielding 1 for \mbox{$t > \tau$} and 0 otherwise. The molecular conversion is given by a molecular mean field, \begin{align} \Psi_\p{b}(t) = -\frac{1}{\sqrt{2}}\int_{0}^\infty d\tau \, \Psi^2(\tau)\frac{\partial}{\partial \tau} h_\p{b}(t, \tau) \, , \label{psib} \end{align} where the bound state coupling function is given by \begin{align} h_\p{b}(t, \tau) = (2\pi\hbar)^{3/2}\big<\phi_\p{b}|U_{\p{2B}}(t,\tau)|0\big>\theta(t-\tau) \, . \label{hbttau} \end{align} The coupling functions of \mbox{Eqs.~\eqref{httau} and~\eqref{hbttau}} have been determined from the single-channel approach of Ref.~\cite{koehler03pra}. The magnetic fields used in this calculation are within the range for which single-channel approaches have been shown to be valid for the 155\,G resonance of $^{85}$Rb~\cites{koehler04}. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{condensate_nopsfrag.eps} \caption{(Color online) Frequency of the oscillations in conversion efficiency at short times for a pure $^{85}$Rb\: condensate as a function of the initial density. The mean-field shift lowers the frequency of the oscillations in conversion efficiency from that given by the two-body approach, which is recovered in the limit of low density. The oscillations are, however, much clearer than those in a thermal gas due to the suppression of the contributions of different continuum states to the molecular production. The inset shows the variation in time of the conversion efficiency for densities of $10^{10}$ ($\times 50$ for clarity), $4\times10^{11}$ and \mbox{$8 \times 10^{11}$\,cm$^{-3}$}. 
The \mbox{0.5\,ms} ramp from \mbox{B = 157.45\,G} to \mbox{$B_\p{av} = 156.45$\,G}, not shown here, gives a density-dependent initial phase to the oscillations. Here \mbox{$B_\p{mod} = 0.065$\,G}, \mbox{$E_\p{b}^\p{av}/h = -5.86$\,kHz} and \mbox{$\omega_\p{mod}/2\pi = 7$\,kHz}. Because a single-channel approach is used in the condensed gas case, the bound state energy is slightly different to that given by the two-channel, thermal gas calculations above.} \label{fig:condensate} \end{figure} The oscillation frequency of the conversion efficiency at short times, $f_\p{conv}$, has a mean-field shift, as shown in \reffig{fig:condensate}. In the low-density limit, the value of $f_\p{conv}$ expected from the two-body picture is recovered. The oscillations in conversion efficiency are clearer and have weaker damping than those in thermal gases, as shown in the inset of \reffig{fig:condensate}. This is due to suppression of the dephasing between the transition amplitudes from different continuum states. The main cause of damping in this case is the decay of the condensate and molecular populations into the continuum. In these calculations we have included a \mbox{0.5\,ms} ramp from \mbox{$B_\p{av} + 1$\,G} to $B_\p{av}$, in analogy to the ramp shown in \reffig{fig:pulse}. Neglecting the ramp and simulating only the pulse corresponds to instantly turning on the interactions at the beginning of the pulse. For the parameters used here, this results in strong excitation of higher modes. The ramp reduces but does not completely eliminate the excitations, which are visible in the inset of \reffig{fig:condensate} as high frequency oscillations whose amplitude increases with condensate density. We have extracted the frequency of the damped oscillations using the fit procedure of Claussen \textit{et al.}~\cite{claussen03}, which includes exponential damping of the oscillations and a linear decay. 
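As an illustration of this kind of frequency extraction, the sketch below fits a synthetic conversion-efficiency trace with an exponentially damped oscillation plus a linear decay, in the spirit of the fit procedure of Claussen \textit{et al.}~\cite{claussen03}. All numerical values are illustrative only, and scipy is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, amp, tau, f, phi, offset, slope):
    """Exponentially damped oscillation on top of a linear decay."""
    return amp * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi) + offset + slope * t

# Synthetic trace with a known oscillation frequency of 900 Hz.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4e-3, 400)                      # s
y = model(t, 0.05, 1.5e-3, 900.0, 0.3, 0.10, -5.0)
y += rng.normal(0.0, 1e-4, t.size)                   # small measurement noise

p0 = [0.04, 1.0e-3, 850.0, 0.0, 0.10, 0.0]           # rough initial guess
popt, _ = curve_fit(model, t, y, p0=p0)
print(f"fitted f_conv = {popt[2]:.1f} Hz")
```

A reasonable initial guess for the frequency is needed for the local optimizer to converge; in practice this would be read off from the raw trace.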
Strong decay of the condensate into the continuum at higher densities makes the fit less reliable and meaningful, and we have therefore limited the analysis using this technique to densities below \mbox{$10^{12}\,$cm$^{-3}$}. \subsection{Modulation frequency} \label{sec:freq} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{freq.eps} \caption{(Color online) Resonance curve of conversion efficiency vs modulation frequency for different pulse durations, for a thermal $^{85}$Rb\: gas. Here \mbox{$B_{\p{av}} = 156.41$\,G}, \mbox{$E_\p{b}^\p{av}/h = -5.39$\,kHz}, \mbox{$B_{\p{mod}}= 0.065$\,G}, \mbox{$T = 20$\,nK} and \mbox{$n = 10^{11}$\,cm$^{-3}$}. Inset: Conversion efficiency after a \mbox{6\,ms} pulse with \mbox{$T = 20$\,nK} and \mbox{$T = 80$\,nK,} showing its weaker dependence on modulation frequency at higher temperatures. The dashed curves are thermal averages of the perturbative estimate of \refeq{eq:T1} for \mbox{6\,ms} pulses.} \label{fig:freq} \end{figure} Resonant behaviour was observed in Ref.~\cite{thompson05} in the strong variation of the conversion efficiency with modulation frequency, which is reproduced by our calculations. The conversion efficiency from a thermal gas due to a pulse of fixed duration and varying frequency is shown in \reffig{fig:freq}. For the resonance curve representing \mbox{6\,ms} pulses, the full-width at half-maximum is \mbox{0.75\,kHz}. From the Lorentzian fit to the \mbox{6\,ms} pulse in \mbox{Fig. 1} of Ref.~\cite{thompson05} we extract \mbox{0.9\,kHz}. At longer times many-body effects may lead to the production of molecules by thermalisation. This could lead, for example, to the production of molecules for modulation frequencies smaller than $-E_\p{b}^\p{av}/h$, and so increase the width of the resonance curve. The inset of \reffig{fig:freq} shows estimates of the conversion efficiency using the perturbative estimate of the transition amplitude in \refeq{eq:T1}. 
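The temperature dependence of the resonance curve can be caricatured by taking the conversion to be simply proportional to the thermal population at the resonant continuum energy $E_\p{res} = \hbar\omega_\p{mod} + E_\p{b}^\p{av}$. This toy model ignores the pulse dynamics entirely, so widths and peak positions are only qualitative:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)
EB = -H * 5.39e3     # E_b^av / h = -5.39 kHz, as in the thermal-gas calculations

def response(f_mod_hz, temp_k):
    """Toy resonance curve: normalized Boltzmann weight at E_res = h*f_mod + E_b."""
    e_res = np.clip(H * f_mod_hz + EB, 0.0, None)    # no continuum below threshold
    kt = KB * temp_k
    return np.sqrt(e_res) * np.exp(-e_res / kt) / kt**1.5

f = np.linspace(5.4e3, 12e3, 2000)                   # modulation frequency (Hz)
r20, r80 = response(f, 20e-9), response(f, 80e-9)
print(f"toy optimum: {f[np.argmax(r20)]/1e3:.2f} kHz at 20 nK, "
      f"{f[np.argmax(r80)]/1e3:.2f} kHz at 80 nK")
```

Even this crude picture shows the optimum shifting to higher frequency and the curve broadening and flattening as the temperature rises.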
The agreement with the numerical result is significantly better than that of the transition probability density, shown in \reffig{fig:mat_res}, due to the effect of thermally averaging over all of the continuum states. The maximum of a thermal distribution is at a higher energy in a warmer gas, and so the optimal modulation frequency increases with temperature. However, the dependence of the conversion efficiency on modulation frequency weakens at higher temperatures, as shown in the inset of \reffig{fig:freq}. This is caused by the changes in the thermal distribution of the gas, which has a decreasing maximum and an increasing width as the temperature rises. The decreasing maximum of the distribution means that less is gained by optimising the modulation frequency $\omega_\p{mod}/2\pi$. Conversely, the increasing width means that for a wider range of $\omega_\p{mod}$ there is a significant population of atoms close to the resonant continuum energy $p_\p{res}^2/m = E_\p{b}^\p{av} + \hbar \omega_\p{mod}$. In general, the stronger resonant behaviour in colder gases allows more efficient conversion. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{csfreq.eps} \caption{(Color online) Conversion efficiency vs modulation frequency for a thermal $^{133}$Cs gas, from a 6\,ms pulse. The gas density and modulation amplitude are identical to the $^{85}$Rb\: curve shown in \reffig{fig:freq}. Here \mbox{$T = 20$\,nK}, \mbox{$n = 1 \times 10^{11}\,$cm$^{-3}$}, \mbox{$B_\p{av} = 21.37$\,G}, \mbox{$B_\p{mod} = 0.065$\,G}, and \mbox{$E_\p{b}^\p{av}/h = -1$\,MHz}.} \label{fig:csfreq} \end{figure} We have studied the conversion efficiency vs frequency for a molecular binding energy of $h\times100$\,kHz, and found that it leads to a resonance curve of similar width and maximum to that for the binding energies examined above. $^{85}$Rb$_2$, though, is unstable with respect to inelastic spin relaxation~\cites{thompson05b, koehler05}.
We have therefore also performed the calculation for $^{133}$Cs atoms in the \mbox{($F = 3$, $m_\p{F} = 3$)} Zeeman ground state for a molecular bound state energy of \mbox{$-h \times 1$\,MHz}. A similar resonance curve is obtained, as shown in \reffig{fig:csfreq}. It is broader and has a lower maximum than the comparable calculations for 6\,ms pulses in $^{85}$Rb, which had an identical modulation amplitude and gas density. Despite the deeper binding energy, the conversion efficiency grows at a similar rate. The evolution of the transition probability density for a continuum state depends primarily upon its detuning from the resonant continuum energy. Consequently, it is primarily the width of the thermal distribution, rather than the molecular bound state energy, that determines the order of magnitude of the pulse duration necessary for association. Our calculations indicate that resonant association can be efficient for binding energies ranging from $h \times 5$\,kHz to $h \times 1$\,MHz. It is necessary, however, that the chosen molecular binding energy be sensitive to variations in the magnetic field. If this is not the case, the magnetic field modulation has little or no effect on the diatomic level spectrum, and so significant transitions between the continuum states and the molecular bound state do not occur. Such a weak dependence on the magnetic field can occur due to an avoided crossing with another bound state, as occurs for $^{133}$Cs$_2$ at some binding energies~\cite{chin04}. In some cases, it may be possible to compensate for this by using a larger modulation amplitude. \subsection{Modulation amplitude} \label{sec:bosc} \begin{figure}[htbp] \centering \psfrag{LAB}{(a)} \includegraphics[width=\columnwidth, clip]{matrixbosc.eps} \psfrag{LAC}{(b)} \includegraphics[width=\columnwidth, clip]{continuum.eps} \caption{(Color online) The transition probability density \mbox{$\rho(p,t)$}\: in a $^{85}$Rb\: gas for different values of $B_{\p{mod}}$. 
Here \mbox{$B_{\p{av}} = 156.352$\,G}, \mbox{$\omega_{\p{mod}}/2\pi = 6.5$\,kHz}, and \mbox{$E_{\p{b}}^{\p{av}}/h = -4.88$\,kHz}. \mbox{(a) The} evolution of \mbox{$\rho(p,t)$}\: for the continuum state of energy satisfying the resonance condition of \refeq{eq:res}. Quadratic growth ceases to be observed when the modulation amplitude becomes too great. (b) The transition probability density distribution \mbox{$\rho(p, 1\,\p{ms})$} for $B_\p{mod} = $ 0.15, 0.3, 0.8 and 1.5\,G, as indicated in the contour plot shown in the inset. For each \mbox{$B_{\p{mod}}<0.9$\,G}, the peak in energy of \mbox{$\rho(p, t)$} grows resonantly on a timescale of \mbox{1\,ms}. The continuum energy of the maximal \mbox{$\rho(p, 1\,\p{ms})$} is negatively shifted from $p_{\p{res}}^{2}/m$ with increasing $B_{\p{mod}}$. Two bands of revival in \mbox{$\rho(p,t)$}\: can be seen in the inset, as well as in the distributions of \mbox{$\rho(p,t)$}\: for $B_\p{mod} = 0.8$ and 1.5\,G.} \label{fig:mat_bosc} \end{figure} In the experiments of Thompson \textit{et al.}, increasing the modulation amplitude $B_{\p{mod}}$ with a fixed frequency and pulse duration gave a point of maximum conversion, and after reaching a minimum a partial revival was observed~\cite{thompsonpc}. Examining the transition probability density of \refeq{eq:rho} for different $B_\p{mod}$ shows that as $B_{\p{mod}}$ is increased, the resonant growth of $\rho(p_{\p{res}},t)$ is at first amplified, as shown in \reffig{fig:mat_bosc}a. The faster resonant growth is also reflected in the proportionality of the analytic estimate of $T(p, t)$ in \refeq{eq:T1} to $B_{\p{mod}}$. For \mbox{$B_{\p{mod}} = 0.35$\,G}, there is no longer resonant growth in \mbox{$\rho(p_{\p{res}},t)$} over a \mbox{1\,ms} pulse duration, although this modulation amplitude does maximize \mbox{$\rho(p_{\p{res}}, 1\,\p{ms})$}. 
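For the single resonantly coupled state at short times, a two-level caricature makes this saturation plausible: take $\rho = \sin^2(\Omega t/2)$ with $\Omega \propto B_\p{mod}$, so the perturbative, quadratic growth $(\Omega t/2)^2$ holds only while $\Omega t \ll 1$. This ignores the continuum structure emphasized above, and the coupling constant in the sketch is purely illustrative, not a value from the coupled-channels model:

```python
import numpy as np

K = 2 * np.pi * 500.0   # illustrative coupling (rad/s per gauss), assumed value

def rho_two_level(b_mod, t):
    """On-resonance two-level population: sin^2(Omega t / 2), Omega = K * b_mod."""
    return np.sin(K * b_mod * t / 2.0) ** 2

def rho_perturbative(b_mod, t):
    """Lowest-order (quadratic) estimate, valid only while Omega * t << 1."""
    return (K * b_mod * t / 2.0) ** 2

t = 1e-3   # pulse duration (s)
for b in (0.05, 0.35, 1.0):
    print(f"B_mod = {b:4.2f} G: exact {rho_two_level(b, t):.3f}, "
          f"perturbative {rho_perturbative(b, t):.3f}")
```

At small amplitude the two estimates agree and the growth is quadratic in both $B_\p{mod}$ and $t$; at large amplitude the perturbative estimate exceeds unity and the true population saturates.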
The changing amplitude and position of the maximum as $B_\p{mod}$ varies alters the quality of the fit to the thermal distribution, and consequently the conversion efficiency. For \mbox{$B_{\p{mod}}<0.9$\,G}, resonant growth is still observed; however, the continuum energy of the resonantly growing state is negatively shifted from that predicted by \refeq{eq:res}. As shown in \reffig{fig:mat_bosc}b, \mbox{$\rho(p, 1\,\p{ms})$} has a peak in momentum which, as $B_{\p{mod}}$ is increased, at first grows in amplitude and retains its width and position, before being shifted towards \mbox{$p = 0$}. Fully destructive interference for continuum energies up to a few kHz occurs when \mbox{$B_{\p{mod}} \approx 1.0$\,G} and so a minimum in conversion efficiency is produced, as reflected in \reffig{fig:bosc}. Beyond this modulation amplitude, there is no continuum energy for which quadratic growth of $\rho(p,t)$ is observed. \mbox{Figure~\ref{fig:mat_bosc}b} also shows two bands of constructive interference in $\rho(p,t)$. These bands have a peak energy which is also dependent on $B_\p{mod}$. Consequently, at the modulation amplitudes for which these peaks coincide with the thermal distribution, a revival of the conversion efficiency occurs. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth, clip]{bosc_nopsfrag.eps} \caption{(Color online) Conversion from a thermal $^{85}$Rb\: gas as a function of modulation amplitude for \mbox{$B_{\p{av}} = 156.352$\,G,} \mbox{$n = 10^{11}$\,cm$^{-3}$}, \mbox{$T = 20$\,nK} and \mbox{$\omega_\p{mod}/2\pi =$\:}4.9, 5.5 and 6.5\,kHz. The solid line \mbox{($\omega_{\p{mod}}/2\pi = 6.5$\,kHz)} is a thermal average of the data shown in \reffig{fig:mat_bosc}. The variation in conversion efficiency with $B_\p{mod}$ is caused by the changes in the distribution of \mbox{$\rho(p,t)$}\: shown in \reffig{fig:mat_bosc}b. 
The revivals are caused by the regions of constructive interference, shown in \reffig{fig:mat_bosc}b, coinciding with the thermal distribution. } \label{fig:bosc} \end{figure} The maximum, minimum and revival in conversion efficiency are shown for three different modulation frequencies in Fig.~\ref{fig:bosc}. The absolute conversion efficiency at the maximum is strongly temperature dependent, as shown for $B_\p{mod} = 0.065$\,G in \reffig{fig:freq}; however, the modulation amplitude giving the maximum conversion has a weak variation with temperature above \mbox{20\,nK}. The continuum energy of the resonantly coupled state varies with the modulation amplitude, and so both frequency and amplitude should be matched to the temperature of the gas for maximum conversion. Of the plots shown in \reffig{fig:bosc}, for example, the best conversion is achieved for \mbox{$B_{\p{mod}} = 0.7$\,G} and \mbox{$\omega_{\p{mod}}/2\pi = 6.5 $\,kHz}. The revival in conversion efficiency for larger modulation amplitudes occurs in a narrower peak and is due to the constructive interference shown in \reffig{fig:mat_bosc}b. We note that for \mbox{$B_{\p{mod}} > 1.352$\,G} the resonance position $B_0$ is being crossed during the pulse. \section{Conclusions} \label{sec:conclusion} Resonant association has been experimentally shown to be an effective technique for producing molecules~\cites{thompson05,papp06}. Here we have studied the dependence of the conversion efficiency on the duration, frequency and amplitude of the pulse, and the density and temperature of the gas. We have shown that for a homogeneous gas, the continuum shapes the dynamics of the association in such a way that it is unlike a two-level system, in contrast to the case of resonant association in strongly confining optical lattices.
The presence of other continuum states around the state resonantly coupled to the Fesh\-bach molecule makes it necessary to optimise the pulse parameters for the gas in question. Maximum conversion requires the amplitude and frequency of the modulation to be optimised together for the density and temperature of the gas. Colder gases have narrower thermal distributions and so display stronger resonant behaviour. The width of the thermal distribution also leads to the dephasing of the oscillations in conversion efficiency observed at short times in Ref.~\cite{thompson05}. An increase in temperature causes a positive shift in the optimal frequency for association, but also lowers the maximum possible conversion efficiency. The amplitude of the modulation and mean-field shifts lead to the resonant coupling of continuum states of different energy, and thus also affect the conversion efficiency. A higher modulation amplitude causes a less energetic continuum state to be resonantly coupled. Beyond a certain amplitude, no resonant growth in transition probability density occurs; however, for the parameters of Ref.~\cite{thompson05} a revival in conversion efficiency is seen due to a region of constructive interference between the different continuum states. A weak dependence of the molecular binding energy on magnetic field limits the effectiveness of resonant association, although this can sometimes be compensated for by an increase in the modulation amplitude. The evolution of the transition probability density from a state is primarily determined by its detuning from the resonant continuum energy. Consequently, the pulse duration necessary for association does not vary significantly with the molecular binding energy. We have performed calculations for molecular binding energies ranging from $h\,\times\,$5\,kHz to $h\,\times\,$1\,MHz, and predict that resonant association can be effective over this range.
\section{Acknowledgments} This research has been supported by the General Sir John Monash Foundation and Universities UK (T.M.H.), and the Royal Society (K.B. and T.K.). We are grateful to Sarah Thompson and Krzysztof G\'oral for interesting discussions.
\section*{{\normalsize \bf 1. Introduction}} In \cite{Jech2} Jech introduced the cut-and-choose game ${\cal G}_{\rm{c\&c}}$, played by two players, White and Black, in $\omega$-many moves on a complete Boolean algebra ${\Bbb{B}}$ in the following way. At the beginning, White picks a non-zero element $p\in{\Bbb{B}}$ and, in the $n$-th move, White picks a non-zero element $p_n<p$ and Black chooses an $i_n\in\{0,1\}$. In this way two players build a sequence $\langle p,p_0,i_0,p_1,i_1,\dots\rangle$ and White wins iff $\bigwedge_{n\in\omega }p_n^{i_n}=0$ (see Definition \ref{D900}). A winning strategy for a player, for example White, is a function which, on the basis of the previous moves of both players, provides ``good" moves for White such that White always wins. So, for a complete Boolean algebra ${\Bbb{B}}$ there are three possibilities: 1) White has a winning strategy; 2) Black has a winning strategy or 3) none of the players has a winning strategy. In the third case the game is said to be undetermined on ${\Bbb{B}}$. The game-theoretic properties of Boolean algebras have interesting algebraic and forcing translations. For example, according to \cite{Jech2} and well-known facts concerning infinite distributive laws we have the following results. \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T914} (Jech) For a complete Boolean algebra ${\Bbb{B}}$ the following conditions are equivalent: (a) White has a winning strategy in the game ${\cal G}_{\rm{c\&c}}$; (b) The algebra ${\Bbb{B}}$ does not satisfy the $(\omega ,2)$-distributive law; (c) Forcing by ${\Bbb{B}}$ produces new reals in some generic extension; (d) There is a countable family of 2-partitions of the unity having no common refinement. \end{te} Also, Jech investigated the existence of a winning strategy for Black and using $\diamondsuit $ constructed a Suslin algebra in which the game ${\cal G}_{\rm{c\&c}}$ is undetermined. 
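A winning strategy is easy to exhibit on simple algebras. On a complete atomic algebra such as a finite power set (which, being atomic, is $(\omega,2)$-distributive, so by Theorem \ref{T914} White has no winning strategy), Black wins ${\cal G}_{\rm{c\&c}}$ by fixing an atom below White's $p$ and always keeping the piece containing it, so every finite meet, and hence the infinite meet, stays above that atom. A minimal sketch, with subsets of a 16-element set encoded as bitmasks (an illustration only, since a computer run can of course only check finitely many rounds of the $\omega$-length game):

```python
import random

N = 16
FULL = (1 << N) - 1          # the unit of the power-set algebra P({0,...,15})

def black_strategy(p):
    """Fix an atom (a single bit) below p; always keep the piece containing it."""
    atom = 1 << (p.bit_length() - 1)
    def choose(p_n):
        # Keep p_n if it contains the atom, otherwise its complement p \ p_n.
        return p_n if (p_n & atom) else (p & ~p_n)
    return atom, choose

random.seed(1)
p = FULL                     # White's opening move
atom, choose = black_strategy(p)
meet = p
for _ in range(100):         # White cuts p at random; Black follows the strategy
    p_n = random.randrange(1, p)          # a nonzero proper piece of p
    meet &= choose(p_n)
    assert meet & atom       # the atom survives, so the running meet is nonzero
print(f"meet after 100 rounds is nonzero: {meet:#06x}")
```

The invariant "the chosen atom lies below the running meet" is exactly why Black wins on any atomic algebra.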
Moreover, in \cite{Zapl1} Zapletal gave a ZFC example of a complete Boolean algebra in which the game ${\cal G}_{\rm{c\&c}}$ is undetermined. Several generalizations of the game ${\cal G}_{\rm{c\&c}}$ were considered. Firstly, instead of cutting $p$ into two pieces, White can cut into $\lambda $ pieces and Black can choose more than one piece (see \cite{Jech2}). Secondly, the game can be of uncountable length, so Dobrinen in \cite{Dobr1} and \cite{Dobr2} investigated the game ${\cal G} _{<\mu}^{\kappa }(\lambda )$ played in $\kappa $-many steps in which White cuts into $\lambda $ pieces and Black chooses fewer than $\mu$ of them. In this paper we consider three games ${\cal G}_2 , {\cal G}_3$ and ${\cal G}_4$ obtained from the game ${\cal G}_{\rm{c\&c}}$ (here denoted by ${\cal G}_1$) by changing the winning criterion in the following way. \begin{df} \hspace{-2mm}{\bf .}\hspace{2mm} \rm \label{D900} The games ${\cal G}_k $, $k\in \{ 1,2,3,4\} $, are played by two players, White and Black, on a complete Boolean algebra ${\Bbb{B}}$ in $\omega $-many moves. At the beginning White chooses a non-zero element $p\in{\Bbb{B}}$. In the $n$-th move White chooses a $p_n\in(0,p)_{{\Bbb{B}}}$ and Black responds by choosing $p_n$ or $p\setminus p_n$ or, equivalently, picks an $i_n\in\{0,1\}$ and chooses $p_n^{i_n}$, where, by definition, $p_n^0=p_n$ and $p_n^1=p\setminus p_n$. White wins the play $\langle p,p_0,i_0,p_1,i_1,\dots\rangle$ in the game \vspace{2mm} ${\cal G}_1 $ if and only if $\bigwedge_{n\in\omega } p_n^{i_n}=0$; ${\cal G}_2 $ if and only if $\bigvee_{k\in\omega }\bigwedge_{n\geq k}p_n^{i_n}=0$, that is $\liminf p_n^{i_n}=0$; ${\cal G}_3 $ if and only if $\bigvee_{A\in [\omega ]^{\omega }}\bigwedge_{n\in A}p_n^{i_n}=0$; ${\cal G}_4 $ if and only if $\bigwedge_{k\in\omega }\bigvee_{n\geq k}p_n^{i_n}=0$, that is $\limsup p_n^{i_n}=0$. \end{df} In the following theorem we list some results concerning the game ${\cal G}_4$ which are contained in \cite{KuSo}.
\begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T910} (a) White has a winning strategy in the game ${\cal G}_4 $ played on a complete Boolean algebra ${\Bbb{B}}$ iff forcing by ${\Bbb{B}}$ collapses $\goth c$ to $\omega$ in some generic extension. (b) If ${\Bbb{B}}$ is the Cohen algebra $\rm{r.o.}({}^{<\omega}2,\supseteq)$ or a Maharam algebra (i.e. carries a positive Maharam submeasure) then Black has a winning strategy in the game ${\cal G}_4 $ played on ${\Bbb{B}}$. (c) $\diamondsuit$ implies the existence of a Suslin algebra on which the game ${\cal G}_4 $ is undetermined. \end{te} The aim of the paper is to investigate the game-theoretic properties of complete Boolean algebras related to the games ${\cal G}_2$ and ${\cal G}_3$. So, Section 2 contains some technical results, in Section 3 we consider the game ${\cal G}_2$, Section 4 is devoted to the game ${\cal G}_3$ and Section 5 to the algebras on which these games are undetermined. Our notation is standard and follows \cite{Jech3}. A subset of $\omega$ belonging to a generic extension will be called supported iff it contains an infinite subset of $\omega$ belonging to the ground model. In particular, finite subsets of $\omega$ are unsupported. \section*{{\normalsize \bf 2. Winning a play, winning all plays}} Using the elementary properties of Boolean values and forcing it is easy to prove the following two statements. \begin{lem}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T905} Let ${\Bbb{B}}$ be a complete Boolean algebra, $\langle b_n:n\in\omega\rangle$ a sequence in ${\Bbb{B}}$ and $\sigma=\{\langle\check n,b_n\rangle:n\in\omega\}$ the corresponding name for a subset of $\omega$. Then (a) $\bigwedge_{n\in\omega}b_n=\|\sigma=\check\omega\|$; (b) $\liminf b_n=\|\sigma\mbox{ is cofinite}\|$; (c) $\bigvee_{A\in[\omega]^{\omega}}\bigwedge_{n\in A}b_n=\|\sigma\mbox{ is supported}\|$; (d) $\limsup b_n=\|\sigma\mbox{ is infinite}\|$. 
\end{lem} \begin{lem}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T906} Let ${\Bbb{B}}$ be a complete Boolean algebra, $p\in{\Bbb{B}}^+$, $\langle p_n:n\in\omega\rangle$ a sequence in $(0,p)_{{\Bbb{B}}}$ and $\langle i_n:n\in\omega\rangle\in{}^{\omega}2$. For $k\in\{0,1\}$ let $S_k=\{n\in\omega:i_n=k\}$ and let the names $\tau$ and $\sigma$ be defined by $\tau=\{\langle\check n,p_n\rangle:n\in\omega\}$ and $\sigma=\{\langle\check n,p_n^{i_n}\rangle:n\in\omega\}$. Then (a) $p'\Vdash\tau=\sigma= \check\emptyset$; (b) $p\Vdash\tau=\sigma\triangle\check S_1$; (c) $p\Vdash\sigma=\tau\triangle\check S_1$; (d) $p\Vdash\sigma=\check\omega\Leftrightarrow\tau=\check S_0$; (e) $p\Vdash\sigma=^*\check\omega\Leftrightarrow\tau=^*\check S_0$; (f) $p\Vdash |\sigma|<\check\omega\Leftrightarrow\tau=^*\check S_1$. \end{lem} \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T907} Under the assumptions of Lemma \ref{T906}, White wins the play $\langle p,p_0,i_0,$ $p_1,i_1,\dots\rangle$ in the game ${\cal G}_1$ iff $\| \sigma\mbox{ is not equal to }\check\omega \| =1$ iff $p\Vdash\tau\neq \check S_0$; ${\cal G}_2$ iff $\| \sigma \mbox{ is not cofinite}\| =1$ iff $p\Vdash\tau\neq^* \check S_0$; ${\cal G}_3$ iff $\| \sigma \mbox{ is not supported}\| =1$ iff $p\Vdash``\tau\cap \check S_0 \mbox{ and }\check S_1\setminus\tau$ are unsupported"; ${\cal G}_4$ iff $\| \sigma \mbox{ is not infinite}\| =1$ iff $p\Vdash\tau=^* \check S_1$. \end{te} \noindent{\bf Proof. } We will prove the statement concerning the game ${\cal G}_3$ and leave the rest to the reader. So, White wins ${\cal G}_3$ iff $\bigvee_{A\in[\omega]^{\omega}}\bigwedge_{n\in A}p_n^{i_n}=0$, that is, by Lemma \ref{T905}, $\|\sigma\mbox{ is not supported}\|=1$ and the first equivalence is proved. Let $1\Vdash``\sigma$ is not supported" and let $G$ be a ${\Bbb{B}}$-generic filter over $V$ containing $p$. Suppose $\tau_G\cap S_0$ or $S_1\setminus\tau_G$ contains a subset $A\in[\omega]^{\omega}\cap V$. 
Then $A\subseteq\sigma_G$, which is impossible. On the other hand, let $p\Vdash``\tau\cap\check S_0\mbox{ and }\check S_1\setminus\tau$ are unsupported" and let $G$ be a ${\Bbb{B}}$-generic filter over $V$. If $p'\in G$ then, by Lemma \ref{T906}(a), $\sigma_G=\emptyset$ so $\sigma_G$ is unsupported. Otherwise $p\in G$ and by the assumption the sets $\tau_G\cap S_0$ and $S_1\setminus\tau_G$ are unsupported. Suppose $A\subseteq\sigma_G$ for some $A\in[\omega]^{\omega}\cap V$. Then $A=A_0\cup A_1$, where $A_0=A\cap S_0\cap\tau_G$ and $A_1=A\cap S_1\setminus\tau_G$, and at least one of these sets is infinite. But from Lemma \ref{T906}(c) we have $A_0=A\cap S_0$ and $A_1=A\cap S_1$, so $A_0,A_1\in V$. Thus either $S_0\cap\tau_G$ or $S_1\setminus\tau_G$ is a supported subset of $\omega$, which is impossible. So $\sigma_G$ is unsupported and we are done. \hfill $\Box$ \par \vspace*{2mm} In the same way one can prove the following statement concerning Black. \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T912} Under the assumptions of Lemma \ref{T906}, Black wins the play $\langle p,p_0,i_0,$ $p_1,i_1,\dots\rangle$ in the game ${\cal G}_1$ iff $\| \sigma\mbox{ is equal to }\check\omega \| >0$ iff $\exists q \leq p \;\; q \Vdash\tau = \check S_0$; ${\cal G}_2$ iff $\| \sigma$ is cofinite $\| >0$ iff $\exists q \leq p \;\; q\Vdash\tau = ^* \check S_0$; ${\cal G}_3$ iff $\| \sigma$ is supported $\| >0$ iff $\exists q \leq p \;\; q\Vdash ``\tau\cap \check S_0 \mbox{ or } \check S_1\setminus\tau$ is supported"; ${\cal G}_4$ iff $\| \sigma$ is infinite $\| >0$ iff $\exists q \leq p \;\; q \Vdash \tau \neq ^* \check S_1$. \end{te} \noindent Since for each sequence $\langle b_n \rangle$ in a c.B.a. 
${\Bbb{B}}$ \begin{equation}\label{EQ900} \textstyle \bigwedge_{n\in\omega } b_n \leq \liminf b_n \leq \bigvee_{A\in [\omega ]^{\omega }}\bigwedge_{n\in A}b_n \leq \limsup b_n , \end{equation} we have \begin{pro} \hspace{-2mm}{\bf .}\hspace{2mm} \rm \label{T901} Let ${\Bbb{B}}$ be a complete Boolean algebra. Then (a) White has a w.s.\ in ${\cal G}_4$ $\Rightarrow$ White has a w.s.\ in ${\cal G}_3$ $\Rightarrow$ White has a w.s.\ in ${\cal G}_2$ $\Rightarrow$ White has a w.s.\ in ${\cal G}_1$. (b) Black has a w.s.\ in ${\cal G}_1$ $\Rightarrow$ Black has a w.s.\ in ${\cal G}_2$ $\Rightarrow$ Black has a w.s.\ in ${\cal G}_3$ $\Rightarrow$ Black has a w.s.\ in ${\cal G}_4$. \end{pro} \section*{{\normalsize \bf 3. The game ${\cal G}_2$}} \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T909} For each complete Boolean algebra ${\Bbb{B}}$ the following conditions are equivalent: (a) ${\Bbb{B}}$ is not $(\omega,2)$-distributive; (b) White has a winning strategy in the game ${\cal G}_1 $; (c) White has a winning strategy in the game ${\cal G}_2 $. \end{te} \noindent{\bf Proof. } (a)$\Leftrightarrow$(b) is proved in \cite{Jech2} and (c)$\Rightarrow$(b) holds by Proposition \ref{T901}. In order to prove (a)$\Rightarrow$(c) we suppose ${\Bbb{B}}$ is not $(\omega,2)$-distributive. Then $p:=\|\exists x\subseteq\check\omega\;\;x\notin V\|>0$ and by The Maximum Principle there is a name $\pi\in V^{{\Bbb{B}}}$ such that \begin{equation}\label{EQ911} p\Vdash\pi\subseteq\check\omega\;\land\;\pi\notin V. \end{equation} Clearly $\omega=A_0\cup A\cup A_p$, where $A_0=\{n\in\omega:\|\check n\in\pi\|\wedge p=0\}$, $A=\{n\in\omega:\|\check n\in\pi\|\wedge p\in(0,p)_{{\Bbb{B}}}\}$ and $A_p=\{n\in\omega:\|\check n\in\pi\|\wedge p=p\}$. We also have $A_0,A,A_p\in V$ and \begin{equation}\label{EQ912} p\Vdash\pi=(\pi\cap\check A)\cup\check A_p. 
\end{equation} Let $f:\omega\rightarrow A$ be a bijection belonging to $V$ and $\tau=\{\langle\check n,\|f(n)\check{\enskip}\in\pi\|\land p\rangle:n\in\omega\}$. We prove \begin{equation}\label{EQ913} p\Vdash f[\tau]=\pi\cap\check A. \end{equation} Let $G$ be a ${\Bbb{B}}$-generic filter over $V$ containing $p$. If $n\in f[\tau_G]$ then $n=f(m)$ for some $m\in\tau_G$, so $\|f(m)\check{\enskip}\in\pi\|\wedge p\in G$ which implies $\|f(m)\check{\enskip}\in\pi\|\in G$ and consequently $n\in\pi_G$. Clearly $n\in A$. Conversely, if $n\in\pi_G\cap A$, since $f$ is a surjection there is $m\in\omega$ such that $n=f(m)$. Thus $f(m)\in\pi_G$ which implies $\|f(m)\check{\enskip}\in\pi\|\wedge p\in G$ and hence $m\in\tau_G$ and $n\in f[\tau_G]$. According to (\ref{EQ911}), (\ref{EQ912}) and (\ref{EQ913}) we have $p\Vdash\pi=f[\tau]\cup \check A_p\notin V$ so, since $A_p\in V$, we have $p\Vdash f[\tau]\notin V$ which implies $p\Vdash\tau\notin V$. Let $p_n=\|f(n)\check{\enskip}\in\pi\|\wedge p$, $n\in\omega$. Then, by the construction, $p_n\in(0,p)_{{\Bbb{B}}}$ for all $n\in\omega$. We define a strategy $\Sigma$ for White: at the beginning White plays $p$ and, in the $n$-th move, plays $p_n$. Let us prove $\Sigma$ is a winning strategy for White in the game ${\cal G}_2 $. Let $\langle i_n:n\in\omega\rangle\in{}^{\omega }2$ be an arbitrary play of Black. According to Theorem \ref{T907} we prove $p\Vdash\tau\neq^*\check S_0$. But this follows from $p\Vdash\tau\notin V$ and $S_0\in V$ and we are done. \hfill $\Box$ \par \vspace*{2mm} \section*{{\normalsize \bf 4. The game ${\cal G}_3$}} Firstly we give some characterizations of complete Boolean algebras on which White has a winning strategy in the game ${\cal G}_3$. To make the formulas more readable, we will write $w_\varphi$ for $w(\varphi)$. Also, for $i:\omega \rightarrow 2$ we will denote $g^i=\{i\upharpoonright n:n\in\omega \}$, the corresponding branch of the tree $^{<\omega }2$. 
\begin{te} \hspace{-2mm}{\bf .}\hspace{2mm} \rm \label{T902} For a complete Boolean algebra ${\Bbb{B}}$ the following conditions are equivalent: (a) White has a winning strategy in the game ${\cal G}_3$ on ${\Bbb{B}}$; (b) There are $p\in{\Bbb{B}}^+$ and $w:{}^{<\omega }2\rightarrow(0,p)_{{\Bbb{B}}}$ such that \begin{equation}\label{EQ901} \textstyle \forall i:\omega \rightarrow 2\; \bigvee_{A\in [\omega ]^{\omega }}\bigwedge_{n\in A}w_{i\upharpoonright n}^{i(n)}=0; \end{equation} (c) There are $p\in{\Bbb{B}}^+$ and $w:{}^{<\omega }2\rightarrow[0,p]_{{\Bbb{B}}}$ such that (\ref{EQ901}) holds. (d) There are $p\in {\Bbb{B}}^+$ and $\rho \in V^{{\Bbb{B}}}$ such that \parbox{11cm} {\begin{eqnarray*} & p\Vdash \rho\subseteq(^{<\omega }2)\check{\enskip}\!\!\!\! & \land\; \forall\varphi\in(^{<\omega }2)\check{\enskip}\; (\varphi ^{\smallfrown } \check{0} \in\rho\;\dot\lor\;\varphi ^{\smallfrown } \check{1}\in\rho) \\ && \land\; \forall i\in((^{\omega }2)^V)\check{\enskip}\; ( \rho\cap\check{g^i} \mbox{ is unsupported}). \end{eqnarray*}}\hfill\parbox{1cm}{\begin{equation}\label{EQ905}\end{equation}} (e) In some generic extension, $V_{{\Bbb{B}}}[G]$, there is a subset $R$ of the tree ${}^{< \omega }2$ containing either $\varphi ^{\smallfrown } 0 $ or $\varphi ^{\smallfrown }1$, for each $\varphi\in {}^{<\omega }2$, and having unsupported intersection with each branch of the tree ${}^{< \omega }2$ belonging to $V$. \end{te} \rm \noindent{\bf Proof. } (a)$\Rightarrow $(c). Let $\Sigma$ be a winning strategy for White. $\Sigma$ is a function adjoining to each sequence of the form $\langle p,p_0,i_0,\dots,p_{n-1},i_{n-1}\rangle$, where $p,p_0,\dots,p_{n-1}\in{\Bbb{B}}^+$ are obtained by $\Sigma$ and $i_0,i_1,\dots,i_{n-1}$ are arbitrary elements of $\{0,1\}$, an element $p_n =\Sigma(\langle p,p_0,i_0,\dots,p_{n-1},i_{n-1}\rangle)$ of $(0,p)_{{\Bbb{B}}}$ such that White playing in accordance with $\Sigma$ always wins. 
In general, $\Sigma$ can be a multi-valued function, offering more ``good'' moves for White, but according to The Axiom of Choice, without loss of generality we suppose $\Sigma$ is a single-valued function, which is sufficient for the following definition of $p$ and $w:{}^{<\omega }2\rightarrow[0,p]_{{\Bbb{B}}}$. At the beginning $\Sigma$ gives $\Sigma(\emptyset)=p\in{\Bbb{B}}^+$ and, in the first move, $\Sigma(\langle p\rangle)\in(0,p)_{{\Bbb{B}}}$. Let $w_{\emptyset}=\Sigma(\langle p\rangle)$. Let $\varphi\in {}^{n+1}2$ and let $w_{\varphi\upharpoonright k}$ be defined for $k\leq n$. Then we define $w_{\varphi}=\Sigma(\langle p,w_{\varphi\upharpoonright 0},\varphi(0),\dots,w_{\varphi\upharpoonright n},\varphi(n)\rangle)$. In order to prove (\ref{EQ901}) we pick an $i:\omega \rightarrow 2$. Using induction it is easy to show that in the match in which Black plays $i(0),i(1),\dots,$ White, following $\Sigma$, plays $p,w_{i\upharpoonright 0},w_{i\upharpoonright 1},\dots$ Thus, since White wins ${\cal G}_3$, we have $\bigvee_{A\in [\omega ]^{\omega }}\bigwedge_{n\in A}w_{i\upharpoonright n}^{i(n)}=0$ and (\ref{EQ901}) is proved. (c)$\Rightarrow $(b). Let $p\in{\Bbb{B}}^+$ and $w:{}^{<\omega }2\rightarrow[0,p]_{{\Bbb{B}}}$ satisfy (\ref{EQ901}). Suppose the set $S=\{\varphi\in {}^{<\omega }2:w_{\varphi}\in\{0,p\}\}$ is dense in the ordering $\langle^{<\omega }2,\supseteq \rangle$. Using recursion we define $\varphi_k\in S$ for $k\in\omega $ as follows. Firstly, we choose $\varphi_0\in S$ arbitrarily. Let $\varphi_k$ be defined and let $i_k\in 2$ satisfy $i_k=0$ iff $w_{\varphi_k}=p$. Then we choose $\varphi_{k+1}\in S$ such that $\varphi_k ^{\smallfrown } i_k\subseteq\varphi_{k+1}$. Clearly the integers $n_k=\mathop{\rm dom}\nolimits(\varphi_k)$, $k\in \omega$, form an increasing sequence, so $i=\bigcup_{k\in\omega }\varphi_k:\omega \rightarrow 2$. Besides, $i\upharpoonright n_k=\varphi_k$ and $i(n_k)=i_k$.
Consequently, for each $k\in\omega $ we have $w_{i\upharpoonright n_k}^{i(n_k)}=w_{\varphi_k}^{i_k}=p$. Now $A_0 = \{ n_k : k\in \omega \} \in [\omega ]^{\omega }$ and $\bigwedge_{n\in A_0}w_{i\upharpoonright n}^{i(n)}= p>0$, which contradicts (\ref{EQ901}). So there is $\psi\in {}^{<\omega }2$ such that $w_{\varphi}\in(0,p)_{{\Bbb{B}}}$ for all $\varphi\supseteq \psi$. Let $m=\mathop{\rm dom}\nolimits(\psi)$ and let $v_{\varphi}$ for $\varphi\in {}^{<\omega }2$ be defined by $$v_{\varphi}=\left\{ \begin{array}{ll} w_{\psi} & \mbox{if }|\varphi| < m,\\ w_{\psi {}^{\smallfrown }(\varphi\upharpoonright(\mathop{\rm dom}\nolimits(\varphi)\setminus m))} & \mbox{if }|\varphi| \geq m. \end{array}\right.$$ Clearly $v: {}^{<\omega }2\rightarrow(0,p)_{{\Bbb{B}}}$ and we prove that $v$ satisfies (\ref{EQ901}). Let $i:\omega \rightarrow 2$ and let $j=\psi ^{\smallfrown }(i\upharpoonright(\omega \setminus m))$. Then for $n\geq m$ we have $v_{i\upharpoonright n}^{i(n)} =w_{\psi ^{\smallfrown } (i\upharpoonright(n\setminus m))}^{i(n)} =w_{j\upharpoonright n}^{j(n)}$. Let $A\in [\omega ]^{\omega }$. Then $A\setminus m \in [\omega ]^{\omega }$ and, since $w$ satisfies (\ref{EQ901}), for the function $j$ defined above we have $\bigwedge_{n\in A\setminus m}w_{j\upharpoonright n}^{j(n)}=0$, that is, $\bigwedge_{n\in A\setminus m}v_{i\upharpoonright n}^{i(n)}=0$, which implies $\bigwedge_{n\in A}v_{i\upharpoonright n}^{i(n)}=0$, and (b) is proved. (b)$\Rightarrow $(a). Assuming (b) we define a strategy $\Sigma$ for White. Firstly White plays $p$ and $p_0=w_{\emptyset}$. In the $n$-th step, if $\varphi=\langle i_0,\dots,i_{n-1}\rangle$ is the sequence of Black's previous moves, White plays $p_n=w_{\varphi}$. We prove that $\Sigma$ is a winning strategy for White. Let $i:\omega \rightarrow 2$ code an arbitrary play of Black.
Since White follows $\Sigma$, in the $n$-th move White plays $p_n=w_{i\upharpoonright n}$, so according to (\ref{EQ901}) we have $\bigvee_{A\in [\omega ]^{\omega }}\bigwedge_{n\in A}p_n^{i_n}=0$ and White wins the game. (b)$\Rightarrow $(d). Let $p\in{\Bbb{B}}^+$ and $w: {}^{<\omega }2\rightarrow(0,p)_{{\Bbb{B}}}$ be the objects provided by (b). Let us define $v_{\emptyset}=p$ and, for $\varphi\in {}^{<\omega }2$ and $k\in 2$, let $v_{\varphi ^{\smallfrown } k}=w_{\varphi}^k$. Then $\rho=\{\langle\check\varphi,v_{\varphi}\rangle:\varphi\in {}^{<\omega }2\}$ is a name for a subset of $^{<\omega }2$. If $i:\omega \rightarrow 2$, then $\sigma^i=\{\langle(i\upharpoonright n)\check{\enskip},v_{i\upharpoonright n}\rangle:n\in\omega \}$ is a name for a subset of $g^i$ and, clearly, \begin{equation}\label{EQ902} 1\Vdash\sigma^i=\rho\cap\check{g^i}. \end{equation} Let us prove \begin{equation}\label{EQ903} \forall i:\omega \rightarrow 2\;\;1\Vdash \rho\cap\check{g^i} \mbox{ is unsupported}. \end{equation} Let $i:\omega \rightarrow 2$. According to the definition of $v$, for $n\in\omega $ we have $w_{i\upharpoonright n}^{i(n)}=v_{i\upharpoonright(n+1)}$ so, by (\ref{EQ901}), $\bigvee_{A\in[\omega ]^{\omega }}\bigwedge_{n\in A}v_{i\upharpoonright(n+1)}=0$. By (\ref{EQ902}) we have $v_{i\upharpoonright(n+1)}=\|(i\upharpoonright(n+1))\check{\enskip}\in\rho\cap\check{g^i}\|$ and we have $\| \exists A \in (([\omega ]^{\omega })^V )\check{\enskip }\; \forall n\in A\; (i\upharpoonright(n+1))\check{\enskip}\in\rho\cap\check{g^i}\|=0$ that is $\| \neg \exists B \in (([{}^{< \omega }2 ]^{\omega })^V )\check{\enskip }\; B\subset \rho\cap\check{g^i}\|=1$ and (\ref{EQ903}) is proved. Now we prove \begin{equation}\label{EQ904} \forall\varphi\in {}^{<\omega }2\;\; p\Vdash\check\varphi ^{\smallfrown }\check 0\in\rho \;\; \dot\lor \;\; \check\varphi ^{\smallfrown }\check 1\in\rho. 
\end{equation} If $p\in G$, where $G$ is a ${\Bbb{B}}$-generic filter over $V$, then clearly $|G\cap\{w_{\varphi},p\setminus w_{\varphi}\}|=1$. But $w_{\varphi}=w_{\varphi}^0=v_{\varphi ^{\smallfrown }0}= \| \check\varphi ^{\smallfrown }\check 0\in\rho\|$ and $p\setminus w_{\varphi}=w_{\varphi}^1=v_{\varphi{}^{\smallfrown }1} =\|\check\varphi ^{\smallfrown }\check 1\in\rho\|$ and (\ref{EQ904}) is proved. (d)$\Rightarrow $(c). Let $p\in {\Bbb{B}}^+$ and $\rho \in V^{{\Bbb{B}}}$ satisfy (\ref{EQ905}). In $V$ for each $\varphi\in {}^{<\omega }2$ we define $w_{\varphi}=\|(\varphi ^{\smallfrown } 0)\check{\enskip}\in\rho\|\land p$ and check condition (c). So for an arbitrary $i:\omega \rightarrow 2$ we prove \begin{equation}\label{EQ906} \textstyle \bigvee_{A\in [\omega ]^{\omega }}\bigwedge _{n\in A}w_{i\upharpoonright n}^{i(n)}=0. \end{equation} According to (\ref{EQ905}) for each $n\in\omega $ we have $p\Vdash((i\upharpoonright n)^{\smallfrown } 0)\check{\enskip}\in\rho \; \dot\lor \; ((i\upharpoonright n)^{\smallfrown }1)\check{\enskip}\in\rho$, that is $p\leq a_0\lor a_1$ and $p\land a_0\land a_1=0$, where $a_k=\|((i\upharpoonright n)^{\smallfrown } k)\check{\enskip}\in\rho\|$, $k\in\{0,1\}$, which clearly implies $p\land a_0'=p\land a_1$, i.e. \begin{equation}\label{EQ907} p\land\|((i\upharpoonright n)^{\smallfrown } 0)\check{\enskip}\in\rho\|'=p \land\|((i\upharpoonright n)^{\smallfrown } 1)\check{\enskip}\in\rho\|. \end{equation} Let us prove \begin{equation}\label{EQ908} w_{i\upharpoonright n}^{i(n)}=\|(i\upharpoonright(n+1))\check{\enskip}\in\rho\|\land p. \end{equation} If $i(n)=0$, then $w_{i\upharpoonright n}^{i(n)}= \|((i\upharpoonright n) ^{\smallfrown } 0)\check{\enskip}\in\rho\|\land p=\|((i\upharpoonright n)^{\smallfrown } i(n))\check{\enskip}\in\rho\|\land p$ and (\ref{EQ908}) holds. 
If $i(n)=1$, then according to (\ref{EQ907}) $w_{i\upharpoonright n}^{i(n)}=p\setminus w_{i\upharpoonright n}= p\land\|((i\upharpoonright n)^{\smallfrown } 0)\check{\enskip}\in\rho\|'= p\land\|((i\upharpoonright n)^{\smallfrown }1)\check{\enskip}\in\rho\|= p\land\|((i\upharpoonright n)^{\smallfrown } i(n))\check{\enskip}\in\rho\|$ and (\ref{EQ908}) holds again. Now $ \bigvee_{A\in [\omega ]^{\omega }}\bigwedge _{n\in A} w_{i\upharpoonright n}^{i(n)} =p\land \| \exists A \in (([\omega ]^{\omega })^V )\check{\enskip } \; \forall n\in A\; \check{i}\upharpoonright(n+1)\in\rho\| =p\land\| \rho\cap\check{g^i} \mbox{ is supported} \|=0$, since by (\ref{EQ905}) $p \leq \| \rho\cap\check{g^i} \mbox{ is unsupported}\|$. Thus (\ref{EQ906}) is proved. (d)$\Rightarrow$(e) is obvious and (e)$\Rightarrow$(d) follows from The Maximum Principle. \hfill $\Box$ \par \vspace*{2mm} Concerning condition (e) of the previous theorem we note that in \cite{KuSo} the following characterization is obtained. \begin{te} \hspace{-2mm}{\bf .}\hspace{2mm} \rm \label{T913} White has a winning strategy in the game ${\cal G}_4$ on a c.B.a.\ ${\Bbb{B}}$ if and only if in some generic extension, $V_{{\Bbb{B}}}[G]$, there is a subset $R$ of the tree ${}^{< \omega }2$ containing either $\varphi ^{\smallfrown }0$ or $\varphi ^{\smallfrown }1$, for each $\varphi\in {}^{<\omega }2$, and having finite intersection with each branch of the tree ${}^{< \omega }2$ belonging to $V$. \end{te} \rm \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T904} Let $\Bbb B$ be a complete Boolean algebra. If forcing by ${\Bbb{B}}$ produces an independent real in some generic extension, then White has a winning strategy in the game ${\cal G}_3$ played on ${\Bbb{B}}$. \end{te} \noindent{\bf Proof. } Let $p=\|\exists x\subseteq\check\omega\;\; x\mbox{ is independent}\|>0$. 
Then, by The Maximum Principle, there is a name $\tau\in V^{\Bbb{B}}$ such that \begin{equation}\label{EQ909} p\Vdash\tau\subseteq\check\omega\;\land\;\forall A\in(([\omega]^{\omega})^V)\check{\enskip}\;\;(|A\cap\tau|=\check\omega\;\land\;|A\setminus\tau|=\check\omega). \end{equation} Let us prove that $K=\{n\in\omega:\|\check n\in\tau\|\land p\in\{0,p\}\}$ is a finite set. Clearly $K=K_0\cup K_p$, where $K_0=\{n\in\omega:p\Vdash\check n\notin\tau\}$ and $K_p=\{n\in\omega:p\Vdash\check n\in\tau\}$. Since $p\Vdash \check K_0\subseteq\check\omega\setminus\tau\;\land\;\check K_p\subseteq\tau$, according to (\ref{EQ909}) the sets $K_0$ and $K_p$ are finite, thus $|K|<\omega$. Let $q_0\in(0,p)_{{\Bbb{B}}}$ and let $p_n$, $n\in\omega$, be defined by $$p_n=\left\{\begin{array}{ll} q_0 & \mbox{if }n\in K, \\ \|\check n\in\tau\|\land p & \mbox{if }n\in\omega\setminus K. \end{array}\right.$$ Then for $\tau_1=\{\langle\check n,p_n\rangle:n\in\omega\}$ we have $p\Vdash\tau_1=^*\tau$ so according to (\ref{EQ909}) \begin{equation}\label{EQ910} p\Vdash\tau_1\subseteq\check\omega\;\land\;\forall A\in(([\omega]^{\omega})^V)\check{\enskip}\;\;(|A\cap\tau_1|=\check\omega\;\land\;|A\setminus\tau_1|=\check\omega). \end{equation} Then $p_n=\|\check n\in\tau_1\|\in(0,p)_{{\Bbb{B}}}$ and we define a strategy $\Sigma$ for White: at the beginning White plays $p$ and, in the $n$-th move, White plays $p_n$. We prove that $\Sigma$ is a winning strategy for White. Let $\langle p,p_0,i_0,p_1,i_1,\dots\rangle$ be an arbitrary play in which White follows $\Sigma$ and let $S_k=\{n\in\omega:i_n=k\}$, for $k\in\{0,1\}$. Suppose $q=\bigvee_{A\in[\omega]^{\omega}}\bigwedge_{n\in A}p_n^{i_n}>0$.
Now $q\leq p$ and $q= \bigvee_{A\in[\omega]^{\omega}}(\bigwedge_{n\in A\cap S_0}\|\check n\in\tau_1\|\; \land\;\bigwedge_{n\in A\cap S_1}(p\land\|\check n\notin\tau_1\|)) =p\land\bigvee_{A\in[\omega]^{\omega}}\|\check A\cap\check S_0\subseteq\tau_1\; \land\;\check A\cap\check S_1\subseteq \check \omega \setminus \tau_1\| \leq\|\exists A\in(([\omega]^{\omega})^V)\check{\enskip}\;(\check A\cap\check S_0\subseteq\tau_1\; \land\;\check A\cap\check S_1\subseteq \check\omega \setminus\tau_1)\|$. Let $G$ be a ${\Bbb{B}}$-generic filter over $V$ containing $q$. Then there is $A\in[\omega]^{\omega}\cap V$ such that $A\cap S_0\subseteq(\tau_1)_G$ and $A\cap S_1\subseteq\omega\setminus(\tau_1)_G$. But one of the sets $A\cap S_0$ and $A\cap S_1$ must be infinite and, since $p\in G$, according to (\ref{EQ910}), it must be split by $(\tau_1)_G$, a contradiction. Thus $q=0$ and White wins the game. \hfill $\Box$ \par \vspace*{2mm} \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T908} Let ${\Bbb{B}}$ be an $(\omega,2)$-distributive complete Boolean algebra. Then (a) If $\langle p,p_0,i_0,p_1,i_1,\dots\rangle$ is a play satisfying the rules given in Definition \ref{D900}, then Black wins the game ${\cal G}_3$ iff Black wins the game ${\cal G}_4$. (b) Black has a winning strategy in the game ${\cal G}_3$ iff Black has a winning strategy in the game ${\cal G}_4$. \end{te} \noindent{\bf Proof. } (a) The implication ``$\Rightarrow$'' follows from the proof of Proposition \ref{T901}(b). For the proof of ``$\Leftarrow$'' suppose Black wins the play $\langle p,p_0,i_0,p_1,i_1,\dots\rangle$ in the game ${\cal G}_4$. Then, by Theorem \ref{T912}, there exists $q\in{\Bbb{B}}^+$ such that $q\Vdash$``$\sigma$ is infinite''. Since the algebra ${\Bbb{B}}$ is $(\omega,2)$-distributive we have $1\Vdash\sigma\in V$, thus $q\Vdash\sigma\in(([\omega]^{\omega})^V)\check{\enskip}$ and hence it is not the case that $1\Vdash$``$\sigma$ is not supported'' so, by Theorem \ref{T912}, Black wins ${\cal G}_3$. (b) follows from (a).
\hfill $\Box$ \par \vspace*{2mm} \section*{{\normalsize \bf 5. Indeterminacy, problems}} \begin{te}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{T911} $\diamondsuit$ implies the existence of a Suslin algebra on which the games ${\cal G}_1$, ${\cal G}_2$, ${\cal G}_3$ and ${\cal G}_4 $ are undetermined. \end{te} \noindent{\bf Proof. } Let ${\Bbb{B}}$ be the Suslin algebra mentioned in (c) of Theorem \ref{T910}. According to Proposition \ref{T901}(b) and since Black does not have a winning strategy in the game ${\cal G}_4 $, Black does not have a winning strategy in the games ${\cal G}_1 ,{\cal G}_2 ,{\cal G}_3 $ either. On the other hand, since the algebra ${\Bbb{B}}$ is $(\omega,2)$-distributive, White does not have a winning strategy in the game ${\cal G}_1 $ and, by Proposition \ref{T901}(a), White does not have a winning strategy in the games ${\cal G}_2 ,{\cal G}_3 ,{\cal G}_4 $ played on ${\Bbb{B}}$. \hfill $\Box$ \par \vspace*{2mm} \begin{pb}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{P901} According to Theorem \ref{T904}, Proposition \ref{T901} and Theorem \ref{T909}, for each complete Boolean algebra ${\Bbb{B}}$ we have: \begin{center} ${\Bbb{B}}$ is $\omega$-independent $\Rightarrow$ White has a winning strategy in ${\cal G}_3 $ $\Rightarrow$ ${\Bbb{B}}$ is not $(\omega,2)$-distributive. \end{center} Can one of the implications be reversed? \end{pb} \begin{pb}\hspace{-2mm}{\bf .}\hspace{2mm}\rm\label{P902} According to Proposition \ref{T901}(b), for each complete Boolean algebra ${\Bbb{B}}$ we have: \begin{center} Black has a winning strategy in ${\cal G}_1 $ $\Rightarrow$ Black has a winning strategy in ${\cal G}_2 $ $\Rightarrow$ Black has a winning strategy in ${\cal G}_3 $. \end{center} Can some of the implications be reversed?
\end{pb} We note that the third implication from Proposition \ref{T901}(b) cannot be replaced by an equivalence: if ${\Bbb{B}}$ is the Cohen or the random algebra, then Black has a winning strategy in the game ${\cal G}_4$ (Theorem \ref{T910}(b)), while Black does not have a winning strategy in the game ${\cal G}_3$, because White has one (both the Cohen and the random forcing produce independent reals, so Theorem \ref{T904} applies).
\section{Introduction}\label{sec:intro} As language models (LMs) become increasingly good at generating text indistinguishable from human writing, a key question emerges: `How can we best control them to produce what is required while preventing unwanted generations?' This is especially critical for reducing issues of toxicity and bias~\cite{gehman2020toxicprompt, xu2021minority, perez2022redteaming} and misinformation~\cite{taylor2022galactica} in applications that build on these models. Prior work has used special control codes~\cite{keskar2019ctrl} to steer the model towards generating text on certain topics, explored the use of classifiers at inference time to modify the LM's probability distribution~\cite{dathathri2020pplm,krause2021gedi,liu2021dexperts}, or prompted the LM itself to diagnose and remove bias~\cite{schick2021diagnosisdebiasing}. While the former requires additional training with control codes, the latter two approaches have only been shown to work with a small set of attributes as constraints. \input{figtext/teaser} \input{tables/dataset_examples.tex} In this work, we consider the problem of controlling generation in LMs with constraints specified in natural language (Figure~\ref{fig:teaser}). Our framework allows for the use of both guidance \emph{topics} that instruct the model on \emph{what to generate}, as well as \emph{constraints} that specify \emph{what not to generate}, all described in plain English.\footnote{Although we focus on English, our techniques should generalize to other languages too.} The use of natural language allows for better scalability (since new concepts can be expressed in English), ease of specification by end users of the model, and coverage of knowledge-intensive concepts, while not requiring any special retraining of the LM itself. We create a new benchmark called \textsc{Cognac}{} for this task containing two datasets based on WordNet~\cite{miller1995wordnet} and Wikidata~\cite{vrandecic2014wikidata}.
These datasets contain knowledge-focused constraints that strike a balance between broad attribute-level and narrow lexical-level controls, while allowing for easy evaluation of constraint conformation. We find that even state-of-the-art LMs fail to follow simple language constraints. Figure~\ref{fig:teaser} shows an example of how GPT-3~\cite{brown2020gpt3} ignores the directive of not mentioning politicians (in red). To mitigate this failure, we develop \textsc{CognacGen}{}, a language model generation method that can follow linguistic guidance and does not require any retraining of off-the-shelf LMs. \textsc{CognacGen}{} uses prefix-tuning~\cite{li-liang-2021-prefix} over a copy of the same LM to distill from a guidance model that can generate both topic- and constraint-related words given natural language specifications, which can then be used at inference time to modify the output probabilities of the LM for controlled generation. We develop three types of guidance models---binary verifier, top-k token generator, and textual example generator---that provide various levels of guidance to the LM. To handle the multi-token nature of the guidance examples, we also utilize a trie-based generation mechanism to track the guidance progress and ensure faithful guidance. Our results show that \textsc{CognacGen}{} outperforms prior methods and other strong baselines by a significant margin on our instruction conformance score metric, while keeping the generations fluent. When the topic and constraint are explicitly given (e.g., \ti{UK} and \ti{politician}; see Table~\ref{table:dataset_examples}), \textsc{CognacGen}{} outperforms previous methods for controlled generation by up to $12$ points. Furthermore, \textsc{CognacGen}{} outperforms the prominent GPT-3 (\texttt{davinci}) model by $10$ points on both datasets when evaluating with natural language instructions.
Our analysis shows that \textsc{CognacGen}{} is able to improve generation even with imperfect guidance and can successfully generalize to unseen instructions. \section{The {\textsc{Cognac}} Benchmark}\label{sec:setup} \subsection{Task Setup} We study the problem of conditional text generation with topics and constraints provided in natural language. As input, each context includes 1) a \textit{topic} to generate text on (e.g., ``List examples of people who are citizens of United Kingdom''), 2) a number of example generations under that topic (demonstrations), and 3) a \textit{constraint} that specifies what the model should not generate (e.g., ``Keep listing examples below, but do not mention any politician.'')---all specified in natural language. The goal is for the LM to generate fluent on-topic content while respecting the specified constraint. LMs typically learn a probability distribution $p_\theta(x)$ over sequences of tokens. An autoregressive LM can generate text by predicting the probability of the next token conditioned on the previous tokens: $p_\theta(x_j \mid x_{<j})$. In our task, we consider the previous tokens in the context to include a task specification $t$, demonstrations $E = \{ e_k \}_{k=1}^K$, and a constraint $c$. We assume that the task description $t$ is based on a topic entity $\bar{t}$. For example, ``Talk about sports'' is based on the topic entity ``sports''. Similarly, the constraint text $c$ is generated based on a constraint entity $\bar{c}$. The topic and constraint entities are added to the demonstrations using a template (\S\ref{sec:datasets}) to form a full instruction $\mathcal{I} = \mathcal{G}(t, c, E)$.\footnote{In our task, the demonstrations $E$ always share the same topic $\bar{t}$, yet they may violate the constraint $\bar{c}$.} This allows us to check the validity of each generation using a constraint checker $\mathcal{C} (x, \bar{c}) \in \{ 0, 1 \}$ and a topic checker $\mathcal{T}(x, \bar{t}) \in \{ 0, 1 \}$.
Specifically, a sequence $x$ generated by the LM is deemed valid when $x \sim p_\theta(x \mid \mathcal{I})$ satisfies $\mathcal{C}(x, \bar{c}) = 0$ (constraint conformed) and $\mathcal{T}(x, \bar{t}) = 1$ (on topic). We show in Table~\ref{table:dataset_examples} examples of instructions and their corresponding topic and constraint entities. Our task is challenging for three key reasons: 1) the model has to understand the topic and constraint specified in natural language, 2) the topics and constraints are knowledge-intensive---broader than lexical-level constraints (e.g., `Include words ``car'' and ``drive'' in a sentence.') yet more specific than broad attributes such as toxicity or sentiment---and 3) the model has to respect both the topic (which specifies what to generate) and the constraint (which specifies what not to generate) simultaneously. \input{figtext/method} \subsection{Dataset Collection}\label{sec:datasets} To our knowledge, there do not exist datasets for our task that contain topic and constraint specifications in natural language. Therefore, we create two new datasets based on WordNet and Wikidata for our {\textsc{Cognac}} benchmark. \paragraph{WordNet.} We use WordNet \cite{miller1995wordnet} and its hypernymy relation to construct a hierarchical constraint dataset. We select five root nodes ``animal'', ``vehicle'', ``art'', ``food'', and ``sport'' from which the hierarchical structure is constructed. The leaf nodes are instances of their ancestors and are used by the topic and the constraint checkers.
Concretely, when evaluating the generated text $x$ against a constraint entity $\bar{c}$, the WordNet constraint checker is $\mathcal{C}^{\text{wordnet}} (x, \bar{c}) = \mathbbm{1}[\exists s \in \leafs(\bar{c}): \mathcal{M}(s, x) = 1]$, where $\mathcal{M}(s, x)$ indicates whether $s$ is a substring of $x$.\footnote{We implement the checks using exact match including multi-token entities.} We sample two nodes within the same subtree of the WordNet hierarchy ($\texttt{higher-level}: \text{``vehicle''}, \texttt{lower-level}: \text{``car''}$), where the higher-level node is the topic and the lower-level node is the constraint. We collect a total of $221$ unique topics, $1,440$ unique constraints, and they form $3,073$ unique topic-constraint pairs. We sample three leaf nodes under the topic node and use them as demonstrations ($\lvert E \rvert = 3$), where each demonstration is the first sentence of the corresponding Wikipedia page. We collect a dataset with a train/dev/test split of $3,000/500/500$ examples. \paragraph{Wikidata.} We also use Wikidata~\cite{vrandecic2014wikidata} to construct a second dataset. Each property and value pair (e.g., $\texttt{property}: \text{``citizenship''}$, $\texttt{value}: \text{``United Kingdom''}$; shown in Table~\ref{table:dataset_examples}) contains a set of names (e.g., Winston Churchill). We use $5$ properties: \texttt{occupation}, \texttt{citizenship}, \texttt{education}, \texttt{birthplace}, and \texttt{deathplace} from Wikidata. In each instance, the topic entity is a sampled property-value pair and the demonstrations ($\lvert E \rvert = 3$) are drawn from the property-value name set. The constraint entity is selected by choosing what GPT2-XL \cite{radford2019gpt2} is most likely to generate.
When evaluating a generation $x$ with constraint entity $\bar{c}$, the Wikidata constraint checker is $\mathcal{C}^{\text{wikidata}}(x, \bar{c}) = \mathbbm{1}[\exists s \in \text{name-set}(\bar{c}): \mathcal{M}(s, x) = 1]$. We scrape from Wikipedia the corresponding first sentence of each entity. We collect a total of $150$ unique topics, $261$ unique constraints, and they form $540$ unique topic-constraint pairs. We collect a dataset with a train/dev/test split of $1,500/500/198$ examples. We provide the detailed data generation procedures for WordNet and Wikidata in \S\ref{sec:datagen}. Using the WordNet and Wikidata databases for the checker functions enjoys the benefit of straightforward and automatic evaluation. However, we recognize that these knowledge bases are inherently limited in their coverage of relevant entities. \paragraph{Diverse natural language instructions.} Our goal is to assess the model's ability to understand instructions that are diversely verbalized. For example, templates include instructions where the order of the topic and constraint varies and the lexical context differs. We collect $35$ unique templates to reflect the diverse nature of the instructions, and generate a total of $107,555$ and $18,900$ unique instructions for WordNet and Wikidata, respectively. We split the templates across train/dev/test as $3/3/29$. The templates are collected by having PhD students write the first nine seed templates, which are then expanded by paraphrasing with GPT-3 \cite{brown2020gpt3}. The paraphrased templates are edited through human inspection to ensure quality. We provide examples in \S\ref{sec:templates}. \subsection{Evaluation Metrics} \label{sec:eval_metrics} To evaluate different generation methods of LMs, we use metrics that test for correctness and fluency of the generations. Correctness is measured by the model's ability to generate text that conforms to the constraint while staying on topic.
Fluency is measured by the model's ability to generate text that is coherent and not overly repetitive or simply copying from the input. \paragraph{Instruction Conformance (IC).} The main metric we use is whether the generation $x$ conforms to the constraint $c$ while staying on topic $t$: \begin{align*} \text{IC} = \sum_{(\bar{t}, \bar{c}, E) \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{T}(x, \bar{t}) = 1 \cap~\mathcal{C}(x, \bar{c}) = 0 ]}{\lvert \mathcal{D} \rvert}, \end{align*} where $\mathcal{D}$ is the evaluation dataset. A higher IC score indicates that the model can generate text that conforms to the constraint while staying on topic. We also report the on-topic score $\sum_{x \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{T}(x, \bar{t}) = 1 ] }{\lvert \mathcal{D} \rvert}$ (higher is better) and the constraint violation score $\sum_{x \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{C}(x, \bar{c}) = 1 ] }{\lvert \mathcal{D} \rvert}$ (lower is better). \paragraph{Copy-BLEU.} We report the extent to which the generation undesirably copies from the demonstrations. The Copy-BLEU score is calculated by taking the maximum BLEU score between the generated text and the $\lvert E \rvert$ demonstrations. The lower the Copy-BLEU, the less the generation copies from the demonstrations, which is more desirable. \paragraph{Repetition (Rep-n).} We report the ratio of $n$-gram repetition in the generated text (\textbf{Rep-n}; lower is better), as proposed in \citet{welleck2020unlikelihood}. \paragraph{Perplexity (PPL).} The perplexity of the generated text is calculated with a pre-trained GPT2-XL model \cite{radford2019gpt2} (lower is better). \section{Method}\label{sec:method} \subsection{Overview} We posit that, due to the knowledge-intensive nature of \textsc{Cognac}{}, the model will benefit from an explicit use of its own knowledge by querying itself.
To this end, we explicitly factorize the conditional probability as opposed to leaving the onus of inference to the LM. The desired distribution: \begin{align*} \begin{split} p(x \mid E, t, c) &\propto p(x \mid E) p(t, c \mid x, E) \\ &= p(x \mid E) p(t \mid x) p(c \mid x) \end{split} \end{align*} can be modeled by three components: 1) $p(x \mid E)$, the probability conditioned only on the demonstrations $E$, 2) $p(t \mid x)$, which evaluates whether the task is performed, and 3) $p(c \mid x)$, which evaluates whether the constraint is conformed to. The first is the \textit{generation model}, which can be modeled by the original pre-trained LM reasonably well, as recent work demonstrates LMs' ability to perform in-context learning with task specifications and in-context demonstrations. We use the latter two as a \textit{guidance model} to steer generation explicitly. \subsection{Guided Generation}\label{sec:guided_generation} \textsc{CognacGen}{} updates the next token prediction probability from the generation model by modifying the logits using the guidance (the ``Generate'' step in Figure~\ref{fig:method}). Specifically, the next token probability is modified as \begin{align*} p(x_j \mid x_{<j}, \mathcal{I}) = \softmax(o_j + \alpha o_j^t - \beta o_j^c ), \label{eq:gen_prob} \end{align*} where $o_j$ is the logit vector corresponding to the original probability $p(x_j \mid x_{<j}, E)$, $o_j^t, o_j^c \in \{0, 1\}^{\lvert V \rvert}$ are the \textit{guidance logits} provided by the guidance model at each generation step $j$, and $\alpha$ and $\beta$ are hyperparameters that control the strength of the guidance. We use greedy decoding to generate from the above probability for \textsc{CognacGen}{} in our experiments. We describe how the guidance logits are obtained in the following sections. \subsection{Guidance Model}\label{sec:guidance_model} Given a topic $t$ or a constraint $c$, we construct a guidance model that models the guidance probability $p(t \mid x)$ or $p(c \mid x)$.
The guidance model has the same architecture as the generation language model. We use the guidance model to produce \textit{guidance logits} that modify the next-token logits of the language model at the relevant subword token indices, as described in \S\ref{sec:guided_generation}. We explore three variants of the guidance model: 1) \textit{binary verifier}, 2) \textit{top-k token}, and 3) \textit{textual example}. All guidance models compute the guidance probability $p_\text{guide}(\cdot \mid q)$, where $q$ is a query based on a predefined template. The query template takes the constraint entity $\bar{c}$ as input. We use $\bar{c} = \text{`wine'}$ as an example throughout. \input{tables/guide_queries.tex} \paragraph{Binary verifier.} The binary verifier evaluates the probability $p_{\text{guide}}(\text{``yes''} \mid q)$, where $q = \text{``Is}~x_j~\text{a type of}~\bar{c}~\text{?''}$ and $x_j$ is the token to generate at timestep $j$. Since $x_j$ sometimes does not carry clear meaning as a single token, we first perform a greedy-decoded look-ahead \cite{lu2021neurologic} using the generation LM to construct a multi-token span $x_{j:j+M}$ and obtain guidance from the verifier model.\footnote{We use the SpaCy part-of-speech parser to detect nouns, noun phrases, or names. Therefore, $M$ is determined by the parser.} For example, instead of using ``noir'' as the query word, we set $w = \text{``pinot noir''}$ to construct the query and send it to the verifier model. When $p_\text{guide}(\text{``yes''} \mid q) > p_\text{guide}(\text{``no''} \mid q)$, the generated entity $w$ is tokenized to construct the verifier guidance logits $o_j^c$, whose $i$-th index is $\mathbbm{1}[i \in \{ x_j, x_{j+1}, \dots, x_{j+M} \}]$. The binary verifier guidance model can be viewed as an approximation of the constraint checker $\mathcal{C}(\cdot, \cdot)$ that relies only on the existing knowledge in the LM.
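To make the logit modification of \S\ref{sec:guided_generation} concrete, the following minimal Python sketch applies $\softmax(o_j + \alpha o_j^t - \beta o_j^c)$ to a toy four-token vocabulary; the logit values and the setting $\alpha=\beta=2$ are illustrative only, not values used in our experiments:

```python
import math

def guided_next_token_probs(logits, o_t, o_c, alpha=2.0, beta=2.0):
    """softmax(o_j + alpha * o_j^t - beta * o_j^c): boost tokens flagged
    by the topic guidance, suppress tokens flagged by the constraint guidance."""
    z = [o + alpha * t - beta * c for o, t, c in zip(logits, o_t, o_c)]
    m = max(z)                                  # subtract max for stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of size 4; 0/1 guidance vectors as in o^t, o^c in {0,1}^|V|.
probs = guided_next_token_probs(
    [1.0, 1.0, 1.0, 1.0],   # uniform base logits
    [1, 0, 0, 0],           # token 0 is marked on-topic
    [0, 1, 0, 0],           # token 1 is marked as violating the constraint
)
# Token 0 now has the highest probability and token 1 the lowest,
# so greedy decoding picks the on-topic token.
```

Under greedy decoding, the boosted token is selected whenever the guidance margin outweighs the difference in base logits.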
\paragraph{Top-k token.} The top-k token{} guidance uses the next token probability distribution from the guidance model $p_{\text{guide}}(\cdot \mid q)$, where $q = \text{``What are some examples of wine?''}$. Concretely, we use the top-$k$ tokens of this distribution as guidance and construct the top-$k$ guidance logits $o_j^c$, whose $i$-th index is $\mathbbm{1}[i \in \texttt{top-k}(p_{\text{guide}}(\cdot \mid q)) ]$. This variant falls short in providing guidance for multi-token entities due to its single-step nature (more discussion in \S\ref{sec:method_detail}). \paragraph{Textual example.} The textual example{} guidance model takes a query $q = \text{``What are some examples of wine?''}$ and generates a set of guidance examples such as ``cabernet'', ``merlot'', ``pinot noir'', and ``pinot gris''. We use top-$p$ \cite{holtzman2019curious} sampling with beam search to generate a diverse set of guidance examples. Directly tokenizing the examples into a set of subword tokens and using them to modify the logits can lead to suboptimal generation due to loss of order. For instance, a guidance example ``pinot noir'' may be split into ``pinot'' and ``noir'', and the probabilities of the two tokens would be modified at the same timestep. To mitigate this issue, we propose a trie-based approach to decide what guidance to apply at each step. We construct a trie $\Gamma$ from the generated guidance examples. With the above guidance examples, the root node connects to its child nodes ``cabernet'', ``merlot'', and ``pinot'', and the node ``pinot'' connects to its child nodes ``noir'' and ``gris''. At each generation step $j$, the trie takes the last generated token and returns \textit{only} its child tokens as guidance. For instance, once ``pinot'' has been generated, the returned tokens are ``noir'' and ``gris''. We show the same procedure in Figure~\ref{fig:method} with names as an example.
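The trie construction and per-step lookup can be sketched as follows. This is a simplified version that tracks only the immediately preceding token rather than the full path through the trie, and it assumes whitespace tokenization instead of the model's subword tokenizer:

```python
class GuidanceTrie:
    """Trie over tokenized guidance examples. Querying with the last
    generated token returns only its child tokens, so multi-token
    entities like 'pinot noir' are guided in the correct order."""

    def __init__(self, examples):
        self.root = {}
        for example in examples:
            node = self.root
            for token in example.split():
                node = node.setdefault(token, {})

    def next_tokens(self, prev_token):
        # Children of prev_token under the root (one-step lookup sketch).
        return set(self.root.get(prev_token, {}))

trie = GuidanceTrie(["cabernet", "merlot", "pinot noir", "pinot gris"])
```

With the wine examples from the text, `trie.next_tokens("pinot")` yields only `"noir"` and `"gris"`, so ``pinot'' is never followed by an out-of-order continuation.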
This set of tokens is used to construct the textual guidance logits $o_j^c$, whose $i$-th index is $\mathbbm{1}[i \in \Gamma(x_{j-1})]$. We summarize the different guidance models and their corresponding elements in Table~\ref{table:guide_queries} and provide a more detailed description of the textual example guidance in Algorithm~\ref{algo:gen_and_guide} in Appendix~\ref{appx:appendix}. \subsection{Tackling Diverse Natural Language Instructions} The method described so far assumes that the topic and constraint are given and can be used in a query template to obtain guidance. However, the full \textsc{Cognac}{} task requires reading the entire instruction and demonstrations as input. We propose to train the model to take the natural language instruction and demonstrations as input and generate the guidance directly. With the set of diverse instructions (described in \S\ref{sec:datasets}), the model needs to infer the topic and the constraint entities from the full input containing the instruction and demonstrations. We fine-tune the generation model using prefix-tuning \cite{li-liang-2021-prefix} on the examples generated by its own textual example{} guidance. This can be thought of as distilling the model's own knowledge, using the textual guidance as the target output, and generalizing the implicit topic and constraint inference to unseen instructions. Formally, we fine-tune the added prefix weights and save the prefix activations of the fixed guidance model $p_{\text{guide}}(y \mid [\mathcal{I}; P]; \theta, \phi)$, where $y$ are the examples generated by the textual example{} model\footnote{We only show results for textual example{} since it works best in our experiments, but the distillation procedure can be applied to all three guidance models.}, $\phi$ are the added fine-tuning weights used to generate the activations ($\theta$ remains fixed throughout), and $P$ is the added prefix tokens.
The fine-tuning objective minimizes the loss $\mathcal{L}(\phi) = - \sum_t \log p_{\text{guide}}(y_t^* \mid [\mathcal{I}; P]; \theta, \phi)$. At the end of training, only the prefix activations $\phi(P)$ are saved. This step distills the model's own knowledge and generalizes the implicit topic and constraint inference to unseen natural language instructions (Figure~\ref{fig:method} Stage 1). \citet{schick2021diagnosisdebiasing} share a similar high-level idea of using the same model's ability to identify bias and modify its generation. The authors propose to self-debias by prompting the model to obtain a biased probability and subtracting this probability from the original generation probability. However, our method focuses on a more knowledge-intensive task, which requires the guidance to provide specific knowledge instead of a broader detection of biases. Our task also requires staying on topic and avoiding constraint violations \textit{at the same time}. This warrants a different design for $p(t \mid x)$ and $p(c \mid x)$, which leads to developing the three guidance models and their tailored decoding design (e.g., incorporating the trie). Finally, our setting expands to inferring the topic and constraint (not given as control codes or attributes) from natural language instructions. \section{Datasets}\label{sec:data} \subsection{Constraint Dataset Construction} Existing benchmarks for controlling generation mainly focus on reducing toxicity, transferring style, mapping tabular data to text, or steering the model to generate text given a high-level sentiment or topical control code. We focus on assessing the model's ability to stay on topic and avoid violating constraints, which requires leveraging its own knowledge and using it appropriately during generation. To this end, we construct two datasets: 1) a hierarchical constraint dataset based on WordNet and 2) a constraint dataset based on Wikidata.
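The distillation objective is token-level cross-entropy on the self-generated targets; a minimal sketch, where the probabilities are stand-ins for $p_{\text{guide}}(y_t^* \mid [\mathcal{I}; P]; \theta, \phi)$ (in practice these come from the prefix-tuned LM, and only the prefix parameters $\phi$ receive gradients):

```python
import math

def distillation_loss(target_token_probs):
    """L(phi) = -sum_t log p_guide(y_t* | [I; P]; theta, phi): the
    negative log-likelihood of the textual-example targets y*."""
    return -sum(math.log(p) for p in target_token_probs)

# If the guidance model assigns probability 0.5 to each of two target
# tokens, the loss is 2 * ln(2).
loss = distillation_loss([0.5, 0.5])
```

A perfectly confident guidance model (probability $1$ on every target token) would reach a loss of zero.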
\paragraph{WordNet.} We use WordNet \cite{} and its hypernymy relation to construct a hierarchical constraint dataset. We select $5$ root nodes ``animal'', ``vehicle'', ``art'', ``food'', and ``sport'' from which the hierarchical structure is constructed. The leaf nodes are instances of their ancestors and are used by the topic and constraint checkers. Concretely, the WordNet constraint checker evaluates the generated text $x$ against a constraint entity $\bar{c}$ as $\mathcal{C}^{\text{wordnet}} (x, \bar{c}) = \mathbbm{1}[\, x \cap \leafs(\bar{c}) \neq \emptyset \,]$. For each leaf node and its predecessor node (e.g., $\texttt{leaf}: \text{``Labrador''}, \texttt{predecessor}: \text{``dog''}$), we scrape from Wikipedia the sentences in the first paragraph that contain it. We sample two nodes within the same sub-tree of the WordNet hierarchy, where the higher-level node is the topic and the lower-level node is the constraint. Finally, we sample $3$ leaf nodes from below the topic node and use them as demonstrations ($\lvert E \rvert = 3$). Each example is constructed by: 1) sampling a node as the topic, 2) sampling $\lvert E \rvert = 3$ nodes under the topic node, and 3) generating a continuation from GPT2-XL and using the generated node as the constraint. We collect a dataset with train/dev/test splits of \todo{XX} examples. \paragraph{Wikidata.} We use Wikidata \cite{} to construct the second constraint dataset. Each entity in Wikidata is characterized by a property and its value (e.g., $\texttt{property}: \text{``occupation''}$, $\texttt{value}: \text{``politician''}$). We use \todo{XX} properties from Wikidata (\S\ref{appx:appendix}). We scrape from Wikipedia the corresponding first sentence of each entity.
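A sketch of the WordNet constraint checker, assuming a hypothetical `leafs` lookup that returns the leaf-node surface forms under a constraint node (the toy hierarchy below is invented for illustration):

```python
def wordnet_constraint_checker(generated_mentions, constraint, leafs):
    """C^wordnet(x, c) = 1 iff the generation mentions any leaf node
    under the constraint node, i.e. the constraint is violated."""
    return int(bool(set(generated_mentions) & leafs(constraint)))

# Toy hierarchy: the node 'dog' has leaves 'Labrador' and 'poodle'.
toy_leafs = lambda node: {"Labrador", "poodle"} if node == "dog" else set()
```

For example, a generation mentioning ``Labrador'' under the constraint ``dog'' is flagged as a violation, while one mentioning ``tabby'' is not.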
Each example is constructed by: 1) sampling a property and a value as the topic, 2) sampling $\lvert E \rvert = 3$ entities from the property-value set, and 3) generating a continuation from GPT2-XL and using the generated entity as the constraint. We collect a dataset with train/dev/test splits of \todo{XX} examples. \paragraph{Diverse instruction templates.} Our goal is to assess the model's ability to understand instructions that are diversely verbalized. For example, the templates include instructions where the order of the topic and constraint varies and the lexical context differs (see \S\ref{appx:appendix} for examples). We collect \todo{XX} templates to reflect the diverse nature of the instructions. \subsection{Metrics} We evaluate on two axes: 1) the model's ability to stay on topic and avoid violating constraints, and 2) maintaining fluency. \paragraph{Instruction Conformance (IC).} The primary metric we use is whether the generation $x$ stays on topic $t$ while not violating the constraint $c$: \begin{align} \text{IC} = \sum_{x \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{T}(x, \bar{t}) = 1 ] \cdot \mathbbm{1} [ \mathcal{C}(x, \bar{c}) = 0 ]}{\lvert \mathcal{D} \rvert}, \end{align} where $\mathcal{T}(x, \bar{t}) = 1$ indicates the generation is on topic and $\mathcal{C}(x, \bar{c}) = 1$ indicates a constraint violation. A higher IC indicates that the model is able to generate text that conforms to the constraint while staying on topic. We also report the on-topic score $\sum_{x \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{T}(x, \bar{t}) = 1 ] }{\lvert \mathcal{D} \rvert}$ (higher is better) and the constraint violation score $\sum_{x \in \mathcal{D}} \frac{\mathbbm{1} [ \mathcal{C}(x, \bar{c}) = 1 ] }{\lvert \mathcal{D} \rvert}$ (lower is better). \paragraph{Copy-BLEU.} We report the extent to which the generation undesirably copies from the demonstrations. Copy-BLEU is calculated as the maximum BLEU score between the generated text and the $\lvert E \rvert$ demonstrations; the lower the Copy-BLEU, the less the generation copies from the demonstrations.
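The three scores can be computed in one pass; in this sketch, `topic_fn` and `constraint_fn` stand in for the checkers $\mathcal{T}$ and $\mathcal{C}$ (with $\mathcal{C} = 1$ on a violation), and the toy substring-based checkers and dataset are invented for illustration:

```python
def evaluate(dataset, topic_fn, constraint_fn):
    """Return (IC, on-topic, violation) over (generation, topic, constraint)
    triples; IC counts generations that are on topic AND violation-free."""
    n = len(dataset)
    ic = sum(topic_fn(x, t) * (1 - constraint_fn(x, c)) for x, t, c in dataset) / n
    on_topic = sum(topic_fn(x, t) for x, t, c in dataset) / n
    violation = sum(constraint_fn(x, c) for x, t, c in dataset) / n
    return ic, on_topic, violation

# Toy checkers: on topic if the topic word appears in the generation,
# violation if the constraint word appears.
topic_fn = lambda x, t: int(t in x)
constraint_fn = lambda x, c: int(c in x)
data = [("a dog story", "dog", "cat"), ("a dog and cat story", "dog", "cat")]
scores = evaluate(data, topic_fn, constraint_fn)
```

On the toy data, both generations are on topic but one violates the constraint, so IC is $0.5$ while the on-topic score is $1.0$.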
\paragraph{Repetition.} We report the ratio of repetition (lower is better) in the generated text (\textbf{seq-rep-n}), proposed in \citet{welleck2020unlikelihood}. \paragraph{Perplexity (PPL).} We report the perplexity (lower is better) of the generated text, calculated with GPT2-XL. \section{Related Work}\label{sec:related} \paragraph{Constrained text generation.} Prior approaches to constrained text generation fall into several categories. First, works like CTRL~\cite{keskar2019ctrl}, GeDi \cite{krause2021gedi}, and Neurologic decoding~\cite{lu2021neurologic,lu2022neurologicAstar} use additional context information such as control codes, attributes, or word lists to condition their generations. Second, papers like PPLM~\cite{dathathri2020pplm} and DExperts \cite{liu2021dexperts} modify the model's output probabilities during inference using classifiers and auxiliary models, respectively. Along the same lines, unlikelihood training \cite{welleck2020unlikelihood} and CRINGE~\cite{adolphs2022cringe} use auxiliary token-level and sequence-level objectives to discourage models from assigning high probabilities to certain tokens or sequences, while Quark~\cite{lu2022quark} and \citet{liu2021constrained} use reinforcement learning to do the same. All these approaches are limited by the type of control they exert over the language model, restricted to high-level concepts like sentiment, toxicity, or repetition and usually employing a fixed set of pre-determined binary or categorical codes. The third category, most similar to our work, consists of methods that use a language model's own knowledge to guide its generations. This includes self-debiasing~\cite{schick2021diagnosisdebiasing}, which reduces toxicity by prompting the model to generate toxic content and offsetting this behavior in the main generation LM. This method is limited to a single high-level attribute (e.g.,
toxicity) that needs to be suppressed, while \textsc{CognacGen}{} can handle a composition of attributes (topic + constraints) based on precise factual knowledge. More recently, Self-correction~\cite{welleck2022selfcorrect} learns a correction module that iteratively edits generated text and is trained using scalar or language feedback. Their method requires progressively training and updating the corrector module, and generation uses multiple iterations, whereas our guidance module is prefix-tuned only once and can generate text in one pass. \paragraph{Instruction following.} A large body of literature in embodied agent learning has focused on following instructions or constraints in a grounded setting~\citep{vogel2010learning,chen2011learning,artzi2013weakly,luketina2019survey,misra2018mapping,yang2020safe}. These papers focus on instruction understanding that maps to actions in an external environment, as opposed to text generation. More recently, papers have explored fine-tuning language models to follow natural language instructions for various NLP tasks~\cite{ouyang2022instructgpt,mishra-etal-2022-cross,wang2022super,wei2021finetuned,bach2022promptsource}. In contrast to our work, these methods do not focus on using language to control the generated text in a fine-grained manner and require costly fine-tuning or large-scale prompt creation. \section{Experimental Setup}\label{sec:exp} \paragraph{Evaluation} We perform evaluations under two settings for both datasets in \textsc{Cognac}{}: \begin{enumerate} \item Both the topic and the constraint are specified using a \textbf{control code} each; \item The topic and the constraint are specified in the form of a \textbf{natural language instruction}. \end{enumerate} The control code setting allows us to better compare with prior work, which mostly uses a small set of attributes to steer generation.
In this setting, we examine \textsc{CognacGen}{} with all three guidance types: binary verifier{}, top-k token{}, and textual example{}. We adapt \textsc{CognacGen}{} to this setting by skipping the self-guidance distillation step and using the topic and constraint directly as control codes. The NL instruction setting, however, is more realistic and closer to the real-world use case, where a user can control the LM with natural language. For this setting, the test set split (\S\ref{sec:setup}) contains a set of instruction templates that are never seen in the train set (details in \S\ref{sec:datasets}). We use textual example{} guidance for \textsc{CognacGen}{} in this setting because of its superior performance across the board in the control code setting. \paragraph{Baselines} When evaluating with control codes, we compare \textsc{CognacGen}{} to a fine-tuned baseline built on CTRL \cite{keskar2019ctrl}, where the topic and the constraint are provided as control codes appended at the beginning of the input text. We also compare to the self-debiasing technique proposed by \citet{schick2021diagnosisdebiasing}, as it is the only recent controllable generation method that applies to an arbitrary number of control codes/attributes without fine-tuning. When evaluating with natural language instructions, we compare with two large language models (175B parameters): GPT-3 (\texttt{davinci}) \cite{brown2020gpt3} and InstructGPT (\texttt{text-davinci-002}) \cite{ouyang2022instructgpt}. \paragraph{Model details} All of our \textsc{CognacGen}{} variants use GPT-2 XL (1.5B parameters) \cite{radford2019gpt2} for both the generation and guidance models. The top-k token{} uses the top $20$ tokens for the topic and the top $40$ tokens for the constraint.
The textual example{} guidance generates $200$ tokens for building the trie. For both GPT-3 and InstructGPT, we use top-$p = 0.95$ and temperature $\tau = 0.9$. We provide more details about self-guidance distillation training in \S\ref{sec:method_detail}. \section{Results}\label{sec:results} \paragraph{Main results.} Tables~\ref{table:main_results} and \ref{table:main_results_2} display the results for the two evaluation settings, respectively. In the control code setting (Table~\ref{table:main_results}), \textsc{CognacGen}{} (textual example{}) achieves the best instruction conformance (IC) scores, outperforming the self-debiasing baseline by $12$ points on WordNet and by $16$ points on Wikidata. The fine-tuned baseline achieves the lowest IC scores across both datasets. Among \textsc{CognacGen}{}'s variants, textual example{} guidance performs better than top-k token{} guidance and the binary verifier{}. All variants of \textsc{CognacGen}{} are similarly fluent, with textual example{} having a slightly lower (better) perplexity of $47.9$. In the NL instruction setting (Table~\ref{table:main_results_2}), \textsc{CognacGen}{} textual example{} outperforms GPT-3 (legacy) by $10.0$ points on WordNet and $11.6$ points on Wikidata, despite having far fewer parameters (1.5B vs.\ 175B). InstructGPT achieves much higher scores ($49$ IC on WordNet and $41.9$ IC on Wikidata), but it is a much larger model (175B) that is also fine-tuned on instruction following using human feedback (RLHF)~\cite{ouyang2022instructgpt}. To analyze model performance on different kinds of templates, we report IC scores for each of the three templates in the development sets in Table~\ref{table:nl-instruct-generalize}. We observe that performance across templates stays about the same for WordNet, but for Wikidata, the template with the topic and constraint specified at the end proves more challenging than the others.
This highlights challenges due to structural variations in instruction templates and how they may manifest differently in each dataset. \input{tables/nl_instruct_generalize} \paragraph{Performance analysis by category.} We analyze the performance of \textsc{CognacGen}{} (textual example{}) by category for both WordNet (Table~\ref{table:wordnet_breakdown}) and Wikidata (Table~\ref{table:wikidata_breakdown}), revealing how each category poses different challenges. We observe that \textsc{CognacGen}{} struggles to avoid violating constraints for the `Art' category, with an IC of $15.0$, a much lower score compared to the other categories, which all have $>30.0$ IC. Moreover, for knowledge-heavy categories such as `Art' in WordNet and `birthplace'/`deathplace' (as topic) in Wikidata, \textsc{CognacGen}{} struggles to stay on topic. \input{tables/wordnet_breakdown} \input{tables/wikidata_breakdown} \paragraph{Model ablations.} To provide more insight into the workings of \textsc{CognacGen}{}, we ablate the trie-based generation and also compare with a database oracle model on Wikidata, which provides an upper bound on the IC score when using the decoding method proposed in \S\ref{sec:guided_generation}. This oracle model has access to the knowledge base and hence can provide perfect guidance (Table~\ref{table:trie_oracle_ablation}). The oracle achieves an IC of $73.0$, compared to \textsc{CognacGen}{}'s $35.8$, indicating that there is considerable room for improvement on our task, both in generating more on-topic text and in avoiding violations. Further, both \textsc{CognacGen}{} and the oracle degrade drastically in performance when the tries are removed, highlighting the effectiveness of using tries to guide generation. This degradation is particularly pronounced due to the need to generate multi-token names in Wikidata.
\begin{table}[t] \centering \resizebox{1.0\columnwidth}{!}{% \begin{tabular}{lcccc} \toprule Model & IC $\uparrow$ & On-Topic $\uparrow$ & Vio. $\downarrow$ & PPL $\downarrow$\\ \midrule \textsc{CognacGen}{} & \bf{35.8} & \bf{43.8} & 14.2 & 6.4\\ ~~~- w/o trie & 10.4 & 11.2 & \bf{3.2} & \bf{5.0}\\ \midrule Oracle & \bf{73.0} & \bf{73.4} & \bf{0.4} & 9.9\\ ~~~- w/o trie & 13.2 & 12.6 & 2.0 & \bf{3.8}\\ \bottomrule \end{tabular} } \caption{Ablation on the trie between \textsc{CognacGen}{} (textual example{}) and the oracle, which assumes access to the knowledge base instead of relying on the LM's internal knowledge. The ablation is on Wikidata. } \label{table:trie_oracle_ablation} \end{table} \paragraph{Qualitative examples.} Finally, Table~\ref{table:generation_examples} shows example generations from \textsc{CognacGen}{} and GPT-3 (\texttt{davinci}) on WordNet and Wikidata. For WordNet, \textsc{CognacGen}{} generates constraint-conforming output, whereas GPT-3's generation violates the constraint by generating examples including ``scallop''. On Wikidata, \textsc{CognacGen}{} is able to follow the instructions and generate a sentence about a journalist, while GPT-3 fails to stay on topic. \section{Conclusion}\label{sec:conclusion} We have introduced a new task for controllable generation in language models with constraints specified in natural language. We developed \textsc{Cognac}{}, a new benchmark containing knowledge-based constraints using data from WordNet and Wikidata, and showed that even state-of-the-art language models like GPT-3 fail to conform to the provided instructions. We then developed \textsc{CognacGen}{}, a method that uses knowledge internal to a language model to guide its own generations. Our approach involves several key innovations, such as guidance self-distillation using prefix-tuning and a trie-based decoding scheme based on the guidance of textual examples.
This helps the model generate on-topic text that violates constraints less frequently compared to several baselines, including much larger models like GPT-3. Moreover, our method requires training only the prefix parameters and can easily be scaled to larger models without significant computational overhead. Our analysis also revealed that there is still significant room for improvement on \textsc{Cognac}{}, and we hope future approaches will find the benchmark useful for developing better methods to control language models. \section*{Limitations}\label{sec:limitation} Our work is aimed at reducing undesirable generations in LMs while promoting desirable text. A successful scenario would increase the instruction conformance score when our method is applied. However, our benchmark is limited by the comprehensiveness of the underlying knowledge bases (KBs) used. Any generation that goes beyond the factual knowledge present in the KB would be deemed incorrect, which may amplify biases existing in the KB; e.g., people with certain backgrounds or ethnicities might be underrepresented. Furthermore, even when the generation is within the scope of the KB, the model might still have a tendency to choose certain types of knowledge over others. These implicit biases might cause unfairness to the end users of the model. \section*{Acknowledgements} \label{sec:ack} We thank the members of the Princeton NLP group for the helpful discussions. In particular, we thank Austin Wang and Tianyu Gao for their valuable feedback. \section{Appendix}\label{appx:appendix} \subsection{Data Generation Process}\label{sec:datagen} Figure~\ref{fig:datagen} shows how the topic and constraint are sampled from the two datasets, WordNet and Wikidata.
\paragraph{WordNet.} Each example is constructed by: 1) sampling a node as the topic, 2) sampling $\lvert E \rvert = 3$ nodes under the topic node, and 3) generating a continuation from GPT2-XL \cite{radford2019gpt2} and using the generated node as the constraint. Note that both the topic and the constraint are within the same category. \paragraph{Wikidata.} Each example is constructed by: 1) first sampling a property and a value as the topic, 2) sampling $\lvert E \rvert = 3$ entities from the property-value name set, and 3) generating a continuation from GPT2-XL and using the generated entity as the constraint. In contrast to WordNet, the topic and constraint are from different categories. To ensure their information does not update over time, we use only names of deceased people. \subsection{Natural Language Instruction Templates}\label{sec:templates} We provide the natural language instruction template examples in Table~\ref{table:nl_temp} for the training (templates 0--2) and development (templates 3--5) sets. The templates vary in their instruction positions. In templates 0 and 3, the topic and constraint specification is added at the beginning and the end, respectively, with demonstration examples in the middle. On the other hand, templates 1 and 4 put the demonstrations at the bottom and specify the topic and constraint at once in the beginning. Note that the wording also differs between templates. \subsection{Method Details}\label{sec:method_detail} \input{algos/gen_and_guide} \paragraph{Training and inference details.} During self-guidance distillation, we add $10$ prefix tokens for each topic and constraint and an MLP with hidden size $512$, and save only the activations for inference. We train with a batch size of $16$ using the AdamW optimizer \cite{loshchilov2017decoupled} with learning rate $3\times10^{-5}$ for $20$ epochs. During guided generation, we set $\alpha = 5.0$ and $\beta = 100.0$ and use greedy decoding.
The binary verifier guidance uses $8$ tokens for the greedy look-ahead. We provide the complete algorithm for textual example{} in Algorithm~\ref{algo:gen_and_guide}. \input{figtext/datagen.tex} \input{tables/nl_templates.tex} \paragraph{Top-k token guidance.} While we use only the top-$k$ tokens of the next token probability from the guidance distribution, we could decode multiple steps to handle multi-token entities. To encourage generating only one entity, the query template can be modified to ``What is \textit{one} example of $\bar{c}$?''. We leave this to future work. \subsection{Qualitative Examples} We show qualitative examples of input instances and model-generated outputs in Table~\ref{table:generation_examples}. \input{tables/generation_examples.tex}
\section{Introduction} The stationary distribution of a Markov process gives, when it is unique, the average fraction of time the process spends in any given state in the long-time limit. When finite-time trajectories are considered, fluctuations around this average occupation occur, with a probability that depends on the forces and noise acting on the process. The position of a Brownian particle, for example, is positive in one dimension on average half of the time, yet sample trajectories have a strong tendency to stay positive or negative for any finite time, pushing the positive occupation above or below~$\frac{1}{2}$. Similarly, Brownian particles evolving in complex potentials tend to spend most of their time around the stable equilibria of the acting potential, but are also likely to `climb' it in finite time to reach possible unstable or metastable states. Similar fluctuations of the occupation that persist in time are observed in almost all random systems, including jump processes describing noisy chemical reactions and particle transport \cite{kampen1992,gardiner1985,derrida2007,sekimoto2010}, phase ordering and coarsening dynamics in magnetic systems \cite{dornic1998,drouffe1998,baldassarri1999}, financial time series \cite{bouchaud2000b}, queueing systems \cite{shwartz1995}, as well as random walks on graphs \cite{montanari2002,kishore2012,kishore2013,bacco2015}. In these and many other applications, it is of interest not only to determine the probability that a process ventures in an atypical region of the state space, for example, around a metastable or unstable state, but also to describe with a \emph{modified process} the effective dynamics of the process in that region. We show in this paper how to formulate this problem as a Markov occupation conditioning problem which can be solved using the general framework proposed recently in \cite{chetrite2013,chetrite2014,chetrite2015}. The general idea is illustrated in Fig.~\ref{figtraj1}. 
We consider a general Markov process $X_t$ and probabilistically condition that process on spending a fraction $R_T$ of the time interval $[0,T]$ in some subset $S$ of its state space. Following \cite{chetrite2013,chetrite2014,chetrite2015}, we then derive a new Markov process $\hat X_t$, called the \emph{driven} process, which is equivalent to the conditioned process at the level of stationary states. In particular, the mean occupation of $\hat X_t$ in $S$ is $R_T$, so it realizes in a typical way what is a fluctuation for $X_t$. The driven process is in this sense the modified process mentioned before: it represents the effective (stochastic) dynamics of $X_t$ as this process is seen to `fluctuate' in $S$ for a fraction of time given by the occupation measure $R_T$. When applied to noisy chemical reactions, for example, the driven process gives an effective chemical reaction with modified rates accounting for concentration fluctuations. For a random walk visiting a `rare' graph component, it gives a new random walk that concentrates on that component. This effective representation of fluctuations can be constructed for any ergodic Markov process, including Markov chains and jump processes. Here, we focus on diffusions described by stochastic differential equations in order to provide a new application of the framework developed in \cite{chetrite2013,chetrite2014,chetrite2015} and to set a template for applications based on continuous-space and continuous-time Markov models. As an example, we study in Sec.~\ref{secOU} the Langevin equation conditioned on staying in the interval $[a,b]$, the half-line $[a,\infty)$, and the point $\{a\}$ for a fraction of the time interval $[0,T]$. The second conditioning is a variant of the so-called \emph{Brownian meander}, corresponding to Brownian motion restricted to stay positive for $t\in [0,T]$ \cite{revuz1999,janson2007,majumdar2015}.
Other physical and manmade applications of the driven process related to more complex diffusions, jump processes, and random walks are mentioned in the conclusion of the paper. \begin{figure*}[t] \includegraphics[width=2.9in]{traj1f}% \hspace{0.5in}% \includegraphics[width=2.9in]{rhotraj1f} \caption{Illustration of the driven process for the occupation region $S=[0,1]$. Left: Sample trajectory of a process $X_t$ (black curve) spending about $40\%$ of its time in $S$ (gray region) compared to a sample trajectory of the driven process $\hat X_t$ (blue curve) representing the process $X_t$ conditioned on spending $80\%$ of its time in $S$. Right: Fraction $R_T$ of the time interval $[0,T]$ spent in $S$ as a function of $T$ for $X_t$ (black) and $\hat X_t$ (blue).} \label{figtraj1} \end{figure*} \section{Occupation conditioning} \label{secOC} We explain in this section how the conditioned and the driven processes are constructed for a conditioning involving an occupation measure. This is a special case of the framework presented in \cite{chetrite2013,chetrite2014,chetrite2015} dealing with general, time-integrated random variables for the conditioning. \subsection{Model} We consider a pure diffusion process $X_t\in\mathbb{R}^m$ described by the following (It\^o) stochastic differential equation (SDE): \begin{equation} dX_t=F(X_t)dt+\sigma dW_t, \label{eqsde1} \end{equation} where $F:\mathbb{R}^m\rightarrow\mathbb{R}^m$ is the drift, $W_t$ is an $n$-dimensional Brownian motion, and $\sigma$ is the $m\times n$ noise matrix, assumed for simplicity to be constant in space and non-singular (invertible) \footnote{See \cite{chetrite2014} for the case of multiplicative noise involving a noise matrix $\sigma(x)$ depending on $X_t$.}. 
The probability density $p(x,t)$ of this process evolves according to the Fokker-Planck equation \begin{equation} \partial_t p(x,t)=L^\dag p(x,t), \end{equation} expressed here in terms of the Fokker-Planck operator, \begin{equation} L^\dag =-\nabla \cdot F +\frac{1}{2}\nabla\cdot D\nabla \end{equation} with $D=\sigma\sigma^T$ the diffusion matrix. In what follows, we also need the adjoint of the Fokker-Planck operator, \begin{equation} L=F\cdot \nabla+\frac{1}{2}\nabla\cdot D\nabla, \label{eqgen1} \end{equation} which generates the evolution of expectations of $X_t$ \cite{risken1996}. Given the evolution of $X_t$, we now consider a subset $S\subset\mathbb{R}^m$ and look at the fraction of time that $X_t$ spends in $S$ in the time interval $[0,T]$ using \begin{equation} R_{T}=\frac{1}{T}\int_0^T 1\!\! 1_S(X_t)\, dt \label{eqempmeas1} \end{equation} where $1\!\! 1_S(x)$ denotes the indicator function equal to $1$ if $x\in S$ and $0$ otherwise. This random variable, which explicitly depends on both $S$ and $T$, is called the \emph{occupation measure} of $S$ or the (normalized) \emph{local time} when $S$ is a single point. Assuming that $X_t$ has a unique stationary distribution $p^*$ satisfying $L^\dag p^*=0$, we have by the ergodic theorem that \begin{equation} \lim_{T\rightarrow\infty} R_T =E_{p^*}[1\!\! 1_S]=\int_S p^*(x)\, dx, \end{equation} so that $R_T$ converges in probability to the mean occupation in $S$ given by $p^*(S)$. For finite integration times, $R_T$ fluctuates around this concentration point according to its probability density $P(R_T=r)$, which can be expressed for large times as \begin{equation} P(R_{T}=r) = e^{-T I(r)+o(T)} \label{eqldp1} \end{equation} or, equivalently, \begin{equation} \lim_{T\rightarrow\infty} -\frac{1}{T}\ln P(R_T=r)=I(r). \end{equation} This scaling of the distribution is known as a \emph{large deviation principle} (LDP) \cite{dembo1998,hollander2000,touchette2009}.
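The occupation measure $R_T$ is easy to estimate numerically. The following Euler--Maruyama sketch simulates the SDE and accumulates the fraction of time spent in $S$; the step size, the Ornstein--Uhlenbeck drift $F(x)=-x$, and the set $S=[0,1]$ are illustrative choices, not taken from the text:

```python
import random

def occupation_fraction(drift, in_S, T=100.0, dt=0.01,
                        x0=0.0, sigma=1.0, seed=0):
    """Simulate dX_t = F(X_t) dt + sigma dW_t with Euler-Maruyama and
    return R_T = (1/T) int_0^T 1_S(X_t) dt over [0, T]."""
    rng = random.Random(seed)
    steps = int(T / dt)
    x, time_in_S = x0, 0
    for _ in range(steps):
        time_in_S += in_S(x)
        x += drift(x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return time_in_S / steps

# Ornstein-Uhlenbeck drift F(x) = -x, occupation of S = [0, 1].
r = occupation_fraction(lambda x: -x, lambda x: int(0.0 <= x <= 1.0))
```

Averaging such simulations over many seeds gives the distribution $P(R_T = r)$, whose exponential decay in $T$ away from the typical value $r^*$ is the LDP above.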
The rate of decay $I(r)$ is called the \emph{rate function} and can be obtained from the \emph{contraction principle} of large deviation theory by the following minimization: \begin{equation} I(r)=\min_{\rho:C(\rho)=r} J(\rho), \label{eqcps1} \end{equation} which involves the Donsker-Varadhan or level-2 rate function, \begin{equation} J(\rho)=-\min_{h>0} \int \rho(x) (h^{-1} Lh)(x)\, dx, \label{eqdv1} \end{equation} and the contraction linking $\rho$ to $R_T$: \begin{equation} C(\rho)=\int_{\mathbb{R}^m} \rho(x) 1\!\! 1_S(x)\, dx=\int_S \rho(x) \, dx. \end{equation} This result is derived in Appendix~\ref{appLDP1}. The LDP (\ref{eqldp1}) shows that $X_t$ is exponentially unlikely for long times $T$ to enter the region $S$ for a fraction $R_T$ of time, except when $R_T$ is the stationary fraction $r^*$ of time spent in $S$. The ergodic theorem indeed states that $P(R_T=r^*)\rightarrow 1$ as $T\rightarrow\infty$, which implies $I(r^*)=0$, corresponding to the \emph{typical} occupation of $X_t$. Any other fraction $R_T\neq r^*$ represents an \emph{atypical} occupation of $X_t$ in $S$ characterized by $I(r)>0$ and so $P(R_T=r)\rightarrow 0$ as $T\rightarrow\infty$. For more information about large deviations and applications of occupation times, see \cite{majumdar2002b,majumdar2002c,majumdar2005,sabhapandit2006}. \subsection{Conditioned and driven processes} We now consider a fixed occupation $R_T=r$ of $X_t$ in $S$ and derive the effective driven process $\hat X_t$ that describes $X_t$ conditioned on (or restricted to) this occupation. The construction of $\hat X_t$ is explained in \cite{chetrite2013,chetrite2014,chetrite2015} and requires that we find the dominant eigenvalue $\lambda(k)$ and corresponding eigenfunction $r_k$ of the \emph{tilted generator}, defined by \begin{equation} \mathcal{L}_k=L+k1\!\! 1_S, \end{equation} where $k\in\mathbb{R}$ and $L$ is the generator (\ref{eqgen1}) of $X_t$.
With these elements, the driven process is defined as the Markov process with modified generator \begin{equation} L_k= r_k^{-1} \mathcal{L}_k r_k -r_k^{-1}(\mathcal{L}_kr_k) \label{eqdoob1} \end{equation} acting on functions $f$ according to \begin{eqnarray} (L_kf)(x) &=&\frac{1}{r_k(x)} (\mathcal{L}_k r_k f)(x)-\frac{1}{r_k(x)} (\mathcal{L}_k r_k)(x) f(x)\nonumber\\ &=& \frac{1}{r_k(x)} (\mathcal{L}_k r_k f)(x)-\lambda(k) f(x). \end{eqnarray} As shown in \cite{chetrite2014}, the effect of this transform on the SDE (\ref{eqsde1}) is to change the drift $F$ to the modified or \emph{driven} drift \begin{equation} \label{eq:effective_f} F_k(x)=F(x)+D\nabla\ln r_k(x) \end{equation} while keeping the diffusion matrix $D$ constant. The evolution of the driven process $\hat X_t$ is thus given by the modified SDE \begin{equation} d\hat X_t=F_k(\hat X_t)dt+\sigma dW_t \end{equation} perturbed by the same Gaussian noise as $X_t$ but involving the new driven drift $F_k$. The connection between the driven process and the conditioning of $X_t$ on $R_T=r$ is illustrated again in Fig.~\ref{figtraj1} and is fully explained in \cite{chetrite2014}. The idea briefly is that the driven process $\hat X_t$ and the conditioned process $X_t|R_T=r$ have the same stationary properties, in addition to having similar probabilities for their trajectories as $T\rightarrow\infty$, if the rate function $I(r)$ of $R_T$ is convex and $k$ is chosen so that \begin{equation} k=I'(r). \label{eqtemp1} \end{equation} In this sense, we then say that $\hat X_t$ is \emph{equivalent} to $X_t|R_T=r$ or \emph{realizes} that conditioned process in the long-time limit. 
This equivalence is similar to the equivalence of the microcanonical and canonical ensembles in equilibrium statistical mechanics \cite{chetrite2014}: the conditioned process $X_t|R_T=r$ is essentially a process generalization of the microcanonical ensemble in which the `energy' $R_T$ is constant and equal to $r$, whereas the driven process is a generalization of the canonical ensemble in which $R_T$ fluctuates but concentrates in the `thermodynamic limit' $T\rightarrow\infty$ to $r$, the constant of the microcanonical ensemble. This is achieved by matching the `temperature' $k$ to the constraint $R_T=r$ according to (\ref{eqtemp1}), which is an analog of the thermodynamic temperature-energy relation. Another way to understand the driven process is as an optimal change of measure or process \cite{chetrite2015}. Recall that the event $R_T=r$ is a rare fluctuation in the original process $X_t$ having an exponentially small probability for long times $T$. The driven process, by contrast, is such that $R_T=r$ happens with certainty as $T\rightarrow\infty$, so that the transformation (\ref{eqdoob1}) modifies the process $X_t$ to make a rare occupation typical. In general, many transformed processes can be used to achieve this reweighting of rare events. The driven process is special in that it is the process \emph{closest} to $X_t$, with respect to a metric defined by the relative entropy, that makes the occupation $R_T=r$ typical; see \cite{chetrite2015} for more details. \subsection{Spectral problem and effective potential} The difficulty of constructing the driven process comes from solving the spectral problem \begin{equation} \mathcal{L}_kr_k=\lambda(k) r_k \label{eq:spectral:right} \end{equation} for the dominant eigenvalue and its corresponding eigenfunction. Depending on the form of the generator $L$ considered and, more precisely, its self-adjointness, three cases arise: \begin{description} \item[Case 1] $L=L^\dag$.
This is the simplest case determining a reversible process with respect to the Lebesgue (uniform) measure. In this case, the techniques of quantum mechanics apply: the spectrum of $L$ or $\mathcal{L}_k$ is real and the eigenfunction $r_k$ must be found by solving (\ref{eq:spectral:right}) with vanishing boundary condition for $r_k^2(x)$ as $|x|\rightarrow\infty$. \item[Case 2] $L\neq L^\dag$ but the spectrum of $L$ is real. This arises, for example, when $X_t$ is a reversible or equilibrium diffusion having a gradient drift \begin{equation} F=-\frac{D}{2}\nabla U \label{eq:force-potential} \end{equation} and, therefore, a Gibbs stationary distribution \begin{equation} p^*(x)=e^{-U(x)}. \end{equation} In this case, it is known that $L$ is self-adjoint with respect to an inner product defined with $p^*$ and that this can be used to `symmetrize' $L$ into a self-adjoint operator $H$, playing the role of a quantum Hamiltonian \cite{majumdar2002}. This symmetrization is simply defined as \begin{equation} H=e^{-U/2}Le^{U/2} \end{equation} and leads, when applied to $\mathcal{L}_k$, to the \emph{tilted Hamiltonian} \begin{equation} \mathcal{H}_k =e^{-U/2} \mathcal{L}_k e^{U/2} =\frac{D}{2} \left[ \Delta + \frac{\Delta U}{2} - \left(\frac{\nabla U }{2}\right)^2 \right] + k 1\!\! 1_S. \label{eq:hamiltonian} \end{equation} This operator has of course the same real spectrum as $\mathcal{L}_k$, so that \begin{equation} \mathcal{H}_k\psi_k=\lambda(k)\psi_k, \end{equation} but its dominant eigenfunction $\psi_k$, obtained with the natural vanishing boundary condition $\psi_k(x)^2=0$ at infinity, is related to $r_k$ by $r_k=e^{U/2}\psi_k$. \item[Case 3] $L\neq L^\dag$ and the spectrum is complex. This happens when $F$ is not gradient, $\sigma$ depends on $X_t$, or external reservoirs are included in this process as boundary conditions. 
In this case, $X_t$ represents a genuine nonequilibrium process supporting non-vanishing probability currents, for which $L$ or $\mathcal{L}_k$ cannot be symmetrized. Moreover, the spectral problem (\ref{eq:spectral:right}) on its own is incomplete: it must be solved in tandem with the dual problem \begin{equation} \mathcal{L}_k^\dag l_k=\lambda(k) l_k \label{eq:spectral:left} \end{equation} and the boundary condition $l_k(x)r_k(x)=0$ at infinity \footnote{This arises from the definition of the adjoint with the inner product based on the Lebesgue measure.}. This is more difficult to solve in general than the case of self-adjoint operators (Cases~1 and~2). \end{description} We focus in the rest of the paper mostly on Case 2, which is equivalent to a quantum ground state problem with effective Schr\"odinger Hamiltonian $\mathcal{H}_k$. Assuming that the drift is conservative, as in (\ref{eq:force-potential}), we can express the driven drift (\ref{eq:effective_f}) in this case also in gradient form, \begin{equation} F_k=-\frac{D}{2}\nabla U_k, \label{eq:eff-potential} \end{equation} by introducing the \emph{effective} or \emph{driven potential} \begin{equation} U_k(x)=U(x)-2\ln r_k(x) =-2\ln \psi_k(x), \label{eq:eff-potential-eigen} \end{equation} which realizes the occupation conditioning. Non-reversible diffusions falling in Case 3 cannot be represented by such an effective potential, even though the modified drift $F_k$, as given by (\ref{eq:effective_f}), is always a gradient perturbation of the original drift $F$ when $D$ is constant. This property of $F_k$ comes from the time-additive form of $R_T$. For other conditionings, based for example on currents or the entropy production, the perturbation $F_k-F$ can have a non-conservative and, therefore, nonequilibrium component; see Sec.~5.5 of \cite{chetrite2014} for more detail.
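The algebra behind the generator transform (\ref{eqdoob1}) and the driven drift (\ref{eq:effective_f}) can be verified symbolically in one dimension. The sketch below is our own check, not taken from the paper; the symbol \texttt{chi} stands for the indicator function $1\!\! 1_S$, and $r$, $f$ are arbitrary smooth functions with $r>0$.

```python
import sympy as sp

# Check of Eq. (eqdoob1): (1/r) Lk(r f) - (1/r)(Lk r) f equals the generator
# of a diffusion with driven drift F_k = F + D (ln r)' and unchanged D.
x = sp.symbols('x', real=True)
D, k = sp.symbols('D k', positive=True)
F = sp.Function('F')(x)
r = sp.Function('r', positive=True)(x)
f = sp.Function('f')(x)
chi = sp.Function('chi')(x)            # plays the role of the indicator 1_S

L = lambda g: F*sp.diff(g, x) + D/2*sp.diff(g, x, 2)   # generator (eqgen1), 1d
Lk = lambda g: L(g) + k*chi*g                           # tilted generator

doob = Lk(r*f)/r - (Lk(r)/r)*f                          # Eq. (eqdoob1)
F_k = F + D*sp.diff(sp.log(r), x)                       # driven drift
driven = F_k*sp.diff(f, x) + D/2*sp.diff(f, x, 2)

delta = sp.expand(doob - driven)
print(delta)   # 0: the k*chi terms cancel and only the drift is shifted
```

The cancellation of the tilt term $k\,\chi$ makes explicit why the driven process is again a diffusion with the same noise: the transform only adds the gradient term $D\,(\ln r_k)'$ to the drift.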
\section{Application} \label{secOU} We illustrate in this section our results for an exactly-solvable model based on the linear Langevin equation or one-dimensional Ornstein-Uhlenbeck process defined by \begin{equation} dX_t=-\gamma X_tdt+\sigma dW_t, \label{eq:ou1} \end{equation} where $X_t\in\mathbb{R}$, $W_t\in\mathbb{R}$, with $\gamma$ and $\sigma$ positive constants. This process obviously falls in Case 2 of the previous section, as do all processes defined on $\mathbb{R}$ without sinks or sources. The linear drift $F(x)=-\gamma x$ is associated with the parabolic potential \begin{equation} U(x)=\frac{\alpha x^2}{2}, \end{equation} where $\alpha =2\gamma/ D$ and $D=\sigma^2$. The quantum problem that we need to solve therefore is \begin{equation} \left[ \frac{d^2}{dx^2} + \frac{\alpha }{2} - \frac{\alpha ^2}{4} x^2 + \frac{2k}{D} 1\!\! 1_S(x) \right] \psi(x) = \frac{2 \lambda}{D} \psi(x), \label{eq:eigen:chi} \end{equation} which we convert to \begin{equation} \Psi''(x) - \frac{x^2}{4} \Psi(x) +\left[ \frac{1}{2} + \frac{2}{D \alpha}\Big( k 1\!\! 1_{{\sqrt \alpha} S}(x) - \lambda \Big)\right] \Psi(x) =0 \label{eq:eigen:chi:adim} \end{equation} with the rescaling $\Psi (x) = \psi(x/{\sqrt \alpha})$. The same quantum problem can be obtained using path integral methods as applied to Brownian functionals and the Feynman-Kac equation; see \cite{majumdar2002,majumdar2005}. Equation~(\ref{eq:eigen:chi:adim}) is essentially the Weber equation with piecewise-constant coefficients, representing a quantum harmonic oscillator with piecewise-shifted potential. It can be solved exactly in and out of the conditioning interval $S$ and then pieced together by requiring continuity at the boundaries of $S$. This is done next for three occupations, namely, $S=[a,b]$ (finite interval), $S=[a,\infty)$ (half-line), and $S=\{a\}$ (point conditioning). 
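Before turning to the exact solution, the spectral problem (\ref{eq:eigen:chi}) can be checked by finite differences. The sketch below is our own illustration (the domain size, grid, and the value $k=2$ are arbitrary choices): it diagonalizes the discretized tilted Hamiltonian for $S=[0,1]$, recovers $\lambda(0)=0$, and evaluates the effective potential $U_k=-2\ln\psi_k$ up to an additive normalization constant.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference diagonalization of the tilted Hamiltonian (eq:hamiltonian)
# H_k = (D/2)[d^2/dx^2 + alpha/2 - alpha^2 x^2/4] + k 1_S(x), S = [0,1]
alpha, D = 1.0, 2.0
L_box, n = 12.0, 4001                  # domain [-L,L] and grid (arbitrary)
x = np.linspace(-L_box, L_box, n)
h = x[1] - x[0]

def top_eigenpair(k):
    diag = D/2*(-2/h**2 + alpha/2 - alpha**2*x**2/4) + k*((x >= 0) & (x <= 1))
    off = np.full(n - 1, D/2/h**2)
    vals, vecs = eigh_tridiagonal(diag, off, select='i',
                                  select_range=(n - 1, n - 1))  # top eigenvalue
    return vals[0], vecs[:, 0]

lam0, psi0 = top_eigenpair(0.0)        # dominant eigenvalue at k=0 is 0
lam2, psi2 = top_eigenpair(2.0)
U_k = -2*np.log(np.abs(psi2) + 1e-300) # effective potential, up to a constant
print(lam0, lam2)
```

This brute-force route works for any occupation set and provides a useful cross-check on the exact Weber-function construction developed next.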
\subsection{Finite interval} For $S=[a,b]$, we must solve (\ref{eq:eigen:chi:adim}) on the three regions $(-\infty , a)$, $[a,b]$, and $(b, \infty)$, and piece the three solutions, as mentioned, continuously at $x=a$ and $x=b$. Over each region, the Weber equation has the form \begin{equation} \Psi''(x)- \left( \frac{x^2}{4} + \nu (x) \right)\Psi(x)=0, \label{eq:eigen:chi:ext} \end{equation} where \begin{equation} \nu (x)= \frac{2}{\alpha D } \left(\lambda - \frac{\alpha D} {4} - k 1\!\! 1 _{{\sqrt \alpha} S}(x) \right). \label{eq:nu1} \end{equation} This function takes only two values, denoted from now on by $\nu =\nu (S)$ and $\nu'=\nu(\mathbb{R}\backslash S)$. The solution space of the Weber equation is spanned by \begin{equation} \begin{aligned} s_1(\nu,x) &= e^{-\frac{x^2}{4} } \, \hM{\frac{\nu }{2} + \frac 1 4}{\frac 1 2} {\frac{x^2}{2} } \\ s_2(\nu,x) &= x e^{-\frac{x^2}{4} } \, \hM{\frac{\nu }{2} + \frac 3 4}{\frac 3 2} {\frac{x^2}{2} }, \end{aligned} \end{equation} where $\hM{a}{b}{x}$ is the confluent hypergeometric function of the first kind. From these two particular solutions, it is possible to construct a solution outside $[a,b]$ that decays to 0 at infinity: \begin{widetext} \begin{equation} W( \nu , x ) = \frac 1 { 2^{\ea + \frac{1}{4}} \sqrt {\pi } } \left[ \cos \p{\textstyle\frac{\nu}{2} \pi + \frac{\pi } 4 } \Gamma \p{\textstyle\frac 1 4 - \frac{\nu}{2}} s_1(\nu,x) - \sqrt {2} \sin\p{\textstyle\frac{\nu}{2} \pi + \frac{\pi } 4 } \Gamma \p{\textstyle\frac 3 4 - \frac{\nu}{2}} s_2(\nu,x) \right]. \end{equation} Combining these solutions, we then construct the complete eigenfunction as \begin{equation} \Psi (x) = \begin{cases} K_1 W( \nu' , -x) & x < a \\ K_2 s_1( \nu , x) + K_3 s_2( \nu , x) & a<x<b \\ K_4 W( \nu' , x ) & b < x, \\ \end{cases} \label{eq:fullsol1} \end{equation} where the $K_i$'s are constants to be adjusted by imposing continuity. 
To this end, we define the vector $K=(K_1,K_2,K_3,K_4)^T$ and the matrix \begin{equation} C(\lambda,k)= \begin{pmatrix} - W( \nu ' , - a) & s_1( \nu , a ) & s_2( \nu , a ) & 0 \\ \Diff{W}( \nu ', - a) & \Diff{s_1}( \nu , a ) & \Diff{s_2}( \nu , a ) & 0 \\ 0 & s_1( \nu , b ) & s_2( \nu , b ) & - W( \nu ', b ) \\ 0 & \Diff{s_1}( \nu , b ) & \Diff{s_2}( \nu , b ) & - \Diff{W}( \nu ', b ) \\ \end{pmatrix}, \end{equation} \end{widetext} where $\partial_{x}$ denotes the first derivative with respect to the second coordinate (denoted $x$ above). The continuity of $\Psi(x)$ and of its derivative at $a$ and $b$ is equivalent to the following linear equation: \begin{equation} C(\lambda ,k) \, K = 0. \end{equation} Non-trivial solutions therefore exist if, and only if, \begin{equation} \det C(\lambda , k) =0. \label{eq:eigen:trans} \end{equation} This defines a transcendental equation in $\lambda$ involving hypergeometric functions, which can easily be solved numerically to obtain $\lambda(k)$ to arbitrary precision. To find the associated rate function $I(r)$, we then use the fact that $\lambda(k)$ and $I(r)$ are related by a Legendre transform when the former is differentiable \cite{dembo1998,hollander2000,touchette2009}: \begin{equation} I(r)=\sup_{k\in\mathbb{R}}\{kr -\lambda(k)\}. \end{equation} In parametric form, we therefore have \begin{equation} I( \lambda '(k) ) = k \lambda '(k) - \lambda (k),\qquad k\in\mathbb{R}. \label{eq:I:parametric} \end{equation} \begin{figure*}[t] \centering \includegraphics[width=2.9in]{asymscgf1f}% \hspace*{0.5in}% \includegraphics[width=3in]{asymratefct1f} \caption{SCGF (left) and rate function (right) for $S=[0,1]$. Parameters: $\alpha=1$, $D=2$.} \label{fig:rate} \end{figure*} Figure~\ref{fig:rate} shows the result of these expressions for $S=[0,1]$.
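The transcendental equation (\ref{eq:eigen:trans}) is straightforward to implement. The following sketch is our own code (not from the paper) for $S=[0,1]$, $\alpha=1$, $D=2$; it relies on the identification of the decaying solution $W(\nu,x)$ with the parabolic cylinder function $D_{-\nu-1/2}(x)$, available in SciPy as \texttt{pbdv}.

```python
import numpy as np
from scipy.special import hyp1f1, pbdv
from scipy.optimize import brentq

# Exact lambda(k) for S=[a,b]=[0,1] from det C(lambda,k) = 0
alpha, D, a, b = 1.0, 2.0, 0.0, 1.0

def nu_of(lam, k):                      # Eq. (eq:nu1)
    return 2.0/(alpha*D)*(lam - alpha*D/4.0 - k)

def s1(nu, x):                          # s_1 and its x-derivative
    A, z = nu/2 + 0.25, x*x/2
    m, dm = hyp1f1(A, 0.5, z), (A/0.5)*hyp1f1(A + 1, 1.5, z)
    e = np.exp(-x*x/4)
    return e*m, e*(-x/2*m + x*dm)

def s2(nu, x):                          # s_2 and its x-derivative
    A, z = nu/2 + 0.75, x*x/2
    m, dm = hyp1f1(A, 1.5, z), (A/1.5)*hyp1f1(A + 1, 2.5, z)
    e = np.exp(-x*x/4)
    return x*e*m, e*((1 - x*x/2)*m + x*x*dm)

def detC(lam, k):
    nu, nup = nu_of(lam, k), nu_of(lam, 0.0)    # inside / outside S
    Wa, dWa = pbdv(-nup - 0.5, -a)              # W(nu', -a) and derivative
    Wb, dWb = pbdv(-nup - 0.5, b)
    s1a, ds1a = s1(nu, a); s2a, ds2a = s2(nu, a)
    s1b, ds1b = s1(nu, b); s2b, ds2b = s2(nu, b)
    return np.linalg.det(np.array(
        [[-Wa,  s1a,  s2a,  0.0],
         [dWa,  ds1a, ds2a, 0.0],
         [0.0,  s1b,  s2b,  -Wb],
         [0.0,  ds1b, ds2b, -dWb]]))

lam0 = brentq(lambda l: detC(l, 0.0), -0.3, 0.3)       # lambda(0) = 0
slope = (brentq(lambda l: detC(l, 0.01), -0.3, 0.3)
         - brentq(lambda l: detC(l, -0.01), -0.3, 0.3))/0.02
print(lam0, slope)
```

The computed slope $\lambda'(0)$ recovers the stationary occupation $r^*=\Phi(1)-\Phi(0)\approx 0.34$, as it must since $I(r^*)=0$; repeating the root search over a grid of $k$ and using (\ref{eq:I:parametric}) yields the curves of Fig.~\ref{fig:rate}.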
As can be seen, the tails of $\lambda(k)$ are asymptotically linear with slopes $0$ and $1$ as $|k|\rightarrow\infty$, reflecting the fact that $I(r)$ is defined for $r\in [0,1]$ and is steep at $r=0$ and $r=1$. This is important for what follows as it means, following (\ref{eqtemp1}), that the effective potential associated with no occupation in $[a,b]$ is obtained by taking the limit $k\rightarrow -\infty$, whereas full occupation in $[a,b]$ is obtained with $k\rightarrow\infty$. In between, $k$ is related to the occupation fraction $r$ via (\ref{eqtemp1}) or equivalently $\lambda'(k)=r$. To find the effective potential $U_k$, we compute the kernel of the matrix $C(\lambda,k)$ to obtain $\Psi$ via (\ref{eq:fullsol1}), and then rescale $\Psi$ back to $\psi$. Figure~\ref{fig:asym:effective_potential} shows the result of these calculations for $S=[0,1]$ and different values of $k$. For $k>0$, we see that $U_k(x)$ becomes steeper around $[0,1]$ compared to the `natural' potential $U(x)$ obtained for $k=0$. This confines the process inside $[0,1]$, and so naturally increases the time spent in this interval. In the limit $k\rightarrow\infty$, the process is completely confined inside that interval by an infinitely-steep potential $U_\infty(x)$ shown in Fig.~\ref{fig:asym:limit_potential}. In this case, it is easy to see by analogy with the confined harmonic oscillator \cite{dean1966,consortini1976} that $U_{\infty}(x)$ must diverge logarithmically near $x=0$ and $x=1$, since $\psi_k(x)$ vanishes at these points. This yields a diffusive version of the so-called $Q$-process, arising in the context of quasi-stationary distributions \cite{darroch1965,darroch1967,villemonais2012,collet2014}, which corresponds here to the Ornstein-Uhlenbeck process conditioned on not leaving $[0,1]$. The effective potential is more interesting for $k<0$.
In this case, a non-trivial barrier develops inside $[0,1]$ so as to `deconfine' the process from $[0,1]$, leading to a reduced occupation in that interval. As $k\rightarrow-\infty$, $U_k(x)$ becomes steep near $x=0^-$, as shown in Fig.~\ref{fig:asym:limit_potential}, preventing the process from reaching $[0,1]$ when started from negative initial conditions. It also becomes steep near $x=1^+$ while being raised, as shown in the right plot of Fig.~\ref{fig:asym:effective_potential}. However, because the height of the potential obtained for $x>1$ does not play any role when it becomes disconnected from the one obtained for $x<0$ \footnote{Only the gradient of the potential has a physical meaning.}, we can shift the former down to zero, yielding the limiting potential shown in Fig.~\ref{fig:asym:limit_potential}. This leads effectively to a breaking of ergodicity for the process conditioned on not entering $[0,1]$: the process started in the region $x<0$ stays in that region and cannot visit the region $x>1$ because of the infinite barrier at $x=0$. Conversely, when $X_t$ is started in the region $x>1$, it stays in that region and cannot cross to $x<0$. For initial conditions in $[0,1]$, the process is not defined, at least not in the formal limit $k=-\infty$. For any finite $k$, however, the driven process is ergodic. \begin{figure*}[t] \centering \includegraphics[width=2.9in]{asympotpos1f.pdf}% \hspace*{0.5in}% \includegraphics[width=2.9in]{asympotneg1f.pdf} \caption{(Color online) Effective potential $U_k(x)$ for $S=[0,1]$. Left: $k =0:2:10$ (from bottom to top curves) using the notation $k=k_{\min} : dk : k_{\max}$. Right: $k=0:-2:-10$ (from bottom to top curves). Parameters: $\alpha=1$, $D=2$.} \label{fig:asym:effective_potential} \end{figure*} \begin{figure}[t] \includegraphics[width=2.9in]{asymlimit1f.pdf} \caption{(Color online) Black: Effective potential $U_{-\infty}(x)$ preventing any occupation in $S=[0,1]$.
Blue: Effective potential $U_\infty(x)$ forcing a total occupation in $S=[0,1]$. Parameters: $\alpha=1$, $D=2$.} \label{fig:asym:limit_potential} \end{figure} \subsection{Half line} We now consider $S=[a,\infty)$ as the occupation set to show how our results can be used to study variants of the Brownian meander process corresponding to Brownian motion conditioned on staying positive. The Weber solution in this case has two branches: \begin{equation} \Psi (x) = \begin{cases} K_1 W( \nu' ,-x) & x < a \\ K_2 W(\nu , x) & x > a \\ \end{cases} \end{equation} linked continuously at $x=a$ by solving (\ref{eq:eigen:trans}) using the matrix \begin{equation} C(\lambda ,k) = \begin{pmatrix} - W( \nu ', - a) & W( \nu , a) \\ \Diff{W}( \nu ', -a) & \Diff{W}( \nu , a) \end{pmatrix}. \end{equation} Figure~\ref{fig:half:lambda} shows the results of the numerical calculation of $\lambda (k)$ and $I(r)$ from this matrix for $a=1$, which are overall qualitatively similar to those of Fig.~\ref{fig:rate} because of the restriction $r\in [0,1]$. In Fig.~\ref{fig:half:effective_potential} we show the effective potential $U_k(x)$ for positive and negative values of $k$ related to a confinement in the region $x>1$ and $x<1$, respectively. The shape of $U_k(x)$ is also qualitatively similar to the previous results obtained for $[0,1]$, except that it develops only one steep barrier instead of two. As before, the divergence of $U_k(x)$ near $x=1$ appearing in the limit $k\rightarrow\pm\infty$ is logarithmic, since the wavefunction $\psi_k(x)$ of the quantum harmonic oscillator with an infinite wall at $x=1$, the equivalent quantum problem, vanishes at $x=1$ \cite{dean1966,consortini1976}. \begin{figure*}[t] \centering \includegraphics[width=2.9in]{halfscgf1f.pdf}% \hspace*{0.5in} \includegraphics[width=3in]{halfratefct1f.pdf} \caption{SCGF (left) and rate function (right) for $S=[1,\infty)$.
Parameters: $\alpha=1$, $D=2$.} \label{fig:half:lambda} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=2.9in]{halfpotpos1f.pdf}% \hspace*{0.5in}% \includegraphics[width=2.9in]{halfpotneg1f.pdf} \caption{(Color online) Effective potential $U_k(x)$ for $S=[1,\infty)$. Left: $k =0:2:10$ (from bottom to top colored curves) leading to more confinement in $S$. The black curve is the asymptotic effective potential $U_{\infty}(x)$ that confines the occupation of $X_t$ in $S$. Right: $k=0:-2:-10$ (from bottom to top colored curves) leading to less confinement. The black curve is the asymptotic effective potential $U_{-\infty}(x)$ preventing occupation in $S$. Parameters: $\alpha=1$, $D=2$.} \label{fig:half:effective_potential} \end{figure*} Another interesting feature to observe in Fig.~\ref{fig:half:effective_potential} is that the `natural' potential $U=U_0$ is not modified much by the conditioning on the side of occupation; that is to say, more occupation for $x>1$ (respectively, $x<1$) does not change the right (respectively, left) branch of $U$ significantly. This can be understood from the quantum perspective by noting that the introduction of a wall or well in the parabolic potential does not affect the tails of $\psi_k$ far away from this wall or well. The same phenomenon can also be explained using recent results \cite{chetrite2015} showing that the modified force $F_k$ minimizes a cost function involving a weighted integral of $(F-F_k)^2$ and the occupation $R_T=r$. As a result, the drift $F$ is modified only minimally whenever it contributes `naturally' to the occupation targeted. This cost, importantly, is a function of the drift and not the potential, so that large differences between $U_k$ and $U$, such as those seen in the right plot of Fig.~\ref{fig:asym:effective_potential}, do not necessarily translate into large differences between $F_k$ and $F$ and, therefore, large costs. 
Considering $a=0$ instead of $a=1$ does not change these results much. The only difference is that the rate function shown in Fig.~\ref{fig:half:lambda} is symmetric about $r=0.5$, which leads to an effective potential $U_k(x)$ for $k<0$ that is the mirror image of $U_k(x)$ for $k>0$, that is, $U_k(x)=U_{-k}(-x)$. A logarithmic singularity near $x=0$ also appears for $a=0$ in the limits $k\rightarrow -\infty$ and $k\rightarrow\infty$, which restrict the occupation to the negative and positive regions, respectively. The positive case is interesting as it is related to the so-called arc sine law \cite{majumdar2005,rouault2002,kac1951} and leads to a generalization of the Brownian meander. Indeed, solving the Weber equation for $k\rightarrow\infty$, which is equivalent to the quantum harmonic oscillator with a wall \cite{dean1966,consortini1976}, we find that the \emph{asymptotic Ornstein-Uhlenbeck meander}, defined as the SDE (\ref{eq:ou1}) conditioned on staying positive \emph{at all times}, is a nonlinear diffusion with potential $U_{\text{OUm}}=U_\infty$ having the following tails: \begin{equation} U_{\text{OUm}}(x) \sim \begin{cases} -c \ln x & x\rightarrow 0^+\\ \beta x^2/2 & x\rightarrow\infty, \end{cases} \end{equation} where $c$ and $\beta$ are constants that can be determined numerically from $C(\lambda,k)$. The drift of this meander is thus given asymptotically by \begin{equation} F_{\text{OUm}}(x) \sim \begin{cases} c/x & x\rightarrow 0^+\\ -\beta x & x\rightarrow\infty. \end{cases} \end{equation} For pure Brownian motion ($\gamma=\alpha=0$), we find $\beta=0$, which is consistent with the exact drift of the Brownian meander; see Eq.~(21) of \cite{majumdar2015}. \subsection{Point occupation} \begin{figure*}[t] \includegraphics[width=2.9in]{diracpotpos1f.pdf}% \hspace*{0.5in}% \includegraphics[width=2.9in]{diracpotneg1f.pdf} \caption{(Color online) Effective potential $U_k(x)$ for the point occupation at $x=0$.
Left: $k=0, 1.01, 2.02, 4.04$, and $6.06$ (from bottom to top curves) leading to more occupation at $x=0$. Right: $k=-1.01, -2.02, -4.04$, and $-10.1$ (from bottom to top curves at $x=0$) leading to less occupation at $x=0$. Parameters: $\alpha=1$, $D=2$.} \label{fig:dirac:effective_potential} \end{figure*} The third and last application that we consider is the point occupation at $x=a$, obtained by replacing $1\!\! 1_S(x)$ by $\delta(x-a)$ in the definition of $R_T$ to obtain the local time at $a$. This case can also be considered as the limit $\epsilon\rightarrow 0$ of $S=[a-\epsilon/2,a+\epsilon/2]$, with $1\!\! 1_S$ replaced by $1\!\! 1_S(x)/\epsilon$, and leads to the following Weber solution (here $\nu'=\nu$, since $S$ has zero Lebesgue measure): \begin{equation} \Psi (x) = \begin{cases} K_1 W( \nu , -x) & x < a \\ K_2 W( \nu , x) & x > a \\ \end{cases} \end{equation} with continuity conditions \begin{equation} \begin{aligned} K_1 W(\nu ,-a) - K_2 W(\nu , a) &=0 \\ K_1 \partial_xW(\nu , -a) + K_2 \partial_xW(\nu ,a) & = k \Psi (a). \end{aligned} \label{eq:dirac:cont} \end{equation} In the particular case $a=0$, these conditions reduce to the following relation: \begin{equation} k = -\frac{\cot \p{\pi \p{\frac {\nu } 2+\frac 1 4} } \Gamma \p{\frac 1 4-\frac {\nu } 2}}{\sqrt {2}\, \Gamma \p{ \frac 3 4-\frac {\nu } 2}}, \label{eq:dirac:zero} \end{equation} which can be used as an implicit equation to find $\lambda(k)$ via the expression (\ref{eq:nu1}) of $\nu$. The solution for $\psi_k$ that we find in this case is similar to the one found for the half line, except that the derivative of $\psi_k$ is now discontinuous because of the delta source at $x=a$, and jumps according to the second line in (\ref{eq:dirac:cont}). This introduces a kink at $x=a$ in the effective potential $U_k(x)$, illustrated in Fig.~\ref{fig:dirac:effective_potential} for $a=0$, which is reminiscent of the kink seen in the potential of Brownian motion with dry or solid friction \cite{gennes2005,touchette2010c,gnoli2013}.
This makes sense intuitively: for the process $X_t$ to have a larger local time at $x=a$, it has to `stick' more onto that point, similarly to what is observed with solid friction. Conditioning on having a smaller local time at $x=a$ forces, on the other hand, $X_t$ to `avoid' that point as if there were a `negative' solid friction force (see right plot of Fig.~\ref{fig:dirac:effective_potential}). In the limit $k\rightarrow\infty$, the potential $U_k$ becomes degenerate and concentrates the process on $x=a$, whereas for $k\rightarrow-\infty$, it develops an infinite barrier at $x=a$ with two logarithmic branches that prevents occupation of that point. The latter limit yields a $Q$-process version of the Ornstein-Uhlenbeck process conditioned on not reaching $x=a$, which also breaks ergodicity. \section{Perturbation theory} We complement the exact results of the previous section by developing a perturbation theory in the parameter $k$ for obtaining $\lambda(k)$, $I(r)$, and $U_k(x)$. In principle, this perturbation can be applied around any value $k$ for which the spectrum of $\mathcal{H}_k$ or $\mathcal{L}_k$ is known, even if $\mathcal{L}_k$ is not symmetrizable \footnote{In this case, one has to use perturbation theory for non-self-adjoint linear operators.}. For simplicity we consider reversible processes with effective (self-adjoint) Hamiltonian $\mathcal{H}_k$ and develop a perturbation in the form \begin{equation} \mathcal{H}_{k+\Delta k} = \mathcal{H}_k + \Delta k\, 1\!\! 1 _S. \end{equation} A natural starting point is $k=0$, since $\mathcal{H}_0=H$ is simply the Hamiltonian (obtained by symmetrization of $L$) of the quantum harmonic oscillator with shifted energy levels, so that $\lambda(0)=0$ and $\psi_0=e^{-U/2}=\sqrt{p^*}$. The application of standard perturbation theory for self-adjoint operators with non-degenerate spectrum gives directly \cite{kato1995}: \begin{equation} \partial _k \lambda _{n}(k) = \langle\Psi _{n}(k)|1\!\!
1_S |\Psi _{n}(k)\rangle \label{eq:pert:evalue} \end{equation} and \begin{equation} \partial _k \Psi _{n}(k) = \sum _{m \ne n} \frac{ \langle \Psi _{m}(k)|1\!\! 1_S |\Psi _{n}(k)\rangle }{\lambda _{n}(k) - \lambda _{m}(k)} \Psi _{m}(k).\label{eq:pert:evector} \end{equation} Here, we use the quantum bracket notation for the inner product, and now denote by $\lambda_n(k)$ and $\Psi_n(k)$ the $n$th eigenvalue of $\mathcal{H}_k$ and its corresponding eigenfunction, respectively. The matrix elements \begin{equation} N_{i,j}(k) = \langle \Psi _{i}(k)|1\!\! 1_S |\Psi _{j}(k)\rangle \end{equation} driving the `evolution' of $\lambda_n(k)$ and $\Psi_n(k)$ as a function of $k$ have a natural geometric interpretation: they represent an \emph{orthogonality defect} of the basis $\{\Psi_n(k)\}$ with respect to the modified inner product, \begin{equation} \langle \Psi_m(k) |1\!\! 1_S | \Psi_n(k) \rangle =\int_S \Psi_m^*(k,x) \Psi_n(k,x)dx, \end{equation} which mathematically defines a positive-semidefinite sesquilinear form. To complete these equations, we can calculate the evolution of the orthogonality defect matrix $N$ itself with the perturbation: \newcommand{\Sym}[1]{\underset{#1}{\mathrm{Sym}}} \begin{equation} \begin{aligned} \partial _k N_{i,j}(k) &= \langle \Psi _{i}(k)|1\!\! 1_S | \partial _k \Psi _{j}(k)\rangle + \langle \partial _k \Psi _{i}(k)|1\!\! 1_S | \Psi _{j}(k)\rangle \\ &= \sum _{m \ne i } \frac{ N_{i,m}(k) N_{m,j}(k) }{\lambda_i(k) - \lambda_m(k) } + \sum _{m \ne j } \frac{ N_{j,m}(k) N_{m,i}(k) }{\lambda_j(k) - \lambda_m(k)}, \end{aligned} \label{eq:N:ode} \end{equation} which simplifies on the diagonal to \begin{equation} \partial _k N_{i,i}(k) = \sum _{m \ne i} \frac{2 |N_{i,m}(k)|^2} {\lambda_i(k) - \lambda_m(k) }. \label{eq:ev:N:diag} \end{equation} It can be checked that these differential equations for $N(k)$ admit the set of diagonal matrices as fixed points.
Knowing that $k\rightarrow-\infty$ is equivalent to total deconfinement in $S$, we find however that $N(-\infty)=0$ is the only stable fixed point reached in that limit. Similarly, since $k\rightarrow\infty$ corresponds to total confinement in $S$, we must have $N(+\infty )= \mathbb I$. Equations (\ref{eq:pert:evalue}), (\ref{eq:pert:evector}), and (\ref{eq:N:ode}) define a set of coupled nonlinear differential equations that can be used to find the SCGF $\lambda(k)$, which corresponds to the dominant eigenvalue $\lambda_0(k)$, and its associated eigenfunction $\Psi$, which corresponds to $\Psi_0(k)$. From the dominant eigenfunction, we then find the driven potential $U_k$ as in the previous section. Moreover, using the 0th component of (\ref{eq:pert:evalue}), we can obtain the rate function $I(r)$ by rewriting the parametric expression (\ref{eq:I:parametric}) as \begin{equation} I\p{ N_{0,0} (k) } = k N_{0,0} (k) - \lambda _0(k). \label{eq:I:parametric:N} \end{equation} \begin{figure*}[t] \includegraphics[width=2.9in]{asympertscgf1f}% \hspace*{0.5in}% \includegraphics[width=2.9in]{asympertratefct1f} \caption{(Color online) Left: SCGF computed through perturbation for $S=[0,1]$ with $M=2,5,10,20$ modes (from bottom to top curves). The black curve shows the exact $\lambda(k)$. Right: Corresponding rate function. The black curve shows the exact $I(r)$. Parameters: $\alpha=1$, $D=2$.} \label{fig:rate:perturbative} \end{figure*} Figure~\ref{fig:rate:perturbative} shows the perturbation results for $\lambda(k)$ and $I(r)$ obtained by integrating Eqs.~(\ref{eq:pert:evalue}), (\ref{eq:pert:evector}), and (\ref{eq:N:ode}) starting from the known eigenvalues $\lambda_n(0)$ and eigenstates $\Psi_n(0)$ of the quantum harmonic oscillator. The results are for the unit interval occupation, $S=[0,1]$, and are also obtained by truncating $N(k)$ to a finite size $M$. 
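The truncated flow is simple to reproduce. The sketch below is our own implementation (with $M=6$ modes and the range $k\in[0,2]$ as arbitrary choices): it integrates Eqs.~(\ref{eq:pert:evalue}) and (\ref{eq:N:ode}) for $S=[0,1]$ with $\alpha=1$, $D=2$, starting from the harmonic-oscillator data $\lambda_n(0)=-n$ and $\Psi_n(0,x)=e^{-x^2/4}\mathrm{He}_n(x)/\sqrt{n!\sqrt{2\pi}}$, and checks the result against direct diagonalization in the same truncated basis.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import solve_ivp

M = 6                                   # truncation order (arbitrary choice)
lam_init = -np.arange(M, dtype=float)   # lambda_n(0) = -n for alpha=1, D=2

# N(0)_ij = <Psi_i(0)| 1_S |Psi_j(0)> by Gauss-Legendre quadrature on [0,1]
t, w = np.polynomial.legendre.leggauss(200)
xq, wq = 0.5*(t + 1.0), 0.5*w
norms = np.array([math.sqrt(math.factorial(n)*math.sqrt(2*math.pi))
                  for n in range(M)])
psi = np.array([np.exp(-xq**2/4)*hermeval(xq, np.eye(M)[n])/norms[n]
                for n in range(M)])
B = (psi * wq) @ psi.T

def flow(k, y):                         # Eqs. (pert:evalue) and (N:ode)
    lam, N = y[:M], y[M:].reshape(M, M)
    dlam = np.diag(N).copy()
    dN = np.zeros_like(N)
    for i in range(M):
        for j in range(M):
            for m in range(M):
                if m != i: dN[i, j] += N[i, m]*N[m, j]/(lam[i] - lam[m])
                if m != j: dN[i, j] += N[j, m]*N[m, i]/(lam[j] - lam[m])
    return np.concatenate([dlam, dN.ravel()])

kmax = 2.0
sol = solve_ivp(flow, (0.0, kmax), np.concatenate([lam_init, B.ravel()]),
                rtol=1e-9, atol=1e-11)
lam_flow = sol.y[0, -1]                 # SCGF lambda(kmax) from the ODE flow

# Within the same truncation, lambda(k) is the top eigenvalue of
# diag(lambda_n(0)) + k N(0), which the flow must reproduce
lam_direct = np.linalg.eigvalsh(np.diag(lam_init) + kmax*B).max()
print(lam_flow, lam_direct)
```

Because $\mathcal{H}_k$ is linear in $k$, the perturbation equations are exact ODEs for the eigenvalue flow of the truncated matrix, so the two numbers printed above agree to the integration tolerance; the residual difference from the exact $\lambda(k)$ is purely a truncation effect, as discussed next.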
As can be seen, the difference between the perturbative and exact results decreases with increasing $M$, as expected, and becomes negligible for $M=10$. The perturbation also converges quickly for $S=[1,\infty)$ (results not shown), but not for the point occupation case, as shown in Fig.~\ref{fig:dirac:rate:perturbative}. For the latter, the SCGF $\lambda (k)$ obtained by perturbation strongly differs from the exact result obtained from the methods of the previous section for $k$ beyond some positive value $k_c$, which is only slightly shifted by increasing $M$. This arises because the introduction of a Dirac \emph{well} in a potential (the quantum problem for $k>0$) strongly modifies the eigenfunctions and eigenvalues of that potential. By comparison, a Dirac \emph{wall} (the quantum problem for $k<0$) modifies these eigenfunctions essentially only in the way that they are joined at $x=a$, which leads to a small perturbation of the eigenvalues, including the dominant eigenvalue $\lambda(k)$, which converges to a constant as $k\rightarrow-\infty$. \begin{figure}[t] \centering \includegraphics[width=2.9in]{diracpertscgf1f} \caption{(Color online) SCGF computed through perturbation for the point occupation at $x=0$. Number of modes used: $M=20,40,\ldots,120$ (from bottom to top curves). The black line corresponds to the exact $\lambda(k)$. Parameters: $\alpha=1$, $D=2$.} \label{fig:dirac:rate:perturbative} \end{figure} To complete the perturbation analysis, we show in Fig.~\ref{fig:Ns} the evolution of the matrix $N(k)$ for increasing values of $k$, obtained by integrating the same differential equations truncated to order $M=10$.
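Both limiting matrices $N(-\infty)=0$ and $N(\infty)=\mathbb I$ are diagonal, and one can verify directly that any diagonal matrix is a fixed point of the evolution equation (\ref{eq:N:ode}): every term on its right-hand side contains an off-diagonal element of $N$. A minimal numerical check (illustrative spectrum and helper name):

```python
import numpy as np

def dN_dk(lam, N):
    """Right-hand side of the evolution equation for N(k)."""
    M = len(lam)
    out = np.zeros_like(N)
    for i in range(M):
        for j in range(M):
            s = 0.0
            for m in range(M):
                if m != i:
                    s += N[i, m] * N[m, j] / (lam[i] - lam[m])
                if m != j:
                    s += N[j, m] * N[m, i] / (lam[j] - lam[m])
            out[i, j] = s
    return out

lam = -np.arange(5, dtype=float)          # any non-degenerate spectrum
D = np.diag(np.linspace(0.0, 1.0, 5))     # an arbitrary diagonal N
assert np.allclose(dN_dk(lam, D), 0.0)    # diagonal matrices are fixed points
```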
For $S=[1,\infty)$ (top panel), we see that this evolution essentially involves three phases: a first for $k<0$ in which $N$ has the approximate block form \begin{equation} N\approx \begin{pmatrix} 0 & 0 \\ 0 & \mathbb I \end{pmatrix}, \label{eq:N:corners1} \end{equation} a second around $k=0$ in which $N\approx\mathbb I$, and then a third phase obtained for $k>0$ in which \begin{equation} N\approx \begin{pmatrix} \mathbb I & 0 \\ 0 & 0 \end{pmatrix}. \label{eq:N:corners2} \end{equation} The first and last block phases are approximations of the extreme solutions $N(-\infty)=0$ and $N(\infty)=\mathbb I$, respectively, containing errors in the lower block coming from the truncation. In each case, the upper corner of $N$ follows the extreme solutions, which confirms that the largest eigenvalues~--~in particular, the dominant eigenvalue~--~are minimally affected by truncation. A similar evolution of $N(k)$ is observed for $S=[0,1]$ (lower panel), although convergence to $N(-\infty)=0$ and $N(\infty)=\mathbb I$ in this case is slower and involves more truncation errors outside the upper corner of $N$. These results are obtained with a direct truncation of $N(k)$ in the eigenbasis defined by $\Psi_n(k)$; a more efficient truncation leading to smaller errors could be constructed in principle by choosing a different function basis. \begin{figure*}[t] \includegraphics[width=\textwidth]{pertmatrix1f}% \caption{(Color online) Orthogonality matrix $N(k)$ for $k=-10,-5,0,5,10$ (from left to right). Top: $S=[1,\infty)$. Bottom: $S=[0,1]$. $M=10$ modes are used. Parameters: $\alpha=1$, $D=2$.} \label{fig:Ns} \end{figure*} \section{Conclusion} We have shown how a Markov process which is observed to spend a long time in some region of its state space can be represented by a modified Markov process, called the driven process, representing physically the dynamics of the original process restricted to that region. 
We have constructed this driven process for the Ornstein-Uhlenbeck process, and have shown how it can be used to obtain two important probabilistic constructions, namely, stochastic meanders which are confined in a certain region of space, and $Q$-processes which avoid a region of space. The application of these results to higher-dimensional diffusions that are reversible should follow the example of the Ornstein-Uhlenbeck process. In this case, the driven process is obtained by solving a corresponding quantum ground state problem, as we have seen, which means that it can be solved using many powerful techniques of quantum mechanics (e.g., discretization, mesh or basis function methods) \cite{thijssen2007}. For nonreversible diffusions, the problem is more complicated: there is no mapping to a quantum problem and the full spectral problem that must be solved involves, as mentioned, the tilted generator and its dual, with non-trivial boundary conditions imposed on the product of their respective eigenfunctions. An alternative method is to construct the driven process using the optimal control representations detailed in~\cite{chetrite2015} or to discretize the underlying space to obtain a jump process which can then be studied using exact diagonalization or the density-matrix renormalization techniques developed in \cite{gorissen2009,hooyberghs2010,gorissen2011}. For jump processes, the tilted generator becomes the tilted matrix \begin{equation} \mathcal{W}_k(x,y)= W(x,y)+k1\!\! 1_S(x)\delta_{x,y}, \end{equation} where $W(x,y)$ is the transition rate (probability per unit time) for the transition $x\rightarrow y$, and $\delta_{x,y}$ is the Kronecker symbol. Moreover, the driven process is then the jump process with modified transition rates given by \begin{equation} W_k(x,y)=r_k(x)^{-1} W(x,y) r_k(y), \end{equation} where $r_k$ is the eigenvector associated with the dominant eigenvalue $\lambda(k)$ of $\mathcal{W}_k$ \cite{chetrite2014}.
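To make the jump-process construction concrete, the sketch below uses an illustrative three-state generator (not taken from the text): it builds the tilted matrix $\mathcal{W}_k$, extracts its dominant (Perron) eigenpair by dense diagonalization, and forms the driven rates. The full driven generator $r_k^{-1}(\mathcal{W}_k-\lambda(k)\,\mathbb I)\,r_k$ then has zero row sums, i.e., it is a proper Markov generator.

```python
import numpy as np

# Illustrative 3-state generator: W[x, y] is the rate x -> y, rows sum to zero.
W = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 0.5,  1.0, -1.5]])
S = np.array([1.0, 0.0, 0.0])     # indicator of the occupation set S = {0}
k = 0.8

W_k = W + k * np.diag(S)          # tilted matrix W_k(x,y) = W(x,y) + k 1_S(x) delta_{x,y}

# Dominant (Perron) eigenpair of W_k
vals, vecs = np.linalg.eig(W_k)
idx = np.argmax(vals.real)
lam = vals[idx].real              # SCGF lambda(k)
r = vecs[:, idx].real
r = r * np.sign(r[np.argmax(np.abs(r))])   # Perron vector is positive after sign fix

# Driven rates W_k(x,y) = r(x)^{-1} W(x,y) r(y) for x != y; the full driven
# generator G = r^{-1} (W_k - lam I) r has rows summing to zero.
G = (W_k - lam * np.eye(3)) * np.outer(1.0 / r, r)
```

The zero-row-sum property is exact for the dominant eigenpair, since $(\mathcal{W}_k r_k)(x)=\lambda(k) r_k(x)$ by construction.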
This result suggests many possible applications of the occupation conditioning problem beyond diffusions, including for example: \begin{itemize} \item Chemical reactions producing abnormally high or low concentrations of chemical species because of thermal noise. In this case, the state $X_t$ is the vector $(n^1_t,n^2_t,\ldots,n^m_t)$ of concentrations in time for $m$ chemical species, so that $X_t\in\mathbb{N}_+^m$ or $X_t\in\mathbb{R}_+^m$ \cite{kampen1992,gardiner1985}. \item Queues in which the number $X_t\in\mathbb{N}_+$ of waiting `customers' goes beyond a certain threshold, such as the queue capacity; see, e.g., \cite{iglehart1974,asmussen1982,shwartz1995}. \item Random walks on regular or random graphs that visit `rare' or `metastable' nodes or graph components (e.g., nodes with low pagerank) \cite{montanari2002,kishore2012,kishore2013,bacco2015}. In this case, $X_t$ is simply the node visited at time $t$, while the state space is the set of nodes. \item Interacting particle systems on lattices, such as the zero-range process, showing condensation transitions in which a macroscopic number of particles come to occupy a single lattice site \cite{grosskinsky2003,evans2005b,levine2005,grosskinsky2008}. The dynamics leading to this condensation and the metastable phases related to it have been studied using occupation conditioning in \cite{grosskinsky2008,chleboun2010,chleboun2015}. \item Other general Markov processes having metastable states; see, e.g., \cite{cassandro1984,beltran2010,bianchi2011} and references therein. The occupation set $S$ defining the conditioning can be chosen to include one or more metastable states, or a set of states connecting stable and metastable states, so as to study transition pathways, also called reactive paths. \end{itemize} In all cases, the driven process provides a way to understand the dynamics of a stochastic process as it evolves in atypical states (concentrations, nodes, regions, etc.).
This can take the form of a chemical reaction with modified rates, as already mentioned, or a queue with modified arrival and serving rates leading to a specific mean occupation. Similar interpretations apply to the other applications listed above, and should yield new insights into how large fluctuations arise in time and how they can be simulated efficiently.
\section{Introduction} Since the accelerated expansion of the universe was discovered with Type Ia supernovae (SNe Ia) \cite{1,2} and confirmed by two independent probes, baryon acoustic oscillations (BAO) \cite{3,4} and the cosmic microwave background (CMB) radiation \cite{5,6,7}, the elegant standard six-parameter cosmological model, $\Lambda$-cold dark matter ($\Lambda$CDM), has achieved great success in explaining physical phenomena at both small and large scales. Up to now, the nature of both dark energy (DE) and dark matter (DM) remains mysterious and unclear, and we only know phenomenologically the following basic properties: (i) DE is a cosmic fluid with an effective equation of state (EoS) $\omega\approx-1$, which violates the strong energy condition; (ii) DE clusters far more weakly than DM and permeates the universe homogeneously on cosmological scales; (iii) the effects of DM clustering have been measured to 2$\sim$3$\%$ precision by several large weak lensing experiments, including the Kilo-Degree Survey (KiDS) \cite{8}, the Dark Energy Survey (DES) \cite{9} and the Subaru Hyper Suprime-Cam (HSC) \cite{10}. Recently, the Planck-2018 CMB final release \cite{5}, with an improved measurement of the reionization optical depth, has confirmed once again the validity of the simple $\Lambda$CDM cosmology in describing the evolution of the universe. However, this model is not as perfect as one might imagine and faces at least two intractable problems, namely the cosmological constant and coincidence problems. The former is that the observed value of the vacuum energy density is far smaller than its theoretical estimate, the so-called 120-orders-of-magnitude inconsistency, which makes the physical interpretation of the vacuum very puzzling, while the latter asks why the energy densities of DE and DM are of the same order of magnitude today, even though they evolve very differently throughout the history of the universe.
Meanwhile, the $\Lambda$CDM model also faces at least two important tensions that have emerged from recent cosmological observations, namely the Hubble constant ($H_0$) and matter fluctuation amplitude ($\sigma_8$) tensions, the former being the more severe. The $H_0$ tension is that the direct measurement of today's cosmic expansion rate from the Hubble Space Telescope (HST) is more than 4$\sigma$ higher than the value indirectly derived from the Planck-2018 CMB measurement, while the $\sigma_8$ tension indicates that today's matter fluctuation amplitude in the linear regime, measured by several low-redshift probes including weak gravitational lensing \cite{11}, cluster counts \cite{12} and redshift-space distortions \cite{13}, is still lower than that indirectly inferred from the Planck-2018 CMB data \cite{5}. It is natural to question the correctness of $\Lambda$CDM in characterizing the background evolution and structure formation of the universe. As a consequence, a wide variety of cosmological models based on various physical mechanisms have been proposed to explain the late-time cosmic acceleration. Most recently, because of the worsening $H_0$ tension and richer data from large-scale galaxy surveys than before \cite{14}, cosmologists have a stronger motivation and greater interest in resolving or even solving these tensions by confronting existing cosmological models, or newly constructed ones, with current observations. It is worth noting that possible systematic errors, or independent determinations of $H_0$ and $\sigma_8$ from new probes, could also alleviate these tensions. To resolve the $H_0$ and $\sigma_8$ tensions, previous works have typically combined CMB data with BAO, SNe Ia and the local $H_0$ observation to place tight constraints on a specific model. We argue that, more or less, this kind of constraint can only give an indirect answer to the cosmological tensions, and that the most direct method is to check the model dependence of the Planck-2018 CMB data.
In this study, our motivation is to explore whether one of the simplest extensions of general relativity (GR), $f(R)$ gravity \cite{15,16}, can relieve the current $H_0$ and $\sigma_8$ tensions. In $f(R)$ gravity, the modified Friedmann equations are obtained by varying a generalized Lagrangian that is a function of the Ricci scalar $R$. Although many authors have constrained specific $f(R)$ models with joint cosmological observations in recent years, there is still no direct test of the ability of $f(R)$ gravity to alleviate the $H_0$ and $\sigma_8$ tensions in light of Planck CMB data. This issue is urgent for three reasons: (i) the data of the Planck-2018 full mission has been released; (ii) the $H_0$ tension has become more serious than before; (iii) richer data from large-scale galaxy surveys to study DM clustering is gradually becoming available. By implementing a numerical analysis, we find that Hu-Sawicki $f(R)$ gravity cannot reduce the $H_0$ and $\sigma_8$ tensions. This work is organized as follows. In the next section, we introduce the basic equations of $f(R)$ gravity and the specific $f(R)$ model investigated in this analysis. In Section III, we describe the data and analysis method. In Section IV, the numerical results are presented. Discussions and conclusions are given in the final section. \section{$f(R)$ gravity} To construct a modified theory of gravity, one can introduce terms such as $R^2$, $R^{\mu\nu}R_{\mu\nu}$, $R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta}$, or $R\square^nR$, when quantum corrections are taken into account. In $f(R)$ gravity, unlike the above higher-order derivative theories, the modification is just a function of the Ricci scalar $R$. $f(R)$ gravity was first introduced by Buchdahl \cite{15} in 1970 and the reader can find more details in recent reviews \cite{16,17}.
The action is written as \begin{equation} S=\int d^4x\sqrt{-g}\left[R+f(R)+\mathcal{L}_m\right], \label{1} \end{equation} where $f(R)$, $\mathcal{L}_m$ and $g$ denote a function of $R$, the standard matter Lagrangian and the determinant of the metric, respectively. By varying Eq.(\ref{1}), one can obtain the modified Einstein field equation \begin{equation} G_{\mu\nu}+f_RR_{\mu\nu}+(\Box f_R-\frac{f}{2})g_{\mu\nu}-\nabla_\mu\nabla_\nu f_R=8\pi GT_{\mu\nu}, \label{2} \end{equation} where $f_R\equiv df/dR$ denotes an extra scalar degree of freedom, i.e., the so-called scalaron, and $T_{\mu\nu}$ is the energy-momentum tensor. In a spatially flat Friedmann-Robertson-Walker (FRW) universe, the equation of background evolution in $f(R)$ gravity is expressed as \begin{equation} H^2+\frac{f}{6}-(H^2+H\frac{dH}{dN})f_R+H^2\frac{dR}{dN}f_{RR}=\frac{8\pi G}{3} \rho_m, \label{3} \end{equation} where $f_{RR}\equiv df_R/dR$, $N\equiv \mathrm{ln}\,a$, $H$ is the Hubble parameter, $a$ is the scale factor and $\rho_m$ is the matter energy density. We are also interested in the perturbations in $f(R)$ gravity, and consider only the linear part here. For sub-horizon modes ($k\gg aH$) in the quasi-static approximation, the linear growth of matter density perturbations obeys \cite{18} \begin{equation} \frac{\mathrm{d}^2\delta}{\mathrm{d}a^2}+\left(\frac{1}{H}\frac{\mathrm{d}H}{\mathrm{d}a}+\frac{3}{a}\right)\frac{\mathrm{d}\delta}{\mathrm{d}a}-\frac{3\Omega_{m}H_0^2a^{-3}}{(1+f_R)H^2}\left(\frac{1-2X}{2-3X}\right)\frac{\delta}{a^2}=0, \label{4} \end{equation} where $\Omega_{m}$ denotes the effective matter density ratio at present. The function $X$ has the following form \begin{equation} X(k,a) = -\frac{2f_{RR}}{1+f_R}\left(\frac{k}{a}\right)^2. \label{5} \end{equation} It is noteworthy that the function $X$ in Eq.(\ref{4}) induces a scale dependence of the linear growth factor $\delta(k,a)$ in $f(R)$ gravity, whereas in GR the growth factor depends on the scale factor $a$ alone.
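Equation (\ref{4}) can be integrated directly once $H(a)$ and $X(k,a)$ are specified. A useful sanity check is the GR limit $f_R=X=0$ in a flat matter-only universe ($\Omega_m=1$, $H=H_0 a^{-3/2}$), where the textbook growing mode is $\delta\propto a$. The sketch below (illustrative code, not the pipeline used in the paper; units with $H_0=1$) reproduces this with a fixed-step RK4 integration.

```python
import numpy as np

# GR limit of the growth equation (f_R = 0, X = 0, so the source factor
# (1-2X)/(2-3X) = 1/2) in a flat matter-only universe, where H ~ a^{-3/2}
# and the growing mode is delta(a) = a.
def rhs(a, y):
    delta, ddelta = y
    dlnH_da = -1.5 / a                     # (1/H) dH/da for H ~ a^{-3/2}
    acc = -(dlnH_da + 3.0 / a) * ddelta + 1.5 * delta / a**2
    return np.array([ddelta, acc])

# RK4 from deep matter domination with growing-mode initial data
a, h = 1e-3, 1e-4
y = np.array([a, 1.0])                     # delta = a, d delta / da = 1
while a < 1.0 - 1e-12:
    step = min(h, 1.0 - a)
    k1 = rhs(a, y)
    k2 = rhs(a + step/2, y + step/2*k1)
    k3 = rhs(a + step/2, y + step/2*k2)
    k4 = rhs(a + step, y + step*k3)
    y = y + step/6*(k1 + 2*k2 + 2*k3 + k4)
    a += step

delta_today = y[0]   # should equal 1 for the growing mode delta = a
```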
In general, a viable $f(R)$ model should be responsible for the inflationary behavior in the very early universe, reproduce the late-time cosmic acceleration, pass local gravity tests, and satisfy the stability conditions. To efficiently investigate cosmological tensions in $f(R)$ gravity, we consider the viable Hu-Sawicki $f(R)$ model (hereafter HS model) \cite{19} in this work, which is given by \begin{equation} f(R)=-\frac{2\Lambda R^n}{R^n+\mu^{2n}}, \label{6} \end{equation} where $\mu$ and $n$ are free parameters characterizing this model. In the limit $R\gg\mu^2$, the approximate $f(R)$ function can be written as \begin{equation} f(R)=-2\Lambda-\frac{f_{R0}}{n}\frac{R_0^{n+1}}{R^n}, \label{7} \end{equation} where $R_0$ is the present-day value of the Ricci scalar and $f_{R0}=f_R(R_0)=-2n\Lambda\mu^{2n}/R_0^{n+1}$. For the purpose of constraining this model with data, one should first obtain the evolution of the background and perturbations by inserting Eq.(\ref{7}) into Eqs.(\ref{3}-\ref{4}). To the best of our knowledge, there are three main methods to confront $f(R)$ gravity with cosmological observations. The first is numerically solving the above equations in a direct way \cite{20,21,22,23,24,25,26,27,28,29}. The second is adopting an approximate framework to obtain analytic solutions of the above equations; this method can, to a large extent, save computational cost \cite{30,31,32,33}. The third is studying the effects of viable $f(R)$ gravity on large scale structure formation by using N-body and hydrodynamical simulations \cite{34}. Note that the last method always requires more computational time and storage than the two previous ones.
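The large-curvature limit (\ref{7}) is easy to verify numerically. The snippet below uses illustrative parameter values with $n=1$, for which $f_{R0}=-2\Lambda\mu^2/R_0^2$, and compares the exact HS form (\ref{6}) with its expansion for $R\gg\mu^2$; the residual is of order $(\mu^2/R)^2$.

```python
# Hu-Sawicki f(R) (Eq. 6) versus its large-curvature expansion (Eq. 7)
# for n = 1.  Illustrative units: Lambda = 1, mu^2 = 1, R_0 = 40 mu^2.
n, Lam, mu2 = 1, 1.0, 1.0
R0 = 40.0 * mu2
fR0 = -2.0 * n * Lam * mu2**n / R0**(n + 1)   # f_R evaluated at R = R_0

def f_exact(R):
    return -2.0 * Lam * R**n / (R**n + mu2**n)

def f_approx(R):
    return -2.0 * Lam - (fR0 / n) * R0**(n + 1) / R**n

# The two agree to O((mu^2/R)^2) for R >> mu^2
R = 1.0e3 * mu2
rel_err = abs(f_exact(R) - f_approx(R)) / abs(f_exact(R))
```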
\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{H0_Omgea_m.pdf} \hspace{10mm} \includegraphics[scale=0.5]{H0_log10F_R0.pdf} \caption{ The constrained 2-dimensional parameter spaces ($H_0$, $\Omega_{m0}$) and ($\log_{10}f_{R0}$, $H_0$) from the ``C'' dataset are shown for HS $f(R)$ models with $n=1$ (red), 2 (green), 3 (grey), 4 (orange) and free $n$ (blue), respectively. The grey dashed line and magenta bands denote $H_0=74.03\pm1.42$ km s$^{-1}$ Mpc$^{-1}$ measured by the HST \cite{14}. }\label{f1} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.385]{sigma8_Omega_m_LCDM.pdf} \includegraphics[scale=0.385]{sigma8_Omega_m_n1.pdf} \includegraphics[scale=0.385]{sigma8_Omega_m_nfree.pdf} \caption{The constrained 2-dimensional parameter spaces ($\Omega_m$, $\sigma_8$) from the ``C'' (red) and ``R'' (blue) datasets are shown for $\Lambda$CDM, HS $f(R)$ models with $n=1$ and free $n$, respectively. } \label{f2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{sigma8_log10F_R0_CMB.pdf} \caption{ The constrained 2-dimensional parameter spaces ($\log_{10}f_{R0}$, $\sigma_8$) from the ``C'' dataset are shown for HS $f(R)$ models with $n=1$ (red), 2 (green), 3 (grey), 4 (orange) and free $n$ (blue), respectively. }\label{f3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{sigma8_log10F_R0_CBSHR.pdf} \caption{The constrained 2-dimensional parameter spaces ($\log_{10}f_{R0}$, $\sigma_8$) from the data combination ``CBSHR'' are shown for HS $f(R)$ models with $n=1$ (red) and free $n$ (blue), respectively. The magenta dashed line denotes $\log_{10}f_{R0}=-6$. }\label{f4} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{sigma8_n_C_CBSHR.pdf} \caption{The constrained 2-dimensional parameter spaces ($n$, $\sigma_8$) are shown for HS $f(R)$ model with free $n$ by using the ``C'' (red) and ``CBSHR'' (blue) datasets, respectively. 
}\label{f5} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.465]{n1_C_CBSHR.pdf} \caption{The marginalized constraints on the HS $f(R)$ model with $n=1$ are shown by using the ``C'' (red) and ``CBSHR'' (blue) datasets, respectively.}\label{f6} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.41]{nfree_C_CBSHR.pdf} \caption{The marginalized constraints on the HS $f(R)$ model with free $n$ are shown by using the ``C'' (red) and ``CBSHR'' (blue) datasets, respectively. }\label{f7} \end{figure} \section{Data and method} Since our aim is to study whether HS $f(R)$ gravity can alleviate the $H_0$ and $\sigma_8$ tensions, first of all, we use the following two main datasets. {\it CMB}: Although the mission of the Planck satellite has been completed, its importance for cosmology and astrophysics remains profound. It has measured many aspects of the formation and evolution of the universe, such as its matter components, topology and large-scale structure. Here we use the updated Planck-2018 CMB temperature and polarization data, including the high-$\ell$ temperature and polarization likelihoods at $30\leqslant \ell\leqslant 2500$ and the low-$\ell$ temperature and polarization likelihoods at $2\leqslant \ell\leqslant 29$, namely TTTEEE$+$lowE, as well as the Planck-2018 CMB lensing data \cite{5}. We denote this dataset as ``C''. {\it RSD}: To study the alleviation of the $\sigma_8$ tension in $f(R)$ gravity, we adopt redshift space distortions (RSD) as our reference probe, which is sensitive to large scale structure formation. Specifically, we use the so-called ``Gold-2018'' growth-rate dataset \cite{35}. This dataset is denoted as ``R''. Furthermore, to break the parameter degeneracies and give tight constraints on the free parameters of the HS model, we also employ the following four probes.
{\it BAO}: By measuring the position of the acoustic oscillations in the matter power spectrum at different redshifts, the BAO, a standard cosmological ruler, can place constraints on the expansion history of the universe after decoupling and help break parameter degeneracies. It is unaffected by errors in the nonlinear evolution of the matter density field and other systematic uncertainties. Specifically, we take the 6dFGS sample at the effective redshift $z_{eff}=0.106$ \cite{36}, the SDSS-MGS one at $z_{eff}=0.15$ \cite{37} and the BOSS DR12 dataset at the three effective redshifts $z_{eff}=0.38$, 0.51 and 0.61 \cite{38}. This dataset is identified as ``B''. {\it SNe Ia}: SNe Ia, the so-called standard candles, are powerful distance indicators to study the background evolution of the universe, particularly the Hubble parameter and the EoS of DE. In this analysis, we use the largest SNe Ia sample to date, ``Pantheon'', which integrates the SNe Ia data from the Pan-STARRS1, SNLS, SDSS, low-z and HST surveys and encompasses 1048 spectroscopically confirmed points in the redshift range $z \in [0.01, 2.3]$ \cite{39}. We refer to this dataset as ``S''. {\it Cosmic Chronometers}: As a complementary probe to investigate the late-time evolution of the universe, we also include cosmic chronometers in our numerical analysis. Specifically, we employ 31 chronometers to constrain the HS model \cite{40}. Hereafter we denote this dataset as ``H''. It is worth noting that we take the first method (see Section II), namely numerically solving the background and perturbation equations, to implement constraints on the HS $f(R)$ model. In order to obtain the posterior probability density distributions of the model parameters, we incorporate the modified equations governing the evolution of the background and perturbations of the HS $f(R)$ model into the public online packages {\it CAMB} and {\it CosmoMC} \cite{41,42}.
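The Monte Carlo machinery behind such constraints can be illustrated with a toy one-parameter example: a hand-rolled Metropolis sampler with a flat prior, standing in for the full {\it CosmoMC} pipeline (all values below are illustrative and unrelated to the actual analysis).

```python
import numpy as np

# Toy Metropolis-Hastings sampler: flat prior on [-10, 10], Gaussian
# 'likelihood' centred at 0 with unit width, Gaussian random-walk proposal.
rng = np.random.default_rng(1)

def log_post(theta):
    if abs(theta) > 10.0:          # flat prior bounds
        return -np.inf
    return -0.5 * theta**2         # Gaussian log-likelihood

chain, theta, lp = [], 0.5, log_post(0.5)
for _ in range(40000):
    prop = theta + 0.8 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

samples = np.array(chain[5000:])   # discard burn-in
```

The retained samples should recover the posterior mean and standard deviation of the toy likelihood; marginalized constraints in the tables below are obtained analogously, in many dimensions.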
The latter package implements a standard Bayesian analysis via the Markov Chain Monte Carlo method to infer the posterior probability density distributions of the parameters. For the HS $f(R)$ model, we choose the following prior ranges for the different parameters: $\Omega_bh^2 \in [0.005, 0.1]$, $\Omega_ch^2 \in [0.001, 0.99]$, $100\theta_{MC} \in [0.5, 10]$, $\mathrm{ln}(10^{10}A_s) \in [2, 4]$, $n_s \in [0.8, 1.2]$, $\tau \in [0.01, 0.8]$, $\log_{10} f_{R0} \in [-9, 1]$ and $n \in [0, 20]$. To investigate comprehensively both the $H_0$ and $\sigma_8$ tensions in the HS model, we carry out the following numerical analysis. For the $H_0$ tension, we constrain five models, i.e., $n=1$, 2, 3, 4 and free $n$, with the ``C'' dataset, keeping the typical parameter $\log_{10} f_{R0}$ free. For the $\sigma_8$ tension, we present the constraining results of the representative case $n=1$ and the general case of free $n$ from the ``C'' and ``R'' datasets, respectively. We also display the comprehensive constraints on two models ($n=1$ and free $n$) by using the data combination ``CBSHR''. \begin{table*}[!t] \renewcommand\arraystretch{1.5} \caption{The marginalized constraints on the HS $f(R)$ models with $n=1$, 2, 3, 4 and free $n$ using the ``C'' dataset are shown, respectively. For the typical parameter $\log_{10}f_{R0}$, we quote $2\sigma$ ($95\%$) uncertainties or bounds.
The symbol ``$\diamondsuit$'' denotes the parameter that cannot be well constrained by observed data.} \begin{tabular} { l |c| c |c| c| c } \hline \hline Data & \multicolumn{5}{c}{C} \\ \hline Model & $n=1$ &$n=2$ &$ n=3 $ &$n=4 $ & free $n$ \\ \hline {\boldmath$\Omega_b h^2 $} & $0.02228\pm 0.00016 $ & $0.02226\pm 0.00015 $ &$0.02228\pm 0.00016 $ & $0.02226\pm 0.00016 $ &$0.02228\pm 0.00015 $ \\ {\boldmath$\Omega_c h^2 $} & $0.1190\pm 0.0014 $ & $0.1194\pm 0.0014 $ & $0.1190\pm 0.0014 $ & $0.1195\pm 0.0015 $ & $0.1190\pm 0.0013 $ \\ {\boldmath$100\theta_{MC} $} & $1.04079\pm 0.00034 $ & $1.04079\pm 0.00031 $ & $1.04080^{+0.00032}_{-0.00029}$ & $1.04081\pm 0.00034 $ & $1.04086\pm 0.00031 $ \\ {\boldmath$\tau $} & $0.068\pm 0.013 $ &$0.060^{+0.011}_{-0.015} $ & $0.058\pm 0.016 $ & $0.056^{+0.019}_{-0.013} $ & $0.066\pm 0.015 $ \\ {\boldmath${\rm{ln}}(10^{10} A_s)$} & $3.070\pm 0.024 $ & $3.055^{+0.021}_{-0.025} $ &$3.048\pm 0.032 $ & $3.045^{+0.035}_{-0.026} $ & $3.065^{+0.030}_{-0.027} $ \\ {\boldmath$n_s $} & $0.9659\pm 0.0047 $ & $0.9648\pm 0.0045 $ & $0.9659\pm 0.0046 $ &$0.9645\pm 0.0050 $ & $0.9657\pm 0.0047 $ \\ {\boldmath$\log_{10} f_{R0}$} & $ < -4.02$ (2$\sigma$) & $ < -3.00$ (2$\sigma$) & $ < -1.68$ (2$\sigma$) & $-4.1^{+3.6}_{-4.3}$ (2$\sigma$) & $\diamondsuit$ \\ {\boldmath$n$} & --- & --- & --- & --- & $\diamondsuit$ \\ \hline {\boldmath$H_0 $ }& $67.58\pm 0.64 $ & $67.45\pm 0.63 $ & $67.59\pm 0.65 $ & $67.42\pm 0.67 $ & $67.61\pm 0.60 $ \\ {\boldmath$\Omega_m $ }& $0.3110\pm 0.0087 $ & $0.3128\pm 0.0087 $ & $0.3108\pm 0.0088 $ & $0.3134^{+0.0086}_{-0.010} $ & $0.3105\pm 0.0081 $ \\ {\boldmath$\sigma_8 $ }& $0.859^{+0.041}_{-0.051} $ & $0.884^{+0.120}_{-0.081} $ & $0.908\pm 0.076 $ & $0.909\pm 0.068 $ & $0.878^{+0.043}_{-0.065} $ \\ \hline \hline \end{tabular} \label{t1} \end{table*} \begin{table*}[!t] \renewcommand\arraystretch{1.5} \caption{The marginalized constraints on the HS $f(R)$ models with $n=1$ and free $n$ are shown by using the ``R'' and 
``CBSHR'' datasets, respectively. Similarly, for the typical parameter $\log_{10}f_{R0}$, we quote $2\sigma$ ($95\%$) uncertainties. The symbol ``$\diamondsuit$'' denotes the parameter that cannot be well constrained by observed data. } \begin{tabular} { l |c| c |c| c } \hline \hline Data & \multicolumn{2}{c}{R} & \multicolumn{2}{|c}{CBSHR} \\ \hline Model & $n=1$ & free $n$ & $n=1$ & free $n$ \\ \hline {\boldmath$\Omega_b h^2 $} & --- & --- &$0.02233\pm 0.00013 $ & $0.02234\pm 0.00014 $ \\ {\boldmath$\Omega_c h^2 $} & --- & --- & $0.11844\pm 0.00095 $ & $0.11816\pm 0.00093 $ \\ {\boldmath$100\theta_{MC} $} & --- & --- & $1.04088\pm 0.00029 $ & $1.04093\pm 0.00030 $ \\ {\boldmath$\tau $} & --- & --- & $0.062\pm 0.010 $ & $0.068\pm 0.011 $ \\ {\boldmath${\rm{ln}}(10^{10} A_s)$} & --- & --- & $3.055^{+0.019}_{-0.021} $ & $3.066\pm 0.021 $ \\ {\boldmath$n_s $} & --- & --- & $0.9666\pm 0.0038 $ & $0.9675\pm 0.0038 $ \\ {\boldmath$\log_{10} f_{R0}$} & $< -0.773$ $(2\sigma)$ & $\diamondsuit$ & $< -6.75$ $(2\sigma)$ & $< -6.60$ $(2\sigma)$ \\ {\boldmath$n$} & --- & $\diamondsuit$ & --- & $\diamondsuit$ \\ \hline {\boldmath$H_0 $ }& $75\pm 10 $ & $> 63.7$ $(2\sigma)$ & $67.86\pm 0.42 $ & $67.99^{+0.40}_{-0.45} $ \\ {\boldmath$\Omega_m $ }& $0.243^{+0.044}_{-0.060} $ & $0.245^{+0.045}_{-0.063} $ & $0.3071\pm 0.0057 $ & $0.3054\pm 0.0056 $ \\ {\boldmath$\sigma_8 $ }& $0.769^{+0.056}_{-0.043} $ & $0.761\pm 0.055 $ & $0.8128\pm 0.0073 $ & $0.823^{+0.0100}_{-0.0089} $ \\ \hline \hline \end{tabular} \label{t2} \end{table*} \section{Numerical Results} For the purpose of studying the alleviation of two important cosmological tensions in the framework of HS $f(R)$ models, our main numerical results are displayed in Fig.\ref{f1} and Fig.\ref{f2} and marginalized constraining results are presented in Tab.\ref{t1} and Tab.\ref{t2}. 
We find that the $H_0$ values (see Tab.\ref{t1}) derived from the Planck-2018 data in the five HS models are now $4.14\sigma$, $4.23\sigma$, $4.12\sigma$, $4.21\sigma$ and $4.16\sigma$ lower than that directly measured by the HST, and that these new $H_0$ values hardly relieve the existing $4.39\sigma$ tension found under the assumption of $\Lambda$CDM. In Fig.\ref{f1}, we exhibit the constrained 2-dimensional parameter spaces ($H_0$, $\Omega_m$) for the five HS models; it is easy to see the large $H_0$ gap between the CMB and HST observations. Using the CMB data alone, we conclude that $H_0$ is insensitive to the typical model parameter $\log_{10}f_{R0}$ in all five models (see the right panel of Fig.\ref{f1}). To investigate the $\sigma_8$ tension, in Fig.\ref{f2}, first of all, we display the constrained $\Omega_m$-$\sigma_8$ plane for $\Lambda$CDM as a reference. Then, we present the constrained $\Omega_m$-$\sigma_8$ planes for the commonly used HS model with $n=1$ and for the complete HS model with free $n$, respectively. We find that the relatively small $\sigma_8$ discrepancy cannot be resolved in either HS scenario, despite their slightly larger parameter spaces ($\Omega_m$, $\sigma_8$), and remains at over the $1\sigma$ level. This implies that HS $f(R)$ gravity cannot reduce the current $H_0$ and $\sigma_8$ tensions, which is the key result of this work. It is also interesting to study the parameter degeneracy between $\log_{10}f_{R0}$ and $\sigma_8$. When only using CMB data, for the five HS models, one may find that $\log_{10}f_{R0}$ is positively correlated with $\sigma_8$, which indicates that stronger deviations of HS $f(R)$ gravity from GR lead to larger effects of matter clustering (see Fig.\ref{f3}). However, when using the combined dataset CBSHR, this positive correlation disappears and $\sigma_8$ seems to be insensitive to $\log_{10}f_{R0}$ (see Fig.\ref{f4}).
Meanwhile, we are also interested in the degeneracy between the additional parameter $n$ and $\sigma_8$, and find that the amplitude of matter clustering is insensitive to $n$ regardless of whether the C or CBSHR dataset is used (see Fig.\ref{f5}). Furthermore, to study the degeneracies between parameters better, we exhibit the marginalized constraints on the HS $f(R)$ models with $n=1$ and free $n$ in Fig.\ref{f6} and Fig.\ref{f7}, and obtain the following conclusions: (i) to a large extent, the parameter spaces are compressed when combining C with the BSHR datasets; (ii) in all cases, the two typical parameters $\log_{10}f_{R0}$ and $n$ are insensitive to the other cosmological parameters, which is clarified for the first time in the literature. Moreover, in Tab.\ref{t1}, we can find that the best constraint, $\log_{10}f_{R0} < -4.02$ at the $2\sigma$ confidence level, originates from the case of $n=1$ by only using CMB data, while the two typical parameters $\log_{10}f_{R0}$ and $n$ in the free $n$ case cannot be well constrained (see also Fig.\ref{f6} and Fig.\ref{f7}). Subsequently, in Tab.\ref{t2}, we find that, when using RSD data alone, the constraints on the typical parameters of the HS models are poor and smaller $\sigma_8$ values are obtained, which indicates that this RSD dataset implies a weaker matter clustering at late times than the CMB observation. Interestingly, although the mean value of the constraint $H_0=75\pm10$ km s$^{-1}$ Mpc$^{-1}$ from RSD data is consistent with the HST result, it has a much larger uncertainty. Finally, at the $2\sigma$ confidence level, we give our best constraint on the typical parameter, $\log_{10}f_{R0}<-6.75$, in the case of $n=1$, while $\log_{10}f_{R0}<-6.60$ in the free $n$ case. It is worth noting that we still cannot provide a good constraint on $n$ even using the joint dataset CBSHR.
\section{Discussions and conclusions} Recently, the $H_0$ and $\sigma_8$ tensions under the standard cosmological paradigm have re-activated a wide variety of alternative cosmological models. However, direct tests of whether $f(R)$ gravity can resolve both tensions have so far been lacking. To address this issue, we confront the popular HS $f(R)$ gravity with current observations. By testing five specific HS $f(R)$ models against the observational datasets, we obtain two main conclusions: (i) HS $f(R)$ gravity can resolve neither the $H_0$ nor the $\sigma_8$ tension; (ii) the typical parameters $\log_{10}f_{R0}$ and $n$ are insensitive to the other cosmological parameters. Meanwhile, in the HS $f(R)$ model with $n=1$, we give our best constraint $\log_{10}f_{R0}<-6.75$ at the $2\sigma$ confidence level. It is noteworthy that a coupling between matter and geometry in the framework of $f(R)$ gravity may help resolve these tensions, and that other $f(R)$ gravity models may relieve both discrepancies better than the HS $f(R)$ model considered here. We expect that future high-precision CMB and SNe Ia observations, together with independent probes such as gravitational waves, could help reduce or even resolve these intractable cosmological tensions. \section{Acknowledgements} Deng Wang thanks David Mota for helpful discussions on modified gravity and Yuan Sun for useful communications on gravitational theories.
\section{Introduction} Bell showed in 1964 \cite{Bell's theorem} that certain correlations between the outcomes of two spacelike separated quantum measurements over entangled states violate his eponymous inequalities. Any model that reproduces the quantum correlation must therefore give up at least one of the plausible premises based on which the Bell inequalities are derived: predetermination or realism, locality, and free choice or no superdeterminism \cite{Bell's theorem,CHSH inequality}. See Ref. \cite{Wiseman Bell's two theorems} for other possible premises, and the different combinations of the premises underlying the Bell inequalities. Hitherto, there is no general consensus as to the precise implication of the above Bell theorem concerning the nature of physical realities and/or the structure of causation underlying the microscopic phenomena \cite{Norsen nonlocality,Tumulka nonlocality,Gisin nonlocality,Hall nonlocality,Muynck CFI,Zukowski nonrealism,Zukowski CFI,Blaylock CFI,Fine-Bell theorem,Fine prism model,Fine prism model2,Jarrett incompleteness,Ballentine-Jarrett paper,Howard nonseparability,Fuchs Qbism,Griffiths consistent locality,Price backward causation,Spekkens fine tuning}. The alternative possible explanations of the violation of the Bell inequalities arguably cannot be separated from the different resolutions of the long-standing measurement problem which is central in the debate about the meaning of quantum mechanics \cite{Bub book}. For example, Bohmian mechanics \cite{Bohmian mechanics}, which resolves the measurement problem by introducing a hidden variable determining the measurement outcomes, must exhibit gross nonlocality to comply with the Bell theorem. By contrast, the Copenhagen interpretation rejects the presence of such variables, and argues that the violation of Bell inequalities does not imply nonlocality \cite{Peres instrumentalist credo,Peres instrumentalist book}. But, what compels measurement to violate the Bell inequalities? 
Are there some profound principles that measurement must obey so that it is willing to give up plausible and intuitive concepts such as locality \cite{Bohmian mechanics}, determinism \cite{Peres instrumentalist credo,Peres instrumentalist book}, and/or free choice \cite{Brans fully causal hidden variable model,Hall relaxing measurement independence}? In an attempt to better understand this question, let us first suppose that, regardless of measurement, one can assign to a spin-$\frac{1}{2}$ particle (or, a generic qubit) a definite (i.e., determinate) value of spin, called the c-valued spin variable. Moreover, let us assume that the c-valued spin variable may take on any continuous real number prior to the measurement, and that the spin measurement maps it onto the standard binary quantum spin values $\pm 1$. While measurement in general changes the c-valued spin variable, it is natural to require that measurement preserves the statistical correlation between the c-valued spin variables of two particles. Can real c-valued spin variables be constructed which meet the above conditions for measurement? A positive answer is given in the present work. In the above model of measurement, we may therefore argue that it is the requirement of conservation of correlation which compels the violation of Bell inequalities for entangled states, with bizarre possible implications, in a similar fashion to the way that assuming the constancy of the speed of light in all inertial coordinate systems implies counterintuitive observable effects such as time dilation. To make the idea more transparent, we then construct a game of spacelike joint mappings as follows. Suppose that Alice and Bob, isolated from one another, are given a pair of real numbers, one pair at a time, sampled from a specific distribution associated with a vector in a four-dimensional complex Hilbert space. 
We ask them to independently map the pair of real numbers onto a pair of binary numbers $\pm 1$, with a constraint that their outputs must preserve the statistical correlation of the inputs. Is there a classical (i.e., local and deterministic) joint strategy for Alice and Bob to always win the game? We show that the requirement of conservation of correlation forces Alice and Bob's joint strategy to comply with the Bell theorem. This implies that for certain initial correlations associated with a nonfactorizable vector in the Hilbert space, Alice and Bob will never win the game using any classical joint strategy, i.e., their mappings will violate the conservation of correlation. They can instead win the game by running a quantum strategy using an ensemble of entangled pairs of spin-$\frac{1}{2}$ particles (qubits) and quantum circuits for local measurement of spin observables. \section{Real c-valued spin variables and conservation of bipartite correlation in measurement} In this article, for our purpose, we only consider systems of two spin-$\frac{1}{2}$ particles (or, a pair of arbitrary physical qubits), referred to as particle 1 and particle 2. First, we choose a set of basis vectors $\{\ket{\eta_{12}}\}$ of the four-dimensional Hilbert space, so that $\sum_{\eta_{12}}\ket{\eta_{12}}\bra{\eta_{12}}=\hat{\mathbb{I}}_{12}$, where $\hat{\mathbb{I}}_{12}$ is the $4\times 4$ identity matrix. We refer to such a complete set of basis vectors as the reference coordinate basis, and assume that it is factorizable, i.e., $\{\ket{\eta_{12}}\}=\{\ket{\eta_1}\ket{\eta_2}\}$, where $\{\ket{\eta_{\mu}}\}$ is the complete basis of the Hilbert space associated with particle $\mu$, $\mu=1,2$. 
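As a concrete numerical illustration, the completeness relation $\sum_{\eta_{12}}\ket{\eta_{12}}\bra{\eta_{12}}=\hat{\mathbb{I}}_{12}$ of a factorizable reference basis can be checked directly. A minimal sketch, assuming NumPy; the particular product basis below, built from the $\hat{\sigma}_y$ eigenstates of particle 1 and the $\hat{\sigma}_x$ eigenstates of particle 2, is just one possible choice:

```python
import numpy as np

# A factorizable reference basis {|eta1>|eta2>}: products of the sigma_y
# eigenstates (particle 1) with the sigma_x eigenstates (particle 2).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
xp, xm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
yp, ym = (ket0 + 1j * ket1) / np.sqrt(2), (ket0 - 1j * ket1) / np.sqrt(2)
ref_basis = [np.kron(a, b) for a in (yp, ym) for b in (xp, xm)]

# Completeness: sum_eta |eta><eta| equals the 4x4 identity matrix
resolution = sum(np.outer(v, v.conj()) for v in ref_basis)
print(np.allclose(resolution, np.eye(4)))  # True
```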
Then, regardless of any measurement, given a preparation represented by a pure quantum state $\ket{\psi_{12}}$ of the two particles, the `c-valued spin variable' associated with particle $\mu$ along a direction represented by a unit vector $\vec{n}_{\mu}$ in three-dimensional space, within the reference basis $\{\ket{\eta_{12}}\}$ with $\braket{\eta_{12}|\psi_{12}}\neq 0$, is defined as follows: \begin{eqnarray} &&\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})\nonumber\\ &&\doteq{\rm Re}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}+\frac{\xi}{\hbar}{\rm Im}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}. \label{real continuum c-valued spin variable} \end{eqnarray} Here, $\hat{\sigma}_{\vec{n}_{\mu}}\doteq\vec{n}_{\mu}\cdot\vec{\hat{\sigma}}$, where $\vec{\hat{\sigma}}=(\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z)$ is the vector of the Pauli operators, and $\xi$ is a real-valued global-nonseparable variable. $\hat{\sigma}_{\vec{n}_{\mu}}$ in the numerator is the shorthand for $\hat{\sigma}_{\vec{n}_{\mu}}\otimes\hat{\mathbb{I}}_{\nu}$, $\mu\neq\nu$, $\mu,\nu=1,2$, where $\hat{\mathbb{I}}_{\nu}$ is the $2\times 2$ identity matrix of the Hilbert space of particle $\nu$. We further assume that the probability distribution for the coordinate value $\eta_{12}$ follows Born's rule, i.e., \begin{eqnarray} {\rm Pr}(\eta_{12}|\psi_{12})=|\braket{\eta_{12}|\psi_{12}}|^2. 
\label{Born's rule} \end{eqnarray} Moreover, $\xi$ is assumed to fluctuate randomly on a microscopic time scale independent of the prepared quantum state $\ket{\psi_{12}}$, the spin observable $\hat{\sigma}_{\vec{n}_{\mu}}$ and the reference basis $\{\ket{\eta_{12}}\}$, with its first two moments given by \begin{eqnarray} \overline{\xi}\doteq\sum_{\xi}\xi\chi(\xi)=0,~~ \overline{\xi^2}=\hbar^2, \label{Planck constant} \end{eqnarray} where $\chi(\xi)$ is the probability distribution of $\xi$, and the summation is replaced by a suitable integration for a continuous $\xi$. The ensemble average of any function $f(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})$ of two c-valued spin variables for a given preparation $\ket{\psi_{12}}$ is then defined as in conventional probability theory: \begin{eqnarray} \braket{f(\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12}),\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12}))}\nonumber\\ \doteq\sum_{\eta_{12}}\sum_{\xi}f(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})\chi(\xi){\rm Pr}(\eta_{12}|\psi_{12}). \label{ensemble average} \end{eqnarray} The above definition of c-valued spin variables can be extended to general quantum observables acting on general quantum states \cite{Agung general c-valued physical quantities and uncertainty relation}. It was initially conceived for phase space variables to study a specific epistemic (i.e., statistical) restriction underlying the incompatibility between quantum observables for position and momentum \cite{Agung ERPS representation,Agung-Daniel model}. We note that the two terms on the right-hand side of Eq. (\ref{real continuum c-valued spin variable}) can be operationally interpreted respectively as the real and imaginary parts of the weak value obtained in weak measurement with postselection \cite{Aharonov weak value,Aharonov-Daniel book,Lundeen complex weak value,Jozsa complex weak value}. 
They can also be interpreted respectively as the optimal estimate of the left-hand side and the associated estimation error \cite{Agung epistemic interpretation,Agung estimation independence,Hall exact UR,Johansen weak value best estimation,Hall prior information,Hofmann imaginary part of weak value in optimal estimation}. However, in this work, we are not concerned with such operational interpretations. Rather, the c-valued spin variable is defined independently of any kind of measurement. Hence, unlike the weak value and the scheme of optimal estimation above, in this article, the reference basis is fixed once and for all, i.e., it cannot be varied freely by the experimenter. Notice first that, unlike the standard quantum spin values, given $\ket{\psi_{12}}$, $\{\ket{\eta_{12}}\}$ and $\xi$, the c-valued spin variable $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})$ defined in Eq. (\ref{real continuum c-valued spin variable}) always has a definite value for any direction $\vec{n}_{\mu}$. Moreover, $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})$ may take on a continuum of real values depending on the continuous parameterization of $\ket{\psi_{12}}$ and $\vec{n}_{\mu}$ associated with the spin observable $\hat{\sigma}_{\vec{n}_{\mu}}$, as expected for variables in classical mechanics; see an example below. Next, the value assignment of $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})$ of particle $\mu$ depends in general on the global prepared quantum state $\ket{\psi_{12}}$ of the two particles. 
In the specific case when the prepared quantum state is factorizable, i.e., $\ket{\psi_{12}}=\ket{\psi_1}\ket{\psi_2}$, where $\ket{\psi_1}$ and $\ket{\psi_2}$ are the quantum states associated with the independent preparation of the particle 1 and particle 2, respectively, then, noting that $\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}=\frac{\braket{\eta_{\mu}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{\mu}}}{\braket{\eta_{\mu}|\psi_{\mu}}}$, $\mu=1,2$, one has \begin{eqnarray} &&\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})\nonumber\\ &=&{\rm Re}\Big\{\frac{\braket{\eta_{\mu}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{\mu}}}{\braket{\eta_{\mu}|\psi_{\mu}}}\Big\}+\frac{\xi}{\hbar}{\rm Im}\Big\{\frac{\braket{\eta_{\mu}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{\mu}}}{\braket{\eta_{\mu}|\psi_{\mu}}}\Big\}\nonumber\\ &=&\tilde{s}_{\vec{n}_{\mu}}(\eta_{\mu},\xi|\psi_{\mu}), \label{real-global local deterministic continuum c-valued spin for separable state} \end{eqnarray} namely, given the value of $\xi$, the c-valued spin variable associated with particle $\mu$ is independent of that associated with particle $\nu$, $\mu\neq\nu$. However, even when the two particles are independently prepared, the c-valued spins associated with the two particles, i.e., $\tilde{s}_{\vec{n}_1}(\eta_1,\xi|\psi_1)$ and $\tilde{s}_{\vec{n}_2}(\eta_2,\xi|\psi_2)$, are in general instantaneously connected via the global variable $\xi$ regardless of their spatial distance. There is an exception. When $\ket{\psi_{\mu}}$ in Eq. (\ref{real-global local deterministic continuum c-valued spin for separable state}) is given by one of the eigenstates of $\hat{\sigma}_{\vec{n}_{\mu}}$, which is just the case after the measurement of $\hat{\sigma}_{\vec{n}_{\mu}}$, the second term on the right-hand side vanishes. 
Moreover, the first term is exactly equal to the eigenvalue $o_{\vec{n}_{\mu}}$ of $\hat{\sigma}_{\vec{n}_{\mu}}$ so that we have $\tilde{s}_{\vec{n}_{\mu}}=o_{\vec{n}_{\mu}}=\pm 1$ independent of $\xi$. Hence, in this specific case, the two c-valued spins associated with two independently prepared particles are fully independent of each other, given by the standard quantum spin values. Next, although the c-valued spin variables are always determinate in the absence of measurement, they satisfy a complementarity principle, in the following sense. Consider two spin operators $\hat{\sigma}_{\vec{n}_{\mu}}$ and $\hat{\sigma}_{\vec{n}'_{\mu}}$ associated with particle $\mu$, so that $[\hat{\sigma}_{\vec{n}_{\mu}},\hat{\sigma}_{\vec{n}'_{\mu}}]\neq 0$, $\mu=1,2$. Then, for any preparation $\ket{\psi_{12}}$, the associated c-valued spin variables, i.e., $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})$ and $\tilde{s}_{\vec{n}'_{\mu}}(\eta_{12},\xi|\psi_{12})$, cannot simultaneously be equal to $\pm 1$ independent of $\xi$ \cite{Agung general c-valued physical quantities and uncertainty relation}. For example, if for a given $\ket{\psi_{12}}$ we have $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})=\pm 1$ independent of $\xi$, which is the case when $\ket{\psi_{12}}$ is the eigenstate of $\hat{\sigma}_{\vec{n}_{\mu}}$, then we must have $\tilde{s}_{\vec{n}'_{\mu}}(\eta_{12},\xi|\psi_{12})\neq\pm 1$ fluctuating randomly with $\xi$, and vice versa. This captures the quantum complementarity between $\hat{\sigma}_{\vec{n}_{\mu}}$ and $\hat{\sigma}_{\vec{n}'_{\mu}}$ in the Copenhagen interpretation, that is, the two noncommuting spin operators cannot be jointly measured, and thus cannot be assigned $\pm 1$ values simultaneously. Indeed, like the standard quantum spin values, the variances of the c-valued spin variables defined in Eq. 
(\ref{real continuum c-valued spin variable}) satisfy the Heisenberg-Kennard-Robertson-Schr\"odinger uncertainty relation \cite{Agung general c-valued physical quantities and uncertainty relation}. The above observation shows that the c-valued spin variables defined in Eq. (\ref{real continuum c-valued spin variable}) share many qualitative features of the standard quantum spin values. Moreover, while the exact value of the c-valued spin variable depends on the choice of reference basis, its qualitative features do not. This suggests that the c-valued spin variables can be seen as a natural extension of the standard quantum spin values to the situation when there is no measurement. Additionally, as shown below, regardless of the choice of the reference basis, the local average and bipartite correlation of the c-valued spin variables prior to measurement are equal respectively to the local average and bipartite correlation of the associated standard quantum spin values obtained in measurement. For illustration and later reference, let us first consider the case when the prepared quantum state of the pair of particles is given by the singlet, i.e., \begin{eqnarray} \ket{\psi_{12}^{\mathcal S}}\doteq\frac{1}{\sqrt{2}}(\ket{01}-\ket{10}), \label{singlet state} \end{eqnarray} where $\{\ket{0},\ket{1}\}$ are the eigenstates of $\hat{\sigma}_{z}$. Let us choose the following complete set of vectors as the reference basis: $\{\ket{y+}\ket{x+},\ket{y+}\ket{x-},\ket{y-}\ket{x+},\ket{y-}\ket{x-}\}$, where $\ket{x\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$ and $\ket{y\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm i\ket{1})$. Moreover, without losing generality, assume that the spin operator of the first particle lies on the $xz$-plane with a polar angle $\theta_1$ with respect to the positive $z$-axis. Computing the c-valued spin $\tilde{s}_{\vec{n}_{\theta_1}}$ defined in Eq. 
(\ref{real continuum c-valued spin variable}), one obtains \begin{eqnarray} \tilde{s}_{\vec{n}_{\theta_1}}(y+,x+,\xi|\psi_{12}^{\mathcal S})&=&-\sin\theta_1-\frac{\xi}{\hbar}\cos\theta_1;\nonumber\\ \tilde{s}_{\vec{n}_{\theta_1}}(y+,x-,\xi|\psi_{12}^{\mathcal S})&=&\sin\theta_1+\frac{\xi}{\hbar}\cos\theta_1;\nonumber\\ \tilde{s}_{\vec{n}_{\theta_1}}(y-,x+,\xi|\psi_{12}^{\mathcal S})&=&-\sin\theta_1+\frac{\xi}{\hbar}\cos\theta_1;\nonumber\\ \tilde{s}_{\vec{n}_{\theta_1}}(y-,x-,\xi|\psi_{12}^{\mathcal S})&=&\sin\theta_1-\frac{\xi}{\hbar}\cos\theta_1. \label{c-valued spin for along theta for singlet} \end{eqnarray} Hence, it varies continuously with the direction of the spin observable parameterized by $\theta_1$, as classical angular momentum. Let us proceed to consider the case when $\vec{n}_1=\vec{n}=\vec{n}_2$, i.e., the spin observables of the two particles are pointing along the same direction. Then, noting that $(\hat{\sigma}_{\vec{n}_1}\otimes\hat{\mathbb{I}}_2)\ket{\psi_{12}^{\mathcal S}}=-(\hat{\mathbb{I}}_1\otimes\hat{\sigma}_{\vec{n}_2})\ket{\psi_{12}^{\mathcal S}}$, and inserting into Eq. (\ref{real continuum c-valued spin variable}), we have \begin{eqnarray} \tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi^{\mathcal S}_{12})=-\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi^{\mathcal S}_{12}), \label{opposite spinning pair for an singlet state} \end{eqnarray} i.e., they are always perfectly anti-correlated like the associated standard quantum spin values. Hence, the conservation of spin angular momentum for singlet holds even in the absence of measurement, as expected in classical mechanics. Now, consider the case when $\vec{n}_1$ and $\vec{n}_2$ are coplanar lying on the $xz$-plane tilted from the positive $z$-axis with polar angles respectively given by $\theta_1$ and $\theta_2$. Computing the correlation between the c-valued spins $\tilde{s}_{\vec{n}_{\theta_1}}$ and $\tilde{s}_{\vec{n}_{\theta_2}}$ associated with the singlet state, one recovers, using Eqs. 
(\ref{c-valued spin for along theta for singlet}) and noting (\ref{opposite spinning pair for an singlet state}), the correlation between the associated standard quantum spin values for the singlet state which violates the Bell inequalities: \begin{eqnarray} &&\braket{\tilde{s}_{\vec{n}_{\theta_1}}(\eta_{12},\xi|\psi^{\mathcal{S}}_{12})\tilde{s}_{\vec{n}_{\theta_2}}(\eta_{12},\xi|\psi^{\mathcal{S}}_{12})}\nonumber\\ &&\doteq\sum_{\eta_{12}}\sum_{\xi}~\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi^{\mathcal{S}}_{12})\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi^{\mathcal{S}}_{12})\nonumber\\ &&\hspace{5mm}\times\chi(\xi){\rm Pr}(\eta_{12}|\psi_{12})\nonumber\\ &&=-\sin\theta_1\sin\theta_2-\cos\theta_1\cos\theta_2=-\cos(\theta_2-\theta_1)\nonumber\\ &&=\braket{\psi^{\mathcal{S}}_{12}|(\hat{\sigma}_{\vec{n}_{\theta_1}}\otimes\hat{\sigma}_{\vec{n}_{\theta_2}})|\psi^{\mathcal{S}}_{12}}, \end{eqnarray} where we have used Eq. (\ref{Planck constant}) and noted that ${\rm Pr}(\pm y,\pm x|\psi^{\mathcal{S}}_{12})=|\braket{\pm y,\pm x|\psi^{\mathcal{S}}_{12}}|^2=1/4$. Indeed, the above result applies to general cases, as stated by the following theorem. \\ {\bf Theorem 1}:\\ The statistical correlation between two continuum c-valued spins $\tilde{s}_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_2}$ along any pair of directions $(\vec{n}_1,\vec{n}_2)$, within any reference basis $\{\ket{\eta_{12}}\}$, for an arbitrary prepared quantum state $\ket{\psi_{12}}$, is equal to the correlation between the binary standard quantum spin values obtained from the measurement of the quantum spin observables $(\hat{\sigma}_{\vec{n}_1}\otimes\hat{\sigma}_{\vec{n}_2})$ over $\ket{\psi_{12}}$, i.e., \begin{eqnarray} \braket{\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12})\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12})}=\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_1}\otimes\hat{\sigma}_{\vec{n}_2}|\psi_{12}}. 
\label{Theorem 1} \end{eqnarray} This theorem is a special case of a theorem presented in previous work \cite{Agung general c-valued physical quantities and uncertainty relation}. Moreover, taking $\hat{\sigma}_{\nu}=\hat{\mathbb{I}}_{\nu}$, and noting that $\tilde{\mathbb{I}}_{\nu}(\eta_{12},\xi|\psi_{12})=1$, we have $\braket{\tilde{s}_{\vec{n}_\mu}(\eta_{12},\xi|\psi_{12})}=\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_\mu}\otimes\hat{\mathbb{I}}_{\nu}|\psi_{12}}={\rm Tr}_{\mu}\{\hat{\sigma}_{\vec{n}_\mu}\hat{\varrho}_{\mu}\}$, where $\hat{\varrho}_{\mu}={\rm Tr}_{\nu}\{\ket{\psi_{12}}\bra{\psi_{12}}\}$, $\nu\neq\mu$, i.e., the local ensemble average of the c-valued spin variable for any $\ket{\psi_{12}}$ is also equal to the local quantum expectation value. We note that, crucially, to arrive at the equality of Eq. (\ref{Theorem 1}), $\xi$ must indeed be global-nonseparable. Such a global-nonseparable variable $\xi$ presumes a preferred spacetime reference frame violating the Lorentz invariance underlying the theory of relativity. At first sight, due to the nonseparability of $\xi$, which instantaneously connects the two c-valued spins, the model apparently cannot reconstruct the quantum correlation when the two particles are independently prepared so that the associated quantum state is factorizable, i.e., $\ket{\psi_{12}}=\ket{\psi_1}\ket{\psi_2}$. Remarkably, however, this is not the case, since Theorem 1 applies to general quantum states, factorizable or not. Finally, note that Eq. (\ref{Theorem 1}) still applies if we replace the c-valued spin variables on the left-hand side with the associated weak value of spin \cite{Hosoya-Shikano counterfactual value,Hall weak value to quantum uncertainty}. However, the weak values may take complex values when $\ket{\psi_{12}}$ is entangled (one can show that when $\ket{\psi_{12}}$ is factorizable, the real parts of the weak values are sufficient to reconstruct the quantum correlation). 
In contrast to this, the c-valued spin variables defined in Eq. (\ref{real continuum c-valued spin variable}), which are always real, allow for the reconstruction of the quantum spin correlations for arbitrary quantum states at the cost of introducing the global-nonseparable variable $\xi$. Now, according to standard quantum mechanics, the measurement of $\hat{\sigma}_{\vec{n}_{\mu}}$ inevitably projects the prepared quantum state $\ket{\psi_{12}}$ randomly onto one of the eigenstates $\ket{\phi_{12}^{\vec{n}_{\mu}}}$ of $\hat{\sigma}_{\vec{n}_{\mu}}$, i.e., $\ket{\psi_{12}}\mapsto \ket{\phi_{12}^{\vec{n}_{\mu}}}$, with the measurement outcomes given by the associated eigenvalues $o_{\vec{n}_{\mu}}=\pm 1$. Moreover, recall that by evaluating the associated c-valued spin variable defined in Eq. (\ref{real continuum c-valued spin variable}) for these post-measurement quantum states $\ket{\phi_{12}^{\vec{n}_{\mu}}}$, we regain the standard quantum spin values, i.e., $\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\phi_{12}^{\vec{n}_{\mu}})=o_{\vec{n}_{\mu}}=\pm 1$. This observation suggests a model for quantum spin measurement wherein measurement maps the c-valued spin variables from the continuous range of possible real values prior to measurement, onto the binary values $\pm 1$, i.e., \begin{eqnarray} \mathbb{R}\ni\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\psi_{12})\mapsto\tilde{s}_{\vec{n}_{\mu}}(\eta_{12},\xi|\phi_{12}^{\vec{n}_{\mu}})\in\{-1,1\}, \label{mapping from continuum real number to binary in quantum measurement} \end{eqnarray} while preserving the bipartite correlation as per Theorem 1. Hence, we have upgraded the conservation of bipartite correlation to a principle governing measurement. In such a model, the restriction imposed by the violation of Bell inequalities on the statistics of the measurement outcomes $\pm 1$ for entangled states thus arises from, and is compelled by, the requirement of conservation of correlation in measurement. 
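The pre-measurement correlation that this mapping must preserve can be computed directly from the definition of the c-valued spin variables. The following sketch (assuming NumPy, setting $\hbar=1$, and using the fact that $\xi$ enters the ensemble average only through its first two moments) verifies the equality of Theorem 1 for the singlet and the reference basis used earlier:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma(theta):
    # Spin operator in the xz-plane, polar angle theta from the +z axis
    return np.sin(theta) * sx + np.cos(theta) * sz

# Singlet (|01> - |10>)/sqrt(2) and the reference basis {|y+->|x+->}
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
xp, xm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
yp, ym = (ket0 + 1j * ket1) / np.sqrt(2), (ket0 - 1j * ket1) / np.sqrt(2)
basis = [np.kron(a, b) for a in (yp, ym) for b in (xp, xm)]

def c_spin_correlation(theta1, theta2):
    # Averaging over xi (mean 0, variance hbar^2 = 1) leaves
    # sum_eta P(eta) [Re(w1)Re(w2) + Im(w1)Im(w2)], where
    # w_mu = <eta|sigma_mu|psi>/<eta|psi> as in the c-valued spin definition
    A = np.kron(sigma(theta1), I2) @ psi
    B = np.kron(I2, sigma(theta2)) @ psi
    total = 0.0
    for eta in basis:
        amp = np.vdot(eta, psi)            # <eta|psi>
        w1, w2 = np.vdot(eta, A) / amp, np.vdot(eta, B) / amp
        total += abs(amp) ** 2 * (w1.real * w2.real + w1.imag * w2.imag)
    return total

th1, th2 = 0.3, 1.1
lhs = c_spin_correlation(th1, th2)
rhs = np.vdot(psi, np.kron(sigma(th1), sigma(th2)) @ psi).real
print(lhs, rhs, -np.cos(th2 - th1))  # the three numbers coincide
```

The agreement is basis-independent: the sum over $\eta_{12}$ reduces to $\mathrm{Re}\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_2}\hat{\sigma}_{\vec{n}_1}|\psi_{12}}$ via the completeness of the reference basis.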
A few remarks are in order. First, let us emphasize that it is the statistical correlation that is preserved by the measurement, not the value assignment of the c-valued spin variables. We have thus relaxed the requirement for measurement in classical mechanics: i.e., rather than revealing the values of the variables prior to measurement, it is only required to reveal the bipartite correlation (and also the local average) prior to measurement. Hence, the measurement-induced disturbance must comply with the principle of conservation of bipartite correlation. Next, when the prepared state is entangled, one finds that there is a nonlocal dependence of the value assignment of the c-valued spin variable of one particle on the spin measurement of the other (possibly arbitrarily remote) particle. For example, when the prepared state is a singlet, a spin measurement along the direction $\vec{n}_2=\vec{n}$ of particle $2$ with the outcome $+1$ ($-1$) will need to be accompanied by the mapping of $\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12}^{\mathcal{S}})$ assigned to particle $1$, where $\vec{n}_1=\vec{n}$, from its value prior to measurement given by Eq. (\ref{c-valued spin for along theta for singlet}), onto the binary standard quantum spin value $-1$ ($+1$). However, since the statistics of the standard quantum spin values follows Born's rule, such a nonlocal value assignment cannot be used for signalling. Hence, we have assumed that quantum bipartite correlation exists prior to measurement in terms of the correlation between the real c-valued spin variables. Remarkably, the c-valued spin variables can be constructed operationally via weak measurement with postselection and a classical postprocessing involving $\xi$. 
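The no-signalling remark can be made concrete for the singlet: the binary outcomes follow the joint Born probability $P(o_1,o_2)=\frac{1}{4}\big(1+o_1 o_2 E\big)$ with $E=-\cos(\theta_2-\theta_1)$, so their correlation reproduces $E$ while each local marginal remains $1/2$ whatever the remote measurement direction is. A minimal sketch:

```python
import math

def born_joint(o1, o2, theta1, theta2):
    # Joint Born probability of outcomes (o1, o2) = (+/-1, +/-1) for spin
    # measurements at angles theta1, theta2 on the singlet:
    # P(o1, o2) = (1 + o1*o2*E)/4 with E = -cos(theta2 - theta1).
    return (1 + o1 * o2 * (-math.cos(theta2 - theta1))) / 4

th1, th2 = 0.3, 1.1
corr = sum(o1 * o2 * born_joint(o1, o2, th1, th2)
           for o1 in (1, -1) for o2 in (1, -1))
marg = sum(born_joint(+1, o2, th1, th2) for o2 in (1, -1))
print(corr)  # -cos(th2 - th1): the bipartite correlation is preserved
print(marg)  # 0.5: Alice's marginal is independent of Bob's angle
```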
This correlation between the real c-valued spin variables cannot be explained locally in terms of the correlation between classical variables in spacetime, due to the dependence of the c-valued spin variables on the fluctuations of the global variable $\xi$. Joint spin measurement of the two particles preserves the correlation. Moreover, after the measurement, the dependence of the c-valued spin variables (now equal to the standard quantum spin values) on the global variable $\xi$ disappears. But the nonclassicality reappears in the form of a nonlocal dependence of the c-valued spin variable of one particle on the measurement of the other remote particle. \section{A game of joint mapping under conservation of correlation} The Bell theorem is most eloquently described in terms of spacelike coordination games which smartly exploit the classically counterintuitive features of quantum entanglement. For example, in the well-known CHSH game \cite{CHSH inequality,Jennings-Leifer review paper - nonclassicality}, Alice and Bob, spatially separated from each other, are required to independently come out with a pair of outputs based on a pair of inputs given randomly by a referee, Charlie, so that the outputs and inputs satisfy a simple arithmetic relation: \begin{eqnarray} xy=a+b~ ({\rm mod}~2). \label{CHSH condition} \end{eqnarray} Here, $a$ is Alice's output given input $x$, and $b$ is Bob's output given input $y$, where $x,y,a,b\in\{0,1\}$. Namely, to win the game, Alice and Bob must pop out different outputs, i.e., $a\neq b$, when their inputs are $x=y=1$, and pop out the same output, i.e., $a=b$, when at least one of their inputs is 0. One can show that if they only use a classical (i.e., local and deterministic) joint strategy, their winning probability is at most $3/4$ (assuming that all four combinations of the inputs are equally sampled), a form of the Bell inequalities. 
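The classical bound of $3/4$ can be checked by brute force, since a deterministic local strategy is just a pair of functions $x\mapsto a$ and $y\mapsto b$, leaving only $4\times 4=16$ joint strategies to enumerate. A minimal sketch:

```python
import itertools

# Enumerate all deterministic local strategies for the CHSH game:
# Alice's strategy is a function x -> a, Bob's is y -> b (4 choices each).
best = 0.0
for a_strat in itertools.product([0, 1], repeat=2):   # (a|x=0, a|x=1)
    for b_strat in itertools.product([0, 1], repeat=2):
        wins = sum(1 for x in (0, 1) for y in (0, 1)
                   if (a_strat[x] + b_strat[y]) % 2 == x * y)
        best = max(best, wins / 4)  # inputs sampled uniformly
print(best)  # 0.75: no classical joint strategy does better
```

The maximum $3/4$ is attained, e.g., by the constant strategy $a=b=0$, which fails only on the input pair $x=y=1$.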
Surprisingly, if Alice and Bob share an ensemble of entangled qubits and have access to local spin measurement devices, they can win the game with a larger probability, as large as $(2+\sqrt{2})/4$ \cite{Tsirelson bound}. While the above game and other similar games \cite{Aravind game} strikingly show that entanglement is a nonclassical resource in certain protocols of information processing involving spacelike separated parties, the apparently mathematically simple winning condition of Eq. (\ref{CHSH condition}) is difficult to fathom in physical terms. What does the condition of Eq. (\ref{CHSH condition}) tell us about Nature so that it distinguishes quantum strategies from classical strategies? Can we develop a different game with a winning condition that forces the violation of the Bell inequalities, which is more transparent and physically plausible, so that it can be upgraded to an axiom? Moreover, in the CHSH game, quantum measurement is treated as a total black box \cite{Popescu - Daniel PR box,Popescu review on PR box}, so that the physical constraint which compels the measurement to violate the Bell inequalities is not clear. Note that, within this point of view, nonlocality and/or indeterminism are not the constraints which force the measurement to violate the Bell inequalities; rather, they are the tricks that are possibly used by the measurement to satisfy the constraint. They (like time dilation in the theory of special relativity) should not therefore be upgraded to axioms; rather, they are the implications of a deeper physical constraint. But what is this deep physical constraint? Here, with Theorem 1 in mind, we construct a different two-party coordination game which to an extent captures the model of measurement speculated in the last paragraphs of the previous Section, as follows. 
First, a referee, Charlie, and two players, Alice and Bob, situated sufficiently far away from each other, agree on a choice of a complex-valued vector $\ket{\psi_{12}}$ in the computational basis. At each round of the game, Charlie samples a pair of random variables $(\eta_{12},\xi)$ from the joint probability distribution ${\rm Pr}(\eta_{12},\xi|\chi,\psi_{12})=|\braket{\eta_{12}|\psi_{12}}|^2\chi(\xi)$. Charlie then randomly chooses a pair of unit vectors, denoted respectively by $\vec{n}_1$ and $\vec{n}_2$, and uses them to compute $\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12})$ and $\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12})$ using the prescription in Eq. (\ref{real continuum c-valued spin variable}). In this way, these sets of numbers are effectively sampled from the joint probability distribution \begin{eqnarray} &&{\rm Pr}(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi|\vec{n}_1,\vec{n}_2,\psi_{12})\nonumber\\ &&=\delta\Big(\tilde{s}_{\vec{n}_1};{\rm Re}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_1}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}+\frac{\xi}{\hbar}{\rm Im}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_1}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}\Big)\nonumber\\ &&\times\delta\Big(\tilde{s}_{\vec{n}_2};{\rm Re}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_2}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}+\frac{\xi}{\hbar}{\rm Im}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_2}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}\Big)\nonumber\\ &&\times\chi(\xi)\big|\braket{\eta_{12}|\psi_{12}}\big|^2. \label{join factorizable distribution of continuum c-valued spins} \end{eqnarray} Here $\delta(k;l)$ is the Kronecker delta, i.e., $\delta(k;l)=1$ if $k=l$, and $\delta(k;l)=0$ if $k\neq l$. Next, Charlie sends the triple of random variables $\{\eta_{12},\xi,\tilde{s}_{\vec{n}_1}\}$ to Alice, and $\{\eta_{12},\xi,\tilde{s}_{\vec{n}_2}\}$ to Bob. 
Given all the above information, Alice and Bob's joint task is to pick a pair of binary numbers, either $1$ or $-1$, independently of each other. Hence, denoting Alice's output as a binary random variable $o_{\vec{n}_1}$ and Bob's as $o_{\vec{n}_2}$, their task is essentially to independently map the pair of real numbers $(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})$ onto a pair of binary numbers $(o_{\vec{n}_1},o_{\vec{n}_2})$, i.e., \begin{eqnarray} \mathcal{F}_{12}[\psi_{12},\eta_{12},\xi]:(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})\mapsto (o_{\vec{n}_1},o_{\vec{n}_2}), \label{coordinate mapping} \end{eqnarray} where $\tilde{s}_{\vec{n}_{\mu}}\in\mathbb{R}$ and $o_{\vec{n}_{\mu}}\in\{+1,-1\}$, $\mu=1,2$, and $\mathcal{F}_{12}$ describes their joint strategy. They can devise any classical algorithm or strategy to accomplish the task and program it into their computational devices together before moving separately to their laboratories. Since Alice and Bob perform the mappings independently of each other, the conditional probability that Alice outputs $o_{\vec{n}_1}$ is statistically independent of $o_{\vec{n}_2}$ and $\tilde{s}_{\vec{n}_2}$, i.e., ${\rm Pr}(o_{\vec{n}_1}|o_{\vec{n}_2},\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})={\rm Pr}(o_{\vec{n}_1}|\tilde{s}_{\vec{n}_1},\eta_{12},\xi,\psi_{12})$, and likewise the probability that Bob outputs $o_{\vec{n}_2}$ is independent of $o_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_1}$, i.e., ${\rm Pr}(o_{\vec{n}_2}|o_{\vec{n}_1},\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})={\rm Pr}(o_{\vec{n}_2}|\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})$, so that we have the factorizability condition: \begin{eqnarray} &&{\rm Pr}(o_{\vec{n}_1},o_{\vec{n}_2}|\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})\nonumber\\ &=&{\rm Pr}(o_{\vec{n}_1}|\tilde{s}_{\vec{n}_1},\eta_{12},\xi,\psi_{12}){\rm Pr}(o_{\vec{n}_2}|\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12}).
\label{factorizability condition} \end{eqnarray} We then say they win the game if the statistical correlation between $o_{\vec{n}_1}$ and $o_{\vec{n}_2}$, obtained by repeating the above protocol (in principle infinitely) many times, is equal to the initial correlation between $\tilde{s}_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_2}$, i.e., \begin{eqnarray} &&\sum_{(\eta_{12},\xi)}\sum_{(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})}\sum_{(o_{\vec{n}_1},o_{\vec{n}_2})}~o_{\vec{n}_1}o_{\vec{n}_2}\nonumber\\ &&\times{\rm Pr}(o_{\vec{n}_1},o_{\vec{n}_2}|\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})\nonumber\\ &&\times{\rm Pr}(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi|\vec{n}_1,\vec{n}_2,\psi_{12})\nonumber\\ &&=\braket{\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12})\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12})}. \label{conservation of correlation} \end{eqnarray} To summarize, what Alice and Bob have to do is to independently map the pair of random variables $(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})$, which may take continuous real values depending on the choice of $(\vec{n}_1,\vec{n}_2)$, onto binary random variables $(o_{\vec{n}_1},o_{\vec{n}_2})$, based on a joint strategy, so that the resulting correlation between $o_{\vec{n}_1}$ and $o_{\vec{n}_2}$ preserves the initial correlation between $\tilde{s}_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_2}$. We argue below that there is a class of games wherein no classical joint strategy can ever win, as stated by the following theorem.\\ {\bf Theorem 2}:\\ For a class of games with initial value of correlation $\braket{\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12})\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12})}$ associated with a nonfactorizable complex vector $\ket{\psi_{12}}$, no classical (i.e., local and deterministic) joint strategy of Alice and Bob will win the spacelike game of joint mapping, i.e., their mappings must violate the conservation of correlation of Eq.
(\ref{conservation of correlation}). \\ {\bf Proof:}\\ First, combining Eq. (\ref{conservation of correlation}) with Eq. (\ref{Theorem 1}) of Theorem 1, and noting Eq. (\ref{factorizability condition}), to win the game, Alice and Bob's joint strategy must yield outcomes which satisfy the following relation: \begin{eqnarray} &&\sum_{(\eta_{12},\xi)}\sum_{(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})}\sum_{(o_{\vec{n}_1},o_{\vec{n}_2})}~o_{\vec{n}_1}o_{\vec{n}_2}\nonumber\\ &\times&{\rm Pr}(o_{\vec{n}_1}|\tilde{s}_{\vec{n}_1},\eta_{12},\xi,\psi_{12}){\rm Pr}(o_{\vec{n}_2}|\tilde{s}_{\vec{n}_2},\eta_{12},\xi,\psi_{12})\nonumber\\ &\times&{\rm Pr}(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2},\eta_{12},\xi|\vec{n}_1,\vec{n}_2,\psi_{12})\nonumber\\ &=&\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_1}\otimes\hat{\sigma}_{\vec{n}_2}|\psi_{12}}. \label{condition for winning pre} \end{eqnarray} Next, inserting Eq. (\ref{join factorizable distribution of continuum c-valued spins}) into Eq. (\ref{condition for winning pre}), we get, after summing over $\tilde{s}_{\vec{n}_{\mu}}$, $\mu=1,2$, \begin{eqnarray} &&\sum_{(\eta_{12},\xi)}\sum_{(o_{\vec{n}_1},o_{\vec{n}_2})}~~o_{\vec{n}_1}o_{\vec{n}_2}\nonumber\\ &&\times{\rm Pr}(o_{\vec{n}_1}|\vec{n}_1,\eta_{12},\xi,\psi_{12}){\rm Pr}(o_{\vec{n}_2}|\vec{n}_2,\eta_{12},\xi,\psi_{12})\nonumber\\ &&\times{\rm Pr}(\eta_{12},\xi|\chi,\psi_{12})=\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_1}\otimes\hat{\sigma}_{\vec{n}_2}|\psi_{12}}, \label{condition for winning} \end{eqnarray} where we have defined the conditional probabilities as \begin{eqnarray} &&{\rm Pr}(o_{\vec{n}_{\mu}}|\vec{n}_{\mu},\eta_{12},\xi,\psi_{12})\doteq\sum_{\tilde{s}_{\vec{n}_{\mu}}}{\rm Pr}(o_{\vec{n}_{\mu}}|\tilde{s}_{\vec{n}_{\mu}},\eta_{12},\xi,\psi_{12})\nonumber\\ &&\times\delta\Big(\tilde{s}_{\vec{n}_{\mu}};{\rm Re}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}+\frac{\xi}{\hbar}{\rm
Im}\Big\{\frac{\braket{\eta_{12}|\hat{\sigma}_{\vec{n}_{\mu}}|\psi_{12}}}{\braket{\eta_{12}|\psi_{12}}}\Big\}\Big), \nonumber\\ \label{conditional local probability} \end{eqnarray} $\mu=1,2$. The condition for winning the game of Eq. (\ref{condition for winning}) therefore requires the two players to reconstruct the quantum spin correlation on the right-hand side, using a local hidden variable or local causal model given on the left-hand side. Noting this, when $\ket{\psi_{12}}$ is nonfactorizable, according to Bell's theorem \cite{Bell's theorem,CHSH inequality}, no joint classical strategy of Alice and Bob is able to satisfy Eq. (\ref{condition for winning}). Namely, for the class of games wherein the initial correlation between $\tilde{s}_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_2}$ is equal to the quantum spin correlation over an entangled quantum state $\ket{\psi_{12}}$ (per Theorem 1), Alice's and Bob's outputs will always violate the constraint of conservation of correlation of Eq. (\ref{conservation of correlation}). We note that an infinite number of rounds is needed to compute the correlation exactly. One can however develop a winning criterion for a finite number of rounds, by looking at the convergence rate of the finite-ensemble correlation. As a concrete example, we can follow the CHSH set-up \cite{CHSH inequality} to run the game. Namely, at each round of the game, Charlie randomly chooses one pair out of four alternative pairs of unit vectors, i.e., $(\vec{n}_1,\vec{n}_2)$, $(\vec{n}_1,\vec{n}'_2)$, $(\vec{n}'_1,\vec{n}_2)$, $(\vec{n}_1',\vec{n}'_2)$, and uses them to compute a pair of c-valued spin variables. Let us denote the correlation between the outputs, e.g., $o_{\vec{n}_1}$ and $o_{\vec{n}_2}$, i.e., Alice's output when she is given $\tilde{s}_{\vec{n}_1}$ and Bob's output when he is given $\tilde{s}_{\vec{n}_2}$, as $C_{\vec{n}_1\vec{n}_2}$.
Then, assuming that the four pairs of unit vectors are sampled equally likely, if Alice and Bob's joint strategy is classical, the correlation of their outputs must satisfy the Bell-CHSH inequality, i.e., $C_{12}^{\rm CHSH}\doteq C_{\vec{n}_1\vec{n}_2}+C_{\vec{n}_1\vec{n}'_2}+C_{\vec{n}'_1\vec{n}_2}-C_{\vec{n}'_1\vec{n}'_2}\le 2$. On the other hand, since the original correlation between the real valued variables $\tilde{s}_{\vec{n}_1}$ and $\tilde{s}_{\vec{n}_2}$ is equal to the quantum spin correlation (as per Theorem 1), when $\ket{\psi_{12}}$ is entangled, computing the CHSH correlation $C_{12}^{\rm CHSH}$ for the associated c-valued spin variables will yield a value larger than 2, with a maximum value $2\sqrt{2}$. Hence, when the original correlation between the pair of real-valued variables corresponds to a nonfactorizable vector $\ket{\psi_{12}}$, the fact that these correlations violate Bell inequalities says that no classical strategy of the joint mappings will ever win the game. If Alice and Bob have quantum circuits, and $\vec{n}_1,\vec{n}'_1,\vec{n}_2,\vec{n}'_2$ are coplanar so that their directions are parametrized only by polar angles, then they can always win the game by running the following strategy. First, they need to physically encode the complex vector $\ket{\psi_{12}}$ which generates the joint probability distribution of real-valued numbers of Eq. (\ref{join factorizable distribution of continuum c-valued spins}), as an ensemble of entangled pairs of spin-$\frac{1}{2}$ particles (or entangled pairs of any physical qubits) in the quantum state $\ket{\psi_{12}}$. Alice then stores one particle of each entangled pair in her circuit, Bob the other, and they bring them to their separate labs. Next, Alice, upon receiving the triple $\{\eta_{12},\xi,\tilde{s}_{\vec{n}_1}\}$ from Charlie, infers $\vec{n}_1$ using the relation of Eq. (\ref{real continuum c-valued spin variable}).
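To illustrate with concrete (hypothetical) angles of our own choosing: for the singlet, the quantum spin correlation between coplanar directions at polar angles $\theta_1,\theta_2$ is $-\cos(\theta_1-\theta_2)$, and a suitable choice of the four directions attains the quantum maximum $2\sqrt{2}$ of the CHSH combination above.

```python
import math

def E(theta1, theta2):
    # singlet correlation for coplanar measurement directions:
    # <sigma_n1 (x) sigma_n2> = -cos(theta1 - theta2)
    return -math.cos(theta1 - theta2)

# one of many optimal choices of the four directions (illustrative)
t1, t1p = 0.0, math.pi / 2                    # Alice's two polar angles
t2, t2p = 5 * math.pi / 4, 3 * math.pi / 4    # Bob's two polar angles

# the combination C_{n1 n2} + C_{n1 n2'} + C_{n1' n2} - C_{n1' n2'}
chsh = E(t1, t2) + E(t1, t2p) + E(t1p, t2) - E(t1p, t2p)
# classical strategies obey chsh <= 2; here chsh = 2*sqrt(2) ~ 2.828
```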
For example, in the case when $\ket{\psi_{12}}$ is given by the singlet of Eq. (\ref{singlet state}) with the reference basis $\{\ket{\eta_{12}}\}=\{\ket{y+}\ket{x+},\ket{y+}\ket{x-},\ket{y-}\ket{x+},\ket{y-}\ket{x-}\}$, its polar angle $\theta_1$ can be easily inferred from $\{\eta_{12},\xi,\tilde{s}_{\vec{n}_1}\}$ using Eq. (\ref{c-valued spin for along theta for singlet}). Likewise, Bob, upon receiving the triple $\{\eta_{12},\xi,\tilde{s}_{\vec{n}_2}\}$ from Charlie, infers $\vec{n}_2$ using the relation of Eq. (\ref{real continuum c-valued spin variable}). They then use the inferred unit vectors as the directions along which they make local spin measurements on their respective particles. Namely, Alice measures $\hat{\sigma}_{\vec{n}_1}$ on her particle, and similarly Bob measures $\hat{\sigma}_{\vec{n}_2}$ on his particle, yielding outcomes $\pm 1$ randomly. Alice assigns her outcomes to $o_{\vec{n}_1}$, and Bob to $o_{\vec{n}_2}$. In this sense, they map $(\tilde{s}_{\vec{n}_1},\tilde{s}_{\vec{n}_2})$ onto $(o_{\vec{n}_1},o_{\vec{n}_2})$, using the entangled particles and local spin measurement devices. By construction, the correlation between $o_{\vec{n}_1}$ and $o_{\vec{n}_2}$ is given by the quantum spin correlation $\braket{\psi_{12}|\hat{\sigma}_{\vec{n}_1}\otimes \hat{\sigma}_{\vec{n}_2}|\psi_{12}}$. Theorem 1 then guarantees that this correlation between the binary standard quantum spin values is equal to the original correlation between the continuum c-valued spins $\braket{\tilde{s}_{\vec{n}_1}(\eta_{12},\xi|\psi_{12})\tilde{s}_{\vec{n}_2}(\eta_{12},\xi|\psi_{12})}$. Hence, it satisfies the requirement to win the game, i.e., the constraint of conservation of correlation of Eq. (\ref{conservation of correlation}). It is thus clear that quantum entangled states are the nonclassical resource for winning the above statistical game of spacelike joint mappings under correlation conservation.
What is special about the mapping generated by the local spin measurement over the entangled quantum states so that it can be used to win the game while any classical strategy must fail? The basic assumption underlying the classical strategy is that the joint independent mapping of Eq. (\ref{coordinate mapping}) can be represented by the conditional probabilities ${\rm Pr}(o_{\vec{n}_{\mu}}|\tilde{s}_{\vec{n}_{\mu}},\eta_{12},\xi,\psi_{12})$, $\mu=1,2$, implying the factorizability condition of Eq. (\ref{factorizability condition}). Hence, the mapping generated by the local spin measurements over the entangled states somehow violates this plausible assumption, either by allowing nonlocal influence, so that the conditional probability of $o_{\vec{n}_{\mu}}$ may depend on $o_{\vec{n}_{\nu}}$ or $\tilde{s}_{\vec{n}_{\nu}}$, $\nu\neq\mu$, or by being acausal, so that the above conditional probabilities simply cannot be defined. It is intriguing to ponder how the above game of joint mapping under conservation of correlation is related to what is really happening in the spin measurements in Bell-type experiments. Finally, we emphasize that it is the requirement of conservation of correlation which forces any strategy to comply with Bell's theorem, so that it must violate the Bell inequalities when $\ket{\psi_{12}}$ is nonfactorizable. We further note that while the protocol of the game of joint mapping is not as simple as the protocol of the CHSH game discussed at the beginning of this section, the requirement of conservation of correlation is much easier to grasp in physical terms than the winning condition of the CHSH game of Eq. (\ref{CHSH condition}). Conservation of correlation appeals directly to intuition, and moreover, conservation laws have played prominent roles in the past in the construction of physical theories.
\section{Conclusion} The empirical violation of Bell inequalities \cite{Hensen loophole free Bell test,Giustina loophole free Bell test,Shalm loophole free Bell test} implies that we must give up, as far as measurement is concerned, at least one of the following: realism, locality, and free choice. This suggests that there must be a deep principle which measurement cannot resist obeying, so that it is willing to sacrifice such intuitive and plausible concepts. To study this problem, we have assumed that a spin-$\frac{1}{2}$ particle (or any generic physical qubit) can always be assigned a definite c-valued spin variable regardless of measurement. The c-valued spin variable may take any continuum real value in the absence of measurement, and reduces to the binary values $\pm 1$ after the measurement, reproducing the standard values of quantum spin. Moreover, the bipartite correlation of the c-valued spin variables prior to measurement is always equal to the quantum correlation obtained in quantum spin measurement. This motivates a speculation that quantum spin measurement maps the c-valued spin variables from a continuous range of possible real values onto the binary values $\pm 1$, while respecting the principle of conservation of the correlation. In such a model, it is the plausible requirement of conservation of bipartite correlation which compels the measurement to violate the Bell inequalities when the prepared state is entangled. We then constructed a statistical game of joint mappings which, to an extent, captures the above model of measurement. Alice and Bob, separated sufficiently far away from each other, are asked to map, independently, a pair of real numbers sampled from a specific distribution, onto a pair of binary numbers $\pm 1$, with the condition that the statistical correlation is preserved.
The winning condition of correlation conservation forces the game to comply with Bell's theorem, which implies that, for a certain class of games associated with a nonfactorizable vector in Hilbert space, Alice and Bob can never win the game using any classical (i.e., local and deterministic) joint strategy. They can instead easily win the game with a quantum strategy using an ensemble of entangled spin-$\frac{1}{2}$ particles (qubits) and quantum circuits for local spin measurement. The game suggests that quantum protocols utilizing entanglement may exhibit quantum advantage, by way of violating Bell inequalities, in information processing tasks requiring conservation of bipartite correlation. \begin{acknowledgments} This work is partly funded by Lembaga Penelitian dan Pengabdian Masyarakat, Institut Teknologi Bandung, under the program of research assignment with the contract number: 2971/IT1.B07.1/TA.00/2021. It is also in part supported by the Indonesia Ministry of Research, Technology, and Higher Education through PDUPT research scheme with the contract number: 2/E1/KP.PTNBH/2019. I would like to thank the anonymous Referees for constructive criticism and generous recommendation, and Hermawan K. Dipojono for useful discussion. \end{acknowledgments}
\section{Introduction} We study two well-known representations of uncertain texts: \emph{weighted sequences} and \emph{profiles}. A \emph{weighted sequence} (also known as uncertain sequence or position weight matrix, PWM) for every position and every letter of the alphabet specifies the probability of occurrence of this letter at this position; see \cref{table:weighted_sequence} for an example. A weighted sequence represents many different strings, each with the probability of occurrence equal to the product of probabilities of its letters at subsequent positions of the weighted sequence. Usually a threshold \ensuremath{\frac1z}\ is specified, and one considers only strings that match the weighted sequence with probability at least \ensuremath{\frac1z}. A \emph{scoring matrix} (or a profile) of length $m$ is an $m \times \sigma$ matrix. The \emph{score} of a string of length $m$ is the sum of scores in the scoring matrix of the subsequent letters of the string at the respective positions. A string is said to match a scoring matrix if its matching score is above a specified threshold $Z$. \begin{figure}[htpb] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $X[1]$ & $X[2]$ & $X[3]$ & $X[4]$ \\ \hline \ $\pi_1^{(X)}(\mathtt{a})=1/2$ \ & \ $\pi_2^{(X)}(\mathtt{a})=1$ \ & \ $\pi_3^{(X)}(\mathtt{a})=3/4$ \ & \ $\pi_4^{(X)}(\mathtt{a})=0$ \ \\ \ $\pi_1^{(X)}(\mathtt{b})=1/2$ \ & \ $\pi_2^{(X)}(\mathtt{b})=0$ \ & \ $\pi_3^{(X)}(\mathtt{b})=1/4$ \ & \ $\pi_4^{(X)}(\mathtt{b})=1$ \ \\ \hline \end{tabular} \end{center} \caption{A weighted sequence $X$ of length 4 over the alphabet $\Sigma=\{\mathtt{a},\mathtt{b}\}$}\label{table:weighted_sequence} \end{figure} \subparagraph*{\textsc{Weighted Pattern Matching}\xspace and \textsc{Profile Matching}\xspace} First of all, we study the standard variants of pattern matching problems on weighted sequences and profiles, in which only the pattern or the text is an uncertain sequence. 
In the best-known formulation of the \textsc{Weighted Pattern Matching}\xspace problem, we are given a weighted sequence of length $n$, called a text, a solid (standard) string of length $m$, called a pattern, both over an alphabet of size $\sigma$, and a \emph{threshold probability} \ensuremath{\frac1z}. We are asked to find all positions in the text where the fragment of length $m$ represents the pattern with probability at least \ensuremath{\frac1z}. Each such position is called an \emph{occurrence} of the pattern in the text; we also say that the fragment of the text and the pattern \emph{match}. The \textsc{Weighted Pattern Matching}\xspace problem can be solved in $\mathcal{O}(\sigma n \log m)$ time via the Fast Fourier Transform~\cite{KCL_publication}. In a more general indexing variant of the problem, considered in \cite{amir_weighted_property_matching_j,costas_weighted_suffix_tree_j}, one can preprocess a weighted text in $\mathcal{O}(n z^2 \log z)$ time to report all $occ$ occurrences of a given solid pattern of length $m$ in $\mathcal{O}(m+occ)$ time. (A similar indexing data structure, which assumes $z = \mathcal{O}(1)$, was presented in~\cite{DBLP:conf/edbt/BiswasPTS16}.) Very recently, the index construction time was reduced to $\mathcal{O}(nz)$ for constant-sized alphabets \cite{CPM2016}. In the classic \textsc{Profile Matching}\xspace problem, the pattern is an $m \times \sigma$ profile, the text is a solid string of length $n$, and our task is to find all positions in the text where the fragment of length $m$ has a score above a specified threshold $Z$. A naive approach to the \textsc{Profile Matching}\xspace problem works in $\mathcal{O}(nm+m\sigma)$ time. A broad spectrum of heuristics improving this algorithm in practice is known; for a survey see~\cite{DBLP:journals/tcs/PizziU08}. 
One of the principal techniques, coming in different flavours, is \emph{lookahead scoring}, which consists in checking whether a partial match could possibly be completed by the highest scoring letters at the following positions of the scoring matrix and, if not, pruning the naive search. The \textsc{Profile Matching}\xspace problem can also be solved in $\mathcal{O}(\sigma n \log m)$ time via the Fast Fourier Transform~\cite{DBLP:journals/jcb/RajasekaranJS02}. \subparagraph*{\textsc{Weighted Consensus}\xspace and \textsc{Profile Consensus}\xspace} As our most involved contribution, we study a general variant of pattern matching on weighted sequences and the consensus problems on uncertain sequences, which are closely related to the \textsc{Multichoice Knapsack}\xspace problem. In the \textsc{Weighted Consensus}\xspace problem, given two weighted sequences of the same length, we are to check if there is a string that matches each of them with probability at least \ensuremath{\frac1z}. A routine to compare user-entered weighted sequences with existing weighted sequences in the database is used, e.g., in JASPAR, a well-known database of PWMs \cite{JASPAR}. In the \textsc{General Weighted Pattern Matching}\xspace (\textsc{GWPM}\xspace) problem, both the pattern and the text are weighted. In the most common definition of the problem (see \cite{DBLP:conf/cwords/BartonP15,costas_weighted_suffix_tree_j}), we are to find all fragments of the text that give a positive answer to the \textsc{Weighted Consensus}\xspace problem with the pattern. The authors of~\cite{DBLP:conf/cwords/BartonP15} proposed an algorithm for the \textsc{GWPM}\xspace problem based on the weighted prefix table that works in $\mathcal{O}(n z^2 \log z + n\sigma)$ time. In an analogous way to the \textsc{Weighted Consensus}\xspace problem, we define the \textsc{Profile Consensus}\xspace problem. Here we are to check for the existence of a string that matches both scoring matrices with score above threshold $Z$.
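The lookahead scoring idea mentioned above can be sketched as follows (an illustrative implementation with names of our own choosing, not one of this paper's algorithms): precompute, for every position of the profile, the best score achievable on the remaining positions, and abandon a candidate occurrence as soon as the partial score can no longer reach the threshold $Z$.

```python
def profile_occurrences(T, P, Z):
    # T: text (string); P: list of m dicts mapping each letter to its
    # integer score; Z: threshold.  Returns all 0-based positions i with
    # Score(T[i..i+m-1], P) >= Z, pruning via lookahead scoring.
    m = len(P)
    # best[j] = maximum score achievable on positions j..m-1
    best = [0] * (m + 1)
    for j in range(m - 1, -1, -1):
        best[j] = best[j + 1] + max(P[j].values())
    occ = []
    for i in range(len(T) - m + 1):
        score = 0
        for j in range(m):
            score += P[j][T[i + j]]
            if score + best[j + 1] < Z:   # lookahead: Z is out of reach
                break
        else:
            occ.append(i)  # loop ran to the end, so score >= Z holds
    return occ
```

The final check is implicit: at $j=m-1$ the lookahead test compares the full score against $Z$, so the inner loop completes only for genuine occurrences.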
The \textsc{Profile Consensus}\xspace problem is actually a special case of the well-known (especially in practice) \textsc{Multichoice Knapsack}\xspace problem (also known as the \textsc{Multiple Choice Knapsack} problem). In this problem, we are given $n$ classes $C_1,\ldots,C_n$ of at most $\lambda$ items each---$N$ items in total---each item $c$ characterized by a value $v(c)$ and a weight $w(c)$. The goal is to select one item from each class so that the sums of values and of weights of the items are below two specified thresholds, $V$ and $W$. (In the more intuitive formulation of the problem, we require the sum of values to be \emph{above} a specified threshold, but here we consider an equivalent variant in which both parameters are symmetric.) The \textsc{Multichoice Knapsack}\xspace problem is widely used in practice, but most research concerns approximation or heuristic solutions; see \cite{DBLP:books/daglib/0010031} and references therein. As far as exact solutions are concerned, the classic meet-in-the-middle approach by Horowitz and Sahni~\cite{DBLP:journals/jacm/HorowitzS74}, originally designed for the (binary) \textsc{Knapsack}\xspace problem, immediately generalizes to an $\mathcal{O}^*(\lambda^{\lceil \frac{n}{2} \rceil})$-time\footnote{The $\mathcal{O}^*$ notation suppresses factors polynomial with respect to the instance size (encoded in binary). } solution for \textsc{Multichoice Knapsack}\xspace. Several important problems can be expressed as special cases of the \textsc{Multichoice Knapsack}\xspace problem using folklore reductions (see~\cite{DBLP:books/daglib/0010031}). This includes the \textsc{Subset Sum}\xspace problem, which for a set of $n$ integers asks whether there is a subset summing up to a given integer $Q$, and the $k$-\textsc{Sum}\xspace problem which, for $k=\mathcal{O}(1)$ classes of $\lambda$ integers, asks to choose one element from each class so that the selected integers sum up to zero.
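A minimal sketch of the meet-in-the-middle feasibility test for this two-threshold variant (illustrative; the names and representation are ours): enumerate the (value, weight) sums over each half of the classes, sort one half by value, and use prefix minima of weights to check whether some pair of half-solutions respects both thresholds.

```python
from bisect import bisect_right

def multichoice_knapsack(classes, V, W):
    # classes: list of n lists of (value, weight) items; decide whether one
    # item can be picked from each class with total value <= V and total
    # weight <= W (meet in the middle, O*(lambda^ceil(n/2)) combinations).
    def combos(cls):
        # all (value, weight) sums over one choice per class in cls
        sums = [(0, 0)]
        for c in cls:
            sums = [(v + dv, w + dw) for (v, w) in sums for (dv, dw) in c]
        return sums

    half = len(classes) // 2
    A = combos(classes[:half])
    B = sorted(combos(classes[half:]))   # sorted by value
    vals = [v for v, _ in B]
    prefmin, m = [], float('inf')
    for _, w in B:                       # prefix minima of weights
        m = min(m, w)
        prefmin.append(m)
    for v, w in A:
        k = bisect_right(vals, V - v)    # half-solutions with value <= V - v
        if k and prefmin[k - 1] <= W - w:
            return True
    return False
```

Among all half-solutions in `B` whose value fits the remaining budget, only the one of minimum weight matters, which is exactly what the prefix minima provide.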
These reductions give immediate hardness results for the \textsc{Multichoice Knapsack}\xspace problem, and they can be adjusted to yield the same consequences for \textsc{Profile Consensus}\xspace. For the \textsc{Subset Sum}\xspace problem, as shown in \cite{DBLP:conf/mfcs/EtscheidKMR15,DBLP:books/daglib/0069796}, the existence of an $\mathcal{O}^*(2^{\varepsilon n})$-time solution for every $\varepsilon > 0$ would violate the Exponential Time Hypothesis (ETH)~\cite{DBLP:journals/jcss/ImpagliazzoP01,ETHsurvey}. Moreover, the $\mathcal{O}^*(2^{n/2})$ running time, achieved in \cite{DBLP:journals/jacm/HorowitzS74}, has not been improved yet despite much effort. The 3-\textsc{Sum}\xspace conjecture \cite{DBLP:journals/comgeo/GajentaanO95} and the more general $k$-\textsc{Sum}\xspace conjecture state that the 3-\textsc{Sum}\xspace and $k$-\textsc{Sum}\xspace problems cannot be solved in $\mathcal{O}(\lambda^{2-\varepsilon})$ time and $\mathcal{O}(\lambda^{\ceil{\frac{k}{2}}(1-\varepsilon)})$ time, respectively, for any $\varepsilon>0$. \subparagraph*{Our Results} As the first result, we show how the lookahead scoring technique combined with a data structure for answering longest common prefix queries in a string can be applied to obtain simple and efficient algorithms for the standard pattern matching problems on uncertain sequences. For a weighted sequence, by $R$ we denote the size of its list representation, and by $\lambda$ the maximal number of letters with probability at least $\frac{1}{z}$ at a single position (thus $\lambda \le \min(\sigma,z)$). In the \textsc{Profile Matching}\xspace problem, we set $M$ as the number of strings that match the scoring matrix with score above $Z$. In general $M \le \sigma^m$; however, we may assume that for practical data this number is actually much smaller.
We obtain the following running times: \begin{itemize} \item $\mathcal{O}(m\sigma+n \log M)$ for \textsc{Profile Matching}\xspace; \item $\mathcal{O}(R\log^2\log \lambda+n \log z)$ deterministic and $\mathcal{O}(R+n \log z)$ randomized (Las Vegas, failure with probability $R^{-c}$ for any given constant $c$) for \textsc{Weighted Pattern Matching}\xspace. \end{itemize} The more complex part of our study is related to the consensus problems and to the \textsc{GWPM}\xspace problem. Instead of considering \textsc{Profile Consensus}\xspace, we study the more general \textsc{Multichoice Knapsack}\xspace. We introduce parameters based on the number of solutions with \emph{feasible} weight or value: $A_V = |\{(c_1,\ldots,c_n)\,:\,c_i \in C_i\mbox{ for all }i=1,\ldots,n,\,\sum_i v(c_i) \le V\}|$, that is, the number of choices of one element from each class that satisfy the value threshold; $A_W = |\{(c_1,\ldots,c_n)\,:\,c_i \in C_i\mbox{ for all }i=1,\ldots,n,\,\sum_i w(c_i) \le W\}|$; $A = \max(A_V,A_W)$, and $a=\min(A_V,A_W)$. We obtain algorithms with the following complexities: \begin{itemize} \item $\mathcal{O}(N+\sqrt{a\lambda} \log A)$ for \textsc{Multichoice Knapsack}\xspace; \item $\mathcal{O}(R+\sqrt{z \lambda} (\log \log z+\log \lambda))$ for \textsc{Weighted Consensus}\xspace and $\mathcal{O}(n\sqrt{z \lambda} (\log \log z+\log \lambda))$ for \textsc{General Weighted Pattern Matching}\xspace. \end{itemize} Since $a \le A \le \lambda^n$, our running time for \textsc{Multichoice Knapsack}\xspace in the worst case matches (up to lower order terms) the time complexities of the fastest known solutions for both \textsc{Subset Sum}\xspace (also binary \textsc{Knapsack}\xspace) and 3-\textsc{Sum}\xspace. The main novel part of our algorithm for \textsc{Multichoice Knapsack}\xspace is an appropriate (yet intuitive) notion of ranks of partial solutions. 
We also provide a simple reduction from \textsc{Multichoice Knapsack}\xspace to \textsc{Weighted Consensus}\xspace, which lets us transfer the negative results to the \textsc{GWPM}\xspace problem. \begin{itemize} \item The existence of an $\mathcal{O}^*(z^{\varepsilon})$-time solution for \textsc{Weighted Consensus}\xspace for every $\varepsilon>0$ would violate the Exponential Time Hypothesis. \item For every $\varepsilon>0$, an $\mathcal{O}^*(z^{0.5-\varepsilon})$-time solution for \textsc{Weighted Consensus}\xspace would imply an $\mathcal{O}^*(2^{(0.5 -\varepsilon)n})$-time algorithm for \textsc{Subset Sum}\xspace. \item For every $\varepsilon>0$, an $\tilde{\mathcal{O}}(R+z^{0.5}\lambda^{0.5-\varepsilon})$-time\footnote{ The $\tilde{\mathcal{O}}$ notation ignores factors polylogarithmic with respect to the input size. } solution for \textsc{Weighted Consensus}\xspace would imply an $\tilde{\mathcal{O}}(\lambda^{2-\varepsilon})$-time algorithm for 3-\textsc{Sum}\xspace. \end{itemize} In the higher-order terms, our complexities match the conditional lower bounds; therefore, we put significant effort into keeping the lower-order terms of the complexities as small as possible. \subparagraph*{Model of Computations} For problems on weighted sequences, we assume the word RAM model with word size $w = \Omega(\log n + \log z)$ and $\sigma = n^{\mathcal{O}(1)}$. We consider the log-probability model of representations of weighted sequences, that is, we assume that probabilities in the weighted sequences and the threshold probability \ensuremath{\frac1z}\ are all of the form $c^{\frac{p}{2^{dw}}}$, where $c$ and $d$ are constants and $p$ is an integer that fits in a constant number of machine words. Additionally, the probability 0 has a special representation. The only operations on probabilities in our algorithms are multiplications and divisions, which can be performed exactly in $\mathcal{O}(1)$ time in this model.
Our solutions to the \textsc{Multichoice Knapsack}\xspace problem only assume the word RAM model with word size $w=\Omega(\log S+\log a)$, where $S$ is the sum of integers in the input instance; this does not affect the $\mathcal{O}^*$ running time. \subparagraph*{Structure of the Paper} We start with Preliminaries, where we formally introduce the problems and the main notions used throughout the paper. The following three sections describe our algorithms: in \cref{sec:EWPM} for \textsc{Profile Matching}\xspace and \textsc{Weighted Pattern Matching}\xspace; in \cref{sec:MK} for \textsc{Profile Consensus}\xspace; and in \cref{sec:GWPMReduction} for \textsc{Weighted Consensus}\xspace and \textsc{General Weighted Pattern Matching}\xspace. A tailor-made, yet more efficient algorithm for \textsc{General Weighted Pattern Matching}\xspace is presented in \cref{app:SDWC}. We conclude with \cref{app:fast}, where we introduce faster algorithms and matching lower bounds for \textsc{Multichoice Knapsack}\xspace and \textsc{GWPM}\xspace in the case that $\lambda$ is large. \section{Preliminaries}\label{sec:Preliminaries} Let $\Sigma=\{s_1,s_2,\ldots,s_{\sigma}\}$ be an alphabet of size $\sigma$. A \emph{string} $S$ over $\Sigma$ is a finite sequence of letters from $\Sigma$. We denote the length of $S$ by $|S|$ and, for $1 \le i \le |S|$, the $i$-th letter of $S$ by $S[i]$. By $S[i..j]$ we denote the string $S[i] \ldots S[j]$ called a \emph{factor} of $S$ (if $i>j$, then the factor is an empty string). A factor is called a \emph{prefix} if $i=1$ and a \emph{suffix} if $j=|S|$. For two strings $S$ and $T$, we denote their concatenation by $S \cdot T$ ($ST$ in short). For a string $S$ of length $n$, by $\mathit{lcp}(i,j)$ we denote the length of the longest common prefix of factors $S[i..n]$ and $S[j..n]$. The following fact specifies a known efficient data structure answering such queries. 
It consists of the suffix array with its inverse, the LCP table, and a data structure for range minimum queries on the LCP table; see \cite{AlgorithmsOnStrings} for details. \begin{fact}\label{fct:ver} Let $S$ be a string of length $n$ over an alphabet of size $\sigma = n^{\mathcal{O}(1)}$. After $\mathcal{O}(n)$-time preprocessing, given indices $i$ and $j$ ($1 \le i,j \le n$) one can compute $\mathit{lcp}(i,j)$ in $\mathcal{O}(1)$ time. \end{fact} The \emph{Hamming distance} between two strings $X$ and $Y$ of the same length, denoted by $d_H(X,Y)$, is the number of positions where the strings have different letters. \subsection{Profiles} In the \textsc{Profile Matching}\xspace problem, we consider a \emph{scoring matrix} (a profile) $P$ of size $m \times \sigma$. For $i \in \{1,\ldots,m\}$ and $j \in \{1,\ldots,\sigma\}$, we denote the integer score of the letter $s_j$ at position~$i$ by $P[i,s_j]$. The \emph{matching score} of a string $S$ of length $m$ with the matrix $P$ is $$\mathrm{Score}(S,P) = \sum_{i=1}^m P[i,S[i]].$$ If $\mathrm{Score}(S,P) \ge Z$ for an integer \emph{threshold} $Z$, then we say that the string $S$ \emph{matches the matrix $P$ with threshold $Z$}. We denote the number of strings $S$ that match $P$ with threshold $Z$ by $\mathrm{NumStrings}_Z(P)$. For a string $T$ and a scoring matrix $P$, we say that $P$ \emph{occurs in $T$ at position $i$ with threshold $Z$} if $T[i..i+m-1]$ matches $P$ with threshold $Z$. We denote the set of all positions where $P$ occurs in $T$ by $\mathit{Occ}_Z(P,T)$. These notions let us define the \textsc{Profile Matching}\xspace problem: \defdsproblemoutpar{\textsc{Profile Matching}\xspace}{ A string $T$ of length $n$, a scoring matrix $P$ of size $m \times \sigma$, and a threshold $Z$. }{ The set $\mathit{Occ}_Z(P,T)$. }{ $M = \mathrm{NumStrings}_Z(P)$.
} \subsection{Weighted Sequences} A \emph{weighted sequence} $X=X[1] \ldots X[n]$ of length $|X|=n$ over alphabet $\Sigma=\{s_1,s_2,\ldots,s_{\sigma}\}$ is a sequence of sets of pairs of the form: $$X[i] = \{(s_j,\ \pi^{(X)}_i(s_j))\ :\ j \in \{1,2,\ldots,\sigma\}\}.$$ Here, $\pi_i^{(X)}(s_j)$ is the occurrence probability of the letter $s_j$ at position $i \in \{1,\ldots,n\}$. These values are non-negative and sum up to 1 for a given $i$. For all our algorithms, it is sufficient that the probabilities sum up to \emph{at most} 1 for each position. Also, the algorithms sometimes produce auxiliary weighted sequences in which the probabilities sum up to less than 1 at some positions. We denote the maximum number of letters occurring at a single position of the weighted sequence (with non-zero probability) by $\lambda$ and the total size of the representation of a weighted sequence by $R$. The standard representation consists of $n$ lists with up to $\lambda$ elements each, so $R = \mathcal{O}(n \lambda)$. However, the lists can be shorter in general. Also, if the threshold probability $\frac1z$ is specified, at each position of a weighted sequence it suffices to store letters with probability at least $\frac1z$, and clearly there are at most $z$ such letters for each position. This reduction can be performed in linear time, so we shall always assume that $\lambda \le z$. The \emph{probability of matching} of a string $S$ with a weighted sequence $X$, $|S|=|X|=m$, is $$\P(S,X) = \prod_{i=1}^m \pi^{(X)}_i(S[i]).$$ We say that a string $S$ \emph{matches} a weighted sequence $X$ with probability at least \ensuremath{\frac1z}, denoted by $S \approx_{\frac1z} X$, if $\P(S,X) \ge \frac1z$. Given a weighted sequence $T$, by $T[i..j]$ we denote the weighted sequence $T[i] \ldots T[j]$, called a \emph{factor} of $T$ (if $i>j$, then the factor is empty). We then say that a string $P$ \emph{occurs} in $T$ at position $i$ if $P$ matches the factor $T[i..i+m-1]$.
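As a small illustration (the data below are our own toy example), the matching probability $\P(S,X)$ can be evaluated directly from the definition in the list representation:

```python
from math import prod

# A toy weighted sequence in the {letter: probability} dict representation.
X = [{'a': 0.5, 'b': 0.5}, {'a': 0.9, 'c': 0.1}, {'b': 1.0}]

def match_prob(S, X):
    """P(S, X): the product of pi_i(S[i]); 0 if some letter is impossible."""
    return prod(pos.get(s, 0.0) for s, pos in zip(S, X))

z = 4
assert match_prob("abb", X) == 0.0              # 'b' is impossible at position 2
assert abs(match_prob("aab", X) - 0.45) < 1e-9  # 0.5 * 0.9 * 1.0
assert match_prob("aab", X) >= 1 / z            # hence "aab" matches X with threshold 1/z
```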
We also say that $P$ is a \emph{\ensuremath{\frac1z}-solid factor} of $T$ at position $i$ (a \emph{\ensuremath{\frac1z}-solid prefix} if $i=1$ and a \emph{\ensuremath{\frac1z}-solid suffix} if $i+|P|-1=n$). We denote the set of all positions where $P$ occurs in $T$ by $\mathit{Occ}_\ensuremath{\frac1z}(P,T)$. \defdsproblem{\textsc{Weighted Pattern Matching}\xspace}{ A string $P$ of length $m$ and a weighted sequence $T$ of length $n$ with at most $\lambda$ letters at each position and $R$ in total, and a threshold probability \ensuremath{\frac1z}. }{ The set $\mathit{Occ}_\ensuremath{\frac1z}(P,T)$. } \section{Profile Matching and Weighted Pattern Matching}\label{sec:EWPM} In this section we present a solution to the \textsc{Profile Matching}\xspace problem. Afterwards, we show that it can be applied to \textsc{Weighted Pattern Matching}\xspace as well. For a scoring matrix $P$, the \emph{heavy string} of $P$, denoted $\H(P)$, is constructed by choosing at each position the heaviest letter, that is, the letter with the maximum score (breaking ties arbitrarily). Intuitively, $\H(P)$ is a string that matches $P$ with the maximum score. \begin{observation}\label{obs:crucial_profile} If we have $\mathrm{Score}(S,P) \ge Z$ for a string $S$ of length $m$ and an $m \times \sigma$ scoring matrix $P$, then $d_H(\H(P),S) \le \floor{\log M}$ where $M = \mathrm{NumStrings}_Z(P)$. \end{observation} \begin{proof} Let $d = d_H(\H(P),S)$. We can construct $2^{d}$ strings of length $|S|$ that match $P$ with a score of at least $Z$ by taking either of the letters $S[j]$ or $\H(P)[j]$ at each position $j$ such that $S[j]\ne \H(P)[j]$. Hence, $2^{d} \le M$, which concludes the proof. \end{proof} Our solution for the \textsc{Profile Matching}\xspace problem works as follows. We first construct $P' = \H(P)$ and the data structure for finding lcp values between suffixes of $P'$ and $T$. Let the variable $s$ store the matching score of $P'$.
In the $p$-th step, we calculate the matching score of $T[p..p+m-1]$ by iterating through subsequent mismatches between $P'$ and $T[p..p+m-1]$ and making the corresponding updates to the matching score $s'$, which starts at $s'=s$. The mismatches are found using lcp-queries. This process terminates when the score $s'$ drops below $Z$ or when all the mismatches have been found. In the end, we include $p$ in $\mathit{Occ}_Z(P,T)$ if $s' \ge Z$. Pseudocode of this approach is given below for completeness. \begin{theorem} The \textsc{Profile Matching}\xspace problem can be solved in $\mathcal{O}(m\sigma + n \log M)$ time. \end{theorem} \begin{proof} Let us bound the time complexity of the presented algorithm. The heavy string $P'$ can be computed in $\mathcal{O}(m\sigma)$ time. The data structure for $\mathit{lcp}$-queries in $P'T$ can be constructed in $\mathcal{O}(n+m)$ time by \cref{fct:ver}. Each query for $\mathit{lcp}(P'[i..m],T[j..n])$ can then be answered in constant time by a corresponding $\mathit{lcp}$-query in $P'T$, potentially truncated to the end of $P'$. Finally, for each position $p$ in the text $T$ we will consider at most $\floor{\log M}+1$ mismatches between $P'$ and $T$, as afterwards the score $s'$ drops below $Z$ due to \cref{obs:crucial_profile}. \end{proof} \begin{procedure}[htpb] $m:=|P|$;\ \ $n:=|T|$;\ \ $\mathit{Occ}:=\emptyset$\; $P' := \H(P)$\; Compute the data structure for $\mathit{lcp}$-queries in $P'T$\; $s := \sum_{j=1}^m P[j,P'[j]]$\; \For{$p:=1$ \KwSty{to} $n-m+1$}{ $s':=s$;\ \ $i:=1$;\ \ $j:=p$\; \While{$s' \ge Z$ \KwSty{and} $i \le m$}{ $\Delta := \mathit{lcp}(P'[i..m],T[j..n])$\; $i:=i+\Delta+1$;\ \ $j:=j+\Delta+1$\; \If{$i \le m+1$}{ $s' := s' + P[i-1,T[j-1]] - P[i-1,P'[i-1]]$; } } \lIf{$s' \ge Z$}{insert $p$ to $\mathit{Occ}$} } \Return{$\mathit{Occ}$}\; \caption{ProfileMatching($P$, $T$, $Z$)} \end{procedure} Basically the same approach can be used for \textsc{Weighted Pattern Matching}\xspace.
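For concreteness, the procedure can be transcribed into Python as follows. This is a sketch of ours: it uses a naive letter-by-letter $\mathit{lcp}$ computation instead of the constant-time queries of \cref{fct:ver}, so it illustrates the logic rather than the stated time bound (the scoring matrix is given as a list of per-position dictionaries, and the example data are hypothetical).

```python
def profile_matching(P, T, Z):
    """P: list of m dicts mapping letter -> integer score; T: text string;
    returns all (1-indexed) positions p with Score(T[p..p+m-1], P) >= Z."""
    m, n = len(P), len(T)
    heavy = [max(row, key=row.get) for row in P]         # heavy string H(P)

    def lcp(i, j):
        """Naive lcp of heavy[i..] and T[j..] (0-indexed); O(1) in the paper."""
        k = 0
        while i + k < m and j + k < n and heavy[i + k] == T[j + k]:
            k += 1
        return k

    s = sum(P[i][heavy[i]] for i in range(m))            # score of H(P) itself
    occ = []
    for p in range(n - m + 1):
        sp, i, j = s, 0, p
        while sp >= Z and i < m:
            d = lcp(i, j)                                # jump to the next mismatch
            i, j = i + d + 1, j + d + 1
            if i <= m:                                   # mismatch at index i-1
                sp += P[i - 1][T[j - 1]] - P[i - 1][heavy[i - 1]]
        if sp >= Z:
            occ.append(p + 1)
    return occ

P = [{'a': 2, 'b': 0}, {'a': 0, 'b': 2}]                 # hypothetical profile
assert profile_matching(P, "abab", 3) == [1, 3]          # "ab" scores 4, "ba" scores 0
```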
In a natural way, we extend the notion of a heavy string to weighted sequences. Now we can restate \cref{obs:crucial_profile} in the language of probabilities instead of scores: \begin{observation}\label{obs:crucial} If a string $P$ matches a weighted sequence $X$ of the same length with probability at least \ensuremath{\frac1z}, then $d_H(\H(X),P) \le \floor{\log z}$. \end{observation} Compared with the solution to \textsc{Profile Matching}\xspace, we compute the heavy string of the text instead of the pattern. An auxiliary variable $\alpha$ stores the matching probability between a factor of $\H(T)$ and the corresponding factor of $T$; it needs to be updated when we move to the next position of the text. In the implementation, we perform the following operations on a weighted sequence: \begin{itemize} \item computing the probability of a given letter at a given position, \item finding the letter with the maximum probability at a given position. \end{itemize} \begin{procedure}[h] \caption{WeightedPatternMatching($P$, $T$, \ensuremath{\frac1z})} $m:=|P|$;\ \ $n:=|T|$;\ \ $\mathit{Occ}:=\emptyset$\; $T' := \H(T)$\; Compute the data structure for $\mathit{lcp}$-queries in $PT'$\; $\alpha := \prod_{j=1}^m \pi^{(T)}_j(T'[j])$\; \For{$p:=1$ \KwSty{to} $n-m+1$}{ $\alpha':=\alpha$;\ \ $i:=1$;\ \ $j:=p$\; \While{$\alpha' \ge \frac1z$ \KwSty{and} $i \le m$}{ $\Delta := \mathit{lcp}(P[i..m],T'[j..n])$\; $i:=i+\Delta+1$;\ \ $j:=j+\Delta+1$\; \If{$i \le m+1$}{ $\alpha' := \alpha'\,\cdot\,\pi^{(T)}_{j-1}(P[i-1]) \,/\, \pi^{(T)}_{j-1}(T'[j-1])$; } } \lIf{$\alpha' \ge \frac1z$}{insert $p$ to $\mathit{Occ}$} \If{$p \le n-m$}{ $\alpha := \alpha \,\cdot\, \pi^{(T)}_{p+m}(T'[p+m]) \,/\, \pi^{(T)}_p(T'[p])$\; } } \Return{$\mathit{Occ}$}\; \end{procedure} In the standard list representation, the latter operation can be performed on a single weighted sequence in $\mathcal{O}(1)$ time after $\mathcal{O}(R)$-time preprocessing.
We can perform the former in constant time if, in addition to the list representation, we store the letter probabilities in a dictionary implemented using perfect hashing \cite{DBLP:journals/jacm/FredmanKS84}. This way, we can implement the algorithm in $\mathcal{O}(n \log z + R)$ time w.h.p. Alternatively, a deterministic dictionary~\cite{DBLP:conf/icalp/Ruzic08} can be used to obtain a deterministic solution in $\mathcal{O}(R\log^2\log \lambda + n\log z)$ time. We arrive at the following result. \begin{theorem}\label{thm:wpm} \textsc{Weighted Pattern Matching}\xspace can be solved in $\mathcal{O}(R+n \log z)$ time with high probability by a Las Vegas algorithm or in $\mathcal{O}(R\log^2 \log \lambda + n\log z)$ time deterministically. \end{theorem} \begin{remark} In the same complexity one can solve the \textsc{GWPM}\xspace problem with a solid text. \end{remark} \section{Profile Consensus as Multichoice Knapsack}\label{sec:MK} Let us start with a precise statement of the \textsc{Multichoice Knapsack}\xspace problem. \defdsproblempar{\textsc{Multichoice Knapsack}\xspace}{ A set $\mathcal{C}$ of $N$ items partitioned into $n$ disjoint classes $C_i$, each of size at most $\lambda$, two integers, a \emph{value} $v(c)$ and a \emph{weight} $w(c)$, for each item $c\in \mathcal{C}$, and two thresholds $V$ and $W$. }{ Does there exist a \emph{choice} $S$ (a set $S\subseteq \mathcal{C}$ such that $|S\cap C_i|=1$ for each $i$) satisfying both $\sum_{c\in S} v(c) \le V$ and $\sum_{c\in S} w(c) \le W$? }{ $A_V$ and $A_W$: the numbers of choices $S$ satisfying $\sum_{c\in S} v(c) \le V$ and $\sum_{c\in S} w(c) \le W$, respectively, as well as $A = \max(A_V,A_W)$ and $a=\min(A_V,A_W)$. } Indeed, we see that the \textsc{Profile Consensus}\xspace problem reduces to the \textsc{Multichoice Knapsack}\xspace problem.
For two $m \times \sigma$ scoring matrices, we construct $n=m$ classes of $\lambda=\sigma$ items each, with values equal to the scores of the letters in the first matrix and weights equal to the scores in the second matrix; both thresholds $V$ and $W$ are equal to $Z$. For a fixed instance of \textsc{Multichoice Knapsack}\xspace, we say that $S$ is a \emph{partial choice} if $|S\cap C_i|\le 1$ for each class. The set $D=\{i : |S\cap C_i|=1\}$ is called its \emph{domain}. For a partial choice $S$, we define $v(S) = \sum_{c \in S} v(c)$ and $w(S) = \sum_{c \in S} w(c)$. The classic $\mathcal{O}(2^{n/2})$-time solution to the \textsc{Knapsack}\xspace problem~\cite{DBLP:journals/jacm/HorowitzS74} partitions $D=\{1,\ldots,n\}$ into two domains $D_i$ of size roughly $n/2$, and for each $D_i$ it generates all partial choices $S$ ordered by $v(S)$. Hence, it reduces the problem to an instance of \textsc{Multichoice Knapsack}\xspace with $2$ classes. It is solved using the following lemma, proved below for completeness. \begin{lemma}\label{lem:knap2} The \textsc{Multichoice Knapsack}\xspace problem can be solved in $\mathcal{O}(N)$ time if $n=2$ and the elements $c$ of $C_1$ and $C_2$ are sorted by $v(c)$. \end{lemma} \begin{proof} Since the items of $C_1$ and $C_2$ are sorted by $v(c)$, a single scan through these items lets us remove all irrelevant elements, that is, elements dominated by other elements in their class. Next, for each $c_1\in C_1$ we compute $c_2\in C_2$ such that $v(c_2)\le V-v(c_1)$ but otherwise $v(c_2)$ is largest possible. As we have removed irrelevant elements from $C_2$, this item also minimizes $w(c_2)$ among all elements satisfying $v(c_2)\le V-v(c_1)$. Hence, if there is a feasible solution containing $c_1$, then $\{c_1,c_2\}$ is feasible. If we process elements $c_1$ by non-decreasing values $v(c_1)$, the values $v(c_2)$ do not increase, and thus the items $c_2$ can be computed in $\mathcal{O}(N)$ time in total. 
\end{proof} The same approach generalizes to \textsc{Multichoice Knapsack}\xspace. The partition is chosen to balance the number of partial choices in each domain, and the worst-case time complexity is $\mathcal{O}(\sqrt{Q\lambda})$, where $Q=\prod_{i=1}^n |C_i|$ is the number of choices. Our aim in this section is to replace $Q$ with the parameter $a$ (which never exceeds $Q$). The overall running time is going to be $\mathcal{O}(N+\sqrt{a\lambda}\log A)$: an overhead of $\mathcal{O}(\log A)$ appears. Two challenges arise once we adapt the meet-in-the-middle approach: how to restrict the set of partial choices to be generated so that a feasible solution is not missed, and how to define a partition $D=D_1\cup D_2$ that balances the number of partial choices generated for $D_1$ and $D_2$. A natural idea to deal with the first issue is to consider only partial choices with small values $v(S)$ or $w(S)$. This is close to our actual solution, which is based on the notion of \emph{ranks} of partial choices. Our approach to the second problem is to consider multiple partitions: those of the form $D=\{1,\ldots,j\}\cup\{j+1,\ldots,n\}$ for $1\le j \le n$. This results in an extra $\mathcal{O}(n)$ factor in the time complexity. However, in \cref{ss} we introduce preprocessing that lets us assume that $n=\mathcal{O}(\frac{\log A}{\log \lambda})$. While dealing with these two issues, some further effort is required to avoid a few other extra terms in the running time. In the case of our algorithm, the only remaining overhead is $\mathcal{O}(\log \lambda)$, which stems from the fact that we need to keep partial solutions ordered by $v(S)$. For a partial choice $S$, we define $\rank_V(S)$ as the number of partial choices $S'$ with the same domain for which $v(S')\le v(S)$. We symmetrically define $\rank_W(S)$. Ranks are introduced as an analogue of probabilities in weighted sequences.
Probabilities are multiplicative, while for ranks we have submultiplicativity: \begin{fact}\label{fct:comb} Assume that $S=S_1\cup S_2$ is a decomposition of a partial choice $S$ into two disjoint subsets. Then $\rank_V(S_1)\rank_V(S_2)\le \rank_V(S)$ (and similarly for $\rank_W$). \end{fact} \begin{proof} Let $D_1$ and $D_2$ be the domains of $S_1$ and $S_2$, respectively. For all partial choices $S'_1$ over $D_1$ and $S'_2$ over $D_2$ such that $v(S'_1) \le v(S_1)$ and $v(S'_2) \le v(S_2)$, we have $v(S'_1 \cup S'_2)=v(S'_1)+v(S'_2)\le v(S)$. Hence, $S'_1\cup S'_2$ must be counted while determining $\rank_V(S)$. \end{proof} For $0\le j \le n$, let $\L_j$ be the list of partial choices with domain $\{1,\ldots,j\}$ ordered by value $v(S)$, and for $\ell>0$ let $V^{(\ell)}_{\L_j}$ be the value $v(S)$ of the $\ell$-th element of $\L_j$ ($\infty$ if $\ell>|\L_j|$). Analogously, for $1\le j \le n+1$, we define $\mathcal{R}_j$ as the list of partial choices over $\{j,\ldots,n\}$ ordered by $v(S)$, and for $r>0$, $V^{(r)}_{\mathcal{R}_j}$ as the value of the $r$-th element of $\mathcal{R}_j$ ($\infty$ if $r>|\mathcal{R}_j|$). The following two observations yield a decomposition of each choice into a single item and two partial solutions of small rank. In particular, we do not need to know $A_V$ in order to check if the ranks are sufficiently large. \begin{lemma}\label{lem:decomp} Let $\ell$ and $r$ be positive integers such that $V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}}> V$ for each $0\le j \le n$. For every choice $S$ with $v(S)\le V$, there is an index $j\in\{1,\ldots,n\}$ and a decomposition $S=L\cup\{c\}\cup R$ such that $v(L) < V^{(\ell)}_{\L_{j-1}}$, $c\in C_j$, and $v(R) < V^{(r)}_{\mathcal{R}_{j+1}}$. \end{lemma} \begin{proof} Let $S=\{c_1,\ldots,c_n\}$ with $c_i\in C_i$ and, for $0\le i \le n$, let $S_i = \{c_1,\ldots,c_i\}$. If $v(S_{n-1})< V^{(\ell)}_{\L_{n-1}}$, we set $L=S_{n-1}$, $c=c_n$, and $R=\emptyset$, satisfying the claimed conditions.
Otherwise, we define $j$ as the smallest index $i$ such that $v(S_i) \ge V^{(\ell)}_{\L_i}$, and we set $L=S_{j-1}$, $c=c_j$, and $R=S\setminus S_j$. The definition of $j$ implies $v(L)<V^{(\ell)}_{\L_{j-1}}$ and $v(L\cup\{c\})\ge V^{(\ell)}_{\L_j}$. Moreover, we have $v(L\cup \{c\})+v(R)=v(S)\le V < V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}},$ and thus $v(R) < V^{(r)}_{\mathcal{R}_{j+1}}$. \end{proof} \begin{fact}\label{fct:bound} Let $\ell,r>0$. If $V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}}\le V$ for some $j\in\{0,\ldots,n\}$, then $\ell \cdot r \le A_V$. \end{fact} \begin{proof} Let $L$ and $R$ be the $\ell$-th and $r$-th entry in $\L_j$ and $\mathcal{R}_{j+1}$, respectively. Note that $v(L\cup R)\le V$ implies $\rank_V(L\cup R) \le A_V$ by definition of~$A_V$. Moreover, $\rank_V(L)\ge \ell$ and $\rank_V(R)\ge r$ (the inequalities may be strict due to ties). Now, \cref{fct:comb} yields the claimed bound. \end{proof} Note that $\L_j$ can be obtained by interleaving $|C_j|$ copies of $\L_{j-1}$, where each copy corresponds to extending the choices from $\L_{j-1}$ with a different item. If we were to construct $\L_{j}$ having access to the whole $\L_{j-1}$, we could proceed as follows. For each $c\in C_j$, we maintain an \emph{iterator} on $\L_{j-1}$ pointing to the first element $S$ on $\L_{j-1}$ for which $S\cup\{c\}$ has not yet been added to $\L_{j}$. The associated \emph{value} is $v(S\cup\{c\})$. All iterators initially point at the first element of $\L_{j-1}$. Then the next element to append to $\L_j$ is always $S\cup\{c\}$ corresponding to the iterator with minimum value. Having processed this partial choice, we advance the iterator (or remove it, once it has already scanned the whole $\L_{j-1}$). This process can be implemented using a binary heap $H_j$ as a priority queue, so that initialization requires $\mathcal{O}(|C_j|)$ time and outputting a single element takes $\mathcal{O}(\log |C_j|)$ time.
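The offline merging process just described can be sketched in Python with the standard binary heap (an illustration of ours, where partial choices are represented only by their values $v(S)$):

```python
import heapq

def next_level(L_prev, C_j):
    """Given the sorted list L_prev of values v(S) of partial choices and the
    item values of class C_j, produce the sorted value list of the next level
    by lazily merging |C_j| shifted copies of L_prev."""
    # One iterator per item c in C_j, each starting at the front of L_prev;
    # a heap entry is (value of S + {c}, index of c, position in L_prev).
    heap = [(L_prev[0] + v, ci, 0) for ci, v in enumerate(C_j)]
    heapq.heapify(heap)                       # O(|C_j|)-time initialization
    merged = []
    while heap:
        val, ci, pos = heapq.heappop(heap)    # iterator with minimum value
        merged.append(val)
        if pos + 1 < len(L_prev):             # advance this iterator, or drop it
            heapq.heappush(heap, (L_prev[pos + 1] + C_j[ci], ci, pos + 1))
    return merged

L1 = next_level([0], [1, 4])                  # from the empty choice: values of L_1
L2 = next_level(L1, [0, 2])                   # L_2 over classes {1,4} and {0,2}
assert L1 == [1, 4]
assert L2 == [1, 3, 4, 6]                     # all v(S), produced in sorted order
```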
For all $r \ge 0$, let $\L^{(r)}_j$ be the prefix of $\L_j$ of length $\min(r,|\L_j|)$ and $\mathcal{R}^{(r)}_j$ be the prefix of $\mathcal{R}_j$ of length $\min(r, |\mathcal{R}_j|)$. A technical transformation of the procedure stated above leads to an online algorithm that constructs the prefixes $\L^{(r)}_j$ and $\mathcal{R}^{(r)}_j$. Along with each reported partial choice $S$, the algorithm also computes $w(S)$. \begin{lemma}\label{lem:generate} After $\mathcal{O}(N)$-time initialization, one can construct $\L^{(i)}_1,\ldots,\L^{(i)}_n$ online for $i=0,1,\ldots$, spending $\mathcal{O}(n\log \lambda)$ time per step. Symmetrically, one can construct $\mathcal{R}^{(i)}_1,\ldots,\mathcal{R}^{(i)}_n$ in the same time complexity. \end{lemma} \begin{proof} Our online algorithm uses the same approach as the offline computation of the lists $\L_j^{(r)}$; only the order of computations is changed. At each step, for $j=1$ to $n$ we shall extend the lists $\L^{(i-1)}_j$ with a single element (unless the whole $\L_j$ has already been generated) from the top of the heap $H_j$. Note that this way each iterator in $H_j$ always points to an element that is already in $\L^{(i-1)}_{j-1}$ or to the first element that has not yet been added to $\L_{j-1}$, which is represented by the top of the heap $H_{j-1}$. We initialize the heaps as follows: we introduce $H_0$ which represents the empty choice $\emptyset$ with $v(\emptyset)=0$. Next, for $j=1,\ldots,n$ we build the heap $H_j$ representing $|C_j|$ iterators initially pointing to the top of $H_{j-1}$. The initialization takes $\mathcal{O}(N)$ time in total since a binary heap can be constructed in time linear in its size. At each step, the lists $\L^{(i-1)}_j$ are extended for consecutive values $j$ from $1$ to $n$. Since $\L^{(i-1)}_{j-1}$ is extended before $\L^{(i-1)}_j$, all iterators in $H_j$ point to the elements of $\L^{(i)}_{j-1}$ while we compute $\L_j^{(i)}$.
We take the top of $H_j$ and move it to $\L^{(i)}_j$. Next, we advance the corresponding iterator and update its position in the heap $H_j$. After this operation, the iterator might point to the top of $H_{j-1}$. If $H_{j-1}$ is empty, this means that the whole list $\L_{j-1}$ has already been generated and traversed by the iterator. In this case, we remove the iterator. It is not hard to see that this way we indeed simulate the previous offline solution. A single phase performs $\mathcal{O}(1)$ operations on each heap $H_j$. The running time is bounded by $\mathcal{O}(\sum_{j} \log |C_j|)=\mathcal{O}(n\log \lambda)$ at each step of the algorithm. \end{proof} The reduction of the following lemma is presented in \cref{ss}. Note that we may always assume that $\lambda \le a \le A$. Indeed, if we order the items $c\in C_i$ according to $v(c)$, then only the first $A_V$ of them might belong to a choice $S$ with $v(S)\le V$. \begin{lemma}\label{lem:knapred2} Given an instance $I$ of the \textsc{Multichoice Knapsack}\xspace problem, one can compute in $\mathcal{O}(N+\lambda\log A)$ time an equivalent instance $I'$ with $A_V'\le A_V$, $A_W'\le A_W$, $\lambda'\le \lambda$, and $n'=\mathcal{O}(\frac{\log A}{\log \lambda})$. \end{lemma} \begin{theorem}\label{thm:knap} \textsc{Multichoice Knapsack}\xspace can be solved in $\mathcal{O}(N+\sqrt{a\lambda}\log A)$ time. \end{theorem} \begin{proof} Below, we give an algorithm running in $\mathcal{O}(N+\sqrt{A_V\lambda}\log A)$ time. The final solution runs it in parallel on the original instance and on the instance with $v$ and $V$ swapped with $w$ and~$W$, waiting until at least one of them terminates. We increment an integer $r$ starting from~$1$, maintaining $\ell=\ceil{\frac{r}{\lambda}}$ and the lists $\L_{j}^{(\ell)}$ and $\mathcal{R}_{j+1}^{(r)}$ for $0\le j \le n$, as long as $V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}}\le V$ for some $j$ (or until all the lists have been completely generated).
By \cref{fct:bound}, we stop at $r=\mathcal{O}(\sqrt{A_V \lambda})$. \cref{lem:knapred2} lets us assume that $n=\mathcal{O}(\frac{\log A}{\log \lambda})$, so the running time of this phase is $\mathcal{O}(N+\sqrt{A_V \lambda}\log A)$ due to \cref{lem:generate}. Due to \cref{lem:decomp}, every feasible solution $S$ admits a decomposition $S=L\cup\{c\}\cup R$ with $L\in \L_{j-1}^{(\ell)}$, $c\in C_j$, and $R\in \mathcal{R}_{j+1}^{(r)}$ for some index $j$. We consider all possibilities for $j$. For each of them, we reduce the search for $S$ to an instance of the \textsc{Multichoice Knapsack}\xspace problem with $N'=\mathcal{O}(\sqrt{A_V\lambda})$ and $n'=2$. By \cref{lem:knap2}, these instances can be solved in $\mathcal{O}(n\sqrt{A_V\lambda})=\mathcal{O}(\sqrt{A_V\lambda}\frac{\log A}{\log \lambda})$ time in total. The items of the $j$-th instance are going to belong to classes $\L_{j-1}^{(\ell)}\odot C_j$ and $\mathcal{R}_{j+1}^{(r)}$, where $\L_{j-1}^{(\ell)}\odot C_j = \{L\cup \{c\} : L\in \L_{j-1}^{(\ell)} , c\in C_j\}$. The set $\L_{j-1}^{(\ell)}\odot C_j$ can be constructed by merging $|C_j|\le \lambda$ sorted lists each of size $\ell=\mathcal{O}(\sqrt{A_V/\lambda})$, i.e., in $\mathcal{O}(\sqrt{A_V\lambda}\log \lambda)$ time. Summing up over all indices $j$, this gives $\mathcal{O}(\sqrt{A_V \lambda}\log\lambda \frac{\log A}{\log \lambda})=\mathcal{O}(\sqrt{A_V \lambda}\log A)$ time. Clearly, each feasible solution of the constructed instances represents a feasible solution of the initial instance, and by \cref{lem:decomp}, every feasible solution of the initial instance has its counterpart in one of the constructed instances. \end{proof} \subsection{Proof of Lemma~\ref{lem:knapred2}}\label{ss} Our reduction consists of two steps. Its implementation uses the following notions. For each class $C_i$, let $v_{\min}(i) = \min\{v(c) : c\in C_i\}$.
Also, let $V_{\min} = \sum_{i=1}^n v_{\min}(i)$; note that $V_{\min}$ is the smallest possible value $v(S)$ of a choice $S$. We symmetrically define $w_{\min}(i)$ and $W_{\min}$. First, we make sure that $n=\mathcal{O}(\log A)$. \begin{lemma}\label{lem:knapred} Given an instance $I$ of the \textsc{Multichoice Knapsack}\xspace problem, one can compute in linear time an equivalent instance $I'$ with $N'\le N$, $A_V'\le A_V$, $A_W'\le A_W$, $\lambda'\le \lambda$, and $n' \le 2\log A$. \end{lemma} \begin{proof} Observe that if some class $C_i$ contains an item $c$ for which both $v(c)=v_{\min}(i)$ and $w(c)=w_{\min}(i)$, then we can greedily include it in the solution $S$. Hence, we can remove such a class, setting $V := V- v_{\min}(i)$ and $W := W- w_{\min}(i)$. We execute this reduction rule exhaustively, which clearly takes $\mathcal{O}(N)$ time in total and may only decrease the parameters $A_V$ and $A_W$. After the reduction, each class contains at least two items. We shall prove that now we can either find out that $A \ge 2^{n/2}$ or that we are dealing with a NO-instance. To decide which case holds, let us define $\Delta_V(i)$ as the difference between the second smallest value in the multiset $\{v(c) : c\in C_i\}$ and $v_{\min}(i)$. We set $\Delta_V^{\mathrm{mid}}$ as the sum of the $\ceil{\frac{n}{2}}$ smallest values $\Delta_V(i)$ for $1\le i \le n$; analogously we define $\Delta_W^{\mathrm{mid}}$. \begin{claim} If $V_{\min} + \Delta_V^{\mathrm{mid}} \le V$, then $A_V \ge 2^{n/2}$; if $W_{\min} + \Delta_W^{\mathrm{mid}} \le W$, then $A_W \ge 2^{n/2}$; otherwise, we are dealing with a NO-instance. \end{claim} \begin{proof} First, assume that $V_{\min} + \Delta_V^{\mathrm{mid}} \le V$. This means that there is a choice $S$ with $v(S)\le V$ containing at least $\frac{n}{2}$ items $c$ such that $\rank_V(c)\ge 2$. \cref{fct:comb} yields $\rank_V(S)\ge 2^{\ceil{n/2}}$ and consequently $A_V \ge 2^{n/2}$, as claimed.
Symmetrically, if $W_{\min} + \Delta_W^{\mathrm{mid}} \le W$, then $A_W \ge 2^{n/2}$. Now, suppose that there is a feasible solution $S$. As no class contains an item minimizing both $v(c)$ and $w(c)$, there are at least $\ceil{\frac{n}{2}}$ classes for which $S$ contains an item not minimizing $v(c)$, or at least $\ceil{\frac{n}{2}}$ classes for which $S$ contains an item not minimizing $w(c)$. Without loss of generality, we assume that the former holds. Let $D$ be the set of at least $\ceil{\frac{n}{2}}$ classes $i$ satisfying the condition. If $c\in C_i$ does not minimize $v(c)$, then $v(c)\ge v_{\min}(i)+\Delta_V(i)$. Consequently, $V\ge v(S) \ge V_{\min} + \sum_{i\in D} \Delta_V(i)$. Moreover, observe that $ \sum_{i\in D} \Delta_V(i) \ge \Delta_V^{\mathrm{mid}}$, so $V \ge V_{\min} + \Delta_V^{\mathrm{mid}}$, as claimed. \end{proof} The conditions from the claim can be verified in $\mathcal{O}(N)$ time using a linear-time selection algorithm to compute $\Delta_V^{\mathrm{mid}}$ and $\Delta_W^{\mathrm{mid}}$. If any of the first two conditions holds, we return the instance obtained using our reduction. Otherwise, we output a dummy NO-instance. \end{proof} Before we proceed with the second reduction, let us introduce an auxiliary notion. An item $c\in C_j$ is \emph{irrelevant} if there is another item $c'\in C_j$ that \emph{dominates} $c$, i.e., such that $v(c)>v(c')$ and $w(c)>w(c')$. Removing irrelevant items leads to an equivalent instance of the \textsc{Multichoice Knapsack}\xspace problem, and it may only decrease the parameters. \begin{lemma}\label{lem:redstep} Consider a class of items in an instance of the \textsc{Multichoice Knapsack}\xspace problem. In linear time, we can remove some irrelevant items from the class so that the resulting class $C$ satisfies $\max(\rank_V(c),\rank_W(c)) > \frac13 |C|$ for each item $c\in C$.
\end{lemma} \begin{proof} First, note that using a linear-time selection algorithm, we can determine for each item $c$ whether $\rank_V(c)\le \frac13|C|$ and whether $\rank_W(c)\le \frac13|C|$. If there is no item satisfying both conditions, we keep $C$ unaltered. Otherwise, we have an item $c$ which dominates at least $|C|-\rank_V(c)-\rank_W(c) \ge \frac13|C|$ other items. We scan through all items in $C$ and remove those dominated by $c$. Next, we repeat the algorithm. The running time of a single phase is clearly linear, and since $|C|$ decreases geometrically, the total running time is also linear. \end{proof} A straightforward way to decrease the number of classes is to replace two distinct classes $C_i$, $C_j$ with their Cartesian product $C_i \times C_j$, where the value and the weight of a pair $(c_i,c_j)$ are the sums of the values and of the weights of $c_i$ and $c_j$, respectively. This clearly leads to an equivalent instance of the \textsc{Multichoice Knapsack}\xspace problem, does not alter the parameters $A_V$, $A_W$, and decreases $n$. On the other hand, $N$ and $\lambda$ may increase; the latter happens only if $|C_i| \cdot |C_j| > \lambda$. These two reduction rules let us implement our reduction procedure which constitutes the proof of Lemma~\ref{lem:knapred2}. \begin{proof} First, we apply \cref{lem:knapred} to make sure that $n\le 2\log A$ and $N = \mathcal{O}(\lambda \log A)$. We may now assume that $\lambda \ge 3^6$, as otherwise we already have $n = \mathcal{O}(\frac{\log A}{\log \lambda})$. Throughout the algorithm, whenever there are distinct classes of size at most $\sqrt{\lambda}$, we shall replace them with their Cartesian product. This may happen only $n-1$ times, and a single execution takes $\mathcal{O}(\lambda)$ time, so the total running time needed for this part is $\mathcal{O}(\lambda \log A)$. Furthermore, for every class that we get in the input instance or obtain as a Cartesian product, we apply \cref{lem:redstep}.
The total running time spent on this is also $\mathcal{O}(\lambda \log A)$. Having exhaustively applied these reduction rules, we are guaranteed that $\max(\rank_V(c),\rank_W(c))>\frac13\sqrt{\lambda}\ge \lambda^{\frac13}$ for items $c$ from all but one class. Without loss of generality, we assume that the classes satisfying this condition are $C_1,\ldots,C_k$. Recall that $v_{\min}(i)$ and $w_{\min}(i)$ are defined as minimum values and weights of items in class $C_i$ and that $V_{\min}$ and $W_{\min}$ are their sums over all classes. For $1\le i \le k$, we define $\Delta_V(i)$ as the difference between the $\big\lceil{\lambda^{\frac13}}\big\rceil$-th smallest value in the multiset $\{v(c) : c\in C_i\}$ and $v_{\min}(i)$. Next, we define $\Delta_V^{\mathrm{mid}}$ as the sum of the $\ceil{\frac{k}{2}}$ smallest values $\Delta_V(i)$. Symmetrically, we define $\Delta_W(i)$ and $\Delta_W^{\mathrm{mid}}$. We shall prove a claim analogous to that in the proof of \cref{lem:knapred}. \begin{claim} If $V_{\min} + \Delta_V^{\mathrm{mid}}\le V$, then $A_V \ge \lambda^{\frac16 k}$; if $W_{\min} + \Delta_W^{\mathrm{mid}}\le W$, then $A_W \ge \lambda^{\frac16 k}$; otherwise, we are dealing with a NO-instance. \end{claim} \begin{proof} First, suppose that $V_{\min} + \Delta_V^{\mathrm{mid}}\le V$. This means that there is a choice $S$ with $v(S)\le V$ which contains at least $\frac{k}{2}$ items $c$ with $\rank_V(c)\ge \lambda^{\frac13}$. By \cref{fct:comb}, the rank of this choice is at least $\lambda^{\frac16 k}$, so $A_V \ge \lambda^{\frac16 k}$, as claimed. The proof of the second case is analogous. Now, suppose that there is a feasible solution $S=\{c_1,\ldots,c_n\}$. For $1\le i \le k$, we have $\rank_V(c_i)\ge \lambda^{\frac13}$ or $\rank_W(c_i) \ge \lambda^{\frac13}$. Consequently, $\rank_V(c_i)\ge \lambda^{\frac13}$ holds for at least $\ceil{\frac{k}{2}}$ classes or $\rank_W(c_i)\ge \lambda^{\frac13}$ holds for at least $\ceil{\frac{k}{2}}$ classes. 
Without loss of generality, we assume that the former holds. Let $D$ be the set of (at least $\ceil{\frac{k}{2}}$) classes $i$ satisfying the condition. For each $i\in D$, we clearly have $v(c_i)\ge v_{\min}(i)+\Delta_V(i)$, while for each $i\notin D$, we have $v(c_i)\ge v_{\min}(i)$. Consequently, $V\ge v(S) \ge V_{\min} + \sum_{i\in D} \Delta_V(i) \ge V_{\min} + \Delta_V^{\mathrm{mid}}$. Hence, one of the first two conditions of the claim must be satisfied, which concludes the proof. \end{proof} The condition from the claim can be verified using a linear-time selection algorithm: first, we apply it for each class to compute $\Delta_V(i)$ and $\Delta_W(i)$, and then, globally, to determine $\Delta_V^{\mathrm{mid}}$ and $\Delta_W^{\mathrm{mid}}$. If one of the first two conditions holds, we return the instance obtained through the reduction. It satisfies $A \ge \lambda^{\frac 16 k}$, i.e., $n \le 1+k \le 1+6\frac{\log A}{\log \lambda}$. Otherwise, we construct a dummy NO-instance. \end{proof} \section{Weighted Consensus and General Weighted Pattern Matching}\label{sec:GWPMReduction} The \textsc{Weighted Consensus}\xspace problem is formally defined as follows. \defdsproblem{\textsc{Weighted Consensus}\xspace}{ Two weighted sequences $X$ and $Y$ of length $n$ with at most $\lambda$ letters at each position and $R$ in total, and a threshold probability \ensuremath{\frac1z}. }{ A string $S$ such that $S \approx_{\frac1z} X$ and $S \approx_{\frac1z} Y$ or NONE if no such string exists. } If such a consensus string exists for two weighted sequences, we write $X \approx_{\frac1z} Y$ and say that $X$ \emph{matches} $Y$ with probability \ensuremath{\frac1z}. With this definition of a match, we extend the notion of an occurrence and the notation $\mathit{Occ}_\ensuremath{\frac1z}(P,T)$ to arbitrary weighted sequences.
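For illustration, the matching relation boils down to a product of per-position probabilities. Below is a minimal Python sketch, assuming weighted sequences are represented as lists of letter-to-probability dictionaries; this representation and the function names are ours, chosen for exposition only:

```python
from math import prod

def match_prob(S, X):
    """Probability that the weighted sequence X generates the string S,
    i.e., the product of the per-position probabilities of the letters of S."""
    return prod(pos.get(c, 0.0) for c, pos in zip(S, X))

def matches(S, X, z):
    """S matches X with threshold 1/z iff the generation probability is >= 1/z."""
    return match_prob(S, X) >= 1 / z
```

Under this representation, $X \approx_{\frac1z} Y$ holds exactly when some string $S$ satisfies both \texttt{matches(S, X, z)} and \texttt{matches(S, Y, z)}.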
\defdsproblem{\textsc{General Weighted Pattern Matching}\xspace (\textsc{GWPM}\xspace)}{ Two weighted sequences $P$ and $T$ of length $m$ and $n$, respectively, with at most $\lambda$ letters at each position and $R$ in total, and a threshold probability \ensuremath{\frac1z}. }{ The set $\mathit{Occ}_\ensuremath{\frac1z}(P,T)$. } In the case of the \textsc{GWPM}\xspace problem, it is more useful to provide an \emph{oracle} that finds solid factors that correspond to particular occurrences of the pattern. Such an oracle, given $i \in \mathit{Occ}_\ensuremath{\frac1z}(P,T)$, computes a string that matches both $P$ and $T[i..i+m-1]$. We say that a string $P$ is a \emph{maximal \ensuremath{\frac1z}-solid prefix} of a weighted sequence $X$ if $P$ is a \ensuremath{\frac1z}-solid prefix of $X$ and no string $P' = Ps$, for $s \in \Sigma$, is a \ensuremath{\frac1z}-solid prefix of $X$. Our algorithms rely on the following simple combinatorial observation, originally due to Amir et al.\ \cite{amir_weighted_property_matching_j}. \begin{fact}[\cite{amir_weighted_property_matching_j}]\label{fct:maxprefixes} A weighted sequence has at most $z$ different maximal \ensuremath{\frac1z}-solid prefixes. \end{fact} The \textsc{Weighted Consensus}\xspace problem is actually a special case of \textsc{Multichoice Knapsack}\xspace. Namely, given an instance of the former, we can create an instance of the latter with $n$ classes $C_i$, each containing an item $c_{i,s}$ for every letter $s$ which has non-zero probability at position $i$ in both $X$ and $Y$. We set $v(c_{i,s})=-\log \pi^{(X)}_i(s)$ and $w(c_{i,s})=-\log \pi^{(Y)}_i(s)$ for this item, whereas the thresholds are $V=W=\log z$. It is easy to see that this reduction indeed yields an equivalent instance and that it can be implemented in linear time. 
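In code, this reduction is a one-to-one translation of the description above; here is a sketch under the same illustrative dictionary representation (the function name is ours, not the paper's):

```python
import math

def consensus_to_knapsack(X, Y, z):
    """Reduce a Weighted Consensus instance (X, Y, 1/z) to Multichoice Knapsack.
    X and Y are lists of dicts mapping letters to probabilities.  Class C_i
    holds an item (s, v, w) for every letter s with positive probability at
    position i in both sequences, where v and w are negated log-probabilities."""
    classes = []
    for px, py in zip(X, Y):
        classes.append([(s, -math.log2(px[s]), -math.log2(py[s]))
                        for s in px if px[s] > 0 and py.get(s, 0) > 0])
    # A choice is feasible iff its value sum is <= V and its weight sum is <= W.
    V = W = math.log2(z)
    return classes, V, W
```

A choice of one item per class then has value sum $\sum_i -\log \pi^{(X)}_i(s_i) \le \log z$ exactly when the chosen string matches $X$ with threshold $\frac1z$, and symmetrically for $Y$.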
By \cref{fct:maxprefixes}, we have $A\le z$ for this instance, so \cref{thm:knap} yields the following result: \begin{corollary}\label{cor:red_simple} The \textsc{Weighted Consensus}\xspace problem can be solved in $\mathcal{O}(R+\sqrt{z\lambda}\log z)$ time. \end{corollary} The \textsc{GWPM}\xspace problem can clearly be reduced to $n+m-1$ instances of \textsc{Weighted Consensus}\xspace. This leads to a naive $\mathcal{O}(nR + n\sqrt{z\lambda}\log z)$-time algorithm. Below, we remove the first term in this complexity. Our solution applies the approach used in \cref{sec:EWPM} for \textsc{Weighted Pattern Matching}\xspace and uses an observation analogous to \cref{obs:crucial}. \begin{observation}\label{obs:crucial2} If $X$ and $Y$ are weighted sequences that match with threshold $\ensuremath{\frac1z}$, then $d_H(\H(X),\H(Y)) \le 2\floor{\log z}$. Moreover, there exists a consensus string $S$ such that $S[i] = \H(X)[i] = \H(Y)[i]$ whenever $\H(X)[i] = \H(Y)[i]$. \end{observation} Our algorithm starts by computing $P'=\H(P)$ and $T'=\H(T)$ and the data structure for $\mathit{lcp}$-queries in $P'T'$. We try to match $P$ with every factor $T[p..p+m-1]$ of the text. Following \cref{obs:crucial2}, we check if $d_H(T'[p..p+m-1],P') \le 2\floor{\log z}.$ If not, then we know that no match is possible. Otherwise, let $D$ be the set of positions of mismatches between $T'[p..p+m-1]$ and $P'$. Assume that we store $\alpha = \prod_{j=1}^{m} \pi^{(T)}_{p+j-1}(T'[p+j-1])$ and $\beta = \prod_{j=1}^m \pi^{(P)}_j(P'[j]).$ Then, in $\mathcal{O}(|D|)$ time we can compute $\alpha'=\prod_{j\notin D} \pi^{(T)}_{p+j-1}(T'[p+j-1])$ and $\beta' = \prod_{j \notin D} \pi^{(P)}_j(P'[j])$. Now, we only need to check what happens at the positions in $D$. If $D = \emptyset$, then it suffices to check if $\alpha \ge \frac1z$ and $\beta \ge \frac1z$. Otherwise, we construct two weighted sequences $X$ and $Y$ by selecting only the positions from $D$ in $T[p..p+m-1]$ and in $P$.
We multiply the probabilities of all letters at the first position in $X$ by $\alpha'$ and in $Y$ by $\beta'$. It is clear that $X\approx_{\frac1z} Y$ if and only if $T[p..p+m-1]\approx_{\frac1z} P$. Thus, we reduced the \textsc{GWPM}\xspace problem to at most $n-m+1$ instances of the \textsc{Weighted Consensus}\xspace problem for strings of length $\mathcal{O}(\log z)$. By \cref{cor:red_simple}, solving such an instance takes $\mathcal{O}(\lambda\log z + \sqrt{z\lambda}\log z)=\mathcal{O}(\sqrt{z\lambda}\log z)$ time. Our reduction requires $\mathcal{O}(R\log^2 \log \lambda)$ time to preprocess the input (as in \cref{thm:wpm}), but this is dominated by the $\mathcal{O}(n\sqrt{z\lambda}\log z )$ total time of solving the \textsc{Weighted Consensus}\xspace instances. If we memorize the solutions to all those instances together with the sets of mismatches $D$ that lead to those instances, then we can also implement the oracle for the \textsc{GWPM}\xspace problem with $\mathcal{O}(m)$-time queries. In \cref{app:SDWC}, we design a tailor-made solution to replace the generic algorithm for the \textsc{Multichoice Knapsack}\xspace problem, which lets us improve the $\log z$ factor to $\log\log z + \log \lambda$. The following reduction from \textsc{Multichoice Knapsack}\xspace to \textsc{Weighted Consensus}\xspace immediately yields that any significant improvement in the dependence on $z$ and $\lambda$ in the running time of our algorithm would lead to breaking long-standing barriers for special cases of \textsc{Multichoice Knapsack}\xspace. \begin{lemma}\label{lem:red} Given an instance $I$ of the \textsc{Multichoice Knapsack}\xspace problem with $n$ classes of size $\lambda$, in linear time one can construct an equivalent instance of the \textsc{Weighted Consensus}\xspace problem with $z=\mathcal{O}(\prod_{i=1}^{n}|C_i|)$ and sequences of length $\mathcal{O}(n)$ over alphabet of size $\lambda$. 
\end{lemma} \begin{proof} We construct a pair of weighted sequences $X,Y$ of length $n$ over alphabet $\Sigma=\{1,\ldots,\lambda\}$. Intuitively, choosing letter $j$ at position $i$ will correspond to taking the $j$-th element of $C_i$, which we denote as $c_{i,j}$, into the solution $S$. Without loss of generality, we assume that weights and values are non-negative. Otherwise, we may subtract $v_{\min}(i)$ from $v(c_{i,j})$ and $w_{\min}(i)$ from $w(c_{i,j})$ for each item $c_{i,j}$, as well as $V_{\min}$ from $V$ and $W_{\min}$ from $W$. We set $M$ to the smallest power of two such that $M\ge\max(n, V, W)$. Let $p_i^{(X)}(j) = \log \pi_i^{(X)}(j)$ and $p_i^{(Y)}(j) = \log \pi_i^{(Y)}(j)$ for $j \in \Sigma$. For $j\in \{1,\ldots,|C_i|\}$, we set: $$p_i^{(X)}(j) = -\frac{\ceil{M\log|C_i|} + v(c_{i,j})}{M}, \quad p_i^{(Y)}(j)=-\frac{\ceil{M\log|C_i|} +w(c_{i,j})}{M}.$$ Clearly, $\sum_{j=1}^{|C_i|} 2^{p_i^{(X)}(j)}\le 1$ and $\sum_{j=1}^{|C_i|} 2^{p_i^{(Y)}(j)}\le 1$. Moreover, we set $$\log z_X = \frac1M \left(V + \sum_{i=1}^n\ceil{M\log|C_i|}\right) \quad \text{and} \quad \log z_Y = \frac1M\left(W + \sum_{i=1}^n\ceil{M\log|C_i|}\right).$$ By the choice of $M$, we have $\max(z_X,z_Y) \le 2^{\frac1M(\max(V,W)+n)}\prod_{i=1}^{n}|C_i|\le 4\prod_{i=1}^{n}|C_i|$. This way, for a string $P$ of length $n$, we have $$\log \P(P,X)=-\frac1M\left(\sum_{i=1}^n\ceil{M\log|C_i|}+\sum_{i=1}^n v(c_{i,P[i]})\right) \ge -\log z_X \; \Longleftrightarrow \; \sum_{i=1}^n v(c_{i,P[i]}) \le V$$ and $$\log \P(P,Y)=-\frac1M\left(\sum_{i=1}^n\ceil{M\log|C_i|}+\sum_{i=1}^n w(c_{i,P[i]})\right) \ge -\log z_Y \; \Longleftrightarrow \; \sum_{i=1}^n w(c_{i,P[i]}) \le W.$$ Thus, $P$ is a solution to the constructed instance of the \textsc{Weighted Consensus}\xspace problem with two threshold probabilities, $\frac{1}{z_X}$ and $\frac{1}{z_Y}$, if and only if $S = \{c_{i,j}\,:\,P[i]=j\}$ is a solution to the underlying instance of the \textsc{Multichoice Knapsack}\xspace problem.
To have a single threshold $z=\max(z_X,z_Y)$, we append an additional position $n+1$ with symbol 1 only, with $p_{n+1}^{(X)}(1)=0$ and $p_{n+1}^{(Y)}(1)=\log z_Y - \log z_X$ provided that $z_X \ge z_Y$, and symmetrically otherwise. If one wants to make sure that the probabilities at each position sum up to exactly one, two further letters can be introduced, one of which gathers the remaining probability in $X$ and has probability 0 in $Y$, while the other gathers the remaining probability in $Y$ and has probability 0 in $X$. \end{proof} \begin{theorem}\label{thm:lb} The \textsc{Weighted Consensus}\xspace problem is NP-hard and cannot be solved in: \begin{enumerate} \item $\mathcal{O}^*(z^{o(1)})$ time unless the Exponential Time Hypothesis (ETH) fails; \item $\mathcal{O}^*(z^{0.5-\varepsilon})$ time for some $\varepsilon>0$, unless there is an $\mathcal{O}^*(2^{(0.5-\varepsilon)n})$-time algorithm for the \textsc{Subset Sum}\xspace problem; \item $\tilde{\mathcal{O}}(R+z^{0.5}\lambda^{0.5-\varepsilon})$ time for some $\varepsilon>0$ and for $n=\mathcal{O}(1)$, unless there is an $\mathcal{O}(\lambda^{2(1-\varepsilon)})$-time algorithm for 3-\textsc{Sum}\xspace. \end{enumerate} \end{theorem} \begin{proof} We use \cref{lem:red} to derive algorithms for the \textsc{Multichoice Knapsack}\xspace problem based on hypothetical solutions for \textsc{Weighted Consensus}\xspace. \textsc{Subset Sum}\xspace is a special case of \textsc{Multichoice Knapsack}\xspace with $\lambda=2$, i.e., $\prod_{i}|C_i|=2^n$. Hence, an $\mathcal{O}^*(z^{o(1)})$-time solution for \textsc{Weighted Consensus}\xspace would yield an $\mathcal{O}^*(2^{o(n)})$-time algorithm for \textsc{Subset Sum}\xspace, which contradicts ETH by the results of Etscheid et al.~\cite{DBLP:conf/mfcs/EtscheidKMR15} and Gurari~\cite{DBLP:books/daglib/0069796}.
Similarly, an $\mathcal{O}^*(z^{0.5-\varepsilon})$-time solution for \textsc{Weighted Consensus}\xspace would yield an $\mathcal{O}^*(2^{(0.5-\varepsilon)n})$-time algorithm for \textsc{Subset Sum}\xspace. Moreover, $k$-\textsc{Sum}\xspace is a special case of \textsc{Multichoice Knapsack}\xspace with $n=k=\mathcal{O}(1)$, i.e., $\prod_{i}|C_i|=\lambda^{k}$. Hence, an $\tilde{\mathcal{O}}(R+z^{0.5}\lambda^{0.5-\varepsilon})$-time solution for \textsc{Weighted Consensus}\xspace yields an $\mathcal{O}(\lambda + \lambda^{1.5+0.5-\varepsilon})=\mathcal{O}(\lambda^{2-\varepsilon})$-time algorithm for 3-\textsc{Sum}\xspace. \end{proof} Nevertheless, it might still be possible to improve the dependence on $n$ in the \textsc{GWPM}\xspace problem. For example, one may hope to achieve $\tilde{\mathcal{O}}(nz^{0.5-\varepsilon}+z^{0.5})$ time for $\lambda=\mathcal{O}(1)$. \section{Faster \textsc{GWPM}\xspace via Short Dissimilar Weighted Consensus}\label{app:SDWC} This section provides a faster solution for the \textsc{General Weighted Pattern Matching}\xspace problem. The key ingredient is an improved solution for the following \textsc{Short Dissimilar Weighted Consensus}\xspace problem: \defdsproblem{\textsc{Short Dissimilar Weighted Consensus}\xspace (\textsc{SDWC}\xspace)}{ A threshold probability \ensuremath{\frac1z}\ and two weighted sequences $X$ and $Y$ of length $n\le 2\floor{\log z}$ with at most $\lambda\le z$ letters at each position and such that $\H(X)$ and $\H(Y)$ are \emph{dissimilar}, i.e., $\H(X)[i] \ne \H(Y)[i]$ for each position~$i$. }{ A string $S$ such that $S \approx_{\frac1z} X$ and $S \approx_{\frac1z} Y$ or NONE if no such string exists. } Note that the instances of the \textsc{Weighted Consensus}\xspace problem produced by the reduction of \cref{sec:GWPMReduction} are actually instances of the \textsc{SDWC}\xspace problem. 
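To make this connection concrete, such instances arise by restricting both sequences to the mismatch positions of their heavy strings; a Python sketch of this restriction step follows (same illustrative dictionary representation as before; the folding of the remaining probability mass into the first position, described in the previous section, is omitted for brevity):

```python
def heavy_string(X):
    """H(X): at every position, a letter of maximum probability."""
    return [max(pos, key=pos.get) for pos in X]

def restrict_to_mismatches(X, Y):
    """Keep only the positions where the heavy strings of X and Y differ.
    The restricted sequences have dissimilar heavy strings, as SDWC requires."""
    hx, hy = heavy_string(X), heavy_string(Y)
    D = [i for i in range(len(X)) if hx[i] != hy[i]]
    return [X[i] for i in D], [Y[i] for i in D], D
```

By the Hamming-distance bound of the previous section, a match is possible only when the set $D$ returned above has at most $2\floor{\log z}$ elements.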
Our tailor-made solution for the \textsc{SDWC}\xspace problem works in $\mathcal{O}(\sqrt{z\lambda} (\log\log z + \log \lambda))$ time. It assumes that the letters at each position of the weighted sequences are sorted according to probabilities (in addition to storing the dictionary of letters and probabilities). This can be achieved in $\mathcal{O}(\lambda \log \lambda)$ time for each position. We have just proved: \begin{lemma}\label{lem:sdwc} The \textsc{GWPM}\xspace problem and the computation of its oracle can be reduced in $\mathcal{O}(n \lambda \log \lambda)$ time to at most $n-m+1$ instances of the \textsc{SDWC}\xspace problem. \end{lemma} \subsection{Combinatorial Prerequisites} Our improvement upon the algorithm of \cref{thm:knap} is based on \cref{fct:maxprefixes}, whose analogue does not hold for \textsc{Multichoice Knapsack}\xspace in general. Technically, instead of the notion of maximal \ensuremath{\frac1z}-solid prefixes, the algorithm relies on \emph{light \ensuremath{\frac1z}-solid prefixes} defined as follows: We say that a string $P$ of length $k$ is a light \ensuremath{\frac1z}-solid prefix of a weighted sequence $X$ if $k=0$ or $P$ is a \ensuremath{\frac1z}-solid prefix of $X$ such that $P[k] \ne \H(X)[k]$. We symmetrically define \emph{light \ensuremath{\frac1z}-solid suffixes} of $X$. \cref{fct:maxprefixes} lets us bound the number of light solid prefixes. \begin{fact}\label{fct:lightprefixes} A weighted sequence has at most $z$ different light \ensuremath{\frac1z}-solid prefixes. \end{fact} \begin{proof} We show a pair of inverse mappings between the set of maximal \ensuremath{\frac1z}-solid prefixes of a weighted sequence $X$ and the set of light \ensuremath{\frac1z}-solid prefixes of $X$. If $P$ is a maximal \ensuremath{\frac1z}-solid prefix of $X$, then we obtain a light \ensuremath{\frac1z}-solid prefix by removing all trailing letters of $P$ that are heavy letters at the corresponding positions in $X$. 
For the inverse mapping, we extend each light \ensuremath{\frac1z}-solid prefix by heavy letters as long as the prefix is \ensuremath{\frac1z}-solid. \end{proof} We use the notions of light \ensuremath{\frac1z}-solid prefixes and light \ensuremath{\frac1z}-solid suffixes to express a result that we will use instead of \cref{lem:decomp,fct:bound}. \begin{lemma}\label{fct:key} Consider an instance of the \textsc{SDWC}\xspace problem, and let $z_\ell,z_r \ge 1$ be real numbers such that $z_\ell \cdot z_r \ge z$. Every consensus string $S$ can be decomposed into $S= L \cdot c\cdot C \cdot R$ such that the following conditions hold for some $U,V\in \{X,Y\}$: \begin{itemize} \item $L$ is a light $\frac{1}{z_\ell}$-solid prefix of $U$, \item $c$ is a single letter, \item all characters of $C$ are heavy in $V$, \item $R$ is a light $\frac{1}{z_r}$-solid suffix of $V$. \end{itemize} \end{lemma} \begin{proof} We set $L$ as the longest proper prefix of $S$ which is a $\frac{1}{z_{\ell}}$-solid prefix of both $X$ and $Y$, and we define $k := |L|$. Note that $L$ is a light $\frac{1}{z_{\ell}}$-solid prefix of $X$ or $Y$, because $\H(X)$ and $\H(Y)$ are dissimilar. If $k=n-1$, we conclude the proof by setting $c=S[n]$ and taking both $C$ and $R$ to be empty strings. Otherwise, we have $\P(S[1..k+1],V[1..k+1])<\frac{1}{z_\ell}$ for $V=X$ or $V=Y$. Since $\P(S,V)\ge \ensuremath{\frac1z}$ and $z_\ell \cdot z_r \ge z$, this implies $\P(S[k+2..n],V[k+2..n])\ge \frac{1}{z_r}$, i.e., that $C\cdot R = S[k+2..n]$ is a $\frac{1}{z_r}$-solid suffix of $V$. We set $C$ as the longest prefix of $S[k+2..n]$ composed of letters heavy in $V$. This way, $R$ is clearly a light $\frac{1}{z_r}$-solid suffix of $V$. \end{proof} \subsection{Computing Light Solid Prefixes} We say that a string $P$ is a common \ensuremath{\frac1z}-solid prefix (suffix) of $X$ and $Y$ if it is a \ensuremath{\frac1z}-solid prefix (suffix) of both $X$ and $Y$.
A \emph{standard representation} of a common \ensuremath{\frac1z}-solid prefix $P$ of length $k$ of $X$ and $Y$ is a triple $(P,p_1,p_2)$ such that $p_1$ and $p_2$ are the probabilities $p_1 = \P(P,X[1..k])$ and $p_2 = \P(P,Y[1..k])$. The string $P$ is written using variable-length encoding so that a letter that occurs at a given position with probability $p$ in $X$ or $Y$ has a representation that consists of $\mathcal{O}(\log\frac1p)$ bits. For every position $i$, the encoding can be constructed as follows: we sort letters $c$ according to $\max(\pi_i^{(X)}(c), \pi_i^{(Y)}(c))$ and assign subsequent integer identifiers according to this order. This lets us store a \ensuremath{\frac1z}-solid factor using $\mathcal{O}(\log z)$ bits: we concatenate the variable-length representations of its letters and we store a bit mask of size $\mathcal{O}(\log z)$ that stores the delimiters between the representations of single letters. An analogous representation can also be applied to common \ensuremath{\frac1z}-solid suffixes. Our assumptions on the model of computation imply that the standard representation takes constant space. Moreover, constant time is sufficient to extend a common \ensuremath{\frac1z}-solid prefix by a given letter. The following observation describes longer light solid prefixes in terms of shorter ones. \begin{observation}\label{obs:light_step} Let $P$ be a non-empty light \ensuremath{\frac1z}-solid prefix of $X$. If one removes its last letter and then removes all the trailing letters which are heavy at the respective positions in $X$, then a shorter light \ensuremath{\frac1z}-solid prefix of $X$ is obtained. \end{observation} We build upon \cref{obs:light_step} to derive an efficient algorithm constructing light solid prefixes.% \begin{lemma}\label{lem:lightprefixes_algo} Let $(X,Y,\ensuremath{\frac1z})$ be an instance of the \textsc{SDWC}\xspace problem and let $z'\le z$.
All common \ensuremath{\frac1z}-solid prefixes of $X$ and $Y$ being light $\frac{1}{z'}$-solid prefixes of $X$, sorted first by their length and then by the probabilities in $X$, can be generated in $\mathcal{O}(z' (\log \log z+\log \lambda)+\log^2 z)$ time. \end{lemma} \begin{proof} For $k \in \{0,\ldots,n\}$, let $\mathcal{B}_k$ be a list of the requested solid prefixes of length $k$ sorted by their probabilities $p_1$ in $X$. \cref{fct:lightprefixes} guarantees that $\sum_{k=0}^n |\mathcal{B}_k| \le z'$. We compute the lists $\mathcal{B}_k$ for subsequent lengths $k$. We start with $\mathcal{B}_0$ containing the empty string with its probabilities $p_1=p_2=1$. To compute $\mathcal{B}_k$ for $k>0$, we use \cref{obs:light_step}. We consider all candidates $i=k-1,\ldots,0$ for the length of the shorter light $\frac{1}{z'}$-solid prefix, and then all letters $s\ne \H(X)[k]$ to put at position $k$ of the new light $\frac{1}{z'}$-solid prefix. For a given $i$, we iterate over all elements $(P,p_1,p_2)$ of $\mathcal{B}_i$ ordered by the non-increasing probabilities $p_1$, and try to extend each of them by the heavy letters in $X$ at positions $i+1,\ldots,k-1$ and by the letter $s$ at position $k$. We process the letters $s$ ordered by $\pi_k^{(X)}(s)$, ignoring the first one ($\H(X)[k]$) and stopping as soon as we do not get a $\frac{1}{z'}$-solid prefix of $X$. More precisely, with $X'=\H(X)$, we compute $$p'_1:=p_1 \cdot \prod_{j=i+1}^{k-1} \pi^{(X)}_j(X'[j]) \cdot \pi^{(X)}_k(s)\quad\mbox{and}\quad p'_2:=p_2 \cdot \prod_{j=i+1}^{k-1} \pi^{(Y)}_j(X'[j]) \cdot \pi^{(Y)}_k(s),$$ check if $p'_1 \ge \frac{1}{z'}$ and $p'_2 \ge \frac1z$, and, if so, insert $(P \cdot X'[i+1..k-1] \cdot s,p'_1,p'_2)$ at the beginning of a new list $L_{i,s}$, indexed both by the letter $s$ and by the length $i$ of the shorter light $\frac{1}{z'}$-solid prefix. 
When we encounter an element $(P,p_1,p_2)$ of $\mathcal{B}_i$ and a letter $s$ for which $p'_1 < \frac{1}{z'}$, we proceed to the next element of $\mathcal{B}_i$. If this happens for the heaviest letter $s\ne \H(X)[k]$, we stop considering the current list $\mathcal{B}_i$ and proceed to $\mathcal{B}_{i-1}$. The final step consists in merging all the $k\lambda$ lists $L_{i,s}$ in the order of probabilities in $X$; the result is $\mathcal{B}_k$. Let us analyse the time complexity of the $k$-th step of the algorithm. If an element $(P,p_1,p_2)$ and letter $s$ that we consider satisfy $p'_1 \ge \frac{1}{z'}$, this accounts for a new light $\frac{1}{z'}$-solid prefix of $X$. Hence, in total (over all steps) we consider $\mathcal{O}(z')$ such elements. Note that some of these elements may be discarded due to the condition on $p'_2$. For each inspected element $(P,p_1,p_2)$, we also consider at most one letter $s$ for which $p'_1$ is not sufficiently large. If this is not the only letter considered for this element, such candidates can be charged to the previous case. The opposite situation may happen once for each list $\mathcal{B}_i$, which may give $\mathcal{O}(k)$ additional operations in the $k$-th step, $\mathcal{O}(\log^2 z)$ in total. Thanks to the order in which the lists are considered, the products of probabilities $\prod_{j=i+1}^{k-1} \pi^{(X)}_j(X'[j])$, $\prod_{j=i+1}^{k-1} \pi^{(Y)}_j(X'[j])$ and factors $X'[i+1..k-1]$ can be stored so that the representation of each subsequent light $\frac{1}{z'}$-solid prefix of length $k$ is computed in $\mathcal{O}(1)$ time. Finally, the merging step in the $k$-th phase takes $\mathcal{O}(|\mathcal{B}_k|\log(k\lambda)) = \mathcal{O}(|\mathcal{B}_k| (\log \log z+\log \lambda))$ time if a binary heap is used. The time complexity of the whole algorithm is $\mathcal{O}(\log^2 z + \sum_{k=1}^{n}|\mathcal{B}_k| (\log \log z+\log \lambda))$.
By the already mentioned \cref{fct:lightprefixes}, this is $\mathcal{O}(\log^2 z+z' (\log \log z+\log \lambda))$. \end{proof} \subsection{Merge-in-the-Middle Implementation} In this section we apply \cref{fct:key} to solve the \textsc{SDWC}\xspace problem. We use \cref{lem:lightprefixes_algo} to generate all candidates for $L\cdot c$ and $R$, and we apply a divide-and-conquer procedure to fill in the middle part $C$. Our procedure works for fixed $U,V\in \{X,Y\}$; the algorithm repeats it for all four choices. Let $\L_i$ denote a list of all common \ensuremath{\frac1z}-solid prefixes of $X$ and $Y$ obtained by extending a light $\frac{\sqrt{\lambda}}{\sqrt{z}}$-solid prefix of $U$ of length $i-1$ by a single letter $s$ at position $i$, and let $\mathcal{R}_i$ denote a list of all common $\frac{1}{z}$-solid suffixes of $X$ and $Y$ of length $n-i+1$ that are light $\frac1{\sqrt{z\lambda}}$-solid suffixes of $V$. We assume that the lists $\L_i$ and $\mathcal{R}_i$ are sorted according to the probabilities in $U$ and $V$, respectively. \begin{lemma}\label{lem:L_R} The lists $\L_i$ and $\mathcal{R}_i$ for $i \in \{1,\ldots,n+1\}$ can be computed in $\mathcal{O}(\sqrt{z\lambda} (\log \log z+\log \lambda))$ time. Their total size is $\mathcal{O}(\sqrt{z \lambda})$. \end{lemma} \begin{proof} The $\mathcal{O}(\sqrt{z\lambda} (\log \log z+\log\lambda))$-time computation of the lists $\mathcal{R}_i$ is directly due to \cref{lem:lightprefixes_algo}. As for the lists $\L_i$, we first compute in $\mathcal{O}(\frac{\sqrt{z}}{\sqrt{\lambda}}(\log \log z+\log\lambda))$ time the lists of all light $\frac{\sqrt{\lambda}}{\sqrt{z}}$-solid prefixes of $U$, sorted by the lengths of strings and then by the probabilities in $U$, again using \cref{lem:lightprefixes_algo}. Then for each length $i-1$ and for each letter $s$ at the $i$-th position, we extend all these prefixes by a single letter.
This way we obtain $\lambda$ lists for a given $i-1$ that can be merged according to the probabilities in $U$ to form the list~$\L_i$. Generation of the auxiliary lists takes $\mathcal{O}(\frac{\sqrt{z}}{\sqrt{\lambda}}\cdot \lambda)=\mathcal{O}(\sqrt{z\lambda})$ time in total, and merging them using a binary heap takes $\mathcal{O}(\sqrt{z\lambda} \log \lambda)$ time. This way we obtain an $\mathcal{O}(\sqrt{z\lambda} (\log \log z+\log\lambda))$-time algorithm. \end{proof} Let $\mathcal{L}^*_{a,b}$ be a list of common \ensuremath{\frac1z}-solid prefixes of $X$ and $Y$ of length $b$ obtained by taking a common \ensuremath{\frac1z}-solid prefix from $\L_i$ for some $i \in \{a,\ldots,b\}$ and extending it by $b-i$ letters that are heavy at the respective positions in $V$. Similarly, $\mathcal{R}^*_{a,b}$ is a list of common \ensuremath{\frac1z}-solid suffixes of length $n-a+1$ obtained by taking a common \ensuremath{\frac1z}-solid suffix from $\mathcal{R}_i$ for some $i \in \{a,\ldots,b\}$ and prepending it by $i-a$ letters that are heavy in $V$. Again, we assume that each of the lists $\mathcal{L}^*_{a,b}$ and $\mathcal{R}^*_{a,b}$ is sorted according to the probabilities in $U$ and $V$, respectively. A \emph{basic interval} is an interval $[a,b]$ represented by its endpoints $1 \le a \le b \le n+1$ such that $2^j \mid a-1$ and $b=\min(n+1,a+2^j-1)$ for some integer $j$ called the \emph{layer} of the interval. For every $j=0,\ldots,\ceil{\log n}$, there are $\Theta(\frac{n}{2^j})$ basic intervals and they are pairwise disjoint. \begin{example} For $n=7$, the basic intervals are $[1,1],\ldots,[8,8],[1,2],[3,4],[5,6],[7,8],\allowbreak[1,4],[5,8],[1,8]$. \end{example} \begin{lemma}\label{lem:Ls_Rs} The lists $\mathcal{L}^*_{a,b}$ and $\mathcal{R}^*_{a,b}$ for all basic intervals $[a,b]$ use $\mathcal{O}(\sqrt{z\lambda}\log\log z)$ space and can be constructed in $\mathcal{O}(\sqrt{z\lambda}(\log\log z+\log \lambda))$ time. 
\end{lemma} \begin{proof} We compute all the lists $\mathcal{L}^*_{a,b}$ and $\mathcal{R}^*_{a,b}$ for consecutive layers $j=0,\ldots,\ceil{\log n}$ of basic intervals $[a,b]$. For $j=0$, we have $\mathcal{L}^*_{a,a} = \L_a$ and $\mathcal{R}^*_{a,a} = \mathcal{R}_a$. Suppose that we wish to compute $\mathcal{L}^*_{a,b}$ for $a<b$ at layer $j$ (the computation of $\mathcal{R}^*_{a,b}$ is symmetric). Take $c=a+2^{j-1}-1$. Let us iterate through all the elements $(P,p_1,p_2)$ of the list $\mathcal{L}^*_{a,c}$, extend each string $P$ by $\H(V)[c+1..b]$, and multiply the probabilities $p_1$ and $p_2$ by $$\prod_{i=c+1}^{b} \pi^{(X)}_i(\H(V)[i]) \quad\mbox{and}\quad \prod_{i=c+1}^{b} \pi^{(Y)}_i(\H(V)[i]),$$ respectively. If a common \ensuremath{\frac1z}-solid prefix is obtained, it is inserted at the end of an auxiliary list $L$. The resulting list $L$ is merged with $\mathcal{L}^*_{c+1,b}$ according to the probabilities in $U$; the result is $\mathcal{L}^*_{a,b}$. Thus, we can compute $\mathcal{L}^*_{a,b}$ in time proportional to the sum of lengths of $\mathcal{L}^*_{a,c}$ and $\mathcal{L}^*_{c+1,b}$. (Note that the necessary products of probabilities can be computed in $\mathcal{O}(n) = \mathcal{O}(\log z)$ total time.) For every $j=1,\ldots,\ceil{\log n}$, the total length of the lists from the $j$-th layer does not exceed the total length of the lists from the $(j-1)$-th layer. By \cref{lem:L_R}, the lists at the $0$-th layer have size $\mathcal{O}(\sqrt{z\lambda})$. The conclusion follows from the fact that $\log n = \mathcal{O}(\log\log z)$. \end{proof} Next, we provide an analogue of \cref{lem:knap2}. \begin{lemma}\label{lem:meet} Let $L$ and $R$ be lists containing, for some $k\in\{0,\ldots,n\}$, standard representations of common \ensuremath{\frac1z}-solid prefixes of length $k$ and common \ensuremath{\frac1z}-solid suffixes of length $n-k$ of $X$ and $Y$. 
If the elements of each list are sorted according to non-decreasing probabilities in $X$ or $Y$, one can check in $\mathcal{O}(|L|+|R|)$ time whether the concatenation of any \ensuremath{\frac1z}-solid prefix from $L$ and \ensuremath{\frac1z}-solid suffix from $R$ yields a string $S$ such that $S \approx_{\frac1z} X$ and $S \approx_{\frac1z} Y$. \end{lemma} \begin{proof} First, we filter out dominated elements of the lists, i.e., elements $(P,p_1,p_2)$ such that there exists another element $(P',p_1',p_2')$ with $p_1'> p_1$ and $p_2'> p_2$. After this operation, we make sure that both lists are sorted with respect to the non-decreasing probabilities in $X$; this might require reversing the list. For every element $(P,p_1,p_2)$ of $L$, we compute the first (leftmost) element $(P',p'_1,p'_2)$ of $R$ such that $p_1 p'_1 \ge \frac1z$. This element maximizes $p'_2$ among all elements satisfying the latter condition. Hence, it suffices to check if $p_2 p'_2 \ge \frac1z$, and if so, report the result $S=PP'$. As the lists are ordered by $p_1$ and $p'_1$, respectively, all such elements can be computed in $\mathcal{O}(|L|+|R|)$ total time. \end{proof} Finally, we are ready to apply a divide-and-conquer approach to the \textsc{SDWC}\xspace problem: \begin{lemma}\label{lem:DWM_hard} The \textsc{SDWC}\xspace problem can be solved in $\mathcal{O}(\sqrt{z\lambda} (\log \log z + \log \lambda))$ time. \end{lemma} \begin{proof} The algorithm goes along \cref{fct:key}, considering all choices of $U$ and $V$. For each of them, we proceed as follows: First, we compute the lists $\L_i$, $\mathcal{R}_i$ and $\mathcal{L}^*_{a,b}$, $\mathcal{R}^*_{a,b}$ for all basic intervals. By \cref{lem:L_R,lem:Ls_Rs}, this takes $\mathcal{O}(\sqrt{z\lambda} (\log\log z+\log \lambda))$ time. 
In order to find out if there is a feasible solution, it suffices to attempt joining a common \ensuremath{\frac1z}-solid prefix from $\L_j$ with a common \ensuremath{\frac1z}-solid suffix from $\mathcal{R}_k$ for some indices $1 \le j < k \le n+1$ by heavy letters of $V$ at positions $j+1,\ldots,k-1$. We use a recursive routine to find such a pair of indices $j$, $k$ in a basic interval $[a,b]$ which has positive length and therefore can be decomposed into two basic subintervals $[a,c]$ and $[c+1,b]$. Then either $j \le c < k$, or both indices $j$, $k$ belong to the same interval $[a,c]$ or $[c+1,b]$. To check the former case, we apply the algorithm of \cref{lem:meet} to $L = \mathcal{L}^*_{a,c}$ and $R = \mathcal{R}^*_{c+1,b}$. The two latter cases are solved by recursive calls for the subintervals. The recursive routine is called first for the basic interval $[1,n+1]$. The computations performed by the routine for the basic intervals at the $j$-th level take at most the time proportional to the total size of lists $\mathcal{L}^*_{a,b}$, $\mathcal{R}^*_{a,b}$ at the $(j-1)$-th level. \cref{lem:Ls_Rs} shows that the total size of the lists at all levels is $\mathcal{O}(\sqrt{z\lambda} \log\log z)$. Consequently, the whole procedure works in $\mathcal{O}(\sqrt{z\lambda} (\log\log z+\log \lambda))$ time. \end{proof} \cref{lem:DWM_hard} combined with \cref{lem:sdwc} provides an efficient implementation of the \textsc{General Weighted Pattern Matching}\xspace. \begin{theorem}\label{lem:gwpm} The \textsc{GWPM}\xspace problem can be solved in $\mathcal{O}(n\sqrt{z\lambda}(\log \log z + \log \lambda))$ time. An oracle for the \textsc{GWPM}\xspace problem using $\mathcal{O}(n \log z)$ space and supporting queries in $\mathcal{O}(m)$ time can be computed within the same time complexity. 
\end{theorem} % % \section{Faster Algorithms for Large $\lambda$}\label{app:fast} In this section we analyse the running times of algorithms for the \textsc{Multichoice Knapsack}\xspace problem expressed as $\mathcal{O}(n^{\mathcal{O}(1)}\cdot T(a,\lambda))$ for some function $T$ monotone with respect to both arguments. The algorithm of \cref{thm:knap} proves that achieving $T(a,\lambda)=\sqrt{a\lambda}$ is possible. On the other hand, if we assume that \textsc{Subset Sum}\xspace does not admit an $\mathcal{O}^*(2^{(0.5-\varepsilon)n})$-time solution, then we immediately get that we cannot have $T(a,2)=\mathcal{O}(a^{0.5 -\varepsilon})$ for any $\varepsilon > 0$. Similarly, the 3-\textsc{Sum}\xspace conjecture implies that $T(\lambda^3,\lambda)=\mathcal{O}(\lambda^{2-\varepsilon})$ is impossible. While this already refutes the possibility of having $T(a,\lambda)=\mathcal{O}(a^{0.5}\lambda^{0.5-\varepsilon})$ across all arguments $(a,\lambda)$, such a bound may still hold for some special cases covering an infinite number of arguments. For example, we may potentially achieve $T(a,\lambda)=\mathcal{O}((a\lambda)^{0.5-\varepsilon})=\mathcal{O}(\lambda^{1.5-\varepsilon})$ for $a=\lambda^2$. Before we prove that this is indeed possible, let us see the consequences of the hardness of 3-\textsc{Sum}\xspace and, in general, $(2k-1)$-\textsc{Sum}\xspace. For a positive integer $k$, the $(2k-1)$-\textsc{Sum}\xspace conjecture refutes $T(\lambda^{2k-1},\lambda)=\mathcal{O}(\lambda^{k-\varepsilon})$. By monotonicity of $T$ with respect to the first argument, we conclude that $T(\lambda^{c},\lambda)=\mathcal{O}(\lambda^{k-\varepsilon})$ is impossible for $c\ge 2k-1$. On the other hand, monotonicity with respect to the second argument shows that $T(\lambda^{c},\lambda)=\mathcal{O}(\lambda^{c\frac{k}{2k-1}-\varepsilon})$ is impossible for $c\le 2k-1$.
The lower bounds following from $(2k-1)$-\textsc{Sum}\xspace and $(2k+1)$-\textsc{Sum}\xspace turn out to meet at $c=2k-1+\frac{1}{k+1}$; see \cref{fig:graph}. \begin{figure}[hb] \begin{center} \begin{tikzpicture} \draw[->] (0.5,0.5) -- (7.5,0.5) node[right] {$c$}; \draw[->] (0.5,0.5) -- (0.5,4.5); \draw (1,1) -- (1.5, 1) -- (3,2) -- (10/3,2) -- (5,3) -- (5.25,3) -- (7,4); \draw[thin, dotted] (1,1) -- (7,4); \foreach \x in {1,...,7} { \draw (\x,0.6) -- (\x, .4) node[below] {\tiny $\x$}; } \foreach \x in {1,...,4} { \draw (0.6,\x) -- (.4,\x) node[left] {\tiny $\x$}; } \end{tikzpicture} \end{center} \caption{Illustration of the upper bound (dotted) and lower bound (solid) on $\log_{\lambda}T(\lambda^c,\lambda)$.}\label{fig:graph} \end{figure} Consequently, we have some room between the lower and the upper bound of $\sqrt{a \lambda}$. In the aforementioned case of $a=\lambda^2$, the upper bound is $\lambda^{\frac32}$, compared to the lower bound of $\lambda^{\frac43-\varepsilon}$. Below, we show that the upper bound can be improved to meet the lower bound. More precisely, we show an algorithm whose running time is $\mathcal{O}(N + (a^{\frac{k+1}{2k+1}}+\lambda^k)\log\lambda \cdot n^k)$ for every positive integer $k$. Note that $a^{\frac{k+1}{2k+1}}+\lambda^k = \lambda^{c\frac{k+1}{2k+1}}+ \lambda^k$, so for $2k-1\le c \le 2k+1$ the running time indeed matches the lower bounds up to the $n^k$ term. Due to \cref{lem:knapred2}, the extra $n^k$ term reduces to $\mathcal{O}((\frac{\log A}{\log \lambda})^k)$, and if we measure the running time using $A$ instead of $a$, it becomes a constant ($k^{\mathcal{O}(k)}$). In particular, this lets us prove that the \textsc{GWPM}\xspace problem can be solved in $\mathcal{O}(n(z^{\frac{k+1}{2k+1}}+\lambda^k)\log\lambda)$ time for any integer $k=\mathcal{O}(1)$, improving upon the solution of \cref{sec:GWPMReduction} unless $z=\lambda^{\omega(1)}$ or $z=\lambda^{c\pm o(1)}$ for an odd integer $c$. 
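As a sanity check, the meeting point of the two neighbouring lower-bound families quoted above can be derived by equating their exponents (an elementary computation, included here for completeness):

```latex
% (2k-1)-Sum forbids T(\lambda^c,\lambda)=O(\lambda^{k-\varepsilon}) for c \ge 2k-1,
% while (2k+1)-Sum forbids T(\lambda^c,\lambda)=O(\lambda^{c\frac{k+1}{2k+1}-\varepsilon})
% for c \le 2k+1.  The two families cross where
\[
  c\cdot\frac{k+1}{2k+1}=k
  \iff
  c=\frac{k(2k+1)}{k+1}=\frac{(2k-1)(k+1)+1}{k+1}=2k-1+\frac{1}{k+1}.
\]
```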
\subsection{Algorithm for Multichoice Knapsack}\label{app:fastmk} Let us start by discussing the bottleneck of the algorithm of \cref{thm:knap} for large $\lambda$. The problem is that the size of the classes does not let us partition every choice $S$ into a prefix $L$ and a suffix $R$ with ranks both $\mathcal{O}(\sqrt{A_V})$. \cref{lem:decomp} leaves us with an extra letter $c$ between $L$ and $R$, and in the algorithm we append it to the prefix (while generating $\L_{j-1}^{(\ell)}\odot C_j$). We provide a workaround based on reordering the classes. Our goal is to make sure that items with large rank appear only in a few leftmost classes. For this, we guess the classes of the $k$ items with largest rank (in a feasible solution) and move them to the front. Since this depends on the sought feasible solution, we shall actually verify all $\binom{n}{k}$ possibilities. Now, our solution considers two cases: For $j>k$, the reordering lets us assume $\rank_V(c)\le \ell^{\frac{1}{k}}$, so we do not need to consider all items from $C_j$. For $j\le k$, on the other hand, we exploit the fact that $|\L_{j-1}^{(\ell)}\odot C_j|\le \lambda^{j}$, which is at most $\lambda^k$. The combinatorial foundation of this intuition is formalized as a variant of \cref{lem:decomp}: \begin{lemma}\label{lem:decomp2} Let $\ell$ and $r$ be positive integers such that $V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}}> V$ for every $0\le j \le n$. Let $k \in \{1,\ldots,n\}$ and suppose that $S$ is a choice with $v(S)\le V$ such that $\rank_V(S\cap C_i)\ge \rank_V(S\cap C_j)$ for $i \le k < j$. There is an index $j\in\{1,\ldots,n\}$ and a decomposition $S=L\cup\{c\}\cup R$ such that $L\in \L_{j-1}^{(\ell)}$, $R\in \mathcal{R}_{j+1}^{(r)}$, $c\in C_j$, and either $\rank_V(c)\le \ell^{\frac{1}{k}}$ or $j \le k$. \end{lemma} \begin{proof} We claim that the decomposition constructed in the proof of \cref{lem:decomp} satisfies the extra $\rank_V(c)\le \ell^{\frac{1}{k}}$ condition if $j > k$.
Let $S = \{c_1,\ldots,c_n\}$ and $S_i = \{c_1,\ldots,c_i\}$. Obviously $\rank_V(c_i)\ge 1$ for $k < i < j$ and, by the extra assumption, $\rank_V(c_i)\ge \rank_V(c)$ for $1\le i \le k$. Hence, \cref{fct:comb} yields $\rank_V(S_{j-1})\ge \rank_V(c)^k$. Simultaneously, we have $v(S_{j-1})<V^{(\ell)}_{\L_j}$, so $\rank_V(S_{j-1})<\ell$. Combining these inequalities, we immediately get the claimed bound. \end{proof} \begin{theorem}\label{thm:knap3} For every positive integer $k=\mathcal{O}(1)$, the \textsc{Multichoice Knapsack}\xspace problem can be solved in $\mathcal{O}(N+\allowbreak {(a^{\frac{k+1}{2k+1}}+\lambda^k)}\log A (\frac{\log A}{\log \lambda})^{k})$ time. \end{theorem} \begin{proof} As in the proof of \cref{thm:knap}, we actually provide an algorithm whose running time depends on $A_V$ rather than $a$. Moreover, \cref{lem:knapred2} lets us assume that $n=\mathcal{O}(\frac{\log A}{\log \lambda})$. We first guess the $k$ positions where items with largest ranks $\rank_V$ are present in the solution $S$ and move these positions to the front. This gives $\binom{n}{k}=\mathcal{O}((\frac{\log A}{\log \lambda})^k)$ possible selections. For each of them, we proceed as follows. We increment an integer $r$ starting from $1$, maintaining $\ell=\big\lceil r^{\frac{k}{k+1}}\big\rceil$ and all the lists $\L_{j}^{(\ell)}$ and $\mathcal{R}_{j+1}^{(r)}$ for $0\le j \le n$, as long as $V^{(\ell)}_{\L_j}+V^{(r)}_{\mathcal{R}_{j+1}}\le V$ for some $j$. By \cref{fct:bound}, we stop with $r=\mathcal{O}(A_V^{\frac{k+1}{2k+1}})$ and thus the total time of this phase is $\mathcal{O}(A_V^{\frac{k+1}{2k+1}}\log A)$ due to the online procedure of \cref{lem:generate}. By \cref{lem:decomp2}, every feasible solution $S$ admits a decomposition $S=L\cup\{c\}\cup R$ for some $j$; we shall consider all possibilities for $j$. 
For each of them, we reduce searching for $S$ to an instance of the \textsc{Multichoice Knapsack}\xspace problem with $N'=\mathcal{O}(A_V^{\frac{k+1}{2k+1}}+\lambda^k)$ and $n'=2$. By \cref{lem:knap2}, these instances can be solved in $\mathcal{O}((A_V^{\frac{k+1}{2k+1}}+\lambda^k)\frac{\log A}{\log \lambda})$ time in total. For $j\le k$, the items of the $j$-th instance are going to belong to classes $\L_{j-1}^{(\ell)}\odot C_j$ and $\mathcal{R}_{j+1}^{(r)}$, where $\L_{j-1}^{(\ell)}\odot C_j = \{L\cup \{c\} : L\in \L_{j-1}^{(\ell)} , c\in C_j\}$. The set $\L_{j-1}^{(\ell)}\odot C_j$ can be sorted by merging $|C_j|$ sorted lists of size at most $\lambda^{j-1}$ each, i.e., in $\mathcal{O}(\lambda^k \log \lambda)$ time. On the other hand, for $j > k$, we take $\{L\cup \{c\} : L\in \L_{j-1}^{(\ell)} , c\in C_j, \rank_V(c)\le \ell^{\frac{1}{k}}\}$ and $\mathcal{R}_{j+1}^{(r)}$. The former set can be constructed by merging $\ell^{\frac{1}{k}}=\mathcal{O}(r^\frac{1}{k+1})$ sorted lists of size $\mathcal{O}(r^\frac{k}{k+1})$ each, i.e., in $\mathcal{O}(r\log \lambda)=\mathcal{O}(A_V^{\frac{k+1}{2k+1}}\log \lambda)$ time. Summing up over all indices $j$, this gives $\mathcal{O}((A_V^{\frac{k+1}{2k+1}} + \lambda^k)\log A)$ time for a single selection of the $k$ positions with largest ranks, and $\mathcal{O}((A_V^{\frac{k+1}{2k+1}} + \lambda^k)\log A (\frac{\log A}{\log \lambda})^{k})$ in total. Clearly, each solution of the constructed instances represents a solution of the initial instance, and by \cref{lem:decomp2}, every feasible solution of the initial instance has its counterpart in one of the constructed instances. Before we conclude the proof, let us note that the optimal $k$ does not need to be known in advance. To deal with this issue, we try consecutive integers $k$ and stop the procedure if \cref{fct:bound} yields that $A_V > \lambda^{2k+1}$, i.e., if $r$ is incremented beyond $\lambda^{k+1}$. 
If the same happens for the other instance of the algorithm (operating on $\rank_W$ instead of $\rank_V$), we conclude that $a>\lambda^{2k+1}$, and thus we had better use a larger $k$. The running time until this point is $\mathcal{O}(\lambda^{k+1}\log\lambda (\frac{\log A}{\log \lambda})^k)$ due to \cref{lem:generate}. On the other hand, if $r\le \lambda^{k+1}$, the algorithm behaves as if $a \le \lambda^{2k+1}$, i.e., runs in $\mathcal{O}(\lambda^{k+1}\log\lambda (\frac{\log A}{\log \lambda})^k)$ time. This workaround (considering all smaller values of $k$) adds an extra $\mathcal{O}(\lambda^{k}\log\lambda (\frac{\log A}{\log \lambda})^{k-1})$ term to the time complexity for the optimal value of $k$, which is less than the upper bound on the running time we have for this value of $k$. \end{proof} If we are to bound the complexity in terms of $A$ only, the running time becomes $${\mathcal{O}(N+ {(A^{\frac{k+1}{2k+1}}+\lambda^k)}\log A (\frac{\log A}{\log \lambda})^{k})}.$$ The assumptions that $A\le \lambda^{2k+1}$ and $k=\mathcal{O}(1)$ let us get rid of the $(\frac{\log A}{\log \lambda})^{k}$ term, which can be bounded by $(2k+1)^k=\mathcal{O}(1)$. If one of these assumptions is not satisfied, we can improve the bound on the running time anyway: using \cref{thm:knap3} with increased $k$ if $A> \lambda^{2k+1}$, and using \cref{thm:knap} if $k=\omega(1)$. \begin{corollary} Let $k=\mathcal{O}(1)$ be a positive integer such that $A\le \lambda^{2k+1}$. The \textsc{Multichoice Knapsack}\xspace problem can be solved in $\mathcal{O}(N+ {(A^{\frac{k+1}{2k+1}}+\lambda^k)}\log \lambda)$ time. \end{corollary} This leads to the following results for weighted pattern matching: \begin{theorem}\label{thm:fast} Suppose that $\lambda^{2k-1}\le z \le \lambda^{2k+1}$ for some positive integer $k=\mathcal{O}(1)$.
Then the \textsc{SDWC}\xspace problem can be solved in $\mathcal{O}((z^{\frac{k+1}{2k+1}} + \lambda^{k})\log\lambda)$ time, and the \textsc{GWPM}\xspace problem can be solved in $\mathcal{O}(n(z^{\frac{k+1}{2k+1}} + \lambda^{k})\log\lambda)$ time. \end{theorem} As we noted at the beginning of this section, \cref{lem:red} implies that any improvement of the dependence of the running time on $z$ or $\lambda$ by $z^{\varepsilon}$ (equivalently, by $\lambda^{\varepsilon}$) would contradict the $k$-\textsc{Sum}\xspace conjecture. \section{Final Remarks}\label{sec:fr} In \cref{sec:MK}, we gave an $\mathcal{O}(N+a^{0.5}\lambda^{0.5}\log A)$-time algorithm for the \textsc{Multichoice Knapsack}\xspace problem. Improvement of either exponent to $0.5 - \varepsilon$ would result in a breakthrough for the \textsc{Subset Sum}\xspace and 3-\textsc{Sum}\xspace problems, respectively. Nevertheless, this does not refute the existence of faster algorithms for some particular values $(a,\lambda)$ other than those emerging from instances of \textsc{Subset Sum}\xspace or 3-\textsc{Sum}\xspace. In \cref{app:fast}, we show an algorithm that is superior if $\frac{\log a}{\log \lambda}$ is a constant other than an odd integer. We also prove it to be optimal (up to lower order terms) for every constant $\frac{\log a}{\log \lambda}$ unless the $k$-\textsc{Sum}\xspace conjecture fails. \bibliographystyle{plain}
\section{Introduction} Claude Shannon chose the bit as the fundamental unit of information. A system which can only be ``on'' or ``off'' is the simplest choice, but no fundamental reason prevents the adoption of $d>2$ logical levels for information processing. Nowadays qudits, i.e. $d$-level quantum systems, can be easily engineered, controlled and measured, thus ensuring more freedom in choosing which dimensionality to use. The interest in these systems resides in the fact that dealing with arbitrary dimensions may simplify the general structure of a quantum protocol. Moreover, quantum key distribution schemes have been demonstrated to be more resilient to a specific class of eavesdropping attacks when adopting qutrits ($d=3$) or ququads ($d=4$) instead of qubits \cite{00-bec-qua, 02-bru-opt, 02-cer-sec, 03-dur-sec}. Multi-level systems, and in particular qutrits, have been shown to be more efficient also for designing other security protocols, e.g. bit commitment or coin tossing \cite{04-lan-mea,05-mol-exp}, and for fundamental tests of quantum mechanics \cite{04-the-bel,02-col-bel,02-kas-cla}. Some optical realizations and applications of qutrits, exploiting different physical processes, have been demonstrated \cite{05-bar-gen}. Time-bin entangled qudits are generated by a time-frequency entangled photon pair through a multi-armed Franson interferometer \cite{04-the-bel}. In this case the dimensionality $d$ is given by the number of arms. This scheme presents a certain rigidity in switching among different states. A different approach exploits orbital angular momentum entanglement of single photons generated by Spontaneous Parametric Down Conversion (SPDC), but only partial control of the qutrit state is provided. Indeed, in the method of Refs. \cite{04-lan-mea,05-bar-gen,04-mol-tri,06-gro-exp} a specific hologram is needed for each qutrit state. Transverse momentum correlation has also been used to realize spatial bins \cite{05-osu-pix,05-nev-gen}.
However, in this case too it seems unclear how to perform the rotation of the generated state efficiently. More recently, the experimental realization of arbitrary qutrit states, adopting the polarization degree of freedom of a two-photon state, was reported \cite{04-bog-qut}. In this technique, three parametric sources (two type I and one type II nonlinear crystals), placed respectively within and outside an interferometer, are pumped by a common laser; this arrangement requires a critical adjustment of the qutrit phase. Moreover, the two collinear photons determining the qutrit state are divided by a symmetric BS. This further reduces the already low production rate of the 3-level systems. It is worth noting that qutrits have also been prepared by postselection from a four-photon entangled state \cite{02-how-exp}. In this paper we present the experimental realization of the proposal of Ref. \cite{05-dar-gen} to generate qutrits by using a single nonlinear crystal and linear optical elements such as wave plates. Qutrits are encoded in the polarization of two photons initially prepared in a non-maximally entangled state, which plays the role of a ``seed'' state. Mutually unbiased bases can be obtained by linear optical transformations acting on two different seeds. This technique has the advantage of combining accurate control and flexibility in the generation of the state with a high brilliance level. The paper is organized as follows. Section \ref{sec:theory} concerns the description of the theoretical proposal of \cite{05-dar-gen}. We explain how to generate a two-photon polarization qutrit starting from a non-maximally entangled state and using linear optics elements. Section \ref{sec:experiment} shows the experimental results obtained by our technique. First we describe the source of entangled photons used in our experiment (subsection \ref{sec:source}) and present the experimental realization of the seed states (\ref{sec:seed}).
Then, in subsections \ref{sec:hada} and \ref{sec:phase}, the last stage of qutrit preparation, namely the application of unitary transformations to each photon, is described. \section{\label{sec:theory}Theory} Let us consider the polarization qutrit \begin{equation}\label{xi} \ket{\xi_{\psi,\phi}}=\frac{1}{\sqrt3}\left(\ket{H}_1\ket H_2+e^{\text i\psi}\ket{V}_1\ket V_2+ e^{\text i\phi}\ket{\psi^+}_{12}\right)\,, \end{equation} where $1$ and $2$ label the two particles, $|H\rangle $ and $|V\rangle $ correspond to the horizontal and vertical polarization states and $\ket{\psi^+}_{12}=\frac{1}{\sqrt{2}}(\ket{H}_1\ket{V}_2+\ket{V}_1\ket{H}_2)$ is one of the four polarization Bell states. The states in eq. \eqref{xi} span the symmetric subspace of the two-qubit Hilbert space. We are interested in the generation of a set of mutually unbiased (m.u.) bases, which are the basic tool for quantum key distribution \cite{00-bec-qua,84-ben-qua}. For this purpose, we require that in the superposition state \eqref{xi} the three terms of the computational basis $\{\ket{H}_1\ket{H}_2, \ket{V}_1\ket{V}_2, \ket{\psi^+}_{12}\}$ appear with the same probability amplitude. Indeed, our method is suitable to adjust at the same time both the balance among the three contributions and the phases $\phi$ and $\psi$ needed to obtain m.u. bases. Such states are obtained by applying two unitaries to a {\it seed} non-maximally entangled state, \begin{equation} \ket{\chi_{\psi,\phi}}=d_H\ket{H}_1\ket H_2+d_V\ket{V}_1\ket V_2\,. \end{equation} The dependence on the phases $\psi$ and $\phi$ is implicit in $d_H$ and $d_V$, which are chosen to be real numbers: \begin{equation}\label{x+-} d_{H}=|x_+|\,,\quad d_{V}=|x_-|\,, \end{equation} where \begin{equation} x_\pm=\frac{\sqrt2\pm e^{\text i(\phi-\frac\psi2)}}{\sqrt6}\,.
\end{equation} We can write explicitly the transformation which maps the seed state $\ket{\chi_{\psi,\phi}}$ into the desired qutrit state as \begin{equation}\label{xi-chi} \ket{\xi_{\psi,\phi}}=(U\otimes W)\ket{\chi_{\psi,\phi}}\,, \end{equation} up to an irrelevant global phase. The two unitaries $U$ and $W$, applied to photons $1$ and $2$ respectively and expressed in the $\{\ket H,\ket V\}$ basis, are: \begin{align}\label{W} &W= \underbrace{ \begin{pmatrix} 1 & 0\\ 0 & e^{\text i\alpha} \end{pmatrix}}_{P_{\alpha}} \underbrace{\frac1{\sqrt2} \begin{pmatrix} 1 & 1\\ -1 & 1 \end{pmatrix}}_{H'},\quad&&\alpha=\frac\psi2+\pi\\ \label{U} &U=W \begin{pmatrix} 1 & 0\\ 0 & e^{\text i\Gamma} \end{pmatrix}\,,&&\Gamma=\text{arg}\left(\frac{x_-}{x_+}\right)\,. \end{align} The phase shift $\Gamma$ can be introduced together with the generation of the seed state. Indeed, thanks to the explicit expressions of $U$ and $W$, eq. \eqref{xi} can be written as \begin{equation}\label{xi'} \ket{\xi_{\psi,\phi}}=(P_\alpha\otimes P_\alpha)(H'\otimes H')\ket{\chi'_{\psi,\phi}} \end{equation} where \begin{equation}\label{chi'} \ket{\chi'_{\psi,\phi}}=d_H\ket{H}_1\ket H_2+e^{\text i\Gamma}d_V\ket{V}_1\ket V_2 \end{equation} and the unitaries $P_\alpha$ and $H'$ are defined in \eqref{W}. The gate $P_\alpha$ represents a phase shifter that adds a phase difference $\alpha=\frac{\psi}{2}+\pi$ between the states $\ket{V}$ and $\ket{H}$. The gate $H'$ (similar to the Hadamard gate) performs the transformations $\ket{H}\rightarrow\frac{1}{\sqrt{2}}(\ket{H}-\ket{V})$ and $\ket{V}\rightarrow\frac{1}{\sqrt{2}}(\ket{H}+\ket{V})$ \footnote{Note that the transformation $H'$ is related to the usual Hadamard transformation $H$ by a unitary matrix, i.e. $H'=\sigma_zH$, where $\sigma_z$ is the usual Pauli matrix.}. These unitaries are attainable by simple linear optical elements such as wave plates.
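The seed parameters follow directly from the expressions for $x_\pm$ and $\Gamma$ above; a minimal numerical check (illustrative Python, not part of the experiment; the helper \texttt{seed\_params} is ours) reproduces, e.g., the rows of Table \ref{table1} for $\ket{v_2}$ and $\ket{w_2}$, to which the formulas apply directly:

```python
import cmath
import math

def seed_params(psi, phi):
    """d_H = |x_+|, d_V = |x_-|, Gamma = arg(x_-/x_+)."""
    theta = phi - psi / 2
    x_p = (math.sqrt(2) + cmath.exp(1j * theta)) / math.sqrt(6)
    x_m = (math.sqrt(2) - cmath.exp(1j * theta)) / math.sqrt(6)
    return abs(x_p), abs(x_m), cmath.phase(x_m / x_p)

# |v_2>: psi = 2pi/3, phi = -2pi/3
dH, dV, G = seed_params(2 * math.pi / 3, -2 * math.pi / 3)
assert abs(dH - (math.sqrt(2) - 1) / math.sqrt(6)) < 1e-12
assert abs(dV - (math.sqrt(2) + 1) / math.sqrt(6)) < 1e-12
assert abs(G) < 1e-12

# |w_2>: psi = 2pi/3, phi = 0
dH, dV, G = seed_params(2 * math.pi / 3, 0)
assert abs(dH - math.sqrt((3 + math.sqrt(2)) / 6)) < 1e-12
assert abs(dV - math.sqrt((3 - math.sqrt(2)) / 6)) < 1e-12
assert abs(G - math.asin(math.sqrt(6 / 7))) < 1e-12
```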
\begin{table} \begin{ruledtabular} \begin{tabular}{|c|cc|cc|cc|cc|cc|cc|} & $\psi$ && $\phi$ && $\boldsymbol\alpha$ && $\boldsymbol d_H$ && $\boldsymbol d_V$ && $\boldsymbol\Gamma$ &\\ \hline $\ket{v_1}$ & $0$ && $0$ && $0$ &&&&&&&\\ \cline{1-7} $\ket{v_2}$ & $\frac23\pi$ && $-\frac23\pi$ && $-\frac{2}{3}\pi$&& $\frac{\sqrt2-1}{\sqrt6}$ && $\frac{\sqrt2+1}{\sqrt6}$ && 0 & \\ \cline{1-7} $\ket{v_3}$ & $-\frac23\pi$ && $\frac23\pi$ && $\frac{2}{3}\pi$&&&&&&&\\ \hline\hline $\ket{w_1}$ & $-\frac23\pi$ && $-\frac23\pi$ && $\frac{2}{3}\pi$&&&&&&&\\ \cline{1-7} $\ket{w_2}$ & $\frac23\pi$ && $0$ && -$\frac{2}{3}\pi$&& $\sqrt{\frac{3+\sqrt2}{6}}$ && $\sqrt{\frac{3-\sqrt2}{6}}$ && \text{arcsin}$\sqrt{\frac{6}{7}}$&\\ \cline{1-7} $\ket{w_3}$ & $0$ && $\frac23\pi$ && $0$&&&&&&&\\ \hline\hline $\ket{z_1}$ & $\frac23\pi$ && $\frac23\pi$ && $-\frac{2}{3}\pi$&&&&&&&\\ \cline{1-7} $\ket{z_2}$ & $-\frac23\pi$ && $0$ && $\frac{2}{3}\pi$ && $\sqrt{\frac{3+\sqrt2}{6}}$ && $\sqrt{\frac{3-\sqrt2}{6}}$ && -\text{arcsin}$\sqrt{\frac{6}{7}}$&\\ \cline{1-7} $\ket{z_3}$ & $0$ && $-\frac23\pi$ && $0$&&&&&&&\\ \end{tabular} \end{ruledtabular} \caption{Theoretical values of $\alpha$, $d_H,d_V$ and $\Gamma$ for the states of the m.u. bases.} \label{table1} \end{table} As said, we are particularly interested in generating three sets of m.u. bases. The nine vectors corresponding to the three basis sets, all expressed in the form of eq.
\eqref{xi}, are explicitly given in the following: \begin{align} &\text{1)}\quad\begin{cases}\begin{aligned} &\ket{v_1}=\frac1{\sqrt3}\left(\ket{HH}+\ket{VV}+\ket{\psi^+}\right)\\ &\ket{v_{2}}=\frac1{\sqrt3}\left(\ket{HH}+e^{\frac23\pi\text i}\ket{VV}+e^{-\frac23\pi\text i}\ket{\psi^+}\right)\\ &\ket{v_{3}}=\frac1{\sqrt3}\left(\ket{HH}+e^{-\frac23\pi\text i}\ket{VV}+e^{\frac23\pi\text i}\ket{\psi^+}\right) \end{aligned}\end{cases} \\ &\text{2)}\quad \begin{cases}\begin{aligned} &\ket{w_1}=\frac1{\sqrt3}\left(\ket{HH}+e^{-\frac23\pi\text i}\ket{VV}+e^{-\frac23\pi\text i}\ket{\psi^+}\right)\\ &\ket{w_{2}}=\frac1{\sqrt3}\left(\ket{HH}+e^{\frac23\pi\text i}\ket{VV}+\ket{\psi^+}\right)\\ &\ket{w_{3}}=\frac1{\sqrt3}\left(\ket{HH}+\ket{VV}+e^{\frac23\pi\text i}\ket{\psi^+}\right) \end{aligned}\end{cases} \\ &\text{3)}\quad \begin{cases}\begin{aligned} &\ket{z_1}=\frac1{\sqrt3}\left(\ket{HH}+e^{\frac23\pi\text i}\ket{VV}+e^{\frac23\pi\text i}\ket{\psi^+}\right)\\ &\ket{z_{2}}=\frac1{\sqrt3}\left(\ket{HH}+e^{-\frac23\pi\text i}\ket{VV}+\ket{\psi^+}\right)\\ &\ket{z_{3}}=\frac1{\sqrt3}\left(\ket{HH}+\ket{VV}+e^{-\frac23\pi\text i}\ket{\psi^+}\right) \end{aligned}\end{cases} \end{align} Note that in order to obtain a full set of m.u. bases, a fourth one, namely $\{\ket{HH},\ket{VV},\ket{\psi^+}\}$, must be considered \cite{86-woo-qua}. We give in Table \ref{table1} the explicit values of $\alpha$, $d_H,d_V$ and $\Gamma$ for all the states in the three m.u. bases. Detailed calculations are given in Appendix \ref{sec:calculus}. \section{\label{sec:experiment}Experiment} \begin{figure}[t] \centering \includegraphics[scale=0.36]{QT1.eps} \caption{(Color online) Optical setup for generation and analysis of polarization qutrits. a) The entanglement source is used to produce the seed state. 
The relative weights of the $\ket{H}_1 \ket{H}_2$ and $\ket{V}_1 \ket{V}_2$ components are set by controlling the pump beam polarization in the first passage through BBO by the $\lambda_p/2$ half wave plate and in the second passage by the $\lambda_p/4$ quarter wave plate. b) The qutrit is encoded by applying the $H'\otimes H'$ transformation by two HWP plates and by proper phase shifts $P_\alpha\otimes P_\alpha$ performed by QWP plates. c) Finally the state is characterized by polarization quantum state tomography.} \label{fig:QT1} \end{figure} In this Section we explain how to implement the procedure described in Section \ref{sec:theory} and show the obtained experimental results. From eqs. \eqref{xi'} and \eqref{chi'}, it follows that all the states $\ket{\xi_{\psi,\phi}}$, expressed as \eqref{xi}, can be produced in four steps: \begin{itemize} \item[\bf I)] Choose $\phi$ and $\psi$ and generate the corresponding (non-maximally entangled) seed state $\ket{\chi_{\psi,\phi}}$. \item[{\bf II})] Change the relative phase between $\ket{H}_1\ket H_2$ and $\ket{V}_1\ket V_2$ in order to obtain $\ket{\chi'_{\psi,\phi}}$. \item[{\bf III})] Apply the gate $H'$ to each photon. This is performed by a half wave plate (HWP) whose axis is at -22.5$^\circ$ with respect to the horizontal direction. \item[{\bf IV})] Apply the phase shifter $P_{\alpha}$ to each photon. This phase shift is realized by a birefringent medium, e.g. a quarter wave plate (QWP), with the optical axis oriented in the horizontal plane. The corresponding induced phase $\alpha$ is varied by rotating the plate along its vertical axis [cf. Fig. \ref{fig:QT1}]. \end{itemize} In the actual realization we performed step III) before step II).
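The four steps can be verified numerically. The following sketch (plain Python with complex arithmetic, our own illustrative code, not the laboratory software) builds $\ket{\chi'_{\psi,\phi}}$ from eqs. \eqref{x+-}--\eqref{U} and applies $(P_\alpha\otimes P_\alpha)(H'\otimes H')$, recovering eq. \eqref{xi} up to a global phase for all nine angle pairs of Table \ref{table1}:

```python
import cmath
import math

SQ2, SQ3, SQ6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)

def kron2(A, B):
    """Kronecker product of two 2x2 matrices; basis order HH, HV, VH, VV."""
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(4)) for row in M]

def qutrit(psi, phi):
    """Target state of eq. (1): (|HH> + e^{i psi}|VV> + e^{i phi}|psi+>)/sqrt3."""
    a = cmath.exp(1j * phi) / SQ6          # |psi+> spreads over HV and VH
    return [1 / SQ3, a, a, cmath.exp(1j * psi) / SQ3]

def generate(psi, phi):
    """Steps I-IV: seed chi' followed by (P_alpha x P_alpha)(H' x H')."""
    x_p = (SQ2 + cmath.exp(1j * (phi - psi / 2))) / SQ6
    x_m = (SQ2 - cmath.exp(1j * (phi - psi / 2))) / SQ6
    dH, dV, Gamma = abs(x_p), abs(x_m), cmath.phase(x_m / x_p)
    seed = [dH, 0, 0, dV * cmath.exp(1j * Gamma)]        # steps I-II
    Hp = [[1 / SQ2, 1 / SQ2], [-1 / SQ2, 1 / SQ2]]       # H' gate (step III)
    alpha = psi / 2 + math.pi
    Pa = [[1, 0], [0, cmath.exp(1j * alpha)]]            # P_alpha (step IV)
    return matvec(kron2(Pa, Pa), matvec(kron2(Hp, Hp), seed))

def fidelity(u, v):
    return abs(sum(a.conjugate() * b for a, b in zip(u, v))) ** 2

T = 2 * math.pi / 3
ANGLES = [(0, 0), (T, -T), (-T, T),      # v basis
          (-T, -T), (T, 0), (0, T),     # w basis
          (T, T), (-T, 0), (0, -T)]     # z basis
assert all(abs(fidelity(qutrit(p, f), generate(p, f)) - 1) < 1e-12
           for p, f in ANGLES)
```

The same functions also reproduce mutual unbiasedness between the bases, e.g. a squared overlap of $\frac13$ between $\ket{v_1}$ and $\ket{w_3}$.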
In this way the phase $\Gamma$ can be easily set by considering that the $H'\otimes H'$ gate transforms the seed $\ket{\chi'}$ in the following way: \begin{multline}\label{Gamma} H'\otimes H'\ket{\chi'}=\\ \frac{d_{H}+e^{\text i\Gamma}d_{V}}{2}(\ket{HH}+\ket{VV})-\frac{d_{H}-e^{\text i\Gamma}d_{V}}{\sqrt2}\ket{\psi^{+}}\,. \end{multline} For fixed values of $d_H$ and $d_V$, the value of $\Gamma$ determines the relative weight of $\ket{HH}$ (or $\ket{VV}$) and $\ket{\psi^{+}}$. The value of $\Gamma$ is thus chosen in order to make the two weights equal. \subsection{\label{sec:source}Parametric Source} Photon pairs are generated by an SPDC source whose detailed description is given in \cite{03-bar-det,04-bar-gen-PRL,04-cin-par}. It allows the efficient generation of the polarization entangled states $|\Phi _{\theta }\rangle=\frac{1}{\sqrt{2}} (|H\rangle _{1}|H\rangle _{2}+e^{i\theta }|V\rangle _{1}|V\rangle _{2})$ by using a type I, $0.5mm$ thick, $\beta $-$BaB_{2}O_{4}$ (BBO) crystal. In the source, the entanglement arises from the superposition of the degenerate parametric emissions ($\lambda=728nm$) of the crystal, excited in two opposite directions $\vec{k}_{p}$ and $-\vec{k}_{p}$ by a $V$-polarized Argon laser beam ($\lambda_p=364nm$). In the following we will refer to the emission excited in the direction $\vec{k}_p$ as the ``left'' emission (i.e. on the left of the BBO crystal in Fig. \ref{fig:QT1}), while the emission excited in the direction $-\vec{k}_p$ is the ``right'' one. The $H$-polarized photons belonging to the ``left'' emission are transformed $\ket{H}\rightarrow\ket{V}$ by a double passage through a quarter wave plate ($\lambda/4$ in Fig. \ref{fig:QT1}). The phase $\theta$ can be easily set by a micrometric translation of the spherical mirror M. Parametric radiation is coupled to two single mode fibers, achieving a coincidence level of $\sim1000$ sec$^{-1}$ over the 20nm bandwidth of two interference filters (IF, Fig. \ref{fig:QT1}).
By this source we can easily generate the states $\ket{HH}$, $\ket{VV}$ and $\ket{\psi^+}$. The first two states are simply obtained by selecting only the right or left emission, with fidelities $F_{\ket{HH}}=0.991\pm0.010$ and $F_{\ket{VV}}=0.960\pm0.008$. The state $\ket{\psi^+}$ can be generated from the state $\ket{\Phi_0}$ by applying a HWP at $45^\circ$ on one photon, obtaining the fidelity $F_{\ket{\psi^+}}=0.966\pm0.008$. The fidelities of $\ket{HH}$ and $\ket{VV}$ are different mainly because of the non-ideal behavior of the $\lambda/4$ waveplate. Indeed, the operational wavelength of all the waveplates adopted in our experiment is 750nm. As we shall see below, this feature partially affects the overall fidelities of the generated qutrits. Another possible source of imperfection arises from the critical spatial matching between the right and left parametric emissions. This is overcome by the adoption of a thin crystal and single mode fibers. Moreover, with this scheme no temporal or spatial crystal walkoff is present with Type I phase matching. \subsection{\label{sec:seed}Seed state generation (Step I)} The generation of non-maximally entangled states by the above described SPDC source was previously demonstrated in Ref. \cite{05-bar-tow}. The basic idea consists of tuning the polarization of the pump beam so that the nonlinear gain for the SPDC process can be varied. Indeed, if the pump beam is linearly polarized at an angle $\Theta_p$ with respect to the BBO optic axis, the SPDC probability is $p\propto\cos^2\Theta_p$. Therefore, by inserting a QWP intercepting only the pump beam between the BBO and the mirror M ($\lambda_p/4$ in Fig. \ref{fig:QT1}), the right emission probability becomes lower and the seed state \begin{equation} \ket{\chi'}=d_H\ket{HH}+e^{\text i\Gamma}d_V\ket{VV},\quad d_H<d_V\,, \end{equation} is generated. The phase $\Gamma$ is set by finely translating the spherical mirror, as said.
On the other hand, seed states with a higher $\ket{HH}$ component ($d_H>d_V$) can be generated by inserting a further HWP before the BBO ($\lambda_p/2$ in Fig. \ref{fig:QT1}). In this way, by changing the $\vec{k}_p$ pump polarization, we lower the efficiency for the left emission. The $\lambda_p/4$ waveplate is used to rotate back the $-\vec{k}_p$ beam polarization to the vertical direction, thus raising the right emission. Then the states \begin{equation} \ket{\chi'}=d_H\ket{HH}+e^{\text i\Gamma}d_V\ket{VV},\quad d_H>d_V\,, \end{equation} are generated. \begin{figure}[t] \includegraphics[scale=.37]{visibilita.eps} \caption{(Color online) Visibility (V) of non-maximally entangled state $\ket{\chi}$ vs. the $\ket{HH}$ weight $d_H^2$. The black line represents the theoretical curve, $V=2\sqrt{d^2_H(1-d^2_H)}$. Error bars are lower than the dimension of the point symbols. } \label{fig:visibilita} \end{figure} For our experiment, two different seed states are needed (cf. Table \ref{table1}), namely: \begin{equation}\label{2seeds} \begin{aligned} \ket{\chi_{1}}=&\frac{\sqrt{2}-1}{\sqrt6}\ket{HH}+\frac{\sqrt{2}+1}{\sqrt6}\ket{VV}\\ {\simeq}&0.169\ket{HH}+0.986\ket{VV}\\ \ket{\chi_{2}}=&\sqrt{\frac{3+\sqrt{2}}{6}}\ket{HH}+\sqrt{\frac{3-\sqrt{2}}{6}}\ket{VV}\\ {\simeq}&0.858\ket{HH}+0.514\ket{VV} \end{aligned} \end{equation} The first seed state $\ket{\chi_{1}}$ is used for the first basis set $\{\ket{v_{a}}\}$, while the second seed state $\ket{\chi_{2}}$ is used for the remaining two sets, namely $\{\ket{w_{a}}\}$ and $\{\ket{z_{a}}\}$. Note that the intrinsic difficulty of implementing the first state is due to the required imbalance of the two contributions, $d_H^2/d_V^2\approx0.03$, almost comparable with the experimental uncertainties associated with each polarization contribution. We show in Fig.
\ref{fig:visibilita} the visibility $V=\frac{N_{max}-N_{min}}{N_{max}+N_{min}}$ of different non-maximally entangled states as a function of the probability $d_H^2$ of $\ket{HH}$. It is calculated from the coincidences of the two photons measured in the diagonal component $\frac{1}{\sqrt2}(\ket{H}+\ket{V})$, varying the phase $\Gamma$ from $0$ to $\pi$. $N_{max}$ ($N_{min}$) are the coincidence counts corresponding to $\Gamma=0$ ($\Gamma=\pi$). The two blue points refer to the states $\ket{\chi_1}$ and $\ket{\chi_2}$. The points on the left ($d^2_H<0.5$) are closer to the theoretical curve, probably because only the insertion of $\lambda_p/4$ is required for those states. \begin{figure}[t] \includegraphics[scale=.6]{tomo1.eps} \caption{(Color online) Experimental quantum tomographies (real parts) of the seed states $\ket{\chi_1}$ and $\ket{\chi_2}$ expressed in the $\{\ket{HH},\ket{HV},\ket{VH},\ket{VV}\}$ basis. For the two states we obtain the purities $\mathcal P_{\ket{\chi_1}}=0.908\pm0.034$ and $\mathcal P_{\ket{\chi_2}}=0.930\pm0.036$. The imaginary components are negligible. } \label{fig:tomo1} \end{figure} For a complete characterization of the two seed states \eqref{2seeds}, we performed full quantum state tomography, whose resulting diagrams are shown in Fig. \ref{fig:tomo1}. We used the ``Maximum Likelihood Estimation'' method described in \cite{01-jam-mea}, obtaining the fidelity $F_1=0.912\pm0.010$ for $\ket{\chi_1}$ and $F_2=0.946\pm0.016$ for $\ket{\chi_2}$. We also measured the trace of the square of the experimental density matrix, i.e. the purity of the generated states, $\mathcal P_\rho=\text{Tr}[\rho^2]$. The results are given in the caption of Fig. \ref{fig:tomo1}.
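The theoretical curve of Fig. \ref{fig:visibilita} follows from projecting both photons of $\ket{\chi'}$ on the diagonal polarization; a one-line model of the coincidence rate (our own illustrative Python sketch) reproduces it:

```python
import math

def coincidences(dH, G):
    """Coincidence rate with both photons projected on (|H>+|V>)/sqrt2:
    |<DD|chi'>|^2 = |dH + e^{iG} dV|^2 / 4, with dV = sqrt(1 - dH^2)."""
    dV = math.sqrt(1 - dH * dH)
    return (dH * dH + dV * dV + 2 * dH * dV * math.cos(G)) / 4

def visibility(dH):
    n_max, n_min = coincidences(dH, 0.0), coincidences(dH, math.pi)
    return (n_max - n_min) / (n_max + n_min)

# reproduces V = 2*sqrt(dH^2*(1 - dH^2)), the solid curve of the figure
for dH2 in (0.03, 0.2, 0.5, 0.74):
    assert abs(visibility(math.sqrt(dH2))
               - 2 * math.sqrt(dH2 * (1 - dH2))) < 1e-12
```

In particular, the visibility peaks at $V=1$ for the balanced state $d_H^2=\frac12$.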
\subsection{\label{sec:hada}$H'$ gate and $\Gamma$ phase setting (Steps II,III)} \begin{figure} \centering \subfigure{\includegraphics[scale=.35]{qtrit4.eps}} \subfigure{\includegraphics[scale=.35]{qtrit4teo.eps}} \caption{(Color online) Experimental quantum tomography (a) and theoretical density matrices (b) of the states $\ket{v_1}$, $\ket{w_3}$ and $\ket{z_3}$. The upper pictures represent the real (Re) parts of the density matrices, while the lower pictures represent the imaginary (Im) parts. We measured the purities $\mathcal P_{\ket{v_1}}=0.974\pm0.030$, $\mathcal P_{\ket{w_3}}=0.904\pm0.033$, $\mathcal P_{\ket{z_3}}=0.895\pm0.028$. \label{fig:qtrit4} } \end{figure} The following steps of the qutrit generation correspond to applying the $H'$ transformation [Fig. \ref{fig:QT1}] and setting the phase $\Gamma$ for each photon. As described above, the $H'\otimes H'$ transformation is performed by the action of two HWPs oriented at $-22.5^\circ$ with respect to the vertical direction. The phase $\Gamma$ needed for the generation of $\ket{\chi'}$ is set after the insertion of the HWPs that implement the unitary gate $H'\otimes H'$. The correct position is adjusted by micrometric translation of the mirror M [see Fig. \ref{fig:QT1}] and fixed by observing that the count rate for $\ket{H}_1\ket{H}_2$ events is double that of the $\ket{H}_1\ket{V}_2$ contribution. It is evident from Table \ref{table1} that the states $\ket{v_{1}},\ket{w_{3}}$ and $\ket{z_{3}}$ can be generated by applying only the previous operations, i.e. without the need to insert the phase gates $P_{\alpha}\otimes P_{\alpha}$. The corresponding experimental density matrices are shown in Fig. \ref{fig:qtrit4}, with fidelities $F_{\ket{v_1}}=0.949\pm0.010$, $ F_{\ket{w_{3}}}=0.931\pm0.011$ and $F_{\ket{z_{3}}}=0.932\pm0.010$. Here and in the following we use the basis $\{\ket{HH},\ket{VV},\ket{\psi^+},\ket{\psi^-}\}$ to allow a better comparison with \eqref{xi}. 
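For reference, the action of a HWP at $-22.5^\circ$ can be checked numerically under the standard Jones-matrix convention. This is only a sketch under that assumed convention; the exact definition of $H'$ used in Fig. \ref{fig:QT1} may differ by a global phase or sign:

```python
import numpy as np

def hwp(theta):
    # Jones matrix of a half-wave plate with fast axis at angle theta,
    # in the standard convention and up to a global phase.
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return np.array([[c, s], [s, -c]])

# The Hadamard-like H' gate is implemented by a HWP at -22.5 degrees.
U = hwp(np.deg2rad(-22.5))

# A HWP acts as a reflection in polarization space: unitary and self-inverse.
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.allclose(U @ U, np.eye(2))
```

The self-inverse property is why a single waveplate per photon suffices to implement the gate.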
These states are obtained by inserting the two half-wave plates (HWP in Fig. \ref{fig:QT1}) and setting the correct phase $\Gamma$ (see Table \ref{table1}). \subsection{\label{sec:phase}Phase gate (Step IV)} \begin{figure} \centering \subfigure{\includegraphics[scale=.45]{qtrit5.eps}} \subfigure{\includegraphics[scale=.45]{qtrit5teo.eps}} \caption{(Color online) Experimental quantum tomography (a) and theoretical density matrices (b) of the states $\ket{w_1}$ and $\ket{w_2}$. We have the purities $\mathcal P_{\ket{w_1}}=0.969\pm0.030$, $\mathcal P_{\ket{w_2}}=0.918\pm0.024$. } \label{fig:qtrit5} \end{figure} The implementation of the last gate of the protocol, namely the $P_\alpha\otimes P_\alpha$ operation, is realized by inserting, for each photon, a QWP with vertical optical axis. It is mounted on a rotating stage which allows tuning of its effective thickness. In this way different phase shifts between the vertical and horizontal polarization components are achieved. In Fig. \ref{fig:qtrit5} we show the two states $\ket{w_1}$ and $\ket{w_2}$ obtained by implementing the gate. The experimental fidelities are $ F_{\ket{w_{1}}}=0.901\pm0.010$ and $F_{\ket{w_{2}}}=0.939\pm0.009$. {We also generated the two remaining states of the $\ket{z_a}$ basis (see Fig. \ref{fig:qtrit6}). The experimental fidelities are given by $F_{\ket{z_{1}}}=0.918\pm0.009$ and $F_{\ket{z_{2}}}=0.933\pm0.009$. We did not generate the other two states $\ket{v_{2}}$ and $\ket{v_{3}}$ of the fourth basis, but we expect similar results for them. However, it is well known that a qutrit-based quantum key distribution adopting only three mutually unbiased bases is more secure than qubit-based schemes \cite{00-bec-qua}. Furthermore, it allows a higher transmission rate. 
} \begin{figure} \centering \subfigure{\includegraphics[scale=.45]{qtrit6.eps}} \subfigure{\includegraphics[scale=.45]{qtrit6teo.eps}} \caption{(Color online) Experimental quantum tomography (a) and theoretical density matrices (b) of the states $\ket{z_1}$ and $\ket{z_2}$. We have the purities $\mathcal P_{\ket{z_1}}=0.931\pm0.028$, $\mathcal P_{\ket{z_2}}=0.937\pm0.032$. } \label{fig:qtrit6} \end{figure} \section{Conclusions} In this paper we have shown the experimental feasibility of the proposal given in \cite{05-dar-gen} for the realization of polarization qutrit states. The protocol starts from the generation of a two-photon non-maximally entangled state and is based on the application of two unitary transformations to each photon. Each relevant parameter of the qutrit states can be easily tuned by this protocol. The experimental procedure can be described in four steps; we showed the experimental results corresponding to each step, thereby demonstrating the actual implementation of the procedure. This method is very powerful, as demonstrated by the high coincidence rate and the high fidelities of the generated states. Moreover, the simplicity of this scheme could allow an easy experimental implementation of quantum security protocols. \begin{acknowledgments} We thank Massimiliano Sacchi and Mauro D'Ariano for useful discussions. This work was supported by the PRIN 2005 ({\it New perspectives in entanglement and hyper-entanglement generation and manipulation}) of MIUR (Italy). \end{acknowledgments}
\section{Example Probability} \label{app:prob} We additionally evaluate our density estimation methods using $\log p(x)$ as a detection measure. In the case of text, $\log p(x)$ is defined as $\sum_{i=1}^t \log p(x_i \mid x_{<i})$. \begin{figure} \centering \includegraphics[width=\linewidth]{figs/imdb_px.png} \caption{OOD detection performance as measured by AUROC using different measures for background shift in binary sentiment classification, using IMDB as ID data. We can see that using $\log p(x)$ as a measure is highly noisy due to its dependency on sequence lengths.} \label{fig:imdb_px} \end{figure} While \textit{PPL}\xspace~accounts for varying sequence lengths by averaging word likelihoods over the input sequence, $\log p(x)$ does not. Figure \ref{fig:imdb_px} shows that this difference significantly impacts performance. With IMDB as the ID data, using $\log p(x)$ fails for SST-2, achieving close to 100 FAR95 and near 0 AUROC. We suspect this is because IMDB examples are full paragraphs while SST-2 examples are only one to two sentences. $\log p(x)$ is naturally smaller for the longer IMDB examples than for these OOD examples, resulting in complete failure for simple thresholding methods as measured by AUROC. \section{FAR95 Results} \label{app:far95} We additionally evaluate the performance for all experiments using FAR95, which measures the false positive rate at 95\% recall. In the context of OOD detection, this measure gives the misclassification rate of ID data at 95\% recall of OOD examples, hence a lower value indicates better performance. 
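The length sensitivity of $\log p(x)$ relative to \textit{PPL}\xspace~described above can be illustrated with a toy computation; the per-token log-probabilities below are made-up numbers, not model outputs:

```python
import math

def log_px(token_logprobs):
    # log p(x) = sum of per-token log-probabilities: it grows more
    # negative with sequence length regardless of per-token fluency.
    return sum(token_logprobs)

def perplexity(token_logprobs):
    # PPL = exp(-(1/t) * sum_i log p(x_i | x_<i)): length-normalized.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical scores: a long, fluent ID paragraph (IMDB-like) vs a
# short, less fluent OOD sentence (SST-2-like).
long_id = [-2.0] * 200
short_ood = [-4.0] * 20

# log p(x) ranks the *ID* example as less likely (more negative) ...
assert log_px(long_id) < log_px(short_ood)
# ... while PPL correctly scores the OOD example as more surprising.
assert perplexity(short_ood) > perplexity(long_id)
```

Thresholding on $\log p(x)$ therefore conflates length with novelty, which is exactly the failure seen for IMDB vs. SST-2.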
\begin{table} \small \centering \setlength\tabcolsep{3pt} \begin{tabular}{llllr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{FAR95 ($\downarrow$)} \\ & & \textit{PPL}\xspace & \textit{MSP}\xspace & Oracle \\ \midrule News Top-5 & News Rest & 88.5 & \textbf{75.7} & 80.4 \\ DBPedia Top-4 & DBPedia Rest & \textbf{78.3} & 86.3 & 1.3 \\ \bottomrule \end{tabular} \caption{FAR95 scores obtained using \textit{PPL}\xspace, \textit{MSP}\xspace~and Oracle for semantic shifts, with lower score (among \textit{PPL}\xspace/\textit{MSP}\xspace) in \textbf{bold}.} \label{tab:semantic-shift-far95} \end{table} \begin{table} \centering \small \begin{tabular}{ll>{\bfseries}rrr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{FAR95 ($\downarrow$)} \\ & & \textnormal{\textit{PPL}\xspace} & \textit{MSP}\xspace & Oracle \\ \midrule \multirow{2}{*}{SST-2} & IMDB & 8.6 & 76.5 & 0.0 \\ & Yelp & 5.2 & 83.0 & 0.0 \\ \midrule \multirow{2}{*}{IMDB} & SST-2 & 17.0 & 47.7 & 0.2 \\ & Yelp & 70.2 & 82.6 & 0.0 \\ \midrule \multirow{2}{*}{Yelp} & SST-2 & 3.1 & 45.4 & 1.1 \\ & IMDB & 36.2 & 90.4 & 0.0 \\ \midrule \multirow{2}{*}{SNLI} & RTE & 19.1 & 61.4 & 0.7 \\ & MNLI & 14.7 & 62.5 & 0.3 \\ \midrule \multirow{2}{*}{RTE} & SNLI & 62.5 & 95.3 & 0.0 \\ & MNLI & 64.3 & 93.9 & 10.3 \\ \midrule \multirow{2}{*}{MNLI} & SNLI & 70.9 & 84.6 & 1.2 \\ & RTE & \textnormal{93.2} & \textbf{69.8} & 6.2 \\ \bottomrule \end{tabular} \caption{FAR95 scores obtained using \textit{PPL}\xspace, \textit{MSP}\xspace~and Oracle for background shift caused by shift in domain. 
For each pair, lower score obtained (by \textit{PPL}\xspace~or \textit{MSP}\xspace) is in \textbf{bold}.} \label{tab:domain-shift-far95} \end{table} \begin{table} \centering \small \resizebox{\columnwidth}{!}{% \begin{tabular}{lr>{\bfseries}rrrr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{FAR95 ($\downarrow$)} \\ & & \textnormal{\textit{PPL}\xspace} & \textit{MSP}\xspace & Oracle \\ \midrule \multirow{4}{*}{Fiction} & Government & 57.4 & 95.0 & 9.7 \\ & Slate & 66.0 & 92.7 & 37.7 \\ & Telephone & 29.1 & 93.3 & 36.0 \\ & Travel & 58.0 & 93.3 & 10.0 \\ \midrule \multirow{4}{*}{Government} & Fiction & 74.7 & 92.6 & 6.4 \\ & Slate & 70.7 & 92.1 & 13.7 \\ & Telephone & 35.2 & 95.5 & 6.2 \\ & Travel & 52.8 & 92.4 & 6.2 \\ \midrule \multirow{4}{*}{Slate} & Fiction & 90.6 & 96.2 & 32.2 \\ & Government & 90.0 & 96.1 & 12.6 \\ & Telephone & 57.4 & 96.0 & 22.7 \\ & Travel & 83.3 & 95.8 & 16.8 \\ \midrule \multirow{4}{*}{Telephone} & Fiction & 54.2 & 93.3 & 32.5 \\ & Government & 50.9 & 93.7 & 8.5 \\ & Slate & 49.6 & 91.1 & 36.3 \\ & Travel & 44.6 & 91.4 & 10.7 \\ \midrule \multirow{4}{*}{Travel} & Fiction & 74.5 & 95.5 & 10.2 \\ & Government & 69.0 & 94.4 & 7.8 \\ & Slate & 75.9 & 93.8 & 16.8 \\ & Telephone & 30.3 & 93.7 & 9.5 \\ \bottomrule \end{tabular} } \caption{FAR95 scores obtained using \textit{PPL}\xspace, \textit{MSP}\xspace~and Oracle for background shift caused by shift in MNLI genre. For each pair, lower score obtained (by \textit{PPL}\xspace~or \textit{MSP}\xspace) is in \textbf{bold}.} \label{tab:MNLI-genre-far95} \end{table} Tables \ref{tab:semantic-shift-far95}, \ref{tab:domain-shift-far95}, \ref{tab:MNLI-genre-far95} and \ref{tab:MNLI-challenge-far95} show the results obtained using FAR95 as a metric for the corresponding ID/OOD pairs used earlier. We observe that FAR95 results are in line with AUROC results except for DBPedia, in which case density estimation methods yield a better result. 
The difference may be a result of the accumulative nature of AUROC, in contrast to FAR95, which is a point measurement. \begin{table} \small \centering \setlength\tabcolsep{3pt} \resizebox{\columnwidth}{!}{% \begin{tabular}{lllr>{\bfseries}rr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multirow{2}{*}{Shift} & \multicolumn{3}{c}{FAR95 ($\downarrow$)} \\ & & & \textit{PPL}\xspace & \textnormal{\textit{MSP}\xspace} & Oracle \\ \midrule IMDB & c-IMDB & Semantic & 93.1 & 82.8 & \textbf{69.3} \\ \midrule \multirow{7}{*}{MNLI} & HANS & Background & \textbf{4.2} & \textnormal{73.1} & 0.0 \\ & Negation & Background & 94.9 & 93.5 & 0.1 \\ & Len. Mismatch & Background & 98.3 & 95.0 & 0.1 \\ & Spell. Error & Background & 96.9 & 92.4 & 3.0 \\ & Word Overlap & Background & 96.0 & 94.4 & 1.1 \\ & Antonym & Semantic & 100.0 & 90.8 & 6.3 \\ & Num. Reas. & Semantic & 99.5 & 77.6 & 0.7 \\ \bottomrule \end{tabular} } \caption{FAR95 scores obtained using \textit{PPL}\xspace, \textit{MSP}\xspace~and Oracle for challenge data. The primary type of shift observed is indicated in the `Shift' column. The lower score (among \textit{MSP}\xspace/\textit{PPL}\xspace) for each pair is in \textbf{bold}.} \label{tab:MNLI-challenge-far95} \end{table} \section{Background shift in MNLI Genres} MNLI is a crowd-sourced collection of sentence pairs for textual entailment sourced from 10 genres including Fiction, Government, Slate, Telephone, and Travel. We use examples from these five MNLI genres and separately consider each genre as ID and OOD, using the validation splits for evaluation. Table \ref{tab:MNLI-genre} shows the results for MNLI genres. The discriminative model generalizes well to other genres, and we find that the OOD detection performance of the calibration method is close to random (50) because a well-calibrated model also has higher confidence on its correct OOD predictions. 
\begin{table} \centering \small \resizebox{\columnwidth}{!}{% \begin{tabular}{lr>{\bfseries}rrrrr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{AUROC} & \multicolumn{2}{c}{Accuracy} \\ & & \textnormal{\textit{PPL}\xspace} & \textit{MSP}\xspace & Oracle & OOD & ID \\ \midrule \multirow{4}{*}{Fiction} & Govt. & 83.3 & 48.5 & 98.4 & 87.0 & \multirow{4}{*}{86.1} \\ & Slate & 81.6 & 54.1 & 92.7 & 82.2 \\ & Tel. & 92.3 & 51.0 & 94.6 & 84.0 \\ & Travel & 82.2 & 49.9 & 98.3 & 84.3 \\ \midrule \multirow{4}{*}{Govt.} & Fiction & 75.2 & 57.4 & 98.9 & 82.8 & \multirow{4}{*}{88.4} \\ & Slate & 77.1 & 58.3 & 97.7 & 82.0 \\ & Tel. & 89.9 & 57.6 & 98.5 & 82.8 \\ & Travel & 82.6 & 57.1 & 99.4 & 84.1 \\ \midrule \multirow{4}{*}{Slate} & Fiction & 60.6 & 48.2 & 94.2 & 84.3 & \multirow{4}{*}{82.5} \\ & Govt. & 61.3 & 45.3 & 97.7 & 87.8 \\ & Tel. & 83.0 & 49.8 & 95.3 & 84.1 \\ & Travel & 63.7 & 46.8 & 97.6 & 84.6\\ \midrule \multirow{4}{*}{Tel.} & Fiction & 85.7 & 55.9 & 95.2 & 82.5 & \multirow{4}{*}{85.7} \\ & Govt. & 86.0 & 52.5 & 98.3 & 85.9 \\ & Slate & 86.8 & 59.2 & 94.2 & 80.6 \\ & Travel & 87.8 & 56.8 & 98.6 & 82.5 \\ \midrule \multirow{4}{*}{Travel} & Fiction & 76.4 & 54.8 & 98.0 & 81.3 & \multirow{4}{*}{86.7} \\ & Govt. & 78.8 & 49.0 & 98.7 & 87.4 \\ & Slate & 77.2 & 55.8 & 96.3 & 80.8 \\ & Tel. & 92.7 & 56.0 & 98.1 & 82.2 \\ \bottomrule \end{tabular} } \caption{Performance on background shifts caused by shift in MNLI genre. For each pair, higher score obtained (by \textit{PPL}\xspace~or \textit{MSP}\xspace) is in \textbf{bold}. We can see that the density estimation method using \textit{PPL}\xspace~significantly outperforms the calibration method.} \label{tab:MNLI-genre} \end{table} \section{Conclusion} Despite the extensive literature on outlier and OOD detection, previous work in NLP tends to lack consensus on a rigorous definition of OOD examples, instead relying on arbitrary dataset pairs from different tasks. 
In our work, we approach this problem in natural text and simulated data by categorizing OOD examples as either \textit{background} or \textit{semantic} shifts and study the performance of two common OOD detection methods---calibration and density estimation. For both types of data, we find that density estimation methods outperform calibration methods under background shifts while the opposite is true under semantic shifts. However, we find several failure cases from challenge examples that target model shortcomings. \textcolor{black}{As explained in Section \ref{sec:ood-text}, we assume that $\phi_s$ and $\phi_b$ map $x$ to two disjoint sets of components for simplicity. This assumption helps us simplify the framework and compare the two types of detection methods in relation to the two types of shifts.} While this simplified framework explains much of the differences between the two methods, failure cases from challenge examples highlight the room for better frameworks and a more explicit definition of OOD to progress the development of OOD detection methods. Such a definition can inform the creation of benchmarks on OOD detection that reflect realistic distribution shifts. Defining (or at least explicitly stating) the types of OOD examples that predictors are designed to target can also guide future modeling decisions between using calibration and density estimation methods, and help improve detection. Some promising directions include test-time fine-tuning \cite{DBLP:conf/icml/SunWLMEH20} and data augmentation \cite{DBLP:journals/corr/abs-2006-15207}, which can be guided towards a specific type of distribution shift for improved detection performance against it. Finally, the methods we studied work well for one type of shift, which motivates the use of hybrid models \cite{DBLP:conf/eccv/0002LG020, DBLP:journals/corr/abs-2007-09070} that use both calibration and density estimation when both types of shift occur at the same time. 
\section*{Ethical Considerations} As society continues to rely on automated machine learning systems to make important decisions that affect human lives, OOD detection becomes increasingly vital to ensure that these systems can detect natural shifts in domain and semantics. If medical chat-bots cannot recognize that new disease variants or rare co-morbidities are OOD while diagnosing patients, they will likely provide faulty and potentially harmful recommendations\footnote{\url{https://www.nabla.com/blog/gpt-3/}} without contextualizing their uncertainty. We believe that implementing OOD detection, especially for the more challenging but commonly occurring semantic shifts, should be part of any long-lasting production model. In addition, OOD detection can be used to identify and alter model behavior when encountering data related to minority groups. For example, \citet{DBLP:journals/corr/abs-2012-07421} present a modified version of the CivilComments dataset \cite{10.1145/3308560.3317593}, with the task of identifying toxic user comments on online platforms. They consider domain annotations for each comment based on whether the comment mentions each of 8 demographic identities: \textit{male, female, LGBTQ, Christian, Muslim, other religions, Black and White}. They note that a standard BERT-based model trained using ERM performs poorly on the worst group, with a 34.2\% drop in accuracy compared to the average. Such models may lead to unintended consequences like flagging a comment as toxic just because it mentions some demographic identities, or in other words, belongs to some domains. Our work can be useful in altering the inference-time behavior of such models upon detection of such domains, which constitute a larger degree of background shift. Of course, nefarious agents could use the same pipeline to alter model behavior to identify and discriminate against demographics that display such background shifts. 
\section*{Acknowledgements} \textcolor{black}{We thank the anonymous reviewers, Richard Pang, Ethan Perez, Angelica Chen and other members of the Machine Learning for Language Lab at New York University for their thoughtful suggestions on improving the paper. We also want to thank Diksha Meghwal, Vaibhav Gadodia and Ambuj Ojha for their help with an initial version of the project and experimentation setup.} \section{Experiments and Analysis} \label{sec:experiments} We perform head-to-head comparisons of calibration and density estimation methods on 14 ID/OOD pairs categorized as either background shift or semantic shift, as well as 8 pairs from challenge datasets. \subsection{Setup} \paragraph{OOD detectors.} Recall that the \textbf{calibration method}~\textit{MSP}\xspace relies on a classifier trained on the ID data. We fine-tune the RoBERTa \citep{DBLP:journals/corr/abs-1907-11692} model on the ID data and compute its prediction probabilities (see Equation \eqref{eqn:msp}). For the \textbf{density estimation} method \textit{PPL}\xspace, we fine-tune GPT-2 \citep{radford2019language} on the ID data and use perplexity as the OOD score (see Equation \eqref{eqn:ppl}).\footnote{ We also use the sentence probability ($p(x)$) as the score, but find it highly sensitive to sequence lengths (Appendix \ref{app:prob}).} To control for the model size of the two methods, we choose RoBERTa$_{\rm{Base}}$ and GPT-2$_{\rm{Small}}$, which have 110M and 117M parameters, respectively. We also experiment with two larger models, RoBERTa$_{\rm{Large}}$ and GPT-2$_{\rm{Medium}}$, with 355M and 345M parameters, respectively. We evaluate the OOD detectors by AUROC and the False Alarm Rate at 95\% Recall (FAR95), which measures the misclassification rate of ID examples at 95\% OOD recall. Both metrics show similar trends (see Appendix \ref{app:far95} for FAR95 results). 
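For concreteness, FAR95 can be computed from raw OOD scores as follows. This is a minimal sketch under the convention that higher scores indicate OOD; the scores used below are synthetic illustrations:

```python
import numpy as np

def far95(id_scores, ood_scores):
    # FAR95: the fraction of ID examples flagged as OOD at the score
    # threshold that recalls 95% of the OOD examples.
    # Convention: higher score = more OOD-like.
    threshold = np.percentile(ood_scores, 5.0)  # 95% of OOD scores lie above
    return float(np.mean(np.asarray(id_scores) >= threshold))

# Perfectly separated scores give FAR95 = 0 ...
assert far95([0.1, 0.2, 0.3], [0.9, 1.0, 1.1]) == 0.0
# ... while identically distributed ID and OOD scores give ~0.95.
scores = np.arange(100.0)
print(far95(scores, scores))
```

The second case shows why FAR95 near 95 (as a percentage) corresponds to a detector that is no better than chance.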
\paragraph{Training details.} For RoBERTa, we fine-tune the model for 3 epochs on the training split of ID data with a learning rate of 1e-5 and a batch size of 16. For GPT-2, we fine-tune the model for 1 epoch on the training split of ID data for the language modeling task, using a learning rate of 5e-5 and a batch size of 8.\footnote{Our code can be found at \url{https://github.com/uditarora/ood-text-emnlp}.} \paragraph{Oracle detectors.} To get an estimate of the upper bound of OOD detection performance, we consider the situation where we have access to the OOD data and can directly learn an OOD classifier. Specifically, we train a logistic regression model with bag-of-words features using 80\% of the test data and report results on the remaining 20\%. \subsection{Semantic Shift} Recall that the distribution of discriminative features changes in the semantic shift setting, i.e. $p_{\text{train}}(\phi_s(x)) \neq p_{\text{test}}(\phi_s(x))$ (\refsec{ood-text}). We create semantic shift pairs by including test examples from classes unseen during training. Thus, semantic features useful for classifying the training data are not representative in the test set. We use the News Category \citep{news_category_dataset} and DBPedia Ontology Classification \citep{NIPS2015_250cf8b5} multiclass classification datasets to create two ID/OOD pairs. The News Category dataset consists of HuffPost news data. We use the examples from the five most frequent classes as ID (News Top-5) and the data from the remaining 36 classes as OOD (News Rest). The DBPedia Ontology Classification dataset consists of data from Wikipedia extracted from 14 non-overlapping classes of DBPedia 2014 \citep{lehmann2015dbpedia}. We use examples from the first four classes by class number as ID (DBPedia Top-4) and the rest as OOD (DBPedia Rest). 
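The oracle detector described above can be sketched end-to-end. This is an illustration only: it uses a from-scratch bag-of-words featurizer and gradient-descent logistic regression rather than any particular library, and toy texts stand in for the real 80/20 split of test data:

```python
import numpy as np

def bow_features(texts, vocab=None):
    # Simple bag-of-words featurizer over whitespace tokens.
    if vocab is None:
        vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for row, text in enumerate(texts):
        for w in text.lower().split():
            if w in index:
                X[row, index[w]] += 1.0
    return X, vocab

def fit_logreg(X, y, lr=0.5, epochs=2000):
    # Plain gradient-descent logistic regression, labels y in {0, 1}.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy stand-in for the 80% "training" slice of mixed test data:
# label 1 = OOD, label 0 = ID ("food" reviews play the role of OOD).
train_texts = ["the movie was great", "the movie was bad",
               "the food was great", "the food was bad"]
train_y = np.array([0.0, 0.0, 1.0, 1.0])
X, vocab = bow_features(train_texts)
w, b = fit_logreg(X, train_y)

# Held-out 20%: the score is P(OOD | x).
X_test, _ = bow_features(["the movie was fine", "the food was fine"], vocab)
probs = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
assert probs[0] < 0.5 < probs[1]
```

Because the oracle sees labeled OOD data, its scores bound what any unsupervised detector could achieve on the same pair.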
\paragraph{Results.} \begin{table} \small \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{llllr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{AUROC} \\ & & \textit{PPL}\xspace & \textit{MSP}\xspace & Oracle \\ \midrule News Top-5 & News Rest & 60.2 & \textbf{78.9} & 72.0 \\ DBPedia Top-4 & DBPedia Rest & 75.4 & \textbf{88.8} & 99.6 \\ \bottomrule \end{tabular} } \caption{Performance on semantic shifts, with higher score (among \textit{PPL}\xspace/\textit{MSP}\xspace) in \textbf{bold}. We can see that the calibration method using \textit{MSP}\xspace~significantly outperforms the density estimation methods.} \label{tab:semantic-shift} \end{table} Table \ref{tab:semantic-shift} shows the results for our semantic shift pairs. The calibration method consistently outperforms the density estimation method, indicating that calibration methods are better suited for scenarios with large semantic shifts, which is in line with our simulation results (\refsec{toy}). \subsection{Background Shift} Recall that background features (e.g. formality) do not depend on the label. Therefore, we consider domain shift in sentiment classification and natural language inference (NLI) datasets. For our analysis, we use the SST-2 \cite{socher2013SST2}, IMDB \citep{maas2011IMDB}, and Yelp Polarity \citep{NIPS2015_250cf8b5} binary sentiment classification datasets. The SST-2 and IMDB datasets consist of movie reviews with different lengths. Meanwhile, the Yelp Polarity dataset contains reviews for different businesses, representing a domain shift from SST-2 and IMDB. 
Each of these datasets is used as ID/OOD, using the validation split of SST-2 and the test splits of IMDB and Yelp Polarity for evaluation. We also use the SNLI \cite{DBLP:conf/emnlp/BowmanAPM15}, MNLI \cite{N18-1101} and RTE (from GLUE, \citeauthor{wang18GLUE}, \citeyear{wang2018glue}) datasets. SNLI and MNLI consist of NLI examples sourced from different genres. RTE comprises examples sourced from a different domain. While there is some change in semantic information, since the task has two labels (\textit{entailment} and \textit{non-entailment}) as opposed to three (\textit{entailment}, \textit{neutral} and \textit{contradiction}) in SNLI and MNLI,\footnote{Both neutral and contradiction are considered as non-entailment when evaluating accuracy with RTE vs SNLI/MNLI or vice-versa.} domain/background shift is more prominent since the semantic features for the NLI task are similar. Each of these datasets is used as either ID or OOD, and we use the validation set of the OOD data for evaluation. 
\paragraph{Results.} \begin{table} \centering \small \resizebox{\columnwidth}{!}{% \begin{tabular}{ll>{\bfseries}rrrrr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{3}{c}{AUROC} & \multicolumn{2}{c}{Accuracy} \\ & & \textnormal{\textit{PPL}\xspace} & \textit{MSP}\xspace & Oracle & OOD ($\Delta$) & ID \\ \midrule \multirow{2}{*}{SST-2} & IMDB & 97.9 & 66.2 & 100.0 & 92.0 (-1.8) & \multirow{2}{*}{93.8} \\ & Yelp & 98.7 & 57.5 & 99.8 & 94.4 (+0.6) & \\ \midrule \multirow{2}{*}{IMDB} & SST-2 & 96.9 & 82.6 & 100.0 & 89.2 (-6.3) & \multirow{2}{*}{95.5} \\ & Yelp & 77.9 & 67.1 & 100.0 & 95.4 (-0.1) & \\ \midrule \multirow{2}{*}{Yelp} & SST-2 & 98.9 & 85.9 & 99.8 & 88.9 (-9.3) & \multirow{2}{*}{98.2} \\ & IMDB & 86.6 & 61.8 & 100.0 & 93.2 (-5.0) & \\ \midrule \multirow{2}{*}{SNLI} & RTE & 94.6 & 78.7 & 99.8 & 67.5 (-22.6) & \multirow{2}{*}{90.1} \\ & MNLI & 96.7 & 75.6 & 99.7 & 77.9 (-12.2) & \\ \midrule \multirow{2}{*}{RTE} & SNLI & 81.2 & 45.1 & 99.7 & 82.0 (+6.9) & \multirow{2}{*}{75.1} \\ & MNLI & 81.4 & 55.5 & 97.0 & 77.3 (+2.2) & \\ \midrule \multirow{2}{*}{MNLI} & SNLI & 75.7 & 56.1 & 99.7 & 80.4 (-4.4) & \multirow{2}{*}{84.8} \\ & RTE & \textnormal{68.0} & \textbf{76.5} & 96.7 & 76.5 (-8.3) & \\ \bottomrule \end{tabular} } \caption{Performance on background shifts caused by shift in domain. For each pair, higher score obtained (by \textit{PPL}\xspace~or \textit{MSP}\xspace) is in \textbf{bold}. The density estimation method using \textit{PPL}\xspace~outperforms the calibration method.} \label{tab:domain-shift} \end{table} Table \ref{tab:domain-shift} shows the results for binary sentiment classification and NLI domain shifts. The density estimation method consistently outperforms the calibration method (for all pairs except MNLI vs RTE), indicating that \textit{PPL}\xspace~ is more sensitive to changes in background features. 
Further, in cases where the discriminative model generalizes well (as evident from the small difference in ID and OOD accuracy numbers), we find that the calibration method performance is close to random (50) because a well-calibrated model also has higher confidence on its correct OOD predictions. We note that the discriminative models tend to generalize well here, hence it might be better to focus on domain adaptation instead of OOD detection when the shift is predominantly a background shift. We discuss this further in Section \ref{sec:related}. \subsection{Analysis} \paragraph{Controlled distribution shifts.} We use two controlled distribution shift experiments on real text data to further study the framework of semantic and background shifts. For background shift, we append different amounts of text from Wikitext \citep{DBLP:conf/iclr/MerityX0S17} and Civil Comments \citep{borkan2019CivilComments} to SST-2 examples to create synthetic ID and OOD examples, respectively. We append the unrelated texts with lengths $\in \{25, 50, 100, 150, 200\}$ words. For semantic shift, we use the News Category dataset and move classes from ID to OOD. We start with the top 40 ID classes by frequency and move classes in increments of 10. The ID coverage of semantic information decreases as more classes move to the OOD subset, resulting in a larger semantic shift. \paragraph{Results.} \begin{figure}[h] \centering \begin{subfigure}[]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{figs/controlled_background.pdf} \label{subfig:synthetic-background} \end{subfigure} \begin{subfigure}[]{0.5\textwidth} \includegraphics[width=\textwidth]{figs/controlled_semantic.pdf} \label{subfig:synthetic-semantic} \end{subfigure} \caption{AUROC of \textit{PPL}\xspace~(orange) and \textit{MSP}\xspace~(blue) for controlled background and semantic shift experiments. 
The density estimation method performance improves as we increase the amount of background shift by appending longer texts, and the calibration method performance increases as we increase the amount of semantic shift by moving more classes to OOD.} \label{fig:synthetic-experiments} \end{figure} Figure \ref{fig:synthetic-experiments} shows the AUROC score obtained from both methods for our controlled distribution shift experiments. We see that the density estimation method is more sensitive to the amount of synthetic background text than calibration methods, and that the calibration method is more sensitive to the number of ID/OOD classes. This is in line with our intuition about the shifts and the results we obtain from simulated data (\refsec{toy}). \begin{table} \centering \small \setlength\tabcolsep{3pt} \resizebox{\columnwidth}{!}{% \begin{tabular}{llrrrrr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multicolumn{2}{c}{Base} & \multicolumn{2}{c}{Large} & \\ & & \textit{PPL}\xspace & \textit{MSP}\xspace & \textit{PPL}\xspace & \textit{MSP}\xspace & Oracle \\ \midrule IMDB & Yelp & \textbf{77.9} & 67.1 & 75.5 & 74.5 & 100.0 \\ News Top-5 & News Rest & 60.2 & 78.9 & 61.7 & \textbf{79.1} & 72.0 \\ \bottomrule \end{tabular} } \caption{Performance of Base and Large models for a background shift pair and semantic shift pair each, with higher score in \textbf{bold}. The larger discriminative model helps close the performance gap between the calibration method and density estimation method for background shift.} \label{tab:large-performance} \end{table} \paragraph{Larger models.} Table \ref{tab:large-performance} shows the results using larger models for OOD detection. We observe that the larger discriminative model achieves a much higher score for the background shift pair, closing the gap with the language model performance. We speculate that the larger model is able to learn some of the background features in its representation. 
The performance for the semantic shift pair is largely unchanged when using the larger models. \subsection{Challenge Data} Challenge datasets are designed to target either superficial heuristics adopted by a model (e.g. premise-hypothesis overlap) or model deficiencies (e.g. numerical reasoning in NLI), which creates significant challenges for deployed models \cite{checklist:acl20}. It is therefore desirable to abstain on detected OOD examples. We consider the following challenge datasets. \paragraph{Human-generated challenge data.} \citet{kaushik2020cIMDB} crowdsourced a set of counterfactually-augmented IMDB examples (c-IMDB) by instructing annotators to minimally edit examples to yield counterfactual labels. \textcolor{black}{This changes the distribution of semantic features with high correlation to labels such that $p_\text{train}(\phi_s(x)) \neq p_\text{test}(\phi_s(x))$, creating a semantic shift. We consider IMDB as ID and c-IMDB as OOD, combining the training, validation, and test splits of c-IMDB for evaluation.} \paragraph{Rule-based challenge data.} HANS \citep{mccoy-etal-2019-right} consists of template-based examples that have high premise-hypothesis overlap but are non-entailment, which mainly results in background shift due to the specific templates/syntax. Similarly, the Stress Test dataset \citep{naik-etal-2018-stress} is a set of automatically generated examples designed to evaluate common errors from NLI models. We categorize the type of distribution shifts from these test categories with respect to MNLI (ID) depending on whether they append ``background'' phrases to the ID examples or replace discriminative phrases (Table \ref{tab:MNLI-challenge}). 
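The rule-based Stress Test edits amount to simple string operations on each premise-hypothesis pair. A sketch using the tautological phrases from the construction heuristics (the function names are our own shorthand, not the dataset's):

```python
# Stress Test style background-shift perturbations on NLI pairs.
def negation(premise, hypothesis):
    # "Negation": append a tautological negation to the hypothesis.
    return premise, hypothesis + " and false is not true"

def word_overlap(premise, hypothesis):
    # "Word Overlap": append a tautology to each hypothesis.
    return premise, hypothesis + " and true is true"

def length_mismatch(premise, hypothesis):
    # "Length Mismatch": pad the premise with the tautology five times.
    return premise + " and true is true" * 5, hypothesis
```

Because these edits are applied at the population level and never touch the entailment-bearing content, they change only background features, which is why they are categorized as background shift.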
\textcolor{black}{Antonym (changing premise to obtain an antonymous hypothesis resulting in contradiction despite high overlap) and Numerical Reasoning (different semantic information than MNLI training set) constitute semantic shifts, as the set of semantic features now focuses on specific types of entailment reasoning (e.g. antonymy and numerical representation). Negation (appending ``and false is not true'' to hypothesis), Spelling Errors (randomly introducing spelling errors in one premise word), Word Overlap (appending ``and true is true'' to each hypothesis), and Length Mismatch (appending a repetitive phrase ``and true is true'' five times to the premise) constitute background shifts because they introduce population-level changes that are unrelated to the entailment conditions of each example.} \textcolor{black}{We consider the matched Negation, Spelling Errors, Word Overlap and Length Mismatch examples from the Stress Test as background shifts, and the Numerical Reasoning and Antonym examples as semantic shifts. We consider MNLI as ID for these challenge examples and use the validation split of HANS and MNLI for evaluation.} \begin{table} \small \centering \setlength\tabcolsep{3pt} \resizebox{\columnwidth}{!}{% \begin{tabular}{lllr>{\bfseries}rr} \toprule \multirow{2}{*}{ID} & \multirow{2}{*}{OOD} & \multirow{2}{*}{Shift} & \multicolumn{3}{c}{AUROC} \\ & & & \textit{PPL}\xspace & \textnormal{\textit{MSP}\xspace} & Oracle \\ \midrule IMDB & c-IMDB & Semantic & 53.5 & \textbf{63.7} & 77.5 \\ \midrule \multirow{7}{*}{MNLI} & HANS & Background & \textbf{98.3} & \textnormal{55.0} & 100.0 \\ & Negation & Background & 44.5 & 60.5 & 99.9 \\ & Len. Mismatch & Background & 19.6 & 51.6 & 100.0 \\ & Spell. Error & Background & 43.9 & 57.7 & 98.4 \\ & Word Overlap & Background & 42.4 & 61.7 & 99.8 \\ & Antonym & Semantic & 4.5 & 55.3 & 97.3 \\ & Num. Reason.
& Semantic & 27.5 & 75.8 & 99.7 \\ \bottomrule \end{tabular} } \caption{AUROC scores obtained using \textit{PPL}\xspace, \textit{MSP}\xspace~and Oracle for challenge data. The primary type of shift observed is indicated in the `Shift' column. Higher performance (among \textit{MSP}\xspace/\textit{PPL}\xspace) for each pair is in \textbf{bold}. We can see that both methods struggle with most types of challenge data.} \label{tab:MNLI-challenge} \end{table} \paragraph{Failure case 1: spurious semantic features.} Challenge data is often constructed to target \emph{spurious features} (e.g. premise-hypothesis overlap for NLI) that are useful on the training set but do not correlate with the label in general, e.g. on the test set. Therefore, a discriminative model would be \emph{over-confident} on the OOD examples because the spurious semantic features that were discriminative during training, while still prominent, are no longer predictive of the label. As a result, in Table \ref{tab:MNLI-challenge}, \textit{MSP}\xspace struggles with most challenge data, achieving an AUROC score close to random (50). On the other hand, the density estimation method achieves almost perfect performance on HANS. \paragraph{Failure case 2: small shifts.} While density estimation methods perform better in background shift settings, our simulation results show that they still struggle to detect small shifts when the ID and OOD distributions largely overlap. Table \ref{tab:MNLI-challenge} shows similar findings for Negation and Word Overlap Stress Test categories that append short phrases (e.g. ``and true is true'') to each ID hypothesis. \paragraph{Failure case 3: repetition.} For Antonym, Numerical Reasoning, and Length Mismatch, \textit{PPL}\xspace performance is \emph{significantly worse than random}, indicating that our language model assigns higher likelihoods to OOD than ID examples. These challenge examples contain highly repetitive phrases (e.g. 
appending ``\textit{and true is true}'' five times in Length Mismatch, or high overlap between premise and hypothesis in Numerical Reasoning and Antonym), which is known to yield high likelihood under recursive language models \cite{holtzman2019curious}. Thus repetition may be used as an attack on language model-based OOD detectors. Overall, the performance of both methods drops significantly on the challenge datasets. Among these, human-generated counterfactual data is the most difficult to detect, and rule-based challenge data can contain unnatural patterns that cause unexpected behavior. \subsection{Discussion} The performance of calibration and density estimation methods on OOD examples categorized along the lines of semantic and background shift provides us with insights that can be useful in improving OOD detection. This framework can be used to build better evaluation benchmarks that focus on different challenges in OOD detection. A choice between the two methods can also be made based on the anticipated distribution shift at test time, i.e., using calibration methods when detecting semantic shift is more important, and using density estimation methods to detect background shifts. However, we observe failure cases from challenge examples, with density estimation methods failing to detect OOD examples with repetition and small shifts, and calibration methods failing to detect most challenge examples. This indicates that these challenge examples constitute a type of OOD that targets the weaknesses of both approaches. This highlights the need for a more explicit definition of OOD to advance the development of OOD detection methods and create benchmarks that reflect realistic distribution shifts. \section{Introduction} Current NLP models work well when the training and test distributions are the same (e.g.\ from the same benchmark dataset).
However, it is common to encounter out-of-distribution (OOD) examples that diverge from the training data once the model is deployed to real settings. When training and test distributions differ, current models tend to produce unreliable or even catastrophic predictions that hurt user trust \citep{checklist:acl20}. Therefore, it is important to identify OOD inputs so that we can modify models' inference-time behavior by abstaining, asking for human feedback, or gathering additional information \cite{DBLP:journals/corr/AmodeiOSCSM16}. \begin{figure} \includegraphics[width=\columnwidth]{figs/toy.pdf} \caption{Illustration of semantic shift and background shift in $\BR^2$. Each point consists of semantic features ($x$-axis) and background features ($y$-axis). OOD examples (red points) can shift in either direction. The background color indicates regions of ID (light) and OOD (dark) given by the density estimation method (left) and the calibration method (right). The calibration method fails to detect OOD examples due to background shift.} \label{fig:OOD_example} \end{figure} Current work in NLP either focuses on specific tasks like intent classification in task-oriented dialogue \cite{ZhengCH20}, or arbitrary in-distribution (ID) and OOD dataset pairs \cite{hendrycks2020pretrained, DBLP:conf/iclr/HendrycksMD19, DBLP:journals/corr/abs-2104-08812}, e.g.\ taking a sentiment classification dataset as ID and a natural language inference dataset as OOD. However, getting inputs intended for a different task is rare in realistic settings as users typically know the intended task. In practice, an example is considered OOD due to various reasons, e.g.\ being rare \cite{sagawa2020distributionally}, out-of-domain \cite{daume07easyadapt}, or adversarial \cite{carlini2017adversarial}. 
This broad range of distribution shifts makes it unreasonable to expect a detection algorithm to work well for arbitrary OOD examples without assumptions on the test distribution \cite{DBLP:conf/aaai/AhmedC20}. In this paper, we categorize OOD examples by common types of distribution shifts in NLP problems inspired by \citet{NEURIPS2019_1e795968} and \citet{DBLP:conf/cvpr/HsuSJK20}. Specifically, we assume an input (e.g. a movie review) can be represented as background features (e.g. genre) that are invariant across different labels, and semantic features (e.g. sentiment words) that are discriminative for the prediction task. Correspondingly, at test time we consider two types of OOD examples characterized by a major shift in the distribution of background and semantic features, respectively. While the two types of shifts often happen simultaneously, we note that there are realistic settings where distribution shift is dominated by one or the other. For example, background shift dominates when the domain or the style of the text changes \cite{pavlick2017style}, e.g. from news to tweets, and semantic shift dominates when unseen classes occur at test time, as in open-set classification \cite{scheirer2013toward}.\footnote{ We exclude \emph{task} shift where the OOD examples are from a different task, e.g. textual entailment inputs for a text classification model, because it is less likely to happen in realistic settings where users are often aware of the intended use of the model. } We use this categorization to evaluate two major approaches to OOD detection, namely calibration methods that use the model's prediction confidence \citep{hendrycks2016baseline, DBLP:conf/iclr/LiangLS18} and density estimation methods that fit a distribution of the training inputs \citep{DBLP:conf/iclr/NalisnickMTGL19, DBLP:journals/corr/abs-2007-05566, DBLP:conf/nips/KirichenkoIW20}. 
We show that the two approaches make implicit assumptions on the type of distribution shift, and result in behavioral differences under each type of shift. By studying ID/OOD pairs constructed from both simulations and real datasets, we find that the density estimation method better accounts for shifts in background features, consistently outperforming the calibration method on \textit{background} shift pairs. We further see the opposite in \textit{semantic} shift pairs, with the calibration method consistently yielding higher performance. In addition, we analyze the detection performance on challenge datasets \cite{mccoy2019hans,naik2018stress} through the lens of background/semantic shift. We find that these challenge datasets provide interesting failure cases for both methods. Calibration methods completely fail when the model is over-confident due to spurious semantic features. While density estimation methods are slightly more robust, language models are easily fooled by repetitions that significantly increase the probability of a piece of text. Together, our findings suggest that better definitions of OOD and corresponding evaluation datasets are required for both model development and fair comparison of OOD detection methods. \section{Categorization of OOD Examples} \label{sec:ood-text} \subsection{Problem Statement} Consider classification tasks where each example consists of an input $x\in\sX$ and its label $y\in\sY$. In the task of OOD detection, we are given a training dataset $\mathcal{D}_{\text{train}}$ of $(x,y)$ pairs sampled from the training data distribution $p(x,y)$. At inference time, given an input $x'\in\sX$ the goal of OOD detection is to identify whether $x^\prime$ is a sample drawn from $p(x,y)$. 
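The detection problem above is typically reduced to thresholding a scalar score $s(x)$ (made precise in the next section), and evaluation throughout the paper uses AUROC. A minimal stdlib sketch of both ingredients, with hypothetical score values and threshold:

```python
# Minimal sketch of score-and-threshold OOD detection. The score function and
# threshold here are hypothetical; concrete scores are defined in the paper.

def detect_ood(score, gamma):
    """Flag an input as OOD when its score falls below the threshold gamma."""
    return score < gamma

def auroc(id_scores, ood_scores):
    """Threshold-free evaluation: the probability that a random ID example
    scores higher than a random OOD example (ties count as 1/2)."""
    pairs = [(i, o) for i in id_scores for o in ood_scores]
    wins = sum(1.0 if i > o else 0.5 if i == o else 0.0 for i, o in pairs)
    return wins / len(pairs)

id_scores = [0.9, 0.8, 0.75, 0.6]
ood_scores = [0.7, 0.5, 0.4, 0.3]
print(auroc(id_scores, ood_scores))  # 0.9375: one OOD score (0.7) beats one ID score (0.6)
```

AUROC is 0.5 for a random detector and 1.0 for a perfect one, which is why the tables above report it rather than accuracy at a fixed threshold.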
\subsection{Types of Distribution Shifts} \label{subsec:types_ood} As in \citep{NEURIPS2019_1e795968}, we assume that any representation of the input $x$, $\phi(x)$, can be decomposed into two independent and disjoint components: the background features $\phi_b(x)\in\BR^m$ and the semantic features $\phi_s(x)\in\BR^n$. Formally, we have \begin{align} \phi(x) &= [\phi_s(x); \phi_b(x)],\\ p(x) &= p(\phi_s(x))\,p(\phi_b(x)). \end{align} Further, we assume that $\phi_b(x)$ is independent of the label while $\phi_s(x)$ is not. Formally, $\forall y\in\sY$, \begin{align} p(\phi_b(x) \mid y) = p(\phi_b(x)), \\ p(\phi_s(x) \mid y) \neq p(\phi_s(x)). \end{align} \textcolor{black}{Note that $p$ refers to the ground truth distribution, as opposed to one learned by a model.} Intuitively, the background features consist of population-level statistics that do not depend on the label, whereas the semantic features have a strong correlation with the label. \textcolor{black}{A similar decomposition is also used in previous work on style transfer \citep{DBLP:conf/aaai/FuTPZY18}, where a sentence is decomposed into the content (semantic) and style (background) representations in the embedding space.} Based on this decomposition, we classify the types of OOD data as either \textit{semantic} or \textit{background} shift based on whether the distribution shift is driven by changes in $\phi_s(x)$ or $\phi_b(x)$, respectively. An example of background shift is a sentiment classification corpus with reviews from IMDB versus GoodReads where phrases indicating positive reviews (e.g. ``best'', ``beautifully'') are roughly the same while the background phrases change significantly (e.g. ``movie'' vs ``book'').
On the other hand, semantic shift happens when we encounter unseen classes at test time, e.g.\ a dialogue system for booking flight tickets receiving a request for meal vouchers \cite{ZhengCH20}, \textcolor{black}{or a question-answering system handling unanswerable questions \cite{rajpurkar2018squadrun}.} We note that the two types of shifts may happen simultaneously in the real world, and our categorization is based on the most prominent type of shift. \section{OOD Detection Methods} To classify an input $x\in\sX$ as ID or OOD, we produce a score $s(x)$ and classify it as OOD if $s(x) < \gamma$, where $\gamma$ is a pre-defined threshold. Most methods differ by how they define $s(x)$. Below we describe two types of methods commonly used for OOD detection. \paragraph{Calibration methods.} These methods use the model's prediction confidence as the score. A well-calibrated model's confidence score reflects the likelihood of the predicted label being correct. Since the performance on OOD data is usually lower than on ID data, lower confidence suggests that the input is more likely to be OOD. The simplest method to obtain the confidence score is to directly use the conditional probability produced by a probabilistic classifier $p_{\text{model}}$, referred to as maximum softmax probability \cite[\textit{MSP}\xspace;][]{hendrycks2016baseline}. Formally, \begin{align} \label{eqn:msp} s_{\textit{MSP}\xspace}(x) = \max_{k\in\sY} p_\text{model}(y=k\mid x). \end{align} While there exist more sophisticated methods that take additional calibration steps \cite{DBLP:conf/iclr/LiangLS18, lee2018simple}, \textit{MSP}\xspace proves to be a strong baseline, especially when $p_\text{model}$ is fine-tuned from pre-trained transformers \cite{hendrycks2020pretrained, desai2020calibration}. \paragraph{Density estimation methods.} These methods use the likelihood of the input given by a density estimator as the score. 
For text or sequence data, a language model $p_\text{LM}$ is typically used to estimate $p(x)$ \cite{NEURIPS2019_1e795968}. To avoid bias due to the length of the sequence (see analysis in Appendix \ref{app:prob}), we use the token perplexity (\textit{PPL}\xspace) as the score. Formally, given a sequence $x=(x_1, \ldots, x_T)$, \begin{align} \label{eqn:ppl} s_{\textit{PPL}\xspace}(x) = \exp\pc{\frac{1}{T} \sum_{t=1}^T \log p_{\text{LM}}(x_t\mid x_{1:t-1}) } \end{align} While there are many works on density estimation methods using flow-based models in computer vision \citep[e.g.][]{DBLP:conf/iclr/NalisnickMTGL19,zhang2020open}, there is limited work experimenting with density estimation methods for OOD detection on text \cite{DBLP:journals/corr/abs-2006-04666}. \paragraph{Implicit assumptions on OOD.} One key question in OOD detection is how the distribution shifts at test time, i.e.\ what characterizes the difference between ID and OOD examples. Without access to OOD data during training, the knowledge must be incorporated into the detector through some inductive bias. Calibration methods rely on $p(y\mid x)$ estimated by a classifier, thus they are more influenced by the semantic features which are correlated with the label. We can see this formally by \begin{align} p(y\mid x) &\propto p(x\mid y)p(y) \\ &= p(\phi_b(x) \mid y) p(\phi_s(x) \mid y) p(y)\\ &\propto p(\phi_s(x) \mid y) p(y) . \end{align} In contrast, density estimation methods are sensitive to all components of the input, including both background and semantic features, even in situations where distribution shifts are predominantly driven by one particular type. In the following sections, we examine how these implicit assumptions impact performance on different ID/OOD pairs.
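The two score definitions above are one-liners; a minimal sketch (function names are ours; note that with the positive sign in Eq.~\ref{eqn:ppl}, the score is the exponential of the mean token log-likelihood, i.e. the inverse of token perplexity, so low values again indicate likely-OOD inputs, consistent with the $s(x) < \gamma$ rule):

```python
import math

def s_msp(class_probs):
    """Calibration score s_MSP: the maximum softmax probability max_k p(y=k|x)."""
    return max(class_probs)

def s_ppl(token_logprobs):
    """Density score s_PPL: exp of the mean token log-likelihood under p_LM,
    i.e. the geometric mean of the per-token probabilities."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

probs = [0.1, 0.7, 0.2]                          # hypothetical p_model(y=k | x)
logps = [math.log(p) for p in (0.5, 0.25, 0.5)]  # hypothetical p_LM(x_t | x_<t)
print(s_msp(probs))              # 0.7
print(round(s_ppl(logps), 4))    # 0.3969 = (0.5 * 0.25 * 0.5) ** (1/3)
```

The length normalization in `s_ppl` is what makes scores comparable between short and long sequences.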
\section{Related Work} \label{sec:related} \paragraph{Distribution shift in the wild.} Most early works on OOD detection make no distinction among the types of distribution shift observed at test time, and create synthetic ID/OOD pairs using different datasets based on the setup in \citet{hendrycks2016baseline}. Recently, there has been increasing interest in studying real-world distribution shifts \cite{DBLP:conf/aaai/AhmedC20,DBLP:conf/cvpr/HsuSJK20,DBLP:journals/corr/abs-1911-11132,koh2020wilds}. On these benchmarks with a diverse set of distribution shifts, no single detection method wins across the board. We explore characterizing distribution shifts along the two axes of semantic shift and background (or non-semantic) shift, shedding light on the performance of current methods. \paragraph{OOD detection in NLP.} Even though OOD detection is crucial in production (e.g. dialogue systems \cite{ryu2018ood}) and high-stakes applications (e.g. healthcare \cite{borjali2020natural}), it received relatively little attention in NLP until recently. Recent works evaluated/improved the calibration of pretrained transformer models \cite{hendrycks2020pretrained,goyal2020evaluating,kong2020calibrated,DBLP:journals/corr/abs-2104-08812}. They show that while pretrained transformers are better calibrated, making them better at detecting OOD data than previous models, there is scope for improvement. Our analysis reveals one limitation of calibration-based detection when faced with a background shift. Other works focus on specific tasks, including prototypical networks for low-resource text classification \cite{DBLP:conf/emnlp/TanYWWPCY19} and data augmentation for intent classification \cite{ZhengCH20}. \paragraph{Inductive bias in OOD detection.} Our work shows that the effectiveness of a method largely depends on whether its assumption on the distribution shift matches the test data.
One straightforward way to incorporate prior knowledge on the type of distribution shift is through augmenting similar OOD data during training, i.e., the so-called outlier exposure method \cite{DBLP:conf/iclr/HendrycksMD19}, which has been shown to be effective on question answering \cite{DBLP:conf/acl/KamathJL20}. Given that the right type of OOD data can be difficult to obtain, another line of work uses a hybrid of calibration and density estimation methods to achieve a balance between capturing semantic features and background features. These models are usually trained with both a discriminative loss and a generative (or self-supervised) loss \cite{DBLP:journals/corr/abs-2007-05566,zhang2020open,nalisnick2019hybrid}. \textcolor{black}{\paragraph{Domain adaptation versus OOD detection.} There are two ways of handling the effect of OOD data: 1) build models that perform well across domains (i.e., background shifts), i.e., domain adaptation \cite{DBLP:conf/coling/ChuW18, DBLP:conf/naacl/KashyapHKZ21} or 2) allow models to detect a shift in data distribution, and potentially abstain from making a prediction. In our setting (2), we want to guard against all types of OOD data without any access to it, unlike domain adaptation which usually relies on access to OOD data. This setting can be more important than (1) for safety-critical applications, such as those in healthcare, because the potential cost of an incorrect prediction is greater, motivating a more conservative approach to handling OOD data by abstaining. 
This could also help improve performance in selective prediction \cite{DBLP:conf/acl/KamathJL20, DBLP:conf/acl/XinTYL20}.} \section{Simulation of Distribution Shifts} \label{sec:toy} As an illustrative example, we construct a toy OOD detection problem using a binary classification setting similar to the one depicted in Figure \ref{fig:OOD_example}. This allows us to remove estimation errors and study optimal calibration and density estimation detectors under controlled semantic and background shifts. \subsection{Data Generation} \label{subsec:toy_generation} We generate the ID examples from a Gaussian Mixture Model (GMM): \begin{align} y &= \begin{cases} 0 & \text{w.p.
} 0.5 \\ 1 & \text{otherwise} \end{cases}, \\ x \mid y=i &\sim \sN(\mu^i, \Sigma). \end{align} The centroids are sets of semantic and background features such that $\mu^1=[\mu_s, \mu_b]$ and $\mu^0=[-\mu_s, \mu_b]$, where $\mu_s\in \BR^n$ and $\mu_b\in \BR^m$. In the 2D case in Figure \ref{fig:OOD_example}, this corresponds to the two Gaussian clusters where the first component is the semantic feature and the second is the background feature. In this case, we know the true calibrated score $p(y\mid x)$ and the true density $p(x)$ given any inputs. Specifically, the optimal classifier is given by the Linear Discriminant Analysis (LDA) predictor. By setting $\Sigma$ to the identity matrix, it corresponds to a linear classifier with weights $[2\mu_s, \mathbf{0}_b]$, where $\mathbf{0}_b \in \BR^m$ is a vector of all $0$s. For simplicity, we set $\mu_s = \mathbf{1}_s$ and $\mu_b = \mathbf{0}_b$, where $\mathbf{1}_s \in \BR^n$ and $\mathbf{0}_b \in \BR^m$ are vectors of all $1$s and all $0$s, respectively. \subsection{Semantic Shift} We generate sets of OOD examples using a semantic shift by varying the overlap of ID and OOD semantic features. Formally, we vary the overlap rate $r$ such that \begin{align} r &= \frac{\vert \mu_s \cap \mu_s^{\text{Shift}} \vert}{\vert \mu_s \vert} \end{align} where $\mu_s, \mu_s^{\text{Shift}} \in \BR^n$ are the set of semantic features for ID and OOD, respectively, $\mu_s \cap \mu_s^{\text{Shift}}$ represents the common features between the two, and $\vert \cdot \vert$ denotes the number of elements. We fix the total dimensions to $n + m = 200$ and set $n = 40$ (semantic features) and $m = 160$ (background features). Further, we vary $r$ by increments of $10\%$. Smaller $r$ indicates stronger semantic shift. For each $r$, we randomly sample ID and OOD semantic features and report the mean over $20$ trials with $95\%$ confidence bands in Figure \ref{fig:gaussian_toy}.
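The overlap construction above can be sketched concretely by reading $\mu_s$ and $\mu_s^{\text{Shift}}$ as sets of feature indices (a simplified, hypothetical encoding; the function name is ours):

```python
import random

def shifted_semantic_features(n_total, n_semantic, r, seed=0):
    """Sample ID semantic feature indices, then an OOD set sharing a fraction
    r of them; the remaining OOD semantic features are drawn from the
    background pool, mirroring the overlap-rate construction."""
    rng = random.Random(seed)
    idx = list(range(n_total))
    rng.shuffle(idx)
    id_sem = set(idx[:n_semantic])
    n_keep = round(r * n_semantic)               # |mu_s intersect mu_s^Shift|
    kept = rng.sample(sorted(id_sem), n_keep)
    pool = sorted(set(idx) - id_sem)             # former background dims
    fresh = rng.sample(pool, n_semantic - n_keep)
    return id_sem, set(kept) | set(fresh)

# n + m = 200 total dims, n = 40 semantic, overlap rate r = 0.5
id_sem, ood_sem = shifted_semantic_features(200, 40, 0.5)
print(len(id_sem & ood_sem) / len(id_sem))   # 0.5, by construction
```

At $r = 1$ the OOD semantic features coincide with the ID ones (no shift); at $r = 0$ they are disjoint (maximal shift).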
\subsection{Background Shift} We generate sets of OOD examples using a background shift by applying a displacement vector $z=[\mathbf{0}_s, z_b]$ to the two means. Formally, \begin{align} \mu^{i, \text{ Shift}} = \mu^i + z \end{align} where $\mathbf{0}_s \in \BR^n$ is a vector of all $0$s. We set $z = \alpha [\mathbf{0}_s, \mathbf{1}_b]$, where $\mathbf{1}_b \in \BR^m$ is a vector of $1$s and $\alpha > 0$ controls the magnitude of the shift. Note that this shift corresponds to a translation of the ID distribution along the direction of $\mu_b$. We set the total dimensions to $n + m = 200$ while varying the split between semantic ($n$) and background ($m$) components by increments of $20$. \subsection{Simulation Results} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figs/gaussian_toy_tworegime.pdf} \caption{Area Under Receiver Operating Characteristics (AUROC) of calibration (blue) and density estimation (orange) methods for OOD detection using our toy binary classification problem. The calibration method outperforms the density estimation method under larger semantic shifts while the opposite is true under larger background shifts.} \label{fig:gaussian_toy} \end{figure} Figure \ref{fig:gaussian_toy} shows the OOD detection performance of our simulated experiment. We use Area Under the Receiver Operating Characteristics (AUROC) as our performance metric. We see that the calibration method generally outperforms density estimation. Further, the performance gap between the two methods decreases as both methods approach near-perfect performance under large semantic shifts with no overlap in semantic features, and approach chance under no semantic shift with completely overlapping semantic features. However, the calibration method is unable to improve performance under background shifts in either regime because the background features do not contribute to $p(y\mid x)$ as the LDA weights are $0$ for these components (Section \ref{subsec:toy_generation}).
We find these results in line with our expectations and use them to drive our intuition when evaluating both types of OOD detection methods for real text data.
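A stripped-down version of this simulation, using only the standard library and simplified stand-ins for the two shifts (full semantic-feature replacement with zero overlap, and a background translation with $\alpha = 2$), reproduces the qualitative pattern with the exact (oracle) detectors:

```python
import math
import random

def make_data(n_sem, n_bg, n, kind, seed=0):
    """Draw n ID and n OOD points from the toy GMM (identity covariance).
    kind='semantic': OOD carries its +-1 class signal in previously-background
    dims (zero overlap with the ID semantic dims; requires n_bg >= n_sem).
    kind='background': OOD translates every background dim by alpha = 2."""
    rng = random.Random(seed)
    d = n_sem + n_bg
    def point(sem_dims, bg_shift):
        mu = 1.0 if rng.random() < 0.5 else -1.0
        return [rng.gauss(mu, 1.0) if i in sem_dims
                else rng.gauss(bg_shift, 1.0) for i in range(d)]
    id_dims = set(range(n_sem))
    id_pts = [point(id_dims, 0.0) for _ in range(n)]
    if kind == "semantic":
        ood_pts = [point(set(range(n_sem, 2 * n_sem)), 0.0) for _ in range(n)]
    else:
        ood_pts = [point(id_dims, 2.0) for _ in range(n)]
    return id_pts, ood_pts

def s_cal(x, n_sem):
    """Optimal ID classifier confidence (MSP of the LDA posterior):
    max_y p(y|x) = sigmoid(2 |sum of the ID semantic coordinates|)."""
    return 1.0 / (1.0 + math.exp(-2.0 * abs(sum(x[:n_sem]))))

def s_den(x, n_sem):
    """Exact ID log-density: a two-component Gaussian mixture in each semantic
    dim, a standard normal in each background dim (constants dropped)."""
    log_phi = lambda t, m: -0.5 * (t - m) ** 2
    return sum(math.log(0.5 * math.exp(log_phi(t, 1.0)) +
                        0.5 * math.exp(log_phi(t, -1.0))) if i < n_sem
               else log_phi(t, 0.0) for i, t in enumerate(x))

def auroc(id_scores, ood_scores):
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in id_scores for b in ood_scores)
    return wins / (len(id_scores) * len(ood_scores))

for kind in ("semantic", "background"):
    id_pts, ood_pts = make_data(n_sem=8, n_bg=8, n=500, kind=kind)
    a_cal = auroc([s_cal(x, 8) for x in id_pts], [s_cal(x, 8) for x in ood_pts])
    a_den = auroc([s_den(x, 8) for x in id_pts], [s_den(x, 8) for x in ood_pts])
    # calibration wins under semantic shift; density estimation wins under
    # background shift, where calibration stays near chance (AUROC ~ 0.5)
    print(kind, round(a_cal, 3), round(a_den, 3))
```

Because the calibration score depends only on the semantic coordinates, its AUROC under a pure background translation is 0.5 in expectation, which is exactly the flat blue curve in the background-shift regime.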
\section{Introduction} This paper is devoted to a new elementary geometric construction of the universal $1$-singular Gelfand-Tsetlin module. Denote $\mathfrak g_k:=\mathfrak{gl}_k(\mathbb C)$, where $k=1,\ldots, n$, and consider the flag $\mathfrak g_1\subset \mathfrak g_2 \subset\cdots \subset \mathfrak g_{n-1} \subset \mathfrak g_n$ of Lie algebras, where $\mathfrak g_{k-1} \subset \mathfrak g_k$ is the inclusion with respect to the top left corner. This flag gives rise to the following flag of universal enveloping algebras $$ \mathcal U(\mathfrak g_1)\subset \mathcal U(\mathfrak g_2) \subset\cdots \subset \mathcal U(\mathfrak g_{n-1}) \subset \mathcal U(\mathfrak g_n). $$ Denote by $\mathcal Z_k$ the center of $\mathcal U(\mathfrak g_{k})$. Then the subalgebra $\Gamma\subset \mathcal U(\mathfrak g_n)$ generated by $\mathcal Z_k$, where $k = 1, \ldots, n$, is a maximal commutative subalgebra \cite{Ov1}. It is called the {\it Gelfand-Tsetlin subalgebra}. A $\mathcal U(\mathfrak g_n)$-module $M$ is called a {\it Gelfand-Tsetlin module} if the action of $\Gamma$ on $M$ is locally finite. In the classical Gelfand-Tsetlin theory \cite{Gelfand} an explicit action of $\mathfrak g_n$ is constructed with respect to a basis consisting of Gelfand-Tsetlin tableaux. The resulting explicit formulas for the $\mathfrak g_n$-action are called the {\it classical Gelfand-Tsetlin formulas}. It was noticed in \cite{DFO1,DFO2,DFO3,DFO4} that the classical Gelfand-Tsetlin formulas may be used to obtain a family of infinite dimensional Gelfand-Tsetlin modules: the so-called {\it generic regular Gelfand-Tsetlin modules}. Essential progress in the theory of Gelfand-Tsetlin modules was made in \cite{Ov1,Ov2} and later in \cite{FO}. In particular the following important construction was obtained there. Let $V\simeq \mathbb C^{n(n+1)/2}$ be the vector space of all Gelfand-Tsetlin tableaux of fixed order $n$, see the main text for details.
Denote by $\gimel$ a certain abelian group acting freely on $V$ and by $\mathcal M\star \gimel$ the sheaf of meromorphic functions on $V$ with values in $\gimel$. Then there exists a ring structure on $\mathcal R:=H^0(V,\mathcal M \star \gimel)$ such that the classical Gelfand-Tsetlin formulas define a ring homomorphism $\Phi: \mathcal U(\mathfrak g_n) \to \mathcal R$. In the case when $\Im\Phi$ is holomorphic in a neighborhood of the orbit $\gimel(v)$ of a point $v\in V$, we can define a $\mathfrak g_n$-module structure on the vector space with the basis $\{ev_w, \,\, w\in \gimel(v)\}$, where $ev_w$ is the evaluation map at the point $w$. These $\mathfrak g_n$-modules are exactly the generic regular Gelfand-Tsetlin modules. This construction does not work if $\Im\Phi$ is not holomorphic in any neighborhood of $\gimel(v)$. The study of the case when $\Im\Phi$ is not holomorphic in $\gimel(v)$ but has at most one simple pole, or in other words $\Im\Phi$ is $1$-singular, was initiated by V.~Futorny, D.~Grantcharov and E.~Ramirez in \cite{Futorny}. The authors of \cite{Futorny} constructed the universal $1$-singular Gelfand-Tsetlin $\mathfrak{gl}_n(\mathbb C)$-module using additional formal variables that were called {\it derivative tableaux}. For another construction of the universal $1$-singular Gelfand-Tsetlin $\mathfrak{gl}_n(\mathbb C)$-module see \cite{Zad}, which was posted to the arXiv when the present paper was in preparation. In the present paper we define a subring $\mathcal D_v$ of $\mathcal R$, where $v$ is a certain point of a $1$-singular $\gimel$-orbit. To the ring $\mathcal D_v$ we associate the vector space $\mathcal S=\mathcal S(\mathcal D_v)$ with basis $\mathcal B = \mathcal B(\mathcal D_v)$ consisting of some local distributions supported at $\gimel(v)$ such that $\mathcal S$ is a natural $\mathcal D_v$-module.
In particular this implies the following universal property of $\mathcal D_v$: for any homomorphism of rings $\Psi: \mathcal U(\mathfrak h) \to \mathcal D_v$ the vector space $\mathcal S$ is also an $\mathfrak h$-module. Due to this we call the ring $\mathcal D_v$ the {\it universal ring}. Further, we observe that $\Phi (\mathcal U(\mathfrak g)) \subset \mathcal D_v$. Hence our construction gives rise to a $\mathfrak g$-module structure on $\mathcal S$ that is isomorphic to the universal $1$-singular Gelfand-Tsetlin module obtained in \cite{Futorny}. Our observation leads to a new geometric interpretation of the universal $1$-singular Gelfand-Tsetlin module from \cite{Futorny} that allows us to simplify proofs from \cite{Futorny} and avoid the use of formal variables. Moreover, similar ideas that we present here can be used in the case of other singularities, see \cite{EMV}. \bigskip \textbf{Acknowledgements:} E.~V. was partially supported by SFB TR 191 and by the Universidade Federal de Minas Gerais. \section{Preliminaries} A Gelfand-Tsetlin tableau is a tableau $(a_{ki})$ of complex numbers, where $1\leq i \leq k \leq n$ and $n\geq 2$. Further we will consider the set $V$ of all Gelfand-Tsetlin tableaux as a complex manifold that is isomorphic to $\mathbb C^{n(n+1)/2}$. Let $\gimel \simeq \mathbb Z^{n(n-1)/2}$ be the free abelian group generated by $\sigma_{st}$, where $1\leq t \leq s \leq n-1$. We fix the following action of $\gimel$ on $V$: $\sigma_{st} (x) = (x_{ki} +\delta_{ki}^{st})$, where $x = (x_{ki})\in V$ and $\delta_{ki}^{st}$ is the Kronecker delta. That is, $\delta_{ki}^{st}=1$ if $(ki) = (st)$ and $\delta_{ki}^{st}=0$ otherwise. Further we put $G=S_1\times S_2\times \cdots \times S_n$, so $G$ is the product of symmetric groups $S_i$. The group $G$ acts on $V$ in the following way: $(s (x))_{ki} = x_{k s_k(i)},$ where $s=(s_1,\ldots, s_n)\in G$.
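Both actions just defined are purely combinatorial; a small sketch with a hypothetical dict-based encoding of tableaux (a tableau of order $n$ maps each pair $(k, i)$ with $1 \leq i \leq k \leq n$ to its entry $x_{ki}$):

```python
def sigma(s, t, x, power=1):
    """The generator sigma_{st} (1 <= t <= s <= n-1): shift x_{st} by `power`.
    Since the action is by translations, it is free."""
    y = dict(x)
    y[(s, t)] += power
    return y

def act_G(perms, x):
    """(s(x))_{ki} = x_{k, s_k(i)}, where each s_k is a permutation of
    {1, ..., k} encoded as a dict i -> s_k(i)."""
    return {(k, i): x[(k, perms[k][i])] for (k, i) in x}

x = {(1, 1): 0.5, (2, 1): 1.0, (2, 2): -2.0}   # a tableau for n = 2
y = sigma(1, 1, x)                              # only sigma_{11} exists for n = 2
print(y[(1, 1)])                                # 1.5
assert sigma(1, 1, y, power=-1) == x            # sigma and its inverse cancel
swap = {1: {1: 1}, 2: {1: 2, 2: 1}}             # s_2 transposes the top row
print(act_G(swap, x)[(2, 1)])                   # -2.0
```

Note that `sigma` only moves entries in rows $1, \ldots, n-1$, while $G$ permutes entries within every row, matching the index ranges in the definitions above.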
Denote by $\mathcal M$ and by $\mathcal O$ the sheaves of meromorphic and holomorphic functions on $V$, respectively. Let us take $f\in H^0(V,\mathcal M)$, $s\in G$ and $\sigma\in \gimel$. We set $$ \sigma(f) = f\circ \sigma^{-1}, \quad s(f) = f\circ s^{-1},\quad s(\sigma) = s\circ \sigma\circ s^{-1}. $$ These formulas define an action of $\gimel$ on $H^0(V,\mathcal M)$ and actions of $G$ on $H^0(V,\mathcal M)$ and on $\gimel$, respectively. Denote by $\mathcal M\star \gimel$ the sheaf of meromorphic functions on $V$ with values in $\gimel$. An element of $H^0(V,\mathcal M\star \gimel)$ is a finite sum $\sum\limits_i f_i \sigma_i$, where $f_i\in \mathcal M$ and $\sigma_i\in \gimel$. In other words, $\mathcal M\star \gimel$ is the sheaf of meromorphic sections of the trivial bundle $V\times \bigoplus_{\sigma\in \gimel} \mathbb C \sigma \to V$. There exists a structure of a skew group ring on $H^0(V,\mathcal M\star \gimel)$, see \cite{FO}. Namely, $$ \sum_if_i \sigma_i \circ \sum_j f'_j \sigma'_j := \sum_{ij} f_i \sigma_i(f'_j) \sigma_i\circ \sigma'_j. $$ Here $f_i, f'_j\in H^0(V,\mathcal M)$ and $\sigma_i,\sigma'_j \in \gimel$. We denote this skew group ring by $\mathcal R$. To simplify notation, we use $\circ$ both for the multiplication in $\mathcal R$ and for the product in $\gimel$. On $H^0(V,\mathcal M\star \gimel)$ we will also consider the following multiplication: $A*B := B\circ A$. Recall that a Gelfand-Tsetlin tableau is called {\it generic} if $x_{rt} - x_{rs} \notin \mathbb Z$ for any $r$ and $s\ne t$. The definition of a standard Gelfand-Tsetlin tableau can be found in \cite{Futorny}. The classical Gelfand-Tsetlin formulas have the following form in terms of generators of $\mathfrak{gl}_n(\mathbb C)$, see for instance \cite{Futorny}, Theorems $3.6$ and $3.8$, for details.
\begin{equation}\label{eq G-Ts generators} \begin{split} E_{k,k+1}(v) &= - \sum_{i=1}^k \frac{\prod_{j= 1}^{k+1} (x_{ki}- x_{k+1,j})}{\prod_{j\ne i}^k (x_{ki}- x_{kj})} (v + \delta_{ki});\\ E_{k+1,k} (v) & = \sum_{i=1}^k \frac{\prod_{j= 1}^{k-1} (x_{ki}- x_{k-1,j})}{\prod_{j\ne i}^k (x_{ki}- x_{kj})} (v - \delta_{ki});\\ E_{k,k} (v) & = \Big( \sum_{i=1}^k (x_{ki} +i-1) - \sum_{i=1}^{k-1} (x_{k-1,i} +i-1) \Big) (v). \end{split} \end{equation} Here $E_{st}\in \mathfrak{gl}_n(\mathbb C)$ are the standard generators and $v\in V$ is either a standard or a generic Gelfand-Tsetlin tableau with coordinates $v=(x_{ki})$ and $(v \pm \delta_{ki}) = \sigma^{\pm 1}_{ki}(v)$. Assume that $v$ is a generic Gelfand-Tsetlin tableau. Theorem $3.8$ in \cite{Futorny} states that Formulas (\ref{eq G-Ts generators}) define a $\mathfrak g$-module structure on the vector space spanned by the elements of the orbit $\gimel(v)$. Let us identify the point $v\in V$ with the evaluation map $ev_v: H^0(V,\mathcal O)\to \mathbb C$. Then we can define the map $E_{st} \mapsto \Phi(E_{st})\in \mathcal R$ using the equality $ev_{v} \circ \Phi(E_{st}) = E_{st} (v)$ for generic $v$. Since generic points are dense in $V$, $\Phi(E_{st})$ is a well-defined element of $\mathcal R$. For example, $$ E_{k,k+1} = - \sum_{i=1}^k \frac{\prod_{j= 1}^{k+1} (x_{ki}- x_{k+1,j})}{\prod_{j\ne i}^k (x_{ki}- x_{kj})} \sigma^{-1}_{ki}. $$ The following theorem was proved in \cite{FO}. \medskip \t\label{teor Fut Ovs} {\sl The map $\Phi: \mathcal U(\mathfrak g) \to \mathcal R$ is a homomorphism of rings. Here $E_{st} \mapsto \Phi(E_{st})$, where $E_{st}\in \mathfrak{gl}_n(\mathbb C)$, is as above. } \medskip \noindent{\bf Remark.} Note that $\Im(\Phi)$ is $G$-invariant. This fact can be verified directly.
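To make Formulas (\ref{eq G-Ts generators}) concrete, the following Python sketch (purely illustrative; the function names are ours) evaluates the rational coefficients for $n=2$, where $E_{11}$ and $E_{22}$ act diagonally and $E_{12}$ shifts $x_{11}$.

```python
from math import prod

def c_E12(x11, x21, x22):
    """Coefficient of (v + delta_{11}) in E_{12}(v) for gl_2:
    -prod_j (x_{11} - x_{2j}); the denominator prod_{j != i} is empty."""
    return -prod([x11 - x21, x11 - x22])

def c_Ekk(row_k, row_km1):
    """Eigenvalue of E_{kk} on a tableau: sum over row k of (x_{ki}+i-1)
    minus the analogous sum over row k-1 (rows given as lists, i 1-based)."""
    return (sum(x + i for i, x in enumerate(row_k))
            - sum(x + i for i, x in enumerate(row_km1)))

# Tableau for n = 2: top row (x_{21}, x_{22}) = (2, 3), bottom row x_{11} = 1.
e12 = c_E12(1.0, 2.0, 3.0)       # -(1-2)(1-3) = -2
e11 = c_Ekk([1.0], [])           # x_{11} = 1
e22 = c_Ekk([2.0, 3.0], [1.0])   # (2+0)+(3+1) - (1+0) = 5
```

Note that `enumerate` starts at $0$, which matches the shift $i-1$ in the formulas.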
\medskip From Theorem \ref{teor Fut Ovs} it follows that for any generic $x\in V$ the formula $ \Phi(E_{st}) (ev_y) = ev_y \circ \Phi(E_{st})$ defines an action of $\mathfrak{gl}_n(\mathbb C)$ on the vector space spanned by the local distributions $ev_y$, where $y\in \gimel(x)$. Here $ev_y\circ (f\sigma) = ev_y(f) ev_y\circ \sigma$ and $ev_y\circ \sigma(g) = ev_y(\sigma(g))$ for $g\in \mathcal O$. Since the elements $\Phi(E_{st})$ are holomorphic in a sufficiently small neighborhood of the orbit $\gimel(x)$, the expression $ev_y \circ \Phi(E_{st})$ is well-defined. More generally, any homomorphism of rings $\Psi: \mathcal U(\mathfrak h) \to \mathcal R$, where $\mathfrak h$ is any Lie algebra, such that the image $\Psi(\mathfrak h)$ is holomorphic in a neighborhood of $\gimel(x)$, defines an action of $\mathfrak h$ on the vector space spanned by the local distributions $ev_y$, where $y\in \gimel(x)$. The interpretation of a point $y\in V$ as a local distribution $ev_y$ suggests the possibility of defining a $\mathfrak{gl}_n(\mathbb C)$-module structure on other local distributions, i.e. on linear maps $D_y: \mathcal O_y\to \mathbb C$ with $\mathfrak m_y^p\subset \Ker(D_y)$, where $p>0$ and $\mathfrak m_y$ is the maximal ideal in the local algebra $\mathcal O_y$. We develop this idea in the present paper. The main problem is that the ring $\mathcal R$ does not act on the vector space of local distributions, because of the singularities of its elements. In the next section we will construct the universal ring $\mathcal D_v\subset \mathcal R^{G_v}$, where $v$ is a certain $1$-singular point in $V$ and $G_v\subset G$ is the stabilizer of $v$. We will show that $\mathcal D_v$ acts on the $G_v$-invariant holomorphic functions $H^0(V,\mathcal O^{G_v})$, where the action is given by $(f\circ \sigma) (F) = f\, (F \circ \sigma^{-1})$.
This action induces an action $(f\circ \sigma)(D_y) = D_y \circ (f\circ \sigma)$ of $(\mathcal D_v,*)$ on the $G_v$-invariant holomorphic local distributions $D: H^0(V,\mathcal O^{G_v}) \to \mathbb C$ supported at $\gimel(v)$. By Theorem \ref{teor Fut Ovs} we also have a structure of a $\mathfrak{gl}_n(\mathbb C)$-module on the vector space of these local distributions. Further we will consider local distributions $ev_v\circ A : H^0(V,\mathcal O^{G_v}) \to \mathbb C$, where $A\in \mathcal D_v$. Clearly, this vector space is a $\mathcal D_v$- and hence a $\mathfrak{gl}_n(\mathbb C)$-submodule. The last step is to find a basis $\mathcal B_v$ for the vector space spanned by $\{ev_{v} \circ A\,\,|\,\, A\in \mathcal D_v\}$. We call this basis the universal basis for the universal ring $\mathcal D_v$. Our construction implies that for any homomorphism $\Psi: \mathcal U(\mathfrak h) \to \mathcal D_v$, where $\mathfrak h$ is a Lie algebra, $\mathcal B_v$ is a basis for the corresponding $\mathfrak h$-module. We will see that $\Im (\Phi) \subset \mathcal D_v$ and that $\mathcal B_v$ coincides with the basis constructed in \cite{Futorny}. We develop these ideas in the case of an arbitrary point $x\in V$ in \cite{EMV}. \section{Main result} A point $x=(x_{kj})\in V$ is called {\it $1$-singular} if there exist $x_{ki}$ and $x_{kj}$, where $i\ne j$, such that $x_{ki} - x_{kj}\in \mathbb Z$ and $x_{rs} - x_{rt}\notin \mathbb Z$, where $s\ne t$, for each $(r,s,t)\ne (k,i,j)$. Note that the generators from (\ref{eq G-Ts generators}) have one simple pole at the orbit $\gimel(x)$ for any $1$-singular point $x$. Let us fix a $1$-singular point $x^0=(x^0_{kj})\in V$ such that $x^0_{ki} - x^0_{kj}\in \mathbb Z$. We put $z_1= x_{ki} - x_{kj}$, $z_2=x_{ki} + x_{kj}$ and we denote by $z_3,z_4,\ldots$ the remaining coordinates $(x_{st})$ in $V$. So $(z_i)$ are new coordinates on $V$.
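The coordinate change just introduced can be checked directly: swapping the two singular coordinates $x_{ki}\leftrightarrow x_{kj}$ fixes $z_2$ and flips the sign of $z_1$, which is exactly the involution $\tau$ used in the sequel. A tiny sketch (our own, for illustration only):

```python
def to_z(x_ki, x_kj):
    """New coordinates z_1 = x_ki - x_kj, z_2 = x_ki + x_kj."""
    return x_ki - x_kj, x_ki + x_kj

a, b = 4.0, 1.5
z1, z2 = to_z(a, b)
# Swapping the two singular coordinates corresponds to (z1, z2) -> (-z1, z2).
z1s, z2s = to_z(b, a)
```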
From now on we fix a point $v=(0,z^0_2,\ldots, z^0_n)\in \gimel(x^0)$ and a sufficiently small neighborhood $W$ of the orbit $\gimel(v) = \gimel(x^0)$ that is invariant with respect to the group $\gimel$ and with respect to $\tau\in G$, where $\tau$ is defined by $\tau(z_1)=-z_1$ and $\tau(z_i)=z_i$, $i>1$. From now on we will consider restrictions of elements of $\mathcal R$ to $W$. We denote by $G_v =\{\id,\tau\}\subset G$ the stabilizer of $v$. We say that an element $A\in\mathcal R$ is at most $1$-singular at $v$ if $A=\sum_ih_i\sigma_i$, where the $h_i$ are holomorphic at $v$ or have the form $h_i=g_i/z_1$ with $g_i$ holomorphic at $v$. We need the following proposition. \medskip \prop\label{prop A_1 ...A_n is 1 sing} { \sl Let $A_j = \sum\limits_i (H^j_{i}/z_1)\sigma_{i}\in \mathcal R^{G_v}$, where $j=1,\ldots,m$ and $H^j_{i}$ are holomorphic in $W$. Then the product $A_1\circ \cdots \circ A_m$ is at most $1$-singular at $v$. } \medskip \noindent{\it Proof.} We argue by induction on the number of factors and assume that the statement holds for $m-1$ of them. In other words, assume that $A_1\circ \cdots \circ A_{m-1} = \sum\limits_i(G_{i}/z_1)\sigma_{i}$, where the $G_{i}$ are holomorphic at $v$. We have \begin{align*} A_1\circ \cdots \circ A_{m} = \sum\limits_{i,j} \frac{G_{i} \sigma_i(H^m_{j})}{z_1\sigma_i(z_1)} \sigma_{i} \circ \sigma_{j}. \end{align*} Assume that this product is $2$-singular at $v$, i.e., has a pole of order two. Note that $\sigma_i(H^m_{j})$ is holomorphic in $W$. Therefore, $\sigma_{i_0}(z_1) = z_1$ for a certain $i_0$ and, hence, $\tau(\sigma_{i_0}) = \sigma_{i_0}$. Further, $\tau(\sum\limits_i(G_{i}/z_1)\sigma_{i}) = \sum\limits_i(G_{i}/z_1)\sigma_{i}$, since $A_1\circ \cdots \circ A_{m-1}$ is $\tau$-invariant. Therefore $\tau\big((G_{i_0}/z_1)\, \sigma_{i_0}\big) = \tau(G_{i_0}/z_1)\, \sigma_{i_0} = (G_{i_0}/z_1)\, \sigma_{i_0}$. Hence $G_{i_0}/z_1$ is $\tau$-invariant, and since $\tau(z_1)=-z_1$ we obtain $\tau(G_{i_0}) = -G_{i_0}$. Therefore, $G_{i_0} = z_1G'_{i_0}$, where $G'_{i_0}$ is holomorphic at $v$.
Therefore, $G_{i_0} \sigma_{i_0}(H^m_{j})/ z_1\sigma_{i_0}(z_1) = G'_{i_0} \sigma_{i_0}(H^m_{j})/ z_1$ is at most $1$-singular, which contradicts our assumption. The proof is complete.$\Box$ \medskip \noindent{\bf Remark.} Elements $A_j = \sum\limits_i (H^j_{i}/z_1)\sigma_{i}$ as in Proposition \ref{prop A_1 ...A_n is 1 sing} generate a subring $\mathcal D_{v}$ in $\mathcal R^{G_v}$. We call this ring the {\it universal ring of} $v$. By Proposition \ref{prop A_1 ...A_n is 1 sing} any element in $\mathcal D_{v}$ is at most $1$-singular at $v$. If $A = \sum\limits_i (H_{i}/z_1)\sigma_{i}$ is a generator of $\mathcal D_{v}$ and $F\in H^0(W,\mathcal O^{G_v})$, then $A(F) = F'/z_1$ is at most $1$-singular at $v$, holomorphic in $W\setminus\{v \}$ and $G_v$-invariant. Therefore $\tau(F') = -F'$ and hence $A(F)$ is holomorphic. Thus we have defined an action of $\mathcal D_{v}$ on $H^0(W,\mathcal O^{G_v})$. \medskip For $\sum\limits_ih_i\sigma_i\in \mathcal R$ we put $g_i:= z_1h_i$. Consider the following set of $G_v$-invariant local distributions defined on $H^0(W,\mathcal O^{G_v})$: \begin{equation}\label{eq diff operator basis} \begin{split} D^1_{\sigma}:=\frac12 ev_v \circ (\sigma + \tau(\sigma) ),\quad D^2_{\sigma'}:= ev_v \circ \frac{(\sigma' - \tau(\sigma') )}{2z_1}, \quad \sigma, \sigma' \in \gimel, \,\,\tau(\sigma')\ne \sigma'. \end{split} \end{equation} Note that $(\sigma' - \tau(\sigma'))/ z_1$ and $\sigma + \tau(\sigma)$ are elements of $\mathcal D_{v}$, hence Formulas (\ref{eq diff operator basis}) are well-defined. Moreover we have the following equalities \begin{equation}\label{eq relations n=1} D^1_{\tau(\sigma)} = D^1_{\sigma} \quad \text{and} \quad D^2_{\tau(\sigma')} = - D^2_{\sigma'}. \end{equation} Denote $\Delta: = \{\sigma\in \gimel\,\,|\,\, \sigma(x_{ki},x_{kj}) = (x_{ki}+m_1,x_{kj}+m_2), \,\, m_1 \leq m_2 \}$ and consider the set $\mathcal B_v:= \{D^1_{\sigma}, D^2_{\sigma'}\,\, | \,\, \sigma,\sigma'\in \Delta,\,\, \tau(\sigma')\ne \sigma' \}$.
The set $\mathcal B_v$ is a set of linearly independent distributions defined on $H^0(W,\mathcal O^{G_v})$. To see this, one applies the $D^i_{\sigma}$ to linear combinations $\alpha + \beta z_1^2$ of the $G_v$-invariant functions $1$ and $z_1^2$. Hence $\mathcal B_v$ is a basis of the vector subspace $S$ of $H^0(W,\mathcal O^{G_v})^*$ that it spans. In the next proposition we show that $S$ is a $\mathcal D_v$- and a $\mathfrak {gl}_n(\mathbb C)$-module. This $\mathfrak {gl}_n(\mathbb C)$-module is isomorphic to the universal $1$-singular Gelfand-Tsetlin module constructed in \cite{Futorny}, see Section $4$ for details. \medskip \prop\label{prop formulas n=1} {\sl Let us take $\sum\limits_ih_i\sigma_i\in \mathcal D_v$. Then we have \begin{equation}\label{eq formula n=1, sing} \begin{split} ev_v \circ (\sum\limits_ih_i\sigma_i) = \sum\limits_i g_i(v) \cdot D^2_{\sigma_i} + \sum\limits_i\frac{\partial g_i}{\partial z_1}(v) D^1_{\sigma_i}, \end{split} \end{equation} where $g_i = z_1 h_i$. Note that in the case when $h_i$ is holomorphic, we have $g_i(v)=0$ and $\frac{\partial g_i}{\partial z_1}(v) = h_i(v)$. Therefore, $S$ is a $\mathcal D_v$-module with basis $\mathcal B_v$. } \medskip \noindent{\it Proof.} Using the series expansion $g_i = g_i|_{z_1=0} + \frac{\partial g_i}{\partial z_1}|_{z_1=0} z_1 + \cdots$, we get $$ ev_v \circ \sum\limits_ih_i\sigma_i = \sum\limits_i g_i(v) ev_v \circ \frac{\sigma_i}{z_1} + \sum\limits_i \frac{\partial g_i}{\partial z_1}(v) ev_v \circ \sigma_i. $$ Note that $ev_v \circ (z_1^{m}/z_1)\sigma_i =0$ for $m>1$. Using the symmetrization $2ev_v \circ \sum\limits_ih_i\sigma_i = ev_v \circ \sum\limits_ih_i\sigma_i + \tau(ev_v \circ \sum\limits_ih_i\sigma_i)$, we obtain the result.$\Box$ \medskip Let $X$ be one of the generators (\ref{eq G-Ts generators}). Clearly $\Phi(X)\in \mathcal D_v$, see the Remark after Theorem \ref{teor Fut Ovs}.
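Formula (\ref{eq formula n=1, sing}) only uses the first two Taylor coefficients of $g_i = z_1 h_i$ at $v$. A small numerical sketch (illustrative only; the helper names are ours) extracts these coefficients for sample functions $h$ with and without a simple pole:

```python
from math import cos, sin

def pole_coeffs(h, eps=1e-5):
    """For h(z1) = g(z1)/z1 with g holomorphic, return (g(0), g'(0)),
    i.e. the coefficients of D^2 and D^1 in the expansion of ev_v o (h sigma).
    Computed from g(z1) = z1*h(z1) by central differences."""
    g = lambda z: z * h(z)
    g0 = (g(eps) + g(-eps)) / 2.0          # g(0)
    g1 = (g(eps) - g(-eps)) / (2.0 * eps)  # g'(0)
    return g0, g1

# h with a genuine simple pole: h = cos(z1)/z1, so g = cos(z1).
c2, c1 = pole_coeffs(lambda z: cos(z) / z)   # expect g(0)=1, g'(0)=0
# h = sin(z1)/z1 extends holomorphically, g = sin(z1), so g(0)=0 and
# g'(0) = h(0) = 1, matching the remark in the proposition.
d2, d1 = pole_coeffs(lambda z: sin(z) / z)
```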
\medskip \t\label{teor main}{\bf [Main result 1]} {\sl The vector space $S$ spanned by the elements of $\mathcal B_v$ is a $\mathfrak{gl}_n(\mathbb C)$-module. The action is given by Formulas (\ref{eq formula n=1, sing}). } \medskip \noindent{\it Proof.} The result follows from Theorem \ref{teor Fut Ovs} and Proposition \ref{prop formulas n=1}. Indeed, $\Im(\Phi)\subset \mathcal D_v$, hence we get a $\mathfrak{gl}_n(\mathbb C)$-module.$\Box$ \medskip In fact we have proved a more general result than the one formulated in Theorem \ref{teor main}. \medskip \t\label{teor main general}{\bf [Main result 2]} {\sl Let $\mathfrak h$ be a Lie algebra and $\Psi : \mathcal U(\mathfrak h) \to \mathcal D_v$ be a homomorphism of rings. Then the vector space $S$ spanned by the elements of $\mathcal B_v$ is an $\mathfrak h$-module. In other words, the basis $\mathcal B_v$ is universal for any homomorphism $\Psi : \mathcal U(\mathfrak h) \to \mathcal D_v$. } \section{Appendix} Theorem \ref{teor main} recovers one of the main results of \cite{Futorny}, a construction of the universal $\mathfrak{gl}_n(\mathbb C)$-module. Another main result of \cite{Futorny} is that in many cases the $\mathfrak{gl}_n(\mathbb C)$-module $S$ is irreducible, see Theorem $4.14$. Let us give an explicit correspondence between the basis constructed in \cite{Futorny} and our basis $\mathcal B_v$. We use the notation of \cite{Futorny}. In \cite{Futorny} the authors consider the basis $\{ T(\sigma(v)),\,\, \mathcal {D}T(\sigma'(v)) \}$, where $\sigma,\sigma'\in \gimel$, such that $T(\sigma(v)) - T(\tau(\sigma(v)))=0$, $ \mathcal {D}T(\sigma'(v)) + \mathcal {D}T(\tau(\sigma'(v)))=0$ and $\tau(\sigma')\ne \sigma'$, see Remark $4.5$ in \cite{Futorny}. The element $T(\sigma(v))$ was considered as a point in $V$ and $\mathcal {D}T(\sigma'(v))$ was considered as a formal additional variable. (In our notation, $T(\sigma(v))$ is just $\sigma(v)\in V$, where $v$ is as above.)
Further, in \cite{Futorny} the action of $\mathfrak{gl}_n(\mathbb C)$ is given by the following formulas, see \cite[Theorem 4.11]{Futorny}: \begin{align*} E_{rs} (T(\sigma(v)))& = ev_{v}\circ\frac{\partial}{\partial z_1} z_1 E_{rs}(T(\sigma(x)));\\ E_{rs} (\mathcal {D}T(\sigma'(v)))& = ev_{v}\circ\frac{\partial}{\partial z_1} E_{rs}(T(\sigma'(x))), \,\,\, E_{rs}\in \mathfrak{gl}_n(\mathbb C), \end{align*} where $x=(x_{ki})$ are coordinates in a neighborhood of $v$. The explicit correspondence between the bases is given by the following formulas: \begin{align*} &2D^2_{\sigma'}(T(v)) = \mathcal {D}T(\sigma'(v)) - \mathcal {D}T(\tau(\sigma'(v))), \quad 2D^1_{\sigma}(T(v)) = T(\sigma(v)) + T(\tau(\sigma(v))). \end{align*}
\section{Introduction} \input{sec_introduction} \section{Fundamental Concepts}\label{sec::fundamentalconcepts} \input{sec_basics} \section{The Discrete Problem}\label{sec::discrete} \input{sec_iga} \section{Details of Implementation}\label{sec::implement} \input{sec_disc} \section{Numerical Examples}\label{sec::num} \input{sec_num} \section{Conclusion}\label{sec::conclusion} \input{sec_conclusion} \begin{footnotesize} \section*{Acknowledgments} The authors would like to thank Lucy Weggler for providing the numerical implementation of the reference solution for the Mie scattering. This work is supported by DFG Grants \emph{SCHO1562/3-1} and \emph{KU1553/4-1} within the project \emph{Simulation of superconducting cavities with isogeometric boundary elements (IGA-BEM)}. Jürgen Dölz is an \emph{Early Postdoc.Mobility fellow}, funded by the Swiss National Science Foundation through the project \emph{174987 H-Matrix Techniques and Uncertainty Quantification in Electromagnetism}, the \emph{Excellence Initiative} of the German Federal and State Governments and the \emph{Graduate School of Computational Engineering} at TU Darmstadt. The work of Felix Wolf is supported by the \emph{Excellence Initiative} of the German Federal and State Governments and the \emph{Graduate School of Computational Engineering} at TU Darmstadt. \end{footnotesize} \bibliographystyle{siamplain} \input{sec_references} \subsection{The Electromagnetic Scattering Problem} On the bounded domain $\Omega\subset\mathbb{R}^3$ and for $0\leq s$, we denote by $H^s(\Omega)$ the usual Sobolev spaces \cite{McLean_2000aa}, and by $\pmb H^s(\Omega)$ their vector valued counterparts. For $s=0$ we utilize the convention $H (\Omega) = H^0 (\Omega) = L^2(\Omega)$ and $\pmb H (\Omega) = \pmb H^0 (\Omega) = \pmb L^2(\Omega)$. 
On unbounded domains $\Omega^c:=\mathbb{R}^3\setminus\overline{\Omega}$ we utilize the same notation together with the subscript ``${\mathrm{loc}}$'' in the form of $H^s_{\mathrm{loc}}(\Omega^c)$ and $\pmb H^s_{\mathrm{loc}}(\Omega^c)$ to denote that the required regularity conditions must only be fulfilled on all bounded subdomains of $\Omega^c$. For compact manifolds $\Gamma$ we denote by $H^s(\Gamma)$ the usual construction of Sobolev spaces on manifolds via charts, and by $\pmb H^s(\Gamma)$ their vector valued counterparts. As usual, we define the spaces $H^{-s}(\Gamma)$ and $\pmb H^{-s}(\Gamma)$ as the dual spaces of $H^s(\Gamma)$ and $\pmb H^s(\Gamma)$ w.r.t.~$L^2(\Gamma)$ and $\pmb L^2(\Gamma)$ as pivot spaces. Let $\mathcal{M}$ be one of the domains $\Omega$, $\Omega^c$, or the boundary $\Gamma$. For any differential operator $\operatorname{d}$ defined on $\mathcal{M}$, we define the space $H^s(\operatorname{d},\mathcal{M})$ via the closure of $H^s(\mathcal{M})$ under the graph norm $\norm{\cdot}_{H^s(\mathcal{M})}+\norm{\operatorname d(\cdot)}_{H^s(\mathcal{M})}$, and equip the space with this norm. The definition of graph norms is generalised to vector-valued differential operators and spaces in complete analogy{, and we denote by $\pmb H^s(\div 0,\mathcal{M})$ the elements of $\pmb H^s(\mathcal{M})$ with zero divergence.} The following trace operator for vector fields onto Lipschitz boundaries will be required to describe meaningful boundary data for the electric wave equation.
\begin{definition}[Rotated Tangential Trace Operators, \cite{Buffa_2003ab}] For $\pmb u\in C(\Omega^c; \mathbb C^3)$, with $\Omega$ being a domain with Lipschitz boundary, we define the \emph{exterior rotated tangential trace operator} as \begin{align*} \pmb \gamma_{ t}^+ (\pmb u)(\pmb x_0) & \coloneqq \lim_{\substack{\pmb x\to \pmb x_0\\\pmb x\in\Omega^c}}\pmb u(\pmb x) \times \pmb{n}_{\pmb x_0},\quad\text{for all}~\pmb x_0\in\Gamma, \end{align*} where $\pmb{n}_{\pmb x_0}$ denotes the exterior normal vector of $\Omega$ at $\pmb x_0$. {The interior trace $\pmb \gamma_{t}^-$ is defined accordingly, using the exterior normal.} \end{definition} By density arguments, see also \cite{Buffa_2003ab}, this notation can be extended to be applicable to the spaces $\pmb H^{s+1/2}_{{\mathrm{loc}}}(\Omega^c)$ for $0<s<1$ and $\pmb H_{\mathrm{loc}}(\bcurl,\Omega^c)$. Thus, we define $\pmb H^s_\times(\Gamma)\coloneqq \pmb \gamma_t^+\big(\pmb H^{s+1/2}_{{\mathrm{loc}}}(\Omega^c)\big)$ for all $0<s<1$ as well as \begin{align*} {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{} \coloneqq \pmb \gamma_t^+\big(\pmb H_{\mathrm{loc}}(\bcurl,\Omega^c)\big). \end{align*} It is known that $\pmb \gamma_t^+ \colon\pmb H_{\mathrm{loc}}(\bcurl,\Omega^c)\to {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ is a bounded linear operator \cite{Buffa_2003ab}. With respect to the pairing \[ \langle \pmb\mu,\pmb\nu\rangle_{\times} = \int_\Gamma (\pmb\mu\times\pmb n_{\pmb x}) \cdot \pmb\nu \,\operatorname{d} \sigma_{\pmb x}, \] we define the spaces $\pmb H^{-s}_\times(\Gamma)$ by duality to $\pmb H^s_\times(\Gamma)$ for $0<s<1$. Note, however, that the space $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ cannot be defined via such a duality if $\Gamma$ is non-smooth, cf.~\cite{Buffa_2003ab}. 
Given a perfectly conducting object $\Omega$ with Lipschitz boundary $\Gamma$, surrounded by the exterior domain $\Omega^c$, we are interested in the scattered field $\pmb e_s$ of an electric incident wave $\pmb e_i$ hitting the scatterer $\Omega$. Assuming a time-harmonic problem, the scattered field $\pmb e_s$ can then be described in the frequency domain by the \emph{electric wave equation} \begin{align}\label{problem::ext_scattering}\left\lbrace\qquad \begin{aligned} \bcurl \bcurl\, \pmb e_s -\kappa^2\pmb e_s &= 0&&\text{in}~\Omega^c,\\ \pmb\gamma_t^+\pmb e_s &= -\pmb\gamma_t^+\pmb e_i&&\text{on}~\Gamma,\\ \big|\bcurl\,\pmb e_s(\pmb x)\times{\pmb x}\cdot{|\pmb x|}^{-1}-i\omega\varepsilon _0\pmb e_s(\pmb x)\big|&=\mathcal{O}(|\pmb x|^{-2}),&&|\pmb x|\to\infty. \end{aligned}\right. \end{align} The wavenumber $\kappa=\omega\sqrt{\varepsilon_0\mu_0}$ is given in terms of the frequency $\omega$, as well as the material parameters \emph{permittivity} $\varepsilon_0>0$ and \emph{permeability} $\mu_0>0$, which we assume to be constant. It is known that \eqref{problem::ext_scattering} is uniquely solvable for any sufficiently regular Dirichlet data and wavenumbers $\kappa >0$, see \cite{Buffa_2004aa}. Given an incident wave $\pmb e_i$, the total electric field $\pmb e$ in $\Omega^c$ is then given by $\pmb e = \pmb e_i+\pmb e_s$. \subsection{The Electric Field Integral Equation} Since \eqref{problem::ext_scattering} is an unbounded exterior problem in a homogeneous medium, it is convenient to use the following boundary integral representation.
\begin{lemma}[Representation Formula, \cite{Buffa_2003ab}] For any solution $\pmb e_s$ of \eqref{problem::ext_scattering} there exists a density $\pmb w\in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ such that $\pmb e_s (\pmb x) = (\tilde{{\bb{\mathscr V}}}_\kappa\pmb w)(\pmb x)$ for all $\pmb x\in \Omega^c$, where \begin{align}\label{eq:ESLP} (\tilde{{\bb{\mathscr V}}}_\kappa\pmb w)(\pmb x) = \int _\Gamma G_\kappa(\pmb x-\pmb y)\pmb w(\pmb y)\,\mathrm{d}\sigma_{\pmb y} + \frac{1}{\kappa^2} \pmb\grad_{\pmb x} \int _\Gamma G_\kappa(\pmb x-\pmb y)\div_{\Gamma}\pmb w(\pmb y)\,\mathrm{d}\sigma_{\pmb y}. \end{align} Herein, $G_\kappa(\pmb x-\pmb y)$ is given by the Helmholtz fundamental solution \begin{align} \label{eq:HelmholtzGreen} G_\kappa(\pmb x-\pmb y)=\frac{e^{i\kappa\|\pmb x-\pmb y\|}}{4\pi\|\pmb x-\pmb y\|}. \end{align} Moreover, the \emph{electric single layer potential} given in \eqref{eq:ESLP} is a continuous operator $\tilde{{\bb{\mathscr V}}}_{\kappa}\colon \pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)\to\pmb H_{\mathrm{loc}}(\bcurl\,\bcurl,\Omega^c),$ such that the image of $\tilde{{\bb{\mathscr V}}}_\kappa$ is divergence free within $\Omega^c$. \end{lemma} We remark that the very same representation formula holds also for the electric wave equation in the \emph{bounded} domain $\Omega$, which we shall not need here. However, for our following considerations it is important to keep in mind that the interior and the exterior problem are closely related to each other. More precisely, the following considerations for the \emph{exterior} problem fail if $\kappa$ is a resonant wavenumber of the \emph{interior} problem, see \cite{Buffa_2003ab} for a precise definition and discussion. By the lemma above we know that there exists a density $\pmb w$, {which has a physical meaning in terms of a surface current,} such that $ \pmb e_s=\tilde{{\bb{\mathscr V}}}_{\kappa}\pmb w$.
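For illustration, the Helmholtz fundamental solution \eqref{eq:HelmholtzGreen} is cheap to evaluate pointwise. The following sketch (ours, not part of the implementation described in this paper) evaluates $G_\kappa$ and numerically checks the radial equation $u'' + \tfrac{2}{r}u' + \kappa^2 u = 0$ satisfied by $u(r)=e^{i\kappa r}/(4\pi r)$ away from the origin:

```python
import cmath

def G(kappa, x, y):
    """Helmholtz fundamental solution G_kappa(x - y) = e^{i kappa r}/(4 pi r)."""
    r = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
    return cmath.exp(1j * kappa * r) / (4.0 * cmath.pi * r)

kappa, r = 2.0, 1.5
g = G(kappa, (r, 0.0, 0.0), (0.0, 0.0, 0.0))

# Radial Helmholtz check via finite differences: for radially symmetric u,
# Laplace(u) = u'' + (2/r) u', so the residual of u'' + (2/r)u' + kappa^2 u
# should vanish up to discretization error.
h = 1e-4
u = lambda s: cmath.exp(1j * kappa * s) / (4.0 * cmath.pi * s)
lap = ((u(r + h) - 2 * u(r) + u(r - h)) / h**2
       + (2.0 / r) * (u(r + h) - u(r - h)) / (2 * h))
residual = lap + kappa**2 * u(r)
```

Note that $|G_\kappa| = 1/(4\pi r)$, i.e. the oscillatory factor has unit modulus, which is the reason the kernel decays only algebraically and the system matrix is densely populated.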
To obtain it, we apply the tangential trace on both sides of \eqref{eq:ESLP}, which yields the \emph{electric field integral equation} \begin{align}\label{eq:EFIE} -\pmb\gamma_t^+\pmb e_i=(\pmb\gamma_t^+\tilde{{\bb{\mathscr V}}}_\kappa)(\pmb w)=:{\bb{\mathscr V}}_\kappa\pmb w. \end{align} The variational formulation for the electric field integral equation \eqref{eq:EFIE} is as follows. \begin{problem}[Continuous Problem] Find $\pmb w\in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ such that \begin{align} \langle {\bb{\mathscr V}}_{\kappa}\pmb w,\pmb\xi \rangle_\times = -\langle \pmb\gamma_t^+\pmb e_i,\pmb\xi \rangle_\times, \label{problem::variational::cont} \end{align} for all $\pmb\xi \in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$. \end{problem} As done in \cite{Buffa_2003aa}, one can utilize a generalized {G\r arding}{}-inequality to show well-posedness of this continuous problem for non-resonant wavenumbers. \subsection{Assembly of the System Matrix} We assume that the B-spline space $\pmb \S_{\pmb p,\pmb\Xi_m}^1(\Gamma)$ is, on each patch $\Gamma _j$, generated by the tuple $\pmb\Xi _{m,j}=(\Xi_m,\Xi_m)$, where $\Xi_m$ is an equidistant knot vector with $2^m$, $m\geq 0$, elements. This corresponds to $m$ steps of uniform refinement in terms of the reference domain and generates a nested sequence of meshes. Then, for each level of refinement $m$, the mesh consists of $4^m$ elements per patch.
\begin{figure} \centering \begin{tikzpicture} \draw (0,0) -- (0,2); \draw (0,2) -- (2,2); \draw (2,2) -- (2,0); \draw (2,0) -- (0,0); \node (C1) at (1,1) {$\Gamma_{i,0,0}$}; \node (A) at (2.5,1) {}; \node (B) at (3.5,1) {}; \draw [->] (A.center) -- (B.center) node [midway, above] {refine}; \draw (0+4,0) -- (0+4,2); \draw (0+4,2) -- (2+4,2); \draw (2+4,2) -- (2+4,0); \draw (2+4,0) -- (0+4,0); \draw (0+4,1) -- (2+4,1); \draw (0+4+1,0) -- (0+4+1,2); \node (C21) at (1+4-.5,1-.5) {$\Gamma_{i,1,0}$}; \node (C22) at (1+4+.5,1-.5) {$\Gamma_{i,1,1}$}; \node (C23) at (1+4+.5,1+.5) {$\Gamma_{i,1,2}$}; \node (C24) at (1+4-.5,1+.5) {$\Gamma_{i,1,3}$}; \node (C) at (6.5,1) {}; \node (D) at (7.5,1) {}; \draw [->] (C.center) -- (D.center) node [midway, above] {refine}; \path [pattern = north east lines] (0+9,0) rectangle (0+9+1,1); \draw (0+8,0) -- (0+8,2); \draw (0+8,2) -- (2+8,2); \draw (2+8,2) -- (2+8,0); \draw (2+8,0) -- (0+8,0); \draw (0+8,1) -- (2+8,1); \draw (0+8+1,0) -- (0+8+1,2); \draw (0+8+.5,0) -- (0+8+.5,2); \draw (0+8+1.5,0) -- (0+8+1.5,2); \draw (0+8,1.5) -- (2+8,1.5); \draw (0+8,.5) -- (2+8,.5); \draw[ultra thick] (0+9,0) -- (0+9,1); \draw[ultra thick] (0+9+0,0) -- (0+9+1,0); \draw[ultra thick] (0+9+1,1) -- (0+9,1); \draw[ultra thick] (0+9+1,1) -- (0+9+1,0); \end{tikzpicture} \caption{Refinement of the patch induced by the $i$-th mapping. Bold region corresponds to cluster $\Gamma_{\pmb \lambda}$ with $\pmb \lambda = (i,1,1)$.} \label{fig::refinement} \end{figure} The key point of this refinement strategy is that it induces a quadtree structure on the geometry, cf.~Figure \ref{fig::refinement}, which we will use for our compression scheme. Each element $\Gamma_{i,j,k}$ within the nested sequence of meshes will be referred to by a tuple $(i,j,k)\eqqcolon \pmb\lambda$, where $i$ denotes the corresponding parametric mapping, $j$ denotes the level of refinement of the element, and $k$ denotes the index of the element in hierarchical order.
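The index scheme just described can be sketched as a toy quadtree in Python. One consistent hierarchical numbering, which we assume here purely for illustration, refines element $(i,j,k)$ into the four children $(i,j+1,4k),\dots,(i,j+1,4k+3)$, so that level $m$ of patch $i$ carries exactly $4^m$ elements:

```python
def children(lam):
    """Children of an element (i, j, k) under the assumed numbering
    (i, j+1, 4k), ..., (i, j+1, 4k+3)."""
    i, j, k = lam
    return [(i, j + 1, 4 * k + l) for l in range(4)]

def level(i, m):
    """All elements Gamma_{i,m,k} of patch i on refinement level m,
    obtained by m successive uniform refinements of the root (i, 0, 0)."""
    elems = [(i, 0, 0)]
    for _ in range(m):
        elems = [c for e in elems for c in children(e)]
    return elems

lvl2 = level(7, 2)  # 4^2 = 16 elements on level 2 of patch 7
```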
For notational purposes, we will define $\abs{\pmb \lambda}\coloneqq j$ and also introduce the diffeomorphisms $\pmb F_{\pmb\lambda}\colon\square\to\Gamma_{\pmb\lambda}$ which can easily be defined by combining $\pmb F_i$ with a suitable affine transformation. For the efficient compression, each instance of $\Gamma_{\pmb\lambda}$ is also considered as a \emph{cluster}, in the sense that $\Gamma_{\pmb\lambda}$ is identified with the set of leaf elements of the subtree rooted at $\Gamma_{\pmb\lambda}$. Loosely speaking, $\Gamma_{\pmb \lambda}$ can be visualised as ``a square region on the geometry''. The hierarchically ordered collection of all $\Gamma_{\pmb\lambda}$ will be called \emph{cluster tree} and denoted by $\mathcal T$. For each pair of clusters in $\mathcal T$, the fundamental solution $G_\kappa$ from \eqref{eq:HelmholtzGreen} can be localized to a \emph{localized kernel function} \begin{align} G_{\kappa,\pmb\lambda,\pmb \lambda'} \colon \square\times \square & \to\mathbb C, \qquad G_{\kappa,\pmb\lambda,\pmb \lambda'}(\pmb s,\pmb t)=G_\kappa\big(\pmb F_{\pmb\lambda}(\pmb s)-\pmb F_{\pmb\lambda'}(\pmb t)\big)\label{eq:lockernel} \end{align} which reparametrizes the fundamental solution to $\square\times\square$. This artificially reduces the number of input variables of the fundamental solution from six to four. On each element $\Gamma_{\pmb\lambda}$, ansatz functions $\varphi_{\pmb\lambda}$ can be defined by lifting suitable shape functions $\hat\varphi$ on $\square$ to the surface via {the (localized) pullback, i.e., $\varphi _{\pmb\lambda}\circ\pmb F_{\pmb\lambda} = \hat\varphi$}. To define suitable shape functions of polynomial degree $p$ on $\square$, we introduce the knot vector $\Xi_m^*$, which is generated from $\Xi_m$ by increasing the multiplicity of each knot to $p+1$. We then define the spaces $\S_{p,m}^*(\square)$ to be the discontinuous spaces generated by $\pmb p=(p,p)$ and $\pmb\Xi_m^*=(\Xi_m^*,\Xi_m^*)$.
Then, for the particular case $m=0$, $\S_{p,0}^*(\square)$ contains all tensorised polynomials of degree $p$ on $\square$. Later, we will also require $\S_{p,m}^*(\square)$, $m>0$, which is generated by tensorised polynomials of degree $p$ on every element of the unit square. The span of all ansatz functions $\varphi_{\pmb\lambda}$ with $|\pmb\lambda|=m$ then yields a global discrete discontinuous function space $\S^*_{p,m}(\Gamma)$ of dimension $k\coloneqq 2^{2m}N(p+1)^2$. Since B-splines are piecewise polynomials, it clearly holds that \[ \pmb \S_{\pmb p,\pmb\Xi_m}^1(\Gamma)\subseteq\pmb\S^*_{p,m}(\Gamma):=\S^*_{p,m}(\Gamma)\times\S^*_{p,m}(\Gamma), \] with $\pmb p=(p,p)$ and $\pmb\Xi_m=(\Xi_m,\Xi_m)$. We can therefore represent each basis function of $\pmb \S_{\pmb p,\pmb\Xi_m}^1(\Gamma)$ by a linear combination of basis functions of $\pmb\S^*_{p,m}(\Gamma)$. This yields a transformation matrix $\pmb T$, which transforms the coefficient vector of a function in $\pmb \S_{\pmb p,\pmb\Xi_m}^1(\Gamma)$ to the coefficient vector of the corresponding function in $\pmb\S^*_{p,m}(\Gamma)$. Then, instead of assembling the system of linear equations \eqref{eq:linsys} with respect to $\pmb\S_{\pmb p,\pmb\Xi_m}^1(\Gamma)$, one may assemble it with respect to $\pmb\S^*_{p,m}(\Gamma)$ to obtain a system matrix $\pmb V_{\kappa,h}^*$ and a vector $\pmb f_h^*$. A linear system of equations equivalent to \eqref{eq:linsys} is then given by \begin{align} \pmb T^\intercal \pmb V_{\kappa,h}^* \pmb T \pmb w = -\pmb T^\intercal \pmb f_h^*.\label{eq::superspace} \end{align} Since the dimension of $\pmb\S^*_{p,m}(\Gamma)$ is larger than the dimension of $\pmb\S_{\pmb p,\pmb\Xi_m}^1(\Gamma)$, the matrix $\pmb V_{\kappa,h}^*$ is larger than the matrix $\pmb V_{\kappa,h}$. However, it has been shown in \cite{WegglerMatrix} for the case of classical higher order Raviart-Thomas elements that the superspace approach can achieve better compression rates and, thus, better computation times.
In this particular case, the non-zero elements in $\pmb T$ were either $1$ or $-1$. In \cite{Dolz_2016aa, Dolz_2018aa}, the superspace approach has been applied to represent higher order B-spline spaces for Laplace and Helmholtz problems, where the elements of $\pmb T$ were the coefficients of a suitable basis transformation. Thus, the superspace approach in \eqref{eq::superspace} can be implemented as a mixture of the two: on each patch $\Gamma _j$, one can use the approach of \cite{Dolz_2018aa} to find a suitable transformation matrix between $\S_{p,m}^*(\Gamma_j)\times\S_{p,m}^*(\Gamma_j)$ and $\S _{\pmb p, \pmb\Xi _{m}}^1(\Gamma)|_{\Gamma _j}$, while the approach of \cite{WegglerMatrix} can be used to enforce continuity across patch boundaries. The transformation matrix $\pmb T$ can then be seen as the product of two sparse matrices. \begin{remark} From an implementation point of view, the transformation matrix between $\S_{p,m}^*(\Gamma_j)\times\S_{p,m}^*(\Gamma_j)$ and $\S _{\pmb p, \pmb\Xi _{m}}^1(\Gamma)|_{\Gamma _j}$ can easily be constructed in a black-box fashion by exploiting the tensor product structure of the two spaces and spline-interpolation in one dimension. \end{remark} The highly local support of the ansatz functions in $\pmb\S^*_{p,m}(\Gamma)$ {has several advantages. First, the numerical integration for the evaluation of the matrix entries can be done with standard quadrature methods for higher order boundary element methods, see \cite{SS97} or \cite{Harbrecht_2001aa}. Second, it} will allow us to employ a version of the fast multipole method for the matrix compression which perfectly fits the framework of isogeometric analysis. Of course, one may also use any other compression method to approximate $\pmb V_{\kappa,h}^*$, but we will see that our version of the fast multipole method in combination with the structure of the isogeometric mappings directly fits into the efficient $\mathcal{H}^2$-matrix framework.
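In a simplified one-dimensional setting the structure of such a transformation matrix is easy to see. The sketch below (our own toy example, not the actual implementation) expresses continuous piecewise-linear hat functions on two elements of $[0,1]$ in the discontinuous element-wise linear basis and verifies that the matrix $\pmb T$ reproduces function values; the shared node coefficient is simply duplicated, so all non-zero entries are $1$:

```python
# Two elements [0, .5] and [.5, 1]; continuous basis: 3 hat functions at
# the nodes 0, .5, 1; discontinuous basis: per-element local linear
# functions (value at the left/right endpoint), 4 in total.

# T maps continuous coefficients (c0, c1, c2) to discontinuous ones
# (d0, d1 | d2, d3): the coefficient c1 at the shared node is duplicated.
T = [[1, 0, 0],
     [0, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]

def apply_T(c):
    return [sum(T[r][s] * c[s] for s in range(3)) for r in range(4)]

def eval_disc(d, x):
    """Evaluate the discontinuous representation at x in [0, 1]."""
    if x <= 0.5:
        t = x / 0.5
        return (1 - t) * d[0] + t * d[1]
    t = (x - 0.5) / 0.5
    return (1 - t) * d[2] + t * d[3]

c = [2.0, 5.0, 3.0]   # coefficients in the continuous basis
d = apply_T(c)        # coefficients in the discontinuous superspace
```

For higher polynomial degrees the entries of the per-patch factor become genuine basis-transformation coefficients, while the inter-patch factor remains of this simple duplication type.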
Other compression methods tailored to isogeometric mappings, but in the lowest-order context and in the less efficient $\mathcal{H}$-matrix framework, have been compared in \cite{Harbrecht_2013ab}. Before we introduce the compression scheme, we first have to pull the matrix representation \eqref{eq:MSLtested} back to the reference domain. According to \cite{Peterson_2006aa}, for two basis functions $\pmb\varphi_i$ and $\pmb\varphi_j$ of $\pmb\S^*_{p,m}(\Gamma)$ supported on $\Gamma_{\pmb\lambda(i)}$ and $\Gamma_{\pmb\lambda(j)}$, the first integral is given by \begin{align}\label{eq:MSLref1} \begin{aligned} &\int_\Gamma\int_\Gamma G_\kappa(\pmb x-\pmb y)\pmb\varphi_i(\pmb x)\cdot\pmb\varphi_j(\pmb y)\,\mathrm{d}\sigma_{\pmb y}\,\mathrm{d}\sigma_{\pmb x}\\ &{}\qquad\qquad=\int_\square\int_\square G_{\kappa,\pmb\lambda(i),\pmb\lambda(j)}(\pmb s,\pmb t)\hat{\pmb\varphi}_j(\pmb s)^{\intercal}d\pmb F_{\pmb\lambda(j)}(\pmb s)^\intercal d\pmb F_{\pmb\lambda(i)}(\pmb t)\hat{\pmb\varphi}_i(\pmb t)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s\end{aligned} \end{align} and the second by \begin{align}\label{eq:MSLref2} \begin{aligned} &\int_\Gamma\int_\Gamma G_\kappa(\pmb x-\pmb y)\div_\Gamma\pmb\varphi_j(\pmb x)\div_\Gamma\pmb\varphi_i(\pmb y)\,\mathrm{d}\sigma_{\pmb y}\,\mathrm{d}\sigma_{\pmb x}\\ &{}\qquad\qquad=\int_\square\int_\square G_{\kappa,\pmb\lambda(i),\pmb\lambda(j)}(\pmb s,\pmb t)\div\hat{\pmb\varphi}_j(\pmb s)\div\hat{\pmb\varphi}_i(\pmb t)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s.\end{aligned} \end{align} Assuming that a finite dimensional basis of $\pmb\S^*_{p,m}(\Gamma)$ is given in terms of scalar functions, i.e., \[ \bigg\{ \begin{bmatrix} \varphi_i\\ 0 \end{bmatrix}, \begin{bmatrix} 0\\ \varphi_j \end{bmatrix} \colon \varphi_i,\varphi_j~\text{basis functions of}~\S^*_{p,m}(\Gamma) \bigg\}, \] the matrix $\pmb V_{\kappa,h}^*$ can be further decomposed into \[ \pmb V_{\kappa,h}^* = \begin{bmatrix} \pmb V_{\kappa,h}^{(1,1)} & \pmb V_{\kappa,h}^{(1,2)}\\ \pmb V_{\kappa,h}^{(2,1)} & \pmb
V_{\kappa,h}^{(2,2)} \end{bmatrix}, \] with \begin{align}\label{eq:Vkh} \begin{aligned} \Big[\pmb V_{\kappa,h}^{(\alpha,\beta)}\Big]_{i,j} ={}& \int_\square\int_\square G_{\kappa,\pmb\lambda(i),\pmb\lambda(j)}(\pmb s,\pmb t) \Big( \langle\partial_\alpha\pmb F_{\pmb\lambda(j)}(\pmb s),\partial_\beta\pmb F_{\pmb\lambda(i)}(\pmb t)\rangle\hat{\varphi}_j(\pmb s)\hat{\varphi}_i(\pmb t)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\frac{1}{\kappa^2}\partial_\alpha\hat{\varphi}_j(\pmb s)\partial_\beta\hat{\varphi}_i(\pmb t)\Big)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s, \end{aligned} \end{align} for $\alpha,\beta=1,2$. Here, we denote by $\hat{\varphi}_i$ the pullback of the basis function $\varphi_i$ to the reference domain, i.e., \[ \hat{\varphi}_i=\varphi_i\circ\pmb F_{\pmb\lambda(i)}. \] This means that $\hat{\varphi}_i$ is effectively an element of $\S_{p,0}^*(\square)$, i.e., a tensor-product polynomial. \begin{remark} To obtain efficiency in an actual implementation, one may choose to simultaneously assemble the $\pmb V_{\kappa,h}^{(\alpha,\beta)}$ and exploit the symmetry $\pmb V_{\kappa,h}^{(2,1)}= \big(\pmb V_{\kappa,h}^{(1,2)}\big)^{\intercal}$ and the symmetry of $\pmb V_{\kappa,h}^{(1,1)}$ and $\pmb V_{\kappa,h}^{(2,2)}$. Employing an element-wise integration scheme avoids redundant evaluations of the kernel function and the geometry. This can be maintained in the following compression scheme. \end{remark} \subsection{Compression of the System Matrix}\label{sec:FMM} Due to the non-locality of the fundamental solution $G_\kappa$, the system matrix ${\bb{\mathscr V}}_{\kappa,h}$ given by \eqref{eq:MSLtested} is densely populated. Its storage and assembly costs are thus prohibitively expensive for higher-dimensional ansatz and test spaces, and an efficient numerical implementation requires a compression technique.
We follow the approach of \cite{Dolz_2016aa,Dolz_2018aa} to compress the matrices $\pmb V_{\kappa,h}^{(\alpha,\beta)}$, $\alpha,\beta=1,2$, in terms of {a specialized} fast multipole method, which yields a representation of these matrices in terms of $\mathcal{H}^2$-matrices, see also \cite{Borm_2010aa}. However, the approach is only applicable to matrices of the kind \begin{align*} \big[\pmb A\big]_{i,j} ={}& \int_\Gamma\int_\Gamma G_{\kappa}(\pmb x-\pmb y)\varphi_j(\pmb x)\varphi_i(\pmb y)\,\mathrm{d}\sigma_{\pmb y}\,\mathrm{d}\sigma_{\pmb x}\\ ={}& \int_\square\int_\square G_{\kappa,\pmb\lambda(i),\pmb\lambda(j)}(\pmb s,\pmb t)\hat{\varphi}_j(\pmb s)\hat{\varphi}_i(\pmb t)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s, \end{align*} which does not readily fit the format of the matrices from \eqref{eq:Vkh} due to the derivatives of the geometry mappings {contained in the} basis functions {and the involved surface divergences}. In the following, we will, therefore, adapt the construction to the setting of the electric single layer operator. For constructing the \(\mathcal{H}^2\)-matrix representation, consider the level-wise Cartesian product \(\mathcal{T}\boxtimes \mathcal{T}:=\big\{\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda'} \colon\Gamma_{\pmb\lambda},\Gamma_{\pmb\lambda'}\in\mathcal{T}, |\pmb\lambda|=|\pmb\lambda'|\big\}\) of the cluster tree $\mathcal{T}$. Compressible matrix blocks are then identified by the following \emph{admissibility condition}. \begin{definition} The clusters \(\Gamma_{\pmb\lambda}\) and \(\Gamma_{\pmb\lambda^\prime}\) with \(|\pmb\lambda|=|\pmb\lambda^\prime|\) are called \emph{admissible} if \begin{equation}\label{eq:admissibility} \max\big\{\operatorname{diam}(\Gamma_{\pmb\lambda}),\operatorname{diam}(\Gamma_{\pmb\lambda^\prime})\big\} \leq\eta\operatorname{dist}(\Gamma_{\pmb\lambda},\Gamma_{\pmb\lambda^\prime}) \end{equation} holds for a fixed \(\eta\in (0,1)\). 
The largest collection of admissible blocks \(\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda'} \in\mathcal{T}\boxtimes\mathcal{T}\) such that \(\Gamma_{\operatorname{dad}(\pmb\lambda)}\times \Gamma_{\operatorname{dad}(\pmb\lambda')}\) is not admissible forms the \emph{far-field} \(\mathcal{F}\subset\mathcal{T}\boxtimes\mathcal{T}\) of the operator. The remaining non-admissible blocks correspond to the \emph{near-field} \(\mathcal{N}\subset\mathcal{T}\boxtimes\mathcal{T}\) of the operator. \end{definition} The far-field corresponds to the compressible matrix blocks, whereas the near-field is treated by the classical boundary element method, see Figure~\ref{fig:Hmatrix} for an illustration. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{pics/Hmatrix} \caption{\label{fig:Hmatrix}Illustration of the $\mathcal{H}^2$-matrix partitioning. All but the very smallest blocks are contained in the far-field and will be compressed by the fast multipole method.} \end{figure} The \emph{block-cluster tree} \(\mathcal{B}:=\mathcal{F} \cup\mathcal{N}\) can be constructed by Algorithm\ \ref{alg:constructblockclustertree}. We remark that for all block-clusters $\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda'}\in\mathcal{B}$, it holds that $|\pmb\lambda|=|\pmb\lambda'|$, and refer to \cite{Dolz_2016aa,Harbrecht_2013ab} for an in-depth discussion of the special properties of the block-cluster tree in the isogeometric setting.
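To make the recursive construction of Algorithm~\ref{alg:constructblockclustertree} concrete, the following self-contained Python sketch builds far- and near-field for a quadtree of axis-aligned boxes over the unit square. The boxes, the fixed depth, and the value $\eta=0.8$ are our own illustrative assumptions standing in for the bounding boxes of the clusters $\Gamma_{\pmb\lambda}$; the sketch is not the implementation used in this work.

```python
import math

# Illustrative model: clusters are axis-aligned boxes ((x_lo, y_lo), (x_hi, y_hi))
# standing in for bounding boxes of the patches Gamma_lambda.

def diam(box):
    lo, hi = box
    return math.dist(lo, hi)

def dist(a, b):
    # Euclidean distance between two axis-aligned boxes
    (alo, ahi), (blo, bhi) = a, b
    gaps = [max(blo[i] - ahi[i], alo[i] - bhi[i], 0.0) for i in range(2)]
    return math.hypot(*gaps)

def admissible(a, b, eta):
    # admissibility condition: max{diam(A), diam(B)} <= eta * dist(A, B)
    return max(diam(a), diam(b)) <= eta * dist(a, b)

def sons(box):
    # uniform quadtree refinement of a box into four sons
    (x0, y0), (x1, y1) = box
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    return [((x0, y0), (xm, ym)), ((xm, y0), (x1, ym)),
            ((x0, ym), (xm, y1)), ((xm, ym), (x1, y1))]

def build_block_cluster_tree(a, b, level, max_level, eta, far, near):
    # mirrors BuildBlockClusterTree: admissible blocks become far-field leaves,
    # non-admissible blocks are subdivided until the finest level
    if admissible(a, b, eta):
        far.append((a, b))
    elif level == max_level:
        near.append((a, b))
    else:
        for sa in sons(a):
            for sb in sons(b):
                build_block_cluster_tree(sa, sb, level + 1, max_level, eta, far, near)

far, near = [], []
root = ((0.0, 0.0), (1.0, 1.0))
build_block_cluster_tree(root, root, 0, 3, 0.8, far, near)
```

Every far-field block satisfies the admissibility condition by construction, and far- and near-field together partition the product domain, mirroring the partitioning shown in Figure~\ref{fig:Hmatrix}.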
\begin{algorithm}[htb] \caption{Construction of the block-cluster tree \(\mathcal{B}\)} \label{alg:constructblockclustertree} \begin{algorithmic} \Procedure{BuildBlockClusterTree}{cluster $\Gamma_{\pmb\lambda},\Gamma_{\pmb\lambda'}$} \If {\((\Gamma_{\pmb\lambda},\Gamma_{\pmb\lambda'})\) is admissible} \State $\operatorname{sons}(\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda'}):=\emptyset$ \Else \State $\operatorname{sons}(\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda'}):= \{\Gamma_{\pmb\mu}\times\Gamma_{\pmb\mu'}\colon\pmb\mu\in\operatorname{sons}(\pmb\lambda),\pmb\mu'\in\operatorname{sons}(\pmb\lambda')\}$ \For {$\pmb\mu\in\operatorname{sons}(\pmb\lambda),\pmb\mu'\in\operatorname{sons}(\pmb\lambda')$} \State \Call{BuildBlockClusterTree}{$\Gamma_{\pmb\mu}$,$\Gamma_{\pmb\mu'}$} \EndFor \EndIf \EndProcedure \end{algorithmic} \end{algorithm} For a given polynomial degree \(q\in\mathbb{N}\), let \(\{x_0,x_1,\ldots,x_q\}\subset[0,1]\) denote \(q+1\) interpolation points. Furthermore, let \(L_m(s)\) for \(m=0,\ldots,q\) be the Lagrangian basis polynomials with respect to these interpolation points. By a tensor product construction, one obtains the interpolation points \({\pmb x}_{\pmb m}:=(x_{m_1}, x_{m_2})\) and the corresponding tensor product basis polynomials \(L_{\pmb m}({\pmb s}):= L_{m_1} (s_1)\cdot L_{m_2}(s_2)\) for \(m_1,m_2=0,\ldots,q\). In all admissible blocks \(\Gamma_{\pmb\lambda}\times\Gamma_{\pmb\lambda^\prime}\in\mathcal{F}\), this gives rise to the approximation \[ G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb s},{\pmb t})\approx \sum\limits_{\substack{\|{\pmb m}\|_\infty\leq q,\\\|{\pmb m}^\prime\|_\infty\leq q}} G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb x}_{\pmb m},{\pmb x}_{{\pmb m}^\prime}) L_{\pmb m}({\pmb s})L_{{\pmb m}^\prime}({\pmb t}) \eqqcolon \tilde{G}_{\kappa,\pmb\lambda,\pmb\lambda^\prime}^{(q)}({\pmb s},{\pmb t}). 
\] We remark that the approach presented here interpolates the localized kernel \eqref{eq:lockernel} via polynomials on the reference domain $\square$ of the isogeometric mappings rather than the original kernel in space, as first introduced in \cite{Giebermann_2001aa,Hackbusch_2002aa}. We will see that this leads to a complexity of $q^2$ in terms of the interpolation degree of the compression, rather than $q^3$. Including the geometry information into the kernel evaluation yields \begin{align}\label{eq:twomatrices} \pmb V_{\kappa,h}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'} = \pmb V_{\kappa,h,1}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'} + \pmb V_{\kappa,h,2}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'} \end{align} with \begin{align*} &\Big[\pmb V_{\kappa,h,1}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}\Big]_{\ell,\ell'}\\ &{}\qquad=\int_\square\int_\square G_{\kappa,\pmb\lambda,\pmb\lambda'}(\pmb s,\pmb t) \langle\partial_\alpha\pmb F_{\pmb\lambda'}(\pmb s),\partial_\beta\pmb F_{\pmb\lambda}(\pmb t)\rangle\hat{\varphi}_{\ell'}(\pmb s)\hat{\varphi}_\ell(\pmb t)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s\\ &{}\qquad\approx \sum\limits_{\substack{\|{\pmb m}\|_\infty\leq q,\\\|{\pmb m}^\prime\|_\infty\leq q}} G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb x}_{\pmb m},{\pmb x}_{{\pmb m}^\prime})\langle\partial_\alpha\pmb F_{\pmb\lambda'}(\pmb x_{\pmb m}),\partial_\beta\pmb F_{\pmb\lambda}(\pmb x_{\pmb m'})\rangle\\[-.5cm] &{}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot \int_\square L_{\pmb m}({\pmb s})\hat{\varphi}_{\ell'}(\pmb s)\,\mathrm{d}\pmb s \int_\square L_{{\pmb m}^\prime}({\pmb t}) \hat{\varphi}_\ell(\pmb t)\,\mathrm{d}\pmb t \end{align*} for two basis functions \(\hat{\varphi}_\ell,\hat{\varphi}_{\ell^\prime} \in\S_{p,J-|\pmb\lambda|}^*(\square)\).
We thus have the representation \[ \Big[\pmb V_{\kappa,h,1}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}\Big]_{\ell,\ell'} =\big[{\pmb M}_{|\pmb\lambda|}^\square{\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,1}^{(\alpha,\beta)} ({\pmb M}_{|\pmb\lambda^\prime|}^\square)^\intercal\big]_{\ell,\ell^\prime}, \] where \[ \Big[{\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,1}^{(\alpha,\beta)}\Big]_{\pmb m,\pmb m'} = G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb x}_{\pmb m},{\pmb x}_{{\pmb m}^\prime})\langle\partial_\alpha\pmb F_{\pmb\lambda'}(\pmb x_{\pmb m}),\partial_\beta\pmb F_{\pmb\lambda}(\pmb x_{\pmb m'})\rangle \] and \begin{align*} \big[{\pmb M}_{|\pmb\lambda|}\big]_{m_1,\ell} ={}&{}\int_0^1L_{m_1}(s_1)\hat{\phi}_\ell(s_1)\,\mathrm{d} s_1,\quad\hat{\phi}_\ell\in\S_{p,0}^*([0,1]),\\ {\pmb M}_{|\pmb\lambda|}^{\square} ={}&{}{\pmb M}_{|\pmb\lambda|}\!\otimes\!{\pmb M}_{|\pmb\lambda|}. \end{align*} For the second term in \eqref{eq:twomatrices} we obtain \begin{align*} &\Big[\pmb V_{\kappa,h,2}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}\Big]_{\ell,\ell'}\\ &{}\qquad=-\frac{1}{\kappa^2}\int_\square\int_\square G_{\kappa,\pmb\lambda,\pmb\lambda'}(\pmb s,\pmb t) \partial_{\alpha}\hat{\varphi}_{\ell'}(\pmb s)\partial_{\beta}\hat{\varphi}_\ell(\pmb t)\,\mathrm{d}\pmb t\,\mathrm{d}\pmb s\\ &{}\qquad\approx \sum\limits_{\substack{\|{\pmb m}\|_\infty\leq q,\\\|{\pmb m}^\prime\|_\infty\leq q}} -\frac{1}{\kappa^2} G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb x}_{\pmb m},{\pmb x}_{{\pmb m}^\prime}) \int_\square L_{\pmb m}({\pmb s})\partial_{\alpha}\hat{\varphi}_{\ell'}(\pmb s)\,\mathrm{d}\pmb s \int_\square L_{{\pmb m}^\prime}({\pmb t})\partial_{\beta}\hat{\varphi}_\ell(\pmb t)\,\mathrm{d}\pmb t, \end{align*} which amounts to the representation \[ \Big[\pmb V_{\kappa,h,2}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}\Big]_{\ell,\ell'} =\big[{\pmb M}_{|\pmb\lambda|}^{\alpha,\square}{\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,2}^{(\alpha,\beta)} ({\pmb
M}_{|\pmb\lambda^\prime|}^{\beta,\square})^\intercal\big]_{\ell,\ell^\prime}, \] with \begin{align*} \big[{\pmb M}_{|\pmb\lambda|}^{\partial}\big]_{m,\ell} ={}&\int_0^1L_{m}(s)\partial\hat{\phi}_\ell(s)\,\mathrm{d} s,\quad\hat{\phi}_\ell\in\S_{p,0}^*([0,1]),\\ {\pmb M}_{|\pmb\lambda|}^{1,\square} ={}&{\pmb M}_{|\pmb\lambda|}^\partial\!\otimes\!{\pmb M}_{|\pmb\lambda|},\\ {\pmb M}_{|\pmb\lambda|}^{2,\square} ={}&{\pmb M}_{|\pmb\lambda|}\!\otimes\!{\pmb M}_{|\pmb\lambda|}^\partial, \end{align*} and \[ \Big[{\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,2}^{(\alpha,\beta)}\Big]_{\pmb m,\pmb m'} = -\frac{1}{\kappa^2}G_{\kappa,\pmb\lambda,\pmb\lambda^\prime}({\pmb x}_{\pmb m},{\pmb x}_{{\pmb m}^\prime}). \] In view of \eqref{eq:twomatrices}, this yields the low-rank representation \begin{align}\label{eq:FMMlr} \pmb V_{\kappa,h}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}\approx \begin{bmatrix} {\pmb M}_{|\pmb\lambda|}^{\square} & {\pmb M}_{|\pmb\lambda|}^{\alpha,\square} \end{bmatrix} \begin{bmatrix} {\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,1}^{(\alpha,\beta)} & \\ & {\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,2}^{(\alpha,\beta)} \end{bmatrix} \begin{bmatrix} \big({\pmb M}_{|\pmb\lambda'|}^{\square}\big)^\intercal \\ \big({\pmb M}_{|\pmb\lambda'|}^{\beta,\square}\big)^\intercal \end{bmatrix}, \end{align} for the matrices \eqref{eq:Vkh} in all admissible matrix blocks, see also Figure~\ref{fig:FMM} for an illustration.
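The following self-contained Python sketch illustrates the degenerate kernel approximation underlying the low-rank block representation for a single admissible block. The two parallel flat patches at distance $2$, the Chebyshev--Lobatto interpolation points, and the sampled error measure are our own illustrative assumptions; the matrix of kernel values at the interpolation points plays the role of the matrices ${\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,\cdot}^{(\alpha,\beta)}$, and the separation rank is $(q+1)^2$.

```python
import itertools
import numpy as np

# Illustrative localized kernel for two parallel unit patches at distance 2,
# so that G is smooth on the (admissible) block square x square.
def G(s, t):
    return 1.0 / np.sqrt((s[0] - t[0])**2 + (s[1] - t[1])**2 + 4.0)

def cheb(q):
    # q + 1 Chebyshev-Lobatto points on [0, 1] (one possible choice of x_0..x_q)
    return 0.5 * (1.0 - np.cos(np.pi * np.arange(q + 1) / q))

def lagrange(xs, m, x):
    # Lagrange basis polynomial L_m w.r.t. the points xs, evaluated at x
    val = 1.0
    for j, xj in enumerate(xs):
        if j != m:
            val *= (x - xj) / (xs[m] - xj)
    return val

def interpolated_kernel(q):
    # tensor-product interpolation: G(s,t) ~ sum_{m,m'} G(x_m, x_m') L_m(s) L_m'(t)
    xs = cheb(q)
    idx = list(itertools.product(range(q + 1), repeat=2))
    # kernel values at the tensor interpolation points (the "K" matrix)
    K = np.array([[G((xs[m[0]], xs[m[1]]), (xs[n[0]], xs[n[1]])) for n in idx]
                  for m in idx])
    def G_q(s, t):
        Ls = np.array([lagrange(xs, m[0], s[0]) * lagrange(xs, m[1], s[1]) for m in idx])
        Lt = np.array([lagrange(xs, m[0], t[0]) * lagrange(xs, m[1], t[1]) for m in idx])
        return Ls @ K @ Lt
    return G_q

def max_error(q, n=4):
    # sampled interpolation error on a tensor grid of test points
    pts = np.linspace(0.05, 0.95, n)
    G_q = interpolated_kernel(q)
    return max(abs(G((s1, s2), (t1, t2)) - G_q((s1, s2), (t1, t2)))
               for s1 in pts for s2 in pts for t1 in pts for t2 in pts)
```

The error decays rapidly in the interpolation degree $q$, as expected for admissible blocks, while the factorized form only requires the $(q+1)^2\times(q+1)^2$ matrix of kernel values per block.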
\begin{figure} \centering \begin{tikzpicture}[scale=.9] \draw[fill=blue!20] (-2,-2) rectangle (2,2); \draw (0, 0) node {$\pmb V_{\kappa,h}^{(\alpha,\beta)}\Big|_{\pmb\lambda,\pmb\lambda'}$}; \draw (3,0) node {$\approx$}; \draw[fill=green!10,densely dashed] (4,-2) rectangle (4.7,2); \draw (4.35,0) node {\scalebox{0.6}{${\bf M}_{|\pmb\lambda|}^\square$}}; \draw[fill=green!10,densely dashed] (4.7,-2) rectangle (5.4,2); \draw (5.05,0) node {\scalebox{0.6}{${\bf M}_{|\pmb\lambda|}^{\alpha,\square}$}}; \draw (5.5,0.6) rectangle (6.2,1.3); \draw[fill=red!20] (6.2,0.6) rectangle (6.9,1.3); \draw (6.55,0.95) node {\scalebox{0.5}{${\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,2}^{(\alpha,\beta)}$}}; \draw[fill=red!20] (5.5,1.3) rectangle (6.2,2); \draw (5.85,1.65) node {\scalebox{0.5}{${\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,1}^{(\alpha,\beta)}$}}; \draw (6.2,1.3) rectangle (6.9,2); \draw (6.2,0.6) -- (6.2,2); \draw (5.5,1.3) -- (6.9,1.3); \draw[fill=green!10,densely dashed] (7,1.3) rectangle (11,2); \draw (9,1.65) node {\scalebox{0.6}{$({\bf M}_{|\pmb\lambda'|}^\square)^\intercal$}}; \draw[fill=green!10,densely dashed] (7,0.6) rectangle (11,1.3); \draw (9,0.95) node {\scalebox{0.6}{$({\bf M}_{|\pmb\lambda'|}^{\beta,\square})^\intercal$}}; \end{tikzpicture} \caption{\label{fig:FMM}Illustration of the storage savings for an admissible block $\pmb V_{\kappa,h}^{(\alpha,\beta)}\big|_{\pmb\lambda,\pmb\lambda'}$ compressed by the fast multipole method. When using the efficient $\mathcal{H}^2$-variant, ${\bf M}_{|\pmb\lambda|}^\square$, ${\bf M}_{|\pmb\lambda|}^{\alpha,\square}$, and ${\bf M}_{|\pmb\lambda'|}^{\beta,\square}$ can be efficiently represented by recurrence relations such that assembly, storage and application become negligible.} \end{figure} We remark that this representation is within the same framework as was used for the treatment of the hypersingular operator for the Laplace equation in \cite{Dolz_2016aa}.
Therefore, all considerations made in \cite{Dolz_2016aa} also apply to our setting. In particular, the following complexity results hold, which amount to linear scaling w.r.t.~the number of elements. \begin{theorem} Let $N$ denote the number of patches and $m$ the level of refinement. The storage consumption of the compressed matrix has a complexity of $\mathcal{O}(N\cdot 4^m(pq)^2)$. Moreover, the matrix-vector multiplication also has a complexity of $\mathcal{O}(N\cdot 4^m(pq)^2)$ if its fast $\mathcal{H}^2$-variant is used. \end{theorem} \begin{remark} We stress that the introduced compression scheme has an intrinsic $\mathcal{H}^2$-structure, which is more efficient than the frequently used $\mathcal{H}$-matrix structure. Its efficiency is based on the fact that, for each admissible block $\pmb\lambda\times\pmb\lambda'$, only $q^2$ evaluations of the geometry and the kernel function are required to assemble the matrices ${\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,1}^{(\alpha,\beta)}$ and ${\pmb K}_{\pmb\lambda,\pmb\lambda^\prime,2}^{(\alpha,\beta)}$. The other required matrices from \eqref{eq:FMMlr} can be efficiently represented by recurrence relations from smaller matrices with tensor product structure such that assembly, storage and application {do not affect the asymptotic behaviour}, see \cite{Dolz_2016aa}. \end{remark} \subsection{Error Analysis of the Compression Scheme}\label{sec:FMMerror} The interpolation of the fundamental solution for the compression of the system matrix introduces an error in the system matrix and, thus, an error in the numerical solution. Since this error depends on the degree of the interpolation $q$, this section is dedicated to a suitable error analysis. The main application of the following theorem is to bound the approximation error of the bilinear form in a general form of Strang's first lemma \cite[Thm.~4.2.11]{SS11}.
A direct consequence is that the compression scheme is able to maintain the convergence rate predicted by Theorem~\ref{thm::quasioptimality} if the polynomial degree for the compression is properly chosen. \begin{theorem}[Error of the Bilinear Form]\label{thm::mutlipole} Let $\sigma > 0$ be arbitrary but fixed and denote by $m$ the number of uniform refinement steps of $\square$. Then, for the electric single layer operator ${\bb{\mathscr V}}_{\kappa,q}$, which results from an interpolation of degree \(q>0\) of the kernel function in every admissible block and the exact representation of the kernel in all other blocks, there holds \begin{align}\label{eq:bilinearerror} \big|\langle {\bb{\mathscr V}}_\kappa\pmb u,\pmb v\rangle_{\times}-\langle{\bb{\mathscr V}}_{\kappa,q} \pmb u, \pmb v\rangle_{\times}\big| \lesssim 2^{-m\sigma}\|\pmb u\|_{\pmb H^0(\div_{\Gamma},\Gamma)}\|\pmb v\|_{\pmb H^0(\div_{\Gamma},\Gamma)}, \end{align} provided that \(q\sim (\sigma+1)m\). \end{theorem} \begin{proof} The proof is analogous to the proof of \cite[Thm.~5.6]{Harbrecht_2013ab}, applied separately to both summands of the electric single layer operator. \end{proof} To apply the previous theorem in Strang's first lemma, an additional inverse estimate of the kind \[ \|\pmb u_h\|_{\pmb H^0(\div_{\Gamma},\Gamma)}\lesssim h^{-1/2}\|\pmb u_h\|_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}} \] on the trial spaces is required. For patchwise continuous spline spaces $\pmb\S_{\pmb p,\pmb\Xi}^1(\Gamma)$ we provide such an estimate in Lemma~\ref{lem:inverseestimate}, but we stress that the error analysis is also valid for other trial spaces that provide such an estimate. We summarize our error analysis in the following theorem, which is a consequence of the considerations in this section and \cite[Thm.~4.2.11]{SS11}. \begin{theorem}\label{thm:FMMeverything} The presented compression scheme maintains the existence and uniqueness of solutions of the numerical scheme.
Moreover, {there exists $q_0>0$ such that} the optimal convergence rate of Theorem~\ref{thm::quasioptimality} is maintained if one chooses $q\sim(s+5/2)m$ {and $q\geq q_0$}. \end{theorem} \subsection{Fundamental Notions}\label{sec::subsec::iga} We review the basic notions of isogeometric analysis, restricting ourselves to spaces constructed via locally quasi uniform $p$-open knot vectors, as required by the theory presented in \cite{Beirao-da-Veiga_2014aa,Buffa_2018aa}. \begin{definition}[B-Splines, \cite{Beirao-da-Veiga_2014aa}]\label{def::splines} Fix $p$ and $k$ such that $0\leq p< k$. A \emph{locally quasi uniform $p$-open knot vector} is given by a vector \begin{align*} \Xi = \big[{\xi_0 = \cdots =\xi_{p}}\leq \cdots \leq{\xi_{k}=\cdots =\xi_{k+p}}\big]\in[0,1]^{k+p+1} \end{align*} with $\xi_0 = 0$ and $\xi_{k+p}=1$, for which there exists a constant $\theta\geq 1$ such that $\theta^{-1}\leq h_j\cdot h_{j+1}^{-1} \leq \theta$ for all $p\leq j < k$, where $h_j\coloneqq \xi_{j+1}-\xi_{j}$ for all $\xi_j,\xi_{j+1}\in \Xi.$ The B-spline basis $ \lbrace b_j^p \rbrace_{0\leq j< k}$ is now defined by recursion as \begin{align*} b_j^p(x) & =\begin{cases} \chi_{[\xi_j,\xi_{j+1})}(x)&\text{ if }p=0,\\[8pt] \frac{x-\xi_j}{\xi_{j+p}-\xi_j}b_j^{p-1}(x) +\frac{\xi_{j+p+1}-x}{\xi_{j+p+1}-\xi_{j+1}}b_{j+1}^{p-1}(x) & \text{ else,} \end{cases} \end{align*} where $\chi_M$ denotes the indicator function of a set $M$. Moreover, we define the spline space $S^p(\Xi)\coloneqq\operatorname{span}(\lbrace b_j^p\rbrace_{j <k}).$ \end{definition} To obtain spline spaces in two spatial dimensions, define, for a tuple $\pmb\Xi =(\Xi_1,$ $\Xi_2)$ of knot vectors and polynomial degrees $\pmb p=(p_1,p_2)$, the spaces $S^{\pmb p}(\pmb \Xi)\coloneqq S^{p_1}(\Xi_1)\otimes S^{p_2}(\Xi_2)$.
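The recursion of Definition~\ref{def::splines} translates directly into code. The following self-contained Python sketch evaluates $b_j^p$ with the standard convention that terms with vanishing knot spans are dropped; the example knot vector is our own choice for illustration.

```python
# Cox-de Boor recursion from the definition of the B-spline basis; divisions
# by empty knot spans are treated as zero (the standard 0/0 := 0 convention).
def bspline(j, p, xi, x):
    if p == 0:
        return 1.0 if xi[j] <= x < xi[j + 1] else 0.0
    val = 0.0
    if xi[j + p] > xi[j]:
        val += (x - xi[j]) / (xi[j + p] - xi[j]) * bspline(j, p - 1, xi, x)
    if xi[j + p + 1] > xi[j + 1]:
        val += (xi[j + p + 1] - x) / (xi[j + p + 1] - xi[j + 1]) * bspline(j + 1, p - 1, xi, x)
    return val

# Example: p = 2, k = 4, i.e. a p-open knot vector with k + p + 1 = 7 entries.
p, k = 2, 4
xi = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]

def basis_at(x):
    return [bspline(j, p, xi, x) for j in range(k)]
```

On $[0,1)$ the basis is non-negative and forms a partition of unity; both properties can be checked numerically at arbitrary sample points.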
{For simplicity, we will assume {all interior knots} to have the same multiplicity.} Given knot vectors $\Xi_1,$ $\Xi_2$ with knots {$\xi_{i}^k < \xi_{i+1}^k$ and $\xi_{i}^k,\xi^k_{i+1}\in \Xi_k$ for both $k=1,2$, sets of the form $[\xi^1_{j_1},\xi^1_{j_1+1}]\times[\xi^2_{j_2},\xi^2_{j_2+1}]$} will be called \emph{elements}. We reserve the letter $h$ for the maximal diameter of all elements. { We remark that this tensor product construction does not allow for local refinement. Approaches to overcome this limitation have been suggested, see e.g.~\cite{Evans_2014aa} and the sources cited therein, but are beyond the scope of this article. } Let $\square\coloneqq [0,1]^2$ denote the unit square. As usual in the framework of isogeometric analysis, the geometry $\Gamma=\bigcup_{j\leq N}\Gamma_j$ will be given as a family of mappings \begin{align} \pmb F_j\colon \square \to \Gamma_j \subset {\mathbb R}^3,\label{def::geom} \end{align} which we will refer to as \emph{parametrization}. These mappings will be given by NURBS mappings, i.e.,~by mappings with a representation \begin{align*} \pmb F_j(x,y)\coloneqq \sum_{0\leq j_1<k_1}\sum_{0\leq j_2<k_2}\frac{\pmb c_{j_1,j_2} b_{j_1}^{p_1}(x) b_{j_2}^{p_2}(y) w_{j_1,j_2}}{ \sum_{i_1=0}^{k_1-1}\sum_{i_2=0}^{k_2-1} b_{i_1}^{p_1}(x) b_{i_2}^{p_2}(y) w_{i_1,i_2}}, \end{align*} for control points $\pmb c_{j_1,j_2}\in {\mathbb R}^3$ and weights $w_{j_1,j_2}>0.$ For further concepts and the algorithmic realization of NURBS we refer to \cite{Piegl_1997aa}. {We assume $\Gamma = \bigcup_{j\leq N} \Gamma_j$ to be {the} piecewise smooth boundary of a simply connected Lipschitz domain.} Moreover, we assume any mapping of the parametrization to be non-singular and invertible. On any interface $\Gamma_j\cap \Gamma_i \neq \emptyset$ we require the involved mappings to coincide, i.e.,~$\pmb F_j(\cdot,1) \equiv \pmb F_i(\cdot,0)$ must be satisfied up to orientation of the reference domain.
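As a concrete instance of the NURBS representation above, the following self-contained Python sketch evaluates a quarter-cylinder patch. The knot vectors, control points, and weights (with weight $\sqrt{2}/2$ for the middle arc control point) are the textbook data for an exact quarter circle and are our own example, not one of the geometries used later.

```python
import math

# Cox-de Boor evaluation of b_j^p (0/0 terms dropped, as is standard)
def bspline(j, p, xi, x):
    if p == 0:
        return 1.0 if xi[j] <= x < xi[j + 1] else 0.0
    val = 0.0
    if xi[j + p] > xi[j]:
        val += (x - xi[j]) / (xi[j + p] - xi[j]) * bspline(j, p - 1, xi, x)
    if xi[j + p + 1] > xi[j + 1]:
        val += (xi[j + p + 1] - x) / (xi[j + p + 1] - xi[j + 1]) * bspline(j + 1, p - 1, xi, x)
    return val

# Quarter cylinder: rational quadratic arc in x, linear extrusion in y.
xi1, p1 = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0], 2   # k1 = 3 basis functions
xi2, p2 = [0.0, 0.0, 1.0, 1.0], 1             # k2 = 2 basis functions
ctrl = {(0, 0): (1, 0, 0), (1, 0): (1, 1, 0), (2, 0): (0, 1, 0),
        (0, 1): (1, 0, 1), (1, 1): (1, 1, 1), (2, 1): (0, 1, 1)}
w = {(j1, j2): (math.sqrt(2) / 2 if j1 == 1 else 1.0)
     for j1 in range(3) for j2 in range(2)}

def F(x, y):
    # rational (NURBS) combination of the weighted control points
    num, den = [0.0, 0.0, 0.0], 0.0
    for (j1, j2), c in ctrl.items():
        b = bspline(j1, p1, xi1, x) * bspline(j2, p2, xi2, y) * w[(j1, j2)]
        den += b
        num = [n + b * ci for n, ci in zip(num, c)]
    return tuple(n / den for n in num)
```

All evaluations lie exactly on the unit cylinder, reflecting that NURBS, unlike plain B-splines, represent conic sections exactly.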
We remark that, as long as the assumptions stated above are fulfilled, the description of the geometry is independent of the analysis that will follow, i.e., one could choose sufficiently regular mappings that are not representable via NURBS, for example, mappings containing trigonometric functions. \subsection{A Conforming Discretization}\label{sec::subsec::conforming} Let $\pmb F_j\colon \square\to \Gamma_j$ be a mapping of the parametrization of $\Gamma$. Defining the \emph{surface measure} $\tau$ via \begin{align} \tau(\pmb x)\coloneqq \norm{\partial_{x}\pmb F_j( {\pmb x})\times \partial_{y}\pmb F_j( {\pmb x})}_{\mathbb{R}^3},\quad \pmb x\in \square, \end{align} the geometry transformations required for an analysis of isogeometric boundary element methods are of the form \begin{align*} \iota_0(\pmb F_j)(f_0)(\pmb x) \coloneqq {}&{}(f_0\circ\pmb F_j)(\pmb x),&&\pmb x\in\square,\\ \iota_1(\pmb F_j)(\pmb f_1)(\pmb x) \coloneqq {}&{}\big(\tau \cdot (d\pmb F_j^\intercal)^{-1} (\pmb f_1\circ \pmb F_j)\big)(\pmb x),&&\pmb x\in\square,\\ \iota_2(\pmb F_j)(f_2)(\pmb x) \coloneqq {}&{}(\tau \cdot (f_2\circ \pmb F_j))(\pmb x),&&\pmb x\in\square. \intertext{ {We note that, since $d\pmb F_j^\intercal$ is a rectangular matrix, $(d\pmb F_j^\intercal)^{-1}$ is a common abuse of notation, whose meaning is discussed for example in \cite[Chapter~5.4]{Peterson_2006aa}. 
In short, under mild assumptions on the geometry mapping, $\iota_1$ has to be understood in the sense of mapping a tangential vector field on a two-dimensional manifold embedded into $\mathbb R^3$ to the tangential field of the two-dimensional reference domain.} For implementations, this technicality can usually be omitted, since the operations on the reference domain merely require the push-forwards} (\iota_0(\pmb F_j))^{-1}(f_0)(\pmb x)={}&{} (f_0\circ \pmb F_j^{-1})(\pmb x),&&\pmb x\in\Gamma_j,\\ (\iota_1(\pmb F_j))^{-1}(\pmb f_1)(\pmb x)={}&{} \left(\tau^{-1} \cdot (d\pmb F_j)^\intercal( \pmb f_1\circ \pmb F_j^{-1})\right)(\pmb x),&&\pmb x\in\Gamma_j,\\ (\iota_2(\pmb F_j))^{-1}(f_2)(\pmb x)={}&{} \left(\tau^{-1} \cdot (f_2\circ \pmb F_j^{-1})\right)(\pmb x),&&\pmb x\in\Gamma_j, \end{align*} where the computation of the inverse $\pmb F_j^{-1}$ is not required, since all discrete entities are known and constructed w.r.t.~the reference coordinates. An important property of these geometry transformations is that the following diagram $$ \begin{tikzcd}[row sep = 3em,column sep = 1.3cm] H^1(\Gamma_j)\ar{d}[description]{\iota_0}\ar{r}[description]{\bcurl_\Gamma}& \pmb H(\div_\Gamma,\Gamma_j)\ar{r}[description]{\div_\Gamma}\ar{d}[description]{\iota_1}& L^2(\Gamma_j)\ar{d}[description]{\iota_2}\\ H^{1}(\square)\ar{r}[description]{\bcurl} & \pmb H(\div,\square)\ar{r}[description]{\div} & L^{2}(\square) \end{tikzcd}$$ commutes \cite{Buffa_2018aa,Peterson_1995aa}. Thus, a conforming spline basis of $H^1(\square)$ automatically yields a conforming finite-dimensional discretization of the entire diagram.
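The role of the surface measure $\tau$ in the transformation $\iota_2$ can be checked numerically. The sketch below uses a quarter-cylinder patch $\pmb F(x,y)=(\cos(\pi x/2),\sin(\pi x/2),y)$ as an assumed example geometry and verifies that integrating $\iota_2(\pmb F)(1)=\tau$ over $\square$ reproduces the surface area $\pi/2$ of the patch.

```python
import numpy as np

# Assumed example geometry: a quarter cylinder of radius 1 and height 1.
def F(x, y):
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2), y])

def tau(x, y, h=1e-6):
    # surface measure |d_x F x d_y F| via central finite differences
    dFx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    dFy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return np.linalg.norm(np.cross(dFx, dFy))

# Integrate iota_2(F)(1) = tau over the reference square with a tensor
# Gauss-Legendre rule; this equals the surface integral of 1 over Gamma_j.
nodes, weights = np.polynomial.legendre.leggauss(8)
nodes, weights = (nodes + 1) / 2, weights / 2      # map [-1, 1] -> [0, 1]
area = sum(wx * wy * tau(x, y)
           for x, wx in zip(nodes, weights) for y, wy in zip(nodes, weights))
```

Here $\tau\equiv\pi/2$, since the columns of $d\pmb F$ are orthogonal with lengths $\pi/2$ and $1$, so the computed area matches the exact value $\pi/2$ of the quarter cylinder.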
More precisely, given polynomial degrees $p_1,p_2>0$, the mapping properties of the differential operators yield the conforming spline spaces \begin{align*} \S^0_{\pmb p,\pmb\Xi}(\square)\coloneqq {}&{} S^{p_1,p_2}(\Xi_1,\Xi_2)&&\subset H^1(\square),\\ \pmb \S^1_{\pmb p,\pmb\Xi}(\square)\coloneqq {}&{} S^{p_1,p_2-1}(\Xi_1,\Xi_2') \times S^{p_1-1,p_2}(\Xi_1',\Xi_2)&&\subset \pmb H^0(\div,\square),\\ \S^2_{\pmb p,\pmb\Xi}(\square)\coloneqq {}&{} S^{p_1-1,p_2-1}(\Xi_1',\Xi_2')&&\subset L^2(\square), \end{align*} together with their mapped counterparts on the surface. For the multipatch boundary $\Gamma = \cup_{j\leq N} \Gamma_j$ let $\pmb \Xi \coloneqq (\pmb \Xi_j)_{j\leq N}$ be an $N$-tuple of pairs of knot vectors as in Definition \ref{def::splines}. Let $\pmb p=(\pmb p_j)_{j\leq N}$ be an $N$-tuple of pairs of integers $\pmb p_j=\big(p_1^{(j)},p_2^{(j)}\big)$, corresponding to polynomial degrees for each patch $\Gamma_j$. Then we define the \emph{spline complex} on the boundary $\Gamma$ via \begin{align*} \S^0_{\pmb p,\pmb\Xi}(\Gamma)\coloneqq {}&{} \left\lbrace f\in H^{1/2}(\Gamma)\colon \iota_0(\pmb F_j)(f|_{\Gamma_j}) \in \S^0_{\pmb p_j,\pmb\Xi_j}(\square)\text{ for all }j\leq N\right\rbrace,\\ \pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)\coloneqq {}&{} \left\lbrace \pmb f\in \pmb H_\times^{{-1/2}}(\div_\Gamma,\Gamma)\colon \iota_1(\pmb F_j)(\pmb f|_{\Gamma_j}) \in \pmb \S_{\pmb p_j,\pmb\Xi_j}^1(\square)\text{ for all }j\leq N\right\rbrace,\\ \S^2_{\pmb p,\pmb\Xi}(\Gamma)\coloneqq {}&{} \left\lbrace f\in H^{-1/2}(\Gamma)\colon \iota_2(\pmb F_j)(f|_{\Gamma_j}) \in \S^2_{\pmb p_j,\pmb\Xi_j}(\square)\text{ for all }j\leq N\right\rbrace. \end{align*} Throughout this paper, we will denote by $p$ the minimal polynomial degree used for the construction of $\S^0_{\pmb p,\pmb\Xi}(\Gamma)$.
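The dimensions of the spaces above follow directly from the tensor-product construction, since differentiating reduces the univariate dimension by one. As a small consistency check (with $n_i=\dim S^{p_i}(\Xi_i)$, our own shorthand), the alternating sum of dimensions along the sequence on the contractible domain $\square$ equals $1$, consistent with the one-dimensional kernel of $\bcurl$ given by the constants.

```python
# Dimension count for the spline complex on the unit square; n1, n2 are the
# dimensions of the univariate spaces S^{p_i}(Xi_i).
def complex_dims(n1, n2):
    s0 = n1 * n2                          # S^0: full tensor product
    s1 = n1 * (n2 - 1) + (n1 - 1) * n2    # S^1: two mixed-degree components
    s2 = (n1 - 1) * (n2 - 1)              # S^2: both directions differentiated
    return s0, s1, s2

def euler_characteristic(n1, n2):
    # alternating sum of dimensions along the discrete sequence
    s0, s1, s2 = complex_dims(n1, n2)
    return s0 - s1 + s2
```

The alternating sum is $1$ for every choice of $n_1,n_2$, independent of degrees and knot vectors.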
\begin{remark} In the spirit of the isogeometric paradigm, degrees and knot vectors of the discrete B-spline spaces can be chosen to match the properties of the geometry discretization \cite{Hughes_2005aa}. Note, however, that there is no theoretical requirement for $\pmb p$ and $\pmb \Xi$ to match the discretization of the geometry if we assume sufficient regularity of the parametrization. This fact will be used later on to benchmark different orders of discretization on the same geometry. \end{remark} By definition of the spline spaces above, the sequence \begin{equation} \begin{tikzcd} \S^0_{\pmb p,\pmb\Xi}(\Gamma)\ar{r}{\bcurl_\Gamma}& \pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)\ar{r}{\div_\Gamma}& \S^2_{\pmb p,\pmb\Xi}(\Gamma) \end{tikzcd}\label{spline::sequence} \end{equation} is a conforming multipatch discretization of the two-dimensional sequence \begin{equation} \begin{tikzcd} H^{1/2}(\Gamma)\ar{r}{\bcurl_\Gamma}& {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{} \ar{r}{\div_\Gamma}& H^{-1/2}(\Gamma). \end{tikzcd} \end{equation} We refer to \cite{Buffa_2018aa} for an in-depth discussion on how these spline spaces on the boundary are connected with the B-spline discretization of the three-dimensional de Rham sequence. Replacing $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ by $\pmb\S^1_{\pmb p, \pmb \Xi}(\Gamma)\subset {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ yields the discrete counterpart of the variational problem \eqref{problem::variational::cont}, given as follows. \begin{problem}[Discrete Problem] Find $\pmb w_h\in\pmb \S^1_{\pmb p, \pmb \Xi}(\Gamma)$ such that \begin{align} \langle{\bb{\mathscr V}}_\kappa\pmb w_h, \pmb \mu_h\rangle_\times=-\langle\pmb \gamma_t^+\pmb e_i,\pmb\mu_h\rangle_\times,\label{problem::variational::disc} \end{align} for all $\pmb\mu_h\in \pmb \S^1_{\pmb p, \pmb \Xi}(\Gamma)$.
\end{problem} Given a basis $\{\pmb\varphi_i\}_{i=1}^{N_h}$ of $\pmb \S_{\pmb p,\pmb\Xi}^1(\Gamma)$, where $N_h$ denotes its dimension, this yields the linear system \begin{align}\label{eq:linsys} {\bb{\mathscr V}}_{\kappa,h}\pmb w_h=-\pmb f_h, \end{align} where the right-hand side $\pmb f_h$ is given by $\big[\pmb f_h\big]_j=\langle\pmb\gamma_t^+\pmb e_i,\pmb\varphi_j\rangle_\times$ and the system matrix ${\bb{\mathscr V}}_{\kappa,h}$ by \begin{align}\label{eq:MSLtested} \begin{aligned} \big[{\bb{\mathscr V}}_{\kappa,h}\big]_{i,j} ={}& \langle{\bb{\mathscr V}}_\kappa\pmb\varphi_j, \pmb\varphi_i\rangle_\times\\ ={}& \int_\Gamma\int_\Gamma G_{\kappa}(\pmb x-\pmb y)\pmb\varphi_j(\pmb x)\cdot\pmb\varphi_i(\pmb y)\,\mathrm{d}\sigma_{\pmb y}\,\mathrm{d}\sigma_{\pmb x} \\ &\qquad-\frac{1}{\kappa^2}\int_\Gamma\int_\Gamma G_{\kappa}(\pmb x-\pmb y)\div_\Gamma\pmb\varphi_j(\pmb x)\div_\Gamma\pmb\varphi_i(\pmb y)\,\mathrm{d}\sigma_{\pmb y}\,\mathrm{d}\sigma_{\pmb x}, \end{aligned} \end{align} see also \cite{Buffa_2003ab}. We remark that the system matrix is symmetric, but not Hermitian. \subsection{Approximation Properties and Discrete Inf-Sup Condition}\label{sec::subsec::approx} The conforming spline spaces introduced in the previous section provide approximation results of optimal order in $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ w.r.t.~patchwise regularity. To make this precise, we define, for $s\geq 0$, the patchwise norms \[ \norm{\pmb f}_{\tilde {\pmb H}^s(\Gamma)} \coloneqq \sum_{j\leq N} \norm{\pmb f}_{\pmb H^s(\Gamma_j)},\qquad \norm{\pmb g}_{\tilde {\pmb H}^s(\div_\Gamma,\Gamma)} \coloneqq \sum_{j\leq N} \norm{\pmb g}_{\pmb H^s(\div_\Gamma,\Gamma_j)}, \] for all functions $\pmb f\in \pmb L^2(\Gamma)$ and $\pmb g \in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ for which these expressions are well-defined. The corresponding spaces of higher patchwise regularity are defined canonically as subspaces of $\pmb L^2(\Gamma)$ and $\pmb H(\div_\Gamma,\Gamma)$ with finite norm, see \cite{Buffa_2018aa}.
\begin{theorem}[Approximation Properties of $\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)$, \cite{Buffa_2018aa}]\label{thm::hdiv} Let $\pmb f\in \tilde{\pmb H}^s(\div_\Gamma,\Gamma)$, $0\leq s\leq p$ and denote by $\pmb f_h$ the $\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)$-orthogonal projection of $\pmb f$ onto $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)$. Then one finds \begin{align*} \norm{\pmb f-\pmb f_h}_{\pmb H_\times^{-1/2}(\div_\Gamma,\Gamma)} \lesssim h^{1/2+s} \norm{\pmb f}_{\tilde {\pmb H}^s(\div_\Gamma,\Gamma)}. \end{align*} \end{theorem} According to the classical theory of the electric field integral equation the following holds. \begin{lemma}[Criteria for a Stable Discretization, {{\cite[Sec.~3]{Bespalov_2010aa}}, \cite[Prop.~4.1]{Buffa_2003aa}}]\label{theLemmaThatGivesInfSup} Under the assumptions that \begin{enumerate} \item there exists a continuous splitting $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{} = \pmb W\oplus \pmb V$ such that the bilinear form induced by the variational formulation \eqref{problem::variational::cont} is stable and coercive on $\pmb V\times \pmb V$ and $\pmb W\times \pmb W$, and compact on $\pmb V\times \pmb W$ and $\pmb W\times \pmb V$, \item $\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)$ can be decomposed into a sum $\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)\coloneqq \pmb W_h \oplus \pmb V_h$ of closed subspaces of $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$, \item $\pmb W_h$ and $\pmb V_h$ are stable under complex conjugation, and \item it holds that $\pmb W_h\subseteq \pmb W$, as well as the so-called \emph{gap-property} \begin{align}\label{eq::gap-property} \sup_{\pmb v_h\in \pmb V_{h}}\inf_{\pmb v\in \pmb V}\frac{\norm{\pmb v-\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}{\norm{\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}\stackrel{h\rightarrow 0}{\longrightarrow} 0, \end{align} \end{enumerate} the discrete problem \eqref{problem::variational::disc} enjoys $\inf$-$\sup$-stability. 
\end{lemma} The continuous splitting of $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ has been discussed in the literature, see, e.g., \cite{Buffa_2003ab}, and is required for proving the $\inf$-$\sup$-stability of \eqref{problem::variational::cont}. One of the most concise (although not self-contained) constructions of said splitting and of the discrete inf-sup condition along this scheme is due to \cite{Bespalov_2010aa}, which we will follow closely, starting with the introduction of some necessary operators. The theory behind them goes back to \cite{Hiptmair_2002aa}. \begin{lemma}[Regularising Projection]\label{lem::regularizingprojection} For compact domains $\Omega$ with Lipschitz boundary there exists a continuous projection $\mathsf R\colon {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}\to{\pmb H_\times^{1/2}(\Gamma)}$ such that \begin{align}\label{eq:diffRcomm} (\div_\Gamma \circ \mathsf R)( \pmb u) &=\div_\Gamma \pmb u, \end{align} {for all $\pmb u\in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$.} \end{lemma} \begin{proof} A suitable operator is defined in \cite[Lem.~3.1]{Bespalov_2010aa}; we briefly recap its construction for later reference. {For any $\pmb v\in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$, the solution of the Neumann problem \begin{align}\label{eq:neumannprob} \begin{aligned} \Delta w ={}&0&&\text{in}~\Omega,\\ \pmb\grad\, w\cdot\pmb n ={}&\div_{\Gamma}\pmb v&&\text{on}~\Gamma, \end{aligned} \end{align} defines a field $\pmb\grad\, w\in\pmb H^0(\div 0,\Omega)$ with $\langle\pmb\grad\, w\cdot\pmb n,1\rangle_{L^2(\Gamma)}=0$.
Using a continuous lifting operator $\mathsf{L}\colon\pmb H^0(\div 0,\Omega)\to\pmb H^1(\Omega)$ with $\pmb\curl\mathsf{L}\pmb u=\pmb u$ for all $\pmb u\in\pmb H^0(\div 0,\Omega)$ satisfying $\langle\pmb u\cdot\pmb n,1\rangle_{L^2(\Gamma)}=0$, see, e.g., \cite[Theorem 3.4]{GR86}, we finally arrive at $\mathsf{R}\pmb v\coloneqq\pmb\gamma_t^-\mathsf{L}\pmb\grad\, w\in\pmb H^{1/2}_\times(\Gamma)$ and \eqref{eq:diffRcomm} follows. Since $\pmb\grad\, w$ depends continuously on $\div_\Gamma\pmb v$ and $\pmb\gamma_t^-\colon\pmb H^1(\Omega)\to\pmb H^{1/2}_\times(\Gamma)$ is also continuous, the continuity of $\mathsf{R}$ follows. } \end{proof} Indeed, one can show the continuous $\inf$-$\sup$-condition via the splitting $\pmb V\coloneqq\mathsf {R}\big( {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}\big)$ and $\pmb W\coloneqq(\operatorname{Id}-\mathsf R)\big( {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}\big)$. The construction of a corresponding discrete splitting relies on the multipatch interpolation operators introduced in \cite{Buffa_2018aa}, given by \begin{align*} \tilde\Pi^0_\Gamma&\colon &\hspace{-2.8cm}H^{1/2}(\Gamma)\supseteq \mathcal{D}\big(\tilde\Pi^0_\Gamma\big)\to{}&{}\S_{\pmb p,\pmb\Xi}^0(\Gamma),\\ \pmb{\tilde\Pi}^1_\Gamma&\colon&\hspace{-2.8cm} {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}\supseteq\mathcal{D}\big(\pmb{\tilde\Pi}^1_\Gamma\big) \to{}&{}\pmb\S_{\pmb p,\pmb\Xi}^1(\Gamma),\\ \tilde\Pi^2_\Gamma&\colon &\hspace{-2.8cm}H^{-1/2}(\Gamma)\supseteq \mathcal{D}\big(\tilde\Pi^2_\Gamma\big)\to{}&{}\S_{\pmb p,\pmb\Xi}^2(\Gamma), \end{align*} with domains $\mathcal{D}(\,\cdot\,)$. Note that these projections commute with the surface differential operators $\pmb \curl_\Gamma$ and $\div_\Gamma,$ i.e.,~one finds \begin{align} \begin{aligned} (\pmb \curl_\Gamma \circ \tilde\Pi^0_\Gamma )(f) &= (\pmb {\tilde\Pi}^1_\Gamma\circ\pmb\curl_\Gamma)(f),\\ (\div_\Gamma \circ \,\pmb {\tilde\Pi}^1_\Gamma )(\pmb f) &= (\tilde\Pi^2_\Gamma\circ\div_\Gamma) (\pmb f). 
\end{aligned} \label{eq::commutingpis} \end{align} Among other estimates about these interpolation operators, \cite{Buffa_2018aa} provides the following. \begin{lemma}\label{lemma::interpolationlemma} Let $\pmb f\in \pmb{\tilde H}^s(\Gamma)$ for $1\leq s\leq p$. Then it holds that \begin{align*} \norm{\pmb f-\pmb{\tilde\Pi}^1_\Gamma \pmb f}_{\pmb L^2(\Gamma)}&\lesssim h^s\norm{\pmb f}_{\pmb {\tilde H}^s(\Gamma)}. \end{align*} \end{lemma} { Unlike in \cite{Bespalov_2010aa}, we cannot use the operator $\mathsf{R}$ to introduce a discrete splitting, since the image of $\mathsf{R}$ is not patchwise in $\pmb H^1$, which would be required for it to be contained in $\mathcal{D}\big(\pmb{\tilde\Pi}^1_\Gamma\big)$. Instead, we have to introduce another regularising projection. \begin{lemma}[Regularising Projection for Higher Regularity]\label{lem::regularizingprojectionspline} For compact domains $\Omega$ with Lipschitz boundary there exists a continuous projection $\mathsf R_0\colon\pmb H^0(\div_{\Gamma},\Gamma)\to\tilde{\pmb H}^1(\Gamma)$ such that $(\div_\Gamma \circ \mathsf R_0)( \pmb u) =\div_\Gamma \pmb u$ for all {$\pmb u\in\pmb H^0(\div_{\Gamma},\Gamma)$}. \end{lemma} \begin{proof} The proof is analogous to that of Lemma~\ref{lem::regularizingprojection}. First, we remark that $\div_{\Gamma}\pmb u\in L^2(\Gamma)$. Thus, \eqref{eq:neumannprob} yields a field $\pmb\grad\, w\in \pmb H^{1/2}(\div 0,\Omega)$ with $\langle\pmb\grad\, w\cdot\pmb n,1\rangle_{L^2(\Gamma)}=0$. \cite[Remark 3.12]{GR86} shows that there is a continuous lifting operator $\mathsf{L}_{1/2}\colon\pmb H^{1/2}(\div 0,\Omega)\to\pmb H^{3/2}(\Omega)$ with $\pmb\curl\mathsf{L}_{1/2}\pmb u=\pmb u$ for all $\pmb u\in\pmb H^{1/2}(\div 0,\Omega)$ satisfying $\langle\pmb u\cdot\pmb n,1\rangle_{L^2(\Gamma)}=0$. This yields the assertion by patchwise application of the rotated tangential trace.
We remark that the continuity of $\mathsf{L}_{1/2}$ follows by noting that the construction of the extensions in \cite[Theorem 3.4]{GR86} and \cite[Corollary 3.3]{GR86} depends continuously on the input data. The interpolation argument of \cite[Remark 3.12]{GR86} then yields the continuity assertion, since the images of the procedures in \cite[Theorem 3.4]{GR86} and \cite[Corollary 3.3]{GR86} coincide for equal input data in terms of their respective equivalence classes. \end{proof} } {In analogy to the continuous setting, by} Lemma \ref{lem::regularizingprojectionspline} and the construction and properties of the quasi-interpolation operators constructed in \cite{Buffa_2018aa}, the definition of the discrete splitting via \begin{align*} \pmb V_h\coloneqq (\pmb{\tilde\Pi}_\Gamma^1\circ\, {\mathsf R_0})\big(\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)\big),\qquad \pmb W_h\coloneqq (\operatorname{Id} - \pmb{\tilde\Pi}_\Gamma^1\circ{\mathsf R_0})\big(\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)\big), \end{align*} {is well defined and} provides a suitable candidate to fulfill the assumptions of Lemma~\ref{theLemmaThatGivesInfSup}. { \begin{remark}\label{rem::whinw} The construction of both $\mathsf R$ and $\mathsf R_0$ makes it clear that the kernel of the respective operator consists exactly of the divergence-free functions. Thus, it follows that $\pmb W_h\subseteq \pmb W$ holds, compare \cite[Eq.~3.5]{Bespalov_2010aa}. \end{remark} } We are now ready to provide a statement about the $\inf$-$\sup$-stability of the discretized EFIE. \begin{theorem}\label{lem::disc_inf-sup} {The discrete problem \eqref{problem::variational::disc} enjoys $\inf$-$\sup$-stability.} \end{theorem} \begin{proof} {First, we consider the case of $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)\subset \pmb H^{1/2}_\times(\Gamma),$ i.e., when $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)$ is patchwise continuous.
Due to Remark \ref{rem::whinw}, it remains to check the gap property \eqref{eq::gap-property} for $\pmb V$ and $\pmb V_h$.} We write \begin{align*} \sup_{\pmb v_h\in \pmb V_{h}}\inf_{\pmb v\in \pmb V}\frac{\norm{\pmb v-\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}{\norm{\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}} \lesssim{}&{}\sup_{\pmb v_h\in \pmb V_{h}}\frac{\norm{{\mathsf R_0} \pmb v_h-\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}{\norm{\pmb v_h}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}\\ ={}&{} \sup_{\pmb v_h\in \pmb V_{h}}\frac{\norm{({\mathsf R_0} -\pmb{\tilde\Pi}^1_\Gamma \circ {\mathsf R_0})(\pmb v_h)}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}}{\norm{\pmb v_h}_ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}. \end{align*} Note that the last equality holds because one can show that $\tilde{\pmb\Pi}^1_\Gamma \circ {\mathsf R_0}$ is a projection, {as done in \cite[Sec.~6]{Bespalov_2010aa} for $\mathsf R$.} Thus, it holds that \[ (\tilde{\pmb\Pi}^1_\Gamma \circ {\mathsf R_0}) (\pmb V_h )= \pmb V_h \coloneqq (\tilde{\pmb\Pi}^1_\Gamma \circ {\mathsf R_0})(\pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)). 
\] {Since {the canonical embedding $\pmb H^0(\div_{\Gamma},\Gamma)\hookrightarrow {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ is continuous}, we arrive at $$ \norm{(\mathsf R_0 -\pmb{\tilde\Pi}^1_\Gamma \circ \mathsf R_0)(\pmb v_h)}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}\lesssim \norm{(\mathsf R_0 -\pmb{\tilde\Pi}^1_\Gamma \circ \mathsf R_0)(\pmb v_h)}_{\pmb H^0(\div_\Gamma,\Gamma)}.$$ By the divergence preserving property of $\mathsf R_0$ and the fact that the interpolation operators are projections which commute with the surface differential operators, we can apply Lemma~\ref{lemma::interpolationlemma}, arriving at \[ \norm{(\mathsf R_0 -\pmb{\tilde\Pi}^1_\Gamma \circ \mathsf R_0)(\pmb v_h)}_{\pmb H^0(\div_\Gamma,\Gamma)} = \norm{(\mathsf R_0 -\pmb{\tilde\Pi}^1_\Gamma \circ \mathsf R_0)(\pmb v_h)}_{\pmb L^2(\Gamma)} \lesssim h\norm{\mathsf R_0\pmb v_h}_{\pmb {\tilde H}^1(\Gamma)}. \] Note that the right-hand side is well defined due to $\mathsf R_0 (\pmb V_h) \subset \pmb {\tilde H}^1(\Gamma).$ Combining the above with the continuity of the operator $\mathsf R_0$ and inverse estimates, cf.~Lemma~\ref{lem:inverseestimate}, yields \[ \norm{(\mathsf R_0 -\pmb{\tilde\Pi}^1_\Gamma \circ \mathsf R_0)(\pmb v_h)}_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}} \lesssim h^{1/2}\|\pmb v_h\|_{ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}}, \] and thus the assertion for the case $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)\subset \pmb H^{1/2}_\times(\Gamma)$. The case $\pmb\S^1_{\pmb p,\pmb \Xi}(\Gamma)\not\subset \pmb H^{1/2}_\times(\Gamma)$, which is realized only for maximal knot repetition within the knot vectors, reduces to the classical theory of higher-order Raviart--Thomas elements on quadrilaterals, cf.~\cite{Buffa_2003aa,Zaglmayr_2006aa}. } \end{proof} Following classical Babu\v{s}ka--Brezzi theory \cite{Babuska_1969aa,Xu_2002aa}, we can finally combine Theorems \ref{thm::hdiv} and \ref{lem::disc_inf-sup} and arrive at the main result of this section.
\begin{theorem}[Discretization Error]\label{thm::quasioptimality} The solution to \eqref{problem::variational::disc} exists and is unique. Moreover, assuming $\pmb w\in\tilde{\pmb H}^{s}(\div_\Gamma,\Gamma)$ for some $0<s\leq p$, for the solutions $\pmb w\in \pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)$ and $\pmb w_h\in \pmb \S^1_{\pmb p,\pmb \Xi}(\Gamma)$ of \eqref{problem::variational::cont} and \eqref{problem::variational::disc} we find that \begin{align*} \norm{\pmb w-\pmb w_h}_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)}\lesssim h^{s+1/2}\norm{\pmb w}_{\tilde{\pmb H}^{s}(\div_\Gamma,\Gamma)}. \end{align*} \end{theorem} As a corollary, we can predict the expected convergence rates of the scattered electric field. As for scalar-valued problems, the convergence rate of the pointwise field error doubles. \begin{corollary}\label{cor:potentialconv} Let $\pmb x\in\Omega^c$ be fixed. Let $\pmb w$ be the solution to \eqref{problem::variational::cont} and $\pmb w_h$ the solution to the numerical problem \eqref{problem::variational::disc}. Then, for $\pmb e_s=\tilde{{\bb{\mathscr V}}}_\kappa\pmb w$ and $\pmb e_{s,h}=\tilde{{\bb{\mathscr V}}}_\kappa\pmb w_h,$ it holds that \[ \|\pmb e_s(\pmb x)-\pmb e_{s,h}(\pmb x)\|_{\mathbb{C}^3} \leq C(\pmb x)h^{2p+1} \|\pmb w\|_{\tilde{\pmb H}^{p}(\div_\Gamma,\Gamma)}, \] if $\pmb w$ and the solution of a suitable adjoint problem are sufficiently smooth. \end{corollary} \begin{proof} One readily verifies that $\big(\tilde{{\bb{\mathscr V}}}_{\kappa}\cdot\big)(\pmb x)$ is a linear and continuous functional on $ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ for given $\pmb x$.
Let then $\pmb\varphi^{(\pmb x)}$ be the solution of the adjoint problem of finding $\pmb \varphi^{(\pmb x)}\in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ such that \begin{align}\label{eq:adjointprob} \big\langle {\bb{\mathscr V}}_\kappa\pmb\xi, \pmb\varphi^{(\pmb x)}\big\rangle_\times = \big(\tilde{{\bb{\mathscr V}}}_{\kappa}\pmb \xi\big)(\pmb x) \end{align} holds for all $\pmb\xi \in {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$. Let $\pmb\varphi^{(\pmb x)}_h$ denote its discrete analogue. The assertion now follows by applying a standard argument, see also \cite[Theorem 4.2.14]{SS11}, to each component of the scattered field to obtain \[ \|\pmb e_s(\pmb x)-\pmb e_{s,h}(\pmb x)\|_{\mathbb{C}^3} \lesssim \norm{\pmb w-\pmb w_h}_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)} \norm{\pmb \varphi^{(\pmb x)}-\pmb \varphi^{(\pmb x)}_h}_{\pmb H^{-1/2}_\times(\div_\Gamma,\Gamma)}. \] The previous theorem yields the assertion with $C(\pmb x)=C\|\pmb\varphi^{(\pmb x)}\|_{\tilde{\pmb H}^{p}(\div_\Gamma,\Gamma)}$, if the solutions to \eqref{problem::variational::cont} and \eqref{eq:adjointprob} are smooth enough. \end{proof} \begin{remark} The proof applies to any linear and continuous output functional of $\pmb w$. Thus, similar error estimates hold also for other quantities of interest, for example for path integrals of the electric field, i.e., voltages, {or radar cross sections}, cf.~\cite{Jackson_1998aa}. \end{remark} \subsection{Mie Scattering}\label{numsec::miesphere} First, we test the implementation by computing the surface current induced by a plane wave on the unit sphere. Here, an analytic solution to the density is known in terms of a series expansion, see \cite{Weggler_2011aa} for a comprehensive account. Since the energy norm $\norm{\cdot}_ {\bb H^{-1/2}_\times(\div_\Gamma,\Gamma)}{}$ of the density is not computable explicitly, we compare the $\pmb L^2(\Gamma)$-error of the density instead.
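On uniformly halved meshes, the observed convergence order between two refinement levels is $\log_2(e_h/e_{h/2})$. A small Python helper (the function name is ours), applied for illustration to the $p=2$ Mie errors reported in Table \ref{tab::sphere}:

```python
import numpy as np

def observed_orders(errors):
    """Observed convergence orders log2(e_h / e_{h/2}) for errors
    computed on a sequence of uniformly halved meshes."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Mie L2(Gamma) errors for p = 2 from Table [tab::sphere],
# at h = 1/2, 1/4, 1/8, 1/16; the observed orders approach
# the expected value 2
orders = observed_orders([0.251, 0.052, 0.012, 0.0029])
```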
In accordance with the quasi-optimality of the approach, cf.~Theorem \ref{thm::quasioptimality}, a convergence of order $p$ is expected\footnote{We remark again that with $p$ we refer to the minimal polynomial degree utilized in the construction of the first space of the discrete sequence \eqref{spline::sequence}.}, and can indeed be observed, cf.~Figure \ref{num::sphere::mie}. \begin{figure}\centering \begin{subfigure}{.48\textwidth} \begin{tikzpicture}[scale = .65] \begin{axis}[ height = 7.5cm, ymode=log, xmode=log, x dir = reverse, xlabel=reference mesh size $h$, xtick={0.5,0.25,0.125,0.0625}, grid = major, xticklabels={$1/2$,$1/4$,$1/8$,$1/16$}, ylabel=$L^{2}$-error of density, legend style={at={(0.0,0.0)},anchor=south west}, legend columns=2, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=L21] {data/plot_sphere_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=L22] {data/plot_sphere_master}; \addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=L23] {data/plot_sphere_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=L24] {data/plot_sphere_master}; \addlegendentry{$p=4$} \addplot[line width = 1.5pt,blue,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h1] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^1)$} \addplot[line width = 1.5pt,red,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h2] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^2)$} \addplot[line width = 1.5pt,brown,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h3] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^3)$} \addplot[line width = 1.5pt,gray,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h4] {data/plot_sphere_master};
\addlegendentry{$\mathcal O(h^4)$} \end{axis} \end{tikzpicture} \subcaption{\footnotesize $L^2(\Gamma)$-error of the density.} \label{num::sphere::mie} \end{subfigure} \begin{subfigure}{.48\textwidth} \begin{tikzpicture}[scale = .65] \begin{axis}[ height = 7.5cm, ymode=log, xmode=log, ylabel=$\ell^{\infty}$-error of exterior solution, grid = major, xtick={.5, .25, .125, .0625}, xlabel=reference mesh size $h$, legend style={at={(0.0,0.0)},anchor=south west}, legend columns=2, x dir=reverse, xticklabels={$1/2$, $1/4$, $1/8$, $1/16$}, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=pot1] {data/plot_sphere_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=pot2] {data/plot_sphere_master}; \addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=pot3] {data/plot_sphere_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=h1,y=pot4] {data/plot_sphere_master}; \addlegendentry{$p=4$} \addplot[line width = 1.5pt,blue,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h3pot] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^3)$} \addplot[line width = 1.5pt,red,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h5pot] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^5)$} \addplot[line width = 1.5pt,brown,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h7pot] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^7)$} \addplot[line width = 1.5pt,gray,mark = none,dotted,mark size=3pt] table [trim cells=true,x=h1,y=h9pot] {data/plot_sphere_master}; \addlegendentry{$\mathcal O(h^9)$} \end{axis} \end{tikzpicture} \subcaption{\footnotesize $\ell^{\infty}$-error of electric field for $\mathbf{DP}_{(0,0.1,0.1)}$.} \label{num::sphere::man} \end{subfigure} \caption{Numerical 
examples on the unit sphere. Wave number $\kappa = 1$, parameters $q= 10$, and $\eta = 1.6$. The $\mathbf{DP}$-error refers to the maximum error obtained via the manufactured solution at a selection of 100 points on a sphere of radius 3 around the origin. GMRES was restarted every 1500 iterations, with a stopping criterion of $\norm{\pmb r}_2\leq 10^{-8}$.} \end{figure} \subsection{The Electric Field as a Quantity of Interest}\label{numsec::mieman} {Although the density obtained in an approach via the electric field integral equation admits a physical interpretation as the surface current, the main quantity of interest in scattering problems is the scattered electric field.} Unfortunately, a numerical implementation of the Mie series {for the computation of the electric field in open space} could not achieve a sufficiently high precision to compare with the high accuracies provided by our isogeometric method. Thus, in order to obtain a reference solution, we employ an approach via a manufactured solution, i.e., a function that fulfills the electric wave equation in $\Omega^c$ is used to generate the required Dirichlet data. By existence and uniqueness of the solution, cf.~\cite{Buffa_2003ab}, one can thus validate the numerical scheme. As such a manufactured solution, we utilize a simple Hertz dipole, for which one can check that it fulfills \eqref{problem::ext_scattering}. \begin{definition}[Hertz Dipole, {\cite[{p.~411, (9.18)}]{Jackson_1998aa}}] Let $\pmb x_0\in \Omega^c$. We define the function \begin{align*} \mathbf{DP}_{\pmb x_0}(\pmb x) \coloneqq e^{i\kappa r} \bigg( \frac{\kappa^2}{r} (\pmb n\times\pmb p_0)\times\pmb n + \bigg( \frac{1}{r^3}-\frac{i\kappa}{r^2} \bigg) \big( 3\pmb n(\pmb n\cdot\pmb p_0)-\pmb p_0\big) \bigg), \end{align*} with $r=\|\pmb x-\pmb x_0\|$, $\pmb p_0=(0,0.1,0.1)$, and $\pmb n=(\pmb x-\pmb x_0)/r$.
\end{definition} Given a reference solution, the errors illustrated in Figure \ref{num::sphere::man} validate the convergence rates of the electric field predicted by Corollary~\ref{cor:potentialconv}. The last data point of the highest order does not match the predicted order, but is, with an error around $10^{-12}$, close enough to machine accuracy to expect noticeable numerical inaccuracies. Since the sphere example is a classical benchmark test, we choose to publish detailed data about the computation, specifically in terms of time to solution, in Table \ref{tab::sphere}. There, one can also find detailed information about the machine used for computations. This may serve as a reference to compare the presented approach to other implementations, but we stress again that one has to act cautiously when comparing times, since the performance of the fast method depends on various parameters of the problem, in particular, the ratio of the wave number $\kappa$ to the size of the geometry. The input parameters of all computations are detailed in the captions of the corresponding figures. Also, we note that due to the efficient, element-based approach of the multipole method, the time spent for matrix assembly is negligible compared to the time required for the solution of the linear system, cf.~Table~\ref{tab::sphere}. \setlength{\tabcolsep}{11pt} \begin{table} \caption{Detailed data of the unit sphere example with $\kappa = 1$ and $\eta = 1.6$. Computed on a workstation with an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz; the code has been compiled with \texttt{g++ 5.4} with compile flags \texttt{-O3 -march=native -fopenmp}. The Mie error refers to the error w.r.t.~the analytic solution of the scattering problem described in Section \ref{numsec::miesphere}, while the $\mathbf{DP}$-error refers to the error obtained via the manufactured solution as described in Section \ref{numsec::mieman}, cf.~Figures \ref{num::sphere::mie} and \ref{num::sphere::man}.
Every 1500 iterations, the GMRES was restarted, with a stopping criterion of $\norm{\pmb r}_2\leq 10^{-8}$. Evaluation of the $\mathbf{DP}{}$-error was done on a set of points scattered across the sphere of radius 3.}\label{tab::sphere} \begin{footnotesize} \begin{center} \begin{tabular}{|r|llll|}\hline &\multicolumn{4}{|c|}{$p = 1$}\\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 & 0.0625\\ DOFs {(real, double prec.)}&96&384&1536& 6144 \\ matrix ass. (s)&0.02&0.14&1.14& 9.15 \\ solving (s)&0.02&0.32&3.4& 79.9 \\ GMRES iterations& 12&55&119 & 231 \\ $\mathbf{DP}$-error &0.0074&0.0009&0.0001 & 1.23e-05 \\ Mie error ($\pmb L^2$) &1.051&0.499&0.246& 0.122 \\ \hline & \multicolumn{4}{|c|}{$p = 2$} \\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 & 0.0625\\ DOFs {(real, double prec.)} &216&600&1944 & 6936 \\ matrix ass. (s) &0.06&0.55&4.8 & 47.3 \\ solving (s) &0.046&2.6&100.6 & 2279.6 \\ GMRES iterations&48&158&362 & 616\\ $\mathbf{DP}$-error &0.0009&1.82e-05&4.41e-07 & 1.29e-08 \\ Mie error ($\pmb L^2$) & 0.251&0.052&0.012 & 0.0029 \\ \hline &\multicolumn{4}{|c|}{$p = 3$} \\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 & 0.0625\\ DOFs {(real, double prec.)}&384&864&2400& 7776 \\ matrix ass. (s)&0.8&1.16&17.4& 197.3 \\ solving (s)&0.15&8.46&237.8&8433 \\ GMRES iterations& 123&294&702 & 2003 \\ $\mathbf{DP}$-error &5.29e-05&9.83e-07&3.72e-09& 2.45e-11 \\ Mie error ($\pmb L^2$) &0.085&0.011&0.0010& 0.000121 \\ \hline &\multicolumn{4}{|c|}{$p = 4$} \\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 & 0.0625\\ DOFs {(real, double prec.)}&600&1176&2904& 8664 \\ matrix ass.
(s)&0.6&5.42&52.1& 746.29 \\ solving (s)&2.08&79.2&3072.9&78508 \\ GMRES iterations&224&400&919 & 5681\\ $\mathbf{DP}$-error &6.81e-06&1.54e-07&6.77e-11& 8.33e-12 \\ Mie error ($\pmb L^2$) &0.021&0.0034&0.00012& 6.69e-06 \\ \hline \end{tabular} \end{center} \end{footnotesize} \end{table} \begin{figure}\centering \includegraphics[width=.35\textwidth]{pics/T12.jpg}\quad\includegraphics[width=.35\textwidth]{pics/T11.jpg} \includegraphics[width=.75\textwidth]{pics/T9.jpg} \caption{Mesh induced by refinement of level 3 of the Tesla geometries}\label{fig::t1c} \end{figure} \subsection{Manufactured Solution: Tesla Cavities} To test more involved geometries with larger numbers of degrees of freedom, we apply our boundary element method to the Tesla cavity geometries. They resemble the cavities used in particle accelerators, for example at DESY \cite{desy}. Simulation of electromagnetic fields within such cavities is of enormous practical importance, due to the high manufacturing costs incurred by the utilization of superconducting materials. Thus, one aims for accuracies of the simulation that exceed the tolerances that manufacturers can achieve. Boundary element methods are a good fit for these requirements, due to the high convergence order of pointwise values within the domain, cf.~\cite[Cor.~3.4]{Dolz_2018aa}. We start these numerical experiments on a single cell of the Tesla cavity, as depicted in Figure \ref{fig::t1c}, i.e., one cell of the full nine-cell cavity. A volumetric discretization is freely available through the \texttt{geopdes} package of Octave \cite{Falco_2011aa}. We extracted the boundary in the form of 34 (one-cell) and 226 (nine-cell) quadratic patches of similar sizes, such that all geometry mappings are smooth due to the absence of interior knot repetitions. On these we apply basis functions of different polynomial degrees, refining uniformly in each refinement step to induce a hierarchical structure, cf.~Figure~\ref{fig::refinement}.
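The right-hand sides of these experiments are again generated from the Hertz dipole $\mathbf{DP}_{\pmb x_0}$ defined in Section \ref{numsec::mieman}; its evaluation is a direct transcription of the definition. A Python sketch (the function name is ours; the moment $\pmb p_0 = (0,0.1,0.1)$ is the one used in the text):

```python
import numpy as np

def hertz_dipole(x, x0, kappa, p0=np.array([0.0, 0.1, 0.1])):
    """Evaluate DP_{x0}(x) = exp(i*kappa*r) * ( kappa^2/r (n x p0) x n
    + (1/r^3 - i*kappa/r^2) (3 n (n.p0) - p0) ), with n = (x - x0)/r."""
    d = np.asarray(x, dtype=float) - np.asarray(x0, dtype=float)
    r = np.linalg.norm(d)
    n = d / r
    far = (kappa**2 / r) * np.cross(np.cross(n, p0), n)
    near = (1.0 / r**3 - 1j * kappa / r**2) * (3.0 * n * np.dot(n, p0) - p0)
    return np.exp(1j * kappa * r) * (far + near)
```

Away from $\pmb x_0$ the field is divergence free and satisfies the vector Helmholtz equation, so each Cartesian component can be checked against $\Delta u + \kappa^2 u = 0$ by finite differences.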
The scattering problem is then solved with a right-hand side induced by the dipole; the precise parameters are given in Figure \ref{plot::onecellresults} and Table \ref{tab::tesla}. The results are depicted in Figure \ref{plot::onecellresults}. \begin{figure} \begin{subfigure}{.46\textwidth} \centering \begin{tikzpicture}[scale = .60] \begin{axis}[ height = 6.5cm, width = 8cm, ymode=log, xmode=log, xlabel=number of DOFs, grid = major, ylabel=$\ell^{\infty}$-error of numerical solution, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=dofs1,y=error1] {data/plot_T1C_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=dofs2,y=error2] {data/plot_T1C_master}; \addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=dofs3,y=error3] {data/plot_T1C_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=dofs4,y=error4] {data/plot_T1C_master}; \addlegendentry{$p=4$} \end{axis} \end{tikzpicture} \end{subfigure}\qquad \begin{subfigure}{.46\textwidth} \centering \begin{tikzpicture}[scale = .60] \begin{axis}[ height = 6.5cm, width = 8cm, ymode=log, xmode=log, ylabel=time of matrix assembly (s), grid = major, xlabel=$\ell^{\infty}$-error of numerical solution, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=error1,y=ass1] {data/plot_T1C_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=error2,y=ass2] {data/plot_T1C_master}; \addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=error3,y=ass3] {data/plot_T1C_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=error4,y=ass4] {data/plot_T1C_master};
\addlegendentry{$p=4$} \addplot[line width = 1.5pt,gray,dotted,mark = none] table [trim cells=true,x=N,y=Nrevs] {data/plot_T1C_master}; \addlegendentry{$\mathcal{O}(x^{-1})$}; \addplot[line width = 1.5pt,black,densely dotted,mark = none,mark size=3pt] table [trim cells=true,x=N,y=NNs] {data/plot_T1C_master}; \addlegendentry{$\mathcal{O}(x^{-2})$}; \end{axis} \end{tikzpicture} \end{subfigure}\\[1cm] \begin{subfigure}{.46\textwidth} \centering \begin{tikzpicture}[scale = .60] \begin{axis}[ height = 6.5cm, width = 8cm, ymode=log, xmode=log, xlabel=$\ell^{\infty}$-error of numerical solution, grid = major, ylabel=number of GMRES iterations, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=error1,y=it1] {data/plot_T1C_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=error2,y=it2] {data/plot_T1C_master}; \addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=error3,y=it3] {data/plot_T1C_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=error4,y=it4] {data/plot_T1C_master}; \addlegendentry{$p=4$} \addplot[line width = 1.5pt,black,dotted,mark = none,mark size=3pt] table [trim cells=true,x=N,y=Nsqrt] {data/plot_T1C_master}; \addlegendentry{$\mathcal O(\sqrt{x^{-1}})$} \end{axis} \end{tikzpicture} \end{subfigure}\qquad \begin{subfigure}{.46\textwidth} \centering \begin{tikzpicture}[scale = .60] \begin{axis}[ height = 6.5cm, width = 8cm, ymode=log, xmode=log, ylabel=time to solution (s), grid = major, xlabel=$\ell^{\infty}$-error of numerical solution, ] \addplot[line width = 1.5pt,blue,mark = triangle*,mark size=3pt] table [trim cells=true,x=error1,y=ts1] {data/plot_T1C_master}; \addlegendentry{$p=1$} \addplot[line width = 1.5pt,red,mark = triangle*,mark size=3pt] table [trim cells=true,x=error2,y=ts2] {data/plot_T1C_master}; 
\addlegendentry{$p=2$} \addplot[line width = 1.5pt,brown,mark = triangle*,mark size=3pt] table [trim cells=true,x=error3,y=ts3] {data/plot_T1C_master}; \addlegendentry{$p=3$} \addplot[line width = 1.5pt,gray,mark = triangle*,mark size=3pt] table [trim cells=true,x=error4,y=ts4] {data/plot_T1C_master}; \addlegendentry{$p=4$} \addplot[line width = 1.5pt,gray,dotted,mark = none,mark size=3pt] table [trim cells=true,x=N,y=Nrev] {data/plot_T1C_master}; \addlegendentry{$\mathcal{O}(x^{-1})$}; \addplot[line width = 1.5pt,black, densely dotted,mark = none,mark size=3pt] table [trim cells=true,x=N,y=NN] {data/plot_T1C_master}; \addlegendentry{$\mathcal{O}(x^{-2})$}; \end{axis} \end{tikzpicture} \end{subfigure} \caption{Results for the Tesla 1-Cell geometry. Wave number $\kappa = 18$, manufactured solution $\mathbf{DP}_{(0,0.1,0.1)}$. Admissibility condition with $\eta=0.1$ and $q=14$. GMRES restart after 1500 iterations, stopping criterion $\norm{\pmb r}_2\leq 10^{-10}$. The $\mathbf{DP}$-error refers to the maximum error obtained via the manufactured solution at a selection of 100 points on a sphere of radius 3 around the origin. } \label{plot::onecellresults} \end{figure} One can still observe the high convergence rates w.r.t.~the number of degrees of freedom. One can also see that the time for matrix assembly, as well as the time to solution, seems to scale independently of the polynomial degree of the discrete functions. However, both times differ by a constant factor, favouring solutions obtained via higher-order approaches. Moreover, the number of GMRES iterations required for the solution of the system w.r.t.~the achieved accuracy of the solution appears to scale independently of $p$. This also favours higher-order approaches: for a given accuracy, the systems arising from higher-order approaches are smaller due to the higher accuracy per DOF. Thus, an iteration of a matrix-free solver is computationally cheaper.
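The linear systems in all experiments are solved by a restarted GMRES with an absolute residual stopping criterion. The restart logic can be sketched as follows; this is a generic dense-matrix stand-in (function name and defaults are ours), not the matrix-free, multipole-compressed operator used in the actual computations:

```python
import numpy as np

def gmres_restarted(A, b, tol=1e-8, restart=1500, max_restarts=100):
    """Restarted GMRES with stopping criterion ||r||_2 <= tol,
    mirroring the solver settings used in the experiments.
    A may be any object supporting the product A @ v."""
    x = np.zeros_like(b, dtype=complex)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta <= tol:
            return x
        m = min(restart, len(b))
        Q = np.zeros((len(b), m + 1), dtype=complex)
        H = np.zeros((m + 1, m), dtype=complex)
        Q[:, 0] = r / beta
        k = m
        for j in range(m):                      # Arnoldi process (MGS)
            v = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = np.vdot(Q[:, i], v)   # conjugated inner product
                v -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] < 1e-14:             # happy breakdown
                k = j + 1
                break
            Q[:, j + 1] = v / H[j + 1, j]
        # least-squares solve of the small Hessenberg system
        e1 = np.zeros(k + 1, dtype=complex)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x
```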
For the nine-cell example, such clear behavior is not visible, cf. Table \ref{tab::tesla}. We attribute this to the fact that the compression parameters (for the admissibility condition and the order $q$ of the multipole interpolation) had to be chosen such that the problem remained computable on the accessible machines, i.e., one cannot rely on the result of Theorem \ref{thm::mutlipole}. Despite the suboptimal choice of parameters, one can still observe that the method converges and yields good results. \begin{table} \caption{Detailed data of the nine-cell Tesla cavity example with $\kappa = 10$. Computed on a workstation with an Intel(R) Xeon(R) CPU E7-8850; the code has been compiled with \texttt{g++ 4.8.5} with compile flags \texttt{-O3 -march=native -fopenmp}. The $\mathbf{DP}$-error refers to the maximum error obtained via the manufactured solution at a selection of 100 points on a sphere of radius 3 around the origin. The stopping criterion for the GMRES was a residual of $\|\pmb r\|_2<10^{-10},$ with a restart every 1500 iterations. }\label{tab::tesla} \begin{footnotesize} \begin{center} \begin{tabular}{|r|lll|}\hline & \multicolumn{3}{|c|}{$p = 2$, $q = 12$, $\eta = 0.15$} \\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 \\ DOFs {(real, double prec.)}& 8136& 22600 &73224\\ matrix ass. (s)& 43 & 64 & 2031\\ GMRES iterations& 879 & 1230 & 2552\\ $\mathbf{DP}$-error& 1.69e-03 & 3.84e-07 & 9.79e-09\\ \hline &\multicolumn{3}{|c|}{$p = 3$, $q = 10$, $\eta = 0.3$} \\ \hline $h$ w.r.t.~$\square$& 0.5 & 0.25 & 0.125 \\ DOFs {(real, double prec.)}&14464 & 32544 & 90400\\ matrix ass. (s)& 37 & 207 & 5944\\ GMRES iterations& 1424 & 2987 & 7934\\ $\mathbf{DP}$-error& 9.07e-07 & 3.28e-07 & 1.33e-09\\ \hline \end{tabular} \end{center} \end{footnotesize} \end{table}
\section{% \ifnum\alphainsection=1% \addtocounter{alphasect}{1} \fi% \oldsection}% \renewcommand\thesection{% \ifnum\alphainsection=1 \Alph{alphasect} \else% \arabic{section} \fi% }% \newenvironment{alphasection}{% \ifnum\alphainsection=1% \errhelp={Let other blocks end at the beginning of the next block.} \errmessage{Nested Alpha section not allowed} \fi% \setcounter{alphasect}{0} \def\alphainsection{1} }{% \setcounter{alphasect}{0} \def\alphainsection{0} }% \usepackage{fancyhdr} \newcommand\shorttitle{EVENT-TRIGGERED DESIGN WITH GUARANTEED MINIMUM INTER-EVENT TIMES} \newcommand\authors{MOHSEN GHODRAT AND HORACIO J MARQUEZ} \fancyhf{} \renewcommand\headrulewidth{0pt} \fancyhead[C]{% \ifodd\value{page} \scriptsize\scshape\authors \else \scriptsize\scshape\shorttitle \fi } \fancyhead[L] {\scriptsize\thepage} \pagestyle{fancy} \begin{document} \title{\normalsize\textbf{EVENT-TRIGGERED DESIGN WITH GUARANTEED MINIMUM INTER-EVENT TIMES AND $\mathcal{L}_p$ PERFORMANCE}} \author{\small MOHSEN GHODRAT AND HORACIO J MARQUEZ \thanks{The authors are with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2V4, Canada (e-mail: ghodrat@ualberta.ca; marquez@ece.ualberta.ca)}} \date{} \maketitle \begin{abstract} \small\baselineskip=9pt In an event-based scenario, the system decides when to update the actuators based on a real-time triggering condition on the measured signals. This condition can be defined in various forms and varies depending on the system properties and the design problem. This paper proposes a framework to design the triggering condition while keeping the $\mathcal{L}_p$ performance within desired limits. Our general framework captures several existing state-based triggering rules as special cases, and can achieve the performance objectives while reducing transmissions. Indeed, this general structure is shown to enlarge the minimum inter-event time by a specified amount, for a desired period of time.
Numerical examples suggest that the proposed mechanism effectively enlarges the average sampling time. \end{abstract} \section{Introduction} {I}{t} is nowadays well accepted that event-triggered control provides a competitive alternative to traditional time-triggered control (\cite{tabuada,Lunze}), offering performance very similar to classical controls while reducing the transmission of information between system components. Event-triggered control was pioneered by \cite{astrom_stochastic} and has led to extensive research, including formal stability analysis (see \cite{PID,tabuada, Heemels_periodic,Postuyan_GFW,Girard} and the references therein) and performance in its various forms (\cite{L_infty_gain,lemmonlinearL_2fullstate,L_2_selftriggered, passive_input_output,passive_delay,Dolk_Lp,Dolk_LP_ieee}). Two important aspects of event-triggered control are that (i) the design should satisfy some form of closed-loop performance, and (ii) it should guarantee that the execution times have enough separation to avoid excessive sampling. This second point is critical to any event design. Note that reducing communication between plant and controller is, in fact, the primary motivation behind event-based methods. However, since the execution times depend on the occurrence of a new event, the triggering rule has to be designed in a way that avoids excessive triggering, particularly the existence of an accumulation point at which an infinite number of events are generated in finite time (also known as the Zeno phenomenon). In this regard, most event-triggered laws define a threshold using the norm of a measured signal, typically the state. Examples include \cite{tabuada,L_2_selftriggered,Girard,Postuyan_GFW}. Although this type of scheme has seen countless successful applications and has secured an important place in the literature, it is, however, not free of limitations.
Indeed, in \cite{tabuada}, the author designed a triggering rule that departs from a continuous-time closed-loop system that is input-to-state stable (ISS) with respect to the measurement error to achieve a stable event-triggered closed-loop system with Zeno-free behaviour. Similar rules can also guarantee other desirable performance measures such as $\mathcal{L}_2$ input-output bounds ({\it e.g.,} see \cite{lemmonlinearL_2fullstate,L_2_selftriggered}). However, it was recently shown in \cite{Dolk_Lp} that in the presence of disturbance or sensor noise, static event-triggering rules defined solely in terms of the state vector norm cannot guarantee a positive minimum inter-event time (MIET), thus becoming vulnerable to Zeno behaviour. The same issue may be encountered when dealing with dynamic triggering rules (\cite{Postuyan_GFW,Girard}) or output-based triggering rules (\cite{L_infty_gain}). The observations above suggest that incorporating disturbances in the event design is nontrivial. One way to address this issue is the so-called time-regularization method, recently proposed in \cite{Mahmoud_Abdolrahim,Dolk_LP_ieee}. The core idea is to enforce a dwell-time between sampling times, inspired by classical periodic sampling ({\it e.g.,} see \cite{explicit-nesic}). See also \cite{Chopra,Forni,Mazo_Tabuada} for a different approach. A common feature of all these studies is that the triggering condition (TC) is checked only after some specific time has elapsed since the most recent triggering instant, and hence the positivity of the MIET is guaranteed by construction. One drawback of this technique is that it mostly reduces to periodic sampling whenever the TC is static and the state is near the origin, \cite{Dolk_Lp,event-separation}. \emph{Statement of contributions:} Our primary objective is to design a prescriptive triggering policy for nonlinear event-triggered systems (ETSs) such that an $\mathcal{L}_p$ performance measure is satisfied in the presence of exogenous disturbances.
As opposed to the time-regularization approach, in this article we focus on ``pure'' event-based TCs, referring to continuous monitoring of the triggering rule without imposing a pre-defined dwell-time. Following this method, the previously mentioned shortcoming associated with the time-regularization approach is avoided at the expense of a non-trivial positive-MIET problem. Instead, to exclude Zeno behaviour we introduce a new dynamic parameter in the proposed TC. The results in this work are indeed a generalization of our previous work, \cite{Me_submitted}, where we studied the local $\mathcal{L}_2$ stability of nonlinear ETSs under state-dependent disturbances. Here, we first relax the restriction on the admissible input space and allow disturbances to be any signal in $\mathcal{L}_p$ space, thereby extending the results of \cite{Me_submitted} to global (non-local) $\mathcal{L}_p$-type performance. This extension, however, produces non-trivial challenges regarding Zeno-exclusion that will be discussed in detail. Secondly, we cast a general structure for event-triggering design which specifically captures the event proposed in \cite{Me_submitted}. In summary, our design addresses the two main characteristics of ETSs mentioned earlier, in the following senses: \emph{Performance.} We focus on the $\mathcal{L}_p$-gain performance of ETSs. We consider a general class of nonlinear control-affine systems and a pre-designed state feedback controller whose continuous implementation satisfies some $\mathcal{L}_p$-gain performance level $\mu$. We provide a constructive TC design algorithm to retain the new $\mathcal{L}_p$-performance level at some $\mu_d$. The results here extend the prior work \cite{Me_submitted}, where the disturbance is assumed to be bounded. See also \cite{Dolk_LP_ieee,Mahmoud_Abdolrahim} for a different approach.
In comparison to these references, we rely on a less conservative set of assumptions and a different approach that leads to a different structure for the TC design. In fact, the assumptions made in \cite{Dolk_LP_ieee,Mahmoud_Abdolrahim} require some sort of dissipativity property for the ETS which, we believe, is too strong to be applied to the problem considered here. Moreover, in the dwell-time approach, $\mu_d-\mu$ is lower bounded by some function of the dwell-period, thus limiting how close the event-triggered performance $\mu_d$ can be made to its continuous-time counterpart $\mu$. This limitation, however, does not exist in our proposed technique. \emph{Reduced transmission.} We propose a general framework for constructing state-based dynamic TCs in which the transmissions between plant and controller are significantly reduced. Rather than a single result, our proposed dynamic TC contains design parameters that can be selected for specific purposes and covers several well-known forms (namely, \cite{tabuada,Postuyan_GFW,Girard,me_iet,Dolk_Lp,Dolk_LP_ieee,event-separation,Me_submitted}) as special cases. In addition, as in \cite{Me_submitted}, we show that the proposed TC has the advantage over the existing dynamic techniques that a lower bound on the inter-event time extension can be set. Recent studies in \cite{Mahmoud_Abdolrahim,Dolk_LP_ieee} consider the more realistic output feedback event-triggered control problem. Our focus is on the state feedback case. While state feedback has its practical limitations, the advantage is that the result is less conservative than in the output feedback case. An important feature of our design is that the introduced dynamic parameters are applicable to the available TCs without violating $\mathcal{L}_p$ or ISS performance. The resulting triggering rule thus enjoys improved sampling times. \textit{Notation.} $\mathbb{R}$ (resp., $\mathbb{Z}$) represents the field of real numbers (resp., the set of integers).
$\mathbb{R}^+_0$, $\mathbb{Z}^+_0$, $\mathbb{R}^+$, $\mathbb{Z}^+$ are the sets of nonnegative and positive elements of $\mathbb{R}$ and $\mathbb{Z}$, respectively. $\|x\|$ denotes the Euclidean norm of a vector $x\in\mathbb{R}^n$. $\mathcal{L}_p^n$ is the space of measurable $n$-dimensional signals $w$ with bounded $p$-norm defined as $(\int_{t_0}^{\infty}\|w(t)\|^pdt)^{\frac{1}{p}}$. The $\infty$-norm of $w$ is denoted by ${\|w\|}_{\infty}=\text{ess}\sup\{\|w(t)\| : t\geq t_0 \}$. A function $f:\mathbb{R}^{n} \mapsto\mathbb{R}^p$ is said to be locally Lipschitz-continuous in an open set $D$ if for each $z\in D$ there exist $L_f,r\in\mathbb{R}^+$ such that $\|f({{x}})-f(\tilde{{x}})\|\leq L_f\|{{x}}-{\tilde{{x}}}\|$ for all ${{x}},{\tilde{{x}}}\in \{y\in D:\|y-z\|<r\}$. A sequence $\{x_i:i\in\mathbb{Z}^+_0\}$ is uniformly isolated iff there exists some $r\in\mathbb{R}^+$ such that $|x_i-x_j|>r$ for any $i,j\in\mathbb{Z}^+_0$ with $i\neq j$. \section{Event triggered control system} \label{preliminaries} Consider the nonlinear system of the following form \footnote{The assumed affine structure is not critical and can be relaxed at the expense of obtaining a more conservative triggering condition, in the sense that the triggering threshold would be reached sooner; see Remark \ref{affine}.}: \begin{eqnarray}\label{eq sys_0} \dot{\xi}=f(\xi,d)+g(\xi)u,~~~ z=h(\xi,d), \end{eqnarray} where $\xi\in\mathbb{R}^n$, $u\in\mathbb{R}^m$, $d\in \mathcal{L}_{p}^q$, $z\in\mathbb{R}^s$ represent the state, control input, exogenous disturbance and measured output, respectively. The functions $f$, $g$ and $h$ are locally Lipschitz-continuous with $f(0,0)=0$, $h(0,0)=0$, so that $\xi=0$ is an equilibrium point of the zero-input system. We will assume the state $\xi$ evolves from initial conditions $\xi_0=\xi(t_0)$ on an open subset of $\mathbb{R}^n$ containing the origin.
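As a minimal illustration of the control-affine structure of (\ref{eq sys_0}), the following sketch simulates a toy scalar instance. The choices $f(\xi,d)=-\xi+d$, $g\equiv 1$ and $z=h(\xi,d)=\xi$ are hypothetical and only meant to show the roles of $f$, $g$ and $h$; they satisfy $f(0,0)=0$, so $\xi=0$ is an equilibrium of the zero-input system.

```python
# Toy scalar instance of the control-affine plant: the assumed choices
# f(xi, d) = -xi + d, g(xi) = 1 and z = h(xi, d) = xi are hypothetical
# and only illustrate the roles of f, g and h; they satisfy f(0, 0) = 0,
# so xi = 0 is an equilibrium of the zero-input system.

def f(xi, d):
    return -xi + d

def g(xi):
    return 1.0

def h(xi, d):
    return xi

def euler_step(xi, u, d, dt):
    """One forward-Euler step of xi' = f(xi, d) + g(xi) * u."""
    return xi + dt * (f(xi, d) + g(xi) * u)

# With u = d = 0 the state decays toward the equilibrium xi = 0.
xi = 1.0
for _ in range(1000):
    xi = euler_step(xi, u=0.0, d=0.0, dt=0.01)
print(abs(xi) < 1e-3)
```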
System (\ref{eq sys_0}) is said to be finite gain $\mathcal{L}_p$-stable with $\mathcal{L}_p$-gain $\leq\mu$ if there exist real numbers $\eta$, $\mu>0$, $p\geq 1$ and a positive semi-definite function $\beta$ such that for any $T>t_0$, any $d\in\mathcal{L}_{p}^q$ and any $\xi_0\in\mathbb{R}^n$ \begin{eqnarray}\label{practical eq} \int_{t_0}^{T}{\|z(s)\|^p}ds\leq\mu^p\int_{t_0}^{T}{\|d(s)\|^p}ds+\beta(\xi_0)+\eta. \end{eqnarray} We assume plant and controller communicate aperiodically through a digital network and in an event-based manner. The event-triggered problem established in this paper relies on the emulation of the analog design and consists of two steps: First, we assume continuous data transmission between the plant and a full information controller $u=\gamma(\xi)$, where $\gamma$ is locally Lipschitz-continuous. The resulting continuous-time plant is then given by \begin{eqnarray}\label{eq sys_1} \dot{\xi} ={f_c}(\xi,d),~~~ z=h(\xi,d), \end{eqnarray} where ${f_c}(\xi,d):=f(\xi,d)+g(\xi)\gamma(\xi)$. It is then assumed that the controller renders the closed loop (\ref{eq sys_1}) finite gain $\mathcal{L}_p$-stable with disturbance attenuation level $\mu$. Second, the communication between plant and controller occurs at the instants belonging to the set ${\{t_k:{k\in\mathbb{K}}\}}$, where $\mathbb{K}= \{0,1,2,\ldots,K\}$. The sampling sequence is monotonically increasing, starts at $t_0$ and is implicitly defined through a triggering rule. The actuator signal is held constant between events using a hold device, $u(t)=u(t_k)$, $t\in[t_k,t_{k+1})$, where $t_{K+1}=\infty$ when $K$ is finite. The proposed TC is continuously monitored and, once it is satisfied, the updated state is forwarded to the controller, which computes the new control signal and sends it to the actuator instantaneously. More specifically, let $t_k$ be the most recent sampling instant and let the TC be satisfied at some $\varpi_{k+1}>t_{k}$.
Then the new control signal is applied through the actuator at $t_{k+1}=\varpi_{k+1}^+$ and hence $u(t_{k+1})=\gamma(\xi(t_{k+1}))$. Let $\varepsilon(t):=\xi(t_k)-\xi(t)$ represent the sampling error for $t\in[t_k,t_{k+1})$. $\varepsilon(t)$ is then a right-continuous signal with zero value at $t_{k}$. In our analysis we neglect practical issues such as transmission and computation delays; however, they can be readily addressed following the approach introduced in \cite{tabuada}. The resulting closed-loop ETS is then described by \begin{eqnarray}\label{eq sys} \begin{cases} \dot{\xi}={f}_s(\xi,\varepsilon,d),~~~ z=h(\xi,d),\\ t_{k+1}=\inf \big\{t\in\mathbb{R}:t>t_k \wedge \Phi(t^-)=0\big\}, \end{cases}\hspace*{-1.6em} \end{eqnarray} where ${f}_s(\xi,\varepsilon,d):=f(\xi,d)+g(\xi)\gamma(\xi+\varepsilon)$ and $\Phi(t)$ is the TC to be designed. Assuming the existence of an $\mathcal{L}_p$-stabilizing controller for (\ref{eq sys_1}), our main interest is to design an event-triggered mechanism (ETM) that retains this input-output property of the network-free design for the resulting event-based plant, perhaps with a worse disturbance attenuation level. The proposed ETM shall (1) exclude Zeno behaviour and (2) serve as a general platform for TC design in event-based problems. \begin{rmk}\label{remark-future} Let us write the system dynamics as $\dot{\xi}=f(\xi,d)+\sum\nolimits_{j=1}^{m}g_j(\xi)u_j$ where $f,g_j:\mathbb{R}^n\rightarrow \mathbb{R}^n$, $g=(g_1,\ldots,g_m)$ and $\gamma=(\gamma_1,\ldots,\gamma_m)^{\mathsf{T}}$.
Recent studies \cite{decentralized-Mazo,Wang-distributed} consider the interesting scenarios in which the controller is (i) \text{distributed}: $u_j(t)=\gamma_j(\hat{\xi}(t))$, $j=1,\ldots, m$, and (ii) \text{decentralized}: $\hat{\xi}_i(t)=\xi_i(t^i_{r_i})$, $t\in[t^i_{r_i},t^{i}_{r_{i+1}})$, $i=1,\ldots,n$, implying that the distributed controllers $u_j=\gamma_j(\hat{\xi})$ utilize the full state vector measured through $n$ independent sensors to construct the control signal. Studying the results of the current paper under these assumptions is left as future work. \end{rmk} \section{Event-triggering mechanism} \label{section_ETM} In this section, we introduce a general structure for the design of $\Phi$ so that the ETS (\ref{eq sys}) has $\mathcal{L}_p$-gain $\leq \mu_d$. Consider the following TC structure: \begin{eqnarray}\label{trig_cond} \Phi(t) := \bar{k} \varphi(\xi(t),\varepsilon(t)) -\textstyle{\sum} _{i=1}^{2}k_i\phi_i(t)=0 \end{eqnarray} where $k_1, k_2 >0$, and the dynamic variables ${\phi_1}$, ${\phi_2}$ and the function $\varphi$ are to be designed. Furthermore, we will assume $\bar{k}=1$ unless otherwise stated. We start with the design of $\varphi$, for which the following assumption is required. \begin{assumption}\label{assumption_ISS} There exist positive definite, radially unbounded functions $V_s$, $V_c$, positive constants $\mu$, $c_i$, $\bar{c}_i$, $i\in\{1,2,3\}$, and some $p\in[1,\infty)$ satisfying \vspace*{.1em} \begin{itemize} \item [(i)] $\nabla{V_s}(\xi)\cdot f_s(\xi,\varepsilon,d)\leq -c_1\|\xi\|^p+c_2\|\varepsilon\|^p+c_3\|d\|^p$,\vspace*{.1em} \item [(ii)] $\nabla{V_c}(\xi)\cdot f_c(\xi,d)\leq\mu^p\|d\|^p-\|z\|^p$,\vspace*{.1em} \item [(iii)] $V_s(\xi)\leq \bar{c}_1\|\xi\|^p$, $V_c(\xi)\leq\bar{c}_2\|\xi\|^p$, $\|\nabla V_c(\xi)\|\leq \bar{c}_3\|\xi\|^{p-1}$. \end{itemize} \end{assumption} \begin{rmk} Assumption \ref{assumption_ISS}(i) implies that system (\ref{eq sys}) is ISS with respect to the inputs $\varepsilon$, $d$.
Also, Assumption \ref{assumption_ISS}(ii) implies that $u=\gamma(\xi)$ renders the continuous-time system (\ref{eq sys_1}) finite gain $\mathcal{L}_p$-stable with $\mathcal{L}_p$-gain $\leq \mu$. \end{rmk} The function $\varphi$ is assumed to have the following form \begin{eqnarray} \label{varphi} \varphi(\xi,\varepsilon) = \varphi_1(\xi)+\varphi_2(\varepsilon)+\varphi_3(\xi,\varepsilon), \end{eqnarray} where $\varphi_1(r) = - c_1\sigma\|r\|^p$, $\varphi_2(r) = c_2\|r\|^p$, \begin{eqnarray}\label{phi_3} \varphi_3(r,s) = \nabla V_{c,\lambda}(r)\cdot g(r)(\gamma(r+s)-\gamma(r)) \end{eqnarray} and $\sigma< 1$, $p\in[1,\infty)$, $V_{c,\lambda}(r)=\lambda V_c(r)$ for some $\lambda\in\mathbb{R}^+$. We then continue with the design of $\phi_1$, $\phi_2$: dynamic parameters that serve to enlarge the inter-event times and guarantee the event-separation property for the ETS (\ref{eq sys}). Consider the equations below for $t\in[t_k,t_{k+1})$ \begin{subequations}\label{phi_eta_Ti} \begin{eqnarray} \frac{d}{dt}\begin{pmatrix} {\phi}_1\\{\phi}_2 \end{pmatrix}+\begin{pmatrix} \alpha_1({\phi_1})-k_2{\phi_2}\\ \alpha_2(\phi_2) \end{pmatrix}=\begin{pmatrix} -\varphi\\ \bar{\varphi} \end{pmatrix},\label{phi}~~~~~ \\ {\bar{\varphi}(t)}= \begin{cases} \alpha_2(\bar{\delta}),~~~~~~~~~~~~~~~~t\in[t_k,\hat{t}_k),\\ \dot{\delta}_k(t)+\alpha_2(\delta_k(t)),~~ t\in[\hat{t}_k,t_{k+1}), \end{cases}\label{Delta} \end{eqnarray} \end{subequations} where $\bar{\delta}$ is a positive constant and $\delta_k$ is a positive, bounded and piecewise differentiable function defined over $[\hat{t}_k,t_{k+1})$ that satisfies $\sum_{k} \int_{\hat{t}_k}^{t_{k+1}} \delta_k(\tau)d\tau\leq \theta_1$ for some positive $\theta_1$. Also, $\hat{t}_k=t_k+\hat{\tau}$, where $\hat{\tau}$ is a positive parameter to be designed in the sequel.
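The dynamics (\ref{phi}), (\ref{Delta}) can be integrated numerically over one inter-event interval. The sketch below uses forward Euler with the linear class-$\mathcal{K}_\infty$ choices $\alpha_1(r)=\nu r$, $\alpha_2(r)=r$; all numerical constants, including the frozen value of $\varphi$, are illustrative assumptions. On $[t_k,\hat{t}_k)$ one has $\bar{\varphi}=\alpha_2(\bar{\delta})$, which makes $\bar{\delta}$ a constant solution for $\phi_2$ when $s_k=\bar{\delta}$, and the sketch verifies this numerically.

```python
# Forward-Euler integration of the phi_1, phi_2 dynamics on one interval
# [t_k, t_hat_k), with linear class-K_infty choices alpha_1(r) = nu*r,
# alpha_2(r) = r. All constants (nu, k2, delta_bar and the frozen value
# of varphi) are illustrative assumptions.

nu, k2, delta_bar = 0.5, 1.0, 0.2
varphi = -0.1            # assumed (state-dominant) value on the interval
dt, steps = 1e-3, 1000

alpha1 = lambda r: nu * r
alpha2 = lambda r: r

phi1, phi2 = 0.0, delta_bar      # initial values r_k = 0, s_k = delta_bar
for _ in range(steps):
    dphi1 = -alpha1(phi1) + k2 * phi2 - varphi
    dphi2 = -alpha2(phi2) + alpha2(delta_bar)   # bar_varphi on [t_k, t_hat_k)
    phi1 += dt * dphi1
    phi2 += dt * dphi2

# phi_2 stays at the constant solution delta_bar and phi_1 remains
# nonnegative, so both terms enlarge the triggering threshold in the TC.
print(abs(phi2 - delta_bar) < 1e-9, phi1 >= 0.0)
```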
Note that the function $\bar{\varphi}$ is defined such that $\bar{\delta}$ (resp., $\delta_k(t)$) is a solution for $\phi_2$ in (\ref{phi}) over $[t_k,\hat{t}_k)$ (resp., $[\hat{t}_k,t_{k+1})$). Moreover, $\alpha_2$ is an arbitrary class-$\mathcal{K}_\infty$ function and $\alpha_1 \in \mathcal{K}_\infty$ is designed based on the following assumption. \begin{assumption}\label{ass_alpha} $\alpha_1(r)\geq \nu r$ where $\nu=c_1({1-\sigma})/({\bar{c}_1+\bar{c}_2})$. \end{assumption} To solve (\ref{phi}), (\ref{Delta}), the following initial values are assumed: \begin{eqnarray}\label{IC} {\phi_1}(t_k)=r_k,~{\phi_1}(\hat{t}_k)={\hat{r}_k},~ \phi_2(t_k)=s_k,~\phi_2(\hat{t}_k)={\hat{s}_k}, \end{eqnarray} where $r_k$, $\hat{r}_k$, $s_k$, $\hat{s}_k$ are non-negative real numbers designed based on the following assumption. \begin{assumption}\label{ass_impose} $r_k$ and $\hat{r}_k$ are chosen from sequences with convergent series, {\it i.e.,} there exist finite numbers ${\theta_2},\theta_3\in\mathbb{R}^+$ so that $\sum\nolimits_{k}r_k \leq {\theta_2}$, $\sum\nolimits_{k}\hat{r}_k \leq \theta_3$. Moreover, $s_k$ and $\hat{s}_k$ satisfy $s_k\geq \bar{\delta}$ and $\hat{s}_k= \delta_k(\hat{t}_k)$. \end{assumption} Dynamic rules have been previously studied in \cite{Postuyan_GFW,Girard}. The variable ${\phi_1}$ in (\ref{trig_cond}), which satisfies the differential equation (\ref{phi}), plays the role of the dynamic parameter introduced in the above references. In the present work, we introduce an additional dynamic variable ${\phi_2}$; while both $\phi_1$ and $\phi_2$ serve to extend the inter-event times, $\phi_2$ plays the fundamental role of guaranteeing the event-separation property for the ETS (\ref{eq sys}). In particular, $\phi_2$ introduces two design parameters, $\bar{\delta}$ and $\delta_k$. The former is inspired by the idea of the mixed triggering condition in \cite{event-separation} and is intended to rule out Zeno behaviour.
The latter, on the other hand, is a generalization of time-decaying thresholds and is mainly used to move from practical asymptotic stability (under a constant threshold) to asymptotic stability, \cite{decentralized-Mazo,Seyboth}. Designing these parameters, together with $\hat{t}_k$, which determines the duration over which each parameter is effective, leads to non-trivial challenges that have to be carefully addressed. Based on the above observations, the proposed TC (\ref{trig_cond}) unifies (i) the dynamic TC of \cite{Girard} through the presence of $\phi_1$, (ii) the time-varying threshold of \cite{decentralized-Mazo} through the presence of $\delta_k$ and (iii) the mixed triggering of \cite{event-separation} through the presence of $\bar{\delta}$. \begin{proposition}\label{prop_phi_positive} Under TC (\ref{trig_cond}) and Assumption \ref{ass_impose}, ${\phi_1}(t),{\phi_2}(t)\geq0$ for all $t\geq t_0$. In detail, ${\phi_2}(t)\geq \bar{\delta}$ for $t\in[t_k,\hat{t}_k)$ and ${\phi_2}(t)=\delta_k(t)$ for $t\in[\hat{t}_k,t_{k+1})$. \end{proposition} Proposition \ref{prop_phi_positive}, whose proof is provided in the Appendix, illustrates the previous claim that ${\phi_1}$, ${\phi_2}$ enlarge the inter-event times. In fact, in the absence of $\phi_1$, $\phi_2$, triggering occurs when $\varphi(\xi,\varepsilon)=0$. However, the positivity of ${\phi_1}$, ${\phi_2}$ postpones the triggering to occur when $\varphi(\xi,\varepsilon)=k_1\phi_1+k_2\phi_2$. To finish the design, it remains to define $\hat{\tau}$. Let us start with the following lemma, whose proof is given in the Appendix.
\begin{lem}\label{lem_state_bound} Under Assumptions \ref{assumption_ISS}-\ref{ass_impose}, and if the control signal is updated under the triggering rule (\ref{trig_cond}), all the trajectories of the ETS (\ref{eq sys}) starting from $\mathscr{B}_{\rho}$ will remain in $\mathscr{B}_{\bar{\rho}}$, where \begin{eqnarray} \bar{\rho}=\displaystyle\max\Big{\{}\|\xi\|:V_s(\xi)+V_{c,\lambda}(\xi)\leq V_s(\xi_0)+V_{c,\lambda}(\xi_0)+~~~~\nonumber\\ ~\frac{1}{\nu}( \lambda\mu_d^p{\|d\|}_\infty^p+{k_2\|{\phi_2}\|}_{\infty})+{\theta_2}+\theta_3, \xi,\xi_0\in\mathbb{R}^n,\|\xi_0\|\leq \rho\Big{\}}.\nonumber \end{eqnarray} \end{lem} Since ${\|{\phi_2}\|}_{\infty}$ is bounded by $\smash{\displaystyle{\max}\{s_k,{\|\delta_k\|}_{\infty}:k\in\mathbb{K}\}}$, Lemma \ref{lem_state_bound} suggests that the trajectories of the ETS (\ref{eq sys}) are bounded by a non-decreasing function of $\|\xi_0\|$ and ${\|d\|}_{\infty}$. The next lemma employs the Lipschitz property of $f$, $g$, $\gamma$ to provide an upper bound on the norm of the state dynamics. \begin{lem}\label{lem_lip} Under the same conditions as in Lemma \ref{lem_state_bound}, there exist $\lambda_i=\lambda_i({\|\xi_0\|},{\|d\|}_\infty)$, $i\in\{1,2,3\}$, non-decreasing in their arguments, such that \begin{eqnarray} \| \dot{\xi} \| \leq \lambda_1\|\xi\|+\lambda_2\|\varepsilon\|+\lambda_3\|d\|.\nonumber \end{eqnarray} \end{lem} \begin{proof}[Sketch of the proof] One can apply the Lipschitz property of the functions $f$, $g$, $\gamma$ to get $ \|\dot{\xi} -\dot{\tilde{\xi}}\| \leq \lambda_1\|\xi-\tilde{\xi}\|+\lambda_2\|\varepsilon-\tilde{\varepsilon}\|+\lambda_3\|d-\tilde{d}\|$, where the $\lambda_i$'s are functions of ${\|d\|}_\infty$ and $\bar{\rho}$. The result then follows by applying Lemma \ref{lem_state_bound}.
\end{proof} \begin{rmk} As Lemma \ref{lem_lip} suggests, since the Lipschitz properties of the functions $f$, $g$ and $\gamma$ are only \emph{local}, the Lipschitz coefficients $\lambda_i$ are bounded provided $\|\xi_0\|<\infty$, ${\|d\|}_{\infty}<\infty$. \end{rmk} Let us define for $i\in\{1,3\}$ \begin{eqnarray} \tau_i:=\sup\Big{\{}t\in\mathbb{R}^+_0:\lambda_i^p \psi(t,\lambda_2)<\frac{B_i}{c} \Big{\}},\nonumber \end{eqnarray} where $B_1=c_1\sigma-\frac{(\bar{c}_3\lambda)^q}{q}$, $B_3=\lambda(\mu_d^p-\mu^p)-c_3$, $c=c_2+\frac{\lambda_2^p}{p}$ and $\psi(t,\lambda_2)=\frac{2^{2p}(p-1)^{p-1}}{\lambda_2^{p}p^{p}}(e^{\frac{\lambda_2p}{2(p-1)}t}-1)^{p-1}(e^{\frac{\lambda_2p}{2}t}-1).$ Then, we define $\hat{\tau}$ as \begin{eqnarray}\label{tau_hat} \hat{\tau}=\min\{\tau_1,\tau_3\}. \end{eqnarray} Later, in Lemma \ref{semi_global}, we will see that $\hat{\tau}>0$ serves to isolate the triggering instants. Moreover, $\tau_1$ (resp., $\tau_3$) is the elapsed time since the most recent triggering instant during which the sampling error can grow without violating the stability (resp., the desired $\mathcal{L}_p$ bound) of the ETS (\ref{eq sys}) (see the proof of Theorem \ref{thm_main}). In addition, the definitions of $\hat{\tau}$, $\tau_3$ imply that $\mu_d^p-\mu^p$ is lower bounded by an increasing function of $\psi(\hat{\tau},\lambda_2)$. Since, as shown in Section \ref{compare}, $\hat{\tau}$ is a candidate for the dwell-period, the dwell-time method places a lower bound on the deviation between the continuous-time and event-based performances. This restriction does not exist in our approach. \begin{rmk}\label{lambda} To design $\lambda$, one has to consider the restriction of having a positive $\hat{\tau}$. $\hat{\tau}>0$ requires $\tau_1$, $\tau_3$, and hence $B_1$, $B_3$, to be positive. This gives the restrictions on $\lambda$ as $\lambda < {\bar{c}_3^{-1}}{{(c_1\sigma q)}^{\frac{1}{q}}}$ and $\lambda>{c_3}({\mu_d^p-\mu^p})^{-1}$.
The latter condition implies $\mu_d>\mu$, {\it i.e.,} the $\mathcal{L}_p$-stability of the ETS (\ref{eq sys}) is achieved at the expense of a larger rejection level. However, to minimize $\mu_d$, we may choose $c_3$ small enough by \emph{scaling} the Lyapunov function $V_s$ in Assumption \ref{assumption_ISS} (refer to the example section for more details). Obviously, one has to replace $c_i$, $i\in\{1,2,3\}$, by the corresponding scaled values in all of the discussions. \end{rmk} \section{Main results}\label{preliminary result} \subsection{Uniform isolation of triggering instants} One of the difficulties encountered in event-based control systems is the undesirable Zeno behaviour, which happens when an infinite number of triggerings occur over a finite interval. This is even more challenging when the system of interest is exposed to exogenous disturbances or sensor noise, since in this case the sampling error is also driven by the disturbance/noise. As an example, while Zeno behaviour is excluded in \cite{tabuada} for disturbance-free systems, the same does not necessarily hold in the presence of disturbance (see \cite{event-separation} for further discussion). In the sequel, we show that under TC (\ref{trig_cond}), the ETS (\ref{eq sys}) satisfies the following \emph{robust event-separation} property defined in \cite{event-separation}. \begin{deff}\label{def-separation} Let $\tau_{m} =\inf\{t_{k+1}-t_k : k\in\mathbb{K}\}$ be the MIET. The ETS (\ref{eq sys}) has the robust semi-global event-separation property if there exists $\epsilon\in \mathbb{R}^+$ such that for any compact set $\Xi\subset\mathbb{R}^n$, $\inf\{\tau_m : \xi_0\in\Xi ,{\|d\|}_\infty\leq \epsilon \}>0$. \end{deff} According to Definition \ref{def-separation}, an event-based system has the robust semi-global event-separation property if the sequence of sampling times $\{t_k:k\in\mathbb{K}\}$ is a uniformly isolated set provided that $\xi_0\in\Xi$ and ${\|d\|}_\infty\leq \epsilon$.
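The dwell candidate $\hat{\tau}$ in (\ref{tau_hat}) can be computed numerically: since $\psi(\cdot,\lambda_2)$ is increasing with $\psi(0,\lambda_2)=0$, each $\tau_i$ is the unique solution of $\lambda_i^p\psi(t,\lambda_2)=B_i/c$ and can be found by bisection. The sketch below does this for $p=2$; all numerical constants ($\lambda_1$, $\lambda_2$, $\lambda_3$, $B_1$, $B_3$, $c$) are illustrative assumptions.

```python
import math

# Numerical evaluation of tau_1, tau_3 and hat_tau = min(tau_1, tau_3)
# for p = 2. psi(., lam2) is increasing with psi(0) = 0, so tau_i solves
# lam_i**p * psi(tau_i) = B_i / c and is found by bisection. All the
# constants below are illustrative assumptions.

p = 2
lam1, lam2, lam3 = 1.0, 1.0, 1.0
B1, B3, c = 0.5, 0.5, 1.0

def psi(t, lam2, p=2):
    """psi(t, lam2) from the text, specialized to integer p >= 2."""
    pref = 2**(2 * p) * (p - 1)**(p - 1) / (lam2**p * p**p)
    return pref * (math.exp(lam2 * p * t / (2 * (p - 1))) - 1)**(p - 1) \
                * (math.exp(lam2 * p * t / 2) - 1)

def tau_i(lam_i, bound, lo=0.0, hi=10.0, iters=80):
    """Largest t with lam_i**p * psi(t, lam2) < bound, by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lam_i**p * psi(mid, lam2) < bound:
            lo = mid
        else:
            hi = mid
    return lo

tau1 = tau_i(lam1, B1 / c)
tau3 = tau_i(lam3, B3 / c)
hat_tau = min(tau1, tau3)
print(hat_tau > 0.0)
```

For these assumed constants, $\psi(t,1)=4(e^{t}-1)^2$, so $\tau_1$ solves $4(e^{t}-1)^2=1/2$, giving a strictly positive $\hat{\tau}$.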
\begin{lem}\label{semi_global} Under Assumptions \ref{assumption_ISS}, \ref{ass_alpha}, \ref{ass_impose} and the ETM (\ref{trig_cond})-(\ref{tau_hat}), the ETS (\ref{eq sys}) has the robust semi-global event-separation property. In detail, \begin{eqnarray} \tau_{m}=\displaystyle\min\{\tau^*(1),\hat{\tau}\},\nonumber \end{eqnarray} where $m_1=(\frac{B_1}{c})^{\frac{1}{p}}$, $m_2=(\frac{k_2 \bar{\delta}}{c})^{\frac{1}{p}}$, $\kappa{=}\max\Big{\{}\frac{2\lambda_1}{m_1},\frac{2\lambda_3 \epsilon}{m_2}\Big{\}}$ and \begin{eqnarray}\label{chi} \tau^*(\chi)=\begin{cases} \frac{1}{\lambda_2-m_1\frac{\kappa}{2}}\ln(\frac{\kappa+\lambda_2\chi }{\kappa(1+m_1\frac{\chi}{2})}),~~~~\kappa\neq\frac{2\lambda_2}{m_1},\\ \frac{m_1{\chi}}{\lambda_2(2+m_1{\chi})},~~~~~~~~~~~~~~~~~~\kappa=\frac{2\lambda_2}{m_1}. \end{cases} \end{eqnarray} \end{lem} To prove Lemma \ref{semi_global}, we recall two useful inequalities. \begin{lem}\label{lem_1} For any $p,q\geq 1$ with $\frac{1}{p}+\frac{1}{q}=1$ and any $r>0$ \begin{eqnarray} (i)~\|x+y\|^r\leq 2^r\|x\|^r+2^r\|y\|^r ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\ (ii)\int_{\mathcal{T}}\|x(\tau)y(\tau)\| d\tau \leq \Big{(}\int_{\mathcal{T}}\|x(\tau)\|^p d\tau\Big{)}^{\frac{1}{p}} \Big{(}\int_{\mathcal{T}}\|y(\tau)\|^q d\tau\Big{)}^{\frac{1}{q}}\nonumber.~ \end{eqnarray} \end{lem} \begin{newproofof} \textit{Lemma \ref{semi_global}.} We aim to modify (\ref{trig_cond}) to obtain a more conservative TC (in the sense that the triggering threshold would be reached sooner), since such a TC gives rise to a lower bound on the MIET. To begin, we first make use of Proposition \ref{prop_phi_positive}, which implies $\phi_1\geq 0$, ${\phi_2}(t)\geq \bar{\delta}$ for $t\in[t_k,\hat{t}_k)$, and hence modify (\ref{trig_cond}) to $\varphi=k_2\bar{\delta}$ on this interval. Note that we will assume $t_{k+1}\leq \hat{t}_k$, since otherwise $\tau_m=\hat{\tau}$ and the event-separation property holds trivially.
From the inequality given in the sketch of the proof of Lemma \ref{lem_lip} with $\tilde{\xi}=\xi$, $\tilde{\varepsilon}=0$, $\tilde{d}=d$, one can conclude $\|g(\xi)(\gamma(\xi+\varepsilon)-\gamma(\xi))\|\leq \lambda_2 \|\varepsilon\|$ and hence modify the condition $\varphi=k_2\bar{\delta}$ to \begin{eqnarray} c_2\|\varepsilon\|^p+\lambda_2\|\nabla V_{c,\lambda}(\xi)\|\|\varepsilon\|=c_1\sigma\|\xi\|^p+k_2\bar{\delta}.\nonumber \end{eqnarray} Next, from Lemma \ref{lem_1}(ii) and Assumption \ref{assumption_ISS}(iii), we obtain $c \|\varepsilon\|^p =B_1\|\xi\|^p+k_2\bar{\delta}$. Finally, Lemma \ref{lem_1}(i) suggests \begin{eqnarray} \Big{(}\frac{{B_1}^{\frac{1}{p}}}{2}\|\xi\|+\frac{({k_2\bar{\delta}})^{\frac{1}{p}}}{2}\Big{)}^p\leq B_1\|\xi\|^p+k_2\bar{\delta},\nonumber \end{eqnarray} and hence the desired modification of (\ref{trig_cond}) is obtained as \begin{eqnarray}\label{m1m2} 2\|\varepsilon\|=m_1{\|\xi\|}+m_2. \end{eqnarray} Defining $\chi:=2\|\varepsilon\|/(m_1{\|\xi\|}+m_2)$, it can be concluded that \begin{eqnarray} \dot{\chi}\leq \Big{(}1+m_1\frac{\chi}{2}\Big{)}\Big{(}\frac{2\|\dot{\xi}\|}{m_1{\|\xi\|}+m_2}\Big{)}\leq \Big{(}1+m_1\frac{\chi}{2}\Big{)}\Big{(}\kappa+\lambda_2\chi\Big{)}\nonumber \end{eqnarray} where Lemma \ref{lem_lip} is used to obtain the last inequality. Therefore, $\tau^*(\chi)=t-t_k$ can be obtained as in (\ref{chi}) by solving \begin{eqnarray} \displaystyle \int_{t_k}^{t}d\tau=\displaystyle \int_{\chi(t_k)=0}^{\chi(t)}{\big{(}1+m_1\frac{\ell}{2}\big{)}^{-1}\big{(}\kappa+\lambda_2\ell\big{)}^{-1}}{d\ell}.\nonumber \end{eqnarray} The event rule (\ref{m1m2}) suggests that triggering occurs when $\chi=1$, thus $t_{k+1}=t_k+\tau^*(1)$. In addition, (\ref{chi}) implies that $\tau^*(1)$ is strictly positive since, for $\xi_0\in\Xi$ and ${\|d\|}_\infty\leq \epsilon$, Lemma \ref{lem_lip} suggests that $\lambda_1$, $\lambda_2$, $\lambda_3$, and hence $\kappa$, are bounded.
The result then follows from the definition of $\tau_{m}$ and the positivity of $\hat{\tau}$. \end{newproofof} \subsection{Comparison with the existing strategies}\label{compare} In this subsection we study several popular existing ETMs that can be extracted as special cases of (\ref{trig_cond})-(\ref{Delta}). We emphasize that the design criteria in these references are not the same, so our comparison is merely based on the structure of the triggering rule, with no reference to the relative merits or performance of each design, simply because there seems to be no fair way or value in such a comparison. Moreover, since some of these works focus on output feedback, in our comparisons we assume the measurable output to be the full state vector. Our proposed ETM is \emph{dynamic} due to the existence of the dynamic variable $\phi_1$. See \cite{Girard,Postuyan_GFW} for discussions regarding the effect of this variable. To the best of our knowledge, the parameter $\phi_2$ has not been introduced before. Thus, we provide the following observations regarding $\phi_2$. (i) The inter-event time expansion that originates from $\phi_2$ can be quantified for a desired period of time, or a desired number of triggering instants (see \cite{Me_submitted}). (ii) As shown in \cite{Me_submitted} through several examples, ${\phi_2}$ serves to avoid redundant samplings when the norm of the state is close to $0$. This is important since, as a primary pitfall, triggering rules based on the norm of the state tend to trigger more frequently as the state approaches the origin. (iii) The primary functionality of ${\phi_2}$ is to exclude Zeno behaviour, as suggested by the proof of Lemma \ref{semi_global}. (iv) While the approach in the present article is considered purely event-based, an appropriate choice of parameters in the dynamics of ${\phi_2}$ enables the TC (\ref{trig_cond}) to capture the time-regularization strategies.
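The closed-form MIET bound $\tau^*(1)$ of Lemma \ref{semi_global} can be evaluated directly. The sketch below, with illustrative values of $m_1$, $\lambda_2$ and $\kappa$ (in the lemma these are derived from $B_1$, $c$, $k_2$, $\bar{\delta}$ and the Lipschitz coefficients), handles both branches of (\ref{chi}) and confirms a strictly positive bound.

```python
import math

# Evaluating the MIET bound tau*(chi) of the lemma, handling both the
# generic branch (kappa != 2*lam2/m1) and the degenerate branch
# (kappa == 2*lam2/m1). The constants m1, lam2, kappa used below are
# illustrative assumptions, not values derived from a concrete plant.

def tau_star(chi, m1, lam2, kappa, tol=1e-12):
    """tau*(chi) from the case-defined formula in the lemma."""
    if abs(kappa - 2.0 * lam2 / m1) < tol:
        return m1 * chi / (lam2 * (2.0 + m1 * chi))
    denom = lam2 - m1 * kappa / 2.0
    return math.log((kappa + lam2 * chi) /
                    (kappa * (1.0 + m1 * chi / 2.0))) / denom

# Generic branch: kappa != 2*lam2/m1.
t_gen = tau_star(1.0, m1=1.0, lam2=1.0, kappa=4.0)
# Degenerate branch: kappa == 2*lam2/m1.
t_deg = tau_star(1.0, m1=1.0, lam2=1.0, kappa=2.0)
print(t_gen > 0.0, t_deg > 0.0)
```

As a sanity check, the generic branch converges to the degenerate one as $\kappa \to 2\lambda_2/m_1$, so the bound varies continuously with the assumed constants.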
We conclude by extracting several triggering rules proposed in the literature from (\ref{trig_cond}). Note that $\bar{k}=1$ unless otherwise stated. $\bullet$ \cite{Me_submitted}: For $k_1=0$ and $s_k=\bar{\delta}$, TC (\ref{trig_cond}) reduces to the one proposed in \cite{Me_submitted}. In the rest of our comparisons we assume $\varphi_3 = 0$ in (\ref{varphi}). $\bullet$ \cite{event-separation}: For $k_1=0$ and $\delta_k(t)=s_k=\hat{s}_k=\bar{\delta}$ we obtain ${\phi_2}=\bar{\delta}$. Hence, the TC becomes $\varphi(\xi,\varepsilon)=k_2\bar{\delta}$. $\bullet$ \cite{Girard}: Taking $k_2=0$, (\ref{trig_cond}) and (\ref{phi}) reduce to $\varphi(\xi,\varepsilon)=k_1 {\phi_1}$, $\dot{\phi}_1+\alpha_1({\phi_1}) =-\varphi$. $\bullet$ \cite{me_iet}: Taking $k_2=0$, $\bar{k}=0$ and $\alpha_1(r)=0$ for any $r$, (\ref{trig_cond}) reduces to ${\phi_1}=0$, where ${\phi_1}(t)=-\int_{t_k}^{t} \varphi(\xi(s),\varepsilon(s))ds$, {\it i.e.,} the integral-based TC. $\bullet$ \cite{tabuada}: Substituting $k_1=k_2=0$ in (\ref{trig_cond}), one can extract the TC $\varphi=0$. $\bullet$ \cite{Dolk_LP_ieee,Dolk_Lp}: Define $\hat{t}_k=t_k+\tau_{m}$ where $\tau_{m}= \displaystyle\min\{\tau^*(1),\hat{\tau}\}$. This guarantees that no triggering of the control task occurs over $[t_k,\hat{t}_k)$. Then, if we set $k_2=0$ and $\bar{k}=0$, $\Phi(t)=0$ reduces to $\phi_1(t)=0$, so that $t_{k+1}$ in (\ref{eq sys}) can be written in a time-regularization fashion as $t_{k+1}=\inf \{t\in\mathbb{R}:t>t_k+\tau_{m} \bigwedge {\phi_1}(t^-)=0\}$ where $\dot{\phi}_1=-\varphi$ by setting $k_2=0$ and $\alpha_1(r)=0$ for any $r$ in (\ref{phi}). $\bullet$ \cite{Mahmoud_Abdolrahim}: Setting $k_1=k_2=0$ and following similar lines as in the comparison with \cite{Dolk_LP_ieee,Dolk_Lp}, we get $t_{k+1}=\inf \{t\in\mathbb{R}:t>t_k+\tau_{m} \bigwedge \varphi(\xi(t^-),\varepsilon(t^-))=0\}$. $\bullet$ \cite{Postuyan_GFW}: Let $k_1=0$, $\bar{\varphi}(t)=0$ for all $t\in\mathbb{R}$ and $\hat{s}_k={\phi}_2(\hat{t}_k^-)$.
Choosing $s_k\geq 0$, we have ${\phi}_2(t)\geq 0$ for all $t\geq t_0$. Then (\ref{trig_cond}), (\ref{phi}) reduce to $\varphi_2(\varepsilon)=-\varphi_1(\xi)+k_2{\phi_2}$, where $\dot{\phi}_2+\alpha_2({\phi_2})=0$. In this case, ${\phi_2}$ plays the role of the threshold variable defined in \cite{Postuyan_GFW}. However, unlike the present work, where ${\phi_2}$ appears in the TC as a positive term added to functions of the state's norm, in \cite{Postuyan_GFW} the admissible measurement error is bounded by the maximum of these two. \subsection{$\mathcal{L}_p$-gain performance}\label{main section} We start with a useful lemma, which is an application of Lemma \ref{lem_lip}; its proof is given in the Appendix. \begin{lem}\label{lem_error} Let $a=\lambda_1^p\psi(\hat{\tau},\lambda_2)$, $b=\lambda_3^p\psi(\hat{\tau},\lambda_2)$. Then \begin{eqnarray} \int_{t_k}^{\hat{t}_k}\|\varepsilon(\tau)\|^pd\tau \leq a\int_{t_k}^{\hat{t}_k}\|\xi(\tau)\|^pd\tau + b\int_{t_k}^{\hat{t}_k}\|d(\tau)\|^pd\tau.\label{lem_error_eq} \end{eqnarray} \end{lem} \begin{rmk}\label{rmk_a,b} In view of the definition of $a$, $b$ and $\hat{\tau}$, one can verify that $a\leq \lambda_1^p\psi(\tau_1,\lambda_2)$, $b\leq \lambda_3^p\psi(\tau_3,\lambda_2)$. Also, from the definition of $\tau_1$, $\tau_3$, we conclude $ac<B_1$, $bc\leq B_3$; these inequalities will be used later in the proofs of the main results. \end{rmk} The next theorem states our primary result, where the finite-gain $\mathcal{L}_p$-stability of the continuous-time system (\ref{eq sys_1}) is shown to be preserved under the event-based execution of the control task. Compared to \cite{Dolk_LP_ieee,Mahmoud_Abdolrahim}, our result relies on a less conservative set of assumptions. \begin{thm}\label{thm_main} Under Assumptions \ref{assumption_ISS}, \ref{ass_alpha}, \ref{ass_impose} and ETM (\ref{trig_cond})-(\ref{tau_hat}) the ETS (\ref{eq sys}) is finite gain $\mathcal{L}_p$-stable with $\mathcal{L}_p$-gain $\leq \mu_d$.
In addition, the origin $\xi=0$ is globally asymptotically stable. \end{thm} \begin{proof} For $t\in[t_k,\hat{t}_{k})$, Assumption \ref{assumption_ISS}(ii) suggests \begin{eqnarray}\label{W_dot} \dot{V}_c(\xi)\leq \mu^p\|d\|^p-\|z\|^p+\nabla V_c(\xi)\cdot g(\xi)(\gamma(\xi+\varepsilon)-\gamma(\xi)) \end{eqnarray} which further reduces to \begin{eqnarray} \dot{V}_c(\xi) \leq \mu^p\|d\|^p-\|z\|^p+\lambda_2\| \nabla V_c(\xi)\|~\|\varepsilon\|\nonumber \end{eqnarray} by applying $\|g(\xi)(\gamma(\xi+\varepsilon)-\gamma(\xi))\|\leq \lambda_2 \|\varepsilon\|$ (already established in the proof of Lemma \ref{semi_global}). Thus, from Lemma \ref{lem_1}(ii) and Assumption \ref{assumption_ISS}(iii), we get \begin{eqnarray} \dot{V}_{c,\lambda}(\xi) \leq\frac{\lambda^q\bar{c}_3^q}{q}\|\xi\|^p +\frac{\lambda_2^p}{p}\|\varepsilon\|^p +\lambda \mu^p \|d\|^p -\lambda\|z\|^p.\nonumber \end{eqnarray} As a consequence, for ${V}(\xi)=V_s(\xi)+V_{c,\lambda}(\xi)$ it follows from Assumption \ref{assumption_ISS}, (\ref{lem_error_eq}) and (\ref{W_dot}) that \begin{eqnarray} {{V}}(\xi({\hat{t}_k}))-{{V}}(\xi({t_k}))\leq -(c_1-\frac{\lambda^q\bar{c}_3^q}{q}-ac)\int_{{t_k}}^{{\hat{t}_k}}{\|\xi(\tau)\|^p}d\tau \nonumber \\ +(\lambda\mu^p+c_3+bc) \int_{{t_k}}^{{\hat{t}_k}}{\|d(\tau)\|^p}d\tau-\lambda\int_{{t_k}}^{{\hat{t}_k}}{\|z(\tau)\|^p}d\tau ~~\nonumber \\ \leq\lambda\mu_d^p \int_{{t_k}}^{{\hat{t}_k}}{\|d(\tau)\|^p}d\tau -\lambda\int_{{t_k}}^{{\hat{t}_k}}{\|z(\tau)\|^p}d\tau,~~~~~~~~~\nonumber \end{eqnarray} where the last inequality follows from Remark \ref{rmk_a,b}. For $t\in[\hat{t}_k,t_{k+1})$ one can apply TC (\ref{trig_cond}) to obtain an upper bound on $\dot{{V}}$ as $\dot{{V}}(\xi) {\leq} -c_1(1-\sigma)\|\xi\|^p+(\lambda\mu^p{+}c_3)\|d\|^p-\lambda\|z\|^p+k_2{\phi_2}-\dot{\phi}_1$ where the $-\alpha_1({\phi_1})$ term is eliminated from the right-hand side since $\phi_1$ is non-negative.
It then follows that $\dot{{V}}(\xi) \leq \lambda\mu_d^p\|d\|^p-\lambda\|z\|^p+k_2\delta_k-\dot{\phi}_1$ and hence \begin{eqnarray} {V}(\xi({t_{k+1}}))-{V}(\xi({\hat{t}_k}))\leq \lambda \mu_d^p \int_{{\hat{t}_k}}^{{t_{k+1}}}{\|d(\tau)\|^p}d\tau\nonumber \\ -\lambda\int_{{\hat{t}_k}}^{{t_{k+1}}}{\|z(\tau)\|^p}d\tau +k_2\int_{{\hat{t}_k}}^{{t_{k+1}}}{\delta_k(\tau)}d\tau+\hat{r}_k.\nonumber \end{eqnarray} Therefore, we may conclude \begin{eqnarray}\label{recursive2} {V}(\xi({t_{k+1}}))-{V}(\xi({t_k}))\leq \lambda\mu_d^p\int_{{t_k}}^{{t_{k+1}}}{\|d(\tau)\|^p}d\tau \nonumber \\ -\lambda\int_{{t_k}}^{{t_{k+1}}}{\|z(\tau)\|^p}d\tau+k_2\int_{{\hat{t}_k}}^{{t_{k+1}}}{\delta_k(\tau)}d\tau+\hat{r}_k.\nonumber \end{eqnarray} Applying this inequality recursively over the sampling intervals from $t_0$ up to $t\geq t_0$, the positive definiteness of ${V}$ can be employed to write \begin{eqnarray} \int_{t_0}^{t}{\|z(\tau)\|^p}d\tau \leq \mu_d^p \int_{t_0}^{t}{\|d(\tau)\|^p}d\tau+\frac{1}{\lambda}(k_2\theta_1+\theta_3+{V}(\xi_0)).\nonumber \end{eqnarray} This proves $\mathcal{L}_p$-stability of ETS (\ref{eq sys}) with $\mathcal{L}_p$-gain $\leq \mu_d$. To show asymptotic stability, let $d=0$. Following a similar process as in the proof of $\mathcal{L}_p$-stability, it can be shown that for $\lambda=0$ and any $t\geq t_0$, \begin{eqnarray} {V_s} (\xi(t)) \leq-c_1(1-\sigma)\int_{t_0}^{t}{\|\xi(\tau)\|^p}d\tau+k_2 \theta_1+\theta_3+{V_s}(\xi_0).\nonumber \end{eqnarray} This proves the ultimate boundedness of trajectories of system (\ref{eq sys}). For global asymptotic stability, it remains to show that for any $\epsilon\in\mathbb{R}^+$ there exists some $\delta\in\mathbb{R}^+$ such that if $\|\xi_0\|\leq\delta$, then $\|\xi(t)\|\leq\epsilon$ for all $t\geq t_0$ and $\lim_{t\rightarrow\infty} \xi(t)=0$. This is achieved by redefining $\delta_k(t)$ (resp., $\hat{r}_k$) as $\lambda_0 {V_s}(\xi_0)\delta_k(t)$ (resp., $\lambda_0 {V_s}(\xi_0) \hat{r}_k$) for some $\lambda_0\in\mathbb{R}^+$.
Thus, by choosing \begin{eqnarray} \delta={V}_s^{-1}\Big{(}\frac{{V_s}(\epsilon)}{1+\lambda_0(k_2 \theta_1+\theta_3)}\Big{)}\nonumber \end{eqnarray} for a given $\epsilon$, we have \begin{eqnarray} {V_s}(\xi(t))\leq-c_1(1-\sigma)\int_{t_0}^{t}{\|\xi(\tau)\|^p}d\tau+{V_s}(\epsilon),\nonumber \end{eqnarray} {\it i.e.,} $\|\xi(t)\|\leq\epsilon$ for all $t\geq t_0$. Convergence of $\xi$ to zero is easy to show and omitted due to space limitations. \end{proof} \begin{rmk}\label{affine} If instead of the affine structure (\ref{eq sys_0}), the system model is assumed to be $\dot{\xi}=f(\xi,u,d)$, our main findings, which consist of the results of Lemma \ref{semi_global} and Theorem \ref{thm_main}, remain valid provided that $\varphi_3$ in (\ref{phi_3}) is replaced with $\varphi_3(r,s) =\lambda_2 \nabla V_{c,\lambda}(r)\cdot \|s\|$. The details can be found in \cite{Me_submitted}. \end{rmk} \subsection{Inter-event time enlargement} In the sequel, we present an important feature of TC (\ref{trig_cond}) regarding the extension of inter-event times. For this purpose, we define \begin{eqnarray} \tau^*_{max}\doteq\max \{\tau^*:\bar{\rho},\chi\in\mathbb{R}^+_0\},\nonumber \end{eqnarray} which, in view of the following theorem, upper bounds the extended inter-event times. In this definition $\tau^*$ is assumed to be a function of $\bar{\rho}$ and $\chi$, as suggested by (\ref{chi}) and the dependence of $\lambda_i$, $i\in\{1,2,3\}$, on $\bar{\rho}$ (defined in Lemma \ref{lem_state_bound}). \begin{thm}\label{thm_enlarge} For any $T^\circ\in\mathbb{R}^+$ and $\tau^\circ\in [0,\tau^*_{max}]$, $\bar{\varphi}$ in (\ref{Delta}) can be designed in a way that $t_{k+1}-t_k\geq \tau^\circ$ at least for $t_{k+1}\leq T^\circ$. \end{thm} \begin{proof} To find a lower bound on the inter-event times, let us restrict the TC (\ref{trig_cond}) to $\varphi(\xi,\varepsilon)=k_2{\phi_2}$ by taking $k_1=0$.
Recalling the proof of Lemma \ref{semi_global}, where triggering happens when $\chi=1$, our goal here is to design ${\phi_2}$ so that triggering occurs for some $\chi>1$. Note that $\tau^*_{max}\geq\tau^*(1)$ by definition. Due to the continuity of $\tau^*$ in (\ref{chi}), for any $\tau^\circ\in [0,\tau^*_{max}]$ one can find $\chi^\circ$ (obviously $\geq1$) so that $\tau^\circ=\tau^*(\chi^\circ)$. It only remains to choose the TC such that $\chi\geq\chi^\circ$ at sampling instants. With the same notation as in Lemma \ref{semi_global}, let $\delta^*:={\chi^*}^2\bar{\delta}$ where $\chi^* = \chi^\circ +\frac{m_1} {m_2}{\bar{\rho}}(\chi^\circ-1)$. We redefine $\bar{\varphi}$ in (\ref{Delta}) as \begin{eqnarray} \bar{\varphi}(t)=\begin{cases} \alpha_2(\delta^*),~~~ t\in[0,T^\circ),\\ 0,~~~~~~~~~~\text{elsewhere}. \end{cases}\nonumber \end{eqnarray} This implies ${\phi_2}(t)=\delta^*$ for $t\in[0,T^\circ)$. Then, following similar lines as in the derivation of (\ref{m1m2}), the lower bound on the inter-event times can be calculated by assuming the triggering rule \begin{eqnarray} 2\|\varepsilon\|=m_1{\|\xi\|}+\chi^*m_2.\nonumber \end{eqnarray} From the definition of $\chi$ given in the proof of Lemma \ref{semi_global}, it is easy to verify that \begin{eqnarray} \chi=\frac{m_1\|\xi\|+\chi^*m_2}{m_1\|\xi\|+m_2} \geq \chi^\circ\nonumber \end{eqnarray} at triggering instants and hence the inter-event times are lower bounded by $\tau^\circ$ for $t\leq T^\circ$. \end{proof} \begin{rmk} Theorem \ref{thm_enlarge} explores one of the advantages of our proposed strategy, where the inter-sampling intervals are extended to $\tau^\circ$ for $t\in[0,T^\circ]$. The numerical example in section \ref{section examples} suggests that the average sampling time is also improved in this interval. Note that while the results are not explicitly applicable to $t>T^\circ$, numerical examples in \cite{Me_submitted} verify the efficiency of this technique for all $t\geq t_0$.
\end{rmk} \section{Example}\label{section examples} \subsection{System model (\cite{Me_submitted})} Consider the system (\ref{eq sys_1}), with $\xi=[\xi_1~\xi_2]^T$ and\footnote{This example is a Lur'e-type system, which has recently received attention in the context of event-based control \cite{Lur's_statefeedback,Cone-nonlin}.} \begin{eqnarray}\label{sys_exmp} f(\xi,d)=\begin{pmatrix} \xi_2\\-H(\xi_1)+d \end{pmatrix}, ~~g(\xi)=\begin{pmatrix} 0\\1 \end{pmatrix}, ~~h(\xi,d)=\xi_1, \nonumber \end{eqnarray} $u(t)=\gamma(\xi(t))= -\xi_2(t)$. The piecewise linear function $H:\mathbb{R}\to\mathbb{R}$ is given by: $H(r)=2r$ for $|r|\leq h^*$, $H(r)=h^*+r$ for $ r\geq h^*$ and $H(r)=-h^*+r$ for $r\leq -h^*$, for some non-negative $h^*\in\mathbb{R}_0^+$. Note that $H$ satisfies $r^2\leq rH(r)\leq 2r^2$ for any $r\in \mathbb{R}$. We study the finite-gain $\mathcal{L}_2$-stability of this system under event-based implementation of the control law. \subsection{Verification of Assumption \ref{assumption_ISS}} Letting $V_s(\xi)= \frac{\upsilon_1}{2}\xi^TP\xi+ 2\upsilon_1\int_{0}^{\xi_1}H(r)dr$, $P=[1~1;1~2]$, then (i) holds for $c_1=\frac{\upsilon_1}{2}$, $c_2=c_3=5\upsilon_1$ (see \cite{Me_submitted} for details). Here $\upsilon_1$ is the scaling factor discussed in Remark \ref{lambda}. To show (ii), we start with $\frac{1}{\upsilon_1}\dot{V}_s(\xi)=-\xi_1H(\xi_1)-\xi_2^2+(\xi_1+2\xi_2)d\leq -(1-n_1)\xi_1^2-(1-n_2)\xi_2^2+(\frac{1}{4n_1}+\frac{1}{n_2})d^2-n_1(\xi_1-\frac{d}{2n_1})^2-n_2(\xi_2-\frac{d}{n_2})^2$ for some positive $n_1$, $n_2$. Choosing $V_c(x)=\frac{1}{\upsilon_1(1-n_2)}V_s(x)$ yields $\dot{V}_c(x)\leq \mu^2|d|^2-|z|^2$ where $\mu^2=\frac{1}{1-n_2}(\frac{1}{4n_1}+\frac{1}{n_2})$. The minimum of $\mu^2$ is $4.49$, obtained for $n_1=1$, $n_2=0.47$. A less conservative bound may be found with a different choice of $V_c$.
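The minimization of $\mu^2=\frac{1}{1-n_2}(\frac{1}{4n_1}+\frac{1}{n_2})$ over $n_1,n_2\in(0,1)$ can be double-checked numerically; the brute-force grid search below (plain Python, a verification sketch rather than part of the design) reproduces the values quoted above:

```python
def mu_squared(n1, n2):
    # mu^2 = (1/(4 n1) + 1/n2) / (1 - n2), for n1, n2 in (0, 1)
    return (1.0 / (4.0 * n1) + 1.0 / n2) / (1.0 - n2)

def minimize(grid=400):
    # Grid search over the open unit square (0,1) x (0,1).
    best = (float("inf"), 0.0, 0.0)
    for i in range(1, grid):
        for j in range(1, grid):
            n1, n2 = i / grid, j / grid
            v = mu_squared(n1, n2)
            if v < best[0]:
                best = (v, n1, n2)
    return best
```

The search returns $\mu^2\approx 4.49$ at $n_1\approx 1$, $n_2\approx 0.47$; the infimum in $n_1$ is attained at the boundary $n_1=1$, since $\mu^2$ is decreasing in $n_1$.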
Finally, (iii) holds for $\bar{c}_1=\frac{\upsilon_1}{2}\lambda_{max}([5~1;1~2])$, $\bar{c}_2=\frac{1}{\upsilon_1(1-n_2)}\bar{c}_1$ and $\bar{c}_3=\frac{1}{1-n_2}(\|P\|+4)$, where $\lambda_{max}(\cdot)$ is the maximum eigenvalue. \subsection{Triggering condition} Our design criterion is to guarantee $\mu_d\leq 5$. We consider here two scenarios for $\delta_k$ in (\ref{Delta}): \begin{eqnarray} \delta^1_k(t)=D_1 e^{-\varrho_1 t},~~ \delta^2_k(t)=D_2 \frac{\varrho_2^n}{n!},~ n=\lceil \frac{t}{\bar{n}}\rceil,\nonumber \end{eqnarray} where $D_1=10$, $D_2=2$, $\varrho _1=0.05$, $\varrho_2=3$, $\bar{n}=10$. Also, we consider $\alpha_1(r)=\alpha_2(r)=r$ in (\ref{phi}). To cover the strategies discussed in Section \ref{compare}, we categorize our analysis into six cases, depending on the values of $k_1$, $k_2$, $\delta_k^1$, $\delta_k^2$: \begin{table}[H] \vspace*{-0.5em} \centering \label{} \renewcommand{\arraystretch}{.7} \begin{tabular}{ccccccc} \toprule[1.5pt] case: & ~(i)&~(ii)& ~(iii)& ~(iv)&~(v)&~(vi)\\ \midrule $(k_1,k_2)$&$~(1,1)$&$~(1,1)$ & $~(1,0)$& $~(0,1)$& $~(0,1)$ & $~(0,0)$ \\ \vspace{0em} $\delta_k$&$~\delta_k^1$&$~\delta_k^2$ & $~n/a$& $~\delta_k^1$& $~\delta_k^2$ & $~n/a$\\ \bottomrule[1.5pt] \end{tabular} \vspace{-0.5em} \end{table} Cases (i), (ii) are the general dynamic triggering scenarios with both ${\phi_1}$, ${\phi_2}$ effective in condition (\ref{trig_cond}). The role of $\phi_1$ (resp., $\phi_2$) is studied in case (iii) (resp., cases (iv), (v)). Also, case (vi) results in a static TC since both ${\phi_1}$, ${\phi_2}$ are absent. It is not difficult to verify that $\lambda_1=3$, $\lambda_2=\lambda_3=1$ in Lemma \ref{lem_lip}. Therefore, we may choose $\lambda=4.7\times10^{-3}$, $\upsilon_1=3.6\times 10^{-3}$ (which satisfy the required bounds on $\lambda$ given in Remark \ref{lambda}) and obtain $\hat{\tau}=8.9\times10^{-3}$ from (\ref{tau_hat}). Finally, we take $\bar{\delta}=10$, $r_k=0$, $\hat{r}_k=\phi_1(\hat{t}_k^-)$, $s_k=12.5$.
\subsection{Numerical simulation} The signal $d(t)$ follows a zero-mean Gaussian distribution with variance $1$ over $t\in[0,100)$ and is zero everywhere else. We also take $h^*=0.3$ and run the simulation for $100$ initial conditions uniformly distributed in a circle of radius $1$ over $100$ seconds, and finally average the results. The plots are provided for the initial condition $\xi_0=(\sin(\frac{\pi}{3}),\cos(\frac{\pi}{3}))$. \begin{figure}[H] \vspace{-0.75em} \hspace*{-0.6em} \centering \includegraphics[width=1.02\columnwidth]{1} \vspace{-1.7em} \caption{Verification of $\mathcal{L}_2$-gain.} \label{fig:ex2-1} \vspace{-1.5em} \end{figure} \begin{figure}[H] \vspace{-0.5em} \hspace*{-0.5em} \centering \includegraphics[width=1.02\columnwidth]{2} \vspace{-1.7em} \caption{Actuator signal at the triggering instants.} \label{fig:ex2-2} \vspace{-0.5em} \end{figure} \vspace*{-0em} \begin{table}[H] \vspace{-1.5em} \centering \caption{Comparison of different scenarios.} \label{tab:table_final} \renewcommand{\arraystretch}{.7} \vspace{-.5em} \begin{tabular}{ccccccc} \toprule[1.5pt] case:~~~~& (i)~~&(ii)~~& (iii)~~& (iv)~~&(v)~~&(vi)~\\ \midrule $N$ ~~~~& 3.24~~&3.25~~ & 12.9~~ & 4.34~~& 4.72~~& 18.7~\\ $\tau_{m}$$\times 10^{2}$ ~~~~& 22.3~~&14.2~~& 3.3~~ & 22.6~~& 14.8~~& 1.8~\\ \bottomrule[1.5pt] \end{tabular} \vspace{-0.5em} \end{table} Table \ref{tab:table_final} illustrates the number of triggerings ($N$) and the MIET ($\tau_m$) for the different scenarios. The values of $\tau_m$ are in msec. Comparing the different cases, it is clear that both ${\phi_1}$ and ${\phi_2}$ improve the transmission rate; however, when $k_2$ is non-zero, the number of samples and $\tau_{m}$ improve more significantly. This shows the effectiveness of ${\phi_2}$ compared to ${\phi_1}$.
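The qualitative behaviour reported above can be reproduced with a minimal event-triggered simulation. The sketch below uses forward Euler integration, $d=0$, and a simplified static rule $\|\varepsilon\|\geq \sigma\|\xi\|+\epsilon_0$ in place of the full TC; the numerical choices ($\sigma$, the step size, the offset $\epsilon_0$) are illustrative assumptions, not the parameters of the paper:

```python
import math

H_STAR = 0.3

def H(r):
    # Piecewise-linear nonlinearity; satisfies r^2 <= r*H(r) <= 2 r^2.
    if abs(r) <= H_STAR:
        return 2.0 * r
    return r + math.copysign(H_STAR, r)

def simulate(T=20.0, dt=1e-3, sigma=0.3):
    """Forward-Euler sketch of the closed loop under the static rule
    ||eps|| >= sigma*||xi|| + 1e-6 (a stand-in for the full TC)."""
    x1, x2 = math.sin(math.pi / 3), math.cos(math.pi / 3)
    s1, s2 = x1, x2                 # last sampled state xi(t_k)
    events = 0
    for _ in range(int(round(T / dt))):
        u = -s2                     # u = gamma(xi(t_k)) = -xi_2(t_k)
        x1, x2 = x1 + dt * x2, x2 + dt * (-H(x1) + u)   # d = 0
        eps = math.hypot(s1 - x1, s2 - x2)
        if eps >= sigma * math.hypot(x1, x2) + 1e-6:
            s1, s2 = x1, x2         # event: refresh the sample
            events += 1
    return (x1, x2), events
```

Running the sketch, the state converges toward the origin with only a modest number of events, far fewer than the number of integration steps, in line with the sampling counts in the table above.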
This example suggests that when trajectories of the open-loop ETS are either converging to the origin or staying bounded, since $\varepsilon$ remains bounded, an appropriate choice of $\bar{\delta}$, $\delta_k$ in ${\phi_2}$ avoids unnecessary samplings effectively. The fact that $\tau_{m} \gg \hat{\tau}$ supports the merit of our design compared to the time-regularization method, as this latter scheme often degenerates to periodic samplings (with period $\hat{\tau}$ in the case of our design) when the state is near the origin. Our method indeed outperforms the time-triggered scheme as well (with a period equal to the MIET), since the average inter-event intervals, {\it i.e.,} $\frac{100}{N}$, are much larger than the MIETs. \section{Conclusion}\label{section conclusion} This paper introduces a framework for event-triggered design by focusing on the $\mathcal{L}_p$ performance problem. Our proposed ETM is based on two dynamic variables ${\phi_1}$ and ${\phi_2}$. Indeed, ${\phi_1}$ plays the role of a dynamic TC (\cite{Girard}) and is intended to enlarge inter-event times. While ${\phi_2}$ serves to extend the inter-event times too, it also has the critical roles of (i) enabling us to analytically predict the increase of MIET for a desired period of time, and, more importantly, (ii) excluding Zeno behaviour. An interesting future research topic is to check whether an \emph{output-based} TC can be equipped with the dynamic parameters $\phi_1,\phi_2$ to enjoy the benefits offered by these parameters. Note that, contrary to dwell-time approaches where Zeno-freeness is granted a priori, the output-based generalization of our approach requires Zeno behaviour to be carefully ruled out. \section*{Appendix}\label{section appendix} \begin{newproofof} \textit{Lemma \ref{lem_state_bound}.} We shall need the following proposition, whose proof can be obtained by applying integration by parts and Assumption \ref{ass_alpha}.
\begin{proposition}\label{lem_phi-1} For ${\phi}_1$ defined in (\ref{phi}) and (\ref{IC}), we have \begin{eqnarray} \int_{t_k}^{\hat{t}_k}-e^{\nu\tau} d{\phi}_1(\tau) \leq r_k e^{\nu t_k},~~\int_{\hat{t}_k}^{t_{k+1}}-e^{\nu\tau} d{\phi}_1(\tau) \leq \hat{r}_k e^{\nu \hat{t}_k}.\nonumber \end{eqnarray} \end{proposition} Now we start from Assumption \ref{assumption_ISS}(ii) to write \begin{eqnarray} ~\dot{V}_c (\xi)\leq \mu^p\|d\|^p -\|z\|^p+\nabla V_c(\xi)g(\xi) (\gamma(\xi+\varepsilon)-\gamma(\xi)).\nonumber \end{eqnarray} Defining ${V}(\xi)=V_s(\xi)+V_{c,\lambda}(\xi)$, one can apply Assumption \ref{assumption_ISS}(i), (\ref{trig_cond}), (\ref{varphi}) to get \begin{eqnarray} \dot{{V}}(\xi)\leq -c_1(1-\sigma)\|\xi\|^p+(\lambda\mu^p+c_3)\|d\|^p+\varphi\leq ~~~~~~~~~~~~~~~~~ \nonumber\\ -c_1(1-\sigma)\|\xi\|^p+(\lambda\mu^p+c_3)\|d\|^p+k_2{\phi_2}-\alpha_1({\phi_1})-\dot{\phi}_1~~\nonumber \end{eqnarray} where the second inequality is obtained using (\ref{phi}). Letting $A={\lambda\mu_d^p{\|d\|}_\infty^p+ {k_2\|{\phi_2}\|}_{\infty}}$, we can apply Proposition \ref{prop_phi_positive} to write \begin{eqnarray} \dot{{V}} (\xi)+ \nu{V}(\xi)\leq A-\dot{\phi}_1,\nonumber \end{eqnarray} where we used ${V}(\xi)\leq(\bar{c}_1+\bar{c}_2)\|\xi\|^p$, suggested by Assumption \ref{assumption_ISS}(iii). We then conclude from Proposition \ref{lem_phi-1} that \begin{eqnarray} {V}(\xi(\hat{t}_k)) e^{\nu{{\hat{t}_k}}}\leq {V}(\xi(t_k))e^{\nu{{t}_k}}+r_k e^{\nu t_k} +A\int_{t_k}^{\hat{t}_k}e^{\nu\tau}d\tau,\nonumber~~~~ \\ {V}(\xi(t_{k+1})) e^{\nu{t_{k+1}}} \leq {V}(\xi(\hat{t}_k))e^{\nu{\hat{t}_k}}+\hat{r}_k e^{\nu \hat{t}_k}+A\int_{\hat{t}_k}^{t_{k+1}}e^{\nu\tau}d\tau.\nonumber \end{eqnarray} Adding the two inequalities and applying the result recursively over the sampling intervals up to $t\geq t_0$, Assumption \ref{ass_impose} yields ${V}(\xi(t))e^{\nu t}\leq {V}(\xi_0)+(\theta_2+\theta_3)e^{\nu t}+A\int_{t_0}^{t}e^{\nu\tau}d\tau$.
Therefore, \begin{eqnarray} {V}(\xi(t)) \leq {V}(\xi_0) e^{-\nu t}+\theta_2+\theta_3+A\int_{t_0}^{t}e^{-\nu(t-\tau)}d\tau \nonumber\\ \leq {V}(\xi_0)+\theta_2+\theta_3+\nu^{-1}A~~~\quad~~~~~~~~~~~~~\nonumber \end{eqnarray} which gives the desired result. \end{newproofof} \begin{newproofof} \textit{Proposition \ref{prop_phi_positive}.} From (\ref{trig_cond}) and (\ref{phi}), $\phi_1$ satisfies $\dot{\phi}_1 + \alpha_1({\phi_1}) + k_1{\phi_1}\geq 0$ for $t\in[t_k,t_{k+1})$. Note that ${\phi_1}(t)\equiv 0$ is a solution to $\dot{\phi}_1+\alpha_1({\phi_1})+k_1{\phi_1}= 0$. Therefore, since ${\phi_1}(t_k),{\phi_1}(\hat{t}_k)\geq 0$ it follows that ${\phi_1}(t)\geq 0$ for all $t\geq t_0$. For the second part, since $\bar{\delta}$ (resp., $\delta_k(t)$) is a solution of the $\phi_2$-dynamics in (\ref{phi}) for $t\in[t_k,\hat{t}_k)$ (resp., $t\in[\hat{t}_k,t_{k+1})$) and $s_k\geq \bar{\delta}$ (resp., $\hat{s}_k=\delta_k(\hat{t}_k)$), it follows that ${\phi}_2(t)\geq \bar{\delta}$ (resp., ${\phi}_2(t)=\delta_k(t)$) over this interval. Finally, from the positivity of $\bar{\delta}$ and $\delta_k(t)$, $\phi_2(t)\geq 0$ for all $t\geq t_0$. \end{newproofof} \begin{newproofof} \textit{Lemma \ref{lem_error}.} We first define the notation below: \begin{eqnarray} \mathcal{I}(x)=\int_{t_k}^{s}e^{\lambda_2 (s-\tau)}\|x(\tau)\| d\tau,~ \mathcal{Q}(s)=(\int_{t_k}^{s}e^{\frac{\lambda_2 q}{2} (s-\tau)} d\tau)^{\frac{p}{q}},\nonumber\\ \mathcal{J}(x)=\int_{t_k}^{s}e^{\frac{\lambda_2 p}{2} (s-\tau)}\|x(\tau)\|^p d\tau,~\mathcal{P}(s)=\int_{t_k}^{s}e^{\frac{\lambda_2 p}{2} (\tau-t_k)} d\tau.\nonumber \end{eqnarray} From the definition of $\varepsilon$ and Lemma \ref{lem_lip} we have \begin{eqnarray} \frac{d\|\varepsilon\|}{dt} \leq \|\dot{\varepsilon}\|=\|\dot{\xi}\|\leq \lambda_1\|\xi\|+\lambda_2\|\varepsilon\|+\lambda_3\|d\|,\nonumber \end{eqnarray} solving which for $\varepsilon(t_k)=0$ and $s\geq t_k$ gives $\|\varepsilon(s)\|\leq \lambda_1 \mathcal{I}(\xi)+\lambda_3 \mathcal{I}(d)$.
Then, from Lemma \ref{lem_1}(i), we conclude \begin{eqnarray} \|\varepsilon(s)\|^p \leq 2^p(\lambda_1^p {\mathcal{I}}^p(\xi){+} \lambda_3^p {\mathcal{I}}^p(d) ) {\leq} 2^p{\mathcal{Q}}(s) (\lambda_1^p{\mathcal{J}}(\xi){+}\lambda_3^p {\mathcal{J}}(d) )\nonumber \end{eqnarray} where the last inequality is obtained using Lemma \ref{lem_1}(ii). It is then straightforward to check that for $t \geq t_k$, $\int_{t_k}^{t}{\mathcal{J}}(\xi) ds\leq {\mathcal{P}}(t) \int_{t_k}^{t}\|\xi(\tau)\|^p d\tau$ and hence conclude \begin{eqnarray} \int_{t_k}^{t}\|\varepsilon(s)\|^pds \leq 2^p\int_{t_k}^{t} {\mathcal{Q}}(s)(\lambda_1^p{\mathcal{J}}(\xi)+\lambda_3^p{\mathcal{J}}(d))ds ~~~~~~~~~ \nonumber \\ ~~~~~~\leq 2^p{\mathcal{Q}}(t){\mathcal{P}}(t)(\lambda_1^p\int_{t_k}^{t}\|\xi(\tau)\|^p d\tau+\lambda_3^p\int_{t_k}^{t}\|d(\tau)\|^p d\tau). \nonumber \end{eqnarray} The proof is then completed by taking $t=\hat{t}_k$, since $\psi(\hat{\tau},\lambda_2)=2^p{\mathcal{Q}}(\hat{t}_k){\mathcal{P}}(\hat{t}_k)$. \end{newproofof} \bibliographystyle{IEEEtranS}
\section{Introduction} It is well known that the fields of invariants of the triangular transformation groups of affine varieties are rational (see \cite{M, Pu, V}). In this paper, for special instances, we present systems of free generators. For this purpose we introduce the notion of a $U$-projector (a homomorphism of the algebra $K[X]$ to the field of invariants $K(X)^U$ identical on $K[X]^U$). In Section 2, we verify the existence of a $U$-projector and present its general construction. Furthermore, we show that applying the $U$-projector to a suitable system of functions $b_1,\ldots, b_m$, one can obtain a system of functions $P(b_1), \ldots, P(b_m)$ that freely generates the field of $U$-invariants. Notice that the general $U$-projector construction is realized by induction on the length of a chain of ideals. Since this length may be rather large, the method does not provide an explicit formula for the $U$-projector. Moreover, a priori, a $U$-projector is not unique. In Sections 3, 4, 5, we return to the problem of $U$-projector construction in special cases. Our goal is to improve the above $U$-projector construction to make it more precise. We also plan to choose the system of functions $b_1,\ldots, b_m$ such that the system $P(b_1), \ldots, P(b_m)$ freely generates the field of $U$-invariants. The main results are formulated in the theorems \ref{acac},~ \ref{gen},~ \ref{atwo}, ~\ref{athree},~\ref{afour}. Notice that in the paper \cite{VPan} a different approach is presented for the description of generators of the field of $U$-invariants for the adjoint representation. \section{The general construction of $U$-projector} Let ${\mathfrak u}$ be a nilpotent Lie algebra over a field $K$ of characteristic zero, ~~ $U=\exp(\ux)$ the corresponding group, $\Ac$ a commutative associative finitely generated algebra over the field $K$ without zero divisors, and ~$\Fc$ the field $\mathrm{Frac}(\Ac)$.
Let $D$ be a homomorphism of the Lie algebra $\ux$ into the Lie algebra of locally nilpotent derivations of the algebra $\Ac$. Then the group $U$ acts on $\Ac$ by the formula $g(a)=\exp D_x(a)$,~ $g=\exp(x)$. The ring (the field) of $U$-invariants coincides with the ring (the field) of $\ux$-invariants. The field of $U$-invariants $\Fc^U$ is the field of fractions of the ring of $U$-invariants $\Ac^U$ ~\cite[Theorem 3.3]{VP}. It is known that the field $\Fc^U$ is a pure transcendental extension of the main field $K$ (see \cite{M, Pu, V}). By definition, a $U$-\emph{projector} is an arbitrary homomorphism $P:\Ac\to \Fc^U$ identical on $\Ac^U$. We are going to present a general construction of a $U$-projector. One can construct a system of free generators of the field $\Fc^U$ in terms of a $U$-projector. Fix a chain of ideals $\ux=\ux_n\supset\ux_{n-1}\supset\ldots\supset\ux_1\supset\ux_0=\{0\}$, where $\mathrm{codim}(\ux_{i}, \ux_{i+1})=1$. For each pair $\ux_i\supset \ux_{i-1}$, the subalgebra of invariants $\Ac^{\ux_i}$ is contained in $\Ac^{\ux_{i-1}}$, and $\Ac^{\ux_0}=\Ac$. Let ${i_1}$ be the least number such that $\Ac^{\ux_{i_1}}\ne \Ac$. Fix $x_{i_1}\in\ux_{i_1}\setminus \ux_{{i_1}-1}$. \\ \Lemma\Num\label{aaa}. There exist elements $a_{1,1}\in \Ac$,~ $a_{1,0}\in \Ac^{\ux}$, $a_{1,0}\ne 0$ obeying \begin{equation} \label{dxa} D_{x_{i_1}}(a_{1,1})= a_{1,0}.\end{equation} \Proof. Since $D_{x_{i_1}}$ is a locally nilpotent derivation and $\Ac^{\ux_{i_1}}\ne\Ac$, there exist $a_{1,1}\in \Ac$ and $a_{1,0}\in \Ac^{\ux_{i_1}}$,~ $a_{1,0}\ne 0$, satisfying (\ref{dxa}); indeed, by the minimality of $i_1$ we have $\Ac^{\ux_{i_1-1}}=\Ac$, so it suffices to pick $a_{1,1}$ with $D_{x_{i_1}}(a_{1,1})\ne 0$ and $D^2_{x_{i_1}}(a_{1,1})=0$. It is sufficient to prove $a_{1,0}\in \Ac^\ux$. Indeed, for any $y\in\ux$ the element $[y,x_{i_1}]$ belongs to $\ux_{{i_1}-1}$. Therefore, $D_{[y,x_{i_1}]}(a)=0$ for all $a\in \Ac$.
Then \begin{equation}\label{daaa} D_y (a_{1,0})= D_yD_{x_{i_1}}(a_{1,1})= D_{x_{i_1}}D_y(a_{1,1})+D_{[y,x_{i_1}]}(a_{1,1})= D_{x_{i_1}}D_y(a_{1,1}).\end{equation} The formula (\ref{daaa}) implies that the least containing $a_{1,0}$ and $D_\ux$-invariant subspace $<a_{1,0}>$ is contained in $\mathrm{Im} D_{x_{i_1}}$. Since $\ux$ is a finite dimensional nilpotent Lie algebra and every derivation $\ux$ is locally nilpotent, the subspace $<a_{1,0}>$ is finite dimensional. The representation $D_\ux$ has a triangular form in $<a_{1,0}>$. There exists a nonzero vector annihilated by all $D_y$, ~ $y\in \ux$. We may assume that it is $a_{1,0}$. $\Box$ Let $a_{1,0}$ and $a_{1,0}$ be as in lemma \ref{aaa}. The element $Q_1=a_{1,1}a_{1,0}^{-1}$ belongs to the localization $\Ac_1$ of the algebra $\Ac$ with respect to the denominator system generated by $a_{1,0}$, it obeys $D_{x_{i_1}}(Q_1)=1$. Consider the linear mapping $S_1: \Ac\to \Ac_1^{\ux_{i_1}}$ defined by \begin{equation}\label{SSS} S_1(a)= a-D_{x_{i_1}}(a)Q_1+D^2_{x_{i_1}}(a)\frac{Q_1^2}{2!}+\ldots+D^k_{x_{i_1}}(a)\frac{Q_1^k}{k!}+\ldots. \end{equation} The mapping $S_1$ is an algebra homomorphism identical on $\Ac^{\ux_{i_1}}$. One can extend the representation $D$ of the Lie algebra $\ux$ to $\Ac_1$, and each derivation $D_y$ remains locally nilpotent on $\Ac_1$. The mapping $S_1$ to an homomorphism $\Ac_1$ to $\Ac_1^{\ux_{i_1}}$ identical on $\Ac_1^{\ux_{i_1}}$. The subalgebra $\Ac_1^{\ux_{i_1}}$ is invariant with respect to all $D_x,~ x\in \ux$. Substitute $\Ac$ by $\Ac_1^{\ux_{i_1}}$, and proceed as above. Let $i_2$ be least number obeying $\Ac_1^{\ux_{i_2}}\ne \Ac_1^{\ux_{i_1}}$. Choose $x_2\in \ux_{i_2}\setminus\ux_{i_2-1}$. As above there exist the elements $a_{2,1}\in \Ac_1^{\ux_{i_1}}$ and $a_{2,0}\in \Ac_1^{\ux}$, ~ $a_{2,0}\ne 0$ such that $$D_{x_{i_2}}(a_{2,1})= a_{2,0}.$$ Let $\Ac_2$ stands for localization of the algebra $\Ac$ with respect to the denominator system generated by $a_{1,0}, a_{2,0}$. 
Consider the element $Q_2=a_{2,1}a_{2,0}^{-1}$ of the algebra $\Ac_2$. Similarly to (\ref{SSS}), we construct the homomorphism $ S_2: \Ac_1^{\ux_{i_1}}\to \Ac_2^{\ux_{i_2}}$. Proceeding further we obtain the chain $n\geq i_m>\ldots>i_2>i_1\geq 1$, the systems of elements $a_{k,0}\in \Ac^\ux$ and $a_{k,1}\in \Ac$ obeying $D_{x_{i_k}}(a_{k,1})=a_{k,0}$, and the mappings $S_1, S_2,\ldots, S_m$, where each $S_k$ is a homomorphism $\Ac_{k-1}^{\ux_{i_{k-1}}}\to \Ac_k^{\ux_{i_k}}$ identical on $\Ac_{k-1}^{\ux_{i_k}}$. Denote by $\Ac_*$ the localization of the algebra $\Ac$ with respect to the denominator system generated by $a_{1,0}, a_{2,0}, \ldots, a_{m,0}$. Consider the mapping $$P=S_m\circ\cdots\circ S_2\circ S_1.$$ From all of the above we conclude the following.\\ \Theorem\Num\label{acac}. The mapping $P$ is a homomorphism of the algebra $\Ac$ into $\Ac_*^\ux$ identical on $\Ac^\ux$. That is, $P$ is a $U$-projector.\\ \Remark. Since $\{a_{k,0}\}\subset \Ac^\ux$, the projector $P$ can be extended to a projector $\Ac_*\to \Ac_*^\ux$ identical on $\Ac_*^\ux$. Let the group $U$ act rationally on the irreducible affine algebraic variety $X$ defined over the field $K$. Then the group $U$ naturally acts on the algebra of regular functions $K[X]$ by the formula $$T_gf(x)=f(g^{-1}x).$$ Any regular function on $X$ is contained in some finite dimensional invariant subspace \cite[Lemma 1.4]{VP}. The representation $D=d_e T$ of the Lie algebra $\ux$ is nilpotent in this subspace. Therefore, for every $x\in \ux$ the operator $D_x$ is a locally nilpotent derivation of the algebra $\Ac=K[X]$. As above, there exist rational functions $a_{k,0}(x)$, ~ $a_{k,1}(x)$,~~$ 1\leq k\leq m$. Notice that $\{a_{k,0}(x)\}$ are $U$-invariant; moreover, $a_{1,0}$ and $a_{1,1}$ are regular, and $a_{k,0}(x)$, ~ $a_{k,1}(x)$ belong to the localization of the algebra of regular functions with respect to the denominator subset generated by $a_{i,0}(x)$, ~ $1\leq i\leq k-1$.
Consider the $U$-invariant open subset $X_0=\{x\in X:~~ a_{k,0}(x)\ne 0, ~ 1\leq k\leq m\}$ and its subset $$\Sx=\{x\in X_0:~ a_{k,1}(x)=0,~~1\leq k\leq m\}.$$ There is the restriction mapping $\Res:K[X_0]\to K[\Sx]$. \\ \Theorem\Num\label{gen}. Assume that the system $\{a_{k,1}:~ 1\leq k\leq m\} $ generates the defining ideal of the subset $\Sx$ in the algebra $K[X_0]$. Let $b_1,\ldots, b_s\in \Ac$ be a system of elements such that $$\Res(b_1),\ldots,\Res(b_s), \Res(a_{1,0})^{\pm 1},\ldots, \Res(a_{m,0})^{\pm 1}$$ generate the algebra $K[\Sx]$. Then $P(b_1),\ldots, P(b_s),a_{1,0}^{\pm 1},\ldots, a_{m,0}^{\pm 1}$ generate the algebra of invariants $K[X_0]^U$. In particular, $P(b_1),\ldots, P(b_s),a_{1,0}^{\pm 1},\ldots, a_{m,0}^{\pm 1}$ generate the field of invariants $K(X)^U$.\\ \Proof. Let $\Res_U$ stand for the restriction of $\Res$ to the subalgebra of $U$-invariants. Since all $\Res(Q_i)$ vanish on $\Sx$, we have $$\Res=\Res_U\circ P.$$ By the assumption, $$\Res_UP(b_1),\ldots,\Res_UP(b_s),\Res_U(a_{1,0})^{\pm 1},\ldots, \Res_U(a_{m,0})^{\pm 1} $$ generate the algebra $K[\Sx]$. To conclude the proof it is sufficient to prove that $\Res_U$ is an isomorphism of the algebra $K[X_0]^U$ onto $K[\Sx]$. Indeed, the image $\mathrm{Im}(\Res_U)$ coincides with $K[\Sx]$. Let us show that $\mathrm{Ker}(\Res_U)=0$. If $f$ is a $U$-invariant and $\Res_U(f)=0$, then $\Res(f)=0$. Hence $f=\phi_1 a_{1,1} + \ldots + \phi_m a_{m,1}$ for some $\phi_1,\ldots,\phi_m\in K[X_0]$ and \begin{equation}\label{fpf} f=P(f)= P(\phi_1) P(a_{1,1}) + \ldots + P(\phi_m) P(a_{m,1}).\end{equation} Let us prove that $P(a_{k,1})=0$ for each $1\leq k\leq m$. Since the function $a_{k,1}$ is $\ux_{i_{k-1}}$-invariant, $S_i(a_{k,1})=a_{k,1}$ for all $1\leq i<k$. The formula (\ref{SSS}) implies $S_k(a_{k,1})=0$. Then $$S_kS_{k-1}\cdots S_1(a_{k,1})=S_k(a_{k,1})=0$$ for all $1\leq k\leq m$. Therefore $P(a_{k,1})=0$ for all $k$. By the formula (\ref{fpf}), we conclude $f=0$.
$\Box$ \section{$U$-projectors for the adjoint representation} The goal of this section is to present an exact construction of the $U$-projector for the adjoint representation of a reductive split group. Let $G$ be a connected reductive split group over a field $K$ of zero characteristic, ~$\gx$ be its Lie algebra, ~$\De$ be the root system with respect to a Cartan subalgebra $\hx$ (respectively, $\Dp$ be the set of positive roots), and $$\ux =\sum_{\al\in\De^+}KE_\al$$ be the standard maximal nilpotent subalgebra in $\gx$. Via the Killing form we identify $ \gx$ with $\gx^*$, and the algebra of regular functions $K[\gx]$ with the symmetric algebra $\Sc(\gx)=K[\gx^*]$. We extend the adjoint representation $\ad_x$ of the Lie algebra $\gx$ to the representation in $\Sc(\gx)$ by derivations $D_x(a)=\{x,a\}$, where $x\in\gx$, ~ $a\in \Sc(\gx)$ and $\{\cdot,\cdot\}$ is the natural Poisson bracket in $\Sc(\gx)$. If $x=E_\al$, then we write $D_\al=D_{E_\al}$. Let $\xi=\xi_1$ be one of the maximal roots in $\Dp$. Consider the subset $\De_2$ of the root system that consists of all $\al\in\De$ obeying $(\al,\xi)=0$. The subset $\De_2$ is a root system for the reductive subalgebra $\gx_2=\{x\in \gx:~ [x, E_{\xi}]=0\}$. The subalgebra $\gx_2$ contains the maximal nilpotent subalgebra $\ux_2$ spanned by $E_\al$, where $\al$ runs through the set of positive roots $\Dp_2$ in $\De_2$. The set $\Dp$ splits into two subsets $\Dp=\Gamma\cup \Dp_2$, where $\Gamma$ consists of all roots $\al$ with $(\al,\xi)>0$. The subset $\Gamma$ contains $\xi$; denote $\Gamma_0=\Gamma\setminus \{\xi\}$. For each $\al\in\Gamma_0$, there exists a unique $\al'\in \Gamma_0$ such that $\al+\al'=\xi$ (see \cite{J}). The subalgebra $\nx=\nx_1$ spanned by $\{E_\al,~~ \al\in\Gamma\}$ is isomorphic to a Heisenberg algebra, and it is an ideal in $\ux$. The element $E_{\xi}$ is annihilated by all derivations $D_x$, ~ $ x\in \ux$.
The derivations $D_x$ can be extended to derivations of the localization $\Sc'(\gx)$ of the algebra $\Sc(\gx)$ with respect to the denominator subset generated by $E_{\xi} $. The element \begin{equation}\label{Qxi} Q_{\xi}=-\frac{1}{2}H_{\xi}E_{\xi}^{-1}\in \Sc'(\gx) \end{equation} obeys the equality $D_{\xi}(Q_{\xi})=1$. Following the formula (\ref{SSS}), we construct the projector $S_\xi$ of the algebra $\Sc'(\gx)$ onto the subalgebra of $D_\xi$-invariants. For each $\al\in \Gamma_0$, the element \begin{equation}\label{Qal} Q_{\al}=-\frac{1}{N_{\al,\al'}}E_{\al'}E_{\xi}^{-1}\in \Sc'(\gx) \end{equation} obeys $D_\al(Q_{\al})=1$. As above, for each $\al\in\Gamma_0$, we construct the projector $S_\al$. The mapping \begin{equation}\label{PPP} P_1=\left(\prod_{\al\in \Gamma_0} S_\al\right)\circ S_{\xi}, \end{equation} is a homomorphism of the algebra $\Sc(\gx)$ to the subalgebra of invariants $\Sc'(\gx)^{\nx_1}$ identical on $\Sc(\gx)^{\nx_1}$. One can extend the mapping $P_1$ to a projector of $\Sc'(\gx)$ onto the subalgebra $\Sc'(\gx)^{\nx_1}$. Notice that the product in the formula (\ref{PPP}) does not depend on the ordering of the factors.\\ \Lemma\Num\label{nnn}. Let $\mx$ be a Heisenberg algebra,~ $[\mx,\mx]=Kz$, ~$\Sc(\mx)_z$ be the localization of $\Sc(\mx)$ with respect to the denominator subset generated by $z$. Let $\mx_0$ stand for a complementary subspace for $Kz$ in $\mx$. If a derivation $D$ of the algebra $\mx$ obeys $D(z)=0$ and $D(\mx_0)\subseteq \mx_0$, then there exists a unique element $b_D\in (\mx_0)^2 z^{-1}\subset \Sc(\mx)_z$ such that $D(a)=\{b_D,a\}$ for any $a\in \mx$. \\ \Proof. The proof is similar to that of \cite[Lemma 4.6.8]{Dix}. $\Box$ Every element $x\in \gx_2$ provides the derivation $D_x(a)=[x,a]$ of the Heisenberg algebra $\nx$ with $D_x(E_\xi)=0$, and $D_x$ preserves the subspace $$\nx_0=\mathrm{span}\{E_\al:~\al\in \Gamma_0\}. $$ By Lemma \ref{nnn}, there exists a unique $b_x\in \nx_0^2 E_\xi^{-1}$ such that $D_x(a)=\{b_x,a\}$.
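\Remark. For instance, let $\mx$ be the three dimensional Heisenberg algebra with basis $p,q,z$, ~$[p,q]=z$, ~$\mx_0=Kp+Kq$, and let $D$ be the derivation $D(p)=p$, ~$D(q)=-q$, ~$D(z)=0$. Then $b_D=-pqz^{-1}$: indeed, since $z$ is central, $\{-pqz^{-1},p\}=-pz^{-1}\{q,p\}=p$ and $\{-pqz^{-1},q\}=-qz^{-1}\{p,q\}=-q$.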
Then for any element $\widetilde{x}=x-b_x$ from $\Sc'(\gx)$, we obtain $[\widetilde{x}, \nx]=0$. For any $x,y\in \gx_2$, we have $$\{\widetilde{x},\widetilde{y}\} =\{x-b_x,y-b_y\} = \{x,y-b_y\} =\{x,y\} - \{x,b_y\}.$$ Since $\{x,y\}=[x,y]\in \gx_2$ and $ \{x,b_y\} \in \nx_0^2 E_\xi^{-1}$, Lemma \ref{nnn} implies $ \{x,b_y\}= b_{[x,y]}$. Hence \begin{equation}\label{thetaa} \widetilde{[x,y]} = \{\widetilde{x}, \widetilde{y}\} \end{equation} for any $x,y\in \gx_2$. The subset $\widetilde{\gx_2} = \{\widetilde{x}:~ x\in\gx_2\}$ is a Lie algebra with respect to the Poisson bracket in $\Sc'(\gx)$, and it is isomorphic to $\gx_2$. Applying (\ref{thetaa}), we obtain \begin{equation}\label{DDD} \widetilde{D_x(y)} = \widetilde{[x,y]} = \{\widetilde{x}, \widetilde{y}\} =\{x-b_x,y-b_y\} = D_x(\widetilde{y}) \end{equation} for any $x,~y\in \gx_2$. Further we proceed analogously to what was done for $\gx$: we choose a maximal root $\xi_2$ in $\Dp_2$ and thereby obtain the subset of positive roots $\Gamma_2$ and the subalgebras $\nx_2$,~ $\gx_3$ in $\gx_2$. The element $Q_{\xi_2}$ is defined similarly to $Q_\xi$, replacing $H_\xi$ by $\widetilde{H}_{\xi_2}$ and $E_\xi$ by $\widetilde{E}_{\xi_2}$ in the formula (\ref{Qxi}). The element $Q_{\xi_2}$ belongs to the subalgebra $\Sc''(\gx)$ that is the localization of $\Sc(\gx)$ with respect to the denominator system generated by $E_{\xi_1}$ and $\widetilde{E}_{\xi_2}$. Both elements $E_{\xi_1}$ and $\widetilde{E}_{\xi_2}$ are $\ux$-invariants. Applying (\ref{DDD}), we have $D_{\xi_2}(Q_{\xi_2})=1$. Following the formula (\ref{SSS}), we define the operator $S_{\xi_2}$. It is easy to show that if $a\in \Sc'(\gx)^{\nx_1}$, then $S_{\xi_2}(a)$ also belongs to $\Sc'(\gx)^{\nx_1}$. As with $S_{\xi_2}$, we define $S_\al$ for each $\al\in(\Gamma_2)_0 =\Gamma_2\setminus \{\xi_2\}$.
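\Remark. For example, for $\gx=\mathfrak{sl}_4$ with roots $\varepsilon_i-\varepsilon_j$, one has $\xi_1=\varepsilon_1-\varepsilon_4$ and $$\Gamma=\{\varepsilon_1-\varepsilon_2,~ \varepsilon_1-\varepsilon_3,~ \varepsilon_1-\varepsilon_4,~ \varepsilon_2-\varepsilon_4,~ \varepsilon_3-\varepsilon_4\},$$ so $\nx_1$ is the five dimensional Heisenberg algebra with center $KE_{\xi_1}$, the pairs $\al+\al'=\xi_1$ being $(\varepsilon_1-\varepsilon_2,\,\varepsilon_2-\varepsilon_4)$ and $(\varepsilon_1-\varepsilon_3,\,\varepsilon_3-\varepsilon_4)$. Further, $\Dp_2=\{\varepsilon_2-\varepsilon_3\}$, ~$\xi_2=\varepsilon_2-\varepsilon_3$, and $\nx_2=KE_{\xi_2}$, so that $\ux=\nx_1\oplus\nx_2$ has dimension $5+1=6$.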
The mapping \begin{equation}\label{PPPtwo} P_2=\left(\prod_{\al\in (\Gamma_2)_0} S_\al\right)\circ S_{\xi_2}, \end{equation} is a homomorphism of the algebra $\Sc(\gx)$ onto the subalgebra of invariants $\Sc''(\gx)^{\nx_2}$, and it is identical on $\Sc(\gx)^{\nx_2}$. The operator $P_2$ extends to a projector of $\Sc''(\gx)$ onto the subalgebra $\Sc''(\gx)^{\nx_2}$. If $a\in \Sc''(\gx)^{\nx_1}$, then $P_2(a)$ is invariant with respect to all $D_x$, ~$x\in\nx_1\oplus\nx_2$. Therefore, the mapping $P_2\circ P_1$ is a homomorphism of the algebras $$\Sc(\gx)\to\Sc''(\gx)^{\nx_1\oplus\nx_2},$$ and it is identical on the ${\nx_1\oplus\nx_2}$-invariants in $\Sc(\gx)$. Continuing the process, we obtain the chain of positive roots $\xi=\xi_1, \xi_2, \ldots, \xi_m$, which is referred to as a \textit{Kostant cascade}. The maximal nilpotent subalgebra $\ux$ decomposes into the sum of Heisenberg subalgebras $$\ux=\nx_1\oplus\nx_2\oplus\cdots\oplus \nx_{m}$$ with $[\nx_i,\nx_j]\subset\nx_i$ for all $i<j$. We define the chain of subalgebras $\gx=\gx_1\supset \gx_2\supset\ldots\supset \gx_m$ with the root systems $\De=\De_1\supset\De_2\supset\ldots\supset \De_m$; each $\xi_i$ is a maximal root in $\De_i$. By induction on $i$, we define the projectors $P_1, ~ P_2 ,\ldots, P_m$ such that each product $P_i\circ\cdots\circ P_1$ is a projector of the algebra $\Sc(\gx)$ into the subalgebra of $\nx_1\oplus\cdots\oplus \nx_i$-invariants in the localization of the algebra $\Sc(\gx)$ with respect to the denominator system generated by $$\Xi_i=\{E_{\xi_1}, \widetilde{E}_{\xi_2}, \ldots, \widetilde{\widetilde{E}}_{\xi_i}\}.$$ Take $\Xi=\Xi_m$. Let $\Sc(\gx)_*$ stand for the localization of $\Sc(\gx)$ with respect to the denominator system generated by $\Xi$.\\ \Theorem\Num\label{atwo}. \\ 1) The mapping $P=P_m\circ\cdots\circ P_1$ is a homomorphism of the algebra $\Sc(\gx)$ into $\Sc(\gx)_*^U$ identical on $\Sc(\gx)^U$, i.e.
$P$ is a $U$-projector for the adjoint representation of the group $G$.\\ 2) Let $\{H_i\}$ be a basis of the orthogonal complement to the Kostant cascade in $\hx$. Then the system of elements $$\{P(E_{-\al}):~ \al\in\Dp\}\cup \{P(H_i)\}\cup \Xi$$ freely generates $\Sc(\gx)_*^U$ (it also freely generates the field of $U$-invariants). \section{$U$-projectors for arbitrary representations} In this section, we present a general scheme of construction of the $U$-projector for an arbitrary finite dimensional representation. As in the previous section, $K$ is a field of characteristic zero, ~ $\gx$ is a reductive split Lie algebra over the field $K$, ~ $G$ is a connected reductive group with the Lie algebra $\gx$, ~ $B$ is a Borel subgroup in $G$ that contains the Cartan subgroup $H$ and the maximal unipotent subgroup $U=B'$; their Lie algebras are $\bx$,~ $\hx$, ~$\ux$. Let $V$ be an arbitrary finite dimensional representation of the group $G$. This representation defines the representation $f(v)\to f(g^{-1}v)$ in the algebra $\Ac=K[V]$. The corresponding representation of the Lie algebra $\gx$ is realized in this space by the formula $$ D_x f(v)=-f(xv), ~ x\in \gx.$$ For any $x\in \ux$, the operator $D_x$ is a locally nilpotent derivation of the algebra $\Ac$. Decompose $V$ into a direct sum $V=W_0\oplus W_1$, where $W_0$ is an irreducible representation, and $W_1$ is its invariant complement. Suppose that $\dim W_0>1$. Choose a lowest vector $v_0$ in $W_0$. The stabilizer $\px^-$ of the one dimensional subspace $<v_0>$ is a parabolic subalgebra in $\gx$ containing $\hx$. Suppose that $\px^-\ne \gx$. Acting by the Cartan involution $\theta$ on $\px^-$, we obtain the parabolic subalgebra $\px$. The intersection $\gx_1=\px\cap\px^-$ is a Levi subalgebra in $\px$ (and in $\px^-$). Let $\mx=\mathrm{rad}(\px)$.
Here and further, $\gamma_1\leq \gamma_2$ is an ordering on the set of all weights such that $\gamma_2-\gamma_1$ is a sum of simple roots with nonnegative coefficients. We extend $v_0$ to a basis $v_0,v_1,\ldots, v_k,\ldots, v_n$ in $V$ as follows: \\ 1) ~if $v_i\in W_0$ and $v_j\in W_1$, then $i<j$;\\ 2) ~ let $E_{\al_1},\ldots, E_{\al_k}$ be a basis of $\mx$ over $K$; then $v_i=E_{\al_i}v_0$,~ $1\leq i\leq k$, is a basis in $\mx v_0$, and $v_{k+1},\ldots, v_n$ is a basis in the $\gx_1$-invariant complement for $<v_0>\oplus \mx v_0$ in $V$;\\ 3)~ each vector $v_i$ is a weight vector with respect to $\hx$; moreover, if $v_i, v_j\in W_0$ and $\wt(v_i)<\wt(v_j)$, then $i<j$ (i.e. $\al_i<\al_j$ implies $i<j$). Let $\omega_0,\omega_1,\ldots, \omega_k,\ldots, \omega_n$ be the dual basis. The linear form $\omega_0$ is invariant with respect to $\ux$; we extend the operators $D_x$,~ $x\in\ux$, to locally nilpotent derivations of the localization $\Ac_{\omega_0}$ of the algebra $\Ac$ with respect to the denominator system generated by $\omega_0$. Our first goal is to construct a projector $ \Ac\to \Ac_{\omega_0}^\mx$. Notice that for each $i$ the linear form $D_{E_i}(\omega_j)$ belongs to the subspace $<\omega_0,\ldots,\omega_{j-1}>$. Moreover, if $i>j$, then $D_{E_i}(\omega_j)=0$. In the case $i=j$, we have $D_{E_j}(\omega_j)=-\omega_0$. Then for the element $Q_j=-\omega_j\omega_0^{-1}$ and each $i\geq j$, we obtain \begin{equation} D_{E_i}(Q_j)=\left\{ \begin{array}{ll} 0,& \mbox{if} ~ i>j,\\1, & \mbox{if} ~ i=j.\end{array}\right. \end{equation} For each $1\leq i\leq k$, we construct the mapping $S_{\al_i}$ according to (\ref{SSS}). Define the mapping \begin{equation}\label{Pzero} P_0=S_{\al_1}\circ\cdots\circ S_{\al_k}. \end{equation} \Lemma\Num. The mapping $P_0$ is a homomorphism from $\Ac$ to $\Ac_{\omega_0}^\mx$ identical on $\Ac^\mx$; its kernel is generated (as an ideal) by $\omega_1,\ldots,\omega_k$.\\ \Proof.
One can directly verify that $S_{\al_j}(\omega_j)=0$ for all $1\leq j\leq k$. Since $D_{E_i}(\omega_j)=0$ for $i>j$, we have $S_{\al_i}(\omega_j)=\omega_j$ and $$P_0(\omega_j) = S_{\al_1}\circ\cdots\circ S_{\al_k}(\omega_j)= S_{\al_1}\circ\cdots\circ S_{\al_j}(\omega_j)=0.$$ As $P_0$ is a homomorphism of algebras, the ideal $I$ that is generated by $\omega_1,\ldots,\omega_k$ is contained in the kernel of $P_0$. On the other hand, for $j=0$ or $j>k$, the image $P_0(\omega_j)$ can be written in the form $P_0(\omega_j)=\omega_j+b$, where $b\in I$. Therefore $\mathrm{Ker}(P_0) = I$. Since $S_{\al_j}(a)=a$ for each $j$ and each $\mx$-invariant $a$, we have $P_0(a)=a$. Let us show that for any $a\in \Ac_{\omega_0}$ the image $P_0(a)$ is $\mx$-invariant. For each $1\leq s \leq k$ denote $P_0^{(s)}=S_{\al_s}\circ\cdots\circ S_{\al_k}$. We shall prove by induction on $s$, beginning from $s=k$, that $$D_{\al_s}(P_0^{(s)}(a))=\cdots = D_{\al_k}(P_0^{(s)}(a))=0.$$ Indeed, for $s=k$ we obtain $D_{\al_k}(P_0^{(k)}(a))= D_{\al_k}(S_{\al_k}(a))=0$. Suppose that the statement holds for $s+1$; let us prove it for $s$. It is easy to see that $$D_{\al_s}(P_0^{(s)}(a))= D_{\al_s}S_{\al_{s}}(P_0^{(s+1)}(a))=0.$$ Let $t>s$. By induction on $n$, one can easily prove that for elements of an arbitrary Lie algebra the following equality holds: \begin{equation}\label{xyn} xy^n=(y-\ad_y)^n(x). \end{equation} Applying (\ref{xyn}), we verify that there exist operators $L_1,\ldots, L_{k-t}$ obeying $$D_{\al_t} S_{\al_s}= S_{\al_s}D_{\al_t}+L_1D_{\al_{t+1}}+\ldots+L_{k-t}D_{\al_{k}}.$$ Then $$ D_{\al_t}(P_0^{(s)}(a)) = S_{\al_s}D_{\al_t}(P_0^{(s+1)}(a))+ L_1D_{\al_{t+1}}(P_0^{(s+1)}(a))+\ldots+L_{k-t}D_{\al_{k}}(P_0^{(s+1)}(a)). $$ By the induction assumption, $D_{\al_t}(P_0^{(s)}(a))=0$. $\Box$ Let $G_1$ be a subgroup in $G$ whose Lie algebra coincides with $\gx_1$. The projector $P_0$ is invariant with respect to $G_1$.
Indeed, since $g_1\mx g_1^{-1}=\mx$ for any $g_1\in G_1$, the projector $g_1P_0g_1^{-1}$ has the same kernel and image as $P_0$, and hence $g_1P_0g_1^{-1} = P_0$. This implies $g_1P_0(a) = P_0(g_1a)$. The group $G_1$ acts on the space $V_1 = <v_0, v_{k+1},\ldots, v_n>$. The algebra $\Ac_1=K[V_1]$ is the symmetric algebra $$\Sc(V_1^*)=K[\omega_0,\omega_{k+1},\ldots,\omega_n].$$ The homomorphism $P_0$ is an isomorphism of $K[\omega_0^{\pm 1},\omega_{k+1},\ldots,\omega_n]$ onto the algebra $\Ac_{\omega_0}^\mx$. Since $P_0$ commutes with $g_1$, the operator $P_0$ is an isomorphism of the $G_1$-representations $V_1^*$ and $P_0(V_1^*)$. Choose the lowest (for $\gx_1$) vector $v_0'$ in $V_1$, and continue the process as above. Finally, we obtain the chain of subspaces $V\supset V_1\supset\ldots \supset V_s$ and the chain of reductive subalgebras $\gx\supset\gx_1\supset\ldots \supset\gx_s$, where the $\gx_s$-action in $V_s$ is diagonalizable. We obtain the chain of lowest vectors $$v_0, v_0',\ldots, v_0^{(s-1)}\subset V_s$$ and the corresponding linear forms $$f_0=\omega_0, f_1=\omega_0',\ldots, f_{s-1} = \omega_0^{(s-1)}\subset V_s^*\subset V^*.$$ We extend $\{f_1, \ldots, f_{s-1}\}$ to a basis $\{f_1, \ldots, f_{s-1},\ldots, f_m\}$ in $V_s^*$. For each $1\leq i\leq s-1$, we define a homomorphism $P_i$ on the localization $\Ac_{\Lambda_i}$ of the algebra $\Ac$ with respect to the denominator system generated by $$\Lambda_i = \{f_0, P_0(f_1), \ldots, P_{i-1}\circ\cdots\circ P_0(f_{i-1})\}.$$ Denote $P =P_{s-1}$,~ and $\Ac_*=\Ac_{\Lambda_{s-1}}$.\\ \Theorem\Num\label{athree}. \\ 1) The mapping $P$ is a homomorphism of the algebra $\Ac$ to $\Ac_*^U$ identical on $\Ac^U$, i.e. $P$ is a $U$-projector for the representation of $G$ in $V$. \\ 2) The system of elements $\Lambda_{s-1} \cup \{P(f_i): 1\leq i\leq s-1 \} $ freely generates $\Ac_*^U$ (and it freely generates the field of $U$-invariants). \section{$U$-projector on a reductive group} Let $G$ be as above (i.e.
it is a connected reductive split group defined over a field $K$ of zero characteristic). We consider the representation of the group $G$ in the space $K[G]$ defined by the formula $R_gf(s)=f(g^{-1} s g)$. Our goal is to construct the $U$-projector for this representation in the algebra $K[G]$. Let $\gx$ be the Lie algebra of the group $G$. Let $\Pi=\{\al_1,\ldots,\al_n\}$ be the system of simple roots in $\Delta^+$, and $\Phi=\{\phi_1,\ldots,\phi_n\}$ be the system of fundamental weights, $\phi_i(\al_j)=\delta_{ij}$. In each fundamental representation $V_{i}$ with the highest weight $\phi_{i}$, we choose the highest vector $v_{i}^+$ and the lowest vector $ v_{i}^-$. In the dual space $V_i^*$, we choose the highest and the lowest vectors $l_{i}^+$ and $l_{i}^-$ such that $(v_i^-,l_i^+) = (v_i^+, l_i^-)=1$. The matrix element $d_i(g)=(gv_i^+,l_i^+)$, ~ $1\leq i\leq n$, is a $U$-invariant. Denote by $K[G]_*$ the localization of the algebra $K[G]$ with respect to the denominator system generated by $\{d_i(g):~ 1\leq i\leq n\}$. Order the positive roots $\Dp=\{\beta_1,\ldots,\beta_m\}$ so that $\beta_t<\beta_s$ implies $t<s$. Each $\beta_s\in \Dp$ is either simple (i.e. $\beta_s$ coincides with some $\al_{\nu(s)}\in \Pi$), or $\beta_s=\al_{\nu(s)}+\beta_s'$ for some $\al_{\nu(s)}\in \Pi$ and $\beta_s'\in \Dp$. For each $\beta_s$, let us fix such an $\al_{\nu(s)}\in\Pi$. To each $\beta_s \in\Dp$ we associate the matrix element $$ d_{\beta_s}(g)=(gv^+,E_{-\beta_s}l^+).$$ Then for any $x\in\ux$, we obtain \begin{equation}\label{dxm} D_xd_{\beta_s}(g)=-(xgv^+,E_{-\beta_s}l^+)+(gxv^+,E_{-\beta_s}l^+)=(gv^+,xE_{-\beta_s}l^+). \end{equation} If $x=E_\beta$, then as above we simplify the notation: $D_\beta = D_{E_\beta}$. The formula (\ref{dxm}) implies $D_{\beta_s} d_{\beta_t}(g) = 0$ if $s>t$, and $$D_{\beta_s} d_{\beta_s}(g) = \phi_{\nu(s)}(H_{\beta_s})d_{\nu(s)}(g).
$$ Then for $$Q_{\beta_s}(g)= d_{\beta_s}(g)\,(\phi_{\nu(s)} (H_{\beta_s})d_{\nu(s)}(g))^{-1},$$ we have \begin{equation}\label{qdqd}D_{\beta_s} Q_{\beta_t}(g) = \left\{\begin{array}{ll} 0,~& \mbox{if} ~ s>t,\\ 1,~& \mbox{if} ~ s=t.\end{array}\right.\end{equation} For each $1\leq s\leq m$, we construct the mapping $S_{\beta_s}$ by the formula (\ref{SSS}). Define the operator $$P=S_{\beta_1}\circ\cdots\circ S_{\beta_m}.$$ For each $\beta_s\in\Dp$, consider the matrix element $$c_{\beta_s}(g)=(gE_{-\beta_s}v^+,l^+). $$ \Theorem\Num\label{afour}. \\ 1) The operator $P$ is a homomorphism from $K[G]_*$ to $K[G]_*^U$ identical on $K[G]_*^U$, i.e. it is a $U$-projector.\\ 2) The system of rational functions $$\{d_i(g):~ 1\leq i\leq n\}~ \bigcup ~ \{P(c_{\beta_s})(g):~1\leq s\leq m\}$$ freely generates the algebra $K[G]_*^U$ (it also freely generates the field $K(G)^U$).\\ \Proof. The statement 1) follows from the above. Let us prove 2). It is sufficient to show that the selected system of functions satisfies the conditions of Theorem \ref{gen}.\\ 1) The inequalities $d_i\ne 0$, ~ $1\leq i\leq n$, define the open Bruhat cell $Bw_0B$. Let us show that the system $\{d_{\beta_s}(g): ~1\leq s\leq m\}$ generates the defining ideal $I_{w_0B}$ of the subset $w_0B$ in $Bw_0B$. The element $g$ belongs to $Bw_0B$ if $g$ can be written in the form $g=aw_0bh$, where $a=\exp(x)\in U$,~ $b=\exp(y)\in U$, ~$h\in H$, and $$x=\sum_{\al\in \Dp} x_\al E_\al,~~ y=\sum_{\al\in \Dp} y_\al E_\al.$$ The ideal $I_{w_0B}$ is generated by $\{x_\beta:~ \beta\in\Dp\}$. On the other hand, $$d_{\beta_s}(g)=(aw_0bhv^+, E_{-\beta_s}l^+) = \phi_{\nu(s)}(h)(v^-, a^{-1}E_{-\beta_s}l^+) = \phi_{\nu(s)}(h)f_s(x),$$ where $f_s(x)$ is a polynomial in $\{x_\al\}$ of the form $$f_s(x)=cx_{\beta_s} +~ \mbox{polynomial ~ in~} \{x_{\beta_t}:~ t<s\}$$ with $c\ne 0$.
Therefore, the ideal $I_{w_0B}$ is generated by $\{d_{\beta_s}(g): ~1\leq s\leq m\}$.\\ 2) Let us show that $K[w_0B]$ is generated by the restrictions of $\{d_i(g),c_{\beta_s}(g)\}$ to $w_0B$. Indeed, $K[w_0B]$ is generated by $\{\phi_i(h), y_{\beta_s}\}$. On the other hand, $d_i(w_0bh)=(w_0bhv^+, l^+)=\phi_i(h)$ and $$ c_{\beta_s}(w_0bh)=(w_0bhE_{-\beta_s}v^+,l^+)=\phi_{\nu(s)}(h)(bE_{-\beta_s}v^+,l^-)=\phi_{\nu(s)}(h)f'_s(y), $$ where $f'_s(y)$ is a polynomial in $\{y_\al\}$ of the form $$f'_s(y)=cy_{\beta_s} +~ \mbox{polynomial ~ in~} \{y_{\beta_t}:~ t<s\}$$ with $c\ne 0$. Hence $K[w_0B]$ is generated by the restrictions of $\{d_i(g),c_{\beta_s}(g)\}$ to $w_0B$. $\Box$
\section{Introduction} \label{sec:introduction} The renormalization program~\cite{weinberg} provides an insightful framework for the description of physical scales within a given problem. This assumes that the characteristic dimensional scales are sufficiently separated, as required by {\em effective field theory\/}~\cite{weinberg,polchinski}. Moreover, symmetry considerations usually furnish further analytical control over what contributing factors might be relevant for the hierarchy of scales. In addition to the well-known examples in high-energy physics and condensed matter physics, an effective renormalization of a system in molecular physics was introduced in Ref.~\cite{molecular_dipole_anomaly}, where a symmetry-centered approach was developed for the formation of dipole-bound anions by electron capture. In the relevant domain of scales, the dominant physics---governed by an inverse square potential~\cite{camblong:isp_letter}---takes a scale-invariant form known as {\em conformal quantum mechanics\/}. The central purpose of this paper is to develop an effective field theory program for the quantum anomaly of Ref.~\cite{molecular_dipole_anomaly}. Specifically, we address the role played by additional degrees of freedom---for example, the rotational ones in the molecular case. In this manner, we extensively use recent work on the renormalization and anomalous symmetry breaking of conformal quantum mechanics~\cite{camblong_CQM}. As a consequence, we will establish the following results. (i) The conformal analysis is robust and fairly insensitive to the ultraviolet and infrared physics. (ii) The effective field approach---centered on renormalization techniques---sheds light, e.g., on the properties of dipole-bound anions; this is in sharp contrast with the statements of Ref.~\cite{bawin}. (iii) The origin of a critical dipole moment for binding can be directly traced to the conformal interaction. 
In short, the predictions of the conformal framework of Ref.~\cite{molecular_dipole_anomaly} are {\em not significantly altered by the inclusion of additional degrees of freedom.\/} Moreover, a similar analysis can be applied to other problems for which the conformal quantum anomaly is relevant, for example, for the Efimov effect~\cite{efimov_effect}. \section{Conformal Quantum Mechanics and Dipole-Bound States} \label{sec:CQM_dipole-bound} In this section, we start by summarizing the results of Ref.~\cite{molecular_dipole_anomaly} for dipole-bound states in the language of effective field theory~\cite{camblong_CQM}. As we will see in the next section, the effective field approach also provides the natural connection between this work and the standard results of rotationally adiabatic theory~\cite{Garrett_dip_rot,Clary_dip_rot,desfrancois_epj:98}. \subsection{Conformal Physics of Dipole-Bound States} \label{sec:conformal_dipole-bound} The dominant part of the electron-molecule interaction can be described with a point dipole---the electron does not significantly probe radial scales smaller than the size $a$ of the molecule. Then, in three spatial dimensions, the associated anisotropic Hamiltonian reads \begin{equation} H = \frac{ p^{2}}{2 \, m_{e} } - \frac{g}{r^{2}} \, \cos \theta \; , \label{eq:ISP_Hamiltonian_unregularized_anisotropic} \end{equation} in which the coupling $g$ can be recast into a dimensionless form $ \lambda = 2 m_{e} \, g/\hbar^{2}$. Under time reparametrizations, this system displays an SO(2,1) conformal symmetry, whose breaking at the quantum-mechanical level can be interpreted as a {\em quantum anomaly\/}. As a first step, we introduce separation of variables, $\Psi (r,\Omega) = u(r) \, \Xi (\Omega)/r$, in spherical coordinates.
This leads to a scale-invariant radial equation \begin{equation} \frac{d^{2} u(r)}{dr^{2}} + \biggl[ k^{2} + \frac{\gamma (\lambda) }{r^{2}} \biggr] u(r) = 0 \; \label{eq:radial_eq_anisotropic} \end{equation} coupled to a scale-independent angular operator equation \begin{equation} \hat{A} (\lambda) \, \Xi (\Omega) = \gamma (\lambda) \, \Xi (\Omega) \; , \label{eq:angular_eq_anisotropic} \end{equation} where the eigenvalue $\gamma \equiv \gamma (\lambda) $ plays the role of a separation constant and \begin{equation} \hat{A} (\lambda) = - {\bf l}^{2} + \lambda \, \cos \theta \; , \label{eq:conformal_angular_operator} \end{equation} with ${\bf l}$ being the relative orbital angular momentum of the electron about the molecule. The problem defined by the equations above is completely characterized by the solutions of conformal quantum mechanics. \subsection{Radial Conformal Quantum Mechanics} \label{sec:radial-problem} Conformal quantum mechanics applies to the description of the radial problem. All the properties and conclusions discussed herein rely on the existence of a domain of scales in which the dominant physics is scale invariant. A symmetry-centered analysis in the relevant conformally invariant domain shows that the theory retains the SO(2,1) symmetry at the quantum level when $\gamma < 1/4$, with $\gamma = 1/4$ being a critical point of the conformal framework. The existence of a {\em conformal critical point\/} \begin{equation} \gamma^{(*)} \equiv \gamma \bigl( \lambda^{(*)} \bigr) = 1/4 \label{eq:conformal_critical_point} \end{equation} is the crucial ingredient that explains the experimental fact that electron binding by molecular anions only occurs for dipole moments greater than a critical value $p^{(*)}$~\cite{molecular_dipole_anomaly}. 
Conformal quantum mechanics is singular for $\gamma \geq 1/4$, but can be rescued by the use of renormalization, which yields {\em conformal bound states\/} with energies $ E_{ n } = E_{0} \, \exp \left( - 2 \pi n/\Theta \right) $, where $n$ is a positive integer, $E_{0}$ is the arbitrary ground-state energy, and the conformal parameter $\Theta$ is derived from the coupling according to the rule~\cite{camblong_CQM} \begin{equation} \Theta = \sqrt{\gamma - \frac{1}{4}} \; . \label{eq:Theta_definition} \end{equation} The specific value of the characteristic scale $E_{0}$ defined by the conformal tower of states is sensitive to the ultraviolet physics and cannot be predicted by a renormalization approach alone. However, the scale $E_{0}$ is not relevant in the determination of the relative values of bound-state energies, as exhibited by the {\em geometric scaling\/} \begin{equation} \frac{ E_{n'} }{ E_{n} } = \exp \left[ - \frac{2 \pi (n'-n )}{\Theta } \right] \; , \label{eq:ratios_cutoff_BS_regularized_energies_phenomenological} \end{equation} which is a remnant of the original scale invariance. In particular, the geometric ratio $e^{-2\pi/\Theta}$ of adjacent energy levels has a universal form that is {\em independent of the cutoff and impervious to the ultraviolet physics\/}. Finally, the conformal states are characterized by normalized radial wave functions of the form \begin{equation} u (r) = \kappa \, \sqrt{ \frac{ 2 \sinh \left( \pi \Theta \right) }{ \pi \Theta } } \, \sqrt{r} \; K_{i\Theta} (\kappa r) \; , \label{eq:radial_wave_function} \end{equation} where $K_{i\Theta} (z)$ is the Macdonald function of imaginary index~\cite{macdonald_function}. This is the function whose properties guarantee the universal geometric scaling~(\ref{eq:ratios_cutoff_BS_regularized_energies_phenomenological}). 
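The geometric scaling above is straightforward to illustrate numerically; in the following sketch the supercritical value $\gamma = 1$ and the reference energy $E_0$ are arbitrary choices for illustration only.

```python
import math

# Illustration of the conformal tower: for a supercritical eigenvalue
# gamma > 1/4, Theta = sqrt(gamma - 1/4) and E_n = E_0 * exp(-2*pi*n/Theta).
gamma = 1.0                      # hypothetical supercritical value (illustrative)
Theta = math.sqrt(gamma - 0.25)
E0 = -1.0                        # arbitrary cutoff-dependent ground-state energy
levels = [E0 * math.exp(-2 * math.pi * n / Theta) for n in range(6)]

# Adjacent levels share the universal, cutoff-independent geometric ratio.
ratio = math.exp(-2 * math.pi / Theta)
assert all(abs(levels[n + 1] / levels[n] - ratio) < 1e-12 for n in range(5))
```

The ratio depends only on $\Theta$, never on $E_0$, which is the numerical counterpart of the statement that the ultraviolet physics fixes the overall scale but not the relative spectrum.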
In addition, the same function leads to an estimate of the characteristic radial size of the electron probability distribution, given by $\kappa^{-1}$, with relative ratios $\kappa_{n}/\kappa_{n'} = e^{ \pi (n'-n )/\Theta }$ exhibiting a similar kind of universal geometric scaling. In short, the generic properties of conformal quantum mechanics determine the nature of the bound states of molecular anions and are parametrized by the possible values of the conformal parameter $\Theta$. In turn, $\Theta$ is described, from Eq.~(\ref{eq:Theta_definition}), in terms of the effective coupling $\gamma = \Theta^{2} + 1/4 $, which is completely determined by the angular dependence of the interaction, through the eigenvalue equation~(\ref{eq:angular_eq_anisotropic}). This is the problem to which we now turn. \subsection{Angular Eigenvalue Equation} \label{sec:angular-problem} The angular problem for an anisotropic conformal interaction is given by Eq.~(\ref{eq:angular_eq_anisotropic}), whose secular-determinant form $ D (\gamma, \lambda) \equiv \det M (\gamma,\lambda) = 0 $ involves the infinite matrix $ M(\gamma, \lambda) = - A(\lambda) + \gamma \, \openone $, with $\openone$ being the identity matrix. In particular, in the angular momentum basis $\left| l,m \right\rangle$, the matrix elements $\left\langle l { m} | M (\gamma, \lambda) | l' { m}' \right\rangle = \delta_{mm'} M_{ll'} (\gamma, \lambda; m) $ are diagonal with respect to $m$, with tridiagonal blocks \begin{equation} M_{ll'} (\gamma, \lambda; m) = \biggl[ l(l+1) + \gamma \biggr] \, \delta_{ll'} - \lambda \biggl[ { N}_{l}(m) \, \delta_{l, l'-1} + \left( l \leftrightarrow l' \right) \biggr] \; , \label{eq:conformal_tridiagonal_Mll'} \end{equation} where $ { N}_{l}(m) = \sqrt{ [(l+1)^{2} -m^{2}]/[(2l+1)(2l+3)] } $. 
As a result, the secular determinant takes the factorized form $D(\gamma, \lambda) = \Pi_{m} D_{m}(\gamma, \lambda) $ and the eigenvalues are given by the roots of the reduced determinants $ D_{m}(\gamma, \lambda) \equiv \det \bigl[ M_{ll'} (\gamma, \lambda; m) \bigr] = 0 $, for all integer values of $m$. At this purely conformal level, for every $m$, the roots $\gamma_{h,m}$ can be arranged in a decreasing sequence: $ \gamma_{0,m} \geq \gamma_{1,m} \geq \gamma_{2,m} \geq \ldots $, with $h =0,1, \dots$, and compared against the condition for conformal criticality: $\gamma = \gamma^{(*)}=1/4$. Equation~(\ref{eq:conformal_tridiagonal_Mll'}) implies the following trends: $\gamma$ is a monotonic function with respect to both $\lambda$ and $m$, increasing with $\lambda$ and decreasing with $m$. In particular, for any finite dipole moment $p$ (i.e., finite $\lambda$), there exist only a finite number of {\em supercritical\/} values of $\gamma$; in turn, for each $\gamma$, there is an infinite {\em tower of conformal states\/}---possibly limited by the onset of nonconformal physics for long-distance scales. Hence the conformal bound states are completely characterized by the set of quantum numbers $(n,h,m)$, in which the subset $(h,m)$ determines $\gamma_{h,m}$, while $n$ labels the ordering of the conformal tower or geometric scaling. The existence of these states in the ``supercritical regime'' yields anomalous breaking of the SO(2,1) commutator algebra~\cite{camblong_CQM}. An important related question is: for the largest root $\gamma_{0,0}$, what is the value $\lambda^{(*)}$ that generates a conformal critical point? By setting $\gamma_{0,0} = \gamma^{(*)} = 1/4$, the ``principal conformal critical coupling'' becomes $ \lambda^{(*)}_{\rm conf} \approx 1.279 $ whence the required critical dipole moment is $p^{(*)}= p_{0} \, \lambda^{(*)} \approx 1.625$ D~\cite{fer:47,lev:67,molecular_dipole_anomaly}. 
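As an aside, the quoted value $\lambda^{(*)}_{\rm conf} \approx 1.279$ can be reproduced numerically by truncating the tridiagonal $m=0$ block of Eq.~(\ref{eq:conformal_tridiagonal_Mll'}) at a finite $l_{\rm max}$ and locating the sign change of its determinant at $\gamma^{(*)}=1/4$. The following sketch uses the standard three-term recurrence for tridiagonal determinants; the truncation order and the bisection bracket are our own (non-essential) choices.

```python
import math

def det_m0(lam, gamma=0.25, lmax=40):
    """Determinant of the truncated m=0 block M_{ll'}(gamma, lam; 0),
    via the three-term recurrence for tridiagonal matrices."""
    d_prev, d = 1.0, gamma          # l = 0 diagonal entry: 0*(0+1) + gamma
    for l in range(1, lmax):
        a = l * (l + 1) + gamma     # diagonal entry l(l+1) + gamma
        # off-diagonal -lam*N_{l-1}(0) = -lam * l / sqrt((2l-1)(2l+1))
        b = lam * l / math.sqrt((2 * l - 1) * (2 * l + 1))
        d_prev, d = d, a * d - b * b * d_prev
    return d

def critical_lambda(lmax=40, tol=1e-10):
    """Bisect on lam: the determinant is positive below criticality
    and negative just above it (one eigenvalue crosses 1/4)."""
    lo, hi = 1.0, 1.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if det_m0(mid, lmax=lmax) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(critical_lambda())  # close to the value 1.279 quoted in the text
```

Already a $2\times 2$ truncation gives $\lambda^{(*)} = 3\sqrt{3}/4 \approx 1.299$, and the value converges rapidly with $l_{\rm max}$ because the diagonal entries grow as $l^2$.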
Likewise, for each of the other roots $\gamma_{h,m}$, the criticality condition $\gamma_{h,m} = \gamma^{(*)} = 1/4$ defines additional, increasingly larger values $\lambda ^{(*)}_{h,m}$ of the critical dipole moment. Each of these represents the onset of a new tower of conformal states of the form~(\ref{eq:ratios_cutoff_BS_regularized_energies_phenomenological}). The sequence of critical values of the dipole moment includes $\lambda_{0,0}^{(*)} \approx 1.279; \lambda ^{(*)}_{0,1} \approx 7.58; \ldots $. However, the experimentally observed bound states~\cite{mea:84,desfrancois_prl:94} appear to be limited to the highest root $\gamma_{0,0}$ because of the characteristic order of magnitude of the molecular dipole moments realized in nature. \section{Rotational Degrees of Freedom of Dipole-Bound Anions} \label{sec:rotational_dipole-bound} We now turn, through an appropriate length-scale hierarchy, to a derivation of the connection between the approach of Refs.~\cite{bawin,Garrett_dip_rot,Clary_dip_rot,desfrancois_epj:98} and the conformal treatment of Ref.~\cite{molecular_dipole_anomaly}. \subsection{Rotationally Adiabatic Theory} \label{sec:rotationally_adiabatic} In the rotationally adiabatic theory~\cite{Clary_dip_rot}, the pseudopotential \begin{equation} \mathcal{V} (r) = - \frac{\hbar^{2}}{2 m_{e}} \, \frac{ \Gamma \bigl( \lambda; F(r) \bigr) }{r^{2}} \, G(r) \; \label{eq:effective_adiabatic_potential} \end{equation} for the radial electron wave function is an eigenvalue of the reduced Hamiltonian \begin{equation} \hat{ \mathcal{H} } = - \frac{\hbar^{2}}{2 m_{e}} \, \frac{ \hat{ \mathcal{ A} } \bigl( \lambda; F (r) \bigr) }{r^{2}} \, G(r) \; , \label{eq:adiabatic_Hamiltonian} \end{equation} and the radial function $G(r)$ can be selected by comparison with different expressions used in the literature~\cite{bawin,Garrett_dip_rot,Clary_dip_rot,desfrancois_epj:98}. In particular, the lowest eigenvalue gives the standard adiabatic potential: $\epsilon_{\rm adiab} (r) \equiv \mathcal{V} (r)$. In addition, the nontrivial part of the effective Hamiltonian of Eq.~(\ref{eq:adiabatic_Hamiltonian}) arises from the adiabatic approximation for the rotational motion of the molecule, which provides the operator~\cite{bawin,Clary_dip_rot,desfrancois_epj:98} \begin{equation} \hat{ \mathcal{ A} } \bigl( \lambda; F (r) \bigr) = - F(r) \, {\bf l}^{2} + \lambda\, \cos \theta \; , \label{eq:adiabatic_angular_operator} \end{equation} where the function $F(r) $ has the form $F(r) = 1 + \left( r/r_{B} \right)^{2} $, in which the length scale \begin{equation} r_{B} = \sqrt{ \frac{\hbar^{2} }{ 2 \, m_{e} \, B } } \; \label{eq:IR_rotational_scale} \end{equation} is associated with the rotator constant $B = \hbar^{2}/2I$ (with $I$ being the moment of inertia). Simple inspection shows that $ \hat{ \mathcal{ A} } \bigl( \lambda; F (r) \bigr) $ is a generalization of $\hat{ A } ( \lambda )$, in which the replacement ${\bf l}^{2} \rightarrow F(r) \, {\bf l}^{2} $ is made; therefore, their angular operator structures are identical. Using again the orbital angular momentum basis $\left| l,m \right\rangle$ of the electron, the eigenvalue $\Gamma \equiv \Gamma(\lambda; F)$ of $\hat{ \mathcal{ A} } ( \lambda; F ) $ can be found from the secular equation \begin{equation} {\mathcal D}_{m} \bigl( \Gamma, \lambda; F(r) \bigr) \equiv \det \bigl[ \mathcal{M}_{ll'} \bigl( \Gamma, \lambda; m; F(r) \bigr) \bigr] = 0 \; , \label{eq:adiabatic_angular_determinant} \end{equation} where $\mathcal{M} \bigl( \Gamma, \lambda; F(r) \bigr) = - { \mathcal{ A} } \bigl( \lambda; F (r) \bigr) + \Gamma \, \openone$, so that $\mathcal{M}_{ll'} \bigl( \Gamma, \lambda; m; F(r) \bigr) $ is obtained from Eq.~(\ref{eq:conformal_tridiagonal_Mll'}) by the replacements $l(l+1) \rightarrow l(l+1) \, F(r)$ and $\gamma \rightarrow \Gamma$ in the diagonal terms. Therefore the eigenvalues arising from Eq.~(\ref{eq:adiabatic_angular_determinant}) can be labeled just as those derived from the conformal secular determinant: $\Gamma_{h,m}$. In particular, the largest one, $\Gamma_{0,0}$, leads to the standard adiabatic potential $\epsilon_{\rm adiab} (r) = - \hbar^{2} \, \Gamma_{0,0} \bigl( \lambda; F(r) \bigr) \, G(r)/(2 m_{e} \, r^{2} ) $ in Eq.~(\ref{eq:effective_adiabatic_potential}). \subsection{Separation of Scales: Renormalization Theory} \label{sec:scales&renormalization} The current reformulation of the rotationally adiabatic theory permits a direct comparison with the results of the conformal framework, to which it reduces by the use of effective field theory arguments. The reason for this lies in that, in a renormalization treatment, the phenomenological factor $G(r)$ merely amounts to an ultraviolet regulator---only needed for distances $r \alt a$, where $a$ is the size of the molecule.
In other words, the details of the position dependence of $G(r)$ are of secondary importance because $G(r) \approx 1$ for $r \agt a$ and the conformal potential effectively dominates the relevant physics. Consequently, the only significant addition to the conformal framework appears to be the inclusion of rotational degrees of freedom via the function $F(r)$. However, a careful analysis of Eq.~(\ref{eq:adiabatic_angular_determinant}) shows that the conclusions from the conformal framework are not substantially altered. The fundamental concept that underlies this surprising result---and which makes our construction successful---is the clear-cut {\em separation of scales\/}. This is the essential assumption that underlies renormalization theory~\cite{weinberg}, as described in the effective field theory language~\cite{polchinski}. Specifically, the two characteristic length scales for the molecular anions are (i) a scale of the order of the molecular size $a$; and (ii) the rotational scale $r_{B}$ of Eq.~(\ref{eq:IR_rotational_scale}), whose size can be gleaned from $I \sim M a^{2}$, with $M$ being the mass of the molecule. Then, the scale hierarchy \begin{equation} r_{B} \sim \sqrt{ \frac{M}{m_{e} } } \; a \gg a \; \label{eq:scales} \end{equation} shows that $L_{\rm UV} \sim a$, and $L_{\rm IR} \sim r_{B}$ play the role of ``ultraviolet'' and ``infrared'' scales respectively. Moreover, Eq.~(\ref{eq:scales}) provides a justification for the adiabatic approximation used in Refs.~\cite{bawin,Garrett_dip_rot,Clary_dip_rot}; remarkably, this approximation is just a statement about length scales within an effective-field-theory description of molecular physics~\cite{wilczek}. Thus the conformal treatment constitutes a satisfactory framework for the physics of dipole-bound molecular anions. This description can be further justified by introducing a systematic reduction procedure. 
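The quantitative basis for this reduction can be checked numerically: making the replacement $l(l+1) \rightarrow F\, l(l+1)$ in the angular secular problem, the top eigenvalue $\Gamma_{0,0}(\lambda; F)$ reduces to the conformal $\gamma(\lambda)$ as $F \rightarrow 1$ (the regime $a \alt r \ll r_{B}$) and decreases monotonically with $F$, i.e., rotational effects only weaken binding at large distances. This is a sketch with an illustrative coupling, not code from the references:

```python
import numpy as np

def Gamma_top(lam, F, m=0, lmax=60):
    """Top eigenvalue of lambda*cos(theta) - F*l(l+1) in the |l,m> basis:
    the rotationally adiabatic generalization of gamma_{0,m}; F = 1 is
    the purely conformal case."""
    l = np.arange(m, lmax + 1)
    diag = -F * l * (l + 1.0)
    ll = l[:-1]
    off = lam * np.sqrt(((ll + 1.0) ** 2 - m ** 2)
                        / ((2 * ll + 1.0) * (2 * ll + 3.0)))
    M = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(M)[-1]

lam = 1.3  # illustrative supercritical coupling
print(Gamma_top(lam, 1.0))                        # conformal gamma(lambda)
print(Gamma_top(lam, 2.0) < Gamma_top(lam, 1.0))  # binding weakens as F grows
```

In the two-level truncation ${\mathcal S}_{m=0}(l=0,1)$ this reduces to $\Gamma_{0,0} = -F + \sqrt{F^{2} + \lambda^{2}/3}$, which approximates the $F=1$ output above to within a few percent.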
First, the dependence of ${\mathcal V}(r)$ for $r \gg r_{B}$ plays a secondary role for the problem of criticality. This can be rigorously established by an asymptotic analysis of the determinant~(\ref{eq:adiabatic_angular_determinant}). Most importantly, the existence of a critical value and the ensuing bound states follow from the relevant scales $r \alt r_{B}$: criticality does {\em not\/} originate in the infrared sector. Second, the critical dipole moment arises from the ultraviolet boundary and can be established by a renormalization framework. Therefore the dominant physics can be extracted by considering the intermediate scales, with $ a \alt r \ll r_{B}$. In that range, $F(r) \approx 1$ and $\Gamma (\lambda; F)$ in Eq.~(\ref{eq:adiabatic_angular_determinant}) can be replaced by a constant $\gamma (\lambda) \equiv \Gamma (\lambda; 1)$. Thus, in this ``scale window,'' the adiabatic potential approximately reduces to a long-range conformal potential ${\mathcal V} (r) = - \hbar^{2} \gamma /(2 m_{e}r^{2})$. Retracing the previous steps, this reduction establishes the Hamiltonian~(\ref{eq:ISP_Hamiltonian_unregularized_anisotropic}), whose conformal symmetry is reminiscent of the corresponding description in high-energy physics~\cite{jac:72}: at sufficiently small distances the problem becomes scale invariant. Finally, when a length scale of the order $a$ is reached, ``new physics'' emerges and a more detailed treatment is in order---for which a specific form of the factor $G(r)$ would be needed. \section{Generalized Conformal Framework: Predictions and Nature of the Corrections} \label{sec:generalized_renormalization} The length-scale analysis leads to a noteworthy adjustment to the previous results: the restriction of the conformal tower of bound states to the relevant range of scales. 
This is due to the fact that the dominant physics is described by a ``conformal window'' limited by the characteristic scales $L_{\rm UV}$ and $L_{\rm IR}$, which act as ultraviolet and infrared cutoffs~\cite{camblong_CQM}. The existence of an ultraviolet boundary is directly involved in the renormalization process and drives the fundamental properties of the renormalized conformal framework. By contrast, as shown in Ref.~\cite{camblong_CQM}, the infrared boundary only restricts the range of the dominant physics. Most importantly, there are a number of predictions arising from this {\em generalized conformal framework\/}, which---with appropriate refinements---could be tested experimentally and compared against results from alternative approaches. We will illustrate these results by considering the dominant sector of the theory in the subspace ${\mathcal S}_{m=0}(l=0,1)$ of quantum numbers $l=0$ and $l=1$ for the secular determinant~(\ref{eq:adiabatic_angular_determinant}) with $m=0$, in which $\Gamma_{0,0} = - F + \sqrt{ F^{2} + \lambda^{2}/3 }$~\cite{Clary_dip_rot}. The first prediction arises directly from the existence of a conformal domain, which implies that the number of {\em conformal bound states\/} undergoes a cutoff process leading to a finite value $N_{\rm conf} $. It turns out that the approximate number \begin{equation} N_{\rm conf} \sim \frac{\Theta}{\pi} \, \ln \left( \frac{L_{\rm IR}}{L_{\rm UV}} \right) \; , \label{eq:number_conformal_states} \end{equation} which is predicted from renormalization, is also in good agreement with known bound-state estimates~\cite{Calogero,number_of_states}. For typical values of the parameters involved, the logarithmic nature of $N_{\rm conf} $ yields the generally accepted result that dipole-bound molecular anions sustain only one or two bound states. 
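For a rough numerical feel of Eq.~(\ref{eq:number_conformal_states}), take an assumed supercritical coupling $\lambda = 2$ and a molecule with $M/m_{e} \sim 10^{5}$, so that $L_{\rm IR}/L_{\rm UV} \sim r_{B}/a \sim \sqrt{M/m_{e}}$; the conformal parameter is $\Theta = \sqrt{\gamma - 1/4}$, evaluated here in the dominant subspace ${\mathcal S}_{m=0}(l=0,1)$. All input numbers are illustrative assumptions:

```python
import math

# Assumed illustrative inputs: dimensionless dipole coupling lam = 2.0,
# and a molecule with M/m_e ~ 1e5, so L_IR/L_UV ~ r_B/a ~ sqrt(M/m_e).
lam = 2.0
gamma = -1.0 + math.sqrt(1.0 + lam**2 / 3)   # dominant subspace S_{m=0}(l=0,1)
Theta = math.sqrt(gamma - 0.25)              # supercritical: gamma > 1/4
ratio = math.sqrt(1e5)                       # L_IR / L_UV
N_conf = (Theta / math.pi) * math.log(ratio)
print(round(N_conf, 2))   # O(1): consistent with one or two bound states
```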
Therefore, in contrast with the claims of Ref.~\cite{bawin}, our approach shows that the presence of a conformal domain is the actual {\em cause for the existence of bound states and of the critical dipole moment\/}. The second important prediction of the generalized renormalization framework consists of corrections to the critical value $\lambda^{(*)}$. Within the effective-field reduction, as a zeroth-order approximation, Eq.~(\ref{eq:adiabatic_angular_determinant}) [with $F(r) \approx 1$] provides the required critical dimensionless dipole moment $\lambda^{(*)}_{\rm conf}$, which is purely conformal in nature. Broadly speaking, when a dipole moment is sufficiently different from the critical value, the predictions of the conformal framework are remarkably accurate. However, very near criticality, $\Theta \sim 0 $ and $\kappa \sim 0$; this is due to the fact that the condition of criticality amounts to the {\em emergence of a ground state from the continuum\/}. The corresponding enlarged characteristic size of the ground-state conformal wave function links the relevant scales and corrections are unavoidable in the presence of an infrared cutoff. One possible way of dealing with this is through a perturbative evaluation of $\lambda^{(*)}$ at the level of Eq.~(\ref{eq:adiabatic_angular_determinant}); nevertheless, because of the extremely long range of the wave function~(\ref{eq:radial_wave_function}), one would have to consider all orders of perturbation theory and carry out infinite resummations. An alternative, more direct estimate can be established from the emergence of the first bound state, \begin{equation} N = N_{\rm conf} + \delta = 1 \; , \label{eq:emergence_GS} \end{equation} where $ \delta = \delta_{\rm IR} + \delta_{\rm UV} $ is the partial contribution of the infrared and ultraviolet sectors to the number of states. 
The criticality condition~(\ref{eq:emergence_GS}), combined with Eq.~(\ref{eq:number_conformal_states}), can then be used to evaluate the conformal parameter $\Theta_{\rm gs}$ of the critical ground-state wave function; the fact that $\Theta_{\rm gs}$ is small but finite is due to the self-consistent restriction of the theory in the infrared. Thus the fractional correction to the critical dipole value \begin{equation} \epsilon \equiv \frac{\lambda^{(*)}}{ \lambda^{(*)}_{\rm conf} } - 1 \; \label{eq:modified_critical_dipole_def} \end{equation} can be computed from the secular equation~(\ref{eq:adiabatic_angular_determinant}), by means of Eq.~(\ref{eq:Theta_definition}), in which $\gamma = 1/4$ for the purely conformal theory, while $\tilde{\gamma} = \Theta_{\rm gs}^{2} + 1/4 $ for the theory with an infrared cutoff, so that \begin{equation} \Delta \gamma = \tilde{\gamma } - \frac{1}{4} = \Theta_{\rm gs}^{2} = 4 \pi^{2} \left( 1 - \delta \right)^{2} \, \left[ \ln \left( \frac{ r_{B} }{ a } \right)^{2} \right]^{-2} \; . \label{eq:modified_critical_dipole_basic} \end{equation} In particular, in the restriction of the theory to the dominant subspace ${\mathcal S}_{m=0}(l=0,1)$, the quantity $\epsilon$ in Eq.~(\ref{eq:modified_critical_dipole_def}) becomes \begin{equation} \epsilon = \sqrt{ [ 1 + 4 (\tilde{\gamma} -\gamma ) ] \, [ 1 + \frac{4}{9} (\tilde{\gamma} -\gamma ) ] \, } \, - 1 \approx \frac{20}{9} (\tilde{\gamma} -\gamma ) \; , \label{eq:restricted_subspace_epsilon} \end{equation} where the approximate equality arises from the relatively small values of $ (\tilde{\gamma} -\gamma )$, which are due to the {\em separation of scales\/}. Consequently, Eqs.~(\ref{eq:modified_critical_dipole_basic}) and (\ref{eq:restricted_subspace_epsilon}) imply that \begin{equation} \epsilon \approx \frac{20}{9} \, \Theta_{\rm gs}^{2} \approx \frac{80 \, \pi^{2}}{9} \, \left( 1 - \delta \right)^{2} \, \left[ \ln \left( \frac{ r_{B} }{ a } \right)^{2} \right]^{-2} \; . 
\label{eq:modified_critical_dipole} \end{equation} As expected, this correction becomes more prominent for decreasing values of $I$ and increases the critical dipole from its ideal conformal value. In addition, the fractional state contribution $\delta$ in the compensatory factor $(1-\delta)$ can be determined using standard estimates for the number of bound states~\cite{number_of_states}. With these building blocks, Eq.~(\ref{eq:modified_critical_dipole}) gives the leading dependence of the critical value $\lambda^{(*)}$ with respect to the infrared scale through $\ln \rho$, with $\rho \equiv I/(m_{e} a^{2}) = r_{B}^{2}/a^{2}$ being the dimensionless molecular moment of inertia. The logarithmic dependence $\ln \rho$ is the {\em trademark of the underlying renormalization-induced physics\/} and explains the slow convergence of $\lambda^{(*)}$ towards $ \lambda^{(*)}_{\rm conf} $. This analysis ultimately shows that, even when rotational degrees of freedom are included in the description of this problem, renormalization is still responsible for the predicted values of $p^{(*)}$, including: (i) the existence of a critical value whose order of magnitude is given by the conformal critical point~(\ref{eq:conformal_critical_point}); and (ii) the underlying physics of the logarithmic correction~(\ref{eq:modified_critical_dipole}). \\ Most importantly, the results~(\ref{eq:number_conformal_states})-(\ref{eq:modified_critical_dipole}) are {\em universal, i.e., model-independent, within the conformal framework\/}. In addition, we acknowledge the existence of model-dependent corrections to this framework. 
For molecular dipole anions, these effects can be represented by means of a pseudopotential comprising electrostatic terms---described by the multipole expansion---combined with many-body contributions of two kinds: a polarization part and an exchange part due to the Pauli exclusion principle~\cite{Garrett_pseudopot,desfrancois_epj:98,desfrancois_prl:94,desfrancois_ijmpb:96,clusters}. The long-distance electrostatic and polarization terms do not substantially affect the rotational infrared corrections to the purely conformal problem because their coupling constants are proportional to $a^{2}$ (with the relevant rotational degrees of freedom being proportional to $r_{B}^{2}$, and $r_{B} \gg a$)~\cite{desfrancois_epj:98,desfrancois_ijmpb:96}. The short-distance behavior, which contributes to the ultraviolet physics with a scale of the order of $L_{\rm UV} \sim a$, involves electrostatic and exchange many-body effects~\cite{desfrancois_epj:98,desfrancois_ijmpb:96}. In the case of the exchange effects, the characteristic scale is determined by the overlap of orbitals associated with tightly bound electrons, and the corresponding repulsive core is highly dependent on the nature of the molecular species~\cite{exchange}, with $\delta_{\rm UV}<0$. This negative value partially compensates the positive term $\delta_{\rm IR}$ and favors the agreement with the observed critical dipole moment in complex molecular species. Consequently, the scale analysis confirms the remarkable fact that {\em the dipole-bound anionic state exists primarily due to the conformal interaction\/}~\cite{SGS:99}. One of the simplest characterizations of these model-dependent corrections is afforded by the dominant limiting infrared behavior of the rotationally adiabatic theory of Ref.~\cite{Clary_dip_rot}, which yields $\delta \approx \delta_{\rm IR} \approx \sqrt{6} \, \lambda^{(*)}_{\rm conf} \, \left( 1 + \epsilon \right)/(3 \pi) $. 
With these assignments, introducing the parameters $c = \left[ \left( \sqrt{6} \, \lambda^{(*)}_{\rm conf}/(3 \pi) \right)^{-1} - 1 \right]^{-1} \approx 0.498$, $A= 80 \pi^{2}L^{-2}/[9(c+1)^{2}]$, and $L = \ln \rho $, the fractional correction to the dipole moment becomes $\epsilon \approx \left\{ [1+1/(2cA)] - \sqrt{ [1+1/(2cA)]^{2} -1} \right\}/c$; for example, for various values of the dimensionless molecular moment of inertia: $\rho = 2 \times 10^{8}$, $\rho = 2 \times 10^{6}$, and $\rho = 4 \times 10^{4}$, the corresponding fractional corrections are, respectively, $\epsilon \approx 0.11$, $\epsilon \approx 0.16$, and $\epsilon \approx 0.26$~\cite{epsilon_garrett}. Finally, let us consider another universal prediction for an experimental realization with at least two conformal bound states~\cite{third_state}. For such a system, Eq.~(\ref{eq:ratios_cutoff_BS_regularized_energies_phenomenological}) yields the ratio $E_{1}/E_{0} = \exp \left( - 2 \pi/\Theta \right) $, from which the relative value of the dipole moment, compared to the critical dipole, is \begin{equation} \frac{\lambda}{ \lambda^{(*)} } - 1 \approx \frac{20}{9} \, \Theta^{2} = \frac{80 \, \pi^{2}}{9} \, \left[ \ln \left( \frac{E_{1}}{E_{0}} \right) \right]^{-2} \; , \label{eq:critical_dipole_inverted} \end{equation} which can be derived with the restriction to ${\mathcal S}_{m=0}(l=0,1)$, and supplemented by critical-dipole corrections just as in Eq.~(\ref{eq:modified_critical_dipole}). This ``inversion'' makes a simple prediction, based solely on conformal quantum mechanics, which can be explicitly compared against the improved critical value~(\ref{eq:modified_critical_dipole}), using the known dipole moment $\lambda$ for the given polar molecule. In essence, this is a test of the residual scale invariance of the geometric scaling~(\ref{eq:ratios_cutoff_BS_regularized_energies_phenomenological}) of the conformal tower of states. 
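The closed form just quoted is straightforward to evaluate. The sketch below (with $\lambda^{(*)}_{\rm conf} \approx 1.279$ as input; all molecular numbers illustrative) reproduces the magnitude and slow logarithmic variation of the quoted corrections (small residual differences reflect rounding and the treatment of $\delta$), and also evaluates the ``inversion''~(\ref{eq:critical_dipole_inverted}) for a hypothetical energy ratio $E_{1}/E_{0} = 10^{-9}$:

```python
import math

lam_conf = 1.279                          # principal conformal critical coupling
u = math.sqrt(6.0) * lam_conf / (3.0 * math.pi)
c = 1.0 / (1.0 / u - 1.0)                 # ~0.498

def eps_of_rho(rho):
    """Fractional critical-dipole correction for dimensionless moment of
    inertia rho = I/(m_e a^2) = (r_B/a)^2."""
    L = math.log(rho)
    A = 80.0 * math.pi**2 / (9.0 * (c + 1.0) ** 2 * L**2)
    x = 1.0 + 1.0 / (2.0 * c * A)
    return (x - math.sqrt(x**2 - 1.0)) / c

for rho in (2e8, 2e6, 4e4):
    print(rho, round(eps_of_rho(rho), 3))

# "Inversion" for a hypothetical two-bound-state system with E1/E0 = 1e-9:
print(round(80.0 * math.pi**2 / 9.0 / math.log(1e-9) ** 2, 3))
```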
\section{Conclusions} \label{sec:conclusions} In conclusion, the central concept put forward in this paper is the {\em anomalous emergence of bound states via renormalization\/} for a system with a conformally invariant domain whose ultraviolet boundary dictates binding. The ensuing quantum symmetry breaking within this framework captures the essence of the observed critical dipole moment for the formation of dipole-bound anions. Moreover, the tools developed in this paper, as exemplified by Eqs.~(\ref{eq:number_conformal_states})-(\ref{eq:critical_dipole_inverted}), show that this conformal framework: (1) permits the extraction of {\em universal properties\/} for physical problems with a conformally invariant domain; and (2) provides a description of dipole-bound anions in which model-dependent and model-independent contributions can be conveniently organized. In principle, this generalized conformal framework could be used as the starting point of a systematic approximation scheme for the description of dipole-bound molecular anions. The estimate~(\ref{eq:modified_critical_dipole}) is a typical illustration of this: its numerical coefficients could be further refined by an improved {\em matching\/} of the conformal domain with the infrared and ultraviolet sectors, as well as by considering higher orders (with respect to $l$). Thus our problem is similar to that encountered in many other areas of physics, in which a zeroth-order approximation captures the essential ingredients, which are subsequently improved upon by a variety of approximation techniques. 
Most intriguingly, our approach exhibits many similarities with the recently developed chiral-Lagrangian program for nuclear physics~\cite{weinberg_nuclear,ordonez}, in which the underlying chiral symmetry from QCD provides a guiding principle within a power-counting scheme that selects the terms in the Lagrangian for nucleons and pions---with the first terms capturing the dominant, model-independent contributions. Likewise, our conformal framework, based on the SO(2,1) invariance and the use of effective-field theory concepts, is a discriminating scheme to elucidate the dominant model-independent features of the molecular anions and similar systems with a conformally invariant domain; in this context, it would be interesting to develop the analog of the chiral power-counting scheme. \acknowledgments{This work was supported by CONICET, ANPCyT, the University of San Francisco Faculty Development Fund, and by the National Science Foundation under Grants No.\ 0308300 and No.\ 0308435.}
\section*{Abstract} {\bf The $\boldsymbol{t\SP J}$ model is believed to be a minimal model that may be capable of describing the low-energy physics of the cuprate superconductors. However, although the $\boldsymbol{t\SP J}$ model is simple in appearance, obtaining a detailed understanding of its phase diagram has proved to be challenging. We are therefore motivated to study modifications to the $\boldsymbol{t\SP J}$ model such that its phase diagram and mechanism for d-wave superconductivity can be understood analytically without making uncontrolled approximations. The modified model we consider is a $\boldsymbol{t'\SP J_z\SP V}$ model on a square lattice, which has a second-nearest-neighbor hopping $\boldsymbol t'$ (instead of a nearest-neighbor hopping $\boldsymbol t$), an Ising (instead of Heisenberg) antiferromagnetic coupling $\boldsymbol{J_z}$, and a nearest-neighbor repulsion $\boldsymbol V$. In a certain strongly interacting limit, the ground state is an antiferromagnetic superconductor that can be described exactly by a Hamiltonian where the only interaction is a nearest-neighbor attraction. BCS theory can then be applied with arbitrary analytical control, from which nodeless d-wave or s-wave superconductivity can result. } \section{Introduction} The $t\SP J$ and Hubbard models have been studied extensively as toy models for high-temperature superconductivity in the cuprate superconductors \cite{KeimerKivelsonRev2015,CarlsonKivelsonConcepts,LeeWenRev2004,ScalapinoRev2012}. However, the ground states of these models and materials are often frustrated by multiple competing or intertwined orders \cite{FradkinIntertwined}. For example, in the $t\SP J$ model, the antiferromagnetic Heisenberg term $J$ results in antiferromagnetic order at half-filling; however, when the system is hole doped, the hopping of holes locally destroys the antiferromagnetic alignment. 
The competition between the $t$ and $J$ terms makes well-controlled analytical study of the $t\SP J$ model difficult. Nevertheless, one might hope to find a corner of the Hubbard or $t\SP J$ model phase diagram that exhibits superconductivity while maintaining analytical control. Although this can be done for the weakly-interacting Hubbard model \cite{KivelsonScalapinoWeakCoupling,repulsiveRev}, in the limit of strong Hubbard $U$, which corresponds to small $J$ in the $t\SP J$ model, there is evidence that superconductivity does not occur \cite{WhiteKivelsonInfiniteU,lowDensityPhaseSep,tJNagaokaState,NagaokaState}. To gain insight into the strongly-interacting regime, the large $J$ limit of the $t\SP J$ model has been studied; but this regime has been shown to be dominated by (unphysical\footnote{ Here, phase separation means that a fraction of the system is completely unfilled while the rest is full of electrons. This state is unphysical because it has an infinite energy density when the $1/r$ Coulomb repulsion is not ignored.}) phase separation \cite{lowDensityPhaseSep}. To make progress, many works have considered a large variety of modifications to the $t\SP J$ model in order to improve analytical tractability. Such modifications include explicit symmetry breaking \cite{exactBCSHubbard,exactHaldaneBCSHubbard}, large spatial dimension \cite{tJLargeD}, large $N$ \cite{hubbardLargeN}, nonlocality \cite{nonlocalKrishnamurthy,nonlocalLepori}, SYK-like nonlocality with large $N$ \cite{tJSYK}, and replacing the Heisenberg interaction $J$ with an Ising interaction $J_z$ \cite{NussinovJz,CayleyJz,ChainJz,StringJz,IsingJz,diagramJz}. In this work, our goal will be to study the simplest modification to the $t\SP J$ model (that does not enlarge the Hilbert space) such that a superconducting phase exists and can be well-understood with analytical control. 
Since the nearest-neighbor hopping frustrates the antiferromagnetic order in the $t\SP J$ model, we replace the nearest-neighbor hopping $t$ with a next-nearest-neighbor hopping $t'$ which does not compete with antiferromagnetism. To further simplify, we replace the Heisenberg interaction $J$ with an antiferromagnetic Ising interaction $J_z$.\footnote{ The $t\SP t'\SP t''\SP J_z$ model has been studied in \refcite{tttJz,tttJzHole}.} We also add a nearest-neighbor repulsion $V$ to prevent unphysical charge separation. See \figref{fig:lattice}. \begin{figure} \begin{center} \includegraphics[width=.35\textwidth]{lattice} \end{center} \caption{ A depiction of the $t'\SP J_z\SP V$ model [\eqnref{eq:HtJV}] that we study. This model includes a next-nearest-neighbor hopping $t'$ across the dashed gray links instead of a nearest-neighbor hopping $t$ across the solid black links. The model also includes an antiferromagnetic Ising interaction $J_z$ and nearest-neighbor repulsion $V$ across each solid black link. Unlike a nearest-neighbor hopping $t$, the next-nearest-neighbor hopping $t'$ does not frustrate the antiferromagnetic interaction. The red and blue arrows denote spin up and spin down fermions. } \label{fig:lattice} \end{figure} The absence of a nearest-neighbor hopping may be an unrealistic aspect of our model. However, this omission is loosely motivated since nearest-neighbor hopping is strongly suppressed in $t\SP J$-like models near half-filling when $J$ is large \cite{TrugmanHole,KaneHole,fractonPolarons}. Also note that next-nearest-neighbor hopping keeps the fermions on the same sublattice, which is a constraint that can also occur for polarons in an antiferromagnet \cite{smallPolaron,CORE,AuerbachBook}. Thus, our model could also be considered to be a toy model for polarons in an Ising antiferromagnet. 
In \secref{sec:tJV model}, we show that in a certain large $J_z$ and $V$ limit, the ground state of the $t'\SP J_z\SP V$ model [\eqnref{eq:HtJV}] is antiferromagnetic and the low-energy Hamiltonian can be exactly mapped to a Hamiltonian [\eqnref{eq:HAF}] where the only interaction is an attractive interaction. When the effective attraction is weak, the simplified model can be studied using BCS mean-field theory, which we carry out in detail. \section{$t'\SP J_z\SP V$ Model} \label{sec:tJV model} In this work, we study the $t'\SP J_z\SP V$ model on a square lattice (\figref{fig:lattice}), which has the following Hamiltonian: \begin{equation} H_{t'\SP J_z\SP V} = t' \sum_{\langle\langle ij \rangle\rangle} \sum_{s=\uparrow,\downarrow} \mathcal{P} \left( c_{is}^\dagger c_{js} + c^\dagger_{js} c_{is} \right) \mathcal{P} + J_z \sum_{\langle ij \rangle} S_i^z S_j^z + V \sum_{\langle ij \rangle} n_i n_j \label{eq:HtJV} \end{equation} with the single-occupancy constraint $n_i = n_{i\uparrow} + n_{i\downarrow} \leq 1$. The first term hops electrons diagonally between next-nearest-neighbor sites $\langle\langle ij \rangle\rangle$ while imposing the $n_i \leq 1$ constraint via the projection operator $\mathcal{P}$, which projects out $n_i = 2$ states. The second term is a nearest-neighbor antiferromagnetic Ising interaction where $S_i^z = \frac{1}{2} ( n_{i\uparrow} - n_{i\downarrow} )$. The third term is a nearest-neighbor repulsive interaction. We study $H_{t'\SP J_z\SP V}$ on a square lattice; however, many of our results readily generalize to any bipartite lattice. The model has a $U(1)^4$ symmetry resulting from conserved charge and $z$-component of spin on each sublattice. 
It is convenient to redefine the nearest-neighbor repulsion as $V = \tfrac{1}{4} J_z - V_0$ and rewrite the Hamiltonian as: \begin{equation} H_{t'\SP J_z\SP\V} = t' \sum_{\langle\langle ij \rangle\rangle,s} \mathcal{P} \left( c_{is}^\dagger c_{js} + c^\dagger_{js} c_{is} \right) \mathcal{P} + J_z \sum_{\langle ij \rangle} \left( S_i^z S_j^z + \tfrac{1}{4} n_i n_j \right) - V_0 \sum_{\langle ij \rangle} n_i n_j \label{eq:HtJv} \end{equation} We will focus on the following limit: \begin{equation} V_0 \ll |t'| \ll J_z \end{equation} with electron filling $\langle n \rangle < 1$. It is useful to consider the energy levels of two nearest-neighbor sites in the $t'=0$ limit: \begin{equation} \begin{array}{c|c} \text{state} & t'=0 \text{ energy} \\ \hline \uparrow\uparrow, \downarrow\downarrow & J_z/2 - v \\ \uparrow\!0, \downarrow\!0, 0\!\uparrow, 0\!\downarrow & 0 \\ 00 & 0 \\ \uparrow\downarrow, \downarrow\uparrow & -v \end{array} \label{eq:states} \end{equation} In the above table, $\uparrow$ and $\downarrow$ refer to spin up and down electrons, while $0$ refers to an empty site. Thus, in the large $J_z$ limit, parallel spins are strongly suppressed. We argue that the ground state never has parallel spins in the $V_0 \ll |t'| \ll J_z$ limit for sufficiently large electron fillings. This occurs because all of the eigenstates have definite $S^z$ spin on each sublattice, and the lowest energy state is a fully-polarized antiferromagnet where one sublattice has only spin-up electrons and the other has only spin-down. This is the lowest-energy symmetry sector since it minimizes the energy from the $J_z$ term and also minimizes the energy of the $t'$ term by allowing for the most electron hopping. See \figref{fig:lattice} for an example of a state in this symmetry sector. 
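The two-site energies tabulated above follow directly from the rewritten couplings: with $V = \tfrac{1}{4} J_z - V_0$ as in \eqnref{eq:HtJv} (and the table's $v$ identified as $V_0$), a quick enumeration with illustrative parameter values gives

```python
from itertools import product

J_z, V_0 = 10.0, 0.1        # illustrative values
V = J_z / 4.0 - V_0         # nearest-neighbor repulsion, V = J_z/4 - V_0

S = {'0': 0.0, 'u': 0.5, 'd': -0.5}   # S^z for empty / spin-up / spin-down
N = {'0': 0, 'u': 1, 'd': 1}          # occupation number

for si, sj in product('ud0', repeat=2):
    E = J_z * S[si] * S[sj] + V * N[si] * N[sj]
    print(si + sj, E)   # uu, dd -> J_z/2 - V_0;  ud, du -> -V_0;  else 0
```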
In \appref{app:AF}, we provide a rigorous numerical argument that the ground state is fully antiferromagnetic when $V_0 \ll |t'| \ll J_z$ and for sufficiently large electron filling: $\langle n \rangle > n_c$ where we bound $n_c < 0.265$. \pagebreak \subsection{Effective Model} \label{sec:eff model} Since the ground states are fully-polarized Ising antiferromagnets, let us consider the antiferromagnetic ground state where the A and B sublattices have only spin-up and spin-down electrons, respectively. It is then convenient to define new electron operators: \begin{equation} d_i = \begin{cases} c_{i\uparrow} & i \in \text{A} \\ c_{i\downarrow} & i \in \text{B} \end{cases} \end{equation} Within this subspace of only fully-polarized antiferromagnetic states, the $t'\SP J_z\SP\V$ model [\eqnref{eq:HtJv}] simplifies significantly: \begin{equation} H_\text{AF} = t' \sum_{\langle\langle ij \rangle\rangle} \left( d_i^\dagger d_j + d^\dagger_j d_i \right) - V_0 \sum_{\langle ij \rangle} n_i n_j \label{eq:HAF} \end{equation} That is, the ground states of the $t'\SP J_z\SP\V$ model can be described by the above Hamiltonian, $H_\text{AF}$, which only involves fermions with a next-nearest-neighbor hopping $t'$ and attractive interaction $V_0$. When $V_0 \ll t'$, we can apply BCS mean-field theory to study $H_\text{AF}$, which we work out in detail in \appref{app:MFT}. The BCS order parameter is \begin{equation} \Delta_\delta = V_0 \langle d_i d_{i+\delta} \rangle \text{ where } i \in \text{A} \end{equation} where $\delta=\hat{x},\hat{y}$. The symmetry of the order parameter can be s-wave ($\Delta_x = \Delta_y$) or d-wave ($\Delta_x = -\Delta_y$), depending on the electron filling and sign of $t'$. Since the order parameter $\Delta_\delta$ is not on-site, its Fourier transform [$\Delta_k$ in \eqnref{eq:gap function}] has nodal lines (where $\Delta_k=0$) in $k$-space for both s-wave and d-wave symmetry. 
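Where these nodal lines sit relative to the Fermi surface can be checked numerically. The sketch below assumes the next-nearest-neighbor dispersion $\varepsilon_k = 4 t' \cos k_x \cos k_y$ (with the chemical potential absorbed, so $\mu$ below measures the distance from half filling) and the d-wave form factor $\cos k_x - \cos k_y$; these are stated as assumptions consistent with the model rather than expressions taken from \appref{app:MFT}:

```python
import numpy as np

tp = 1.0  # t' > 0, used as the energy unit (assumed)

def fermi_surface(mu, nk=4000):
    """Points on eps_k = 4*tp*cos(kx)*cos(ky) = mu."""
    kx = np.linspace(-np.pi + 1e-6, np.pi - 1e-6, nk)
    c = mu / (4.0 * tp * np.cos(kx))          # candidate cos(ky)
    keep = np.abs(c) <= 1.0
    kx, ky = kx[keep], np.arccos(c[keep])
    return np.concatenate([kx, kx]), np.concatenate([ky, -ky])

def min_dwave_gap(mu):
    """Minimum of the d-wave form factor |cos kx - cos ky| on the Fermi surface."""
    kx, ky = fermi_surface(mu)
    return float(np.min(np.abs(np.cos(kx) - np.cos(ky))))

print(min_dwave_gap(-0.5))    # away from half filling: bounded away from zero
print(min_dwave_gap(-1e-3))   # approaching half filling: minimum -> 0
```

For $t' > 0$ and $\mu < 0$ (i.e., $\langle n \rangle < 1/2$), $\cos k_x$ and $\cos k_y$ have opposite signs on the Fermi surface, so the d-wave form factor is bounded below by $2\sqrt{|\mu|/4t'}$; the gap on the Fermi surface closes only as half filling is approached.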
However, if $\langle n \rangle \neq 1/2$, then the nodal lines never touch the Fermi surface for either s-wave or d-wave symmetry, as shown in \figref{fig:symmetry}. \begin{figure} \begin{center} \includegraphics[width=.4\textwidth]{d-symmetry} \includegraphics[width=.4\textwidth]{s-symmetry} \includegraphics[width=.15\textwidth]{symmetry-legend} \end{center} \caption{ The nodal lines of the order parameter [black lines where $\Delta_k=0$ in \eqnref{eq:gap function}] and the Fermi surface for various electron fillings (colored lines). The symmetry of the order parameter depends on the electron filling. When $t'>0$, the symmetry is d-wave when $\langle n \rangle < 1/2$ and s-wave when $\langle n \rangle > 1/2$. The nodal lines never touch the Fermi surface as long as $\langle n \rangle \neq 1/2$. The $t'<0$ case follows from noting that the physics is symmetric under $t' \to -t'$ and $\langle n \rangle \to 1 - \langle n \rangle$. } \label{fig:symmetry} \end{figure} In the $V_0 \ll |t'|$ limit, the BCS order parameter satisfies the standard BCS gap equation \begin{equation} |\Delta_x| = |\Delta_y| = 2\omega\, e^{-1/(g_0 V_0)} \label{eq:gap scaling} \end{equation} where $\omega$ and $g_0$ are parameters, which we calculated numerically and show in \figref{fig:BCS} as a function of the filling fraction $\langle n \rangle$. \begin{figure} \begin{center} \includegraphics[width=.4\textwidth]{BCS1} \includegraphics[width=.59\textwidth]{BCS2} \end{center} \caption{ The BCS gap equation parameters $\omega$ (left in green) and $g_0$ (right in blue) from \eqnref{eq:gap scaling}, and the density of states at the Fermi surface (right in red). The parameters are rescaled by $t'$ to make them unitless. The density of states is defined by $\text{DoS} = \int_k \delta(\varepsilon_k)$, where we absorbed the chemical potential $\mu$, which depends on $\langle n \rangle$, into the electron dispersion $\varepsilon_k$ [\eqnref{eq:gap function}].
The density of states has a log divergence at $\langle n \rangle = 1/2$ where $\text{DoS} \sim 0.05 - 0.06 \log|\langle n \rangle - \frac{1}{2}|$. } \label{fig:BCS} \end{figure} Although the density of states at the Fermi surface diverges at half filling, $g_0$ and the BCS gap $|\Delta_x|$ (in the weak interaction limit $V_0 \ll |t'|$) actually decrease as half filling is approached. This might conflict with one's intuition that a large density of states strengthens superconductivity. However, the diverging density of states occurs due to saddle points in the energy dispersion at momenta $k = (\pm\frac{\pi}{2},\pm\frac{\pi}{2})$ (black dots in \figref{fig:symmetry}), and these saddle points sit on the nodal lines of the BCS order parameter $\Delta_k$. Therefore, these states do not contribute to the BCS gap $\Delta_x$. In \appref{app:approx gap}, we mathematically confirm this argument. \pagebreak \section{Conclusion} As part of a program to identify simple and analytically tractable toy models of superconductivity \cite{Slagle,Isaev,YaoTsaiKivelson}, in this work we identify three modifications to the $t\SP J$ model, resulting in the $t'\SP J_z\SP V$ model, that allow for an analytically controlled understanding of its antiferromagnetic d-wave superconducting ground state using BCS theory. Due to the second-nearest-neighbor hopping and antiferromagnetic ground state, the onsite Hubbard repulsion effectively disappears from the effective Hamiltonian $H_\text{AF}$ [\eqnref{eq:HAF}], and the antiferromagnetic Heisenberg term leads to an effective nearest-neighbor attractive interaction $V_0$ in the antiferromagnetic ground state. Since the attractive interaction $V_0$ does not have to compete with an onsite Hubbard interaction (due to its absence in $H_\text{AF}$), the mechanism for Cooper pairing is very simple and results from a BCS description of the attractive interaction $V_0$. We then studied the small $V_0$ limit in detail using BCS theory.
We also discussed why a diverging density of states at the Fermi level does not contribute to superconductivity in our model. An interesting property of our model is the coexistence of antiferromagnetism and superconductivity. This coexistence has been studied and predicted in a number of works on (sometimes extended) Hubbard and $t\SP J$ models \cite{InuiCoexistence,FoleyCoexistence,CaponeCoexistence,HimedaCoexistence,ParkCoexistence}. Our model provides an example of such a coexistence in an analytically tractable setting. It would be interesting to combine the $t\SP J$ and $t'\SP J_z\SP V$ models into a single $t\SP t'\SP J_{xy}\SP J_z\SP V$ model to understand how universal the superconducting state we found is, to what extent it extends into the larger phase diagram, and whether it borders different superconducting states. In \appref{app:AF} we showed that when $V_0 \ll |t'| \ll J_z$ and the electron filling is greater than $n_c$ (which we bound by $n_c < 0.265$), the ground state is a fully polarized antiferromagnet. However, we did not consider the small filling case $n \ll 1$, and it is possible that sufficiently small fillings also lead to a fully polarized antiferromagnet. Nevertheless, the $t'\SP J_z\SP V$ model that we studied has a number of limitations. Although it may be applicable to the study of polarons in an Ising antiferromagnet for which a nearest-neighbor hopping is not allowed, the absence of a nearest-neighbor hopping in our model is unnatural for an electron model. Furthermore, the tractable limit of our model is a large antiferromagnetic interaction $J_z$ limit, which may not be experimentally accessible. Finally, the superconducting state that we found has large Cooper pairs (since it is described by BCS theory) and no gapless nodes (i.e. the lines where the order parameter vanishes, $\Delta_k=0$, never touch the Fermi surface).
This makes the superconducting state we found qualitatively different from more interesting superconducting states, such as the ones found in the cuprate superconductors \cite{KeimerKivelsonRev2015,CarlsonKivelsonConcepts,LeeWenRev2004,ScalapinoRev2012}. In the future, it would be interesting to identify other simple and analytically tractable models with fewer of these shortcomings, or to include more exotic physics, such as emergent gauge fields \cite{SachdevGauge,LeeGauge}, while retaining analytical control. \section*{Acknowledgements} We thank Patrick Lee and Assa Auerbach for helpful discussions. \paragraph{Funding information} KS acknowledges support from the Walter Burke Institute for Theoretical Physics at Caltech. \pagebreak \begin{appendix} \section{Saturated Antiferromagnetism} \label{app:AF} In order to reduce the $t'\SP J_z\SP\V$ model [\eqnref{eq:HtJv}] to the effective antiferromagnetic model [\eqnref{eq:HAF}], we need to show that the ground states of the $t'\SP J_z\SP\V$ model have only spin-up electrons on one sublattice and only spin-down electrons on the other sublattice. In this appendix, we argue that this is the case when \begin{equation} V_0 \ll |t'| \ll J_z \text{ and } \langle n \rangle \gtrsim 0.265 \label{eq:cond} \end{equation} To do so, we show that \eqnref{eq:cond} implies that the lowest-energy fully-polarized antiferromagnetic state has a lower energy than any state with a single flipped spin. By ``flipped spin,'' we mean an electron with a spin in the opposite direction from the antiferromagnetic order parameter. We expect that if a single spin flip costs energy, then flipping more spins will not result in a lower energy. If this expectation is true, then we have shown that the ground state is a fully polarized antiferromagnet when \eqnref{eq:cond} is satisfied.
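The energies entering this comparison reduce to free-fermion problems (filling the lowest single-particle levels of the next-nearest-neighbor hopping), and the bound below relies on the variational fact that forcing some sites to be empty can only raise a free-fermion ground-state energy. A toy sketch of both points on one decoupled sublattice (lattice size, filling, and the excluded-site pattern here are our illustrative choices, not the $200\times200$ computation of the text):

```python
import numpy as np

def nnn_hopping_matrix(Lx, Ly, tp, excluded=()):
    """Single-particle hopping matrix for hops along (+-1, +-1) on a periodic
    Lx x Ly lattice (the next-nearest-neighbor hops of the original square
    lattice, which stay on one sublattice). Sites in `excluded` are removed,
    i.e. forced to be empty."""
    keep = [(x, y) for x in range(Lx) for y in range(Ly)
            if (x, y) not in set(excluded)]
    index = {site: i for i, site in enumerate(keep)}
    H = np.zeros((len(keep), len(keep)))
    for (x, y) in keep:
        for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            nb = ((x + dx) % Lx, (y + dy) % Ly)
            if nb in index:
                H[index[(x, y)], index[nb]] = tp
    return H

def ground_energy(H, n_elec):
    """Free-fermion ground-state energy: sum of the n_elec lowest levels."""
    return float(np.sort(np.linalg.eigvalsh(H))[:n_elec].sum())

tp, Lx, Ly, Ne = 1.0, 10, 10, 30   # illustrative parameters
E_free = ground_energy(nnn_hopping_matrix(Lx, Ly, tp), Ne)
cross = [(5, 5), (4, 5), (6, 5), (5, 4), (5, 6)]   # an excluded-site cross
E_constrained = ground_energy(nnn_hopping_matrix(Lx, Ly, tp, cross), Ne)

assert E_free < 0                        # partially filled band
assert E_constrained >= E_free - 1e-9    # constraints can only raise the energy
```

The second assertion is exact (Cauchy interlacing: deleting rows and columns of a Hermitian matrix can only push each ordered eigenvalue up), which is the mechanism that makes $\widetilde{E}^\text{AF}_{N-1,0}$ below a legitimate bound.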
More precisely, assuming $V_0 \ll |t'| \ll J_z$, we numerically calculated a lower bound on the energy cost $E^\text{flip}(N)$ to flip a single electron spin for a state with $N$ electrons on a square lattice with $N_\text{sites}$ sites. Mathematically, $E^\text{flip}(N)$ is defined as \begin{align} \begin{split} E^\text{flip}(N) &= E^\text{AF}_{N-1,1} - E^\text{AF}_{N,0} \\ E^\text{AF}_{N_1,N_2} &= E\Big(N_{A\uparrow}^\text{tot} + N_{B\downarrow}^\text{tot} = N_1 \,;\,\, N_{A\downarrow}^\text{tot} + N_{B\uparrow}^\text{tot} = N_2 \Big) \label{eq:Eflip} \end{split} \end{align} where $E^\text{AF}_{N_1,N_2}$ is the lowest energy among states with $N_1$ electrons that are either spin-up on the A sublattice or spin-down on the B sublattice and $N_2$ electrons that are either spin-down on the A sublattice or spin-up on the B sublattice. We want to show that $E^\text{flip}(N)$ is positive for sufficiently large $\langle n \rangle = N/N_\text{sites}$ (as $N \to \infty$). Since we are assuming $V_0 \ll |t'| \ll J_z$, and the $t'$ and $J_z$ terms are sufficient to eliminate any extensive degeneracy, it is sufficient to ignore the attractive $V_0$ term and only consider states in the ground state of the $J_z$ term in \eqnref{eq:HtJv}. That is, we can simplify the calculation by considering the following limit: \begin{align} V_0 &= 0 & J_z &= \infty \label{eq:eff limit} \end{align} $E^\text{AF}_{N,0}$ is the fully-polarized antiferromagnet ground state energy. It can be efficiently calculated since it only involves free fermions: the $J_z$ term does not contribute to fully-polarized states, and we are ignoring the $V_0$ term. $E^\text{AF}_{N-1,1}$ is more complicated to calculate, but we can place a lower bound on it. Let $\ket{\Psi^\text{AF}_{N-1,1}}$ be an eigenstate with energy $E^\text{AF}_{N-1,1}$.
Let us decompose $\ket{\Psi^\text{AF}_{N-1,1}}$ as a sum of states with a definite position for the flipped spin: \begin{equation} \ket{\Psi^\text{AF}_{N-1,1}} = \sqrt{\frac{2}{N_\text{sites}}} \sum_{i \in \text{A}} \alpha_i c_{i\downarrow}^\dagger \ket{\psi^{(i)}_{N-1,0}} \label{eq:psi flip} \end{equation} $\ket{\psi^{(i)}_{N-1,0}}$ is a state with $N-1$ electrons that are either spin-up on the A sublattice or spin-down on the B sublattice, and where $\ket{\psi^{(i)}_{N-1,0}}$ depends on the lattice site $i$ of the flipped spin. Translation symmetry implies that $\alpha_i$ is only a phase (i.e. $|\alpha_i|=1$) and the states $\ket{\psi^{(i)}_{N-1,0}}$ are related by translation (i.e. $T_\delta \ket{\psi^{(i)}_{N-1,0}} = \ket{\psi^{(i+\delta)}_{N-1,0}}$ where $T_\delta$ is a translation operator). We can now derive the following bound: \begin{align} E^\text{AF}_{N-1,1} &= \left\langle \Psi^\text{AF}_{N-1,1} \left| H_{t'\SP J_z\SP\V} \right| \Psi^\text{AF}_{N-1,1} \right\rangle \nonumber\\ &= \frac{2}{N_\text{sites}} \Bigg[ \sum_{\substack{ij \in \text{A} \\ i \neq j}} \alpha_i^* \alpha_j \left\langle \psi^{(i)}_{N-1,0} \left| c_{i\downarrow} H_{t'} c_{j\downarrow}^\dagger \right| \psi^{(j)}_{N-1,0} \right\rangle + \sum_{i \in \text{A}} \left\langle \psi^{(i)}_{N-1,0} \Big| H_{t'} \Big| \psi^{(i)}_{N-1,0} \right\rangle \Bigg] \nonumber\\ &= \sum_{j = i \pm \hat{x} \pm \hat{y}} t' \alpha_i^* \alpha_j \left\langle \psi^{(i)}_{N-1,0} \Big| \psi^{(j)}_{N-1,0} \right\rangle + \left\langle \psi^{(i)}_{N-1,0} \Big| H_{t'} \Big| \psi^{(i)}_{N-1,0} \right\rangle \text{ where } i \in \text{A} \label{eq:bound2}\\ &\geq -4|t'| + \widetilde{E}^\text{AF}_{N-1,0} \label{eq:bound} \end{align} $H_{t'}$ is the $t'$ term in $H_{t'\SP J_z\SP\V}$ [\eqnref{eq:HtJv}]. Only the $t'$ term contributes due to the $V_0=0$ limit [\eqnref{eq:eff limit}]. In \eqnref{eq:bound2}, $i$ can be any site in the A sublattice.
The first term in \eqnref{eq:bound} results from bounding $t' \alpha_i^* \alpha_j\langle \psi^{(i)}_{N-1,0} | \psi^{(j)}_{N-1,0} \rangle \geq -|t'|$. $\widetilde{E}^\text{AF}_{N-1,0}$ is the energy defined in \figref{fig:constraint}. $\widetilde{E}^\text{AF}_{N-1,0}$ bounds the second term in \eqnref{eq:bound2} since it is the ground state energy of $H_{t'}$ subject to the same constraint that is enforced upon $\ket{\psi^{(i)}_{N-1,0}}$ [due to $J_z = \infty$ in \eqnref{eq:eff limit}]. $\widetilde{E}^\text{AF}_{N-1,0}$ can be efficiently calculated since the projection operators $\mathcal{P}$ in $H_{t'}$ act as the identity operator for the electron filling under consideration; thus we only need to calculate the ground state energy of a free fermion Hamiltonian. \begin{figure} \begin{center} \includegraphics[width=.3\textwidth]{constraint} \end{center} \caption{ $\widetilde{E}^\text{AF}_{N-1,0}$ is the ground state energy of $H_{t'}$ [i.e. the $t'$ term in $H_{t'\SP J_z\SP\V}$ from \eqnref{eq:HtJv}] with $N-1$ electrons that are either spin-up on the A sublattice (red sites) or spin-down on the B sublattice (blue sites) and subject to the constraint that there are no fermions on the five sites marked with crosses. } \label{fig:constraint} \end{figure} In \figref{fig:dE}, we plot \begin{equation} E^\text{flip}(N) \geq -4|t'| + \widetilde{E}^\text{AF}_{N-1,0} - E^\text{AF}_{N,0} \end{equation} where the bound follows from \eqsref{eq:Eflip} and \eqref{eq:bound}. \figref{fig:dE} is therefore evidence that the ground state is a fully-polarized antiferromagnet. \begin{figure} \begin{center} \includegraphics[width=.4\textwidth]{dE1} \hspace{.01\textwidth} \includegraphics[width=.57\textwidth]{dE2} \end{center} \hspace{.17\textwidth} {\bf (a)} \hspace{.39\textwidth} {\bf (b)} \caption{ A lower bound on the energy $E^\text{flip}$ [\eqnref{eq:Eflip}] required to flip an electron spin on one of the sublattices when $V_0 \ll |t'| \ll J_z$.
We used a square lattice with $N_\text{sites} = 2\times200\times200$, where the A and B sublattices are each $200\times200$ square lattices with periodic boundary conditions which are rotated $45\degree$ with respect to \figref{fig:lattice}. We expect the reduced model [\eqnref{eq:HAF}] to be valid when $E^\text{flip} > 0$. {\bf (a)} The figure shows that there is a critical filling $n_c$ such that $E^\text{flip} > 0$ for all $\langle n \rangle > n_c$. {\bf (b)} Zooming in suggests an upper bound on the critical filling: $n_c < 0.265$. We also show the $N_\text{sites} = 2\times100\times100$ lattice result as evidence that larger system sizes would only improve our bound. } \label{fig:dE} \end{figure} \pagebreak \section{Mean Field Theory} \label{app:MFT} In this appendix, we study $H_\text{AF}$ [\eqnref{eq:HAF}] using BCS mean-field theory. We primarily do this to check the symmetry of the BCS order parameter (see e.g. \figref{fig:symmetry}). We also numerically calculate the scaling of the order parameter for weak interactions and display the result in \figref{fig:BCS}. We begin with the following BCS mean-field expansion \begin{equation} n_i n_j \approx \langle d_i d_j \rangle d_j^\dagger d_i^\dagger + \langle d_i d_j \rangle^* d_i d_j - |\langle d_i d_j \rangle|^2 \end{equation} where we have dropped $O\!\left( \left( d_i d_j - \langle d_i d_j \rangle \right)^2 \right)$ terms. After applying the above mean-field expansion and a Fourier transformation ($d_k = N^{-1/2} \sum_j e^{-\mathrm{i} k \cdot j} d_j$), $H_\text{AF}$ becomes \begin{equation} H_\text{BCS} = \sum_k^{0 \leq k_x < \pi} \begin{pmatrix} d_k^\dagger & d_{\pi-k} \end{pmatrix} \begin{pmatrix} +\varepsilon_k & -\Delta_k \\ -\Delta_k^* & -\varepsilon_k \end{pmatrix} \begin{pmatrix} d_k \\ d_{\pi-k}^\dagger \end{pmatrix} + \frac{N}{V_0} \left( |\Delta_x|^2 + |\Delta_y|^2 \right) \end{equation} where we have dropped a constant that does not depend on $\Delta_x$ or $\Delta_y$.
The electron dispersion $\varepsilon_k$ and gap function $\Delta_k$ are: \begin{align} \begin{split} \varepsilon_k &= 4t' \cos k_x \cos k_y - \mu \\ \Delta_k &= 2\Delta_x \cos k_x + 2\Delta_y \cos k_y \label{eq:gap function} \end{split} \end{align} where $\mu$ is the chemical potential. The mean-field order parameter $\Delta_\delta$ is defined by \begin{equation} \Delta_\delta = V_0 \langle d_i d_{i+\delta} \rangle \text{ where } i \in \text{A} \label{eq:order parameter app} \end{equation} $\sum_k^{0 \leq k_x < \pi}$ sums over all momenta $k$ in the half-Brillouin zone with $0 \leq k_x < \pi$. $\pi - k$ is defined by $\pi - k = (\pi - k_x, \pi - k_y)$ where $k = (k_x, k_y)$. $H_\text{BCS}$ can be diagonalized by a Bogoliubov transformation: \begin{align} H_\text{Bogoliubov} &= \sum_k^{0 \leq k_x < \pi} \begin{pmatrix} \alpha_k^\dagger & \beta_k^\dagger \end{pmatrix} \begin{pmatrix} +\lambda_k & 0 \\ 0 & -\lambda_k \end{pmatrix} \begin{pmatrix} \alpha_k \\ \beta_k \end{pmatrix} + \frac{N}{V_0} \left( |\Delta_x|^2 + |\Delta_y|^2 \right) \\ \lambda_k &= \sqrt{\varepsilon_k^2 + |\Delta_k|^2} \end{align} where $\pm \lambda_k$ are the Bogoliubov quasi-particle energies. The Bogoliubov quasi-particle operators $\alpha_k$ and $\beta_k$ with $0 \leq k_x < \pi$ are defined in terms of the electron operators $d_k$ by the following Bogoliubov transformation: \begin{equation} \begin{pmatrix} d_k \\ d_{\pi-k}^\dagger \end{pmatrix} = \begin{pmatrix} +\cos\theta_k & +\sin\theta_k e^{+\mathrm{i}\phi_k} \\ -\sin\theta_k e^{-\mathrm{i}\phi_k} & +\cos\theta_k \end{pmatrix} \begin{pmatrix} \alpha_k \\ \beta_k \end{pmatrix} \end{equation} where the angle $0 < \theta_k < \pi/4$ is defined by $\tan(2\theta_k) = |\Delta_k|/\varepsilon_k$ and $\phi_k$ is the phase of $\Delta_k = |\Delta_k| e^{\mathrm{i} \phi_k}$.
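The quasi-particle energies $\pm\lambda_k = \pm\sqrt{\varepsilon_k^2 + |\Delta_k|^2}$ can be verified directly by diagonalizing the $2\times2$ BCS block for arbitrary values of $\varepsilon_k$ and $\Delta_k$; a minimal numerical check:

```python
import numpy as np

def bogoliubov_levels(eps, delta):
    """Eigenvalues of the k-space BCS block [[+eps, -delta], [-conj(delta), -eps]],
    sorted in ascending order."""
    Hk = np.array([[eps, -delta], [-np.conjugate(delta), -eps]])
    return np.sort(np.linalg.eigvalsh(Hk))

# check lambda_k = sqrt(eps_k^2 + |Delta_k|^2) for random (eps, Delta), Delta complex
rng = np.random.default_rng(0)
for _ in range(100):
    eps = rng.normal()
    delta = rng.normal() + 1j * rng.normal()
    lam = np.sqrt(eps**2 + np.abs(delta)**2)
    assert np.allclose(bogoliubov_levels(eps, delta), [-lam, lam])
```

A Pythagorean instance: for $\varepsilon_k = 3$ and $\Delta_k = 4$ the levels are exactly $\pm 5$.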
The order parameters $\Delta_x$ and $\Delta_y$ can be obtained by variationally minimizing the ground state energy density \begin{equation} \frac{E}{N} = -\frac{1}{2} \int_k \lambda_k + V_0^{-1} \left( |\Delta_x|^2 + |\Delta_y|^2 \right) \label{eq:BCS energy} \end{equation} or by solving the self-consistency condition: \begin{align} \begin{split} \Delta_\delta &= V_0 \langle d_i d_{i+\delta} \rangle \text{ where } i \in \text{A} \\ &= \frac{V_0}{2} \int_k \cos k_\delta \frac{\Delta_k}{\lambda_k} \label{eq:self-consistency} \end{split} \end{align} where $\int_k = \int \frac{\mathrm{d} k_x}{2\pi} \frac{\mathrm{d} k_y}{2\pi}$. We will assume that $\Delta_x$ and $\Delta_y$ are related by a phase $s$ ($|s|=1$): \begin{equation} \Delta_y = s \Delta_x \end{equation} Solving the self-consistency \eqnref{eq:self-consistency} for $V_0^{-1}$ results in \begin{equation} V_0^{-1} = \frac{1}{2} \int_k \left| \cos k_x + s \cos k_y \right|^2 / \lambda_k \label{eq:v(Delta)} \end{equation} From the above \eqnref{eq:v(Delta)}, we can calculate the interaction strength $V_0$ as a function of the order parameter $\Delta_y = s \Delta_x$ and chemical potential $\mu$. We use \eqnref{eq:BCS energy} to find which order parameter symmetry ($s=1$, $\mathrm{i}$, or $-1$) gives the lowest ground state energy; the result is summarized in \figref{fig:symmetry}. We numerically calculate the scaling coefficients of the order parameter in the weak interaction limit $V_0 \ll |t'|$ by calculating $V_0$ from \eqnref{eq:v(Delta)} for a few very small values of the order parameter: $|\Delta_x/t'| \sim 10^{-5}$. We then fit the resulting $(V_0,|\Delta_x|)$ data to the standard BCS gap equation \begin{equation} |\Delta_x| = |\Delta_y| = 2\omega\, e^{-1/(g_0 V_0)} \label{eq:gap scaling app} \end{equation} using $\omega$ and $g_0$ as free parameters. The result is shown in \figref{fig:BCS} as a function of the filling fraction $\langle n \rangle$.
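This fitting procedure can be sketched numerically: evaluate \eqnref{eq:v(Delta)} on a momentum grid for a ladder of trial gaps, and read off $g_0$ as the slope of $V_0^{-1}$ versus $-\ln|\Delta_x|$. The grid size, chemical potential ($\mu = -t'$, for which the d-wave choice $s=-1$ applies), and trial gap values below are our illustrative choices, cruder than the $|\Delta_x/t'|\sim10^{-5}$ computation of the text:

```python
import numpy as np

def inv_V0(delta_x, mu=-1.0, tp=1.0, s=-1.0, n=1200):
    """Eq. (v(Delta)): 1/V0 = (1/2) \int_k |cos kx + s cos ky|^2 / lambda_k,
    evaluated on an n x n midpoint grid over the Brillouin zone (t' = 1 units)."""
    k = (np.arange(n) + 0.5) * 2 * np.pi / n - np.pi
    kx, ky = np.meshgrid(k, k)
    eps = 4 * tp * np.cos(kx) * np.cos(ky) - mu
    chi = np.cos(kx) + s * np.cos(ky)
    lam = np.sqrt(eps**2 + (2 * delta_x)**2 * chi**2)
    return 0.5 * np.mean(chi**2 / lam)

deltas = [0.27, 0.09, 0.03]            # a factor-3 geometric ladder of trial gaps
iv = [inv_V0(d) for d in deltas]
assert iv[0] < iv[1] < iv[2]           # 1/V0 grows as the gap shrinks, as in BCS

# slopes of 1/V0 vs -ln(Delta) estimate g0 in |Delta_x| = 2*omega*exp(-1/(g0 V0))
g0_a = (iv[1] - iv[0]) / np.log(deltas[0] / deltas[1])
g0_b = (iv[2] - iv[1]) / np.log(deltas[1] / deltas[2])
assert g0_a > 0 and g0_b > 0
```

As the trial gaps are pushed smaller (with a correspondingly finer grid), the two slope estimates converge to the $g_0$ plotted in \figref{fig:BCS}.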
\subsection{Approximate Gap Scaling} \label{app:approx gap} In the usual s-wave BCS theory, $g_0$ in \eqnref{eq:gap scaling} is approximately equal to the density of states at the Fermi level \cite{BCS}. However, \figref{fig:BCS} shows that this is clearly not the case in our model. This occurs because in \eqnref{eq:v(Delta)}, the integral cannot be reformulated in terms of the density of states $g(\varepsilon)$ as just a function of the energy $\varepsilon$. Rather, one requires a density of states $g(\varepsilon,\chi)$ that is also a function of the shape of the gap function $\chi = |\Delta_k / \Delta_x|$, which can be seen by rewriting \eqnref{eq:v(Delta)} as \begin{align} V_0^{-1} &= \frac{1}{8} \int \mathrm{d}\varepsilon \int \mathrm{d}\chi \, g(\varepsilon,\chi) \frac{\chi^2}{\sqrt{\varepsilon^2 + |\Delta_x|^2 \chi^2}} \label{eq:v approx} \\ g(\varepsilon,\chi) &= \int_k \delta(\varepsilon_k - \varepsilon) \, \delta(|\Delta_k/\Delta_x| - \chi) \label{eq:g} \end{align} In \figref{fig:g}, we plot $g(\varepsilon,\chi)$ for our model. \begin{figure} \begin{center} \includegraphics[width=.45\textwidth]{g} \end{center} \caption{ The density of states $g(\varepsilon,\chi)$ [\eqnref{eq:g}] as a function of the single-particle energy $\varepsilon$ and gap function $\chi$ for our model [\eqnref{eq:gap function}] when $\Delta_x = -\Delta_y$ (which occurs when $\mu/t' < 0$). The $\Delta_x = +\Delta_y$ case is obtained by reflecting $\varepsilon+\mu \to -\varepsilon-\mu$. The density of states $g(\varepsilon)$ as a function of only the single-particle energy $\varepsilon$ is shown in \figref{fig:BCS}. The grayscale legend should not be taken seriously at the bottom-right corner where $g(\varepsilon,\chi)$ diverges. } \label{fig:g} \end{figure} Note that the integral in \eqnref{eq:v approx} is dominated by the region near the Fermi level where $\varepsilon=0$.
Thus, similar to ordinary BCS theory, we can approximate the $\varepsilon$ dependence as a box distribution: \begin{equation} g(\varepsilon,\chi) \approx \begin{cases} g(\chi) & |\varepsilon| < W \\ 0 & \text{otherwise} \end{cases} \end{equation} We can now perform the $\varepsilon$ integral in \eqnref{eq:v approx} to obtain: \begin{equation} V_0^{-1} = \int \mathrm{d}\chi \, \frac{1}{4} g(\chi) \chi^2 \ln\frac{2W}{|\Delta_x| \chi} \label{eq:v approx 2} \end{equation} Solving the above equation for $\Delta_x$ results in the BCS gap equation [\eqnref{eq:gap scaling app}] with the following BCS parameters: \begin{align} \begin{split} g_0 &= \int \mathrm{d}\chi \, \frac{1}{4} g(\chi) \chi^2 \\ \omega &= W \exp\!\left( -\frac{1}{g_0} \int \mathrm{d}\chi \, \frac{1}{4} g(\chi) \chi^2 \ln\chi \right) \\ &= W \exp\!\left( - \langle\ln\chi \rangle_{P(\chi) = \frac{1}{4g_0} g(\chi) \chi^2} \right) \end{split} \end{align} $g_0$ does not depend on $W$, which shows that $g_0$ only depends on the states near the Fermi level $\varepsilon=0$. We also see that states where the gap function $\chi = |\Delta_k / \Delta_x|$ is larger contribute the most to $g_0$. In particular, states along the nodal lines of $\Delta_k$ (i.e. where $\chi=\Delta_k=0$) do not contribute to $g_0$. This mathematically confirms the intuition for $g_0$ described in the last paragraph of \secref{sec:eff model}. $\omega$ does depend on $W$, and therefore $\omega$ also depends on the states away from the Fermi level $\varepsilon=0$. $\omega$ is most intuitively expressed in terms of the expectation value $\langle\ln\chi\rangle$, where $\chi$ is thought of as a random variable with the probability distribution $P(\chi) = \frac{1}{4g_0} g(\chi) \chi^2$. \end{appendix}
\section{Introduction} Quantum phase slip junctions are the exact dual counterpart of Josephson junctions. Recently, these junctions have been successfully realized in ultranarrow superconducting nanowires, where quantum phase slips replace Cooper-pair tunneling \cite{Arutyunov20081, Bezryadin}. These nanowires are nonlinear elements exhibiting physics similar to that of Josephson junctions, with the roles of the superconducting phase $\varphi$ and charge $q$ interchanged \cite{1367-2630-15-10-105017, mooijsuperconducting2006}. This duality has been the motivation behind many of the recent applications of quantum phase slip (QPS) elements \cite{MooijSchoen, 1367-2630-7-1-219, citeulike:10580965, 4574936}. These elements have found interesting implications for fundamental metrology and information technology, for instance as photon pulse detectors, a quantum current standard, and quantum bits \cite{PhysRevB.83.174511, citeulike:10580965, 4574936, PhysRevLett.108.097001}. In a superconducting nanowire with a small cross section, the supercurrent is determined by the phase difference $\varphi$ between the two ends of the nanowire through the sawtooth relation $I_s=\Phi_0 \varphi /2\pi L$, with $L$ being the nanowire kinetic inductance and $\Phi_0=h/2e$. At temperatures much lower than the superconducting critical temperature (i.e. $T\ll T_c $), quantum fluctuations may suppress the modulus of the order parameter in a region and turn it from a superconductor into a normal metal. This enables the superconducting phase to slip by $2n\pi$, with integer $n$, without any energy compensation. An individual phase slip takes place in a normal core, similar to the normal core of a magnetic flux vortex; therefore we can assume the core size is roughly the coherence length $\xi$ \cite{gio, PhysRevLett.87.217003}. A QPS event takes place for a short period of time that is maximally of the order of the inverse of the superconducting gap, $h/2\Delta$.
A similar effect happens close to $T_c$ due to thermal fluctuations of the order parameter \cite{tinkham}. In superconducting nanowires made of clean materials with low normal resistance $R$, quantum phase slips rarely take place. To enhance the slip rate, a nanowire should be made of a highly disordered amorphous superconductor, which is in the dirty limit, with large $R$ \cite{citeulike:10580965,MooijSchoen}. There is no well-understood theory describing superconductivity near the superconductor-insulator transition (SIT). A candidate theory \cite{PhysRevB.32.5658,Sadovskii1997225} proposes that superconductivity at high disorder is maintained by a fragile coherence between electron pairs, which is characterized by an anomalous binding energy. If the pairs are localized, they enter an insulating state, and if they condense, a coherent zero-resistance state emerges. Based on this theory, a superconductor near the SIT has regions of localized BCS condensates nearly separated into different lakes \cite{Sacepe2011jm}. The cores of QPS can coherently tunnel across superconducting regions and avoid dissipation. This is similar to Cooper pairs, which tunnel across a Josephson junction without much dissipation \cite{PhysRevLett.46.211,Caldeira1983374}. The voltage across the nanowire is known to be periodic in the charge of the crossing fluxoid, i.e. $V=V_0\sin(2\pi q/2e)$. Individual phase slips in nanowires can be observed when a large bias voltage is applied to the wire. Under such a bias voltage, the effective potential becomes a tilted washboard whose slope steepens with increasing bias. Depending on temperature, there are two general scenarios for the dynamics of a fluxoid. Close to the critical temperature $T_c$, the fluxoid particle gains energy from thermal activation and overcomes the potential barrier to slip across the wire \cite{tinkham}. Quite differently, at low temperatures $T\ll T_c$ the fluxoid particle becomes frozen in a minimum of the washboard potential.
The minima are called `zero-current states' where Coulomb blockade occurs \cite{1367-2630-14-4-043014, PhysRevLett.109.187001, PhysRevB.87.144510}. Vacuum fluctuations help the particle tunnel into the barrier and slip away. Coherent tunneling is possible between two zero-current states where a quantum variable (phase or charge) has minimum fluctuations. Such coherent tunnelings have been previously observed in the superconducting-insulating transition limit \cite{1367-2630-14-4-043014, PhysRevLett.109.187001, PhysRevB.87.144510}. Exposing the nanowire to strong electromagnetic radiation produces Shapiro steps \cite{PhysRevLett.11.80,anderson64} in the current-voltage dependence, which have been observed in Ref.~\cite{Bae:2009hn}. However, in many applications of superconducting nanowires, such as in qubits and photon detectors, weak radiation is applied, for which phase locking cannot occur. In this paper, we qualitatively study the effect of a weak alternating electromagnetic field on the quantum phase slip rate in an ultranarrow superconducting nanowire, where the width of the nanowire is smaller than the superconducting coherence length, i.e. $r<\xi$. We consider a nanowire in the insulating phase. Our method is to map this problem onto its well-studied analogue in the Josephson junction in the proper regime. We apply to our problem the semiclassical quantum mechanical approach to tunneling in a high-frequency field developed by Ivlev and Mel'nikov \cite{SovPhysJETP.63.1986, PhysRevLett.55.1614, SovPhysJetpIvelevMelnikov1985}. Similar to a Josephson junction under weak time-harmonic radiation \cite{PhysRevLett.53.1260,SovPhysJetpIvelevMelnikov1985}, we expect a significant enhancement of the stimulated phase slips at zero temperature. We show that a fluxoid gains energy from the radiation, tunnels into the barrier more often than usual, and slips away.
This leads to a super-exponential enhancement in the rate of such `stimulated quantum phase slips' (SQPS). In certain nanowires, this can result in a larger DC resistivity with minimal fluctuations in a dynamical variable. The enhanced escape from the zero-current state stimulated by weak irradiation has significant practical importance. Since the observation of the zero-current state requires a high-impedance environment, our study here is confined to highly dissipative cases. Our model suggests that the QPS rate in certain nanowires at low temperature can be significantly amplified by microwave to THz radiation. The feasibility of this detector at the typical frequency of 0.3 THz using conventional materials will be discussed. \section{Stimulated Quantum Phase Slips}\label{QPS_HF} A superconducting nanowire with QPS is the dual of a Josephson junction, with charge and phase (as well as current and voltage) interchanged. In a Josephson junction, a Cooper pair tunneling across the junction picks up a phase $\exp(\pm i \varphi(t))$ corresponding to the superconducting phase $\varphi(t)$. This induces a coupling energy $E=E_J(1-\cos \varphi)$. The current is defined as $I=(2e/\hbar ) \partial E/\partial \varphi$. Analogous relations can be derived for a nanowire. A QPS fluxoid picks up a charge phase $\exp(\pm iQ)$ when tunneling, with $Q\equiv 2 \pi q/2e$ being a dimensionless charge parameter. Therefore the QPS energy becomes $E=E_S(1-\cos Q(t))$. The voltage is defined as $V=\partial E/\partial q$. Phase slips may take place anywhere in the wire; the induced current depends on the wire inductance. Therefore a narrow superconducting nanowire can be modelled as a voltage source in series with an inductance, as shown in the left part of Fig. \ref{figQPScir}. In the figure, the dissipation is modelled by a resistor, and the AC and DC bias voltages are sources in series with the wire. This circuit is built based on \cite{1367-2630-15-10-105017, mooijsuperconducting2006}.
The inductor $L$ is the total of the kinetic inductance ($L_k$) and the geometric inductance ($L_g$) of the circuit. Since, in a superconducting nanowire, the kinetic inductance is much larger than the geometric inductance, we have $L\approx L_k$. In the circuit of Fig. \ref{figQPScir} the voltage is $ V= V_0 \sin\left( 2 \pi q/2e\right)+ L \ddot{q}+ R \dot{q}$ with the QPS and inductance energies \begin{eqnarray}\label{QPSjunctionDef0} E_{S} =2eV_0/2\pi, \quad \quad E_L = \Phi_0^2/2 L_k. \end{eqnarray} where $V_0$ is the voltage scale of the QPS energy $E_S$. In the nanowire, a crossover from an insulator to a superconducting inductor takes place when the inductance energy $E_L$ is increased beyond the QPS energy $E_S$. In the superconducting phase $E_L\gg E_S$, the fluxoid energy $E=E_S(1-\cos Q) + E_L(\phi_f)$ is dominated by the parabolic inductance energy $E_L(\phi_f)$ associated with the induced phase $\phi_f$. The parabolas associated with different winding integers $n$ cross at certain energies where the small energy of the QPS provides an avoided-crossing gap. This makes the nanowire energy multivalued, with separated energy bands, similar to a capacitive Josephson junction. \begin{center} \begin{figure} \includegraphics[scale=0.5]{QPS_circuit.png} \caption{ The schematic circuit of a QPS junction including an ideal QPS element (a superconducting nanowire), the dissipative element $R$, the bias voltage $V_{DC}$ and the driving source $V_{AC}$. $L$ denotes the dominant kinetic inductance.} \label{figQPScir} \end{figure} \end{center} In the opposite regime, where $E_S\gg E_L$, the wire energy is dominated by the QPS energy $E_S(1-\cos Q)$, which oscillates in charge. Consider that the charge undergoes a fluctuation $\delta q$ around its macroscopic value $q$, so that the stochastic charge is $q+\delta q$.
Following the analogous discussion for the Josephson junction phase (see \cite{Ingold, Ansari, Ansari2} and references therein), the charge fluctuation corresponds to effective current fluctuations across the wire \begin{equation} \label{eq. deltaq} \delta q(t) = \int_0^t \delta I(t') dt'. \end{equation} A difference between QPS and thermally activated phase slips (TAPS) is that the dissipation in TAPS is due to stochastic energy activation at high temperature, while QPS allows tunneling between distinct zero-current states. The latter is similar to the zero-voltage states of a Josephson junction \cite{1367-2630-14-4-043014, PhysRevLett.109.187001, PhysRevB.87.144510}. For a nanowire in the insulating phase, the quantum tunneling between zero-current states takes place without current fluctuations $\delta I(t)$. Therefore Eq. (\ref{eq. deltaq}) shows that the charge of the fluxoid behaves semiclassically, whereas the superconducting phase can be subject to large fluctuations \cite{mooijsuperconducting2006}. The semiclassical charge associated with the QPS fluxoid in the nanowire depicted in Fig. \ref{figQPScir} evolves as: \begin{equation}\label{EOMQPSinitialq} \frac{d^2 Q}{dt^2}+ \eta \frac{dQ}{dt}+ \omega_p^2 \left(\cos Q -k_{0} - k_{1} \cos \Omega t \right)=0, \end{equation} where $\Omega$ is the frequency of the driving voltage and \begin{equation}\label{QPSjunctionDef1} \begin{split} \eta = R/ L , \quad \omega_p = \sqrt{ 2\pi V_0/ 2 e L}, \quad k_{i} = V_{i}/V_0, \end{split} \end{equation} where the index $i=0$ ($1$) corresponds to the DC (AC) voltage. The definition of the plasma frequency $\omega_p$ is compatible with the duality-based definition $\hbar \omega_p =\sqrt{2 E_{S} E_L}$. Eq. (\ref{EOMQPSinitialq}) is similar to the equation of the RCSJ model of a Josephson junction \cite{mccumber,stewart} with a high-frequency driving field. For simplicity, in writing Eq. (\ref{EOMQPSinitialq}) we have shifted $Q\to Q+\pi/2$ so that the applied and QPS voltages are in phase.
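To illustrate the classical content of Eq. (\ref{EOMQPSinitialq}), the following minimal sketch integrates this damped, weakly driven pendulum-type equation with purely illustrative dimensionless parameters (none of the values below are fitted to any device). For $k_0<1$ and a weak drive, the charge simply relaxes to the washboard minimum at $\cos Q=k_0$: classically the wire remains trapped in the zero-current state.

```python
import math

def simulate_charge(eta=0.2, wp=1.0, k0=0.9, k1=0.01, Omega=1.0,
                    dt=1e-3, steps=20000):
    """RK4 integration of Q'' + eta*Q' + wp^2*(cos Q - k0 - k1*cos(Omega*t)) = 0.
    All parameter values are illustrative, dimensionless choices."""
    def accel(t, Q, v):
        return -eta*v - wp**2 * (math.cos(Q) - k0 - k1*math.cos(Omega*t))
    t, Q, v = 0.0, 0.0, 0.0
    for _ in range(steps):
        k1q, k1v = v, accel(t, Q, v)
        k2q, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, Q + 0.5*dt*k1q, v + 0.5*dt*k1v)
        k3q, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, Q + 0.5*dt*k2q, v + 0.5*dt*k2v)
        k4q, k4v = v + dt*k3v, accel(t + dt, Q + dt*k3q, v + dt*k3v)
        Q += dt*(k1q + 2*k2q + 2*k3q + k4q)/6
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
    return Q, v

Q, v = simulate_charge()
# The charge settles near the metastable minimum cos(Q) = k0, i.e. near -acos(0.9):
print(Q, -math.acos(0.9))
```

This classical trapping is precisely why escape from the zero-current state must proceed through the (thermally activated or quantum) phase-slip channels discussed in the text.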
Our aim is to study the possibility of utilizing a nanowire as a detector for time-harmonic radiation; therefore we restrict ourselves to weak alternating fields, i.e. $k_1 \ll 1$. This makes our problem different from the physics of Shapiro steps \cite{PhysRevLett.11.80}, where the phase of the nanowire (or of its dual Josephson junction) is locked to the frequency of the driving field and constant voltage steps are observed. For weak time-harmonic fields the driving force is very small and the wire is in the zero-current state with $k_0<1$. The most important result is that the smallness of $k_1$ does not necessarily mean that its effect on the charge dynamics is small. In fact, as we will show, a weak time-harmonic field can significantly affect the wire by increasing the rate of QPS (see also Appendix \ref{Sem_clas_QT}). In the limit of weak dissipation, the tunneling rate of the fluxoid can be studied using the 1D quantum mechanics of the Lagrangian associated with Eq. (\ref{EOMQPSinitialq}) subject to $R=0$. However, we are interested in the decay of the zero-current state in the limit of strong dissipation, because the resistance is large in wires with QPS effects. A dual effective theory has been developed for dissipative coherent tunneling in Josephson junctions by Caldeira and Leggett \cite{Caldeira1983374,PhysRevLett.46.211}, and Larkin and Ovchinnikov \cite{PhysRevB.28.6281}. In those theories, dissipation is modelled as a coupling to bosonic degrees of freedom, and the low-energy effective action of the system is derived so as to properly take account of the dissipation. The semiclassical theory of a Josephson junction exposed to a weak alternating current \cite{PhysRevB.28.6281, SovPhysJetpIvelevMelnikov1985} guides us in studying the charge dynamics of radiation-stimulated quantum phase slips (SQPS) in nanowires with strong dissipation.
Effectively, the semiclassical charge that tunnels across the wire evolves as: \begin{eqnarray}\label{EOMdualQPSwithDissipation} \nonumber && \frac{d^2 Q}{dt^2} +\omega_p^2 \left( \cos Q-k_0-k_1 \cos \Omega t \right) \\ && \nonumber \quad- 2i \pi \eta \left(\frac{k_B T}{\hbar}\right)^2 \int_{C} dt_1 \frac{\sin [(Q(t)-Q(t_1))/2] }{\sinh^2 \left(\pi k_B T(t_1-t)/\hbar \right)} =0,\\ \end{eqnarray} where the contour $C$ is shown in Fig. \ref{timecontour} and the principal value of the integral is implied (for a general discussion of the method, see Appendix \ref{Sem_clas_QT}). In the limit of our interest, the nanowire is strongly dissipative, $\eta \gg \omega_p$. For the semiclassical description to be valid, it is required that $E_S \gg \hbar \Omega$ \cite{PhysRevB.28.6281, SovPhysJetpIvelevMelnikov1985}. Moreover, the applied DC voltage is close to $V_0$, i.e. $V_0-V_{DC} \ll V_0$; therefore the second-derivative term in Eq. (\ref{EOMdualQPSwithDissipation}) can be omitted. We can also assume that the exchange of energy between the wire and its environment takes place on the shortest timescale, so the argument of the $\sinh$ in the denominator of Eq. (\ref{EOMdualQPSwithDissipation}) can be replaced by its lowest order, $\sinh x\approx x$. Technical analysis of the integration over the contour $C$ shows that the integral tends to zero except at the singularity $t=t_1$, where its residue must be properly counted (see Eq. (18) in Ref. \cite{SovPhysJetpIvelevMelnikov1985}). Therefore, in the absence of alternating radiation, Eq. (\ref{EOMdualQPSwithDissipation}) in the regime of interest effectively reads $-\eta\, dQ/dt+ \omega_p^2 \left( \cos Q-1 \right) =0$, which has the solution \begin{equation}\label{eq.sol} Q(t)=i \ln \frac{t-i\tau_s}{t+i\tau_s} , \quad \tau_s=\frac{\eta}{\omega_p^2}, \end{equation} with $\tau_s$ being the time of under-barrier motion.
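The solution in Eq. (\ref{eq.sol}) can be verified directly. A short numerical check (with arbitrary dimensionless test values for $\eta$ and $\omega_p$) confirms that $Q(t)=i\ln[(t-i\tau_s)/(t+i\tau_s)]$ with $\tau_s=\eta/\omega_p^2$ satisfies the overdamped equation $-\eta\, dQ/dt+\omega_p^2(\cos Q-1)=0$:

```python
import cmath

def residual(t, eta=2.0, wp=1.5):
    """Residual of -eta*dQ/dt + wp^2*(cos Q - 1) for the candidate solution
    Q(t) = i*ln[(t - i*tau)/(t + i*tau)], tau = eta/wp**2, of Eq. (eq.sol).
    eta and wp are arbitrary test values in dimensionless units."""
    tau = eta / wp**2
    Q = 1j * cmath.log((t - 1j*tau) / (t + 1j*tau))
    dQ = 1j * (1.0/(t - 1j*tau) - 1.0/(t + 1j*tau))  # analytic derivative of Q(t)
    return -eta * dQ + wp**2 * (cmath.cos(Q) - 1.0)

print(abs(residual(0.7)), abs(residual(-3.2)))  # both vanish to machine precision
```

Indeed, $dQ/dt=-2\tau_s/(t^2+\tau_s^2)$ and $\cos Q-1=-2\tau_s^2/(t^2+\tau_s^2)$, so the two terms cancel exactly when $\tau_s=\eta/\omega_p^2$.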
In a system described by the classical action $S=-i\int_C \mathcal{L} dt$, with $\mathcal{L}$ the Lagrangian, the probability of quasiclassical tunneling is $\Gamma=\exp(-S)$. Treating the alternating voltage as a small perturbation, $k_1\ll 1$, we can write the action in the form $S=S_0+S_1$, with $S_0$ the action in the absence of the alternating field and $S_1$ linear in $V_{AC}$ \cite{kagan1992quantum}. Let $\Gamma_0$ denote the QPS probability in a nanowire with strong dissipation and a DC voltage $V_{DC}$ close to $V_0$. The above results show that in the presence of weak high-frequency radiation hitting the nanowire, the QPS probability changes from $\Gamma_0$ to $\Gamma$ in the following form: \begin{equation}\label{tunnelingprobACwithdissipation} \Gamma(V_{AC},\Omega)= \Gamma_0 \exp\left[ \frac{4e V_{AC} }{ \hbar \Omega} \sinh (\Omega \tau_s) \right]. \end{equation} Eq. (\ref{tunnelingprobACwithdissipation}) is the main result of this paper. It indicates that an alternating driving field of suitable frequency and voltage can trigger the occurrence of a large number of QPS's in a proper nanowire at low temperature. For instance, if a wire with dissipation factor $\eta/ \omega_p = 10$ is driven by a weak time-harmonic radiation of relative amplitude $4 e V_{AC}/\hbar\omega_p= 5\times 10^{-4}$ and frequency $\Omega =\omega_p$, the QPS rate increases by a factor of about $250$. A more careful analysis shows that this result is valid for nanowire temperatures $T<T_0$, where $T_0= \sqrt{(1-k_0)/2}\ (\hbar \omega_p)^2 /\pi k_B \eta$ is the crossover temperature between the quantum and thermal activation regimes \cite{kagan1992quantum}. From Eq. (\ref{tunnelingprobACwithdissipation}) one can see that the larger the normal resistance $R$ is, the larger the rate of SQPS becomes. Intuitively this can be understood from the definition of the under-barrier time $\tau_s$ in Eq. (\ref{eq.sol}).
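The $\sim 250$-fold figure quoted above follows directly from Eq. (\ref{tunnelingprobACwithdissipation}): for $\Omega=\omega_p$ and $\eta/\omega_p=10$ one has $\Omega\tau_s=\Omega\eta/\omega_p^2=10$, and a one-line numerical sketch gives the enhancement:

```python
import math

def sqps_enhancement(x_ac, omega_tau):
    """Gamma/Gamma_0 = exp[x_ac * sinh(Omega*tau_s)] from
    Eq. (tunnelingprobACwithdissipation), with x_ac = 4 e V_AC / (hbar Omega)."""
    return math.exp(x_ac * math.sinh(omega_tau))

# eta/omega_p = 10 and Omega = omega_p give Omega*tau_s = 10;
# with 4 e V_AC / (hbar omega_p) = 5e-4 the rate grows by roughly 250x:
print(sqps_enhancement(5e-4, 10.0))
```

The hyperbolic sine is what makes the effect super-exponential in the dissipation: doubling $\Omega\tau_s$ roughly squares the already exponential growth of the argument.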
Upon increasing dissipation, the under-barrier time grows. According to the semiclassical description of quantum tunneling (see Appendix \ref{Sem_clas_QT}), during quantum tunneling the time parameter becomes imaginary. This changes the bounded alternating potential in Eq. (\ref{EOMQPSinitialq}) into the unbounded function $\cosh \Omega \tau$. The longer a fluxoid stays under the barrier, the more energy it absorbs from the alternating potential, and this causes the stimulation of quantum phase slips. The QPS rate in the absence of a time-harmonic drive has been estimated by Mooij and Harmans in \cite{1367-2630-7-1-219} to be nearly $\Gamma_0 \approx E_{S}/\hbar$. Substituting this, together with Eqs. (\ref{QPSjunctionDef1}) and (\ref{QPSjunctionDef0}), simplifies Eq. (\ref{tunnelingprobACwithdissipation}) into: \begin{equation}\label{tunnelingprobACwithdissipationEQPS} \Gamma(V_{AC}, \Omega)= (V_0 /\Phi_0) \exp\left[ \frac{4e V_{AC} }{ \hbar \Omega} \sinh \big(\frac{e R \Omega}{\pi V_0}\big)\right]. \end{equation} Given that the QPS rate in the absence of time-harmonic radiation is proportional to $V_0$, one feature of Eq. (\ref{tunnelingprobACwithdissipationEQPS}) is that the super-exponential enhancement of the QPS rate is inversely related to the rate $\Gamma_0$; thus for a small rate $\Gamma_0$ the exponential enhancement of QPS in the presence of a high-frequency field is more significant. This enhancement is entirely due to the stimulated excitation of the zero-current states in the presence of the external drive. In the absence of $V_{DC}$, there is no tilt in the potential and the rates of fluxoid crossing to the right or left are equal. As a result the average current is zero ($\bar{I}=0$).
However, in the presence of a positive DC voltage bias, the average current is given by: \begin{equation} \bar{I}=2e \left( \Gamma_{\rightarrow}-\Gamma_{\leftarrow} \right), \end{equation} with $\Gamma_{\rightarrow}$ ($\Gamma_{\leftarrow}$) the rate of crossing to the right (left), where the potential barrier decreases (increases). When the bias voltage is close to the critical voltage ($V_0$), the crossing rate $\Gamma_{\rightarrow}$ dominantly exceeds that of the opposite direction. Hence $\Gamma= \Gamma_{\rightarrow}$ and \begin{equation}\label{averagIQPS} I= 2e \Gamma(V_{AC}, \Omega), \end{equation} where $\Gamma(V_{AC}, \Omega)$ is given by Eq. (\ref{tunnelingprobACwithdissipation}). According to Eq. (\ref{averagIQPS}), the influence of weak high-frequency irradiation on a superconducting nanowire biased by a DC voltage with $V_0-V_{DC}\ll V_0$ is observable by measuring the crossing current. The quality factor of the nanowire, $Q_{S}$, is defined as: \begin{equation} Q_{S}= \frac{\omega_p}{\eta}. \end{equation} In a low-quality-factor QPS nanowire the dissipation is strong. A larger $\eta$ leads to a longer under-barrier time, and consequently the enhancement of the SQPS rate increases exponentially. The increase in the under-barrier motion due to higher dissipation can be seen in Fig. \ref{multiphoton_tunneling}. Therefore, we expect a low-$Q_S$ nanowire to be a better candidate for observing tunneling enhancement. A comment on the range of validity of the method used in this section is in order. As seen from Eq. (\ref{tunnelingprobACwithdissipation}), the enhancement in the tunneling probability for $\Omega \eta \gg \omega_p^2$ is itself an exponentially large factor ($\sim e V_{AC}( \hbar\Omega)^{-1}\exp (\Omega \eta/\omega_p^2) $). This indicates that the range of validity of the semiclassical approach in this case is limited to $e V_{AC} \sim (\hbar\Omega) \exp (-\Omega \eta/\omega_p^2)$.
Beyond this, higher-order corrections in $V_{AC}$ are required \cite{kamenev2011field}. An alternative approach to studying the QPS rate in superconducting nanowires under high-frequency radiation would be to use the effective action method developed by Golubev and Zaikin \cite{1999cond.mat.11314Z,PhysRevB.64.014504,PhysRevLett.78.1552,Arutyunov20081} in a non-equilibrium setting. Some challenges associated with this approach are studied in \cite{phdthesisamirjafarisalim}. \section{Proposal for QPS-based Energy-resolving High-frequency Radiation Detector}\label{HF_QPSdetector} The exponential enhancement of the quantum tunneling probability in Eq. (\ref{tunnelingprobACwithdissipation}) can be exploited to design detectors of microwave to THz radiation that are capable of determining the frequency of the incoming radiation. In this section we introduce a new type of high-frequency detector based on the enhancement of the QPS phenomenon in superconducting nanowires. In this paper, only the working principle of this type of detector will be discussed; many important engineering details, like impedance matching, will not be addressed. Our proposed detector is made of a low-quality-factor, i.e. low-$Q_S$, QPS junction that is voltage biased close to the critical voltage. An antenna is the source of the high-frequency voltage and is placed right across the superconducting nanowire. The current in the loop is measured constantly; a change in the current and its amplitude signals the presence of the detected radiation. A schematic of this system is shown in Fig. \ref{Bowtieantenna}. The superconducting nanowire is placed in the gap between the two parts of the antenna. This guarantees that the maximum $V_{AC}$ is induced along the nanowire. Other elements, like the resistance and the voltage-bias source, are placed in the loop outside of the antenna.
The presence of radiation results in the decay of the zero-current state of the QPS junction, which causes a change in the current of the circuit. Depending on the design parameters, the detection of the change in the current might be hard to achieve. A lock-in amplifier or a SQUID can be used for current monitoring in case the current change is difficult to monitor with conventional methods. The resistance $R$ plays the important role of reducing the quality factor ($Q_S$) of the QPS junction. Its value is chosen such that the required enhancement in Eq. (\ref{tunnelingprobACwithdissipation}) is achieved, which depends on the other parameters of the system. \begin{figure} \begin{center} \includegraphics[scale=0.35]{Bowtieantenna.png} \caption[The schematic of a QPS high-frequency detector. The red segment in the middle is the superconducting nanowire. A broadband bow-tie antenna collects the high-frequency field.]{ The schematic of a QPS high-frequency detector. The red segment in the middle is the superconducting nanowire. A broadband bow-tie antenna collects the high-frequency field. The resistance $R$ adds dissipation to lower the quality factor of the QPS junction.} \label{Bowtieantenna} \end{center} \end{figure} \subsection{Design Parameters} As an example, in this section we investigate design parameters for a $\Omega/2\pi=0.3$ THz detector. We will study different superconducting materials to evaluate their applicability in our design. Based on their properties, the parameters of the detector can be estimated. We assume that the QPS energy is related to the QPS rate according to $E_{S}=\hbar \Gamma_{QPS}$ \cite{1367-2630-7-1-219}.
The QPS rate from the Golubev-Zaikin theory \cite{1999cond.mat.11314Z,PhysRevB.64.014504,PhysRevLett.78.1552,Arutyunov20081} is given by \begin{equation}\label{QPS_rate_detector} \Gamma_{QPS}= c_1 \frac{\Delta}{\hbar}\frac{R_q}{R_n} \frac{X^2}{\xi^2} \exp \left(- 0.3 c_2 \frac{R_q}{R_n}\frac{X}{\xi} \right), \end{equation} where $R_n$ is the normal resistance per unit length of the superconducting nanowire. The two constants $c_1$ and $c_2$, which are of order one, account for uncertainties in the derivation of Eq. (\ref{QPS_rate_detector}); we set $c_1=c_2=1$. Although Eq. (\ref{QPS_rate_detector}) is given by the Golubev-Zaikin theory, the factor $0.3$ in the exponent is adopted from the fit of experimental data to the Giordano model in \cite{PhysRevLett.87.217003,1367-2630-7-1-219}. In order to choose the appropriate material and parameters for the detector, we study the properties of four different materials, NbSi, InO$_x$, NbN and Ti, listed in Table \ref{materialproperties}. The coherence length $\xi$ of a superconducting nanowire is related to the bulk parameter through \begin{equation} \xi \sim 0.85 \sqrt{\xi_{\mbox{bulk}} l_{0}}, \end{equation} where $l_{0}$ is the mean free path of the electrons; narrow superconducting nanowires are always in the dirty limit. \begin{table}[h] \centering \caption{Material properties of the superconducting nanowires used for simulations. } \begin{tabular}{c c c} \\ \hline Material & $\Delta$ [meV] & $\xi$ [nm] \\ \hline NbSi &0.18 & 15 \\ InO$_x$ & 0.41 & 20\\ NbN & 1.6 & 4 \\ Ti & 0.06 & 80\\ \hline \end{tabular} \\ \smallskip \footnotesize {The data for NbSi, InO$_x$, NbN and Ti are adopted from \cite{PhysRevB.87.144510}, \cite{citeulike:10580965}, \cite{2013arXiv1305.6692P} and \cite{PhysRevLett.109.187001} respectively.} \label{materialproperties} \end{table} Fig. \ref{EQPS} shows $E_{S}$ for the four materials as a function of the normal resistance per length.
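As a rough numerical orientation for the scales appearing in this subsection, the following sketch evaluates the kinetic inductance $L_k=\hbar R_N/\pi\Delta$ and inductive energy $E_L=\Phi_0^2/2L_k$ discussed below, together with the critical voltage $V_0=(2\pi/2e)E_S$. The inputs are InO$_x$-like values ($\Delta=0.41$ meV from the table, and $R_n\approx 25~\Omega$/nm over $X=2~\mu$m with $E_S/h=3$ THz, the values adopted later in the design example):

```python
import math

hbar = 1.0545718e-34    # J*s
h    = 6.62607015e-34   # J*s
e    = 1.602176634e-19  # C
Phi0 = h / (2*e)        # flux quantum, Wb

# Assumed InOx-like inputs (Delta from the table; R_n and X as in the design below)
Delta = 0.41e-3 * e     # superconducting gap, J
R_N   = 25.0 * 2000     # total normal resistance: 25 Ohm/nm over X = 2 um, in Ohm

L_k = hbar * R_N / (math.pi * Delta)   # kinetic inductance, Eq. (KinInductance)
E_L = Phi0**2 / (2 * L_k)              # inductive energy, Eq. (KinInductanceenergy)
V_0 = math.pi * h * 3e12 / e           # critical voltage for E_S/h = 3 THz

print(L_k * 1e9)      # kinetic inductance in nH (a few tens of nH)
print(E_L / h / 1e9)  # E_L in GHz, compare the ~120 GHz quoted below
print(V_0 * 1e3)      # V_0 in mV, compare the 39 mV quoted below
```

These reproduce the $E_L\approx 120$ GHz and $V_0\approx 39$ mV scales used in the design discussion that follows.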
The resistance per length $R_n$ determines the cross-section area of the superconducting nanowire. Since $R_n$ is inversely proportional to the cross-section area, a higher $R_n$ indicates a thinner nanowire that supports a smaller current, which makes the operation more difficult. \begin{figure} \begin{center} \includegraphics[scale=0.6]{EQPS.png} \caption{The QPS energy as a function of the normal resistance per length for four different materials, NbSi, InO$_x$, NbN and Ti. The length of the nanowire is $X=2 \mu$m. Parameters are listed in Table \ref{materialproperties}. } \label{EQPS} \end{center} \end{figure} Another important energy scale in QPS junctions is the kinetic inductive energy $E_L$, which plays an important role in the dynamics. It is given by \begin{equation}\label{KinInductanceenergy} E_L=\frac{\Phi_0^2}{2 L_k}, \end{equation} where the kinetic inductance is found from \begin{equation}\label{KinInductance} L_k=\frac{\hbar R_N}{\pi \Delta}. \end{equation} In Eq. (\ref{KinInductance}), $R_N$ is the total normal-state resistance of the superconducting nanowire, given by $R_N=X R_n$, where $X$ is the length of the nanowire. In Eq. (\ref{KinInductanceenergy}), the geometric and external inductances are assumed to be much smaller than the kinetic inductance $L_k$ of the superconducting nanowire. In Fig. \ref{EL} and Fig. \ref{omegapl} the kinetic inductive energy $E_L$ and the plasma frequency $\omega_p$ of the four nanowires are shown as functions of the normal resistance per length. \begin{figure} \begin{center} \includegraphics[scale=0.6]{EL.png} \caption{ The kinetic inductive energy as a function of the normal resistance per length for four different materials, NbSi, InO$_x$, NbN and Ti. The length of the nanowire is $X=2 \mu$m. Parameters are listed in Table \ref{materialproperties}.
} \label{EL} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.6]{omegapl.png} \caption{ The plasma frequency $\omega_{p}/2\pi$ as a function of the normal resistance per length for four different materials, NbSi, InO$_x$, NbN and Ti. $R_n$ determines the dimensions of the superconducting nanowire. The length of the nanowire is $X=2 \mu$m. Parameters are listed in Table \ref{materialproperties}. } \label{omegapl} \end{center} \end{figure} Since the superconducting nanowire is intended to operate in the regime where charge is a good quantum number, it is required that at least $E_{S} > 4 E_{L}$. The ratio $E_{S}/E_{L}$ is shown in Fig. \ref{EQPSoverEL}; the acceptable region of parameters is anywhere above 4. \begin{figure} \begin{center} \includegraphics[scale=0.6]{EQPSoverEL.png} \caption{ The ratio $E_{S}/E_{L}$ as a function of the normal resistance per length for four different materials. For the charge number to be a good quantum number, it is required that $E_{S}> 4E_{L}$. } \label{EQPSoverEL} \end{center} \end{figure} Arbitrarily, we choose the length of the superconducting nanowire to be $X=2\,\mu$m. To satisfy the conditions of the semiclassical approach of the previous section, i.e. $E_S \gg \hbar \Omega$, we choose $E_{S}=3$ THz for the operating frequency $\Omega/ 2\pi = 0.3$ THz. This leads to the critical voltage $V_0=\frac{ 2\pi}{2e} E_{S}= 39$ mV. Assuming the induced alternating voltage collected by the antenna has an amplitude of $V_{AC}=100$ nV, then $4eV_{AC}/\hbar \Omega \sim 10^{-3}$. Therefore, from Eq. (\ref{tunnelingprobACwithdissipationEQPS}), to obtain a significant enhancement it is necessary to have \begin{equation}\label{enhancementcondition} \frac{\Omega \eta}{\omega_p ^2 } \gg \sinh^{-1} (10^{3}) \approx 8, \end{equation} where the dissipation $\eta$, with dimension of radians per second, is defined in Eq. (\ref{QPSjunctionDef1}). Fig.
\ref{EQPS} shows that InO$_x$ could be a possible candidate. In order to obtain $E_{S}=3$ THz, from Figs. \ref{EQPS}, \ref{EL}, \ref{omegapl} and \ref{EQPSoverEL} we obtain the following parameters for an InO$_x$ nanowire: $R_n \approx 0.025$ k$\Omega$/nm, $E_L\approx 120$ GHz, $\omega_p/2\pi\approx 0.84$ THz, and $E_S/E_L \approx 24$. Using these parameters in Eq. (\ref{enhancementcondition}), in order to have a significant enhancement the total resistance of the circuit needs to be much larger than $490$ k$\Omega$. For an on-chip resistance, NiCr thin-film resistors can be used. One of the significant advantages of the proposed detector is the simplicity of its fabrication using single-layer lithography. The superconducting nanowire and the antenna can be fabricated lithographically on the same substrate. Avoiding shunting parasitic capacitances might be challenging and requires extra attention. The width of the QPS element is on the order of $10$ nm to $20$ nm; therefore, a large number of them can be fabricated in parallel, which makes them good candidates for applications that require many elements, like imaging, or for higher detection and coupling efficiency. In the proposed method the detector is made of narrow nanowires with width smaller than the coherence length, so the presence of vortices can be ignored: structures smaller than $4.4 \xi$ cannot support vortices. The absence of vortices might improve the noise performance of the proposed detector. \section{Concluding Remarks} We studied the stimulation effect of a weak high-frequency field on the tunneling of a fluxoid out of the zero-current state. Our approach was to use the duality transformation between a Josephson junction and a QPS junction to map the dynamics of the QPS charge onto a circuit model, and then to study the effect of a high-frequency alternating field on the coherent tunneling rate.
A similar problem has been studied for the Josephson junction using semiclassical physics \cite{SovPhysJETP.63.1986, PhysRevLett.55.1614, SovPhysJetpIvelevMelnikov1985}, which we adapted to the case of a QPS junction. We observed that in a strongly dissipative superconducting ultranarrow nanowire, a high-frequency field can enhance the probability of quantum tunneling super-exponentially. Interestingly, we find that the enhancement of the SQPS rate is more pronounced in wires with a small non-stimulated QPS rate. This result predicts that quantum phase slip qubits should work better in the presence of a weak driving field. The rate enhancement and its dependence on the driving frequency can be exploited in designing energy-resolving high-frequency detectors. We outlined a new type of high-frequency detector based on the QPS phenomenon in superconducting nanowires, introducing its basic physics and design. We then investigated the possibility of its realization using the materials employed in studies of QPS in superconducting nanowires, and showed that the theoretical restrictions can be met by choosing appropriate design parameters. \begin{acknowledgments} This work is supported by the NSERC Discovery Grant. This research is also sponsored by CryptoWorks21, an NSERC funded collaborative research and training experience program. AJS and MHA thank Frank K. Wilhelm-Mauch for fruitful discussions. \end{acknowledgments}
\section{Introduction} Non-perturbative effects in string theory are a key ingredient in the proper understanding of the theory, and in particular of the dynamics of compactifications to four dimensions. Already in the early days, non-perturbative effects (in the form of strongly coupled field theory sectors) were considered to underlie moduli stabilization and supersymmetry breaking \cite{Dine:1985rz,Derendinger:1985kk}. The formal developments on euclidean brane instanton effects (see e.g. \cite{Becker:1995kb,Witten:1996bn, Harvey:1999as,Witten:1999eg}), in particular D-brane instantons in type II compactifications (or their F/M-theory duals), have led to a variety of (in some cases very explicit) applications to e.g. moduli stabilization \cite{Kachru:2003aw,Denef:2004dm,Denef:2005mm} and the generation of perturbatively forbidden couplings \cite{Blumenhagen:2006xt,Ibanez:2006da} (see also \cite{Haack:2006cy,Florea:2006si,Ibanez:2007rs} and \cite{Blumenhagen:2007zk} for related applications). D-brane instantons in local D-brane models have also been explored, recovering field theory gauge instanton effects \cite{Billo:2002hm,Akerblom:2006hx,Bianchi:2007fx,Argurio:2007vqa,Bianchi:2007wy}, and with realizations of old and new models of supersymmetry breaking \cite{Argurio:2007qk,Aharony:2007db,Aganagic:2007py}. Other formal aspects of D-brane instantons have been recently discussed in e.g. \cite{Billo:2007py,Akerblom:2007uc}. In this paper we discuss an interesting formal aspect of non-perturbative superpotentials generated by instantons in string theory. In 4d ${\cal N}=1$ supersymmetric compactifications of string theory, non-perturbative contributions to the superpotential arise from brane instantons with two fermion zero modes, which are saturated by the $d^2\theta$ superspace integration. These must necessarily be 1/2-BPS branes, so that the fermion zero modes are given by the Goldstinos of the two broken supersymmetries.
Hence, the non-perturbative superpotential depends on the precise list of BPS branes (satisfying certain additional constraints, like the absence of extra fermion zero modes) at a given point in moduli space. Now it is a well-known fact that the spectrum of BPS branes can jump discontinuously across lines of marginal stability \cite{Douglas:2000qw,Douglas:2000ah,Douglas:2000gi,Denef:2000nb,Denef:2001xn,Aspinwall:2004jr}. Namely, in type IIA compactifications the spectrum of supersymmetric D2-brane instantons may jump as one moves in complex structure moduli space (with the geometric interpretation that a supersymmetric 3-cycle may split in two independent supersymmetric 3-cycles when the complex structure is changed); similarly for D-brane instantons in type IIB compactifications as one moves in Kahler moduli space. It is therefore a natural question whether the non-perturbative superpotential is continuous across these lines of marginal stability. This is expected, given that superpotentials are protected quantities. In fact, an abrupt change in the superpotential would correspond to a non-holomorphic dependence on the moduli (since marginal stability walls are typically of codimension one), which is not compatible with supersymmetry. It turns out that the microscopic explanation of the continuity of the non-perturbative superpotential is related to a wealth of previously unnoticed surprises in D-brane instanton physics. We devote the present paper to uncovering them in a few illustrative examples, leaving a systematic discussion for future work. The first interesting novelty is that multi-instanton processes can contribute to the non-perturbative superpotential. Consider an instanton A that contributes to the non-perturbative superpotential, and which reaches a line of marginal stability where it splits into two instantons B and C. 
Although the instantons B and C do not in general contribute to the non-perturbative superpotential by themselves, the 2-instanton process involving B and C simultaneously does lead to a contribution to the superpotential. A key ingredient is that extra zero modes of the two individual instantons are saturated against each other, in such a way that only two fermion zero modes are left over for the combined system, see Figure \ref{twoinstanton}. Although multi-instanton processes have been extensively studied for ${\cal N}=2$ and ${\cal N}=4$ supersymmetric gauge theories (see \cite{Dorey:2002ik} for a review), the possibility of having them generate non-perturbative superpotentials in ${\cal N}=1$ theories has not been considered in the past. Moreover, our result implies that the usual strategy to compute the non-perturbative superpotential by summing all contributions from suitable BPS instantons may miss important contributions, due to multi-instanton processes. We present several explicit examples of this phenomenon, for D-brane instantons with or without interpretation as gauge theory instantons. A second interesting novelty arises from regarding the above 2-instanton process as a non-perturbative lifting of fermion zero modes. In considering the effective 4d interaction generated by, say, the instanton B, one needs to consider the possible interaction terms which may lift fermion zero modes (at the Gaussian level). The above mechanism corresponds to a non-perturbative contribution to the interactions of the fermion zero modes of the instanton B induced by the instanton C, so that the former can contribute to the superpotential. This will be more explicitly discussed in several examples. A final interesting surprise is related to non-gauge D-brane instantons (these are standard D-brane instantons, but we refer to them as non-gauge, or sometimes exotic, to distinguish them from D-brane instantons with gauge field theory interpretation).
It is usually considered that, for a D-brane instanton in a perturbative type II model to contribute to the superpotential, it must have Chan-Paton symmetry $O(1)$, so that the orientifold projection eliminates some of the universal fermion zero modes (arising from an accidental ${\cal N}=2$ supersymmetry in the relevant open string sector). We however present several examples where instantons with $U(1)$ symmetries contribute to the superpotential, with the extra zero modes being saturated by interactions in the instanton world-volume effective action. As a last remark, the continuity of the non-perturbative superpotential, combined with string duality, will lead to new interesting properties of non-perturbative superpotentials in F-theory compactifications across certain topology changing phase transitions, as we discuss in Section~\ref{sec:F-theory}. Before entering the discussion, we would like to present the problem of the continuity of the non-perturbative superpotential in terms more familiar from the model building viewpoint, in the context of the recent approaches using non-perturbative superpotentials to stabilize Kahler moduli in type IIB compactifications \cite{Kachru:2003aw}. For concreteness, consider a compactification with a gauge sector arising from a stack of D7-branes. In general, such a configuration of branes is supersymmetric at a point (or locus) $P$ in Kahler moduli space. At other points or loci $Q$ in moduli space, the D7-branes have misaligned BPS phases and recombine to form bound states, which correspond to BPS branes at point $Q$. The field theory interpretation is that Kahler moduli couple as Fayet-Iliopoulos terms to the D-branes, which trigger processes of Higgsing/unHiggsing in the gauge theory. In any event, the gauge sector arising from the D7-branes is different at the points $P$, $Q$.
Consider that the gauge sector at $P$ develops a non-perturbative superpotential for the Kahler moduli, such that the resulting scalar potential stabilizes the moduli at the point $Q$. If the non-perturbative superpotential were not continuous across the line of marginal stability, we would find ourselves in the paradoxical situation that the minimum lies at a point where the original potential is no longer valid. Needless to say, such behavior would enormously complicate the problem of moduli stabilization. Happily, superpotentials are far better behaved quantities, which can be used universally all over moduli space. In this paper we focus on non-perturbative superpotentials. On general grounds we expect that other quantities, such as higher derivative F-terms, arising from BPS instantons with additional fermion zero modes, are also continuous all over moduli space. We leave a systematic understanding for future work, and content ourselves here with discussing the continuity of the superpotential in a series of illustrative examples. The paper is organized as follows. In Section~\ref{sec:background} we discuss some relevant background material on instantons, both gauge and non-gauge, and we introduce the geometric backgrounds we will consider. In Section~\ref{sec:non-gauge} we discuss the continuity of the superpotential for non-gauge instantons, explaining the role of multi-instanton processes. In Section~\ref{gaugeinst} we go on to discuss continuity and multi-instanton effects for gauge instantons in string theory. In particular we describe the continuity of the superpotential under Seiberg duality. In Section~\ref{sec:exotic-to-gauge} we study motions in moduli space that convert gauge instantons into non-gauge instantons, and vice versa. In Section~\ref{sec:F-theory} we describe the dual realization of the processes we study in F and M theory.
Section~\ref{sec:conclusions} contains our conclusions, and finally Appendix~\ref{nosplit} discusses some exotic geometric processes that evade the assumptions in this paper, and might lead to discontinuities in the superpotential. \section{Some background material} \label{sec:background} \subsection{Instanton effects} \label{introinst} In dealing with euclidean brane instantons in string theory compactifications, it is convenient to make some general classifications and distinctions, which are useful for future reference. For concreteness we focus on D-brane instantons, although the effects can arise from other brane instantons in dual pictures (a prototypical example is given by euclidean D3-brane instantons in type IIB on CY-threefolds, described as M5-brane instantons in M-theory on CY-fourfolds \cite{Witten:1996bn}). A first class of D-brane instantons corresponds to those whose internal structure is exactly the same as that of some of the 4d space filling branes in the background. Namely, in geometric setups, euclidean D$p$-branes wrapping the same $(p+1)$-cycle (and carrying the same world-volume gauge bundle) as some D$(p+4)$-branes in the background configuration (in more abstract CFT terms, they should be described by the same boundary state of the internal CFT). Such D-brane instantons correspond to gauge instantons in the corresponding 4d gauge sector, and thus reproduce non-perturbative effects arising from strong gauge dynamics. \subsubsection{Superpotentials from gauge D-brane instantons} \label{supogauge} A prototypical case, which will appear in our examples, is the generation of the Affleck-Dine-Seiberg superpotential \begin{eqnarray} W\, =\, (N_c-N_f)\, \left(\, \frac{\Lambda^{3N_c-N_f}}{\det M} \, \right)^{\frac{1}{N_c-N_f}} \end{eqnarray} on a set of 4d space filling branes whose low-energy dynamics corresponds to $SU(N_c)$ SQCD with $N_f$ flavours (with dynamical scale $\Lambda$; here $M$ denotes the meson fields).
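For orientation, it is worth writing out the two limiting cases that recur below; this is standard SQCD lore rather than anything specific to the D-brane realization. For $N_f=N_c-1$ the exponent equals one and the superpotential reduces to the one-instanton expression
\begin{eqnarray}
W\, =\, \frac{\Lambda^{2N_c+1}}{\det M}
\end{eqnarray}
while for $N_f=0$ one recovers the gaugino condensation superpotential of pure $SU(N_c)$ SYM,
\begin{eqnarray}
W\, =\, N_c\, \Lambda^3
\end{eqnarray}
generated by $\frac{1}{N_c}$-fractional instantons.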
For $N_f=N_c-1$ this arises from classical 4d instanton field configurations, and has been recovered from D-brane instantons in several instances with different levels of detail \cite{Acharya:2000gb,Bershadsky:1996gx,Florea:2006si,Akerblom:2006hx}. For other values of $N_f<N_c$, it does not arise from classical 4d field configurations, and is obtained indirectly. Alternatively, it can be obtained by considering the theory on $\bf S^1$, where there exist suitable 3d classical field configurations (sometimes denoted calorons) leading to a 3d superpotential, which can be argued to survive in the 4d decompactification limit (with a microscopic description in terms of putative objects denoted ``fractional instantons'' or ``merons''). The latter description fits perfectly with the string theory realization. Indeed, the computation of e.g. the euclidean D3-brane instanton superpotential in type IIB configurations with gauge sectors on D7-branes on 4-cycles is usually described by invoking compactification on a circle in order to use an M-theory dual. Upon compactification, one may use T-duality, leading to a picture where instantons are D-branes stretched along the circle direction, and gauge D-branes are pointlike on it. In this picture the superpotential is generated by ``fractional'' D-branes, which are suspended between the gauge D-branes and thus stretch only a fraction of the period along the circle direction \cite{Brodie:1998bv}. Equivalently, in the dual M-theory picture, the gauge D7-branes turn into degenerations of the elliptic fibration, such that the fiber over the 4-cycle on the base is a sausage of 2-spheres. The ADS superpotential is generated by M5-branes which wrap the 4-cycle times a 2-sphere (leading to ``fractional'' objects, in the sense that the standard 4d gauge instanton corresponds to an M5-brane wrapping the whole fiber) \cite{Bershadsky:1996gx}. 
We will often abuse language and regard the 4d ADS superpotential as generated by (fractional) instantons, although strictly speaking only the 3d ADS superpotential has such a microscopic description. It is interesting to point out that many of the manipulations in the analysis of ${\cal N}=1$ supersymmetric field theories, usually carried out in terms of the exact effective action, can be carried out microscopically in terms of the physics of the relevant (possibly fractional) instantons. For instance, an important point in working with gauge instantons in our examples below is the derivation, from the instanton physics viewpoint, of the matching of scales in processes like integrating out massive 4d matter. Let us describe this in a simple example. Consider an $SU(N_c)$ theory with $N_f<N_c$ flavours with mass (matrix) $m$, and with dynamical scale $\Lambda=(e^{-1/g^2})^{1/(3N_c-N_f)}$. Consider first the situation where we neglect the effect of the mass term on the instanton physics. Then the instanton feels $N_f$ massless flavours and the non-perturbative dynamics is described by the effect of a $\frac{1}{N_c-N_f}$-fractional instanton, leading to the total superpotential \begin{eqnarray} W\, =\, (N_c-N_f)\, \left(\, \frac{\Lambda^{3N_c-N_f}}{\det Q{\tilde Q}}\, \right)^{\frac{1}{N_c-N_f}}\, +\, m \, Q{\tilde Q} \label{supomatch} \end{eqnarray} We may instead use an alternative description where we include the effect of the mass terms from the start. From the spacetime viewpoint, we integrate out the massive flavours. From the instanton perspective, the instanton feels that the $2N_f$ fermion zero modes $\alpha,\beta$ associated to the flavours (in the D-brane picture, open strings stretched between the instanton brane and the flavour branes) are actually massive \footnote{This follows from the fact that the massive flavours are open strings from the color to the flavour branes, and that gauge brane instantons wrap exactly on top of the color branes.}.
Integrating out the term $S_{\rm inst}=m \alpha \beta$ in the instanton action leads to a prefactor of $\det m$ in the amplitude for the left-over $\frac{1}{N_c}$-fractional instanton. Therefore the superpotential is \begin{eqnarray} W\, = \, N_c\, \left(\, \Lambda^{3N_c-N_f}\, \det m \,\right)^{\frac{1}{N_c}} \label{supomatch2} \end{eqnarray} This is the standard $\frac{1}{N_c}$-fractional instanton amplitude for a SYM sector with effective scale $\Lambda'$ defined by \begin{eqnarray} \Lambda'^{\, 3N_c}\, =\, \Lambda^{3N_c-N_f}\, \det m \end{eqnarray} Note that (\ref{supomatch2}) in fact agrees with (\ref{supomatch}) upon integrating out the massive flavours in the latter, namely solving $\partial W/\partial (Q{\tilde Q})=0$ for $Q{\tilde Q}$ and substituting back. Also, the matching of scales is the familiar one in field theory. \medskip For future reference, let us mention that D-brane instantons associated as above to 4d space filling D-branes can lead to non-perturbative superpotentials despite the fact that there are 4 universal fermion zero modes in the instanton-instanton open string sector. Indeed, two of these fermion zero modes have cubic couplings to the bosonic and fermionic zero modes in the mixed open string sector (strings stretched between the instanton and the gauge D-branes). Their role can be regarded as imposing the fermionic constraints needed to recover the ADHM instanton measure \cite{Billo:2002hm}. The two left-over fermion zero modes are the Goldstinos of the ${\cal N}=1$ supersymmetry, and are saturated by the $d^2\theta$ integration involved in the induced superpotential. \subsubsection{Non-gauge, ``exotic'' or ``stringy'' instantons} \label{supostringy} In general a euclidean D-brane instanton does not have the same internal structure as any gauge D-brane in the configuration. Such D-brane instantons do not have any known gauge field theory interpretation, and are thus dubbed ``exotic'' or ``stringy'' instantons.
BPS instantons of this kind lead to superpotential terms only if they have two fermion zero modes, with additional fermion zero modes forcing multi-fermion insertions leading to the higher F-terms described below (additional fermion zero modes with couplings to 4d chiral multiplets are regarded here as non-zero modes, since they are lifted by background values of the latter; equivalently, integration over these zero modes leads to insertions of the 4d chiral multiplets in the induced superpotential). We are thus interested in stringy instantons with two fermion zero modes. In the same way as for gauge instantons, there are 4 universal fermion zero modes in the instanton-instanton open string sector. However, in this case there are no bosonic zero modes which can lift the two non-Goldstino modes. In the absence of other lifting mechanisms (like closed string flux backgrounds), the only mechanism which can eliminate these extra modes in type II perturbative models is an orientifold projection. Therefore, only instantons invariant under the orientifold action, and with a Chan-Paton action leading to an $O(1)$ symmetry, have two universal fermion zero modes, and have a chance of leading to a non-perturbative superpotential (provided, of course, that they do not have extra fermion zero modes in other sectors). \subsubsection{Higher F-terms from D-brane instantons} Besides D-brane instantons generating superpotentials, BPS D-brane instantons with additional fermion zero modes lead to higher F-terms in the effective action. These have been considered in \cite{Beasley:2004ys,Beasley:2005iu}, and lead to operators with one insertion of ${\overline D}{\overline \Phi}$ for each additional fermion zero mode.
Roughly speaking they have the structure \begin{eqnarray} \int \,d^4x\, d^2\theta\, w_{\overline i_1\overline j_1\ldots \overline i_n\overline j_n} (\Phi)\, {\overline D} {\overline \Phi}^{\overline i_1}\, {\overline D} {\overline \Phi}^{\overline j_1} \ldots {\overline D} {\overline \Phi}^{\overline i_n}\, {\overline D} {\overline \Phi}^{\overline j_n} \label{bwop} \end{eqnarray} where the tensor $w(\Phi)$ depends holomorphically on the 4d chiral multiplets. The simplest situation is an instanton with two additional fermion zero modes, which is for instance realized for gauge instantons in $N_f=N_c$ SQCD. The corresponding operator has the above structure with $n=1$ and implements the familiar complex deformation of the moduli space (in an intrinsic way, in the sense of the moduli space geometry). The study of the interplay between non-perturbative higher F-terms and lines of marginal stability is beyond the scope of this paper, although we expect that it admits a similar microscopic description in terms of multi-instanton contributions after instanton splitting. In any event, even for the analysis of superpotential terms, such instantons will play an interesting role in some of our examples. We refer to these instantons as Beasley-Witten instantons. \subsection{A useful family of geometries} \label{geometries} Here we describe a set of geometries which we use in several of our explicit examples below. They are non-compact geometries, but they suffice to study instanton effects and transitions as long as these involve just the local structure of compact cycles (see footnote \ref{noncompact} for one example where non-compactness is relevant to the discussion). Let us consider the following class of local Calabi-Yau manifolds, described by \begin{eqnarray} xy= \prod_{k=1}^P (z-a_k) \nonumber \\ x'y'= \prod_{k'=1}^{P'} (z-b_{k'}) \end{eqnarray} This kind of geometry is a particular case of those considered in \cite{Ooguri:1997ih}.
It describes two $\bf C^*$ fibrations, parametrized by $x,y$ and $x',y'$, varying over the complex plane $z$, and degenerating at the locations $a_k$, $b_{k'}$ respectively. In this geometry one can construct Lagrangian 3-cycles by considering segments joining pairs of degeneration points on the base, and fibering over them the two $\bf S^1$'s in the two $\bf C^*$ fibers. Segments joining pairs of $a$-type degenerations or pairs of $b$-type degenerations lead to 3-cycles with topology $\bf S^2\times \bf S^1$. Segments joining $a$- and $b$-type degenerations lead to 3-cycles with topology $\bf S^3$. Let us introduce the notation $[p_1,p_2]$ for the 3-cycle associated to the pair of degeneration points $p_1$, $p_2$, whatever their type. Introducing the holomorphic 3-form \begin{eqnarray} \Omega \, = \, \frac{dx}{x} \, \frac{dx'}{x'}\, dz \end{eqnarray} the 3-cycle $[p_1,p_2]$ is calibrated by the form $e^{i\theta}\Omega$, where $\theta$ is the angle of the segment $[p_1,p_2]$ with the real axis in the $z$-plane. Namely ${\rm Im}\,(e^{i\theta} \Omega)|_{[p_1,p_2]}=0$, where $|_{[p_1,p_2]}$ denotes restriction to the 3-cycle. Hence, segments which are parallel in the $z$-plane define 3-cycles which preserve a common supersymmetry. We will be interested in configurations where all degenerations are on (or near) the real axis. We will consider stacks of 4d space filling D6-branes and/or euclidean D2-branes wrapping the different 3-cycles, and describe the non-perturbative superpotentials arising from these configurations. The open string modes and their interactions are easy to determine. For instance, each stack of $N$ D6-branes on a 3-cycle leads to a $U(N)$ gauge group, in a vector multiplet of ${\cal N}=1$ supersymmetry for 3-cycles of $\bf S^3$ topology, and of ${\cal N}=2$ supersymmetry for 3-cycles of $\bf S^2\times \bf S^1$ topology. The angle $\theta$ introduced above determines the precise supersymmetry preserved by the corresponding set of branes.
Also, two D6-branes wrapping two 3-cycles involving one common degeneration point lead to a vector-like pair of bi-fundamental chiral multiplets, arising from open strings at the intersection of the 3-cycles (which is topologically $\bf S^1$, coming from the $\bf C^*$ that does not degenerate at the intersection). As discussed in \cite{Ooguri:1997ih} one can perform T-dualities along the two $\bf S^1$ directions, and map the configuration to a Hanany-Witten setup of $P$ NS-branes (along 012345) and $P'$ NS'-branes (along 012389), with D4-branes (along 01236) suspended among them, in a flat space geometry with a non-compact $x^6$ direction (in contrast to the usual Hanany-Witten configurations describing systems such as the conifold, where $x^6$ is compact). The gauge theory content described above follows from the standard rules in this setup (see \cite{Giveon:1998sr}). This picture also facilitates the computation of the superpotential, whose general discussion we skip, presenting it instead for our concrete example below. \section{Non-gauge D-brane instantons} \label{sec:non-gauge} In this section we consider ``exotic'' D-brane instantons (i.e. instantons arising from D-branes wrapping internal cycles different from those wrapped by the spacetime filling branes in the model). For simplicity we restrict ourselves to perturbative type IIA Calabi-Yau compactifications in the absence of fluxes. The aim of this section is to show the continuity of the non-perturbative superpotential across lines of marginal stability for the instantons. We show that the microscopic mechanism underlying this continuity reveals interesting new properties of D-brane instanton physics, including multi-instanton processes and non-perturbative lifting of fermion zero modes.
We have already mentioned in Section \ref{supostringy} that in perturbative type II models (and in the absence of additional ingredients like 3-form fluxes), for instantons to have just the two fermion zero modes required to contribute to the superpotential, they should be mapped to themselves under the orientifold action and have an $O(1)$ Chan-Paton symmetry. This constrains the possible splittings of the instanton at walls of marginal stability; for instance an $O(1)$ instanton cannot split into two $O(1)$ instantons, as we show in Appendix \ref{nosplit}. Still, there is enough freedom to have non-trivial splittings of instantons that contribute to the superpotential, as we now discuss in a simple example. \subsection{$O(1)$ instanton splitting as $O(1)\times U(1)$ instantons} \subsubsection{Configuration and marginal stability line} \label{onetotwo} In this section we consider one simple example of an $O(1)$ instanton $A$, which contributes to the non-perturbative superpotential, and can reach a line of marginal stability on which it splits as an $O(1)$ instanton $B$ and a $U(1)$ instanton (described as a brane $C$ and its image $C'$). \begin{figure} \begin{center} \inputfig{splitting1} \caption{\small Example of an $O(1)$ instanton $A$ (figure a) splitting into an $O(1)$ instanton $B$ and a $U(1)$ instanton $C$ and its image $C'$ (figure b).} \label{splitting1} \end{center} \end{figure} Consider a geometry of the kind introduced in Section \ref{geometries}, with two degenerations $a_1$, $b_1$ located at $z=\pm t/2$, with $t\in \bf R$, and two degenerations $a_2$, $b_2$ located at $z= \pm s/2+i\epsilon$, with $s, \epsilon \in \bf R$, and $s<t$ for concreteness, see Figure \ref{splitting1}.
Namely \begin{eqnarray} x^2+y^2\, = \, (z+t/2)(z-s/2-i\epsilon) \nonumber\\ x'^2+y'^2\, =\, (z+s/2-i\epsilon)(z-t/2) \end{eqnarray} Consider modding out the geometry by the orientifold action $\Omega R(-1)^{F_L}$, where $R$ is the antiholomorphic involution \begin{eqnarray} z\to -{\overline z} \quad ; \quad (x,y) \leftrightarrow \left({\overline x}', {\overline y}'\right) \label{orient} \end{eqnarray} The set of fixed points defines an O6-plane along the imaginary $z$ axis. This orientifold exchanges degenerations of $a$ and $b$ type. The parameters $s,t,\epsilon$ belong to chiral multiplets associated to complex structure moduli invariant under the orientifold action. We choose the O6-plane charge such that it leads to $O(1)$ Chan-Paton symmetry for D2-brane instantons on 3-cycles defined by horizontal segments crossing it. For generic non-zero $\epsilon$ there are two $O(1)$ instantons in this configuration, corresponding to D2-branes on the segments $[a_1,b_1]$ (denoted instanton $A$) and $[b_2,a_2]$ (denoted instanton $B$). Each has just two fermion zero modes, and therefore leads to a contribution to the non-perturbative superpotential \begin{eqnarray} W\, =\, f_1 e^{-T} \, +\, f_2 e^{-S} \label{twoinst} \end{eqnarray} where $T,S$ are the closed string chiral multiplets whose real parts are given by the moduli $t,s$ controlling the sizes of the wrapped 3-cycles. Here $f_i$ are prefactors given by one-loop determinants, which depend on the Kahler moduli (but not on the complex structure moduli). When $\epsilon$ is taken to zero, the four degenerations align, instanton $A$ reaches a line of marginal stability, and it splits into an instanton of type $B$, and a $U(1)$ instanton corresponding to a D2-brane on $[a_1,b_2]$ and its orientifold image on $[a_2,b_1]$ (denoted $C$ and $C'$ respectively). Since the complete superpotential should behave continuously in this motion in moduli space, there should be suitable instanton processes reproducing it.
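The bookkeeping of homology classes and classical actions makes the expected continuity quantitative. At $\epsilon=0$ the segment $[a_1,b_1]$ decomposes as (introducing, just for this check, the notation $T_C$, $T_{C'}$ for the classical actions of the D2-branes on $[a_1,b_2]$, $[a_2,b_1]$)
\begin{eqnarray}
[a_1,b_1]\, =\, [a_1,b_2]\, +\, [b_2,a_2]\, +\, [a_2,b_1] \quad \Longrightarrow \quad t\, =\, \frac{t-s}{2}\, +\, s\, +\, \frac{t-s}{2} \quad \Longrightarrow \quad T\, =\, T_C\, +\, S\, +\, T_{C'}
\end{eqnarray}
so a joint process involving $B$, $C$ and $C'$ carries exactly the classical suppression $e^{-T}$ of the term to be reproduced.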
There are only two basic instantons, namely the $O(1)$ instanton $B$ on $[b_2,a_2]$, which indeed reproduces the $e^{-S}$ term in (\ref{twoinst}), and the $U(1)$ instanton $C$ (with its image $C'$), which has four fermion zero modes and does not contribute to the superpotential. Hence, there is no single instanton which reproduces the $e^{-T}$ term. In analogy with the analysis in Section~\ref{gaugeinst} for gauge D-brane instantons, the resolution of the puzzle lies in the mutual influence of different instantons, and can be understood in different ways, as we now describe. \subsubsection{The 2-instanton process} \label{thetwoinst} In order to show that the 2-instanton process contributes to the superpotential, we have to discuss the structure of zero modes in the 2-instanton configuration, and how they are saturated. This will involve the saturation of additional zero modes due to higher order interactions in the instanton world-volume effective action. Let us briefly describe the structure of zero modes in the different sectors. We refer to the instantons $C$, $B$ as $1$, $2$ in this section. $\bullet$ In the 11 sector (and its $1'1'$ image), the open strings feel a background with 8 supercharges, half of which are broken by the instanton. We have a $U(1)$ gauge symmetry (although there are no gauge bosons), four bosonic zero modes $x_1^\mu$ corresponding to the 4d translational Goldstones, and four fermionic zero modes $\theta_1^\alpha$, ${\tilde \theta}_{1\,\dot{\alpha}}$, corresponding to the Goldstinos. Note that the Lorentz symmetry under which these are chiral spinors is a global symmetry from the instanton world-volume viewpoint. $\bullet$ The 22 sector is sensitive to the orientifold action and hence feels a background with 4 supercharges, half of which are broken by the instanton. The orientifold projection truncates part of the spectrum, as compared with the above $U(1)$ instanton case.
There is an $O(1)\equiv \bf Z_2$ gauge symmetry, and four bosonic zero modes $x_2^\mu$. $\bullet$ Consider now the spectrum of open strings stretching at the 12 intersection (and its image $1'2$). Locally around it, the background admits 16 supersymmetries, half of which are broken by the D-branes. The massless modes thus form a hypermultiplet under the unbroken 8 supersymmetries. We have two complex bosonic zero modes $\varphi_{12}$, $\varphi_{21}$, with charges $+1$ and $-1$ under the $U(1)_1$ gauge symmetry of the instanton 1, and four fermionic zero modes, $\chi_{12}^\alpha$, $\chi_{21}^\alpha$, with charges $+1$ and $-1$ under $U(1)_1$. Alternatively, these can be conjugated to ${\overline \chi}_{21\, \dot{\alpha}}$, ${\overline \chi}_{12\, \dot{\alpha}}$, with charges $-1$, $+1$. Let us call the chiral superfields in the hypermultiplet $\Phi_{12}$ and $\Phi_{21}$. \medskip Let us now describe the couplings of these modes on the volume of the instanton. They are analogous (upon dimensional reduction) to the couplings that would appear if we had D6-branes instead of D2-branes. There is a first term which describes the mass terms of the open strings between the two instantons when they are separated in the 4d direction \begin{equation} S_{kinetic}\, =\, (x_1^\mu - x_2^\mu)^2\, (|\varphi_{12}|^2 + |\varphi_{21}|^2)\, +\,i(x_1^\mu - x_2^\mu)\, \{\, {\overline \chi}_{12} \sigma_\mu \chi_{12} - {\overline \chi}_{21} \sigma_\mu \chi_{21}\,\} \label{eq:kinetic-couplings} \end{equation} These terms are related to the couplings to gauge bosons in the D6-D6 system. There are also terms involving the neutral fermion zero modes $\theta$, ${\tilde \theta}$ (analogous to the couplings to gauginos in the D6-D6 system), given by \footnote{Similar couplings in the context of a D2-instanton intersecting its orientifold image have been described in \cite{Blumenhagen:2007bn}.}
\begin{equation} S_{\lambda} = (\chi_{12}\,(\theta_1 - \theta_2))\varphi_{12}^* - (\chi_{21}\,(\theta_1 - \theta_2))\varphi_{21}^* + ({\overline \chi}_{12}\tilde\theta)\varphi_{12} - ({\overline \chi}_{21}\tilde\theta)\varphi_{21} \label{eq:gaugino-couplings} \end{equation} Notice that the combination $\theta_1+\theta_2$ is decoupled, and corresponds to the two Goldstinos of the combined two-instanton system. We also have a D-term potential (the same as would arise in a D6-D6-brane system): \begin{equation} S_D\, =\, (\, |\varphi_{12}|^2 - |\varphi_{21}|^2\, )^2 \end{equation} Finally, there are quartic couplings involving the fields in the 12 sector. The local intersection preserves 8 supercharges, but the interaction is induced by effects that preserve only 4 supercharges (due to the different nature of the degenerations at the intersection and adjacent to it). The interaction can be obtained from a superpotential of the form \begin{eqnarray} W\, \simeq \, (\, \Phi_{12}\Phi_{21}\,)^2 \end{eqnarray} This is in fact identical to the superpotential that would be obtained for D6-branes. The underlying reason is that both D2- and D6-branes have identical boundary states of the internal CFT (and a flip of DD to NN boundary conditions in the 4d part), thereby leading to essentially the same correlation functions. Thus we obtain fermion-scalar interactions of the form \begin{eqnarray} S_{\chi^2\varphi^2}\, =\, \chi_{12}\varphi_{21}\chi_{12}\varphi_{21} \, +\, 2\chi_{12}\chi_{21}\varphi_{12}\varphi_{21} \, +\, \varphi_{12}\chi_{21}\varphi_{12}\chi_{21} \quad + \text{h.c.} \end{eqnarray} and the F-term scalar potential \begin{eqnarray} S_{F}\, =\, |\varphi_{21}\varphi_{12}\varphi_{21}|^2\, +\, |\varphi_{12} \varphi_{21}\varphi_{12}|^2 \end{eqnarray} Let us now consider the role of this complete instanton effective action in the generation of a non-perturbative superpotential.
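The saturation argument below uses repeatedly the basic rule of Grassmann integration, which we recall schematically for convenience (with $\eta_1$, $\eta_2$ standing for any pair of fermionic zero modes, and $c$ for the coefficient of their bilinear coupling): the zero mode integral is non-vanishing only if each mode is brought down exactly once from the expansion of the exponential,
\begin{eqnarray}
\int d\eta_1\, d\eta_2\,\, e^{-\,c\,\eta_1\eta_2}\, =\, \int d\eta_1\, d\eta_2\, \left(\, 1\, -\, c\,\eta_1\eta_2\,\right)\, =\, \pm\, c
\end{eqnarray}
with the sign fixed by the ordering convention. Modes with no couplings in the action, like the Goldstinos $\theta_1+\theta_2$, must instead be saturated by external insertions, here the $d^2\theta$ of the induced superpotential.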
Notice that the contribution to the superpotential is dominated by configurations of overlapping instantons, namely $x_1-x_2=0$, as follows. A large non-zero separation gives large masses to the open strings between the instantons (consistently with (\ref{eq:kinetic-couplings})), so we can integrate out these fields and set their vevs to zero, making the couplings in (\ref{eq:gaugino-couplings}) vanish. Then we cannot saturate the $\theta_1-\theta_2$ zero modes, and the integral vanishes. So let us focus for simplicity \footnote{In fact it is possible, and not much harder, to carry out the computation allowing for arbitrary $x_1-x_2$. Namely, one can perform the Gaussian integration over these bosonic zero modes, and conclude that the result is localized (with some exponentially vanishing tail) onto $x_1=x_2$. We omit the detailed analysis since the conclusions are essentially unchanged, and the simplified discussion is enough to show that the 2-instanton process at hand provides a non-trivial contribution.} on the case $x_1-x_2=0$. In this case we have an instanton action given by $S_{\text{2inst}}=S_\lambda+S_D+S_{\chi^2\varphi^2}+S_F$. The pieces relevant for the saturation of zero modes are $S_\lambda$ and $S_{\chi^2\varphi^2}$. We can soak up $(\theta_1-\theta_2)$ by bringing down two insertions of $(\chi_{12}\,(\theta_1 -\theta_2))\varphi_{12}^*$ from $S_\lambda$. Similarly we can soak up $\tilde\theta$ by bringing down two insertions of $({\overline \chi}_{12}\tilde\theta)\varphi_{12}$. This also saturates the zero modes $\chi_{12}$, ${\overline \chi}_{12}$. The remaining zero modes $\chi_{21}$, ${\overline \chi}_{21}$ can be soaked up by bringing down two insertions of $\varphi_{12}\chi_{21}\varphi_{12}\chi_{21}$ from $S_{\chi^2\varphi^2}$ and two insertions of its complex conjugate operator.
Bringing everything together, and integrating over the (saturated) fermionic zero modes, we get the following 2-instanton contribution: \begin{equation} \int d^4x_+ d^2\theta_+ [d\varphi]\, \exp\left\{-S_D-S_F\right\} |\varphi_{12}|^4 \end{equation} where $x_+=x_1+x_2$, $\theta_+=\theta_1+\theta_2$ are the surviving zero modes of the instanton. Note that the $\varphi$ integral converges, since there are no flat directions in the ($\varphi_{12},\varphi_{21}$) space, as is easily seen from the form of $S_D$ and $S_F$. There are other similar contributions from other combinatorics of soaking up zero modes. The overall result is a non-zero contribution to the superpotential from the 2-instanton process. The above mechanism is very similar to the lifting of accidental zero modes by world-volume interactions in other situations, for instance in the study of instanton effects in 4d ${\cal N}=4$ supersymmetric theories, where a world-volume 4-fermion interaction lifts fermion zero modes in groups of four (and allows multi-instanton processes to contribute to the same 4d effective action terms as single-instanton ones). The analogy could be made much more explicit by integrating over the bosonic modes above, generating world-volume 4-fermion interactions. This is, to our knowledge, the first explicit realization of such a mechanism in the computation of non-perturbative D-brane instanton superpotentials in ${\cal N}=1$ theories. Notice also the interesting fact that in such situations the usual recipe of adding the contributions from the individual instantons misses these new contributions. The spacetime picture of the above mechanism is of the kind shown in Figure \ref{twoinstanton}, with two fermion zero modes of each instanton saturated against each other, and two left-over fermion zero modes. \begin{figure} \begin{center} \inputfig{twoinstanton} \caption{\small Schematic picture of a multi-instanton configuration contributing to the superpotential.
A number of additional fermion zero modes are saturated against each other, due to interaction terms in the world-volume effective action of the 2-instanton system. The two left-over fermion zero modes are the Goldstinos of the overall BPS D-brane instanton system, and are saturated against the $d^2\theta$ integration in the induced 4d effective action superpotential term.} \label{twoinstanton} \end{center} \end{figure} As a last comment, note that the above system fits nicely with the concept of quasi-instanton described in \cite{Dorey:2002ik}. Namely, the bosonic modes $\varphi$ can be described as quasi-zero modes, and they parametrize a quasi-moduli space of quasi-instantons, in the sense that they correspond to a moduli space of instantons which is lifted by a world-volume potential, whose effects can be studied perturbatively in the values of the bosonic fields. Although strictly speaking such configurations do not correspond to BPS instantons, they can provide the dominant dynamical effect in the semiclassical approximation to certain quantities. Note that the additional Goldstinos (those associated to the supersymmetries preserved by BPS instantons) are not turned on at first order in this expansion, and the effect of larger values of the bosonic fields is suppressed by the exponential damping. \subsubsection{Non-perturbative lifting of zero modes of the $U(1)$ instanton} \label{nplift} One can interpret the appearance of the non-trivial contribution to the superpotential as the instanton 2 generating an effective interaction term for the additional zero modes of the instanton 1.
Indeed the piece \begin{eqnarray} \Delta S_{\rm inst 1} = \int \, d^2\theta_2 \, d^4\chi \, d^4\varphi \, \exp [\, -(\theta_1-\theta_2) \, \varphi \, \chi\, - \, {\tilde \theta}_1\, {\overline \varphi}\, {\overline \chi} - \chi^2\, \varphi^2\, -\, V(\varphi)\,] \end{eqnarray} of the integral above can be regarded as computing the non-perturbative contribution of the instanton 2 to the effective action of the instanton 1. The result corresponds to an effective mass term (of non-perturbative strength $e^{-S}$) for the extra fermion zero modes of the instanton 1. Hence the amplitude of the instanton 1 is sketchily of the form \begin{eqnarray} S_{4d} & \simeq & \int d^4x\, d^2\theta \, d^2{\tilde \theta}\, \exp \, (\,-T_1\, -\, e^{-S} \, {\tilde \theta} {\tilde \theta} ) \notag\\ & = & \int d^4 x\, d^2\theta\, e^{-S} e^{-T_1}\, = \,\int\, d^4x\, d^2\theta\, e^{-T} \label{twoexpo} \end{eqnarray} namely the appropriate superpotential term. In Section~\ref{npliftgauge} we will provide yet another viewpoint on the non-perturbative lifting of fermion zero modes. It is very interesting that $U(1)$ instantons can contribute to non-perturbative superpotentials via this mechanism of non-perturbative lifting of the extra zero modes. We also expect other instantons with additional universal fermion zero modes, like $Sp(2)$ instantons, to similarly contribute under special circumstances. It would be interesting to use this mechanism to revisit the role of interesting $U(1)$ and $Sp(2)$ instantons in model building applications, like the instanton scan in \cite{Ibanez:2007rs}. In fact, multi-instanton processes can already arise in simple toroidal orientifolds (see \cite{Ibanez:2007tu} for an explicit ${\bf T}^6/{\bf Z}_3$ example). \subsubsection{4d charged matter insertions} \label{insertions} The bottom line of the above sections is that non-perturbative superpotentials for non-gauge D-brane instantons are continuous across lines of marginal stability.
The microscopic instanton physics mechanism relies on the fact that additional zero modes in multi-instanton processes can be saturated by interactions, leaving only a few zero modes to be saturated by external insertions in 4d correlators. The initial instanton amplitude is thus fully reconstructed by a multi-instanton amplitude. Let us comment on the situation where the initial instanton intersects some of the 4d space filling D-branes in the system. There are fermion zero modes charged under the 4d gauge group at those intersections. In order for the instanton to contribute to the superpotential, these additional fermion zero modes should be coupled to operators involving the 4d charged matter fields, so that upon integration over them (or pulling down these interactions) one generates insertions of the 4d charged matter fields in the 4d effective superpotential, as discussed in \cite{Ganor:1996pe,Blumenhagen:2006xt,Ibanez:2006da,Florea:2006si}. The appearance of the same insertions in the multi-instanton amplitude at the line of marginal stability is easy to show: notice that the homology charge of the contributing D-brane instanton system is preserved in the process of reaching the line of marginal stability. This ensures that the number of charged fermion zero modes is preserved in the process, and that the insertions of 4d fields are suitably generated. We refrain from delving into a more detailed discussion in concrete examples, and prefer to move on. \subsection{$O(1)$ splitting as $U(1)$ instanton} \label{onetoone} In this section we consider another possible splitting of an $O(1)$ instanton across a line of marginal stability, in which it splits as a $U(1)$ instanton and its image. In fact this kind of process was considered in \cite{Blumenhagen:2007bn}, with the conclusion that such instantons cannot contribute to the superpotential due to the presence of additional zero modes.
In fact our explicit example evades this no-go result: there exists an F-term interaction in the world-volume of the instanton (not considered in \cite{Blumenhagen:2007bn}) which lifts the additional fermion zero modes. The geometry in this configuration is similar to, but slightly different from, those introduced in Section \ref{geometries}. It is therefore better to introduce the configuration in terms of a type IIA Hanany-Witten setup. Consider an NS-brane along 012345, and two NS-branes along 0123, rotated by angles $\theta$ and $-\theta$ in the planes 45 and 89 (so we denote them NS$_\theta$ and NS$_{-\theta}$). One can discuss the relevant part of the geometry by depicting the positions of the different branes in the $z=x^6+ix^7$ plane, as shown in Figure \ref{unfoldgeom}. In our configuration, the NS-brane is located at $z=i\epsilon$, while the NS$_{\pm \theta}$-branes are located at $z=\pm t$, with $t,\epsilon\in\bf R$. We consider instantons arising from euclidean D0-branes suspended between the different NS5-branes, thus corresponding to segments between the different NS5-brane locations in the $z$-plane. BPS instantons correspond to horizontal segments. \begin{figure} \begin{center} \inputfig{unfoldgeom} \caption{\small Configuration of an $O(1)$ instanton splitting as a $U(1)$ instanton (and its orientifold image). Interpreted as a HW setup, the dots $b$, $a_{\pm \theta}$, denote the locations in the 67 plane for an unrotated NS-brane, and NS5-branes rotated by angles $\pm \theta$ in the 4589 directions. Interpreted as D1-brane instantons in a threefold geometry, the dots $a_{\pm \theta}$, $b$ denote a projection of the degeneration loci of a $\bf C^*$ fiber.
D1-brane instantons wrap 2-cycles obtained by fibering the latter over segments defined by such degenerations, and are supersymmetric when the segments lie horizontally.} \label{unfoldgeom} \end{center} \end{figure} The above kind of configuration can be T-dualized using \cite{Uranga:1998vf} into a type IIB geometry similar to those in Section \ref{geometries} (and similar to those studied in \cite{Gubser:1998ia,Lopez:1998zf}). As a complex variety, the geometry can be described as an unfolding of an $A_2$ singularity \begin{eqnarray} xy= u (u+\alpha v)(u-\alpha v) \label{unfolding} \end{eqnarray} with $\alpha=\tan\theta$. It can be regarded as a $\bf C^*$ fibration over the $(u,v)$ space, degenerating at the loci $u=0$, $u=\pm \alpha v$. The directions $u$, $v$ are closely related to the directions $45$ and $89$ in the HW setup, and the degeneration loci correspond to the NS5-brane volumes in those directions. The geometry contains non-trivial 2-cycles, obtained by fibering the circle in the $\bf C^*$ over a segment joining two degeneration loci. There are D-brane instantons arising from D1-branes wrapping these 2-cycles. The description of the geometry as a complex manifold provided in (\ref{unfolding}) does not encode the parameters $\epsilon, t$, which are K\"ahler parameters and control the lines of marginal stability of our instantons. We will rather use pictures like Figure \ref{unfoldgeom}, which can be regarded as a depiction of the blow-up structure of the above geometry, or the representation of the $67$ plane in the HW configuration. Since the spectrum of instanton zero modes and their interactions can be obtained from the latter using standard rules, we stick to this language, although it is straightforward to translate into the geometric one. Let us introduce an O6-plane along 0123789 in the HW setup, which thus corresponds to a fixed line along the vertical axis on the $z$-plane.
The O6-plane intersects the NS-brane (in an intersection preserving 8 supercharges) mapping it to itself, while it exchanges the NS$_{\pm \theta}$-branes. We choose the O6-plane charge such that it leads to $O(1)$ Chan-Paton symmetry on instantons along horizontal segments crossing the O6-plane. Consider the configuration for non-zero $\epsilon$, see Figure \ref{unfoldgeom}a. The only BPS instanton is given by a D0-brane stretched between the NS$_{-\theta}$ and NS$_\theta$ branes. It has $O(1)$ Chan-Paton symmetry and just 2 fermion zero modes (for non-zero $\theta$), and thus leads to a non-perturbative superpotential contribution $W \simeq e^{-T}$, with $T$ the chiral multiplet whose real part is $t$. Consider the configuration for $\epsilon=0$, where the previous instanton reaches a line of marginal stability and splits into a $U(1)$ instanton $1$ (a D0-brane between the NS$_{-\theta}$ and the NS branes) and its orientifold image $1'$ (between the NS and NS$_\theta$ branes). At the Gaussian level, the instanton has many additional zero modes beyond the required set of two fermion zero modes, hence naively it would not contribute to the superpotential. However, it is easy to go through the analysis of zero modes and their interactions, and realize that the additional fermion zero modes are lifted. The argument is very similar to that in the previous Section, so our discussion is sketchy. In the $11$ sector of open strings with both endpoints on the instanton, there are four translational Goldstone bosonic zero modes $x^\mu$, and four fermionic zero modes, two of them $\theta^\alpha$ associated to the Goldstinos of the 4d ${\cal N}=1$ supersymmetry, and two ${\tilde \theta}_{\dot \alpha}$ associated to the accidental enhancement to ${\cal N}=2$.
In the $11'$ sector of open strings between the instanton and its image, we have a hypermultiplet (given by the pair of chiral fields $\Phi$ and $\Phi'$ in ${\cal N}=1$ language) of zero modes $\varphi$, $\varphi'$, $\chi_\alpha$, $\chi'_\alpha$ with $U(1)$ charges $\pm 2$ for unprimed/primed fields. The couplings between the $11$ and $11'$ fields are \begin{eqnarray} S\, =\, {\tilde \theta} \, (\, \varphi {\overline \chi} - {\overline \chi}' \varphi'\, ) \end{eqnarray} From the HW construction it is possible to derive that there are interactions among fields in the $11'$ sector. Given the amount of susy, it is possible to describe them by a superpotential $W\simeq (\Phi \Phi')^2$. Namely, there are scalar potential terms (involving also a D-term contribution) \begin{eqnarray} V_D & \simeq & (\, |\varphi|^2\, -\, |\varphi'|^2\, )^2 \nonumber \\ V_F & \simeq & |\varphi\varphi'\varphi|^2 \, + \, |\varphi'\varphi\varphi'|^2 \end{eqnarray} and most importantly couplings to the $11'$ fermions \begin{eqnarray} S_{\varphi\chi} \, \simeq\, \chi\chi\varphi'\varphi' \, +\, 2\, \chi \varphi\chi'\varphi' \, +\, \varphi\varphi\chi'\chi' \end{eqnarray} As discussed in previous examples, all additional zero modes can be saturated by pulling down interaction terms from the instanton effective action. The only left-over fermion zero modes are the two Goldstinos $\theta^\alpha$, hence the $U(1)$ instanton contributes to the superpotential. Note that in contrast with the previous examples, the lifting of zero modes of the $U(1)$ instanton is purely perturbative (although it is reminiscent of the non-perturbative lifting in the previous Section when regarded in the covering space). Since the volume of the instanton and its image add up to the volume of the original $O(1)$ instanton, the complete superpotential is continuous.
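Schematically, and in the same sketchy notation as in (\ref{twoexpo}), the saturation pattern can be summarized as \begin{eqnarray} \int d^4x\, d^2\theta\, d^2{\tilde \theta}\, d^4\chi\, d^4\varphi\, \exp\, [\, {\tilde \theta} \, (\, \varphi {\overline \chi} - {\overline \chi}' \varphi'\, )\, +\, S_{\varphi\chi} \, +\, V(\varphi)\, -\, T\,]\, \simeq\, \int d^4x\, d^2\theta\, e^{-T} \end{eqnarray} where the integrals over ${\tilde \theta}$ and over $\chi$, $\chi'$ are saturated by pulling down the interaction terms, the bosonic integral over $\varphi$, $\varphi'$ contributes a non-zero prefactor, and the measure is indicated only sketchily.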
\section{Gauge D-brane instantons} \label{gaugeinst} Let us proceed to systems which are more familiar, namely configurations where the non-perturbative superpotential can be regarded as generated by gauge theory instantons. The idea is to consider a simple example of a gauge sector with a non-perturbative superpotential, engineered via D-branes, and to consider its fate as one crosses a line of marginal stability. The general lesson of this example is the following. In this kind of setup, the crossing of lines of marginal stability in moduli space is basically described in terms of a Higgsing/unHiggsing in the field theory. Also, the dependence of the superpotential on the relevant moduli is encoded in the dynamical scales of the gauge factors associated to the 4d spacetime filling D-branes (since they control the gauge couplings). Thus the statement about the continuity of the superpotential across lines of marginal stability corresponds to the familiar matching of dynamical scales of a gauge theory in a Higgsing/unHiggsing process, at energies above and below the relevant vevs. Given this interpretation and the construction and discussion below, it is easy to find other examples of similar behaviour. \subsection{An example of $N_f<N_c$ SQCD non-perturbative superpotential} \label{sqcd} \subsubsection{Configuration, marginal stability, and the spacetime picture} Let us describe a system of D6-branes crossing a line of marginal stability in a geometry of the kind introduced in Section \ref{geometries}. Consider the geometry in Figure~\ref{crossgauge}, having two $a$-type degenerations and one $b$-type degeneration, ordered as $b,a_1,a_2$ from left to right along the real axis. We consider a set of $N$ D6-branes wrapped on the 3-cycle $C_1=[b,a_1]$ and $N$ D6-branes on $C_2=[a_1,a_2]$. This configuration is supersymmetric as long as the degeneration $a_1$ is aligned with the other two.
Moving $a_1$ away from the horizontal axis forces the D6-branes on $C_1$ and $C_2$ to misalign, and their tension increases. The system of branes can relax by forming a bound state, described by $N$ D6-branes on the 3-cycle $C=[b,a_2]$. Namely, the locus in moduli space where $a_1$ aligns with $b,a_2$ corresponds to a line of marginal stability for the D6-branes on $C$, which become unstable against decay into D6-branes on $C_1$, $C_2$. \begin{figure} \begin{center} \inputfig{crossgauge} \caption{\small a) Marginally stable configuration. b) Moving $a_1$ away from the horizontal axis renders the configuration nonsupersymmetric, so it can c) decay to a supersymmetric configuration by brane recombination.} \label{crossgauge} \end{center} \end{figure} The above phenomenon of brane dynamics has a counterpart in classical gauge field theory. The system of $N$ D6-branes on $C_1$, $C_2$ leads to a $U(N)_1\times U(N)_2$ ${\cal N}=1$ supersymmetric gauge theory (see later for a discussion of the $U(1)$ factors), with chiral multiplets $Q$, ${\tilde Q}$ in the $(\fund_1,\overline{\fund}_2)$, $(\overline{\fund}_1,\fund_2)$, and $\Phi$ in the adjoint of $SU(N)_2$. There is a classical superpotential \begin{eqnarray} W\, =\, {\rm tr \,} \, Q\Phi{\tilde Q} \end{eqnarray} The parameters of the gauge theory are the gauge couplings $g_i$, and theta angles $\theta_i$, which are classically related to $C_i$ by \begin{eqnarray} T_i\, =\, \frac{1}{g_i^{\, 2}} +i\theta_i\, =\,\frac{1}{g_s} \int_{C_i}\, \Omega \, +\, i \int_{C_i}\, A_3 \end{eqnarray} where $A_3$ is the type IIA RR 3-form. In the quantum theory, these parameters are traded for dynamical scales \begin{eqnarray} \Lambda_1\, =\, \exp\left(-\frac{T_1}{2N}\right) \quad ; \quad \Lambda_2\, =\, \exp\left(-\frac{T_2}{N}\right) \end{eqnarray} with the exponents fixed by the one-loop beta function coefficients, $b_1=3N-N=2N$ for $SU(N)_1$ with $N$ flavours and $b_2=3N-N-N=N$ for $SU(N)_2$ with $N$ flavours plus an adjoint chiral multiplet, so that $\Lambda_i^{\, b_i}=e^{-T_i}$. The change in the complex structure associated to moving $a_1$ off the horizontal axis corresponds to turning on a Fayet-Iliopoulos parameter $\xi$ for the difference of the two $U(1)$'s.
This triggers a vev for the bi-fundamental flavours $Q$ or ${\tilde Q}$, depending on the sign of $\xi$, and breaks the gauge group to the diagonal $U(N)$. Assuming that $Q$ acquires the vev, the fields $\Phi$, ${\tilde Q}$ become massive by the superpotential and disappear. We are left with a $U(N)$ pure SYM gauge theory, with complex gauge coupling \begin{eqnarray} T\, =\, T_1+T_2\, =\, \frac{1}{g_s}\int_C\, \Omega \, +\, i\int_C\, A_3 \end{eqnarray} This agrees with the picture of the D6-branes recombining into D6-branes wrapped on $C$. It is worth noting that the $U(1)$ generators have $BF$ St\"uckelberg couplings with closed string moduli, which make the gauge bosons massive, so the $U(1)$ factors are really absent from the low energy effective theory. This modifies the above discussion very mildly. Namely, instead of turning on a FI parameter, the above transition can be regarded as moving along the baryonic branch of the $SU(N)_1\times SU(N)_2$ theory, to yield a pure $SU(N)$ SYM theory. \medskip We would now like to consider the non-perturbative superpotential in these two systems, and show that it is continuous across the line of marginal stability. Interestingly, the non-perturbative effects have a microscopic description in terms of D2-brane instantons on the relevant 3-cycles, along the lines described in Section \ref{introinst}. In the discussion we stick to the description in gauge theory language. Also for convenience we use the description where the $U(1)$'s are not included in the low-energy dynamics. Consider first the system of $N$ D6-branes on $C$. Since it corresponds to a pure SYM theory, it confines and develops a gaugino condensate. There is a non-perturbative superpotential \begin{eqnarray} W\, =\, \Lambda^3\, =\, (\, e^{-T/3N}\,)^3 \label{supo1} \end{eqnarray} Consider now the situation when the instanton reaches the line of marginal stability.
We consider the system of $N$ D6-branes on $C_1$ and $N$ D6-branes on $C_2$, so we essentially have to study the dynamics of the $SU(N)_1\times SU(N)_2$ gauge theory. Let us focus on the regime where $\Lambda_1\gg \Lambda_2$, so the dynamics of $SU(N)_1$ dominates. In this case the $SU(N)_1$ group confines first. It has $N_f = N_c$, so the instanton on $C_1$ is a Beasley-Witten instanton, which induces a quantum deformation on the moduli space. Instead of using the intrinsic picture in moduli space and inducing an operator of the form (\ref{bwop}), we prefer to work as usual in field theory analysis, by imposing the deformation by a quantum modified constraint. We describe the system in terms of mesons $M$ and baryons $B$, $\tilde B$, with superpotential: \[ W = \mu \Phi M + \mu^{-2N+2}\, X\, ( \det M - B \tilde B - \Lambda_1^{2N}) \] where we have introduced the scale $\mu$ to keep the dimension of the operator in the superpotential invariant. This scale will be of the order of $\Lambda_1$, so we use it in what follows. The F-term for $\Phi$ enforces $M=0$, and vice versa. The fields $\Phi$ and $M$ are massive, so we can integrate them out. We are left with a pure $SU(N)_2$ SYM theory, with dynamical scale $\Lambda_f$, to be determined later. In addition we have the singlets $X$, $B$, ${\tilde B}$, with superpotential \[ W \simeq X\,(\,B\tilde B + \Lambda_1^{2N}\, ) \] The theory has a one-complex dimensional baryonic moduli space, but these singlets do not modify the theory otherwise. The dynamical scale $\Lambda_f$ is determined by the matching, in analogy with the discussion in (\ref{supogauge}), as \begin{eqnarray} \Lambda_f^{3N}=\Lambda_2^{N}\Lambda_1^{2N} \end{eqnarray} and is in fact the same as the $\Lambda$ introduced above.
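As a cross-check, using the relations $\Lambda_1^{\, 2N}=e^{-T_1}$ and $\Lambda_2^{\, N}=e^{-T_2}$ implied by the definitions of the dynamical scales above, the matching condition gives \begin{eqnarray} \Lambda_f^{\, 3N}\, =\, \Lambda_2^{\, N}\, \Lambda_1^{\, 2N}\, =\, e^{-T_2}\, e^{-T_1}\, =\, e^{-T} \end{eqnarray} namely $\Lambda_f=e^{-T/3N}$, the dynamical scale of the pure $SU(N)$ SYM theory on the recombined branes on $C$.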
In this left-over $SU(N)_2$ pure SYM theory, the effect of the (fractional) instanton on $C_2$ is simply to develop a gaugino condensate non-perturbative superpotential \begin{eqnarray} W\, =\, \Lambda_f^{\, 3} \, =\, (\, e^{-T/3N}\, )^3 \end{eqnarray} in agreement with (\ref{supo1}). This example provides a non-trivial and simple realization of the continuity of superpotentials across lines of marginal stability. The instanton wrapping $C$ reaches the line of marginal stability, at which it splits into two BPS instantons, wrapping $C_1$ and $C_2$. The instanton on $C_1$ is of Beasley-Witten type and deforms the moduli space. The instanton on $C_2$, once the effect of the instanton on $C_1$ is taken into account, induces a non-perturbative superpotential. The total effect neatly adds up to the effect of the single instanton on $C$ before crossing the line of marginal stability. For completeness, let us mention that the discussion with $U(1)$'s in the effective action is similar. There are no baryonic operators, so there are no fields left out after integrating out $M$, $\Phi$. The one-dimensional moduli space is realized in this view in the closed sector, as the FI term for the relative $U(1)$ corresponding to the position of $a_1$ off the horizontal axis. \subsection{Microscopic interpretation} \label{microgauge} In this Section we discuss the microscopic interpretation of the continuity of the non-perturbative superpotential of the above configuration in terms of D-brane instanton physics. \subsubsection{The 2-instanton process} In analogy with the discussion for non-gauge instantons in Section \ref{thetwoinst}, and from the above discussion, it is clear that the superpotential contribution at the line of marginal stability arises from a two-instanton process, involving the instantons $C_1$ and $C_2$. In fact, it is possible to compute the set of zero modes for the two-instanton system, and their interactions.
We skip the detailed discussion and just sketch the result. The contributions to the superpotential localize on configurations of instantons coincident in 4d. In addition the 3-cycle $C_1$ is non-rigid, and there is a bosonic zero mode $\phi$ parametrizing a branch where the instanton on $C_1$ slides away from the D6-branes on $C_1$. Along this branch the configuration has additional zero modes $\chi$, ${\overline \chi}$ (the partners of $\phi$), which are not saturated. Hence the contributions to the superpotential localize at $\phi=0$. At this point one can easily check that all fermion zero modes except for the two overall Goldstinos $\theta_1+\theta_2$ have non-trivial interactions, which can be pulled down to saturate the corresponding integrals. \begin{figure} \begin{center} \inputfig{twoinstantongauge} \caption{\small Schematic picture of the multi-instanton configuration discussed in the text.} \label{twoinstantongauge} \end{center} \end{figure} The whole process can be described in spacetime in terms of the diagram in Figure \ref{twoinstantongauge}. The instanton $C_1$ has six unsaturated fermion zero modes, since it is a Beasley-Witten instanton with $N_f=N_c$ (thus leading to two unsaturated fermion zero modes beyond the two ${\cal N}=1$ Goldstinos) and two additional fermion zero modes $\chi$, ${\overline \chi}$ from being on a non-rigid cycle. The instanton $C_2$ has four unsaturated fermion zero modes, since it is an $N_f=N_c$ Beasley-Witten instanton. In the two-instanton process, one can generate interactions between the zero modes of the two instantons via the bosonic zero modes charged under both, which allow one to contract four fermion zero modes, leading to an overall process with only two fermion zero modes.
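Schematically, once these contractions are performed, the two-instanton amplitude collapses to \begin{eqnarray} \int d^4x\, d^2\theta\, e^{-T_1}\, e^{-T_2}\, =\, \int d^4x\, d^2\theta\, e^{-T} \end{eqnarray} with $\theta=\theta_1+\theta_2$ the overall Goldstinos, in analogy with (\ref{twoexpo}), so the exponential suppressions of the two instantons combine into that of the single instanton on $C$.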
\subsubsection{Non-perturbative lifting of fermion zero modes} \label{npliftgauge} Along the lines of the discussion in Section \ref{nplift}, we would like to develop the additional viewpoint of the process as a lifting of fermion zero modes of the instanton $C_1$ by a non-perturbative effect induced by the instanton $C_2$. In fact, for gauge instantons the mechanism can be posed in a much sharper setup. Consider a gauge instanton $A$, with field configuration ${\cal A}(x^\mu;\varphi,\psi)$, as a function of the sets of bosonic and fermionic zero modes, $\varphi$, $\psi$. Here ${\cal A}$ denotes the set of all 4d fields involved in the configuration. The classical effective action for the zero modes $S_{\rm inst}(\varphi,\psi)$ can be obtained by evaluating the 4d action $S_{4d}[{\cal A}]=\int d^4 x \, {\cal L} [{\cal A}]$ on the instanton field configuration, namely \begin{eqnarray} S_{\rm inst}(\varphi,\psi)\, =\, \int d^4x\, {\cal L} [{\cal A} (x^\mu; \varphi,\psi)] \end{eqnarray} From this point of view, any additional term in the 4d effective action $\delta S_{4d}[{\cal A}]$ induces a corresponding term in the instanton effective action $\delta S_{\rm inst}(\varphi,\psi)$, \begin{eqnarray} \delta S_{\rm inst} \, =\, \int d^4 x\, \delta {\cal L}[{\cal A} (x^\mu;\varphi,\psi)] \end{eqnarray} When the additional term in the 4d effective action $\delta S_{4d}$ is induced by another instanton $B$, the term $\delta S_{\rm inst}(\varphi,\psi)$ can in a very precise sense be regarded as a non-perturbative interaction term for the zero modes of $A$ induced by the instanton $B$. In particular, fermion zero modes $\psi$ of $A$ appearing in such interaction terms are non-perturbatively lifted by the instanton $B$. Notice that the $g_s$ dependence arranges in such a way that the total 4d effect is suppressed by the exponential factors of both instantons $A$ and $B$, as in (\ref{twoexpo}).
On general grounds, we may expect that an instanton $B$ with $k$ fermion zero modes induces a 4d F-term leading to contributions to $S_{4d}$ with $k$ 4d fermion insertions. This will in general induce an interaction term in $S_{\rm inst}(\varphi,\psi)$ for the instanton $A$, lifting $k$ fermion zero modes. The spacetime picture of the process is a two-instanton process where $k$ fermion zero modes of the two instantons are contracted against each other. In particular, in our gauge theory example, the instanton $C_2$ induces a 4d effective operator corresponding to a 4-fermion F-term, which then induces a 4-fermion interaction term on the effective action for the zero modes of instanton $C_1$. Since the instanton $C_1$ has six fermion zero modes, the lifting of four leaves only the two Goldstinos, so that there is a non-trivial contribution to the superpotential. To conclude, we would like to add yet another related viewpoint on the non-perturbative lifting of zero modes. The idea is based on a generalization of the analysis in section 4.3 of \cite{Beasley:2004ys}, which discussed the effects of a (perturbative) superpotential mass term on instantons with additional fermion zero modes. Consider an instanton $A$ with $k=2n$ fermion zero modes beyond the two Goldstinos, leading to a 4d higher F-term (\ref{bwop}) \begin{eqnarray} \int \,d^4x\, d^2\theta\, {\cal O}_{w}\, =\, \int \,d^4x\, d^2\theta\, \, w_{\overline i_1\overline j_1\ldots \overline i_n\overline j_n} (\Phi)\, {\cal O}^{\overline i_1\overline j_1} \ldots {\cal O}^{\overline i_n\overline j_n} \end{eqnarray} with \begin{eqnarray} {\cal O}^{\overline i\overline j}\, =\, {\overline D} {\overline \Phi}^{\overline i}\, {\overline D} {\overline \Phi}^{\overline j} \end{eqnarray} The operator ${\cal O}_w$ is chiral (despite its appearance).
In the presence of an additional superpotential $W(\Phi)$, the supersymmetry algebra is modified (since the fermion variations change, $\delta\psi=F=-{\overline {\partial W/\partial \Phi}}$) and ${\cal O}_w$ is no longer chiral. Still, since the instanton $A$ remains BPS, it should induce an F-term. Indeed, in \cite{Beasley:2004ys} it was argued that (for superpotential mass terms) there is a suitable deformation ${\tilde {\cal O}}_w$ of ${\cal O}_w$ which is chiral in the presence of the superpotential. The instanton amplitude is now given by \begin{eqnarray} \int \,d^4x\, d^2\theta\, {\tilde {\cal O}}_{w}\, =\, \int \,d^4x\, d^2\theta\, \, w_{\overline i_1\overline j_1\ldots \overline i_n\overline j_n} (\Phi)\, {\tilde {\cal O}}^{\overline i_1\overline j_1} \ldots {\tilde {\cal O}}^{\overline i_n\overline j_n} \end{eqnarray} where, generalizing the result in \cite{Beasley:2004ys}, ${\tilde {\cal O}}^{\overline i\overline j}$ has schematically the structure \begin{eqnarray} {\tilde {\cal O}}^{\overline i\overline j}\, =\, {\overline D} {\overline \Phi}^{\overline i}\, {\overline D} {\overline \Phi}^{\overline j}\, +\, W^{\overline i\overline j} \label{supoinst} \end{eqnarray} Note that the total effect is that the instanton generates effective vertices not only with $2n$ fermionic external legs, but also with $2n-2p$ fermionic external legs (with $p$ taking several possible values, depending on the detailed structure of $W$). The 4d interpretation is that $2p$ fermionic legs have been soaked up by $p$ insertions of the superpotential interaction. In fact, one is led to suspect a further generalization of the above argument. Consider the instanton $A$ in the presence, not of a 4d superpotential term, but of a higher F-term (which could be of perturbative or non-perturbative origin).
Consider the latter to be of the form \begin{eqnarray} \delta S_{4d} \, =\, \int \, d^4x\, d^2\theta\, W_{\overline i_1\overline j_1\ldots \overline i_m\overline j_m} (\Phi)\, {\overline D} {\overline \Phi}^{\overline i_1}\, {\overline D} {\overline \Phi}^{\overline j_1} \ldots {\overline D} {\overline \Phi}^{\overline i_m}\, {\overline D} {\overline \Phi}^{\overline j_m} \label{fterm} \end{eqnarray} Namely, it leads to 4d interactions with $2m$ 4d fermions, and we assume $m<n$. Although we do not have a precise argument based on the supersymmetry algebra, we expect the amplitude of the instanton $A$ to be modified in the presence of such a term in the 4d action. Let us define ${\tilde n}=n \text{ mod } m$ and $r=(n-{\tilde n})/m$, hence $n=rm+{\tilde n}$. The instanton amplitude is expected to take the schematic form \begin{eqnarray} \int \,d^4x\, d^2\theta\, w_{ \{ \overline i_{1}\overline j_{1}\} \ldots \{ \overline i_{r}\overline j_{r}\} \, \overline k_1\overline p_1\ldots \overline k_{\tilde n}\overline p_{\tilde n}}\, {\cal O}^{ \{ \overline i_{1}\overline j_{1}\} } \ldots {\cal O}^{\{ \overline i_{r}\overline j_{r}\} }\, {\overline D} {\overline \Phi}^{\overline k_1}\, {\overline D} {\overline \Phi}^{\overline p_1} \ldots {\overline D} {\overline \Phi}^{\overline k_{\tilde n}}\, {\overline D} {\overline \Phi}^{\overline p_{\tilde n}}\,\nonumber \end{eqnarray} where $\{ \overline i_q\overline j_q\}$ denotes an $m$-plet of indices $\overline i_{q1},\overline j_{q1}\ldots \overline i_{qm},\overline j_{qm}$, and \begin{eqnarray} {\cal O}^{ \{ \overline i\overline j\} }\, =\, {\cal O}^{\overline i_{1} \overline j_{1}\ldots \overline i_{m} \overline j_{m}} \, =\, {\overline D} {\overline \Phi}^{\overline i_1}\, {\overline D} {\overline \Phi}^{\overline j_1} \ldots {\overline D} {\overline \Phi}^{\overline i_m}\, {\overline D} {\overline \Phi}^{\overline j_m} \, +\, W^{\overline i_1\overline j_1\ldots \overline i_m\overline j_m} \label{fterminst} \end{eqnarray} The interpretation is that in the presence of the 4d F-term (\ref{fterm}), the instanton with $2n$ fermion zero modes can
generate effective vertices with $2n-2m$ external fermionic legs, by having sets of $2m$ fermionic legs soaked up by the F-term (\ref{fterm}). The above discussion carries over to the situation where the modification to the 4d action is induced by a second instanton $B$ with $2m$ fermion zero modes (which could be a gauge instanton or a non-gauge D-brane instanton). In the spacetime picture, we would have a multi-instanton process involving $A$ and $B$, in which some of the fermionic external legs of the instanton $A$ are soaked up by the 4d effective interaction induced by $B$. A simple example would be to consider the instanton $B$ to have two fermion zero modes, so it generates a superpotential, thus fitting into the situation leading to (\ref{supoinst}). In fact, a particular case fitting within the analysis in \cite{Beasley:2004ys} can be obtained by considering the instanton $B$ to be a non-gauge D-brane instanton inducing a superpotential mass term in the 4d action. Explicit examples of this have been considered e.g. in \cite{Bianchi:2007fx,Argurio:2007qk,Ibanez:2007tu}. Our example of gauge theory instantons above corresponds to a more general situation of the kind (\ref{fterminst}), with the instantons $A$, $B$ given by the instantons $C_1$, $C_2$ (and $n=3$, $m=2$). As a last remark, we expect processes with non-gauge instantons to admit a similar interpretation. Thus the contribution to the superpotential arising from the two-instanton process involving the $U(1)$ and the $O(1)$ instantons can be regarded as the 4d effective term induced by the $U(1)$ instanton in the presence of the additional 4d interaction induced by the $O(1)$ instanton. \subsection{Adding semi-infinite D-branes} \label{addingflav} It is interesting to consider some simple modifications of the above discussion in the presence of additional semi-infinite D6-branes sticking out of the $\bf C^*$ degenerations.
From the field theory viewpoint they correspond to the addition of extra flavours for some of the gauge factors. From the viewpoint of the instantons, they lead to additional fermion zero modes. In this section we consider a few possibilities. In the above situation we have focused on a case where the non-perturbative dynamics reduces to that of pure SYM. However, it is straightforward to modify the setup to SQCD with $N_f$ flavors. It suffices to introduce a stack of $N_f$ D6-branes wrapping the non-compact 3-cycle obtained from a horizontal semi-infinite line starting from the degeneration $a_2$ (this can be regarded as a limit of infinite 3-cycle volume of a geometry with a second $b$-type degeneration, located on the far right of the figure). The above argument goes through, and implies the continuity of the non-perturbative superpotential across the line of marginal stability. Notice that in the particular case of $N_f=N-1$ the instantons under discussion are familiar gauge theory instantons. Another straightforward modification is to add $K$ D6-branes stretching from the $a_1$ degeneration horizontally to left infinity. Note that for the configuration in Figure \ref{crossgauge}a, these D6-branes hit the $b$ degeneration, so the configuration can be regarded as $K$ D6-branes stretching along $(-\infty,b]$, $N+K$ on $[b,a_1]$ and $N$ on $[a_1,a_2]$. For the configuration in Figure \ref{crossgauge}c, we have $N$ D6-branes on $[b,a_2]$ and a disconnected set of $K$ D6-branes from left infinity to $a_1$. It is easy to carry out an analysis similar to the above to derive the continuity of the superpotential. In the initial configuration, the gauge factor $SU(N+K)$ has $N_f=N_c$ and thus a Beasley-Witten instanton deforming its moduli space and forcing the gauge factor onto the baryonic branch.
The adjoint of the $SU(N)$ factor pairs up with some of the mesons and becomes massive, so the left-over pure SYM theory develops a gaugino condensation superpotential. One recovers the same result from the instanton contribution in the final configuration of Figure \ref{crossgauge}c (upon matching of scales along the lines in Section \ref{introinst}). \subsection{Gauge theory instantons and Seiberg duality} \label{sec:seiberg-duality} In this section we elaborate on an interesting point. It is a familiar fact that the realization of Seiberg duality in terms of the D-brane construction of gauge theories corresponds to a motion in moduli space (in which D-branes typically break up and recombine) \cite{Elitzur:1997fh,Ooguri:1997ih,Beasley:2001zp,Feng:2001bn,Cachazo:2001sg} (see also \cite{Ito:1999xn,Berenstein:2002fi,Herzog:2004qw} for other related approaches). Therefore these constructions provide a large class of examples of motion across lines of marginal stability in which the non-perturbative superpotential is continuous. A comment is in order here. From field theory experience we know that Seiberg duality involves a non-trivial change of variables in the 4d chiral multiplets. We also know that tree-level superpotentials are crucial in matching properties of two Seiberg-dual theories. Both properties are related to the following fact. Seiberg dualities in the D-brane realization of field theories can be described as a motion between two points $P$ and $Q$ in moduli space, at each of which we have D-branes wrapped on cycles, whose sizes control the gauge couplings and thus the strength of instanton effects. This motion typically involves a region in moduli space larger than the radius of convergence of the instanton expansion at either point.
In other words, the operation can also be described as a continuation past infinite coupling, in the sense that it can be obtained by shrinking a cycle $C$ on which 4d space filling branes wrap and growing a cycle $C'$ which is in the opposite homology class $[C']=-[C]$. The point $O$ where the cycle shrinks is strongly coupled from the viewpoint of the original instanton expansion at $P$, but a different weakly coupled description is available at $Q$ (and vice versa). The change of description has several effects, which will be implicitly taken into account in our discussion below: $\bullet$ It relates the strengths of the instantons as $e^{-T}=(e^{-T'})^{-1}$, where $T$, $T'$ control the sizes of $C$, $C'$. This underlies the fact that the matching of scales in the Seiberg duality encodes the continuity of the superpotential as a function of the closed string moduli. $\bullet$ It implies a non-trivial change of variables in the 4d chiral multiplets, hence the comparison of the superpotentials at $P$ and $Q$ requires expressing the open string 4d multiplets in terms of gauge invariant operators. $\bullet$ It can map tree-level and non-perturbative superpotentials to each other. Thus the continuity applies to the full superpotential. The D-brane realization of Seiberg duality for large classes of field theories thus provides a large class of examples of continuity of the non-perturbative superpotential across lines of marginal stability (with the appropriate change of variables for the charged matter fields). We restrict to the description of this phenomenon with simple examples, which are illustrative of this whole class. Notice that it is easy to provide a D-brane realization of the original Seiberg duality \cite{Seiberg:1994pq} using the above geometries following \cite{Ooguri:1997ih}, as we review now, see Figure \ref{iia-emduality}.
\begin{figure} \begin{center} \inputfig{iia-emduality} \caption{\small Realizing Seiberg duality in terms of D-branes: a) The electric configuration. b)~We move $a_1$ up a bit. The original branes are now nonaligned, so they recombine to minimize their tension. c)~Finally moving $a_1$ all the way to the middle position we get the magnetic dual theory.} \label{iia-emduality} \end{center} \end{figure} Consider a geometry with three aligned degenerations ordered as $a_1,b,a_2$, and introduce $N_c$ D6-branes on $[a_1,b]$ and $N_f$ on $[b,a_2]$, with $N_f\geq N_c$, Figure \ref{iia-emduality}a. This describes the electric theory of $SU(N_c)$ SQCD with $N_f$ flavours, with a gauged flavour group. Now move up the degeneration $a_1$. The minimal energy configuration is obtained when $N_c$ D6-branes recombine at $b$, so we have $N_c$ D6-branes on the tilted \footnote{The tilting breaks supersymmetry in the intermediate steps of the argument; there are however simple modifications of the setup which allow one to preserve supersymmetry throughout the process \cite{Ooguri:1997ih}. We skip their discussion since they will not be needed in our examples below.} segment $[a_1,a_2]$ and $N_f-N_c$ on $[b,a_2]$, Figure \ref{iia-emduality}b. Now move $a_1$ to the right and bring it down between $b$ and $a_2$. The $N_f-N_c$ D6-branes on $[b,a_2]$ split, so we are left with $N_f-N_c$ D6-branes on $[b,a_1]$ and $N_f$ on $[a_1,a_2]$. This describes the magnetic theory (again with gauged flavour group). Note that the gauging of the flavour group is just for the purposes of introducing configurations to be used later; a realization of the pure Seiberg duality can be obtained simply by sending the degeneration $a_2$ off to infinity on the right. Clearly the possibility of embedding Seiberg dualities in terms of D-branes provides a huge class of examples of brane systems crossing walls of marginal stability.
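For reference, the brane bookkeeping above reproduces the familiar field theory statement of Seiberg duality, which we quote schematically (with $M\sim Q\tilde Q$ the gauge singlet mesons):
\begin{equation}
SU(N_c)\ {\rm with}\ N_f\ {\rm flavours}\ Q,\, \tilde Q \quad \longleftrightarrow \quad SU(N_f-N_c)\ {\rm with}\ N_f\ {\rm flavours}\ q,\, \tilde q,\ {\rm mesons}\ M,\ W_{mag}= M q\tilde q
\end{equation}
In the configuration of Figure \ref{iia-emduality}c, the $N_f-N_c$ D6-branes on $[b,a_1]$ realize the magnetic gauge group.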
The continuity of the non-perturbative superpotential in these processes is automatically guaranteed by the field theory argument for the matching of scales, as discussed above. We will not delve into a more detailed discussion, and simply discuss some particular examples related to systems in other Sections. Let us focus on some particularly simple examples where the basic splitting processes of the D6-branes are of the kind analyzed in the previous Section. Consider the situation with $N$ D6-branes on $[b,a_2]$ and no D6-branes on $[a_1,b]$. The $a_1$ degeneration has no D6-branes attached, so moving it between the degenerations $b$, $a_2$ is exactly the inverse process of the one in Figure~\ref{crossgauge}, studied in Section~\ref{sqcd}. For future convenience let us consider another example, now involving semi-infinite D6-branes. Consider the initial configuration with degenerations ordered as $a_1$, $b$, $a_2$ and introduce $N$ D6-branes on $(-\infty,a_1)$, no D6-branes on $[a_1,b]$ and $K$ D6-branes on $[b,a_2]$. As one moves the $a_1$ degeneration between $b$, $a_2$, it drags the $K$ semi-infinite D6-branes, which end up split in the final configuration. In the latter we have $K$ D6-branes on $(-\infty,b)$, $N+K$ D6-branes on $(b,a_1)$, and $N$ D6-branes on $(a_1,a_2)$. The splitting process of the semi-infinite D6-branes is exactly as in the last system of Section \ref{addingflav}, where we showed the continuity of the non-perturbative superpotential. \medskip As a final example based on the configuration in the previous paragraph, let us consider a type IIA configuration mirror to D-branes at the conifold, and (one of the steps of) the celebrated Klebanov-Strassler duality cascade \cite{Klebanov:2000hb}. Following \cite{Uranga:1998vf,Dasgupta:1998su}, a system of D-branes at a conifold can be realized in terms of D4-branes suspended (along a circle direction) between two rotated NS-branes. 
Equivalently, one can use an infinite periodic array of rotated NS-branes with suspended D4-branes. This system can be mapped to one of our familiar double $\bf C^*$-fibration geometries by simply introducing a periodic array of degenerations $\ldots,a,b,a,b,\ldots$, with D6-branes on the finite segments, as shown in Figure \ref{iia-ks}. This is equivalent to (but easier to visualize than) a double $\bf C^*$ fibration over a cylinder, with one degeneration of each type. \begin{figure} \begin{center} \inputfig{iia-ks} \caption{\small The periodic configuration dual to the conifold. The dotted vertical line denotes the period. a)~Final step of the cascade. b)~One step up in the cascade. We reach this point by moving all $a$ degenerations one cell to the left.} \label{iia-ks} \end{center} \end{figure} Consider the configuration in Figure \ref{iia-ks}a, with $M$ D6-branes on the intervals of type $[a,b]$, and no D6-branes on those of type $[b,a]$. This describes the theory at the end of the duality cascade, and corresponds to $SU(M)$ SYM, with a non-perturbative superpotential induced by a $1/M$-fractional instanton. Consider now the geometric operation that takes us one step up the cascade. This corresponds to moving the $a$-type degenerations once around the period, so that they come back to their original positions in the periodically identified geometry while moving one period to the left in the covering space we are drawing. We do this in the same way as above: moving up the $a$ singularity a bit, taking it one cell to the left, and finally returning it to its original vertical position. The resulting configuration is shown in Figure \ref{iia-ks}b and contains $M$ D6-branes on the $[b,a]$ intervals and $2M$ D6-branes on the $[a,b]$ intervals. The geometric process, and in particular the splitting of branes, is exactly as that considered two paragraphs above, for $K=N\equiv M$.
The continuity of the superpotential is easily derived, by showing (using the instanton interpretation of the field theory analysis in \cite{Gubser:2004qj}) that the Beasley-Witten instanton of the $SU(2M)$ theory (which has $N_f=N_c$) deforms the moduli space of this theory and forces it into the baryonic branch, while the $1/M$-fractional instanton on the leftover $SU(M)$ theory (with scale suitably computed by matching) generates the superpotential. \section{Exotic instantons becoming gauge instantons} \label{sec:exotic-to-gauge} In the previous Sections we have argued the continuity of the non-perturbative superpotential for gauge and non-gauge D-brane instantons, in several examples. In this Section we would like to consider a slightly more general situation where the nature of the instanton changes in the process of reaching lines of marginal stability. Namely, a non-gauge D-brane instanton ends up as a gauge D-brane instanton after some motion in moduli space. A prototypical situation where this takes place is in duality cascades \cite{Klebanov:2000hb} (see also e.g. \cite{Franco:2004jz,Herzog:2004tr,Franco:2005fd,Brini:2006ej}) of quiver gauge theories, in which one of the nodes of the quiver becomes eventually empty of 4d space filling branes. D-brane instantons which occupied this node change from gauge to non-gauge instantons in the motion in moduli space associated to the cascade. Since we are interested in studying contributions to the superpotential, one would need to consider cascades of orientifolded quiver gauge theories. In fact, this kind of analysis has been carried out in \cite{Aharony:2007pr} in one particular example, focusing on the relevant part of the superpotential for the infrared theory. In Section \ref{sec:cascade} we revisit the system in our language, and recover that the full superpotential is well-behaved in the process.
Our analysis reproduces some pieces dropped in \cite{Aharony:2007pr}, which are irrelevant in the infrared, but are still part of the full superpotential of the theory. Before revisiting the example of the duality cascade, let us consider the simplest case where a non-gauge D-brane instanton becomes a gauge theory effect. \subsection{Dualizing the $O(1)$ instanton} \label{sec:dualizing-O(1)} \begin{figure} \begin{center} \inputfig{gauge-D} \caption{\small Prototypical example of a D-instanton effect being equivalent to a gauge theory effect via Seiberg duality. Figure a) shows the geometry leading to a $USp\times SU$ gauge theory upon wrapping D6-branes on the appropriate 3-cycles. The dotted line denotes the orientifold plane. Figure b) shows the configuration after the motion in moduli space corresponding to Seiberg duality. There are no D6-branes in the 3-cycle $[a_2,b_1]$, but an instanton (dashed line) wrapped on it can contribute to the superpotential. We have indicated the charged fermionic zero modes $\lambda$, $\bar \lambda$ between the D-brane instanton and the gauge D-brane.} \label{fig:gauge-D} \end{center} \end{figure} Let us consider a geometry of the kind in Section \ref{geometries}, with an O6-plane, see Figure \ref{fig:gauge-D}. Let us wrap stacks of D6-branes on the different 3-cycles corresponding to the configuration in Figure \ref{fig:gauge-D}a. The low energy dynamics of this configuration is given by an $SU(N)\times USp(2N-4)$ gauge theory, with quarks $q\in (\fund_{SU},\fund_{USp})$, $\tilde q \in (\overline{\fund}_{SU},\fund_{USp})$ and superpotential \begin{equation} W = q\tilde q q \tilde q \end{equation} Let us focus on the strong dynamics for the $USp$ theory \footnote{There are additional non-perturbative effects from the $SU(N)$ factor, which can also be followed along the transition below, in analogy with our examples above (in fact, they map to 2-instanton effects after the transition).
We skip their discussion in order to emphasize the main point.}. As argued in \cite{Intriligator:1995ne}, when the $USp$ node becomes strongly coupled the theory has an effective description (corresponding to its Seiberg dual) in which the $USp$ group confines completely, and the fundamental degrees of freedom are the mesons: \begin{equation} M_{\Yasymm}=q \cdot q\ ; \qquad M_{\overline{\Yasymm}}=\tilde q \cdot \tilde q\ ; \qquad M_{Adj}=q\cdot \tilde q \end{equation} where we have expressed the mesons in terms of the electric fields \footnote{We have omitted here the meson singlet under the $SU(N)$. In the stringy setup it will get a mass due to a coupling related by the ${\cal N}=2$ susy to the one giving mass to the $U(1)$ gauge boson.}, and the dot denotes contraction of the $USp$ indices, which antisymmetrizes the fields. The subindex denotes the representation of the $SU(N)$ group under which the meson transforms. There is also a superpotential implementing the classical constraint between the mesons, which can be written as \begin{equation} W_0 = \text{Pf} \begin{pmatrix} M_{\Yasymm} & M_{Adj} \\ - M_{Adj} & M_{\overline{\Yasymm}} \end{pmatrix} \end{equation} Adding the original superpotential in terms of the mesons we obtain \begin{equation} W = W_0\, +\, M_{\overline{\Yasymm}} M_{\Yasymm} \end{equation} We can solve the equations of motion for the massive mesons $M_{\overline{\Yasymm}}$, $M_{\Yasymm}$ just by setting them to zero. The resulting superpotential is then given by: \begin{equation} \label{eq:W-gauge-theory} W = \text{Pf} \begin{pmatrix} 0 & M_{Adj} \\ -M_{Adj} & 0 \end{pmatrix} = \det M_{Adj}. \end{equation} We can now perform a brane motion taking the configuration to that in Fig.~\ref{fig:gauge-D}b, where there is no brane stretching on the 3-cycle $[a_2,b_1]$. This result takes into account the brane creation effects due to the presence of the orientifold planes, as discussed in \cite{Ooguri:1997ih}.
Despite the non-trivial change in the brane configuration, the superpotential is continuous. Namely, the above superpotential is still generated, but now via an exotic $O(1)$ instanton on $[a_2,b_1]$ which can contribute \footnote{Recall that the orientifold projection acts oppositely on 4d space filling D6-branes and D2-brane instantons, so an orientifold giving a $USp$ gauge group will give an $O(1)$ D-instanton. This works in the same way as for the perhaps more familiar D5-D9 system.}. The calculation in this case is simple. In Figure \ref{fig:gauge-D}b, the theory on the $SU(N)$ brane is locally ${\cal N}=2$; in particular it has an adjoint, which we identify with the adjoint meson of the gauge analysis (in both cases it parametrizes sliding the D6-branes along the two $b$-type degenerations, and their images along the $a$-type ones). The zero modes $\lambda$, $\bar\lambda$ between the D2-brane instanton and the $SU(N)$ brane couple to this adjoint via a term \begin{equation} S = \ldots + \lambda M_{Adj} \bar \lambda \end{equation} in the instanton action (this has the same origin as the usual coupling between the adjoint and the flavors in ${\cal N}=2$ theories). Integrating over the fermionic zero modes gives us the determinant operator we found in equation (\ref{eq:W-gauge-theory}). We thus recover the same kind of superpotential, with an exponential dependence on the closed string modulus associated to the 3-cycle defined by the degenerations $b_1$ and $a_2$. Thus the result is continuous across the motion in moduli space, in which gauge and non-gauge instantons turn into each other \footnote{\label{noncompact} One may try to discuss a similar configuration without the external degenerations and with semi-infinite branes. In this case the computation in Figure \ref{fig:gauge-D}a gives a non-zero superpotential, while in Figure \ref{fig:gauge-D}b there are no dynamical mesons to help saturate the fermion zero modes of the instanton, hence there is no superpotential.
The mismatch is related to the non-compact D-branes in the configuration (see the comment at the beginning of Section \ref{geometries}). Upon ``compactification'' by adding the external degenerations at a finite distance, one recovers the above full agreement.}. \subsection{A duality cascade example} \label{sec:cascade} Let us proceed to the more complicated case of the duality cascade studied in \cite{Aharony:2007pr}, and show the continuity of the non-perturbative superpotential along a complicated chain of Seiberg dualities. The theory under consideration is given by the quiver in Figure \ref{AK-bottom}, with gauge group at the bottom of the cascade given by $USp(0)\times SU(1)\times SU(N_3)\times\ldots$ with $N_3,\ldots$ arbitrary. The superpotential is given by: \begin{equation} W = \sum_{i=1}^{N_{factors}} (-1)^i X_{i,i+1} X_{i+1,i+2} X_{i+2,i+1} X_{i+1,i}. \label{eq:W-conifold} \end{equation} \begin{figure} \begin{center} \inputfig{AK-bottom} \caption{\small Relevant nodes of the quiver theory for the orbifolded conifold. We have indicated the ranks at the bottom of the cascade. There can be more $SU(N)$ nodes to the right, ending with another $USp(N)$ group. $X_{ij}$ denotes the bifundamental from node $i$ to node $j$.} \label{AK-bottom} \end{center} \end{figure} This theory can be easily realized in string theory by modding out an orbifold of the conifold \cite{Uranga:1998vf} by a suitable orientifold action \cite{Franco:2007ii}. In terms of the geometries in Section \ref{geometries}, we can consider a periodic array of degenerations $a_1,b_1,a_2,b_2,a_3,b_3,a_4,b_4$ and introduce an orientifold quotient $\Omega R(-1)^{F_L}$, with $R$ given by (\ref{orient}). In terms of the geometrical setup, the cascade of Seiberg dualities simply amounts to a motion in moduli space, generalizing that discussed above. In this situation there are also some brane creation effects due to the presence of the orientifold planes, as discussed in \cite{Ooguri:1997ih}.
The configuration one step up in the cascade is given by the same quiver but with different ranks: \begin{equation} USp(2N_2-4) \times SU(N_2) \times SU(N_4-1)\times SU(N_4)\times \ldots \end{equation} In particular all nodes are occupied, so there are no non-gauge instantons at this level. The detailed gauge theory analysis of \cite{Aharony:2007pr} shows that the nonperturbative superpotential of this initial configuration can be described in terms of the fields at the end of the cascade as \begin{equation} W_{np}^{bottom} = X_{23} X_{32} + \det(X_{34} X_{43}) + X_{23} X_{34} X_{43} X_{32} + \ldots \label{aksupo} \end{equation} where we have omitted some quartic terms of the same form as those in (\ref{eq:W-conifold}), which are tree level from the point of view of $g_s$, and thus not particularly interesting here. Let us rather focus on the first two terms, which are nonperturbative. Our aim is to recover them by studying possible D-brane instantons in the final configuration. Note that the determinant term was dropped as irrelevant in \cite{Aharony:2007pr}, since they were just interested in the infrared behaviour of the theory. We are interested in the continuity of the full superpotential, so we should keep it, since it carries an implicit dependence on the closed string modulus controlling the corresponding cycle. For completeness, let us reproduce here a sketch of the gauge theory analysis done in \cite{Aharony:2007pr}: \subsubsection{The gauge analysis} We will assume for simplicity a hierarchy of scales given by \begin{equation} \Lambda_1 \gg \Lambda_3 \gg \ldots \gg \Lambda_2 \gg \Lambda_4 \gg \ldots \end{equation} We will choose the ranks in such a way that the bottom of the cascade is described by the quiver in Figure~\ref{AK-bottom}. This can be achieved by choosing the following ranks: \begin{equation} N_1 = 2N_2 - 4 \quad ; \quad N_3 = N_4-1.
\end{equation} Due to the hierarchy of scales we have chosen, the first node to become strongly coupled is the $USp$ one. This goes just as in Section \ref{sec:dualizing-O(1)}, and we end up with a $USp(0)$ group, some mesons $M_{Adj}$ transforming in the adjoint of $SU(N_2)$, and a nonperturbative superpotential: \begin{equation} W_{np} = \det M_{Adj} \end{equation} Now we have to dualize the $SU(N_3)$ node. We have $N_f=N_2+N_4$, so the dual description is in terms of a $SU(N_f-N_c=N_2+1)$ gauge group, and the dual quarks and mesons. The mesons get a mass due to the quartic terms in the superpotential (\ref{eq:W-conifold}), and they can be integrated out. Also, there is a mass coupling coming from the superpotential between $M_{Adj}$ and the meson $M_2^{(3)}$ of $SU(N_3)$, transforming in the adjoint of $SU(N_2)$. The relevant terms in the superpotential look like: \begin{equation} W = \ldots + \det M_{Adj} + M_{Adj} M_2^{(3)} + M_2^{(3)} q_{23}q_{32} \end{equation} with $q$ the dual quarks. Integrating the mesons out, we end up with a superpotential: \begin{equation} W = \ldots + \det q_{23} q_{32} + q_{23} q_{34} q_{43} q_{32} \label{eq:det-qq} \end{equation} where we have included a piece of the quartic superpotential that will play a role in a moment. Going down in energy, eventually the $SU(N_2)$ node will become strongly coupled. It has $N_f=N_2+1$ flavours, coming just from the third node, so the gauge group confines completely (let us call the resulting node ``$SU(1)$'', as in the stringy picture of the duality there is a single brane remaining). The description is in terms of mesons $M_3^{(2)}$ in the adjoint of the third node and baryons $B_3$, $\tilde B_3$ in the fundamental and antifundamental. There is a superpotential given by: \begin{equation} W = \ldots + B_3 M_3^{(2)} \tilde B_3 - \det M_3^{(2)} \end{equation} When the second node confines, the $q_{23}$, $q_{32}$ quarks get confined into baryons and mesons.
In particular, the superpotential (\ref{eq:det-qq}) can be expressed as: \begin{equation} W = \ldots + B_3 \tilde B_3 + M_3^{(2)} q_{34}q_{43} \label{eq:supo-node2} \end{equation} The last step in the chain of dualities, as far as the first three nodes are concerned, comes from dualizing the fourth node. This is important for our discussion as it gives a mass to $M_3^{(2)}$ via the dual of the last coupling in eq.~(\ref{eq:supo-node2}). After dualizing node 4, we end up with a superpotential: \begin{equation} W = \ldots + B_3 M_3^{(2)} \tilde B_3 - \det M_3^{(2)} + B_3 \tilde B_3 + M_3^{(2)} M_3^{(4)} - M_3^{(4)} X_{34}X_{43} \end{equation} where we have denoted by $X_{34}$, $X_{43}$ the dual quarks of node 4 charged under node 3. We see that the mesons of node 3 get massive, as expected. Integrating them out, one gets: \begin{equation} W = \ldots + B_3 X_{34}X_{43}\tilde B_3 -\det X_{34}X_{43} + B_3 \tilde B_3 \end{equation} which is the same as the one in (\ref{aksupo}) up to a relabeling of the baryons as $X_{23}$, $X_{32}$. \subsubsection{D-instanton effects at the bottom of the cascade} Let us now consider the final configuration, where the 4d space filling D6-brane configuration gives rise to a structure $USp(0)\times SU(1)\times SU(N_3)\times\ldots$ with $N_3,\ldots$ arbitrary. There are two instantons which can contribute to the superpotential. There is a non-gauge D-brane instanton arising from the cycle corresponding to the node of the quiver with no 4d space filling branes. As argued in \cite{Aharony:2007pr}, and as we now review, it leads to the first term in (\ref{aksupo}). The instanton has $O(1)$ symmetry and two neutral fermion zero modes. In addition it has two fermion zero modes $\alpha$ and $\beta$ from the open strings going from the D-instanton to the $SU(1)$ brane. The instanton action contains a coupling of the form $\alpha X_{23} X_{32} \beta$, arising from the same disk instantons that produce the terms in (\ref{eq:W-conifold}).
Integrating over these fermionic zero modes, we get a mass contribution to the superpotential: \begin{align} W = & \ldots + \int\!d\alpha d\beta \ \alpha X_{23} X_{32} \beta \notag\\ = & \ldots + X_{23} X_{32}. \end{align} There is another D-brane instanton which contributes to the non-perturbative superpotential, and which involves a somewhat novel effect. It corresponds to a D-brane instanton on the node with 4d group ``$SU(1)$''. This instanton does not have a proper gauge theory interpretation, but it still shares some common features with gauge instantons. Namely, since it is a $U(1)$ instanton, not mapped to itself by the orientifold action, it has four fermion zero modes. The two Goldstinos of ${\cal N}=1$ supersymmetry remain, while the two accidental ${\cal N}=2$ Goldstinos have non-trivial couplings with the bosonic and fermionic zero modes in the sector of open strings between the instanton and the $SU(1)$-brane. For gauge D-brane instantons, integration over these zero modes imposes the fermionic ADHM constraints \cite{Billo:2002hm}, and reproduces the correct measure on instanton (super)moduli space. In the present setup, we lack an appropriate gauge theory interpretation for the coupling, but its effect of leading to the saturation of the additional fermion zero modes remains. We are therefore left with the two Goldstinos $\theta^{\alpha}$ needed for contributing to the superpotential. We still need to saturate the charged zero modes stretching from the D-instanton to the $SU(N_3)$ group; there are $2N_3$ of these, $N_3$ of each chirality. Let us call them $\lambda_{23}$ and $\lambda_{32}$. They can be saturated via the same kind of quartic coupling $\lambda_{23} Y_{34} Y_{43} \lambda_{32}$ as above.
Expanding the instanton action we get a contribution to the superpotential: \begin{align} W = & \ldots + \int\![d\lambda_{23}][d\lambda_{32}]\ \exp\left(\lambda_{23} Y_{34} Y_{43} \lambda_{32}\right) \notag\\ \simeq & \ldots + \epsilon^{i_1\ldots i_{N_3}}\epsilon^{k_1\ldots k_{N_3}} (Y_{34}Y_{43})_{i_1,k_1} \cdot \ldots \cdot (Y_{34}Y_{43})_{i_{N_3},k_{N_3}}\notag \\ \simeq & \ldots + \det(Y_{34}Y_{43}). \end{align} This correctly reproduces the second term in the nonperturbative superpotential (\ref{aksupo}). We see that there is a beautiful agreement between both computations. Clearly, there are plenty of other systems where the agreement between the superpotential up in the cascade and at the lower steps can be checked. We leave this analysis for the interested reader. \section{Topology changing transitions in F-theory} \label{sec:F-theory} In this section we comment on an intriguing implication of the continuity of the non-perturbative superpotential, when considering the F-theory viewpoint on non-perturbative effects in systems of D7-branes near lines of marginal stability. The process of D7-branes splitting/recombining corresponds to a topology changing transition in F/M-theory, along the lines of \cite{Uranga:2002ag}. Our results therefore imply a non-trivial relation between the non-perturbative superpotentials on topologically different Calabi-Yau fourfolds. We restrict to a simple local analysis of such a D7-brane system, and of its F-theory lift. Consider the type IIB D7-brane realization of the D-brane configuration studied in Section \ref{sqcd}. There are two stacks of D7-branes wrapped on two holomorphic 4-cycles $C_1$ and $C_2$, intersecting over a complex curve $\Sigma$. It is possible to consider concrete examples of Calabi-Yau threefolds and 4-cycles with $h_{2,0}(C_1)=1$, $h_{2,0}(C_2)=0$, which would fit our example, but this is not necessary to illustrate the main point.
In fact, the basic idea is already present in a local model in a neighborhood of a point $P$ in $\Sigma$. Using local complex coordinates $z,w,u$ we have D7-branes on $C_1$, described locally by $w=0$ (and $z,u$ arbitrary) and D7-branes on $C_2$, described locally by $z=0$ (and $w,u$ arbitrary). The curve $\Sigma$ is locally parametrized by $u$. In this local analysis, the direction $u$ is a spectator and we can ignore it in the following (although it can lead to global obstructions in the compact model). Thus we have a system of D7-branes wrapped on the locus $zw=0$. The F-theory lift of this configuration is described by an elliptic fibration over the threefold, with degenerate fibers (due to pinching of a 1-cycle) over the 4-cycle wrapped by the D7-branes. We can also work locally near the pinching of the elliptic fiber, and describe the geometry as a $\bf C^*$ fibration. For $n$, $m$ D7-branes on the two different 4-cycles, the local description of the fourfold is thus given by the spectator direction $u$ times the manifold \begin{eqnarray} xy=z^nw^m \end{eqnarray} Geometries of this kind were introduced in \cite{Uranga:1998vf}. Let us focus on the simplest representative, $n=m=1$, the conifold. In fact, the configuration corresponds to the resolved conifold, with the 2-cycle described as follows. The fiber on top of the intersection locus $z=w=0$ on the base degenerates into two 2-spheres touching at two points. The class of the 2-cycle corresponds to one of these 2-spheres (while the sum is the class of the fiber). For intersecting D7-branes, the F/M-theory lift corresponds to the limit of vanishing 2-cycle (and no background 2-form potential can be turned on). We are thus at the singular conifold limit, in which there are massless states \cite{Strominger:1995cz} (arising from wrapped M2-branes in the M-theory picture). These are nothing but the open string degrees of freedom between the two D7-brane stacks.
Consider now the D7-brane system away from the line of marginal stability. The D7-branes recombine into a single smooth one, wrapped on a 4-cycle which is a deformation of the above, namely $zw=\epsilon$. In the local model, $\epsilon$ corresponds to a modulus, a flat direction for the fields arising at the intersection of the D7-branes. In the global model the flat direction is obstructed by a D-term condition, and the value of $\epsilon$ is fixed by the closed string modulus moving us away from marginal stability. The F-theory lift of this configuration corresponds to the geometry \begin{eqnarray} xy = zw -\epsilon \end{eqnarray} This describes the deformed conifold. This is expected, since the massless charged states have acquired a vev, thus triggering a topology changing transition \cite{Greene:1995hu}. The behaviour of the arbitrary $n=m$ case is similar, using the deformation $xy=(zw-\epsilon)^n$. The local analysis shows that the crossing of a line of marginal stability corresponds to a topology change in the F/M-theory fourfold. The continuity of the non-perturbative superpotential in this case implies a non-trivial matching between topologically different spaces. It would be interesting to have a more microscopic derivation of this result. We conclude by mentioning a few key points towards this aim. The relevant instanton in the IIB picture is a D3-brane wrapped on the 4-cycle which splits at the line of marginal stability. As emphasized, the continuity of the process requires a non-trivial contribution from a 2-instanton process in the intersecting D7-brane configuration. In the F/M-theory lift, the effect arises from an M5-brane instanton wrapping a 6-cycle which splits, and there should exist a non-trivial contribution to the superpotential arising from a 2-instanton process involving the two M5-brane instantons wrapped on the two components of the split 6-cycle.
Thus our analysis of superpotentials from multi-instantons should apply to M5-brane instantons in M-theory on CY fourfolds. Clearly this goes beyond the analysis in \cite{Witten:1996bn}, since one would require a suitable generalization to M5-brane instantons on singular 6-cycles. In this respect, notice that one can rephrase the multi-instanton process as a non-perturbative lifting of zero modes of one M5-brane (A) by the effects of a second M5-brane (B). This is not inconsistent with the arguments in \cite{Witten:1996bn}, which were based on counting fermion zero modes chiral with respect to the $U(1)$ symmetry acting on the normal directions transverse to the M5-brane A. Indeed the second M5-brane B can induce couplings which violate this $U(1)$ (which acts on directions along the volume of the M5-brane B). Thus the non-perturbative lifting mechanism is powerful enough to allow the appearance of contributions from instantons which violate the celebrated arithmetic genus condition in \cite{Witten:1996bn}. Concrete examples of this are provided by suitable F/M-theory versions of the type II models studied in this paper. \section{Conclusions and outlook} \label{sec:conclusions} In this paper we have studied the microscopic mechanisms via which D-brane instanton computations lead to a non-perturbative superpotential which is continuous across moduli space. This understanding has revealed interesting surprises, including the important role of multi-instanton contributions to the superpotential, and its interpretation as non-perturbative lifting of fermion zero modes. These results indicate that D-brane instanton effects are subtler, and more abundant, than hitherto considered. It would be interesting to revisit some of the models considered in the literature and look for additional contributing instanton processes of the kind we have introduced.
The computation of multi-instanton contributions to the superpotential is involved, and requires precise knowledge of the zero mode interactions. It would be interesting to use the continuity of the non-perturbative superpotential to systematize or short-cut such computations. For instance, consider a set of BPS instantons $\{ C_i\}$ at a point $P$ in moduli space. If these instantons can form an irreducible bound state $C$ somewhere else in moduli space (at a point $Q$), and if $C$ has only two fermion zero modes, then in the theory at $P$ there is a non-trivial multi-instanton process involving the instantons $\{ C_i\}$. Similarly, if the instantons form a bound state, but it has more than two fermion zero modes, the corresponding superpotential (at $Q$ and hence at $P$) vanishes. This seemingly innocent statement is in fact very powerful. For instance, it may be feasible to systematically construct instantons contributing to the superpotential at some tractable point in moduli space, and translate the corresponding instanton processes into the corresponding (possibly multi-)instanton processes in other regions. As an example, BPS instantons in type IIB models may be constructed as stable holomorphic gauge bundles in the large volume regime. Such contributing instantons could subsequently be translated into multi-instanton processes at other interesting points like orbifold limits or Gepner points. The non-perturbative superpotential is an interesting quantity which is well-behaved all over moduli space, in a non-trivial way. It would be interesting to gain a deeper understanding of the microphysics underlying this result in general, beyond the concrete examples we have analyzed. We expect further insights from more powerful approaches, for instance using the category of holomorphic D-branes, which is another interesting object with universal properties over moduli space.
This category does not include the information about the stability conditions on D-branes, namely the D-term contributions to the world-volume action. However, our results suggest that the full superpotential is rather insensitive to the stability properties of individual BPS instantons: as soon as an instanton becomes unstable and decays into sub-objects, the latter can reconstruct the same contribution via a multi-instanton process. We expect our results to shed light on the physics of non-perturbative superpotentials in string theory, both from the viewpoint of their formal properties, and for physical applications in concrete examples. \vspace*{1cm} {\bf Acknowledgments}\\ We thank C. Bachas, E. Kiritsis, L. Iba\~nez and D. Tong for useful discussions. A.M.U. thanks M. Gonz\'alez for encouragement and support. I.G.-E. thanks the CERN Theory division for hospitality, and N. Hasegawa for kind support. This work has been supported by the European Commission under RTN European Programs MRTN-CT-2004-503369, MRTN-CT-2004-005105, by the CICYT (Spain) and the Comunidad de Madrid under project HEPHACOS P-ESP-00346. The work of I.G.-E. is financed by the Gobierno Vasco PhD fellowship program.
\section{Introduction} The current sample of exoplanets exhibits several interesting dynamical features (Butler {\it et al.\thinspace} 2006); here we focus on two of these. First, there is a vast range of planetary eccentricities, from zero to $>0.9$, with a median of 0.2 (0.27 for planets past 0.1 AU that have not been affected by tides; Rasio {\it et al.\thinspace} 1996; Jackson {\it et al.\thinspace} 2008). Second, mean motion resonances (MMRs) in multiple planet systems appear to be relatively common (e.g., Marcy {\it et al.\thinspace} 2001). There are 31 currently known multiple planet systems comprising 44 pairs of adjacent planets, of which ten (23\%) show some evidence of resonances (Table 1). However, the evidence for resonances is tentative for all but a few cases. Dynamical instabilities in systems of two or more planets can explain the wide eccentricity distribution of exoplanets (Rasio \& Ford 1996; Weidenschilling \& Marzari 1996; Lin \& Ida 1997). Instabilities arise on timescales that are related to the planets' initial separation (Marzari \& Weidenschilling 2002; Chatterjee {\it et al.\thinspace} 2008), and lead to close encounters between planets and subsequent ejections or mergers. In the aftermath of close encounters, the surviving planets can statistically reproduce the observed eccentricity distribution of exoplanets (Adams \& Laughlin 2003; Juric \& Tremaine 2008; Chatterjee {\it et al.\thinspace} 2008). MMRs are thought to arise primarily from convergent migration in gaseous protoplanetary disks (Snellgrove {\it et al.\thinspace} 2001; Lee \& Peale 2002). Indeed, models show that capture in the 2:1 and 3:2 MMRs is a particularly common occurrence (Thommes 2005; Pierens \& Nelson 2008; Lee {\it et al.\thinspace} 2008). However, turbulence driven by the magneto-rotational instability (MRI) can act to remove planets from resonance. Adams {\it et al.\thinspace} (2008) estimate that only 1\% of resonant systems should remain for a disk lifetime of 1 Myr.
In this paper we attempt to reconcile the planet-planet scattering scenario with the population of resonant exoplanets. We numerically investigate dynamical instabilities in systems of three planets located at $\sim$ 2-10 AU with a variety of mass distributions. We find that MMRs are a common occurrence, arising in 5-10 percent of unstable systems. Our simulations populate a range of MMRs, including the 2:1, 3:2, 3:1, and 4:1, and extending up to much higher order (Table 2). MMRs are populated by scattering at random into stable regions; the density of resonant orbits (i.e., the fraction of phase space that undergoes resonant oscillations) is consistent with the scattered resonant systems. We propose several ways to discriminate between scattering and convergent migration as the source of exoplanet MMRs. \section{Methods} Our simulations started with three planets randomly separated by 4-5 mutual Hill radii ($R_{H,m} = 0.5 (a_1+a_2) ([M_1+M_2]/3M_\star)^{1/3}$, where $a$ is the semimajor axis and $M$ the mass). This spacing was chosen to produce instabilities on timescales of at least the $\sim 10^5$ yr duration of runaway gas accretion\footnote{Indeed, instabilities occurred on timescales from 100 years to 98 Myr with a median of a few $\times 10^5$ years. In addition, about 1/4 of simulations were stable for 100 Myr, which shows that we started close to the stability boundary.} (Pollack {\it et al.\thinspace} 1996; Marzari \& Weidenschilling 2002). The outermost planet was placed two Hill radii interior to 10 AU; cases with more massive planets, and therefore larger Hill radii, had the innermost planet closer to the star than cases with lower-mass planets (see below). Planets were given zero eccentricity and mutual inclinations of less than 1 degree, and the stellar mass was 1 ${\rm\,M_\odot}$. We considered a range of planetary mass distributions.
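The spacing prescription above can be made concrete. The following sketch (Python; the function names are ours, and this is not the paper's actual setup code) computes the mutual Hill radius and places a planet a prescribed number of mutual Hill radii interior to a given orbit; an iteration is needed because $R_{H,m}$ depends on the unknown inner semimajor axis:

```python
def mutual_hill_radius(a1, a2, m1, m2, m_star=1.0):
    """R_Hm = 0.5 (a1 + a2) ([m1 + m2] / (3 M_*))^(1/3).

    Semimajor axes in AU, masses in solar masses."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)


def place_inner(a_out, m_in, m_out, k, m_star=1.0, tol=1e-12):
    """Semimajor axis a_in < a_out separated from a_out by k mutual
    Hill radii, found by fixed-point iteration (the map is a strong
    contraction for planetary masses, so it converges in a few steps)."""
    a_in = a_out
    for _ in range(200):
        a_new = a_out - k * mutual_hill_radius(a_in, a_out, m_in, m_out, m_star)
        converged = abs(a_new - a_in) < tol
        a_in = a_new
        if converged:
            break
    return a_in
```

Starting just interior to 10 AU and applying `place_inner` twice, with $k$ drawn uniformly in $[4,5]$, reproduces the kind of three-planet initial conditions described above.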
For our two largest sets (1000 simulations each) we randomly selected planet masses according to the observed distribution of exoplanet masses: $dN/dM \propto M^{-1.1}$ (Butler {\it et al.\thinspace} 2006). In the ``mixed1'' set we restricted the planet mass $M_p$ to be between a Saturn mass $M_{Sat}$ and three Jupiter masses $M_{Jup}$. For our ``mixed2'' set, the minimum planet mass was decreased to 10 ${\rm\,M_\oplus}$. We also performed four ``Mequal'' sets (500 simulations each) with equal mass planets for $M_p = 30 {\rm\,M_\oplus}$, $M_{Sat}$, $M_{Jup}$, and $3 M_{Jup}$. Finally, the ``Mgrad'' sets (250 simulations each) contained radial gradients in $M_p$. For the JSN set, in order of increasing orbital distance, $M_p$ = $M_{Jup}$, $M_{Sat}$, and $30 {\rm\,M_\oplus}$. For the NSJ set, these masses were reversed, i.e., the $M_{Jup}$ planet was the most distant. The 3JJS and SJ3J sets had, in increasing radial distance, $M_p$ = $3 M_{Jup}$, $M_{Jup}$ and $M_{Sat}$, and $M_p$ = $M_{Sat}$, $M_{Jup}$ and $3 M_{Jup}$, respectively. \begin{figure}[t] \centerline{\plotone{f1.eps}} \caption{Cumulative eccentricity distribution of the known exoplanets beyond 0.1 AU (thick grey), compared with our scattering simulations.} \label{fig:edist} \end{figure} Each simulation was integrated with the hybrid version of the {\tt Mercury} integrator (Chambers 1999). All planets were assigned physical densities of $1.3 {\rm \,g\,cm^{-3}}$ and collisions were treated as inelastic mergers. We used a 20-day timestep, which tests show introduces an error of less than 1 part in 10$^5$ for perihelion distances larger than 0.5 AU. In almost all cases energy was conserved to better than one part in 10$^4$ for the entire 100 Myr simulation, which Barnes \& Quinn (2004) showed is adequate precision to test stability. However, in some cases, energy was poorly conserved; those cases were rerun with a 5-day timestep.
After this step, simulations with poor energy conservation were removed from the analysis. \section{Results} Figure~\ref{fig:edist} shows that four of our sets of simulations match the exoplanet eccentricity distribution -- mixed1, Mequal:Jup, Mequal:Sat, and Mequal:30 ${\rm\,M_\oplus}$ -- with P values from K-S tests greater than 0.01. However, given the increasing number of low-mass exoplanets, we believe that our mixed1 and mixed2 simulations are the most realistic initial conditions. If scattering is the source of exoplanet eccentricities, then soon-to-be-discovered systems with lower-mass planets should indeed tend to have lower eccentricities (Ford \& Rasio 2008). \begin{figure}[t] \centerline{\plotone{f2.eps}} \caption{Evolution of a system that produced a pair of planets in the 3:1 MMR. {\bf Top:} The three planets' semimajor axes $a$, perihelia $q$ and aphelia $Q$. The inner (green), middle (black), and outer (red) planets are 43, 105, and 16 ${\rm\,M_\oplus}$, respectively. {\bf Bottom:} Evolution of the 3:1 resonant argument $\theta_3 = 3 \lambda_2 - \lambda_1 - (\varpi_1+\varpi_2)$. Resonant libration starts immediately after ejection of the outer planet.} \label{fig:evol31} \end{figure} We found MMRs by examining resonant arguments for simulations which produced pairs of planets with period ratios close to commensurate values. A pair of planets is in resonance if any resonant argument $\theta_i$ librates rather than circulates. For the MMR $(p+q)$:$p$, arguments are of the form \begin{equation} \theta_{i} = (p+q) \lambda_2 - p \lambda_1 - q \varpi_{1,2} \end{equation} \noindent where $\lambda$ are mean longitudes, $\varpi$ are longitudes of pericenter, and subscripts 1 and 2 refer to the inner and outer planet, respectively (e.g., Murray \& Dermott 1999). Figure~\ref{fig:evol31} shows the evolution of a typical simulation that created a resonant system. The instability started 55.8 Myr into the simulation, causing a series of close encounters.
Within a few hundred thousand years the outer planet was ejected and the inner two planets swapped places. The two remaining planets are on stable orbits in the 3:1 MMR, and all three resonant arguments $\theta_{1,2,3}$ librate with amplitudes between 120$^\circ$ and 160$^\circ$. A variety of MMRs is populated by scattering (see Table 2). Most common are the 2:1 and 3:1, but higher-order MMRs exist up to eleventh order (13:2). The resonant libration amplitudes tend to be large, with a median of 110$^\circ$ and several cases with amplitudes of $\sim 170^\circ$. This contrasts with MMRs from migration, which are established in a dissipative environment and whose libration amplitudes should therefore be much smaller. MMRs occur preferentially in cases with mixed mass distributions, especially those with a positive mass gradient such as Mgrad:NSJ. MMRs are relatively rare for equal mass planets, and they tend to arise more often after collisions than after ejections, which contrasts with the mixed and Mgrad cases. This may explain why MMRs have not been found in previous studies (except for isolated cases in Adams \& Laughlin 2003 and Chatterjee {\it et al.\thinspace} 2008). MMRs appear to be populated at random: any stable region of parameter space can be accessed by scattering. To test this hypothesis, we calculated the phase space density of resonant orbits within 10\% of the 2:1 and 3:1 MMRs for planetary mass ratios of 1/3, 1, and 3, with $M_{inner}+M_{outer}=400 {\rm\,M_\oplus}$. For each MMR we ran $\sim$ 22,000 3-body (star + two planets) simulations for 1 Myr. The semimajor axis of the inner planet was fixed at 5 AU (2:1 MMR) or 4 AU (3:1 MMR). We sampled four parameters: the orbital period ratio, the inner and outer planets' eccentricities, and the relative apsidal orientation. Inclinations (of $<1^\circ$) and mean longitudes were sampled at random. Resonant orbits were found by libration of resonant arguments.
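The libration criterion used above can be implemented crudely as follows (Python; a simplified stand-in for the actual analysis, assuming a well-sampled time series of a resonant angle in degrees): a librating angle stays within a limited arc around its circular mean, while a circulating angle eventually visits values $\sim 180^\circ$ away from any center.

```python
import math


def libration_amplitude(theta_deg):
    """Largest wrapped deviation (in degrees) of the angle series from
    its circular mean; values approaching 180 indicate circulation."""
    th = [math.radians(t) for t in theta_deg]
    mean = math.atan2(sum(math.sin(t) for t in th) / len(th),
                      sum(math.cos(t) for t in th) / len(th))
    worst = 0.0
    for t in th:
        d = (t - mean + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
        worst = max(worst, abs(d))
    return math.degrees(worst)


def librates(theta_deg, cutoff=175.0):
    """True if the angle appears to librate rather than circulate."""
    return libration_amplitude(theta_deg) < cutoff
```

The cutoff just below $180^\circ$ guards against large-amplitude librators, which the text shows are common in scattered systems, being misclassified as circulators due to sampling noise.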
The density of 2:1 resonant orbits is higher for a more massive outer planet (Figure~\ref{fig:res21}).\footnote{For a more detailed study of the 2:1 MMR, see Marzari {\it et al.\thinspace} (2006) and Michtchenko {\it et al.\thinspace} (2008a, 2008b).} Almost all of the 2:1 and 3:1 resonant orbits from scattering are found in areas of high resonant density, and near-resonant ``false alarms'' lie in areas of low density. Thus, scattering does indeed appear to populate MMRs at random. This explains why the 2:1 MMRs from our mixed1 and mixed2 simulations, with no initial mass gradients, have a median $M_{inner}/M_{outer}$ of 0.5. In addition, the Mgrad:NSJ ($M_{inner}/M_{outer}\approx 1/3$) systems formed a large number of 2:1 MMRs while the Mgrad:JSN ($M_{inner}/M_{outer} \approx 3$) cases formed far fewer. For the parameter space we sampled, the integrated 3:1 resonant density is $\sim 40\%$ less than the 2:1 density. However, the available parameter space is not evenly populated by scattering. For example, scattering causes more massive planets to be closer to the star (Chatterjee {\it et al.\thinspace} 2008). Indeed, the integrated 2:1 resonant density for $M_{inner}/M_{outer} = 1/3$ is 66\% higher than the 3:1 density. This explains the increased number of 2:1 vs. 3:1 MMRs in our sample -- 74 cases of the 2:1 MMR and 47 of the 3:1 (57\% more in 2:1).
\begin{deluxetable}{c|c|c|c|c}[t] \scriptsize \tablewidth{0pc} \tablecaption{Candidate Resonant Planetary systems\tablenotemark{1}} \renewcommand{\arraystretch}{.6} \tablehead{ \\ \colhead{System} & \colhead{$a_1,a_2$} & \colhead{$e_1,e_2$} & \colhead{$M_1,M_2$} & \colhead{MMR} \\ \colhead{(pair)} & \colhead{(AU)} & \colhead{ } & \colhead{($M_{Jup}$)} } \startdata \\ GJ 876 c-b & 0.13, 0.2078 & 0.27, 0.025 & 0.56, 1.935 & 2:1\\ HD 73526 b-c & 0.66, 1.05 & 0.19, 0.14 & 2.9, 2.5 & 2:1\\ HD 82943 c-b & 0.746, 1.19 & 0.359, 0.219 & 2.01, 1.75 & 2:1\\ HD 128311 b-c & 1.099, 1.76 & 0.25, 0.17 & 2.18, 3.21 & 2:1\\ $\mu$ Arae d-b & 0.921, 1.497 & 0.067, 0.128 & 0.522, 1.676 & 2:1\\ GJ 317 b-c & 0.95, 2.35 & 0.19, 0.42 & 1.2, 0.83 & 4:1\tablenotemark{2} \\ HD 108874 b-c & 1.051, 2.68 & 0.07, 0.25 & 1.36, 1.018 & 4:1\\ HD 17156 b-c & 0.159, 0.481 & 0.6717, 0.136 & 3.111, 0.063 & 5:1\\ HD 202206 b-c & 0.83, 2.55 & 0.435, 0.267 & 17.4, 2.44 & 5:1\\ HD 208487 b-c & 0.49, 1.8 & 0.32, 0.19 & 0.45, 0.46 & 7:1\\ \enddata \tablenotetext{1}{See http://www.lpl.arizona.edu/$\sim$rory/research/xsp/dynamics/.} \tablenotetext{2}{Johnson {\it et al.\thinspace} (2007) did not determine apsidal angles, but Barnes \& Greenberg (2008) used a stability analysis to predict that the system must be in the 4:1 MMR.} \end{deluxetable} The density of resonant orbits can explain other features of the population of resonant planets. The resonant planets tend to have smaller eccentricities than the non-resonant planets (Fig.~\ref{fig:edist}). Indeed, resonant systems underwent an average of about five times fewer close encounters between planets before the system stabilized ($\sim$ 30-50 vs. 100-$>$200 encounters), compared with the median outcome. The time between the first instability and stabilization for resonant cases was $\sim$ 50,000 years, a factor of about five shorter than for the non-resonant systems. \begin{figure}[t] \centerline{\plotone{f3.eps}} \caption{Resonant density for the 2:1 MMR. 
For these simulations, the inner planet's semimajor axis was fixed at 5 AU, and its eccentricity was varied between 0.05 and 0.25. The color corresponds to the fraction of orbits in resonance for a given value of the outer planet's semimajor axis and eccentricity (see color bar), averaged over 8 simulations with different apsidal alignments.} \label{fig:res21} \end{figure} Low-order MMRs preferentially arise in systems containing a planet with a large Safronov number $S$. $S$ is the ratio of the escape speed from a planet's surface to the escape speed from the system, $S = (M_p/M_\star)^{1/2}\, (a_p/R_p)^{1/2}$, where $M_\star$ is the stellar mass, $a_p$ and $R_p$ are the planet's orbital distance and radius, respectively (Safronov 1969). Planets with larger $S$ give stronger velocity kicks and thereby reduce the number of encounters needed to eject a planet. Indeed, the 2:1 and 3:1 MMRs correlate with systems with at least one planetary $S$ value above 4. In the mixed2 set, which is the only set with a significant range in $S$, the median $S_{max}$ for the 2:1 and 3:1 MMRs is 4.5, as compared with 3.5 for all unstable mixed2 simulations. A K-S test shows that the two samples are indeed different at the 99.9\% confidence level. This constrains where the 2:1 or 3:1 MMRs can arise as a function of $M_p$, $M_\star$, $a_p$, and $R_p$: only systems with relatively high-mass planets ($M_p \gtrsim M_{Jup}$) can generate these MMRs close-in. In contrast, high-order (4:1 and higher) MMRs from the mixed2 set match the sample of unstable cases and so are not constrained. The MMRs we found are numerically robust. The median fractional integration error dE/E for the 170 resonant systems was $2.6 \times 10^{-8}$, far below the $\sim10^{-4}$ limit for determining stability (Barnes \& Quinn 2004), and smaller than the median dE/E of $1.2 \times 10^{-7}$ for all unstable systems. MMRs tend to arise in cases with short encounter times and relatively low final eccentricities. 
Those situations yield smaller dE/E than for the more common stronger encounters that lead to very eccentric planets. Thus, our cutoff of dE/E $< 10^{-4}$ allows us to accurately sample both eccentricities and MMRs. \section{Discussion and Conclusions} Planet-planet scattering creates MMRs. The typical path to a resonant system involves several close encounters between one smaller and two larger planets. After a relatively short time of instability, the smaller planet is removed, either via ejection (78\% of all cases) or collision (22\%), leaving behind a pair of resonant planets. Relatively weak instabilities are probably very common in planetary systems; one may even have occurred in our own Solar System (Thommes {\it et al.\thinspace} 1999). Nonetheless, only a fraction of unstable systems produce resonant planets, typically 5-10\%. These systems have large libration amplitudes and occupy a range of low- and high-order MMRs (Table 2). Most of these resonances are indefinitely stable; we integrated the 170 resonant systems for an additional 1 Gyr and only 9 (5\%) left the resonance, with 4 of these leading to an additional system instability.
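The Safronov-number criterion discussed in the Results is easy to evaluate; below is a minimal sketch (Python; the constant and function names are ours) with masses in solar masses, orbital distance in AU, and planetary radius in km, checked against Jupiter at 5 AU:

```python
import math

AU_KM = 1.495978707e8  # kilometres per AU


def safronov_number(m_planet, m_star, a_au, r_planet_km):
    """S = (M_p / M_*)^(1/2) (a_p / R_p)^(1/2): the ratio of the escape
    speed from the planet's surface to the escape speed from the system
    at the planet's orbit."""
    return math.sqrt((m_planet / m_star) * (a_au * AU_KM / r_planet_km))


# Jupiter around the Sun: S ~ 3 at 5 AU, rising past 4 near 10 AU.
S_jup_5au = safronov_number(9.546e-4, 1.0, 5.0, 7.149e4)
```

In these units a Jupiter-mass, Jupiter-radius planet only reaches $S > 4$ well beyond 5 AU, consistent with the statement that the 2:1 and 3:1 MMRs require $M_p \gtrsim M_{Jup}$ to arise close-in.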
\begin{deluxetable}{c|c|c|p{2cm}}[t] \scriptsize \tablewidth{0pt} \tablecaption{Resonances from scattering simulations} \renewcommand{\arraystretch}{.6} \tablehead{ \\ \colhead{Set} & \colhead{Nsims ---} & \colhead{N(\%) in} & \colhead{MMRs}\\ \colhead{ } & \colhead{unstable(\%)} & \colhead{MMRs} & \colhead{(\%)}} \startdata Mixed1 & 965--569 (59\%) & 27 (4.7\%) & 2:1 (1.6\%), 3:1 (1.6\%), 4:1 (0.7\%), 5:1, 6:1, 7:2, 9:2\\ Mixed2 & 982--744 (76\%) & 52 (7\%) & 3:2 (0.8\%), 2:1 (2.4\%), 3:1 (1.1\%), 5:3, 4:1, 5:2, 5:1, 7:3, 6:1, 7:2, 8:3, 9:4, 10:3, 12:5, 11:2, 14:5\\ Mequal:3$M_J$ & 368--241 (65\%) & 1 (0.4\%) & 7:1\\ Mequal:$M_J$ & 452--232 (51\%) & 4 (1.7\%) & 2:1, 3:1, 4:1 (0.9\%)\\ Mequal:$M_{Sat}$ & 390--362 (93\%) & 14 (3.9\%) & 2:1 (1.9\%), 3:1 (0.6\%), 5:2, 5:1 (0.6\%), 6:1, 7:1\\ Mequal:30${\rm\,M_\oplus}$ & 367--365 (99\%) & 10 (2.7\%) & 2:1 (1.9\%), 3:1, 11:6\\ Mgrad:JSN & 250--206 (82\%) & 13 (6.3\%) & 2:1 (1\%), 3:1 (1.9\%), 4:1, 5:2, 5:1, 7:3, 8:3, 13:2\\ Mgrad:NSJ & 245--221 (90\%) & 30 (14.6\%) & 2:1 (9.2\%), 3:1 (2.3\%), 5:2, 7:3, 7:2, 11:5\\ Mgrad:3JJS & 250--150 (60\%) & 4 (2.7\%) & 3:1, 4:1 (1.3\%), 11:3\\ Mgrad:SJ3J & 245--219 (89\%) & 16 (6.5\%) & 3:1 (5.7\%), 4:1 \enddata \end{deluxetable} It may be possible to tell apart resonant exoplanets created via scattering from those created via convergent migration. In fact, only one resonant system appears to be inconsistent with a scattering origin due to its very low-amplitude libration (GJ 876; Marcy {\it et al.\thinspace} 2001). If two planets are trapped in the 2:1 or 3:2 MMR and the inner planet is the more massive, then migration can be stopped or even reversed (Masset \& Snellgrove 2001; Crida \& Morbidelli 2007). However, if the outer planet is the more massive then inward migration continues. In contrast, scattering produces planets in a variety of MMRs (including the 3:2 and 2:1) with a wide range in mass ratios and a preference for the outer planet to be more massive. 
Thus, scattering is likely to be responsible for systems past $\sim$ 1 AU with 2:1 or 3:2 resonant planets and a more massive outer planet. The HD 128311 and $\mu$ Arae systems are good candidates for creation via scattering (Table 1; see also S\'andor \& Kley 2006). Several extra-solar systems show tentative evidence for high-order MMRs -- 4:1, 5:1 and even 7:1 (Table 1; Johnson {\it et al.\thinspace} 2007; Correia {\it et al.\thinspace} 2005; Gregory 2007; Short {\it et al.\thinspace} 2008). No study to date has shown that migration could capture planets in MMRs of higher order than 2:1, although we encourage expanded studies of this process.\footnote{Highly-damped bodies can undergo resonant shepherding by the 6:1 or even 8:1 MMRs (Raymond {\it et al.\thinspace} 2006; Mandell {\it et al.\thinspace} 2007). However, as bodies grow the damping decreases, and shepherded planets do not survive in resonance.} Scattering produces a wide range of high-order resonances (Table 2). Thus, if the current candidate high-order MMR systems are confirmed (Table 1), then scattering is likely to be the responsible mechanism. Turbulence in gaseous disks may destroy MMRs, leaving perhaps only $\sim$ 1\% of planet pairs in resonance (Adams {\it et al.\thinspace} 2008). This effect is stronger for higher-order MMRs. However, the timescale for MMR destruction is sensitive to the strength of MRI turbulence which is very uncertain. In particular, if the MRI is fully or partially suppressed by the low ionization fraction in the inner protoplanetary disk (Gammie 1996) then the survival prospects for resonant planets would be improved. Nonetheless, if only a few percent of systems remain in resonance, then scattering and migration may provide a comparable number of MMRs. Our simulations do not account for any dissipation. However, instabilities may occur while some gas remains in the disk (Moeckel {\it et al.\thinspace} 2008). 
In that case, MMRs could still result from scattering (Lee {\it et al.\thinspace} 2008) and perhaps have smaller libration amplitudes. In conclusion, we have identified a new mechanism for the creation of exoplanet systems in MMRs. Unfortunately the current data do not allow a conclusive determination of a resonance, let alone precise descriptions of the resonant argument oscillation. Nonetheless, our scattering model has important distinctions from the convergent migration model: high-order MMRs, large-amplitude resonant libration, and low-order MMRs with $M_{inner}/M_{outer} < 1$. As the orbital properties of exoplanets are better determined, it should be possible to distinguish between these scenarios. \vskip .2in We thank Google for access to their machines. We are grateful to Greg Laughlin, Dimitri Veras, and an anonymous referee for helpful input. S.N.R. was supported by the NASA Postdoctoral Program administered by Oak Ridge Associated Universities through a contract with NASA. R.B. acknowledges support from NASA's PG\&G grant NNG05GH65G and NASA Terrestrial Planet Finder Foundation Science grant 811073.02.07.01.15.
\section{Introduction} \label{sec:introduction} Graph coloring is a fundamental problem in graph theory, which has been extensively studied over the years (see, e.g.,~\cite{DBLP:books/daglib/0077283} for an overview). Most of the research in this area has been devoted to the \emph{vertex-coloring problem} (or \emph{coloring problem}, for short), which dates back to 1852~\cite{MM12}. In its general form, the problem asks to label the vertices of a graph with a given number of colors, so that no two adjacent vertices share the same color. In other words, a coloring of a graph partitions its vertices into a particular number of independent sets (each of these sets is usually referred to as a \emph{color class}, as all its vertices have the same color). A central result in this area is the so-called \emph{four color theorem}, according to which every planar graph admits a coloring with at most four colors; see e.g.~\cite{MR2463991}. Note that the problem of deciding whether a planar graph is $3$-colorable is NP-complete~\cite{DBLP:books/fm/GareyJ79}, even for graphs of maximum degree~$4$~\cite{d-uccp-80}. Several variants of the coloring problem have been proposed over the years. One of the most studied is the so-called \emph{defective coloring}, which was independently introduced by Andrews and Jacobson~\cite{aj-gcn-85}, Harary and Jones~\cite{hj-cc-85}, and Cowen et al.~\cite{cgj-97}. In the defective coloring problem, edges between vertices of the same color class are allowed, as long as the monochromatic components induced by vertices of the same color maintain some special structure. In this respect, one can regard the classical vertex-coloring as a defective one in which every monochromatic component is an isolated vertex, given that every color class defines an independent set. In this work we focus on defective colorings in which each monochromatic component is acyclic and has small diameter.
In particular, we call a graph $G$ \emph{\colorable{\kappa}{\lambda}} if the vertices of $G$ can be colored with $\kappa$ colors, so that all monochromatic components are acyclic and of diameter at most~$\lambda$, where $\kappa \geq 1$, $\lambda \geq 0$. Clearly, a classical $\kappa$-coloring corresponds to a \coloring{\kappa}{0}. The \emph{diameter of a coloring} is defined as the maximum diameter among the monochromatic components. We present algorithmic and complexity results for \colorings{\kappa}{\lambda} for small values of $\kappa$ and $\lambda = 2$. For simplicity, we refer to this problem as \emph{\kcoloring{\kappa}}, as each monochromatic component is a \emph{star} (i.e., a tree of diameter at most two; see Figure~\ref{fig:star}). Similarly, we refer to the \coloring{\kappa}{\lambda} problem when $\lambda=1$ as the \emph{\konecoloring{\kappa}} problem. By definition, a \konecoloring{\kappa} is also a \kcoloring{\kappa}. Figures~\ref{fig:example1}--\ref{fig:example3} show a trade-off between the number of colors and the structure of the monochromatic components. Our work can be seen as a variant of the \emph{bipartisation} of graphs, namely the problem of making a graph bipartite by removing a small number of elements (e.g., vertices or edges), which is a central graph problem with many applications~\cite{Hadlock75,Karp72}. Bipartisation by removing a (not necessarily minimal) number of \emph{non-adjacent} edges corresponds to the \konecoloring{2} problem. In the \kcoloring{2} problem, we also solve a form of bipartisation by removing independent stars. Note that we do not ask for the minimum number of removed stars but for the existence of a solution.
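To make the definition concrete, the following sketch (Python; not part of the algorithms in this paper, and assuming the graph is given as an adjacency-list dictionary) verifies whether a vertex coloring is a \kcoloring{\kappa}. It relies on the fact that a connected graph is a star exactly when at most one of its vertices has degree at least two:

```python
from collections import deque


def is_star_coloring(adj, color):
    """True iff every monochromatic connected component of the colored
    graph is a star, i.e., a tree of diameter at most two.

    adj:   dict mapping each vertex to an iterable of its neighbours.
    color: dict mapping each vertex to its color."""
    seen = set()
    for s in adj:
        if s in seen:
            continue
        # BFS over the monochromatic component containing s.
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] == color[s] and v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        # A star has at most one vertex of induced degree two or more;
        # this also rules out cycles, whose vertices all have degree >= 2.
        high = sum(1 for u in comp
                   if sum(1 for v in adj[u] if v in comp) >= 2)
        if high > 1:
            return False
    return True
```

Such a check runs in time linear in the graph size and is handy for validating the outputs of any \kcoloring{2} heuristic or exact algorithm.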
\begin{figure}[tb] \centering \subfloat[\label{fig:example1}]{\includegraphics[scale=1.4,page=1]{figures}} \hfil \subfloat[\label{fig:example2}]{\includegraphics[scale=1.4,page=2]{figures}} \hfil \subfloat[\label{fig:example3}]{\includegraphics[scale=1.4,page=3]{figures}} \hfil \subfloat[\label{fig:star}]{\includegraphics[scale=1.4,page=4]{figures}} \caption{(a-c) Different colorings of the same graph: (a)~a traditional $4$-coloring, (b)~an \konecoloring{3} (c)~a \kcoloring{2}; (d)~a star with three leaves; its \emph{center} has degree~$3$.} \label{fig:examples} \end{figure} To the best of our knowledge, this is the first time that the defective coloring problem is studied under the requirement of having color classes of small diameter. Previous research was focused either on their size or their degree~\cite{aj-gcn-85,cgj-97,hj-cc-85,Lo66,Lovasz1975269}. As byproducts of these works, one can obtain several results for the \konecoloring{\kappa} problem. More precisely, from a result of Lov\'{a}sz~\cite{Lo66}, it follows that all graphs of maximum degree $4$ or $5$ are \konecolorable{3}. However, determining whether a graph of maximum degree~$7$ is \konecolorable{3} is NP-complete~\cite{cgj-97}. In the same work, Cowen et al.~\cite{cgj-97} prove that not all planar graphs are \konecolorable{3} and that the corresponding decision problem is NP-complete, even in the case of planar graphs of maximum degree~$10$. Results for graphs embedded on general surfaces are also known~\cite{a-ndc-87,ccw-86,cgj-97}. Closely related is also the so-called \emph{tree-partition-width} problem, which is a variant of the defective coloring problem in which the graphs induced by each color class must be acyclic~\cite{Ding199645,ehno-aepofbtg-11,Wood20091245}, i.e., there is no restriction on their diameter. Our contributions are: \begin{itemize} \item In Section~\ref{sec:algorithms}, we present a linear-time algorithm to determine whether an outerplanar graph is \kcolorable{2}. 
Note that outerplanar graphs are 3-colorable\xspace \cite{ps-evecog-86}, and hence \kcolorable{3}, but not necessarily \kcolorable{2}. On the other hand, we can always construct \kcolorings{2} for outerpaths (which form a special subclass of outerplanar graphs whose weak-dual\footnote{Recall that the \emph{weak-dual} of a plane graph is the subgraph of its dual induced by neglecting the face-vertex corresponding to its unbounded face.} is a path). \item In Section~\ref{sec:np-completeness}, we prove that the \kcoloring{2} problem is NP-complete, even for graphs of maximum degree $5$ (note that the corresponding \konecoloring{2} problem is NP-complete, even for graphs of maximum degree $4$~\cite{cgj-97}). Since all graphs of maximum degree $3$ are \konecolorable{2}~\cite{Lo66}, this result leaves open only the case for graphs of maximum degree~$4$. We also prove that the \kcoloring{3} problem is NP-complete, even for graphs of maximum degree $9$ (recall that the corresponding \konecoloring{3} problem is NP-complete, even for graphs of maximum degree~$7$~\cite{cgj-97}). Since all graphs of maximum degree $4$ or $5$ are \konecolorable{3}~\cite{Lo66}, our result implies that the computational complexity of the \kcoloring{3} problem remains unknown only for graphs of maximum degree $6$, $7$, and $8$. For planar graphs, we prove that the \kcoloring{2} problem remains NP-complete even for triangle-free planar graphs (recall that triangle-free planar graphs are always 3-colorable\xspace~\cite{Kow10}, while the test of 2-colorability\xspace can be done in linear~time). \end{itemize} \section{Coloring Outerplanar Graphs and Subclasses} \label{sec:algorithms} In this section we consider \kcolorings{2} of outerplanar graphs. To demonstrate the difficulty of the problem, we first give an example (see Figure~\ref{fig:outercouterexample}) of a small outerplanar graph not admitting any \kcoloring{2}.
Therefore, in Theorem~\ref{thm:outerplanar} we study the complexity of deciding whether a given outerplanar graph admits such a coloring and present a linear-time algorithm for this problem; note that outerplanar graphs always admit 3-colorings\xspace~\cite{ps-evecog-86}. Finally, we show that a notable subclass of outerplanar graphs, namely outerpaths, always admit \kcolorings{2} by providing a constructive linear-time algorithm (see Theorem~\ref{thm:outerpaths}). \begin{lemma}\label{lem:outercounterexample} There exist outerplanar graphs that are not \kcolorable{2}. \end{lemma} \begin{proof} We prove that the outerplanar graph of Figure~\ref{fig:outercouterexample} is not \kcolorable{2}. In particular, we show that in any $2$-coloring of this graph there exists a monochromatic path of four vertices. Assume w.l.o.g.~that vertex $u$ has color gray. Then, at least two vertices out of $u_1,\dots,u_8$ are gray, as otherwise there would be a path of four white vertices. Hence, $u$ is the center of a gray star. Next, we observe that either $u_2$ is white or the path $u_{21},\dots,u_{24}$ must consist of only white vertices. Similarly, we observe that either $u_3$ is white or the path $u_{31},\dots,u_{34}$ must consist of only white vertices. If both $u_2$ and $u_3$ are white, then either one of paths $u_{21},\dots,u_{24}$ and $u_{31},\dots,u_{34}$ consists only of gray vertices, or there exists a path from one of $u_{21},\dots,u_{24}$ via $u_2$ and $u_3$ to one of $u_{31},\dots,u_{34}$, that consists only of white vertices. Clearly, all aforementioned cases lead to a monochromatic path of four vertices. \end{proof} \begin{figure}[htb] \centering \subfloat[\label{fig:outercouterexample}]{\includegraphics[page=5]{figures}} \hfil \subfloat[\label{fig:outer-defs}]{\includegraphics[page=6]{figures}} \caption{(a)~An outerplanar graph that is not \kcolorable{2}. (b)~An outerpath, whose spine edges are drawn as dashed segments. 
Dotted arcs highlighted in gray correspond to edges belonging to the fan of each spine vertex. Note that $|f_6|=0$.} \label{fig:outerpath} \end{figure} Lemma~\ref{lem:outercounterexample} implies that not all outerplanar graphs are \kcolorable{2}. In the following we give a linear-time algorithm to decide whether an outerplanar graph is \kcolorable{2} and, in case of an affirmative answer, to compute such a coloring. \begin{theorem}\label{thm:outerplanar} Given an outerplanar graph $G$, there exists a linear-time algorithm to test whether $G$ admits a \kcoloring{2} and to construct a \kcoloring{2}, if one exists. \end{theorem} \begin{proof} We assume that $G$ is embedded according to its outerplanar embedding. We can also assume that $G$ is biconnected. This is not a loss of generality, as we can always reduce the number of cut-vertices by connecting two neighbors $a$ and $b$ of a cut-vertex $c$ belonging to two different biconnected components with a path having two internal vertices. Clearly, if the augmented graph is \kcolorable{2}, then the original one is \kcolorable{2}. For the other direction, given a \kcoloring{2} of the original graph, we can obtain a corresponding coloring of the augmented graph by coloring the neighbors of $a$ and $b$ with a different color than the ones of $a$ and $b$, respectively. Denote by $T$ the weak dual of $G$ and root it at a leaf $\rho$ of $T$. For a node $\mu$ of $T$, we denote by $G(\mu)$ the subgraph of $G$ corresponding to the subtree of $T$ rooted at $\mu$. We also denote by $f(\mu)$ the face of $G$ corresponding to $\mu$ in $T$. If $\mu \neq \rho$, consider the parent $\nu$ of $\mu$ in $T$ and their corresponding faces $f(\nu)$ and $f(\mu)$ of $G$, and let $(u,v)$ be the edge of $G$ shared by $f(\nu)$ and $f(\mu)$. We say that $(u,v)$ is the \emph{attachment edge} of $G(\mu)$ to $G(\nu)$.
The attachment edge of the root $\rho$ is any edge of face $f(\rho)$ that is incident to the outer face (since $G$ is biconnected and $\rho$ is a leaf, this edge always exists). Consider a \kcoloring{2} of $G(\mu)$. In this coloring, each of the endpoints $u$ and $v$ of the attachment edge of $G(\mu)$ plays exactly one of the following roles: \begin{inparaenum}[$(i)$] \item \emph{center} or \item \emph{leaf} of a colored star; \item \emph{isolated vertex}, that is, it has no neighbor with the same color; or \item \emph{undefined}, that is, the only neighbor of $u$ (resp. $v$) which has its same color is $v$ (resp. $u$). Note that if the only neighbor of $u$ (resp. $v$) which has its same color is different from $v$ (resp. from $u$), we consider $u$ (resp. $v$) as a center. \end{inparaenum} Two \kcolorings{2} of $G(\mu)$ are \emph{equivalent} w.r.t. the attachment edge $(u,v)$ of $G(\mu)$ if in the two \kcolorings{2} each of $u$ and $v$ has the same color and plays the same role. This definition of equivalence determines a partition of the colorings of $G(\mu)$ into a set of equivalence classes. Since both the number of colors and the number of possible roles of each vertex $u$ and $v$ are constant, the number of different equivalence classes is also constant (note that, when the role is undefined, $u$ and $v$ must have the same color). In order to test whether $G$ admits a \kcoloring{2}, we perform a bottom-up traversal of $T$. When visiting a node $\mu$ of $T$ we compute the maximal set $C(\mu)$ of equivalence classes such that, for each class $C \in C(\mu)$, graph $G(\mu)$ admits at least one coloring belonging to $C$. Note that $|C(\mu)| \le 38$. In order to compute $C(\mu)$, we consider the possible equivalence classes one at a time, and check whether $G(\mu)$ admits a \kcoloring{2} in this class, based on the sets $C(\mu_1),\dots,C(\mu_h)$ of the children $\mu_1,\dots,\mu_h$ of $\mu$ in $T$, which have been previously computed.
In particular, for an equivalence class $C$ we test the existence of a \kcoloring{2} of $G(\mu)$ belonging to $C$ by selecting an equivalence class $C_i \in C(\mu_i)$ for each $i = 1,\dots,h$ in such a way that: \begin{enumerate} \item the color and the role of $u$ in $C_1$ are the same as those $u$ has in $C$; \item the color and the role of $v$ in $C_h$ are the same as those $v$ has in $C$; \item for any two consecutive children $\mu_i$ and $\mu_{i+1}$, let $x$ be the vertex shared by $G(\mu_i)$ and $G(\mu_{i+1})$. Then, $x$ has the same color in $C_i$ and $C_{i+1}$ and, if $x$ is a leaf in $C_i$, then $x$ is isolated in $C_{i+1}$ (or vice-versa); and \item for any three consecutive children $\mu_{i-1}$, $\mu_i$, and $\mu_{i+1}$, let $x$ (resp. $y$) be the vertex shared by $G(\mu_{i-1})$ and $G(\mu_i)$ (resp. by $G(\mu_{i})$ and $G(\mu_{i+1})$). Then, $x$ (resp. $y$) has the same color in $C_i$ and $C_{i-1}$ (resp. $C_{i+1}$); also, if $x$ and $y$ are both undefined in $C_i$, then in $C_{i-1}$ and $C_{i+1}$ none of $x$ and $y$ is a leaf, and at least one of them is isolated. \end{enumerate} Note that the first two conditions ensure that the coloring belongs to $C$, while the other two ensure that it is a \kcoloring{2}. Since the cardinality of each set $C(\mu_i)$ is bounded by a constant, the test can be done in time linear in the number of children of $\mu$. If the test succeeds, we add $C$ to $C(\mu)$. Once all $38$ equivalence classes are tested, if $C(\mu)$ is empty, then we conclude that $G$ is not \kcolorable{2}. Otherwise we proceed with the traversal of $T$. At the end of the traversal, if $C(\rho)$ is not empty, we conclude that $G$ is \kcolorable{2}. A \kcoloring{2} of $G$ can be easily constructed by traversing $T$ top-down, following the choices performed during the bottom-up visit. \end{proof} In the following, we consider a subclass of outerplanar graphs, namely outerpaths, and we prove that they always admit \kcolorings{2}.
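The bound $|C(\mu)| \le 38$ used in the proof of Theorem~\ref{thm:outerplanar} can be recovered by direct enumeration. The following Python sketch is our own transcription of the rules (two colors, four roles, with the role \emph{undefined} forcing both endpoints to be undefined and equally colored); it is an illustration, not code from the paper.

```python
from itertools import product

COLORS = ("white", "gray")
ROLES = ("center", "leaf", "isolated", "undefined")

def attachment_edge_classes():
    """Enumerate the equivalence classes of 2-colorings with respect
    to an attachment edge (u, v): a color and a role per endpoint.
    The role 'undefined' means that the only same-colored neighbor of
    u is v (and vice versa), so it forces both endpoints to be
    undefined and to share the same color."""
    classes = []
    for cu, ru, cv, rv in product(COLORS, ROLES, COLORS, ROLES):
        if "undefined" in (ru, rv) and not (ru == rv == "undefined" and cu == cv):
            continue  # undefined is a joint property of the edge (u, v)
        classes.append((cu, ru, cv, rv))
    return classes

print(len(attachment_edge_classes()))  # 36 mixed-role classes + 2 undefined ones = 38
```

The count matches the constant in the proof: $2 \cdot 3 \cdot 2 \cdot 3 = 36$ classes in which neither endpoint is undefined, plus two classes in which both are undefined with a common color.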
Note that the example that we presented in Lemma~\ref{lem:outercounterexample} is ``almost'' an outerpath, meaning that the weak dual of this graph contains only degree-$1$ and degree-$2$ vertices, except for one specific vertex that has degree~$3$ (see the face of Figure~\ref{fig:outercouterexample} highlighted in gray). Recall that the weak dual of an outerpath is a path (hence, it consists of only degree-$1$ and degree-$2$ vertices). Let $G$ be an outerpath (see Figure~\ref{fig:outer-defs}). We assume that $G$ is inner-triangulated. This is not a loss of generality, as any \kcoloring{2} of a triangulated outerpath induces a \kcoloring{2} of any of its subgraphs. We first give some definitions. We call \emph{spine vertices} the vertices $v_1,v_2,\dots,v_m$ that have degree at least four in $G$. We consider an additional spine vertex $v_{m+1}$, which is the (unique) neighbor of $v_m$ along the cycle delimiting the outer face that is not adjacent to $v_{m-1}$. Note that the spine vertices of $G$ induce a path, which we call the \emph{spine} of $G$\footnote{Note that the spine of $G$ coincides with the spine of the caterpillar obtained from the outerpath $G$ by removing all the edges incident to its outer face, neglecting the additional spine vertex $v_{m+1}$.}. The \emph{fan} $f_i$ of a spine vertex $v_i$ consists of the set of neighbors of $v_i$ in $G$, except for $v_{i-1}$ and for those following and preceding $v_i$ along the cycle delimiting the outer face\footnote{Fan $f_i$ contains all the leaves of the caterpillar incident to $v_i$, plus the following spine vertex $v_{i+1}$.}; note that $|f_i|\ge 1$ for each $i=1,\dots,m$, while $|f_{m+1}|=0$. For each $i = 1, \dots,m+1$, we denote by $G_i$ the subgraph of $G$ induced by the spine vertices $v_1,\dots,v_i$ and by the fans $f_1,\dots,f_{i-1}$. Note that $G_{m+1}=G$. We denote by $c_i$ the color assigned to spine vertex $v_i$, and by $c(G_i)$ a coloring of graph $G_i$.
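Before describing the algorithm, recall the property that every coloring constructed below must satisfy: each color class induces a forest of stars, i.e., there is no monochromatic triangle and no monochromatic path on four vertices. A minimal Python sketch of this validity check (our own, assuming a simple edge-list representation):

```python
from collections import defaultdict

def is_star_forest(edges):
    """A graph is a disjoint union of stars iff every edge has at
    most one endpoint of degree greater than one; this excludes both
    triangles and paths on four vertices."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return all(deg[u] <= 1 or deg[v] <= 1 for u, v in edges)

def is_valid_coloring(edges, color):
    """Check that every monochromatic component is a star."""
    mono = defaultdict(list)
    for u, v in edges:
        if color[u] == color[v]:
            mono[color[u]].append((u, v))
    return all(is_star_forest(es) for es in mono.values())

# A gray path on four vertices is rejected; breaking it is accepted.
path = [(1, 2), (2, 3), (3, 4)]
print(is_valid_coloring(path, {1: "g", 2: "g", 3: "g", 4: "g"}))  # False
print(is_valid_coloring(path, {1: "g", 2: "g", 3: "w", 4: "w"}))  # True
```

A connected graph is a star exactly when every edge has at most one endpoint of degree greater than one, which is what `is_star_forest` tests edge by edge.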
Finally, we say that an edge of $G$ is \emph{colored} if its two endpoints have the same color. \begin{figure}[tb] \centering \includegraphics[page=7]{figures} \caption{Schematization of the algorithm. Each node represents the (unique) condition satisfied by $G_i$ at some step $0\le i\le k$. An edge label $0,1,e,o$ represents the fact that the cardinality of a fan $f_i$ is $0$, $1$, even $\neq 0$, or odd $\neq 1$. If the label contains two characters, the second one describes the cardinality of $f_{i+1}$. An edge between $Q_j$ and $Q_h$ with label $x\in\{1,e,o\}$ (with label $xy$, where $y\in\{0,1,e,o\}$) represents the fact that, if $G_i$ satisfies condition $Q_j$ and $|f_i|=x$ (resp. $|f_i|=x$ and $|f_{i+1}|=y$), then $f_i$ is colored so that $G_{i+1}$ satisfies $Q_h$.}\label{fig:automaton} \end{figure} \begin{theorem}\label{thm:outerpaths} Every outerpath admits a \kcoloring{2}, which can be computed in linear time. \end{theorem} \begin{proof} Let $G$ be an outerpath with spine $v_1,\dots,v_k$. We describe an algorithm to compute a \kcoloring{2} of $G$. 
At each step $i = 1, \dots, k$ of the algorithm we consider the spine edge $(v_{i-1},v_{i})$, assuming that a \kcoloring{2} of $G_i$ has already been computed satisfying one of the following conditions (see Figure~\ref{fig:automaton}): \begin{description} \item[$Q_0$:] The only colored vertex is $v_1$; \item[$Q_1$:] $c_{i}\neq c_{i-1}$, vertex $v_{i-1}$ is the center of a star with color $c_{i-1}$, and no colored edge is incident to $v_{i}$; \item[$Q_2$:] $c_{i}=c_{i-1}$, and no colored edge other than $(v_{i-1}, v_i)$ is incident to $v_{i-1}$ or $v_{i}$; \item[$Q_3$:] $c_{i}\neq c_{i-1}$, vertex $v_{i-1}$ is a leaf of a star with color $c_{i-1}$, and no colored edge is incident to $v_{i}$; \item[$Q_4$:] $c_{i}\neq c_{i-1}$, vertex $v_{i-1}$ is the center of a star with color $c_{i-1}$, and vertex $v_{i}$ is the center of a star with color $c_{i}$; further, $i<k$ and $|f_i|>1$; \item[$Q_5$:] $c_{i}=c_{i-1}$, vertex $v_{i-1}$ is the center of a star with color $c_{i-1}$, and no colored edge other than $(v_{i-1}, v_i)$ is incident to $v_{i}$; further, $i<k$ and $|f_i|=1$. \end{description} Next, we color the vertices in $f_i$ in such a way that $c(G_{i+1})$ is a \kcoloring{2} satisfying one of the conditions; refer to Figure~\ref{fig:automaton} for a schematization of the case analysis. In the first step of the algorithm, we assign an arbitrary color to $v_1$, and hence $c(G_1)$ satisfies $Q_0$. For $i=1,\dots,k$ we color $f_i$ depending on the condition satisfied by $c(G_i)$. \smallskip \noindent{\textbf{Coloring }$\mathbf{c(G_i)}$ \textbf{satisfies} $\mathbf{Q_0}$}: Independently of the cardinality of $f_i$, we color its vertices with alternating colors so that $c_{i+1} \neq c_i$. In this way, the only possible colored edges are incident to $v_i$ and not to $v_{i+1}$. So, $c(G_{i+1})$ satisfies condition $Q_1$. 
\smallskip \noindent{\textbf{Coloring} $\mathbf{c(G_i)}$ \textbf{satisfies} $\mathbf{Q_1}$}: In this case we distinguish the following subcases, based on the cardinality of $f_i$. \begin{itemize} \item If $|f_i|=0$, we have that $i=k$ and hence $G_k=G$. It follows that $c(G_k)$ is a \kcoloring{2} of $G$. \item If $|f_i|=1$ (that is, $f_i$ contains only $v_{i+1}$; see Figure~\ref{fig:q1_1_post}), we set $c_{i+1}=c_i$. Since the only neighbor of $v_{i+1}$ in $G_{i+1}$ different from $v_i$ is $v_{i-1}$, whose color is $c_{i-1} \neq c_i$, and since $v_i$ has no neighbor with color $c_i$ other than $v_{i+1}$, by condition $Q_1$, coloring $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_2$. \begin{figure}[tb] \centering \subfloat[\label{fig:q1_1_post}]{\includegraphics[page=8]{figures}} \hfil \subfloat[\label{fig:q1_gt1_post}]{\includegraphics[page=9]{figures}} \hfil \subfloat[\label{fig:q2_odd_post}]{\includegraphics[page=10]{figures}} \hfil \subfloat[\label{fig:q2_even_zero}]{\includegraphics[page=11]{figures}} \hfil \subfloat[\label{fig:q2_even_one_post}]{\includegraphics[page=12]{figures}} \hfil \subfloat[\label{fig:q2_even_gt_one_post}]{\includegraphics[page=13]{figures}} \caption{Graph $G_{i+1}$ after coloring $f_i$ when $c(G_i)$ satisfies: $Q_1$ and \sref{fig:q1_1_post} $|f_i|=1$ or \sref{fig:q1_gt1_post} $|f_i|>1$; $Q_2$ and \sref{fig:q2_odd_post} $|f_i|=o$, or $|f_i|=e$ and \sref{fig:q2_even_zero} $|f_{i+1}|=0$, \sref{fig:q2_even_one_post} $|f_{i+1}|=1$, or \sref{fig:q2_even_gt_one_post} $|f_{i+1}|>1$. Shaded regions represent $G_i$. Bold edges connect vertices with the same color, while spine edges are dashed. } \end{figure} \item If $|f_i|>1$ (see Figure~\ref{fig:q1_gt1_post}), we color the vertices in $f_i$ with alternating colors so that $c_{i+1}\neq c_i$.
This implies that every colored edge of $G_{i+1}$ not belonging to $G_i$ is incident either to $v_i$, if its color is $c_i$, or to $v_{i-1}$, if its color is $c_{i-1}$; the latter case only happens if $|f_i|$ is odd. Thus, $v_i$ (resp. $v_{i-1}$) is the center of a star of color $c_i$ (resp. $c_{i-1}$) in $G_{i+1}$. Since $v_i$ has no neighbor with color $c_i$ in $G_i$, while $v_{i-1}$ is a center also in $G_i$, coloring $c(G_{i+1})$ is a \kcoloring{2}. Finally, since $v_{i+1}$ has no neighbors with color $c_{i+1} \neq c_i$, by construction, $c(G_{i+1})$ satisfies condition $Q_1$. \end{itemize} \noindent{\textbf{Coloring }$\mathbf{c(G_i)}$ \textbf{satisfies }$\mathbf{Q_2}$:} We again distinguish subcases based on $|f_i|$. \begin{itemize} \item If $|f_i|=0$, we have that $i=k$ and hence $c(G_k)$ is a \kcoloring{2} of $G=G_k$. \item If $|f_i|$ is odd, including the case $|f_i|=1$ (see Figure~\ref{fig:q2_odd_post}), we color the vertices of $f_i$ with alternating colors in such a way that $c_{i+1}\neq c_i$. By construction, $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_1$. \item If $|f_i|$ is even and different from $0$, instead, we have to consider the cardinality of $f_{i+1}$ in order to decide the coloring of $f_i$. We distinguish three subcases: \smallskip \begin{description} \item[$|f_{i+1}|=0$]: Note that in this case $i=k$ holds (see Figure~\ref{fig:q2_even_zero}). We color the vertices of $f_i$ with alternating colors so that $c_{i+1} = c_i$. Note that the unique neighbor of $v_{i-1}$ in $f_i$ has color different from $c_{i-1}$, since $|f_i|$ is even. Hence, all the new colored edges are incident to $v_i$, which implies that $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_2$. \item[$|f_{i+1}|=1$]: Note that $i<k$ and $f_{i+1}$ only contains $v_{i+2}$ (see Figure~\ref{fig:q2_even_one_post}). We color the vertices of $f_i$ with alternating colors so that $c_{i+1} = c_i$. 
Since (i) all the new colored edges are incident to $v_i$, (ii) $v_i$ and $v_{i-1}$ have no neighbor with their same color in $G_i$ (apart from each other), (iii) $c_{i+1} = c_i$, and (iv) $i<k$, we have that $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_5$. \item[$|f_{i+1}|>1$]: Note that $i<k$ (see Figure~\ref{fig:q2_even_gt_one_post}). Independently of whether $|f_{i+1}|$ is even or odd, we color the vertices of $f_i$ so that $c_{i+1} \neq c_i$, the unique neighbor of $v_{i+1}$ different from $v_i$ has also color $c_{i+1}$, and all the other vertices have alternating colors. Since each new colored edge is incident to either $v_i$ or $v_{i+1}$, since $c_{i+1} \neq c_i$, and since $i<k$, coloring $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_4$. \end{description} \end{itemize} \smallskip \begin{figure}[tb] \centering \subfloat[\label{fig:q3_one}]{\includegraphics[page=14]{figures}} \hfil \subfloat[\label{fig:q3_even}]{\includegraphics[page=15]{figures}} \hfil \subfloat[\label{fig:q4}]{\includegraphics[page=16]{figures}} \hfil \subfloat[\label{fig:q5}]{\includegraphics[page=17]{figures}} \caption{Graph $G_{i+1}$ after coloring $f_i$ when $c(G_i)$ satisfies: $Q_3$ and \sref{fig:q3_one} $|f_i|=1$, or \sref{fig:q3_even} $|f_i|=e$; $Q_4$ \sref{fig:q4}; or $Q_5$ \sref{fig:q5}. Shaded regions represent $G_i$. Bold edges connect vertices with the same color, while spine edges are dashed. } \end{figure} \noindent{\textbf{Coloring} $\mathbf{c(G_i)}$ \textbf{satisfies} $\mathbf{Q_3}$}: \begin{itemize} \item If $|f_i|=0$, we have that $i=k$ and hence $c(G_k)$ is a \kcoloring{2} of $G=G_k$. \item If $|f_i|=1$ (that is, $f_i$ contains only $v_{i+1}$; see Figure~\ref{fig:q3_one}), we set $c_{i+1}=c_i$. As in the analogous case in which $c(G_i)$ satisfies condition $Q_1$, we can prove that $c(G_{i+1})$ is a \kcoloring{2} which satisfies condition $Q_2$. 
\item If $|f_i|$ is even and different from $0$ (see Figure~\ref{fig:q3_even}), we color the vertices of $f_i$ with alternating colors in such a way that $c_{i+1}\neq c_i$. By construction, $c(G_{i+1})$ is a \kcoloring{2} which satisfies condition $Q_1$. \item If $|f_i|$ is odd and different from $1$, we again consider the cardinality of $f_{i+1}$ in order to decide the coloring of $f_i$. For the four possible classes of values of $|f_{i+1}|$, the coloring strategy and the condition satisfied by the resulting coloring $c(G_{i+1})$ are the same as for the analogous case in which $c(G_i)$ satisfies $Q_2$ and $|f_i|$ is even. \end{itemize} \noindent{\textbf{Coloring} $\mathbf{c(G_i)}$ \textbf{satisfies} $\mathbf{Q_4}$}: Note that $|f_i|>0$, given that $i<k$, and $|f_i|\neq 1$, by condition $Q_4$. Independently of whether $|f_i|$ is even or odd (see Figure~\ref{fig:q4}), we color the vertices in $f_i$ with alternating colors so that $c_{i+1} \neq c_i$. In this way, the only possible colored edges are incident to $v_{i-1}$ and to $v_i$, which are both centers of a star already in $G_i$, and not to $v_{i+1}$. Hence, $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_1$. \smallskip \noindent{\textbf{Coloring} $\mathbf{c(G_i)}$ \textbf{satisfies} $\mathbf{Q_5}$}: Note that $|f_i| = 1$, by condition $Q_5$ (that is, $f_i$ only contains $v_{i+1}$; see Figure~\ref{fig:q5}). We set $c_{i+1} \neq c_i$; clearly, $c(G_{i+1})$ is a \kcoloring{2} satisfying condition $Q_3$. \smallskip Observe that the running time of the algorithm is linear in the number of vertices of $G$. In fact, at each step $i = 1,\dots,k$, the condition $Q_j$ satisfied by $c(G_i)$ and the cardinalities of $f_i$ and $f_{i+1}$ are known (the cardinalities of all the fans can be precomputed in advance), and the coloring strategy to obtain $c(G_{i+1})$ and the condition satisfied by this coloring are uniquely determined by this information in constant time.
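The case analysis above is, in effect, a deterministic transition system over the fan-size classes $0,1,e,o$ of Figure~\ref{fig:automaton}. The following Python sketch is our own transcription of the transitions (`DONE` encodes $|f_i|=0$, i.e., $i=k$ and the coloring of $G$ is complete); it records only the successor condition, not the colors themselves.

```python
def step(state, fi, fnext=None):
    """One step of the coloring automaton: given the condition Q_j
    satisfied by c(G_i), the size class of f_i ('0', '1', 'e' for
    even nonzero, 'o' for odd > 1) and, where a look-ahead is needed,
    the size class of f_{i+1}, return the condition of c(G_{i+1})."""
    if fi == "0":
        return "DONE"                       # i = k: c(G_k) colors all of G
    if state == "Q0":
        return "Q1"                         # alternate colors, c_{i+1} != c_i
    if state == "Q1":
        return "Q2" if fi == "1" else "Q1"
    if state == "Q2":
        if fi in ("1", "o"):                # |f_i| odd
            return "Q1"
        return {"0": "Q2", "1": "Q5"}.get(fnext, "Q4")  # even fan, look ahead
    if state == "Q3":
        if fi == "1":
            return "Q2"
        if fi == "e":
            return "Q1"
        return {"0": "Q2", "1": "Q5"}.get(fnext, "Q4")  # odd fan > 1, look ahead
    if state == "Q4":
        return "Q1"                         # |f_i| > 1 by condition Q_4
    if state == "Q5":
        return "Q3"                         # |f_i| = 1 by condition Q_5
```

For instance, starting from $Q_0$, the fan-size classes $e$, then $1$, then $e$ with look-ahead $1$ drive the automaton through $Q_1$, $Q_2$ and $Q_5$.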
\end{proof} \section{NP-completeness for (Planar) Graphs of Bounded Degree} \label{sec:np-completeness} In this section, we study the computational complexity of the \kcoloring{2} and \kcoloring{3} problems for (planar) graphs of bounded degree. \begin{theorem} It is NP-complete to determine whether a graph admits a \kcoloring{2}, even in the case where its maximum degree is no more than $5$. \label{thm:2colorDeg5NpHard} \end{theorem} \begin{proof} The problem clearly belongs to NP; a non-deterministic algorithm only needs to guess a color for each vertex of the graph and can then trivially check in linear time whether the graphs induced by each color-set are forests of stars. To prove that the problem is NP-hard, we employ a reduction from the well-known Not-All-Equal $3$-SAT problem, or \textsc{naesat}\xspace for short~\cite[p.187]{Pap07}. An instance of \textsc{naesat}\xspace consists of a $3$-CNF formula $\phi$ with variables $x_1,\ldots,x_n$ and clauses $C_1,\ldots,C_m$. The task is to find a truth assignment of $\phi$ so that no clause has all three literals \emph{equal} in truth value (that is, in each clause at least one literal is true and at least one is false). We show how to construct a graph $G_\phi$ of maximum vertex-degree $5$ admitting a \kcoloring{2} if and only if $\phi$ is satisfiable. Intuitively, graph $G_\phi$ reflecting formula $\phi$ consists of a set of subgraphs serving as variable gadgets that are connected to simple $3$-cycles that serve as clause gadgets in an appropriate way; see Figure~\ref{fig:reduction1} for an example. Consider the graph of Figure~\ref{fig:k24}, which contains two adjacent vertices, denoted by $u_1$ and $u_2$, and four vertices, denoted by $v_1$, $v_2$, $v_3$ and $v_4$, that form a path, so that each of $u_1$ and $u_2$ is connected to each of $v_1$, $v_2$, $v_3$ and $v_4$. We claim that in any \kcoloring{2} of this graph $u_1$ and $u_2$ have different colors. Assume to the contrary that $u_1$ and $u_2$ have the same color, say white.
Since $u_1$ and $u_2$ are adjacent, none of $v_1$, $v_2$, $v_3$ and $v_4$ is white, as otherwise a white triangle would arise. So, $v_1, \ldots, v_4$ form a monochromatic component in gray which is of diameter~$3$; a contradiction. Hence, $u_1$ and $u_2$ have different colors, say gray and white, respectively. In addition, the colors of $v_1$, $v_2$, $v_3$ and $v_4$ alternate along the path $v_1 \rightarrow v_2 \rightarrow v_3 \rightarrow v_4$, as otherwise there would exist two consecutive vertices $v_i$ and $v_{i+1}$, with $i=1,2,3$, of the same color, which would create a monochromatic triangle with either $u_1$ or $u_2$. \begin{figure}[tb] \begin{minipage}[b]{.5\textwidth} \centering \begin{minipage}[b]{\textwidth} \centering \subfloat[\label{fig:k24}{variable-gadget}]{\includegraphics[page=18]{figures}} \end{minipage} \begin{minipage}[b]{\textwidth} \centering \subfloat[\label{fig:chain}{a chain of length $3$}]{\includegraphics[width=\textwidth,page=19]{figures}} \end{minipage} \end{minipage} \hfil \begin{minipage}[b]{.4\textwidth} \centering \subfloat[\label{fig:reduction1}{reduction; clause-gadgets are gray}]{\includegraphics[width=\textwidth,page=20]{figures}} \end{minipage} %
\caption{Illustration of: \sref{fig:k24}~a graph with $6$ vertices, \sref{fig:chain}~a chain of length $3$, \sref{fig:reduction1}~the reduction from \textsc{naesat}\xspace to \kcoloring{2}: $\phi = (x_1 \lor x_2 \lor x_3) \land (\neg x_1 \lor \neg x_2 \lor \neg x_3)$. The solution corresponds to the assignment $x_1=true$ and $x_2=x_3=false$. Sets $O_{x_1}$, $E_{x_2}$ and $E_{x_3}$ ($E_{x_1}$, $O_{x_2}$ and $O_{x_3}$, resp.) are colored gray (white, resp.).} \label{fig:2colorDeg5NpHard} \end{figure} For $k \geq 1$, we form a \emph{chain of length $k$} that contains $k$ copies $G_1, G_2, \ldots, G_k$ of the graph of Figure~\ref{fig:k24}, connected to each other as follows (see Figure~\ref{fig:chain}). For $i=1,2,\ldots,k$, let $u_1^i$, $u_2^i$, $v_1^i$, $v_2^i$, $v_3^i$ and $v_4^i$ be the vertices of $G_i$.
Then, for $i=1,2,\ldots,k-1$ we introduce between $G_i$ and $G_{i+1}$ an edge connecting vertices $v_4^i$ and $v_1^{i+1}$ (dotted in Figure~\ref{fig:chain}). This edge ensures that $v_4^i$ and $v_1^{i+1}$ are not of the same color, since otherwise we would have a monochromatic path on four vertices. Hence, the colors of the vertices of the so-called \emph{spine-path} $v_1^1 \rightarrow v_2^1 \rightarrow v_3^1 \rightarrow v_4^1 \rightarrow \ldots \rightarrow v_1^k \rightarrow v_2^k \rightarrow v_3^k \rightarrow v_4^k$ alternate along this path. In other words, if the odd-positioned vertices of the spine-path are white, then the even-positioned ones will be gray, and vice versa. In addition, all vertices of the spine-path have degree~$4$ (except for $v_1^1$ and $v_4^k$, which have degree~$3$). For each variable $x_i$ of $\phi$, graph $G_\phi$ contains a so-called \emph{variable-chain} $\mathcal{C}_{x_i}$ of length $\lceil \frac{n_i-2}{2} \rceil$, where $n_i$ is the number of occurrences of $x_i$ in $\phi$, $1 \leq i \leq n$; see Figure~\ref{fig:reduction1}. Let $O[\mathcal{C}_{x_i}]$ and $E[\mathcal{C}_{x_i}]$ be the sets of odd- and even-positioned vertices along the spine-path of $\mathcal{C}_{x_i}$, respectively. For each clause $C_i=(\lambda_j \lor \lambda_k \lor \lambda_\ell)$ of~$\phi$, $1\leq i \leq m$, where $\lambda_j \in \{x_j,\neg x_j\}$, $\lambda_k \in \{x_k,\neg x_k\}$, $\lambda_\ell \in \{x_\ell,\neg x_\ell\}$ and $j, k, \ell \in \{1,\ldots,n\}$, graph $G_\phi$ contains a $3$-cycle of corresponding \emph{clause-vertices} which, of course, cannot all have the same color (\emph{clause-gadget}; highlighted in gray in Figure~\ref{fig:reduction1}). If $\lambda_j$ is positive (negative), then we connect the clause-vertex corresponding to $\lambda_j$ in $G_\phi$ to a vertex of degree less than $5$ that belongs to set $E[\mathcal{C}_{x_j}]$ ($O[\mathcal{C}_{x_j}]$) of chain $\mathcal{C}_{x_j}$.
Similarly, we create connections for literals $\lambda_k$ and $\lambda_\ell$; see the edges leaving the triplets for clauses $C_1$ and $C_2$ in Figure~\ref{fig:reduction1}. The length of $\mathcal{C}_{x_i}$, $1 \leq i \leq n$, guarantees that all connections can be accomplished so that no vertex of $\mathcal{C}_{x_i}$ has degree larger than $5$. Thus, $G_\phi$ is of maximum degree~$5$. Since the size of $G_\phi$ is linear in the size of $\phi$, the construction can be done in $O(n + m)$ time. We show that $G_\phi$ is \kcolorable{2} if and only if $\phi$ is satisfiable. Assume first that $\phi$ is satisfiable. If $x_i$ is true (false), then we color $E[\mathcal{C}_{x_i}]$ white (gray) and $O[\mathcal{C}_{x_i}]$ gray (white). Hence, $E[\mathcal{C}_{x_i}]$ and $O[\mathcal{C}_{x_i}]$ have different colors, as desired. Further, if $x_i$ is true (false), then we color gray (white) all the clause-vertices of $G_\phi$ that correspond to positive literals of $x_i$ in $\phi$ and we color white (gray) those corresponding to negative literals. Thus, a clause-vertex of $G_\phi$ cannot have the same color as its neighbor at the variable-gadget. Since in the truth assignment of $\phi$ no clause has all three literals equal in truth value, no three clause-vertices belonging to the same clause have the same color. Suppose now that $G_\phi$ is \kcolorable{2}. By construction, each of $E[\mathcal{C}_{x_i}]$ and $O[\mathcal{C}_{x_i}]$ is either white or gray, $i=1,\ldots,n$. If $E[\mathcal{C}_{x_i}]$ is white, then we set $x_i=true$; otherwise, we set $x_i=false$. Assume, to the contrary, that there is a clause of $\phi$ whose literals are all true or all false. By construction, the corresponding clause-vertices of $G_\phi$, which form a $3$-cycle in $G_\phi$, have the same color, which is a contradiction. \end{proof} We now turn our attention to planar graphs.
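Before doing so, we note that the not-all-equal condition at the heart of the reduction above is easy to state operationally. A short Python sketch (the encoding of literals as pairs is our own), using the formula of Figure~\ref{fig:reduction1}:

```python
def nae_satisfied(clauses, assignment):
    """clauses: list of literal triples, a literal being a pair
    (variable, is_positive). Check the not-all-equal condition:
    every clause has at least one true and one false literal."""
    for clause in clauses:
        values = [assignment[var] == positive for var, positive in clause]
        if all(values) or not any(values):
            return False  # all literals equal in this clause
    return True

# phi = (x1 or x2 or x3) and (not x1 or not x2 or not x3)
phi = [[("x1", True), ("x2", True), ("x3", True)],
       [("x1", False), ("x2", False), ("x3", False)]]
print(nae_satisfied(phi, {"x1": True, "x2": False, "x3": False}))  # True
print(nae_satisfied(phi, {"x1": True, "x2": True, "x3": True}))    # False
```

The first assignment matches the one depicted in Figure~\ref{fig:reduction1}; the second fails because the first clause has all literals true.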
Our proof follows the same construction as the one of Theorem~\ref{thm:2colorDeg5NpHard}, but to ensure planarity we replace the crossings with appropriate crossing-gadgets. Also, recall that the construction in Theorem~\ref{thm:2colorDeg5NpHard} highly depends on the presence of triangles (refer, e.g., to the clause gadgets). In the following theorem, we prove that the \kcoloring{2} problem remains NP-complete, even in the case of triangle-free planar graphs. Note that in order to avoid triangular faces, we use slightly more complicated variable- and clause-gadgets, whose degrees are higher but still bounded by a constant. \newcommand{\colorTriangFreeNpHard}{ It is NP-complete to determine whether a triangle-free planar graph admits a \kcoloring{2}.} \begin{theorem} \colorTriangFreeNpHard \label{thm:2colorTriangFreeNpHard} \end{theorem} \begin{proof} Membership in NP can be shown as in the proof of Theorem~\ref{thm:2colorDeg5NpHard}. To prove that the problem is NP-hard, we again employ a reduction from \textsc{naesat}\xspace. To avoid crossings we will construct a triangle-free planar graph $G_\phi$ (with different variable- and clause-gadgets) similar to the previous construction, so that $G_\phi$ admits a \kcoloring{2} if and only if $\phi$ is satisfiable. The clause-gadget is illustrated in Figure~\ref{fig:clause-gadget}. It consists of a $2 \times 3$ grid (highlighted in gray) and one vertex of degree~$2$ (denoted by~$u$ in Figure~\ref{fig:clause-gadget}) connected to the top-left and bottom-right vertices of the grid. We claim that the \emph{clause-vertices} of this gadget (denoted by $u$, $u_{11}$ and $u_{23}$ in Figure~\ref{fig:clause-gadget}) cannot all have the same color. For a proof by contradiction, assume that $u$, $u_{11}$ and $u_{23}$ are all gray. Since $u_{12}$, $u_{13}$, $u_{21}$ and $u_{22}$ are adjacent either to $u_{11}$ or to $u_{23}$, none of them is gray.
Hence, $u_{21} \rightarrow u_{22} \rightarrow u_{12} \rightarrow u_{13}$ is a monochromatic path of length three; a contradiction to the diameter of the coloring. \begin{figure}[tb] \centering \subfloat[\label{fig:clause-gadget}]{\includegraphics[page=24]{figures}} \hfil \subfloat[\label{transmitter-gadget}]{\includegraphics[page=25]{figures}} \hfil \subfloat[\label{fig:variable-gadget}]{\includegraphics[page=26]{figures}} \hfil \subfloat[\label{fig:chain-gadget}]{\includegraphics[page=27]{figures}} \hfil \subfloat[\label{fig:crossing-gadget}]{\includegraphics[page=28]{figures}} \caption{ (a)~clause-gadget, (b)~transmitter-gadget, (c)~variable-gadget, (d)~a chain of length~11, (e)~crossing-gadget.} \label{fig:3colorTrianFreeNpHard} \end{figure} Figure~\ref{transmitter-gadget} illustrates the so-called \emph{transmitter-gadget}, which consists of three copies of the $2 \times 3$ grid (highlighted in gray), each of which gives rise to a clause-gadget with the degree-$2$ vertices $u_1$, $u_2$ and $u_3$. It also has two additional vertices (denoted by $s$ and $t$ in Figure~\ref{transmitter-gadget}), each of which forms a clause-gadget with each of the three copies of the rectangular grid. We claim that in any \kcoloring{2} of the transmitter-gadget $s$ and $t$ have the same color. Otherwise, a simple observation shows that there is a monochromatic path of length three; a contradiction to the diameter of the coloring. A schematization of the transmitter-gadget is given in Figure~\ref{transmitter-gadget}. The variable-gadget is illustrated in Figure~\ref{fig:variable-gadget}. We claim that in any \kcoloring{2} of this gadget, vertices $x$ and $y$ must be of different colors. Assume to the contrary that $x$ and $y$ are both white. Then, vertices $x_1$, $x_2$ and $x_3$ must also be white, due to the transmitter-gadgets involved. Hence, $x \rightarrow x_1 \rightarrow y \rightarrow x_3$ is a white-colored path on four vertices; a contradiction to the diameter of the coloring.
A schematization of the variable-gadget is given in Figure~\ref{fig:variable-gadget}. The corresponding one for the chain is given in Figure~\ref{fig:chain-gadget}. Since we proved that the clause-vertices of the clause-gadgets cannot all have the same color and that the variable-gadget has two specific vertices of different colors, the rest of the construction is identical to the one of the previous theorem. Note, however, that $G_\phi$ is not necessarily planar, as required by this theorem. Nevertheless, we can arrange the variable-gadgets and the clause-gadgets so that the only edges that cross are the ones joining the variable-gadgets with the clause-gadgets. Then, we replace every crossing by the crossing-gadget illustrated in Figure~\ref{fig:crossing-gadget}. This particular gadget has the following two properties: \begin{inparaenum}[(i)] \item its topmost and bottommost vertices must be of the same color (due to the vertical arrangement of the variable-gadgets), which implies that \item the leftmost and rightmost vertices must be of the same color as well. \end{inparaenum} Hence, we can replace all potential crossings with the crossing-gadget. Since the number of crossings is quadratic in the number of edges, the size of the construction is still polynomial. Everything else in the construction and in the argument remains the same. \end{proof} Note that Theorems~\ref{thm:2colorDeg5NpHard} and \ref{thm:2colorTriangFreeNpHard} have been independently proven by Dorbec et al.~\cite{Dorbec14}. In the following theorem we prove that the problem remains NP-complete even if one more color is allowed, that is, for the \kcoloring{3} problem, when the input graph is either of maximum degree~$9$ or planar of maximum degree~$16$. Recall that all planar graphs are $4$-colorable. \begin{theorem} It is NP-complete to determine whether a graph $G$ admits a \kcoloring{3}, even in the case where the maximum degree of $G$ is no more than $9$ or in the case where $G$ is a planar graph of maximum degree $16$.
\label{thm:3colorDeg9NpHard} \end{theorem} \begin{proof} Membership in NP can be proved similarly to the proof of Theorem~\ref{thm:2colorDeg5NpHard}. To prove that the problem is NP-hard, we employ a reduction from the well-known \textsc{3-coloring}\xspace problem, which is NP-complete even for planar graphs of maximum vertex-degree $4$~\cite{d-uccp-80}. So, let $G$ be an instance of the \textsc{3-coloring}\xspace problem. To prove the first part of the theorem, we will construct a graph $H$ of maximum vertex-degree $9$ admitting a \kcoloring{3} if and only if $G$ is 3-colorable\xspace. Central to our construction is the complete graph on six vertices $K_6$, which is \kcolorable{3}; see Figure~\ref{fig:k6}. We claim that in any \kcoloring{3} of $K_6$ each vertex is adjacent to exactly one vertex of the same color. For a proof by contradiction, assume that there is a \kcoloring{3} of $K_6$ in which three vertices, say $u$, $v$ and $w$, have the same color. From the completeness of $K_6$, it follows that $u$, $v$ and $w$ form a monochromatic triangle, which is a contradiction. \begin{figure}[tb] \centering \begin{minipage}[b]{.32\textwidth} \centering \subfloat[\label{fig:k6}{}]{\includegraphics[width=.8\textwidth,page=21]{figures}} \end{minipage} \hfil \begin{minipage}[b]{.32\textwidth} \centering \subfloat[\label{fig:3colorattachment}{}]{\includegraphics[width=.8\textwidth,page=22]{figures}} \end{minipage} \hfil \begin{minipage}[b]{.32\textwidth} \centering \subfloat[\label{fig:counterexample}{}]{\includegraphics[width=.8\textwidth,page=23]{figures}} \end{minipage} \caption{ (a)~The complete graph on six vertices $K_6$. (b)~The attachment-graph for the planar case.
(c)~A planar graph of max-degree~$4$ that is not \kcolorable{2}.} \label{fig:3colorNpHard} \end{figure} Graph $H$ is obtained from $G$ by attaching a copy of $K_6$ at each vertex $u$ of $G$, and by identifying $u$ with a vertex of $K_6$, which we call \emph{attachment-vertex}. Hence, $H$ has maximum degree~$9$. As the size of $H$ is linear in the size of $G$, it can be constructed in linear time. If $G$ admits a $3$-coloring, then $H$ admits a \kcoloring{3} in which each attachment-vertex in $H$ has the same color as the corresponding vertex of $G$, and the colors of the other vertices are determined based on the color of the attachment-vertices. To prove that a \kcoloring{3} of $H$ determines a \textsc{3-coloring}\xspace of $G$, it is enough to prove that any two adjacent attachment-vertices $v$ and $w$ in $H$ have different colors. This holds since each of $v$ and $w$ is adjacent to a vertex of its own color in its associated copy of $K_6$; hence, if $v$ and $w$ had the same color, these four vertices would form a monochromatic path on four vertices. For the second part of the theorem, we attach at each vertex of $G$ the planar graph of Figure~\ref{fig:3colorattachment} using as attachment its topmost vertex, which is of degree~$12$ (instead of $K_6$, which is not planar). Hence, the constructed graph $H$ is planar and has maximum degree~$16$, as desired. Furthermore, it is not difficult to prove that in any \kcoloring{3} of the graph of Figure~\ref{fig:3colorattachment} its attachment-vertex is always adjacent to at least one other vertex of the same color, that is, it has exactly the same property as any vertex of $K_6$. Hence, the rest of the proof is analogous to the one of the first part of the theorem. \end{proof} \section{Conclusions} \label{sec:conclusions} In this work, we presented algorithmic and complexity results for the \kcoloring{2} and the \kcoloring{3} problems. We proved that all outerpaths are \kcolorable{2} and we gave a linear-time algorithm to determine whether an outerplanar graph is \kcolorable{2}.
For the classes of graphs of bounded degree and for planar triangle-free graphs we presented several NP-completeness results. However, our work raises several open questions. \begin{itemize} \item In Theorem~\ref{thm:2colorDeg5NpHard} we proved that it is NP-complete to determine whether a graph of maximum degree~$5$ is \kcolorable{2}. So, a reasonable question to ask is whether one can determine in polynomial time whether a graph of maximum degree~$4$ is \kcolorable{2}. The question is relevant even for planar graphs of maximum degree~$4$. Note that not all planar graphs of maximum degree~$4$ are \kcolorable{2} (Figure~\ref{fig:counterexample} shows such a counterexample, found by extensive case analysis), while all graphs of maximum degree~$3$ are \konecolorable{2}~\cite{Lo66}. \item It would be interesting to identify other classes of graphs, besides outerpaths, that are always \kcolorable{2}. \item In Theorem~\ref{thm:3colorDeg9NpHard} we proved that it is NP-complete to determine whether a graph of maximum degree~$9$ is \kcolorable{3}. The complexity of the corresponding question remains open for the classes of graphs of maximum degree~$6$, $7$ and $8$. Recall that graphs of maximum degree~$4$ or $5$ are always \kcolorable{3}. \item One possible way to expand the class of graphs that admit defective colorings is to allow larger values for the diameter of the graphs induced by the same color class. \end{itemize} \paragraph{Acknowledgement:} We thank the participants of the special session GNV of IISA'15 for inspiring this work. We also thank Pascal Ochem, who brought~\cite{Dorbec14} to our attention, where some of our NP-completeness results have been independently proven. \bibliographystyle{abbrv}
\newcommand{\newsection}[1]{\setcounter{equation}{0} \section{#1}} \newcommand{\appsection}{\setcounter{section}{1} \setcounter{equation}{0} \section*{Appendix \Alph{section}}} \setlength{\parindent}{0.22in} \setlength{\textheight}{9.2in} \setlength{\textwidth}{15.5cm} \setlength{\topmargin}{-.3in} \setlength{\evensidemargin}{-1cm} \setlength{\oddsidemargin}{-.2cm} \renewcommand{\baselinestretch}{1.5} \newsavebox{\PSLASH} \sbox{\PSLASH}{$p$\hspace{-1.8mm}/} \newcommand{\pslash}{\usebox{\PSLASH}} \begin{document} \title{Boundary conformal field theories and loop models } \author{M. A. Rajabpour$^{a,b}$\footnote{e-mail: rajabpour@to.infn.it} \\ \\ $^{a}$Dip. di Fisica Teorica and INFN, Universit\`{a} di Torino, Via P. Giuria 1, 10125 Torino, \\Italy\\$^{b}$Institute for Studies in Theoretical Physics and Mathematics, Tehran 19395-5531, Iran} \maketitle \begin{abstract} We propose a systematic method to extract conformal loop models for rational conformal field theories (CFTs). The method is based on defining an ADE model for the boundary primary operators, using the fusion matrices of these operators as adjacency matrices. These loop models respect the conformal boundary conditions. We discuss the loop models that can be extracted by this method for minimal CFTs, and we then give dilute $O(n)$ loop models on the square lattice as examples of these loop models. We also give some proposals for WZW $SU(2)$ models. \vspace{5mm}% \newline \textit{Keywords}: Critical Loop, Boundary CFT, ADE Models \end{abstract} \section{Introduction}\ The study of statistical models related to loop models is interesting from both the physical and the mathematical points of view. Most of the statistical models studied in physics, such as the Ising model, the $q$-state Potts model and also complicated vertex models, can be represented in terms of loops \cite{Nienhis1}.
The loop representation of a spin system is very easy to understand: loops correspond to domain walls separating regions of different magnetization. The study of critical loop models is interesting from many points of view: they are good candidates for the ground states of topological quantum systems \cite{freedman}, and they are also good candidates for the Schramm-Loewner evolution (SLE), a method discovered by Schramm \cite{schramm} to classify conformally invariant curves connecting two distinct boundary points in a simply connected domain. The different applications of conformal loop models stimulate a systematic study of these models by CFT. Recently, in ~\cite{Rajabpour} we proposed a method to extract the loop models corresponding to a conformal field theory (CFT). The method was based on defining an RSOS model for every primary operator, using the fusion matrix of the primary operator as an adjacency matrix, and then extracting the loop model corresponding to the domain walls of the RSOS model. The weight of the loop model is equal to the quantum dimension of the corresponding operator. In this paper we want to follow the same method consistently with the conformal boundary operators; since SLE is naturally formulated in boundary CFT, we think that using the fusion matrices of the boundary operators as adjacency matrices is more consistent with the nature of SLE. Recently a very nice and strong project was initiated by Jacobsen and Saleur \cite{Jacobsen and Saleur1}, and followed by Dubail, Jacobsen and Saleur \cite{Jacobsen and Saleur2}, to classify all the possible conformal boundary loop models. It is based on classifying the possible boundary loop models compatible with the boundary conformal field theories. This classification is in close relation with the earlier work by Cardy on formulating the modular invariant partition function of the $O(n)$ model on the annulus \cite{Cardy1}.
The results that we obtain by our method, apart from its simplicity, are all compatible with the results in \cite{Jacobsen and Saleur1,Jacobsen and Saleur2,Cardy1}. The paper is organized as follows: In the next section we introduce the necessary ingredients to find the boundary operators and also the fusion matrices corresponding to them. In the third section we briefly review the method proposed in ~\cite{Rajabpour} and generalize it to graphs with largest eigenvalue bigger than two. The central claim of this section is as follows: the loop model extracted with this method is connected with the properties of the statistical loop model in the same universality class as the corresponding CFT. In the fourth section we follow explicitly some examples, in particular the Ising model, the tri-critical Ising model, the three-state Potts model and the tri-critical three-state Potts model. We then give the possible loop models, extractable with this method, of minimal CFTs and also the lattice models corresponding to these loop models. We close this section by giving some proposals for WZW $SU(2)$ models. The last section contains our conclusions, with a brief description of the work in progress motivated by these results. \section{Boundary conformal field theory}\ \setcounter{equation}{0} To define a loop model for a generic minimal CFT consistent with the conformal boundary, we first need to summarize the most important facts about boundary CFT. The most important ingredient for classifying the conformal boundary operators is the modular invariant partition function of the CFT. The classification of modular invariant partition functions of $SU(2)$ minimal models is well known and can be related to a pair of simply laced Dynkin diagrams $(A,G)$ ~\cite{CapItzZub87}.
The complete classification based on ADE diagrams is \begin{eqnarray}\label{ADE} (A,G)=\cases{ (A_{h-1},A_{g-1})&\cr (A_{h-1},D_{(g+2)/2}),\quad g\ \mbox{even}&\cr (A_{h-1},E_6), \quad\quad\quad\hspace{0.2cm} g=12 &\cr (A_{h-1},E_7), \quad\quad\quad\hspace{0.2cm} g=18&\cr (A_{h-1},E_8), \hspace{0.2cm}\quad\quad\quad g=30,} \end{eqnarray} where $g$ and $h$ are the Coxeter numbers of $A$ and $G$, with $h,g\geq 2$. The above pair of Dynkin diagrams describes a bulk modular invariant partition function with a certain set of primary operators and with the following central charge \begin{eqnarray}\label{central charge} c=1-6\frac{(h-g)^2}{h g}. \end{eqnarray} Each of the unitary minimal models $M(A_{h-1},G)$ with $g-h=\pm1$ can be realized as the continuum scaling limit of an integrable two-dimensional lattice model at criticality, with heights living on the nodes of the graph $G$. In particular, the critical series with $g-h=1$ is associated with the A-D-E lattice models~\cite{ABF84} and the tri-critical series with $g-h=-1$ is associated with the dilute lattice models ~\cite{Roc92,WN}. For theories with a diagonal torus partition function it is known that there is a conformal boundary condition associated with each operator in the theory ~\cite{Cardy89}. The fusion rules of these boundary operators are just given by the bulk fusion algebra. It was shown in a series of papers that for the $SU(2)$ minimal models one can propose a complete set of conformal boundary operators $i=(r,a)\in (A,G)$, where $r$ and $a$ are nodes on the Dynkin diagrams of $A$ and $G$ respectively, with the identification $(r,a)=(h-r,\gamma(a))$, where $\gamma$ is an automorphism acting on the nodes of the graph $G$. This automorphism is the identity except for $A$, $E_{6}$ and $D_{odd}$, for which it is the $Z_{2}$ symmetry of the Dynkin diagram; the symmetries of the Dynkin diagrams play an important role in the forthcoming discussion.
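As a quick sanity check on Eq.~(\ref{central charge}), the central charges of the models treated later in the paper follow directly from the Coxeter numbers of the two diagrams. A minimal Python sketch (the function name is ours; the $(A,G)$ labels in the comments are taken from the examples in the text):

```python
from fractions import Fraction

def central_charge(h, g):
    """c = 1 - 6 (h - g)^2 / (h g) for the pair (A_{h-1}, G)."""
    return 1 - Fraction(6 * (h - g) ** 2, h * g)

# Coxeter numbers: h(A_n) = n + 1, h(D_n) = 2n - 2.
print(central_charge(3, 4))  # Ising (A_2, A_3): 1/2
print(central_charge(4, 5))  # tri-critical Ising (A_3, A_4): 7/10
print(central_charge(5, 6))  # 3-state Potts (A_4, D_4): 4/5
print(central_charge(6, 7))  # tri-critical 3-state Potts (D_4, A_6): 6/7
```

Note that the formula is symmetric in $h$ and $g$, so the ordering of the pair does not affect the central charge.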
Following \cite{BPZ} we denote the corresponding operators by $\hat{\phi}_{i}$ and the independent boundary states by $|(r,a)\rangle$, which are called Cardy states. Cardy states can be written in terms of Ishibashi states, i.e. $|j\rangle\rangle$, as follows: $|(r,a)\rangle=\sum_{j} c_{(r,a)}^{j}|j\rangle\rangle$, where the sum is over all Ishibashi states. We are interested in the fusion rules of these boundary operators. To give a formula for the fusion rules of these operators we need to define some quantities. Let $\Psi$ be the eigenvectors of the adjacency matrix corresponding to the graph $G$; then the graph fusion matrices $\hat{N}_{a}$ with $a\in G$ can be defined as follows \begin{eqnarray}\label{graph fusion matrix} (\hat{N}_a)_b{}^c= \sum_{m\in \mbox{\scriptsize Exp}(G)} {\Psi_{am} \Psi_{bm} \Psi^*_{cm}\over \Psi_{1m}},\qquad a,b,c\in G, \end{eqnarray} where $Exp(G)$ denotes the set of exponents of $G$; see Table~\ref{tab1}. Let us also denote the graph fusion matrices for $A_{h-1}$ by $N_{r}$; then, following \cite{BPZ}, the fusion rules for the boundary operators are \begin{eqnarray}\label{fusion rules} \hat{\phi}_{i_{1}}\hat{\phi}_{i_{2}}=\sum_{i_3\in (A,G)}(\mathcal{N}_{i_{1}})_{i_{2}}^{i_3}\hat{\phi}_{i_{3}}, \end{eqnarray} where $(\mathcal{N}_{i_{1}})_{i_{2}}^{i_3}$ has the following relation with the graph fusion matrices of $A$ and $G$ \begin{eqnarray}\label{fusion rules2} (\mathcal{N}_{(r_{1},a_{1})})_{(r_{2},a_{2})}^{(r_{3},a_{3})}=N_{r_{1}r_{2}}^{r_{3}}\hat{N}_{a_{1}a_{2}}^{a_{3}}. \end{eqnarray} For more details about the connection of the boundary operators to their bulk counterparts see ~\cite{BPZ, BpPZ}.
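For a diagonal case, Eq.~(\ref{graph fusion matrix}) can be checked numerically. The sketch below (our node ordering and variable names) evaluates it for $G=A_{3}$, using the standard eigenvectors $\Psi_{am}=\sqrt{2/h}\,\sin(\pi a m/h)$ of the $A_{h-1}$ adjacency matrix; the result reproduces integer fusion matrices, with $\hat N_1$ the identity and $\hat N_2$ the adjacency matrix itself:

```python
import numpy as np

h = 4  # Coxeter number of A_3; nodes and exponents run over 1, ..., h-1
a = np.arange(1, h)
# Orthonormal eigenvectors of the A_3 adjacency matrix:
# Psi_{am} = sqrt(2/h) sin(pi a m / h)
Psi = np.sqrt(2.0 / h) * np.sin(np.pi * np.outer(a, a) / h)

# (N_a)_b^c = sum_m Psi_{am} Psi_{bm} Psi*_{cm} / Psi_{1m}
N = np.einsum('am,bm,cm,m->abc', Psi, Psi, Psi.conj(), 1.0 / Psi[0])

A3 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
print(np.allclose(N[0], np.eye(3)))             # N_1 is the identity
print(np.allclose(N[1], A3))                    # N_2 equals the adjacency matrix of A_3
print(np.allclose(N[2], np.fliplr(np.eye(3))))  # N_3 is the Z_2 diagram automorphism
```

All three checks print `True`; in particular the node $a=2$ acts by the adjacency matrix, which is the property used repeatedly below when fusion graphs are read off as adjacency graphs.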
\newline \begin{table}[htp] \begin{center} \begin{tabular}{|c|c|c|}\hline Dynkin Diagram & Coxeter Number ($h$) & Coxeter Exponents ($m$) \\ \hline $A_{n}$ &$n+1$ &$1,2,\ldots,n$ \\ \hline $D_{n}$ &$2(n-1)$ & $1,3,\ldots,2n-3,n-1$ \\ \hline $E_{6}$ &$12$ & $1,4,5,7,8,11$ \\ \hline $E_{7}$&$18$ & $1,5,7,9,11,13,17$ \\ \hline $E_{8}$ &$30$ & $1,7,11,13,17,19,23,29$\\ \hline \end{tabular} \end{center} \caption{\label{tab1} The Coxeter numbers and the Coxeter exponents of the Dynkin diagrams. } \end{table} To calculate the fusion matrices of the boundary operators we also need to define a conjugation operator $C(a)=a^*$; it is the identity except for the $D_{4n}$ graphs, where the eigenvectors $\Psi_{am}$ are complex and conjugation corresponds to the $Z_2$ Dynkin diagram automorphism. It then follows that $\hat{N}_{a^*b}^c=\hat{N}_{ca}^b$. The operator $C(a)$ acts on the right to raise and lower indices in the fusion matrices, $\hat{N}^a=\hat{N}_a C$, so it is an important ingredient for getting the right fusion matrices for the boundary operators, in particular for the $D_{4n}$ graphs. We will give some examples in Section~4; in particular, we use the above method to get the fusion matrices of the boundary operators of the Ising model, the tri-critical Ising model, the 3-state Potts model and the tri-critical 3-state Potts model. \section{Loop Models for Boundary operators}\ \setcounter{equation}{0} In this section we propose a method to extract some possible loop models for CFTs; the method is the same as the one introduced recently in ~\cite{Rajabpour}. In that reference we showed that, using the fusion matrix as an adjacency matrix, it is possible to associate an $O(n)$ loop model with every primary operator. The method is briefly as follows: the graph of a primary operator $\hat{\phi}_{i}$ has $g$ vertices, where $g$ is the number of primary operators in the theory, and edges connecting pairs of vertices $(j,k)$ whenever $\mathcal{N}_{ij}^{k}=1$.
Following \cite{Cardy ADE}, one can define a height model on the triangular lattice by imposing that the height $h_{j}$ at site $j$ can take the values $0,1,\dots,g-1$. Then one constrains the heights at neighboring sites according to the incidence matrix associated with a given primary field $\hat{\phi}_{i}$: only neighboring heights $h_{j}$ and $h_{k}$ with $(\mathcal{N}_{i})_{j}^{k}=1$ are admissible. For a consistent definition of loop models on a triangular lattice, at least two of the heights at the corners of an elementary triangular plaquette should be equal; the weights for the elementary plaquette are then defined as follows. If the heights of the plaquette are $(c,b,b)$ with $c \ne b$, then the weight is $x(\frac{\hat{S}_{l}^{b}}{\hat{S}_{l}^{c}})^{1/6}$, where $\hat{S}$ satisfies $\sum_{b}(\mathcal{N}_{a})_{b}^{c}\frac{\hat{S}_{l}^{b}}{\hat{S}_{l}^{0}}=\frac{\hat{S}_{l}^{a}}{\hat{S}_{l}^{0}}\frac{\hat{S}_{l}^{c}}{\hat{S}_{l}^{0}}$. This means that the $b$-th element of the eigenvector of $\mathcal{N}_{a}$ with eigenvalue $\frac{\hat{S}_{l}^{a}}{\hat{S}_{l}^{0}}$ is given by $\frac{\hat{S}_{l}^{b}}{\hat{S}_{l}^{0}}$. If the heights are all equal then the weight is $1$, except for those with $\mathcal{N}_{ab}^{b}\neq 0$, which have weight $1$ or $x$ depending on the particular model considered\footnote{For more details, especially about identical neighboring heights, see \cite{Rajabpour}.}. The next step is to mark triangles with unequal heights $(c,b,b)$ by drawing a curved segment on the dual honeycomb lattice \cite{Cardy ADE}, linking through the center the midpoints of the two edges with different heights ($b$ and $c$) at their extremes (see Fig.~1). Summing over the admissible values of the heights consistent with a given loop configuration we find \begin{eqnarray}\label{loop height} \sum_{b}(\mathcal{N}_{a})_{b}^{c}\frac{\hat{S}_{l}^{b}}{\hat{S}_{l}^{c}}=\frac{\hat{S}_{l}^{a}}{\hat{S}_{l}^{0}}, \end{eqnarray} where the sum is just over $b$.
Most of the time we take $l=0$, which selects the largest eigenvalue of $\mathcal{N}_{a}$ and guarantees positive real weights in our height models; however, we will also point to other cases. \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.6]{rajab31} \caption{A triangular plaquette with $c\neq b$ and the corresponding curve segment on the dual honeycomb lattice.} \label{Fig1} \end{center} \end{figure} The weight of the loops is given by the largest eigenvalue of the fusion matrix, $n_{a}$, and the partition function of the model is as follows \begin{eqnarray}\label{O(n)} Z=\sum x^{l}n_{a}^{N}, \end{eqnarray} where $l$ is the number of bonds in the loop configuration and $N$ is the number of loops. Using this method we can associate with every boundary conformal operator an $O(n)$ loop model. Since the $O(n)$ model possesses a dilute critical point for $n\leq 2$ with $x_{c}=\frac{1}{\sqrt{2+\sqrt{2-n}}}$, see ~\cite{Nienhis2}, our loop models will correspondingly have a critical point only for the fields with $n_{a}$ smaller than $2$. The $O(n)$ model has another critical regime, the so-called dense phase, for $x\in(x_{c},\infty)$, which corresponds to a different universality class. The mapping to the $O(n)$ model helps us to find the connection with SLE: from Coulomb gas arguments we know that, in the dilute regime, the loop weight has the following relation with the drift in the SLE equation \begin{eqnarray}\label{S minimal} n_{a}=-2 \cos(\frac{4\pi}{\kappa}). \end{eqnarray} For the dense phase the above equation is still true if we work in the region $4\leq \kappa \leq 8$. Using the above equation we can find the properties of the loop model corresponding to a boundary conformal operator. An achievement of this method is that it respects Cardy's equation \cite{Cardy89}: \textit{fields in the same sector have the same loop representation}.
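Equation (\ref{S minimal}) can be inverted explicitly on each branch. A small sketch (the function name is ours) recovers the $\kappa$ values used throughout the examples in this paper:

```python
import math

def kappa_from_weight(n):
    """Invert n = -2 cos(4*pi/kappa) for 0 <= n <= 2, returning the
    dilute-branch (kappa <= 4) and dense-branch (4 <= kappa <= 8) solutions."""
    theta = math.acos(-n / 2.0)                       # theta in [pi/2, pi]
    dense = 4.0 * math.pi / theta                     # 4 <= kappa <= 8
    dilute = 4.0 * math.pi / (2.0 * math.pi - theta)  # 8/3 <= kappa <= 4
    return dilute, dense

print(kappa_from_weight(1.0))                        # n = 1: kappa = 3 and 6
print(kappa_from_weight(math.sqrt(2)))               # n = sqrt(2): kappa = 16/5 and 16/3
print(kappa_from_weight(2 * math.cos(math.pi / 5)))  # kappa = 10/3 and 5
```

For instance, $n=1$ gives $\kappa=3$ on the dilute branch (Ising spin domain walls) and $\kappa=6$ on the dense branch, while $n=\sqrt2$ gives the pair $16/5$ and $16/3$.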
Before generalizing the definition to more general graphs, we should stress that although we started with a well defined minimal CFT, the loop model that we extracted is not necessarily minimal. The point is that the extracted loop model respects only some aspects of the corresponding conformal field theory. This is like saying that although the domain walls of the Ising model at the critical point are the same as the loops of the critical $O(n=1)$ model, the Ising conformal field theory does not explain all the aspects of the critical curves. From this point of view, the loop model that one can get by this method from a rational CFT is not perfectly equal to the corresponding CFT. One can generalize the above idea to decomposable fusion graphs by the method explained in \cite{Fendley}. Since the fusion graphs of some operators in minimal models are equivalent to the tensor product of two adjacency diagrams, one can use this method to extract new loop models that can also have configurations with crossing loop segments. The general strategy is based on extracting critical loop models with $n\leq 2$ for graphs with largest eigenvalue bigger than~$2$. Some graphs admit a simple decomposition and can be written as a tensor product, while others need to be mapped to simply decomposable graphs by going to the ground-state adjacency graph \cite{Fendley}. Here we just comment on decomposable graphs $\mathcal{N}=\mathcal{N}_{1}\otimes \mathcal{N}_{2}$, where $\mathcal{N}_{1}$ and $\mathcal{N}_{2}$ are simple ADE diagrams. In these cases we can define a two-flavor loop model living on the honeycomb lattice: one flavor is related to the loop model of $\mathcal{N}_{1}$ with weight $n_{1}$ and the other comes from the graph $\mathcal{N}_{2}$ with weight $n_{2}$.
Fendley showed \cite{Fendley} that in this case it is also possible to define consistently an interacting loop model on the square lattice with partition function $Z=\sum n_{1}^{N_{1}}n_{2}^{N_{2}}b^{C}$, where $N_{1,2}$ are the numbers of each kind of loop and $C$ is the number of plaquettes with a resolved potential crossing at their center. The critical values of $b$ were calculated in \cite{Fendley3}, but the critical properties of the loops are still unsolved. This is obviously not the only method to define loop models for non-simple graphs; another method is based on the multi-flavor loop model of \cite{WN}. In this loop model a curve of flavor $i$ separating two neighboring sites does not necessarily separate two sites with different heights; for the definition of the RSOS model in this case and its relation to the loop model see \cite{WN}. In the next section we summarize some simple examples, including the most familiar minimal conformal models such as the Ising, tri-critical Ising, 3-state Potts and tri-critical 3-state Potts models. The main point is to take the fusion graphs as adjacency graphs in a consistent way and to extract some loop models. These loop models are not equivalent to the corresponding conformal field theory, but they still carry some aspects of the underlying field theory in a consistent way; in particular, the critical properties of these loop models are in close connection with the corresponding conformal field theory. In this paper some distinctions are crucial. We have minimal conformal field theories with well defined fusion matrices and modular invariant partition functions, one example being the Ising conformal field theory. There are statistical models, such as spin models and RSOS models, which at the critical point can be described partially by a minimal CFT; thus the Ising CFT is different from the statistical Ising model. We also prefer to distinguish between, for example, the dilute ADE models and the dilute $O(n)$ loop model.
They can be mapped to each other and have the same phase transitions, but since the fundamental objects on one side are local and on the other non-local, this distinction is useful. A lot of work has been done on connecting these two kinds of models, minimal conformal field theories and their statistical-model counterparts, using integrability methods, and our argument hardly has anything new to say from this point of view. Finally, we define another statistical model by using the fusion matrices of the primary operators of a conformal field theory, which most of the time is in the same universality class as the statistical-model counterpart of the corresponding CFT. These height models also have loop representations. This similarity can be useful to get an idea about the loop properties of the statistical models with well-known minimal CFTs. \section{Some Examples}\ In this section we apply the method introduced in Section~3 to the minimal conformal field theories with a well defined fusion structure and also to the WZW $SU_{k}(2)$ models. We will also point out the consistency of these loop models with Cardy's boundary states. This consistency is a hint that it may be possible to extend the results to the level of the boundary partition function \cite{Cardy1}. For notational convenience, in this section of the paper we drop the hat on the boundary operators. \newline \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.8]{rajab2}\\ \caption{ Graphs of the fusion matrices of the boundary primary operators in the Ising model; from left to right, the fusion graphs of $1$, $\epsilon$ and $\sigma$. The graph of the operator $\epsilon$ is $A_{2}$ and the graph of $\sigma$ is $A_{3}$.} \label{Fig2} \end{center} \end{figure} \textbf{Ising model}: The simplest example is the Ising model $(A_{2},A_{3})$. Since the model has a diagonal modular invariant partition function, the fusion matrices of the boundary operators are the same as in the bulk case.
The fusion graphs are as in Fig.~2, so the boundary states are as follows \begin{eqnarray}\label{cardyising} |\mathbf{1}\rangle&=&\frac{1}{\sqrt{2}}|\mathbf{1}\rangle\rangle+\frac{1}{\sqrt{2}}|\epsilon\rangle\rangle+\frac{1}{\sqrt[4]{2}}|\sigma\rangle\rangle;\nonumber\\ |\epsilon \rangle&=&\frac{1}{\sqrt{2}}|\mathbf{1}\rangle\rangle+\frac{1}{\sqrt{2}}|\epsilon\rangle\rangle-\frac{1}{\sqrt[4]{2}}|\sigma\rangle\rangle;\nonumber\\ |\sigma \rangle&=&|\mathbf{1}\rangle\rangle-|\epsilon\rangle\rangle. \end{eqnarray} These equations reflect the $Z_{2}$ symmetry corresponding to changing the sign of the spin; this is also evident in the loop representation, $n_{\epsilon}=n_{\mathbf{1}}=1$. Both operators give $\kappa=3$; these loops are the domain walls between different spins. It is worth mentioning that this symmetry comes from the natural $Z_{2}$ symmetry of the Dynkin diagram. The operator $\sigma$, with $n_{\sigma}=\sqrt{2}$, corresponds to the free boundary condition. The loops in the dense phase have $\kappa=\frac{16}{3}$ and describe the domain walls of Fortuin-Kasteleyn (FK) clusters. In the above calculation we considered only the largest eigenvalue of the fusion graphs; however, it is also possible to consider other eigenvalues as the weight of the loops, the cost being complex local Boltzmann weights for the corresponding height model. Since loop models are generically non-local theories, accepting complex Boltzmann weights amounts to accepting non-unitary theories. With this in mind, one can accept the possibility of loop models with $n=\pm\sqrt{2},0$ for the loop model corresponding to the $A_{3}$ diagram of the spin operator. \newline \textbf{Tri-critical Ising model}: The next simple example is the tri-critical Ising model $(A_{3},A_{4})$, which has a diagonal modular invariant partition function. The boundary CFT of this model was discussed in \cite{chim}.
There are six boundary operators $\mathbf{1},\epsilon,\epsilon',\epsilon'',\sigma$ and $\sigma'$, with the fusion graphs of Fig.~3 and the following Cardy states \begin{eqnarray}\label{cardytricritical} |\mathbf{1}\rangle &=& C[|\mathbf{1}\rangle\rangle + \eta|\epsilon\rangle\rangle + \eta|\epsilon'\rangle\rangle + |\epsilon''\rangle\rangle + \root4\of{2}|\sigma'\rangle\rangle + \root4\of{2}|\sigma\rangle\rangle];\nonumber\\ |\epsilon \rangle &=&C[\eta^2|\mathbf{1}\rangle\rangle - \eta^{-1}|\epsilon\rangle\rangle - \eta^{-1}|\epsilon'\rangle\rangle + \eta^2|\epsilon''\rangle\rangle - \root4\of{2}\eta^2|\sigma'\rangle\rangle + \root4\of{2}\eta^{-1}|\sigma\rangle\rangle];\nonumber\\ |\epsilon'\rangle &=& C[\eta^2|\mathbf{1}\rangle\rangle - \eta^{-1}|\epsilon\rangle\rangle - \eta^{-1}|\epsilon'\rangle\rangle + \eta^2|\epsilon''\rangle\rangle + \root4\of{2}\eta^2|\sigma'\rangle\rangle - \root4\of{2}\eta^{-1}|\sigma\rangle\rangle];\nonumber\\ |\epsilon''\rangle &=& C[|\mathbf{1}\rangle\rangle + \eta|\epsilon\rangle\rangle + \eta|\epsilon'\rangle\rangle + |\epsilon''\rangle\rangle - \root4\of{2}|\sigma'\rangle\rangle - \root4\of{2}|\sigma\rangle\rangle];\nonumber\\ |\sigma'\rangle &=& \sqrt2C[|\mathbf{1}\rangle\rangle - \eta|\epsilon\rangle\rangle + \eta|\epsilon'\rangle\rangle - |\epsilon''\rangle\rangle] ;\nonumber\\ |\sigma\rangle &=& \sqrt2C[\eta^2|\mathbf{1}\rangle\rangle + \eta^{-1}|\epsilon\rangle\rangle - \eta^{-1}|\epsilon'\rangle\rangle - \eta^2|\epsilon''\rangle\rangle], \end{eqnarray} where $C = \sqrt{{\sin{\pi\over5}}\over{\sqrt5}}$ and $\eta = \sqrt{2\cos{\frac{\pi}{5}}}$. The boundary states corresponding to the boundary operators $\mathbf{1}$ and $\epsilon''$ can be transformed into each other by just changing the sign of the spin operators, i.e. by the $Z_{2}$ symmetry.
They also have the same loop weight $n=1$, which comes from the largest eigenvalue of the fusion matrix\footnote{To get the loop weights we consider one simply connected part of the fusion graph as an adjacency graph; the other parts of the graph always have equal largest eigenvalues. One can see that these different parts are foldings or orbifold duals of each other, see \cite{ginsparg}.}. The boundary states $\epsilon$ and $\epsilon'$ are also connected by just changing the sign of the spin states. The weight of the loops is $n=2\cos(\frac{\pi}{5})$, with $\kappa=5$ in the dense phase. This loop model corresponds to the boundaries of geometric clusters at the geometric critical point of the tri-critical Ising model, or Blume-Capel model ~\cite{blote nienhuis}. The operator $\sigma'$ describes a loop model with $n=\sqrt{2}$, corresponding to $\kappa=\frac{16}{5}$ in the dilute phase, which is related to the boundaries of spin clusters and also of vacancy clusters in the Blume-Capel model ~\cite{blote nienhuis,DGB}. An interesting point for tri-critical models is the equality of the critical exponents for spin clusters and FK clusters ~\cite{blote nienhuis,DGB}. \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.6]{rajab111}\\ \caption{Graphs of the fusion matrices of the boundary primary operators in the tri-critical Ising model: in the upper row, from left to right, the fusion graphs of $1$, $\epsilon$ and $\epsilon'$; in the lower row, from left to right, the fusion graphs of $\epsilon''$, $\sigma$ and $\sigma'$. The fusion graph of $\epsilon$ is $A_{4}$ plus $T_{2}$, which are connected to each other by folding duality.
The fusion graph of $\sigma$ is $T_{2}\otimes A_{3}$.} \label{Fig3} \end{center} \end{figure} The operator $\sigma$ is related to the degenerate boundary condition, and the corresponding loop model with $n=2\sqrt{2}\cos(\frac{\pi}{5})$ is non-critical; however, it is easy to see that the fusion matrix of this operator decomposes into simple matrices, $N_{\sigma}=N_{T_{2}}\otimes N_{A_{3}}$, so one can define for this graph a two-flavor loop model with weights $n_{1}=\sqrt{2}$ and $n_{2}=2\cos(\frac{\pi}{5})$. One can conclude from the above discussion that the operators with the same loop representations are connected to each other by folding and orbifold duality, and it is also possible to see these symmetries at the level of the boundary states. As in the previous example, one can also consider other possible loop weights coming from the other eigenvalues of the fusion matrix. The eigenvalues of the fusion matrix of the operator $\epsilon$ are $n=\pm2\cos(\frac{\pi}{5}),\pm\frac{1}{2\cos(\frac{\pi}{5})}$ and the eigenvalues of the fusion matrix of the operator $\sigma'$ are $\pm\sqrt{2},0$. The eigenvalues of the other operators are a subset of the above eigenvalues. Interestingly, apart from the negative eigenvalues, the above weights can be fitted with the boundary loop weights in \cite{Jacobsen and Saleur1,Jacobsen and Saleur2}. \newline \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.6]{3Potts}\\ \caption{Graphs of the fusion matrices of the primary operators in the three-state Potts model. In the upper row, from left to right, the fusion graphs of $1$ and $\epsilon$; in the middle row, from left to right, the fusion graphs of $\phi_{1,2}$ and $\phi_{2,2}$; and in the lowest row the fusion graphs of $\psi$ and $\sigma$. The fusion graphs of $\psi^{\dag}$ and $\sigma^{\dag}$ can be derived from the fusion graphs of $\psi$ and $\sigma$ by the exchanges $\psi\leftrightarrow \psi^{\dag}$ and $\sigma\leftrightarrow \sigma^{\dag}$.
The fusion graph of $\epsilon$ is $A_{4}$ plus two $T_{2}$ graphs, which are connected to each other by folding duality. The fusion graph of $\phi_{1,2}$ is two $D_{4}$ graphs and the fusion graph of $\phi_{2,2}$ is $T_{2}\otimes D_{4}$.} \label{Fig4} \end{center} \end{figure} \textbf{Three-state Potts model}: The next example is the first non-diagonal case, the 3-state Potts model $(A_{4},D_{4})$, with eight boundary operators $\mathbf{1},\psi,\psi^{\dag},\epsilon,\sigma,\sigma^{\dag},\phi_{1,2}$ and $\phi_{2,2}$; see ~\cite{Cardy89, BPZ,saleur}. The fusion graphs are given in Fig.~4. Following Cardy's argument, one can show that the operators $\mathbf{1},\psi,\psi^{\dag}$ correspond to fixed boundary conditions, and the corresponding boundary states can be transformed into each other by the $Z_{3}$ symmetry, i.e. the symmetry of the Dynkin diagram $D_{4}$. They also have the same quantum dimensions, $n_{\mathbf{1}}=n_{\psi}=n_{\psi^{\dag}}=1$. The operators $\epsilon,\sigma,\sigma^{\dag}$ describe the fluctuating boundary conditions ~\cite{Cardy3} and all have the same kinds of fusion graphs, with $n_{\epsilon}=n_{\sigma}=n_{\sigma^{\dag}}=2\cos(\frac{\pi}{5})$. In the dilute phase one can consider $\kappa=\frac{10}{3}$ as the property of the curve. In the lattice 3-state Potts model these loops are the same as the domain walls of spin clusters. The fusion graph of the operator $\phi_{1,2}$ is two $D_{4}$ graphs. This operator describes the free boundary condition and has a loop model with $n=\sqrt{3}$, which is equal to the loop model of the domain walls of FK clusters of the 3-state Potts model. The operator $\phi_{2,2}$ describes a degenerate boundary condition, and the corresponding loop model with $n_{\phi_{2,2}}=\sqrt{\frac{9+3\sqrt{5}}{2}}$ is non-critical; however, a decomposition is possible. In this case one can write $N_{\phi_{2,2}}=N_{T_{2}}\otimes N_{D_{4}}$, so the corresponding two-flavor loop model has weights $n_{1}=\sqrt{3}$ and $n_{2}=2\cos(\frac{\pi}{5})$.
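The decomposition $N_{\phi_{2,2}}=N_{T_{2}}\otimes N_{D_{4}}$ can be checked at the level of the largest eigenvalues. The sketch below (our node ordering; $T_2$ is the tadpole obtained from $A_2$ by adding a loop at one node) confirms that $\sqrt{(9+3\sqrt5)/2}=\sqrt3\cdot2\cos(\pi/5)$, i.e. the non-critical weight factorizes into the two flavor weights:

```python
import math
import numpy as np

# Adjacency matrices in our node ordering: T_2 is A_2 with a loop added
# at one node ("tadpole"); D_4 is a central node joined to three outer nodes.
T2 = np.array([[0., 1.],
               [1., 1.]])
D4 = np.array([[0., 1., 1., 1.],
               [1., 0., 0., 0.],
               [1., 0., 0., 0.],
               [1., 0., 0., 0.]])

# Largest eigenvalue of the tensor-product graph = product of the
# largest eigenvalues of the factors.
lam = max(np.linalg.eigvalsh(np.kron(T2, D4)))
print(lam)  # sqrt(3) * 2 cos(pi/5) = sqrt((9 + 3 sqrt(5)) / 2) ~ 2.8025
```

The same factorization argument applies to $N_{\sigma}=N_{T_{2}}\otimes N_{A_{3}}$ in the tri-critical Ising model, where the largest eigenvalue is $\sqrt{2}\cdot2\cos(\pi/5)$.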
The fusion matrix of $\epsilon$, as was discussed in the case of the tri-critical Ising model, has the eigenvalues $n=\pm2\cos(\frac{\pi}{5}),\pm\frac{1}{2\cos(\frac{\pi}{5})}$, and the eigenvalues of $N_{D_{4}}$ are $\pm\sqrt{3},0$. These loop weights can be fitted with the boundary loop weights in \cite{Jacobsen and Saleur1,Jacobsen and Saleur2}. \newline \newline \textbf{Tri-critical three-state Potts model}: The next interesting example is the tri-critical 3-state Potts model $(D_{4}, A_{6})$; it has a non-diagonal modular invariant partition function and is not part of Pasquier's A-D-E models. The boundary states of this model have not been investigated systematically so far. The boundary operators of this model are $\phi_{i}$ with $i=(r,a)$, $r=1,2,3$ and $a=1,...,4$. The fusion graphs for the boundary operators in this case are given in the Appendix. The boundary states corresponding to the boundary operators $\phi_{1,1},\phi_{1,3},\phi_{1,4}$ can be transformed into each other by the $Z_{3}$ symmetry of the spin operators and should correspond to fixed boundary conditions with $n=1$. The operators $\phi_{2,1},\phi_{2,3},\phi_{2,4}$ also have the same property, with the same fusion graphs and $n=2\cos(\frac{\pi}{7})$. In the lattice tri-critical 3-state Potts model they are the domain walls of geometric clusters at the geometric critical point ~\cite{blote nienhuis}, with $\kappa=4\frac{7}{6}$. The operators $\phi_{3,1},\phi_{3,3},\phi_{3,4}$ can again be transformed into each other by the $Z_{3}$ symmetry, but they have loop weights bigger than two, $n\approx2.247$. The operators $\phi_{2,2}$ and $\phi_{3,2}$ also have loop weights bigger than two and are related to degenerate boundary conditions. Finally, the graph of $\phi_{1,2}$ is equal to three $D_{4}$ graphs with $n=\sqrt{3}$. In the dilute phase this weight describes the domain walls of spin clusters in the lattice tri-critical 3-state Potts model, with $\kappa=4\frac{6}{7}$.
The fusion graph of $\phi_{2,1}$ is the sum of two graphs $A_{6}$ and $T_{3}$. The fusion matrix has the eigenvalues $n=2\cos(\frac{\pi j}{7})$ with $j=2,3,4,5,6$. The eigenvalues of the fusion matrix of $\phi_{1,2}$ are $n=\pm\sqrt{3},0$. Interestingly again apart from the negative eigenvalues the above weights can be fitted with the boundary loop weights in \cite{Jacobsen and Saleur1,Jacobsen and Saleur2}. The fusion graph of $\phi_{2,2}$ is decomposable as $T_{3}\otimes D_{4}$ and so it is possible to define two crossing loop models in this case. The fusion graphs of $\phi_{3,1}$ is not decomposable to simple graphs so it is not possible to extract critical loops also for $\phi_{3,3}$ and $\phi_{3,4}$ which are in the same sector. Although the loops, extracted by our method, corresponding to the above operators are not critical but by considering the fusion graph of the ground state of the above adjacency graph it is possible to extract critical loops. we will not discuss this method here, for more detail one can see \cite{Fendley}. The fusion graph of $\phi_{3,2}$ is decomposable but not to the simple graphs, i.e. $N_{\phi_{3,2}}=N_{T_{3}^{2}}\otimes N_{D_{4}}$. Another possibility to extract critical loops for $\phi_{3,1}$ is by considering other eigenvalues of the fusion matrix of this operator. The eigenvalues of $N_{\phi_{3,1}}$ are $\pm\frac{\sin(\frac{3\pi}{7})}{\sin(\frac{\pi}{7})}$, $\pm\frac{\sin(\frac{2\pi}{7})}{\sin(\frac{\pi}{7})}$ and $\pm\frac{\sin(\frac{\pi}{7})}{\sin(\frac{2\pi}{7})}$, the last two cases have critical loops. \newline \newline \textbf{Minimal models}: Finding loop models by the above method is completely general and applicable for more general cases. Take a pair $(A,G)$ from the equation (\ref{ADE}) then it is possible to correspond at least two different kinds of loop models for these minimal models with the following weights \begin{eqnarray}\label{loop minimal} n=2\cos(\frac{\pi}{g}),\hspace{2cm}n=2\cos(\frac{\pi}{h}). 
\end{eqnarray} They are the largest eigenvalues of the fusion matrices of $\phi_{1,2}$ and $\phi_{2,1}$. One can also consider the following SLE drifts for these loop models \begin{eqnarray}\label{SLE} \kappa=4\frac{g}{g+1},\hspace{2cm}\kappa=4\frac{h}{h-1},\hspace{1cm}g-h&=&1,\nonumber\\ \kappa=4\frac{g}{g-1},\hspace{2cm}\kappa=4\frac{h}{h+1},\hspace{1cm}g-h&=&-1. \end{eqnarray} The other eigenvalues of $G$ can be written as \begin{eqnarray}\label{exponents} n=2\cos(\frac{\pi m}{g}), \end{eqnarray} where $m$ is one of the Coxeter exponents of the graph $G$. They are listed in Table~1. It is possible to consider loop models for the above eigenvalues as before; however, these are still not all the possible loop models because, as we already showed, in some cases one can define two-flavor loop models for decomposable fusion graphs. It is also possible, as in the case of the fusion graph of $\phi_{3,1}$ in the tri-critical ~3-state Potts model, to have matrices with relevant non-largest eigenvalues. We believe that they are relevant because the same loop weights appear in the classification of Jacobsen and Saleur \cite{Jacobsen and Saleur1}. Although so far we have given the more familiar examples as possible candidates for our loop models, it is also possible to extract systematic examples for the above proposals by using Pasquier's ADE models and the Dilute ADE models \cite{Roc92,WN}. Pasquier's ADE models give a lattice realization of the $(A,G)$ series with $g-h=1$, and the description is briefly as follows: define an RSOS model by using the graph $G$; at the critical point this height model can be described by the minimal CFT; then map this height model to a loop model \cite{Cardy ADE} at the critical point with $n=2 \cos(\frac{\pi}{g})$, which is the same as the loop model that we proposed in (\ref{loop minimal}).
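Numerically, a weight $n=2\cos(\frac{\pi}{p})$ comes with the two drifts $\kappa=4\frac{p}{p+1}$ (dilute) and $\kappa=4\frac{p}{p-1}$ (dense), in the pattern of equation (\ref{SLE}). The sketch below (ours) reproduces two values quoted earlier in the text, $\kappa=\frac{10}{3}$ for the $n=2\cos(\frac{\pi}{5})$ loops of the Potts model and $\kappa=4\frac{6}{7}$ for the dilute $n=\sqrt{3}$ walls:

```python
import math

# Loop weight n = 2 cos(pi/p) together with the two SLE drifts:
# kappa = 4p/(p+1) in the dilute phase and kappa = 4p/(p-1) in the dense phase.
def loop_data(p):
    return 2*math.cos(math.pi/p), 4*p/(p + 1), 4*p/(p - 1)

# p = 5: the epsilon-type loops of the 3-state Potts model; dilute kappa = 10/3.
n5, k5_dilute, k5_dense = loop_data(5)

# p = 6: the n = sqrt(3) domain walls; dilute kappa = 24/7 = 4*(6/7).
n6, k6_dilute, k6_dense = loop_data(6)
print(n5, k5_dilute, n6, k6_dilute, k6_dense)
```

The identification of $p$ with a given boundary operator is model dependent; here we only check the arithmetic of the two phases for the two weights.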
Of course the method proposed in this article and in \cite{Rajabpour} is highly influenced by Pasquier's ADE models, but it has something more to say by connecting the loop properties to the fusion properties of the primary operators. \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.5]{loop2}\\ \caption{The Boltzmann weights of the different vertices in the $O(n)$ model on the square lattice.} \label{Fig1} \end{center} \end{figure} To get the dilute loop models and the loop models corresponding to the tri-critical models we need to use the Dilute ADE models. These models have rich phase diagrams with four branches: branches ~1 and ~2 have central charges $c=1-\frac{6}{g(g\pm1)}$ and branches ~3 and ~4 have $c=\frac{3}{2}-\frac{6}{g(g\pm1)}$. One can also map these height models to $O(n)$ loop models with non-intersecting bonds on the square lattice, with the partition function \begin{eqnarray}\label{partition function of generalized loop model} Z=\sum u^{N_{u}}v^{N_{v}}w^{N_{w}}n^{N}, \end{eqnarray} where the weights of the different plaquettes are given in Fig.~5 and $N_{u},N_{v}$ and $N_{w}$ are the numbers of the different plaquettes \cite{BN}. This generalized $O(n)$ loop model, apart from the critical properties at $u=w=\frac{1}{2}$ and $v=0$, has four other branches coinciding with the four branches of the Dilute ADE models \cite{warnaar}. The weights are given by \begin{eqnarray}\label{weights} n&=&-2\cos(2\theta),\nonumber\\ w&=&\frac{1}{2-[1-2\sin(\theta/2)][1+2\sin(\theta/2)]^{2}},\nonumber\\ u&=&\pm 4w \sin(\theta/2)\cos( \pi /4-\theta /4),\\ v&=&\pm w[1+2\sin(\theta/2)],\nonumber \end{eqnarray} where $\frac{\pi}{2}\leq\theta\leq\pi$, $0\leq\theta\leq\frac{\pi}{2}$, $-\frac{\pi}{2}\leq\theta\leq0$ and $-\pi\leq\theta\leq-\frac{\pi}{2}$ are the intervals corresponding to branches ~1, ~2, ~3 and ~4 respectively. They coincide with the different branches of the Dilute ADE models. It is interesting to investigate the connection of the above loop model to SLE.
There are different methods to do that; here we use the magnetic operator to find the SLE drift. It was shown in \cite{BN} by numerical calculation that the magnetic exponent of branches ~1 and ~2 is identified with $2h_{\frac{m+1}{2},\frac{m+1}{2}}$ where \begin{eqnarray}\label{Kac formula} h_{r,s}=\frac{((m+1)r-ms)^{2}-1}{4m(m+1)}, \end{eqnarray} and $m$ is related to the central charge of the theory by $c=1-\frac{6}{m(m+1)}$. Its connection to the loop variables comes from the relation $\frac{2\theta}{\pi}+\frac{\pi}{2\theta}-2=\frac{1}{m(m+1)}$ derived from the Coulomb gas method \cite{BN}. The connection of the magnetic exponent to the SLE drift is as follows \cite{Sheffield} \begin{eqnarray}\label{magnetic exponentand kappa} 2h_{\frac{m+1}{2},\frac{m+1}{2}}=\frac{(8-\kappa)(3\kappa-8)}{32\kappa}. \end{eqnarray} Using the above equation, the SLE drift on branches ~1 and ~2 of the loop model (\ref{partition function of generalized loop model}) can be derived as \begin{eqnarray}\label{SLE drift} \kappa&=&\frac{8\theta}{\pi}. \end{eqnarray} This result is also consistent with our expectation from the second level null vector of minimal models \cite{bernard}, and it is consistent with the recent direct investigation using holomorphic variables \cite{cardy4}. Returning to the height model representation, one can summarize the following results: branch ~2 of the ADE models corresponds to the dilute loops with $n=2 \cos(\frac{\pi}{h})$ and branch ~1 is the dense phase of the tri-critical models with $n=2 \cos(\frac{\pi}{h})$.
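These relations can be checked pointwise; the following sketch (ours) verifies them at one branch-1 example, the tri-critical Ising model with $m=4$ and loop weight $n=\sqrt{2}$, where $\theta=\frac{5\pi}{8}$ and hence $\kappa=5$:

```python
import math

m = 4
theta = 5*math.pi/8          # branch-1 root of sqrt(2) = -2 cos(2*theta)
x = 2*theta/math.pi

# Coulomb gas relation: 2*theta/pi + pi/(2*theta) - 2 = 1/(m(m+1))
coulomb_ok = abs(x + 1/x - 2 - 1/(m*(m + 1))) < 1e-12

kappa = 8*theta/math.pi      # the SLE drift; here kappa = 5
r = s = (m + 1)/2
h = (((m + 1)*r - m*s)**2 - 1) / (4*m*(m + 1))   # Kac formula
rhs = (8 - kappa)*(3*kappa - 8)/(32*kappa)       # magnetic-exponent relation
print(coulomb_ok, kappa, 2*h, rhs)
```

Both sides of the magnetic-exponent relation come out as $2h_{5/2,5/2}=\frac{21}{160}$, the expected value for $c=\frac{7}{10}$.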
The results for some of the simple cases are as follows: \begin{eqnarray}\label{ll} \mbox{branch 2:}\quad A_2&=&\mbox{critical percolation}\hspace{1.5cm}c=0 \hspace{1.5cm} n=1;\nonumber\\ \mbox{branch 1:}\quad A_2&=&\mbox{critical Ising}\hspace{2.6cm}c=1/2\hspace{1.1cm} n=1;\nonumber\\ \mbox{branch 2:}\quad A_3&=&\mbox{critical Ising}\hspace{2.6cm}c=1/2\hspace{1.1cm} n=\sqrt{2};\nonumber\\ \mbox{branch 1:}\quad A_3&=&\mbox{tri-critical Ising}\hspace{2.1cm}c=7/10\hspace{.9cm} n=\sqrt{2};\nonumber\\ \mbox{branch 2:}\quad A_4&=&\mbox{tri-critical Ising}\hspace{2.1cm}c=7/10\hspace{.92cm} n=2\cos(\frac{\pi}{5});\nonumber\\ \mbox{branch 2:}\quad D_4&=&\mbox{critical 3-state Potts}\hspace{1.2cm}c=4/5\hspace{1.1cm} n=\sqrt{3};\nonumber\\ \mbox{branch 1:}\quad D_4&=&\mbox{tri-critical 3-state Potts}\hspace{0.8cm}c=6/7\hspace{1.1cm} n=\sqrt{3}.\nonumber \end{eqnarray} Using the above method it is easy to find the lattice realization of most of the proposed loop models, and the results are interestingly consistent. Following the same method it is possible to extract the loop models corresponding to the minimal CFTs; however, the loop model for the non-diagonal cases with $g-h=-1$ is not extractable with this method, because we are not able to find the dense phase of the loop models in these cases. It seems that the dense lattice height model has not been proposed for this case. To conclude this subsection: we proposed some loop representations for the minimal CFTs by using the fusion of boundary operators. Then, since the ADE models give a lattice statistical model representation of the minimal CFTs, we used these models to extract physical loop models corresponding to the ADE models. The fractal properties of these lattice loop models are the same as those of the loop models that we proposed by using the fusion of primary operators. \newline \newline \textbf{$SU_{k}(2)$ Models}: It is possible to follow the same kind of calculation beyond the unitary minimal models.
For example, for the WZW $SU_{k}(2)$ models the classification of modular invariant partition functions is based on the A-D-E-T graphs with $g=k+2$. The same method as for the minimal models is applicable here and one can find boundary operators $\hat{\phi}_{j}$ with $1\leq j\leq k+1$. The loop models have the weights $d_{j}=\frac{\sin(\frac{\pi j}{g})}{\sin(\frac{\pi}{g})}$. Only $j=\frac{1}{2}$ has a critical loop representation, with the loop weight \begin{eqnarray}\label{loop weight} n=2\cos(\frac{\pi}{k+2}), \end{eqnarray} with $\kappa=4\frac{k+2}{k+3}$ and $\kappa=4\frac{k+2}{k+1}$ for the dilute and the dense phase respectively. The other loop models are not critical, except for $k=4$ with $n=2$. The fusion graphs of the operators with $j\neq \frac{1}{2}$ are not decomposable into simple graphs; however, the non-largest eigenvalues can still be relevant. For example, take $k=5$ with $j=3/2$: the fusion graph is similar to one part of the $\phi_{3,1}$ fusion graph of the tri-critical ~3-state Potts model, the right one in Fig.~6. The eigenvalues are $\pm\frac{\sin(\frac{3\pi}{7})}{\sin(\frac{\pi}{7})}$, $\pm\frac{\sin(\frac{2\pi}{7})}{\sin(\frac{\pi}{7})}$ and $\pm\frac{\sin(\frac{\pi}{7})}{\sin(\frac{2\pi}{7})}$; the last two cases have a critical loop representation. The similarities between the fusion graphs of the $SU_{k}(2)$ models and those of the minimal models are not just an accident; they are based on the coset construction of the minimal models. \section{Discussion}\ We proposed a method to classify some possible loop models consistent with the conformal boundary conditions of a generic rational CFT: take the simply laced classification of the corresponding minimal CFT, find the boundary operators and the fusion matrices, and build the $O(n)$ loop model of the primary operator by the method discussed in Section~3 and in ~\cite{Rajabpour}.
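The recipe just summarized can be sketched end to end (a sketch of ours: the largest eigenvalue of the adjacency matrix of the fusion graph is taken as the loop weight $n$ and, when $n=2\cos(\frac{\pi}{p})<2$, the dilute and dense drifts $4\frac{p}{p+1}$ and $4\frac{p}{p-1}$ are attached):

```python
import math
import numpy as np

# From a fusion-graph adjacency matrix to a candidate loop model:
# n = largest eigenvalue; if n = 2 cos(pi/p) < 2, return the dilute
# and dense SLE drifts 4p/(p+1) and 4p/(p-1), else no critical loops.
def loop_model(adjacency):
    n = max(np.linalg.eigvalsh(np.asarray(adjacency, dtype=float)))
    if n >= 2:
        return n, None
    p = math.pi / math.acos(n/2)
    return n, (4*p/(p + 1), 4*p/(p - 1))

# A_3 (path on three nodes): n = sqrt(2), the Ising loop weight.
A3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
n, kappas = loop_model(A3)
print(n, kappas)
```

For $A_{3}$ this gives $n=\sqrt{2}$ with the pair $(\frac{16}{5},\frac{16}{3})$; which member of the pair is realized depends on the branch, as in the list above.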
We think that there should be some connection between these loop models and the SLE interpretation of CFT investigated in \cite{bernard}, which is based on the connection of SLE with the null vectors of the CFT. This connection is not complete even for minimal CFTs, because we do not know how to explain boundary operators with the same loop model but with different null vectors; for example, in the three-state Potts model $\epsilon,\sigma$ and $\sigma^{\dag}$ are in the same sector from the boundary CFT point of view, but just $\epsilon$ and $\sigma$ have the required second level null vectors. Thus, although from the null vector point of view this correspondence is not clear, it is possible to show that at the level of partition functions this similarity is better understood. Another way to look at the results of this paper is by conjecturing the largest eigenvalue of the fusion graph as the possible loop weight for the loop model in the universality class of the corresponding CFT, without defining any height model on the fusion graph. One possible generalization of the above construction is to consider graphs with largest eigenvalue bigger than ~2 as the adjacency graph of a fused RSOS model and then to extract the loop model by the method investigated in \cite{Fendley}. Another interesting direction is to investigate the modular invariant partition functions of loop models and their possible connections to the classified modular invariant partition functions of minimal models; this amounts to investigating more directly the connection of our method to the classification of \cite{Jacobsen and Saleur1,Jacobsen and Saleur2}. \newline \newline \textit{ Acknowledgment}: I thank Roberto Tateo for useful discussions and Paul Fendley for useful comments. I also thank John Cardy for his useful criticism.
\section{Appendix}\ \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.6]{tricriticalpotts}\\ \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,scale=0.6]{tricriticalpotts2}\\ \end{center} \end{figure} In this appendix we list the fusion graphs of the boundary operators in the tri-critical ~3-state Potts model. The fusion graphs are given in Fig.~6. The fusion graph of $\phi_{1,4}$ can be derived from the fusion graph of the operator $\phi_{1,3}$ by the following transformations: \begin{eqnarray}\label{transformations1} \phi_{1,3}\leftrightarrow \phi_{1,4}\hspace{1cm}\phi_{2,3}\leftrightarrow \phi_{2,4}\hspace{1cm}\phi_{3,4}\leftrightarrow \phi_{3,3}. \end{eqnarray} The fusion graph of $\phi_{2,3}$ can be derived from the fusion graph of the operator $\phi_{2,1}$ by the following transformations: \begin{eqnarray}\label{transformations2} \phi_{1,3}\leftrightarrow \phi_{1,1}\hspace{1cm}\phi_{2,3}\leftrightarrow \phi_{2,1}\hspace{1cm}\phi_{3,3}\leftrightarrow \phi_{3,1}. \end{eqnarray} Finally, the fusion graph of $\phi_{2,4}$ can be derived from the fusion graph of the operator $\phi_{2,1}$ by the following transformations: \begin{eqnarray}\label{transformations3} \phi_{1,4}\leftrightarrow \phi_{1,1}\hspace{1cm}\phi_{2,4}\leftrightarrow \phi_{2,1}\hspace{1cm}\phi_{3,4}\leftrightarrow \phi_{3,1}. \end{eqnarray} To get the fusion graphs of $\phi_{3,3}$ and $\phi_{3,4}$ from the fusion graph of $\phi_{3,1}$ one just needs to use the transformations (\ref{transformations2}) and (\ref{transformations3}) respectively. We shall call the part of the fusion graph of $\phi_{3,1}$ with two neighboring blobs $T_{3}^{2}$; the lower index is the number of nodes and the upper index is the number of blobs attached to the neighboring nodes of the graph, starting from one of its ends. These kinds of fusion graphs also appear in the fusion graph of $\phi_{j=1}$ of the $SU_{k}(2)$ models.
\section*{References}} \def\mathcal{A}{\mathcal{A}} \def\mathcal{B}{\mathcal{B}} \def\mathcal{L}{\mathcal{L}} \def\mathbf{u}{\mathbf{u}} \def\mathbf{v}{\mathbf{v}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{R}{\mathbb{R}} \newtheorem{theorem}[]{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{observation}[theorem]{Observation} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{algorithm}[theorem]{Algorithm} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \crefname{theorem}{Theorem}{Theorems} \crefname{corollary}{Corollary}{Corollaries} \crefname{example}{Example}{Examples} \crefname{lemma}{Lemma}{Lemmas} \crefname{proposition}{Proposition}{Propositions} \crefname{definition}{Definition}{Definitions} \crefname{example}{example}{examples} \begin{document} \begin{frontmatter} \title{Characterization of circular D0L systems} \author[fit]{Karel Klouda} \author[fit]{Štěpán Starosta} \address[fit]{Faculty of Information Technology, Czech Technical University in Prague, Thákurova~9, 160~00, Prague, Czech Republic} \begin{abstract} We prove that every non-circular D0L system contains arbitrarily long repetitions. This result was already published in 1993 by Mignosi and Séébold; however, their proof is only a sketch. We give here a complete proof. Further, employing our previous result, we give a simple algorithm to test circularity of an injective D0L system. \end{abstract} \begin{keyword} D0L system \sep circular D0L system \sep repetition \sep critical exponent \MSC 68R15 \end{keyword} \end{frontmatter} \section{Introduction} In formal language theory, D0L languages form an important class. See for instance~\cite{RoSa80}. Starting with the work of Axel Thue, repetitions in various languages were studied.
In \cite{EhRo83}, the authors show that it is decidable whether a D0L language is $k$-power free, i.e., whether it does not contain the concatenation of $k$ copies of the same non-empty word for some $k \in \mathbb{N}$. In \cite{MiSe}, the authors show that if a PD0L language is $k$-power free for some integer $k$, then it is circular. However, the authors give mostly only sketches of proofs; thus we give a sound proof here. Moreover, we generalize the result, as we prove it for non-injective PD0L-systems and a slightly relaxed definition of circularity, called weak circularity. We also give a simple algorithm to test whether an injective D0L system is circular. \section{Preliminaries} Let $\mathcal{A}$ be an \textit{alphabet}: a finite set of \textit{letters}. The free monoid $\mathcal{A}^*$ is the set of all finite words over $\mathcal{A}$ endowed with concatenation. The \textit{empty word} is denoted $\varepsilon$ and the set of all non-empty words over $\mathcal{A}$ is denoted $\mathcal{A}^+$. The \textit{length} of $w \in \mathcal{A}^*$ is denoted $|w|$. Given a word $w \in \mathcal{A}^*$, we say that $u \in \mathcal{A}^*$ is a \textit{factor} of $w$ if there exist words $p$ and $s$, possibly empty, such that $w = pus$. Such a word $p$ is a \textit{prefix} of $w$, and the word $s$ is a \textit{suffix} of $w$. If $|p| < |w|$, $p$ is a \textit{proper} prefix; if $|s| < |w|$, $s$ is a \textit{proper} suffix. The set $\mathcal{A}^\mathbb{N}$ is the set of all \textit{infinite words} over $\mathcal{A}$. Given a word $w$, by $w^\omega$ we denote the infinite word $www \cdots$. Let $\varphi$ be an endomorphism of $\mathcal{A}^*$. We define $$ \| \varphi \| = \max \{ |\varphi(a) | \colon a \in \mathcal{A} \} \quad \text{ and } \quad | \varphi | = \min \{ |\varphi(a) | \colon a \in \mathcal{A} \}. $$ A triplet $G = (\mathcal{A}, \varphi, w)$ is a \textit{D0L system} if $\mathcal{A}$ is an alphabet, $\varphi$ is an endomorphism of $\mathcal{A}^*$, and $w \in \mathcal{A}^*$.
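Such a system is easy to experiment with; a minimal sketch of ours follows, iterating the morphism $a \to abc$, $b \to bc$, $c \to a$ on the initial word $a$ (this morphism reappears in the injectivity example below):

```python
# A D0L system in a few lines: the morphism as a dict from letters to
# words, iterated on the initial word.
def apply(phi, word):
    return ''.join(phi[c] for c in word)

def sequence(phi, start, steps):
    words = [start]
    for _ in range(steps):
        words.append(apply(phi, words[-1]))
    return words

phi = {'a': 'abc', 'b': 'bc', 'c': 'a'}
print(sequence(phi, 'a', 2))              # ['a', 'abc', 'abcbca']
print(apply(phi, 'cb'), apply(phi, 'a'))  # both equal 'abc'
```

The second line of output already shows the non-injectivity of this particular morphism, $\varphi(cb)=\varphi(a)$.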
The word $w$ is the \textit{axiom} of $G$. The \textit{sequence of $G$} is $E(G) = (w_i)_{i \geq 0}$ where $w_0 = w$ and $w_i = \varphi^i(w_0)$. The \textit{language of $G$} is the set $L(G) = \{ \varphi^n(w) \colon n \in \mathbb{N} \}$ and by $S(L(G))$ we denote the set of all factors appearing in $L(G)$. The alphabet is always considered to be the minimal alphabet necessary, i.e., $\mathcal{A} \cap S(L(G)) = \mathcal{A}$. We say that a D0L system $G = (\mathcal{A}, \varphi, w)$ is \textit{injective} if for every $w,v \in S(L(G))$, $\varphi(w) = \varphi(v)$ implies that $w = v$. It is clear that if $\varphi$ is injective, then $G$ is injective. The converse is not true: consider $\varphi: a \to abc, b \to bc, c \to a$; then $\varphi$ is not injective, as $\varphi(cb) = \varphi(a)$, but $G = ( \{a,b,c\}, \varphi, a )$ is injective since $cb \not \in S(L(G))$. If $\varphi$ is \textit{non-erasing}, i.e., $\varphi(a) \neq \varepsilon$ for all $a \in \mathcal{A}$, then we speak about a \textit{propagating D0L system}, shortly a PD0L system. Given a D0L system $G = (\mathcal{A}, \varphi, w)$ we say that the letter $a$ is \textit{bounded} (or also of \textit{rank zero}) if the set $\{ \varphi^n(a) \colon n \in \mathbb{N} \}$ is finite. If a letter is not bounded, it is \textit{unbounded}. We denote the set of all bounded letters by $\mathcal{A}_0$. The system $G$ is \textit{pushy} if $S(L(G))$ contains infinitely many factors over $\mathcal{A}_0$. A D0L system is \textit{repetitive} if for any $k \in \mathbb{N}$ there is a non-empty word $w$ such that $w^k \in S(L(G))$. By~\cite{EhRo83}, any repetitive D0L system is \textit{strongly repetitive}, i.e., there is a non-empty word $w$ such that $w^k \in S(L(G))$ for all $k \in \mathbb{N}$. \section{Definition of circularity} In the literature, one can find two slightly different views of circularity. Both these views can be expressed in terms of interpretations. \begin{definition} Let $G = (\mathcal{A},\varphi, w)$ be a PD0L-system.
A triplet $(p,v,s)$ where $p,s \in \mathcal{A}^*$ and $v = v_1\cdots v_n \in \mathcal{A}^+$ is an \textit{interpretation of a word $u \in S(L(G))$} if $\varphi(v) = pus$. \end{definition} The following definition of circularity is used in~\cite{MiSe}. \begin{definition} \label[definition]{def:circular-system} Let $G = (\mathcal{A},\varphi, w)$ be a PD0L-system and let $(p,v,s)$ and $(p',v',s')$ be two interpretations of a non-empty word $u \in S(L(G))$ with $v = v_1\cdots v_n$, $v' = v'_1 \cdots v'_m$ and $u = u_1 \cdots u_\ell$. We say that $G$ is \emph{circular} with \emph{synchronization delay $D > 0$} if, whenever we have $$ |\varphi(v_1\cdots v_i)| - |p| > D \quad \text{and} \quad |\varphi(v_{i+1} \cdots v_n)| - |s| > D $$ for some $1 \leq i \leq n$, there is $1 \leq j \leq m$ such that $$ |\varphi(v_1\cdots v_{i-1})| - |p| = |\varphi(v'_1\cdots v'_{j-1})| - |p'| $$ and $v_i = v'_j$ (see Figure~\ref{fig:circularity_1}). \end{definition} This definition says that a long enough word has a unique $\varphi$-preimage except for some prefix and suffix shorter than a constant $D$. Note that a D0L system $G = (\mathcal{A},\varphi, w)$ that contains arbitrarily long words with two different $\varphi$-preimages (i.e., such that for any $n > 0$ there are distinct words $v$ and $u$ in $S(L(G))$ longer than $n$ with $\varphi(v) = \varphi(u)$) cannot be circular. In~\cite{Ca94}, a circular D0L system with injective morphism is defined using the notion of a synchronizing point (see Section 3.2 in~\cite{Ca94} for details). We give here an equivalent definition employing the notion of interpretation. \begin{definition} Let $G = (\mathcal{A},\varphi, w)$ be a PD0L-system.
We say that two interpretations $(p,v,s)$ and $(p',v',s')$ of a word $u \in S(L(G))$ are \textit{synchronized at position $k$} if there exist nonnegative indices $i$ and $j$ such that $$ \varphi(v_1\cdots v_i) = p u_1 \cdots u_k \quad \text{ and } \quad \varphi(v'_1\cdots v'_j) = p' u_1 \cdots u_k $$ with $v = v_1\cdots v_n$, $v' = v'_1 \cdots v'_m$ and $u = u_1 \cdots u_\ell$ (see Figure~\ref{fig:circularity_2}). We say that a word $u \in S(L(G))$ has a \textit{synchronizing point} at position $k$ with $0 \leq k \leq |u|$ if all its interpretations are pairwise synchronized at position $k$. \end{definition} By~\cite{Ca94}, a D0L system $G$ with injective morphism is circular if there is a positive $D$ such that any $v$ from $S(L(G))$ longer than $2D$ has a synchronizing point. This definition is equivalent to \cref{def:circular-system}. However, the synchronizing point is defined for D0L systems with just a non-erasing morphism, and so we can omit the assumption of injectivity in \Cref{def:circular-system}. \begin{definition} \label[definition]{def:weak-circularity} A PD0L-system $G$ is called \emph{weakly circular} if there is a constant $D > 0$ such that any $v$ from $S(L(G))$ longer than $2D$ has a synchronizing point. \end{definition} As said above, if $G$ is injective, weak circularity is equivalent to circularity. As the following example shows, this is not true in the non-injective case. \begin{example}\label[example]{ex:weak-needed} Consider the D0L system $G_1 = (\{a,b,c\}, \varphi_1, a)$ with the non-injective $\varphi_1: a \to abca, b \to bc, c \to bc$. This system is not circular, as for all $m \in \mathbb{N}$ the word $(bc)^{2m}$ has the two different preimages $(bc)^m$ and $(cb)^m$. The corresponding interpretations, however, have synchronizing points for $m > 1$ at positions $2k$ for all $0 \leq k \leq m$. Moreover, one can easily check that $G_1$ is weakly circular. \end{example} So, circularity implies weak circularity, but the converse is not true.
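The failure of circularity in \cref{ex:weak-needed} can be observed by brute force. The sketch below (ours) enumerates interpretations $(p,v,s)$ of a word $u$; to keep the search finite it bounds $|v|$ and keeps only trimmed interpretations with $|p| < |\varphi(v_1)|$ and $|s| < |\varphi(v_n)|$, and it does not check that $v$ belongs to $S(L(G_1))$ — of the four preimages found for $u = bcbc$, only $bc$ and $cb$ actually occur in the language:

```python
from itertools import product

def apply(phi, word):
    return ''.join(phi[c] for c in word)

# Brute-force search for interpretations (p, v, s) of u, i.e. phi(v) = p u s.
def interpretations(phi, u, max_len):
    found = []
    for n in range(1, max_len + 1):
        for v in map(''.join, product(sorted(phi), repeat=n)):
            img = apply(phi, v)
            for i in range(len(img) - len(u) + 1):
                if img[i:i + len(u)] == u:
                    p, s = img[:i], img[i + len(u):]
                    # keep only trimmed interpretations
                    if len(p) < len(phi[v[0]]) and len(s) < len(phi[v[-1]]):
                        found.append((p, v, s))
    return found

phi1 = {'a': 'abca', 'b': 'bc', 'c': 'bc'}
interps = interpretations(phi1, 'bcbc', 2)
print(sorted(v for (p, v, s) in interps))
```

Running it prints `['bb', 'bc', 'cb', 'cc']`, each with empty $p$ and $s$; a full circularity test would additionally restrict the candidates $v$ to factors of the language.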
\begin{figure}[th] \centering \begin{tikzpicture} \draw[thick] (2.5,5) rectangle (4.5,5.5); \node at (3.5,5.25) {$v$}; \draw[thick] (.5,3.5) rectangle (2,3); \node at (1.25,3.25) {$p$}; \draw[dashed,->] (2.5,5) -- (.5,3.5); \draw[thick] (2,3.5) rectangle (5,2.5); \node at (3.5,3) {$u$}; \draw[thick] (5,3.5) rectangle (6,3); \node at (5.5,3.25) {$s$}; \draw[dashed,->] (4.5,5) -- (6,3.5); \draw[thick] (1,3) rectangle (2,2.5); \node at (1.5,2.75) {$p'$}; \draw[thick] (5,3) rectangle (7,2.5); \node at (6,2.75) {$s'$}; \draw[thick] (2.25,0.5) rectangle (4.75,1); \node at (3.5,.75) {$v'$}; \draw[dashed,->] (2.25,1) -- (1,2.5); \draw[dashed,->] (4.75,1) -- (7,2.5); \draw [gray] plot [smooth, tension=1.5] coordinates { (0.5,3.5) (1,4) (1.5,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (1.5,3.5) (1.75,4) (2,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (2,3.5) (2.25,4) (2.5,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (2.5,3.5) (3,4) (3.5,3.5)}; \node [above] at (3,4) {$\varphi(v_i)$}; \draw [gray] plot [smooth, tension=1.5] coordinates { (3.5,3.5) (3.875,4) (4.25,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (4.25,3.5) (4.75,4) (5.25,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (5.25,3.5) (5.625,4) (6,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (1,2.5) (1.75,2) (2.5,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (2.5,2.5) (3,2) (3.5,2.5)}; \node [below] at (3,2) {$\varphi(v'_j)$}; \draw [gray] plot [smooth, tension=1.5] coordinates { (3.5,2.5) (3.875,2) (4.25,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (4.25,2.5) (4.5,2) (4.75,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (4.75,2.5) (5.25,2) (5.75,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (5.75,2.5) (6,2) (6.25,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (6.25,2.5) (6.625,2) (7,2.5)}; \draw[dotted] (2.5,3.5) -- (2.5,2.5); \draw[dotted] (3.5,3.5) -- 
(3.5,2.5); \draw[dotted] (4.25,3.5) -- (4.25,2.5); \end{tikzpicture} \caption{Two interpretations from \cref{def:circular-system} with $v_i = v'_j$.} \label{fig:circularity_1} \end{figure} \begin{figure}[th] \centering \begin{tikzpicture} \draw[thick] (11.5,5) rectangle (13.5,5.5); \node at (12.5,5.25) {$v$}; \draw[thick] (9.5,3.5) rectangle (11,3); \node at (10.25,3.25) {$p$}; \draw[dashed,->] (11.5,5) -- (9.5,3.5); \draw[thick] (11,3.5) rectangle (14,2.5); \node at (12.5,3) {$u$}; \draw[thick] (14,3.5) rectangle (15,3); \node at (14.5,3.25) {$s$}; \draw[dashed,->] (13.5,5) -- (15,3.5); \draw[thick] (10,3) rectangle (11,2.5); \node at (10.5,2.75) {$p'$}; \draw[thick] (14,3) rectangle (16,2.5); \node at (15,2.75) {$s'$}; \draw[thick] (11.25,0.5) rectangle (13.75,1); \node at (12.5,.75) {$v'$}; \draw[dashed,->] (11.25,1) -- (10,2.5); \draw[dashed,->] (13.75,1) -- (16,2.5); \draw [gray] plot [smooth, tension=1.5] coordinates { (9.5,3.5) (10,4) (10.5,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (10.5,3.5) (10.75,4) (11,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (11,3.5) (11.25,4) (11.5,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (11.5,3.5) (12,4) (12.5,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (12.5,3.5) (12.875,4) (13.25,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (13.25,3.5) (13.75,4) (14.25,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (14.25,3.5) (14.625,4) (15,3.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (10,2.5) (10.75,2) (11.5,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (11.5,2.5) (11.75,2) (12,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (12,2.5) (12.25,2) (12.5,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (12.5,2.5) (12.875,2) (13.25,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (13.25,2.5) (13.5,2) (13.75,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (13.75,2.5) 
(14.25,2) (14.75,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (14.75,2.5) (15,2) (15.25,2.5)}; \draw [gray] plot [smooth, tension=1.5] coordinates { (15.25,2.5) (15.625,2) (16,2.5)}; \draw[dotted] (11.5,3.5) -- (11.5,2.5); \draw[dotted] (12.5,3.5) -- (12.5,2.5); \draw[dotted] (13.25,3.5) -- (13.25,2.5); \end{tikzpicture} \caption{Two interpretations from \cref{def:weak-circularity} synchronized at positions depicted by dotted lines.} \label{fig:circularity_2} \end{figure} \section{Main result} \begin{theorem}\label[theorem]{thm:main-result} Any PD0L system that is not weakly circular is repetitive. \end{theorem} The two following lemmas will be used to prove this theorem. The next lemma and its proof are based on the ideas in the proof of Theorem 4.35 in \cite{Ku_ToSyDy}. \begin{lemma}\label[lemma]{lem:kurka} Let $G = (\mathcal{A},\varphi,w)$ be a PD0L system. If there exists a sequence $\epsilon(k)$ with $\displaystyle \lim_{k \to +\infty} \epsilon(k) = +\infty$ and if for any $k \in \mathbb{N}$ there are two non-empty words $u$ and $v$ in $S(L(G))$ containing an unbounded letter such that the following conditions are satisfied \begin{enumerate}[(i)] \item $|u| = k$; \item \label{lk_bod2} there are two integers $m$ and $n$ such that $m > n$ and letters $a$ and $b$ such that for each $i \in \{m,n\}$ the word $\varphi^i(u)$ is a factor of $\varphi^i(v)$ and $\varphi^i(v)$ is a factor of $\varphi^i(aub)$, moreover, $\frac{|\varphi^i(u)|}{|\varphi^i(a)|} > \epsilon(k)$ or $\frac{|\varphi^i(u)|}{|\varphi^i(b)|} > \epsilon(k)$; and \item \label{lk_bod3} for each $i \in \{m,n\}$ the factor $\varphi^i(u)$ has no synchronizing point: two non-synchronized interpretations are $(\varepsilon,\varphi^{i-1}(u),\varepsilon)$ and $(p_i,\varphi^{i-1}(v),s_i)$, \end{enumerate} then the D0L system is repetitive.
\end{lemma} \begin{proof} Suppose that $\frac{|\varphi^i(u)|}{|\varphi^i(a)|} > \epsilon(k)$ is true in requirement \eqref{lk_bod2}; the other case $\frac{|\varphi^i(u)|}{|\varphi^i(b)|} > \epsilon(k)$ is analogous. It holds that $$ \varphi^m(v) = p_m \varphi^m(u) s_m = \varphi^{m-n}(\varphi^n(v)) = \varphi^{m-n}(p_n) \varphi^m(u) \varphi^{m-n}(s_n). $$ The fact that the interpretations $(\varepsilon,\varphi^{m-1}(u),\varepsilon)$ and $(p_m,\varphi^{m-1}(v),s_m)$ are not synchronized implies that $p_m \neq \varphi^{m-n}(p_n)$ (if $p_m = \varphi^{m-n}(p_n)$, the two interpretations of $\varphi^m(u)$ are synchronized at position $0$, see Figure~\ref{fig:proof_of_lemma_1}). Since $p_m \varphi^m(u) s_m = \varphi^{m-n}(p_n) \varphi^m(u) \varphi^{m-n}(s_n)$, the word $p_m$ is a proper prefix of $\varphi^{m-n}(p_n)$ or vice versa. Moreover, $p_m$ is not empty, since $p_m = \varepsilon$ would again contradict the point \eqref{lk_bod3}. Suppose $p_m$ is a non-empty proper prefix of $\varphi^{m-n}(p_n)$. Then there exists a word $z$ such that $p_m z = \varphi^{m-n}(p_n)$. If $\varphi^{m-n}(p_n)$ is a non-empty proper prefix of $p_m$, then we may find a word $z$ such that $\varphi^{m-n}(p_n)z = p_m$ (see Figure~\ref{fig:proof_of_lemma_2}).
\begin{figure}[th] \centering \begin{tikzpicture} \draw[thick] (2,5) rectangle (2.5,5.5); \node at (2.25,5.25) {$a$}; \draw[thick] (2.5,5) rectangle (4.5,5.5); \node at (3.5,5.25) {$u$}; \draw[thick] (4.5,5) rectangle (5,5.5); \node at (4.75,5.25) {$b$}; \draw[thick] (0,3.5) rectangle (2,3); \node at (1,3.25) {$p_m$}; \draw[thick] (2,3.5) rectangle (5,3); \node at (3.5,3.25) {$\varphi^m(u)$}; \draw[thick] (5,3.5) rectangle (7.5,3); \node at (6.25,3.25) {$s_m$}; \draw[dashed,->] (2.5,5) -- (2,3.5); \draw[dashed,-] (2,5) -- (-0,4); \draw[dashed,->] (4.5,5) -- (5,3.5); \draw[dashed,-] (5,5) -- (7.5,4); \draw[thick] (0,3) rectangle (2,2.5); \node at (1,2.75) {$\varphi^{m-n}(p_n)$}; \draw[thick] (2,3) rectangle (5,2.5); \node at (3.5,2.75) {$\varphi^m(u)$}; \draw[thick] (5,3) rectangle (7.5,2.5); \node at (6.25,2.75) {$\varphi^{m-n}(s_n)$}; \node at (8.5,3) {$= \varphi^m(v)$}; \draw[thick] (1.5,1) rectangle (2.5,0.5); \node at (2,0.75) {$p_n$}; \draw[thick] (2.5,1) rectangle (4.5,0.5); \node at (3.5,.75) {$\varphi^n(u)$}; \draw[thick] (4.5,1) rectangle (5.5,0.5); \node at (5,.75) {$s_n$}; \node[right] at (5.5,0.75) {$= \varphi^n(v)$}; \draw[dashed,->] (1.5,1) -- (0,2.5); \draw[dashed,->] (2.5,1) -- (2,2.5); \draw[dashed,->] (4.5,1) -- (5,2.5); \draw[dashed,->] (5.5,1) -- (7.5,2.5); \draw[thick] (2.75,-1.5) rectangle (4.25,-1); \node at (3.5,-1.25) {$v$}; \draw[dashed,->] (2.75,-1) -- (1.5,.5); \draw[dashed,->] (4.25,-1) -- (5.5,.5); \end{tikzpicture} \caption{The first arrangement from the proof of \Cref{lem:kurka}.} \label{fig:proof_of_lemma_1} \end{figure} \begin{figure}[th] \centering \begin{tikzpicture} \draw[thick] (12,5) rectangle (12.5,5.5); \node at (12.25,5.25) {$a$}; \draw[thick] (12.5,5) rectangle (14.5,5.5); \node at (13.5,5.25) {$u$}; \draw[thick] (14.5,5) rectangle (15,5.5); \node at (14.75,5.25) {$b$}; \draw[thick] (9.5,3.5) rectangle (12,3); \node at (10.75,3.25) {$p_m$}; \draw[thick] (12,3.5) rectangle (15,3); \node at (13.25,3.75) {$\varphi^m(u)$}; 
\path [fill=lightgray] (12.05,3.45) rectangle (12.5,3.05); \node at (12.25, 3.25) {$z$}; \path [fill=lightgray] (12.55,3.45) rectangle (13,3.05); \node at (12.75, 3.25) {$z$}; \node [right] at (13, 3.25) {$\cdots$}; \draw[thick] (15,3.5) rectangle (16,3); \node at (15.5,3.25) {$s_m$}; \draw[dashed,->] (12.5,5) -- (12,3.5); \draw[dashed,-] (12,5) -- (9.5,4); \draw[dashed,->] (14.5,5) -- (15,3.5); \draw[dashed,-] (15,5) -- (16,4); \draw[thick] (9.5,3) rectangle (11.5,2.5); \node at (10.5,2.75) {$\varphi^{m-n}(p_n)$}; \draw[thick] (11.5,3) rectangle (14.5,2.5); \node at (13,2.25) {$\varphi^m(u)$}; \path [fill=lightgray] (11.55,2.95) rectangle (12,2.55); \node at (11.75, 2.75) {$z$}; \path [fill=lightgray] (12.05,2.95) rectangle (12.5,2.55); \node at (12.25, 2.75) {$z$}; \path [fill=lightgray] (12.55,2.95) rectangle (13,2.55); \node at (12.75, 2.75) {$z$}; \node [right] at (13, 2.75) {$\cdots$}; \draw[thick] (14.5,3) rectangle (16,2.5); \node at (15.25,2.75) {$\scriptstyle{\varphi^{m-n}(s_n)}$}; \draw[thick] (11,1) rectangle (12,0.5); \node at (11.5,0.75) {$p_n$}; \draw[thick] (12,1) rectangle (14,0.5); \node at (13,.75) {$\varphi^n(u)$}; \draw[thick] (14,1) rectangle (15,0.5); \node at (14.5,.75) {$s_n$}; \node[left] at (11,0.75) {$\varphi^n(v) = $}; \draw[dashed,->] (11,1) -- (9.5,2.5); \draw[dashed,->] (12,1) -- (11.5,2.5); \draw[dashed,->] (14,1) -- (14.5,2.5); \draw[dashed,->] (15,1) -- (16,2.5); \draw[thick] (12.25,-1.5) rectangle (13.75,-1); \node at (13,-1.25) {$v$}; \draw[dashed,->] (12.25,-1) -- (11,.5); \draw[dashed,->] (13.75,-1) -- (15,.5); \node at (17,3) {$= \varphi^m(v)$}; \end{tikzpicture} \caption{The second arrangement from the proof of \Cref{lem:kurka}.} \label{fig:proof_of_lemma_2} \end{figure} Therefore, in both cases, the word $\varphi^m(u)$ is a prefix of $z^\ell$ for some integer $\ell$.
Since $$ \frac{|\varphi^m(u)|}{|z|} > \frac{|\varphi^m(u)|}{\max\{|p_m|,|\varphi^{m-n}(p_n)|\}} > \frac{|\varphi^m(u)|}{|\varphi^m(a)|} > \epsilon(k), $$ we deduce that $z^{\lfloor \epsilon(k) \rfloor}$ is a factor of $\varphi^m(u)$. As $\displaystyle \lim_{k \to +\infty} \epsilon(k) = +\infty$, the D0L system is repetitive. \end{proof} \begin{lemma} \label{le:bounded_are_sync} In any PD0L system there is a constant $C$ such that all factors over bounded letters longer than $C$ have a synchronizing point. \end{lemma} \begin{proof} The statement is trivial for non-pushy D0L systems, hence we consider a pushy one. Clearly, there exists an integer $n$ such that for all $c \in \mathcal{A}_0$ we have $|\varphi^m(c)| = |\varphi^{m+1}(c)|$ for every $m \geq n$. Let $u$ be a factor over bounded letters only, of length at least $L = 3 \| \varphi^{n+1} \|\cdot |w_0|$, where $w_0$ is the axiom of the D0L system. This implies that $u$ appears in some word $w_k$ of the sequence $E(G)= (w_i)_{i \geq 0}$ with $k > n+1$. Let $(p,w,s)$ be an interpretation of $u$. Since $u$ is a factor of $w_k$ such that $k > n+1$ and $|w_k| > L$, there must be words $x, y \in \mathcal{A}^*$ and $v \in \mathcal{A}^+$ such that $w = x\varphi^n(v)y$ and $$ |\varphi(x)| - |p| < \|\varphi^{n+1}\| \quad \text{and} \quad |\varphi(y)| - |s| < \|\varphi^{n+1}\|. $$ As $\varphi^{n+1}(v)$ is a factor of $u$, it contains only bounded letters, and thus so does the word $v$. Moreover, by the definition of $n$, every letter $c$ occurring in $\varphi^n(v)$ satisfies $|\varphi^n(c)| = |\varphi^{n+1}(c)|$. It follows that any two interpretations $(p,w,s)$ and $(p',w',s')$ of the word $u$ are synchronized at position $\|\varphi^{n+1}\|$. \end{proof} \begin{proof}[Proof of \cref{thm:main-result}] Consider a PD0L system $G = (\mathcal{A},\varphi, w_0)$ with infinite language (the statement for a D0L system with finite language is trivial).
We define a partition of the alphabet $\mathcal{A} = \Sigma_m \cup \Sigma_{m-1} \cup \cdots \cup \Sigma_1 \cup \Sigma_0$ as follows: \begin{enumerate}[(i)] \item $\Sigma_0 = \mathcal{A}_0$ is the set of bounded letters, \item if $x$ and $y$ are from $\Sigma_i$, then the sequence $\left(\frac{|\varphi^n(x)|}{|\varphi^n(y)|}\right)_{n \geq 1}$ is $\Theta(1)$, \item for all $i = 1, \ldots, m$, if $x$ is an element of $\Sigma_i$ and $y$ of $\Sigma_{i-1}$, then $\displaystyle \lim_{n \to + \infty} \frac{|\varphi^n(x)|}{|\varphi^n(y)|} = + \infty$. \end{enumerate} This partition is well defined due to~\cite{SaSo}, where it is proved that for any $a \in \mathcal{A}$ there are numbers $\alpha \in \mathbb{N}$ and $\beta \in \mathbb{R}_{\geq 1} \cup \{0\}$ such that $|\varphi^n(a)| = \Theta(n^\alpha \beta^n)$. Further, we define for all $j = 0, 1, \ldots, m$ the sets $$ \mathcal{A}_j = \bigcup_{0 \leq i \leq j} \Sigma_{i}. $$ Note that $\varphi(\mathcal{A}_j) \subset \mathcal{A}_j$ and $\varphi(\mathcal{A}_j) \cap \Sigma_j \neq \emptyset$. Lemma \ref{le:bounded_are_sync} implies that factors without a synchronizing point over $\mathcal{A}_0$ are bounded in length. Fix a positive integer $j$ and assume that there is a factor without a synchronizing point of arbitrary length over $\mathcal{A}_j$. Let $k$ be a positive integer.
For any positive $\ell \in \mathbb{N}$ we can find words $u^{(k)}_\ell \in \mathcal{A}_j^*$ and $v^{(k)}_\ell \in \mathcal{A}_j^*$ and letters $a^{(k)}_\ell \in \mathcal{A}_j$ and $b^{(k)}_\ell \in \mathcal{A}_j$ such that \begin{enumerate}[(a)] \item \label{du-1} $|u^{(k)}_\ell| = k$, \item \label{du-2} $\varphi^\ell(v^{(k)}_\ell)$ is a factor of $\varphi^\ell(a^{(k)}_\ell u^{(k)}_\ell b^{(k)}_\ell)$ and $\varphi^\ell(u^{(k)}_\ell)$ is a factor of $\varphi^\ell(v^{(k)}_\ell)$, \item $\varphi^\ell(u^{(k)}_\ell)$ has two non-synchronized interpretations $$(\varepsilon, \varphi^{\ell-1}(u^{(k)}_\ell), \varepsilon) \text{ and } (p^{(k)}_\ell,\varphi^{\ell-1}(v^{(k)}_\ell),s^{(k)}_\ell)$$ where $p^{(k)}_\ell\varphi^{\ell}(u^{(k)}_\ell)s^{(k)}_\ell = \varphi^{\ell}(v^{(k)}_\ell)$. \end{enumerate} Since the length of $u^{(k)}_\ell$ is fixed, there must be an infinite set $E_1^{(k)} \subset \mathbb{N}$ such that $u^{(k)}_{\ell_1} = u^{(k)}_{\ell_2} = u^{(k)}$, $a^{(k)}_{\ell_1} = a^{(k)}_{\ell_2} = a^{(k)}$ and $b^{(k)}_{\ell_1} = b^{(k)}_{\ell_2} = b^{(k)}$ for all $\ell_1,\ell_2$ from $E_1^{(k)}$. If for each $k$ there are indices $\ell_1 > \ell_2$ in $E_1^{(k)}$ such that $v^{(k)}_{\ell_1} = v^{(k)}_{\ell_2} = v^{(k)}$ and if the number of letters from $\Sigma_j$ in $u^{(k)}$ tends to $+\infty$ as $k \to +\infty$, then $G$ is repetitive by \cref{lem:kurka} and the proof is finished. Assume that for some $k$ no such indices $\ell_1, \ell_2$ exist; then $|v^{(k)}_\ell|$ must go to infinity as $\ell \to +\infty$. It follows from \eqref{du-1} and \eqref{du-2} that the number of letters from $\Sigma_j$ in words $v^{(k)}_\ell$ is bounded (or even zero) and so there must be $j' \in \{ 1,\ldots, j-1\}$ such that the number of letters from $\Sigma_{j'}$ in $v^{(k)}_\ell$ goes to infinity as $\ell \to +\infty$ and there is a factor without a synchronizing point over $\mathcal{A}_{j'}$ of arbitrary length.
Note that such $j'$ must exist since the number of letters from $\Sigma_j$ in $v^{(k)}_\ell$ is bounded and the factors of $v^{(k)}_\ell$ containing only letters from $\mathcal{A}_0$ are bounded in length. If such indices $\ell_1, \ell_2$ exist for each $k$ but the number of letters from $\Sigma_j$ in $u^{(k)}$ is bounded as $k \to +\infty$, there must again be some $j' \in \{ 1,\ldots, j-1\}$ such that the number of letters from $\Sigma_{j'}$ in $u^{(k)}$ goes to infinity as $k \to +\infty$ and there is again a factor without a synchronizing point over $\mathcal{A}_{j'}$ of arbitrary length. Overall, given the integer $j$, we either prove $G$ is repetitive by \cref{lem:kurka} or we find a positive integer $j'$ less than $j$ such that there is a factor without a synchronizing point over $\mathcal{A}_{j'}$ of arbitrary length. In the latter case we repeat the construction for $j = j'$. The only remaining case is when $j = 1$, i.e., we have a factor without a synchronizing point over $\mathcal{A}_1$ of arbitrary length. Even in this case we can repeat the construction above. However, the case where for some $k$ there are no indices $\ell_1 > \ell_2$ such that $v^{(k)}_{\ell_1} = v^{(k)}_{\ell_2} = v^{(k)}$ is not possible. Indeed, it cannot happen that $|v^{(k)}_\ell|$ goes to infinity as $\ell \to +\infty$: $v^{(k)}_\ell$ must consist of letters from $\mathcal{A}_1 = \Sigma_1 \cup \Sigma_0$. Since $u^{(k)}$ is over $\mathcal{A}_1$ as well (with at least one letter from $\Sigma_1$ for $k$ large enough), the number of letters from $\Sigma_1$ in $v^{(k)}_\ell$ cannot be unbounded (for $\ell \to +\infty$) by the definition of $\Sigma_1$. Clearly, again by \cref{le:bounded_are_sync}, the number of letters from $\mathcal{A}_0$ in $v^{(k)}_\ell$ is bounded as well (for $\ell \to +\infty$) and so the indices $\ell_1 > \ell_2$ must exist so that $v^{(k)}_{\ell_1} = v^{(k)}_{\ell_2} = v^{(k)}$.
Moreover, since factors without a synchronizing point over $\mathcal{A}_0$ are bounded in length, the number of letters from $\Sigma_1$ in $u^{(k)}$ goes to infinity as $k \to +\infty$. All this implies that $G$ is repetitive by \cref{lem:kurka}. \end{proof} \section{Simple criterion for circularity} \begin{definition} We say that a D0L system $G$ is \textit{unboundedly repetitive} if there exists $w \in S(L(G))$ such that $w^k \in S(L(G))$ for all $k$ and $w$ contains at least one unbounded letter. \end{definition} In \cite{EhRo78}, the authors introduced the notion of simplification to study properties of a D0L system. Given an endomorphism $\varphi$ over $\mathcal{A}$, the endomorphism $\Psi$ over $\mathcal{B}$ is its \textit{simplification} if $\# \mathcal{B} < \# \mathcal{A}$ and there exist morphisms $h: \mathcal{A}^* \to \mathcal{B}^*$ and $k: \mathcal{B}^* \to \mathcal{A}^*$ such that $\varphi = kh$ and $\Psi = hk$. A corollary of the defect theorem (see \cite{KoOt00}) is that every non-injective morphism has a simplification which is injective, called an \textit{injective simplification}. In particular, an injective $G$ is its own injective simplification. The following claim follows from Proposition 4.3 in \cite{KoOt00} and Theorem 2 in \cite{KlSt13}. \begin{proposition} \label{unboundedly-repetitive-characterization} A D0L system $G$ is unboundedly repetitive if and only if for some injective simplification $G' = (\mathcal{B}, \psi, w'_0)$ of $G$ there are a positive integer $\ell$ and a letter $a \in \mathcal{B}$ such that $$ (\psi^\ell)^\infty(a) = w^\omega \quad \text{ for some } w \in \mathcal{B}^+. $$ \end{proposition} In fact, if the condition in the previous claim is satisfied for some injective simplification, then it is satisfied for all injective simplifications. Using this proposition and Theorem 1 of \cite{KlSt13} we deduce the following theorem.
\begin{theorem} \label{repetitive-pushy-or-unbounded} Let $G$ be a repetitive D0L system. Then one of the following is true: \begin{enumerate}[(i)] \item $G$ is pushy, \item $G$ is unboundedly repetitive. \end{enumerate} \end{theorem} In the previous section we proved that any PD0L system that is not weakly circular is repetitive. The next theorem gives a characterization of injective circular D0L systems. \begin{theorem} \label{thm:circ-iff-ur} An injective D0L system $G = (\mathcal{A}, \varphi, w)$ is not circular if and only if it is unboundedly repetitive. \end{theorem} \begin{proof} $(\Rightarrow)$: As an injective morphism is non-erasing, \Cref{thm:main-result} implies that $G$ is repetitive. Thus, by \Cref{repetitive-pushy-or-unbounded}, $G$ is pushy or unboundedly repetitive. Suppose it is pushy and not unboundedly repetitive. Then there exists an integer $N$ such that all repetitions $u^\ell$ where $\ell > N$ and $u \in S(L(G))$ are over bounded letters only, i.e., $u \in \mathcal{A}_0^+$. From the proof of \Cref{thm:main-result} one can see that long enough non-synchronized factors contain longer and longer repetitions, but these repetitions cannot be over bounded letters due to \Cref{le:bounded_are_sync} -- a contradiction. $(\Leftarrow)$: \Cref{unboundedly-repetitive-characterization} implies that there are a positive integer $\ell$ and a letter $a$ such that $(\varphi^\ell)^\infty(a) = w^\omega$ for some $w \in \mathcal{A}^+$. In \cite{KlSt13} it is proved that the word $w$ can be taken so that it contains the letter $a$ only once at its beginning. It follows that $\varphi^\ell(w) = w^k$ for some $k > 1$. Since $\varphi$ is injective, we must have $\varphi(p) \neq w$ for all prefixes $p$ of $w$. This implies that for all $n \in \mathbb{N}$ the word $w^{nk}$ has two non-synchronized interpretations $(\varepsilon, w^n, \varepsilon)$ and $(w,w^{n+1},w^{k-1})$.
\end{proof} \begin{remark} In the previous theorem, we cannot omit the assumption of injectivity or replace circularity with weak circularity: consider again the D0L system $G_1$ from \cref{ex:weak-needed}. The condition of \Cref{unboundedly-repetitive-characterization} is satisfied for $\ell = 1$ and the letter $b$ with $w = bc$, but still the corresponding D0L system is weakly circular. \end{remark} Since the existence of $\ell$ and $a$ satisfying the condition of \Cref{unboundedly-repetitive-characterization} can be tested by a simple and fast algorithm described in~\cite{La91}, we have a simple algorithm deciding circularity. As a corollary of \Cref{thm:circ-iff-ur}, we retrieve the following result of \cite{Mo96}. A morphism $\varphi: \mathcal{A}^* \to \mathcal{A}^*$ is \textit{primitive} if there exists an integer $k$ such that for all letters $a,b \in \mathcal{A}$, the letter $b$ appears in $\varphi^k(a)$. An infinite word $\mathbf{u}$ is a \textit{periodic point} of a morphism $\varphi$ if there exists an integer $k$ such that $\varphi^k(\mathbf{u}) = \mathbf{u}$. \begin{corollary}[\cite{Mo96}] If $\mathbf{u}$ is an aperiodic fixed point of a primitive morphism injective on $S(L(G))$, then it is circular. \end{corollary} \begin{proof} Any periodic point of a primitive morphism has the same language as $\mathbf{u}$. Therefore, every periodic point is aperiodic and so the condition of \Cref{unboundedly-repetitive-characterization} cannot be satisfied. \Cref{thm:circ-iff-ur} yields the result. \end{proof} \section*{Acknowledgements} This work was supported by the Czech Science Foundation grant GA\v CR 13-35273P. \bibliographystyle{alpha}
\section{Introduction} The problem of cross-identifying sources between two catalogs $K$ and $K'$ has previously been studied by \citet{Condon}, \citet{DeRuiter}, \citet{Prestage}, \citet{SS} and \citet{Rutledge}, among others. As evidenced by recent papers of \citet{BS} and \citet{Pineau}, this field is still very active and will be more so with the wealth of forthcoming multiwavelength data. Usually, the association is performed using a ``likelihood ratio'': this quantity is typically computed as the ratio of the probability of finding, at some distance from a source $M_i \in K$, a source $\MJ_{\smash[t]{j}} \in K'$, if $\MJ_{\smash[t]{j}}$ is a counterpart of $M_i$, to the probability that $\MJ_{\smash[t]{j}}$ is a chance association at the same position, given the local surface density of $K'$-sources. As noticed by \citet{SS}, there has been some confusion in the definition and interpretation of the likelihood ratio, and, more importantly, in the estimation of the probability\footnote{E.g., \citet{DeRuiter} state that, if there is a counterpart, the closest object is always the right one, which is obviously wrong.} that a source in $K'$ is the counterpart of a source in $K$. When associating sources from catalogs at different wavelengths, some authors include in this likelihood ratio some \emph{a priori} information on the spectral energy distribution (\textsc{\large sed}) of the source. As this work began, our primary goal was to build template observational \textsc{\large sed}'s of galaxies from the optical to the far-infrared for different types of galaxies. We initially intended to cross-identify the \textsc{\large iras}\ Faint Source Survey \citep{FSS, FSC} with the \textsc{\large leda}\ database \citep{LEDA}. Because of the large positional inaccuracy of \textsc{\large iras}\ data, special care was needed to identify optical sources with infrared ones.
While \textsc{\large iras}\ data are by now quite outdated and have been superseded by Spitzer observations, we still think that the procedure we developed at that time may be valuable for other studies. Because we aimed to fit synthetic \textsc{\large sed}'s to the template observational ones, we could not and did not want to make assumptions on the \textsc{\large sed}\ of sources based on their type, since this would have biased the procedure. We therefore rely in what follows only on the positions to associate sources between catalogs. The method we use is essentially similar to that of \citet{SS}. Because thinking in terms of probabilities rather than of likelihood ratios highlights some implicit assumptions, we nevertheless found it useful, for the sake of clarity, to detail our calculations hereafter; moreover, this allows us to extend our work to a case not covered by the papers cited above (see Sect.~\ref{one}). We define our notations and state our general assumptions in Sect.~\ref{notations}. In Sect.~\ref{several}, we compute the probability of association under the assumption that a $K$-source has at most one counterpart in $K'$ but that several $K$-sources may have the same counterpart (``several-to-one'' associations). We moreover determine the fraction of sources with a counterpart and, if unknown, estimate the uncertainty on the position in one of the catalogs. In Sect.~\ref{one}, we compute the probability of association under the assumption that a $K$-source has at most one counterpart in $K'$ and that no other $K$-source has the same counterpart (``one-to-one'' associations). We provide in Sect.~\ref{practical} some guidance to help the user to implement these results. The probability distribution of the relative positions of associated sources is modeled in App.~\ref{cov}.
\section{Notations and general assumptions} \label{notations} We consider two catalogs $K$ and $K'$ defined on a common area $S$ of the sky and use the following notations: \begin{itemize} \item $\mathop{\#}\mathopen{} E$: number of elements of any set $E$; \item $M_1$, \textellipsis, $M_n$, with $n \equiv \mathop{\#}\mathopen{}K$: sources in $K$; \item $M'_{\smash[t]{1}}$, \textellipsis, $M'_{\smash[t]{\cramped{n'}}}$, with $\cramped{n'} \equiv \mathop{\#}\mathopen{}K'$: sources in $K'$. \end{itemize} We define the following events: \begin{itemize} \item $c_i$: $M_i$ is in the infinitesimal surface element $\mathrm{d}^2\vec{r_i}$ located at $\vec{r_i}$; \item $\coord'_{\smash[t]{j}}$: $\MJ_{\smash[t]{j}}$ is in the surface element $\mathrm{d}^2\vec{r'_{\smash[t]{j}}}$ located at $\vec{r'_{\smash[t]{j}}}$; \item $C \equiv \bigcap_{i=1}^n c_i$: the coordinates of all $K$-sources are known; \item $C' \equiv \bigcap_{j=1}^{\cramped{n'}} \coord'_{\smash[t]{j}}$: the coordinates of all $K'$-sources are known; \item $A_{i{,}\, j}$, with $j>0$: $\MJ_{\smash[t]{j}}$ is the counterpart of $M_i$; \item $A_{i{,}\, 0}$: $M_i$ has no counterpart in $K'$, i.e. $A_{i{,}\, 0} = \overline{\bigcup_{j>0}A_{i{,}\, j}}$, where $\overline{\omega}$ denotes the negation of any event $\omega$; \item $A_{0{,}\, j}$: $\MJ_{\smash[t]{j}}$ has no counterpart in $K$. \end{itemize} We also write $f$ for the \emph{a priori} probability $P(\bigcup_{j>0} A_{i{,}\, j})$ that an element of $K$ has a counterpart in $K'$ (so, $P(A_{i{,}\, 0}) = 1-f$); we will see in Sects.~\ref{fractionsto} and~\ref{fractionoto} how to estimate $f$. We moreover assume that any $M_i$ has at most one counterpart in $K'$: $A_{i{,}\, j} \cap A_{i{,}\, k} = \varnothing$ if $j\neq k$. Clustering is neglected throughout the paper.
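Before turning to the calculations, it may help to view the assumptions above as a generative model. The following sketch is purely illustrative (the area, source counts, counterpart fraction and Gaussian positional scatter are invented, the latter anticipating the positional model of App.~\ref{cov}): it simulates a pair of catalogs in which a fraction $f$ of the $K$-sources has exactly one counterpart in $K'$, while the remaining $K'$-sources are uniformly distributed on $S$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only (not from the paper):
S = 1.0           # common area (unit square)
n, f = 200, 0.7   # number of K-sources, a priori counterpart fraction
sigma = 5e-3      # positional scatter of true counterparts

r_K = rng.uniform(0.0, 1.0, size=(n, 2))      # positions of M_1, ..., M_n
has_cp = rng.uniform(size=n) < f              # which M_i have a counterpart
# Counterparts: K positions plus Gaussian positional errors.
r_cp = r_K[has_cp] + rng.normal(0.0, sigma, size=(int(has_cp.sum()), 2))
# K'-sources without a counterpart in K: uniform on S.
r_bg = rng.uniform(0.0, 1.0, size=(50, 2))
r_Kp = np.vstack([r_cp, r_bg])                # the catalog K'

print(n, len(r_Kp), int(has_cp.sum()))
```

In such a simulation the events $A_{i{,}\, j}$ are known by construction, which makes it easy to test the estimators derived in the following sections against their true values.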
\section{Several-to-one associations} \label{several} In this section, we do not make any assumption on the number of $K$-sources that may be the counterpart of a given source of $K'$: this is a reasonable hypothesis if the angular resolution in $K'$ (e.g.\ \textsc{\large iras}) is much poorer than in $K$ (e.g.\ \textsc{\large leda}), since, in that case, several distinct objects of $K$ may be confused in $K'$. As evidenced by Sect.~\ref{local}, this is also the assumption implicitly made by most of the authors cited in the introduction. We call this the ``several-to-one'' case. \subsection{Probability of association: all-sky computation} \label{all-sky} We want to compute\footnote{For the sake of clarity, let us mention that we adopt the same decreasing order of precedence of operators as in \emph{Mathematica} \citep{Mathematica}: $\times$ and $/$; $\prod$; $\sum$; $+$ and $-$.}, in the several-to-one case, the probability $\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')$ of association of sources $M_i$ and $\MJ_{\smash[t]{j}}$ ($j > 0$) or the probability that $M_i$ has no counterpart ($j = 0$), knowing the coordinates of all the objects in $K$ and $K'$. Remembering that, for any events $\omega_1$, $\omega_2$ and $\omega_3$, $P(\omega_1 \mid \omega_2) = P(\omega_1 \cap \omega_2)/P(\omega_2)$ and $P(\omega_1 \cap \omega_2 \mid \omega_3) = P(\omega_1 \mid \omega_2 \cap \omega_3) \* P(\omega_2 \mid \omega_3)$, we have \begin{align} \label{Pstodef} \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C') = \frac{\Prob_{\!\sto}(A_{i{,}\, j} \cap C \cap C')}{\Prob_{\!\sto}(C \cap C')} = \frac{\Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C')}{\Prob_{\!\sto}(C \mid C')}. \end{align} We first compute $\Prob_{\!\sto}(C \mid C')$.
Using the symbol $\biguplus$ for mutually exclusive events instead of $\bigcup$, we obtain \begin{align} \Prob_{\!\sto}(C \mid C') &= \Prob_{\!\sto}\Bigl(C \cap \biguplus_{j_1=0}^{\cramped{n'}}\biguplus_{j_2=0}^{\cramped{n'}}\cdots\biguplus_{j_{n}=0}^{\cramped{n'}} \bigcap_{k=1}^n A_{k{,}\, j_k} \Bigm| C' \Bigr) = \sum_{j_1=0}^{\cramped{n'}} \sum_{j_2=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}} \Prob_{\!\sto}\Bigl(C \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr) \notag \\ &= \sum_{j_1=0}^{\cramped{n'}} \sum_{j_2=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}} \Prob_{\!\sto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) \* \Prob_{\!\sto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \bigm| C' \Bigr). \label{PstoCCpgen} \end{align} One has \begin{align} \Prob_{\!\sto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C'\Bigr) &= \Prob_{\!\sto}\Bigl(c_1 \Bigm| \bigcap_{k=2}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C'\Bigr) \* \Prob_{\!\sto}\Bigl(\bigcap_{k=2}^{n} c_k \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) \notag \\ &= \prod_{\ell=1}^{n} \Prob_{\!\sto}\Bigl(c_\ell \Bigm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C'\Bigr) \label{prod_c_cp_A} \end{align} by iteration. 
If $j_\ell \neq 0$, since $M_\ell$ is associated with $M'_{\smash[t]{j_\ell}}$ only, \begin{align} \Prob_{\!\sto}\Bigl(c_\ell \Bigm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C'\Bigr) = \Prob_{\!\sto}(c_\ell \mid A_{\ell{,}\, j_\ell} \cap c'_{\smash[t]{j_\ell}}) = \xi_{\ell{,}\, j_\ell} \* \mathrm{d}^2\vec{r_\ell}, \label{jl_non_nul} \end{align} where \[ \xi_{\ell{,}\, j_\ell} \equiv \frac{\exp\Bigl(-\frac{1}{2}\*\vec{\transpose r_{\smash[t]{\ell{,}\, j_\ell}}} \cdot \Gamma_{\smash[t]{\ell{,}\, j_\ell}}^{-1} \cdot \vec{r_{\ell{,}\, j_\ell}}\Bigr)} {2\*\pi\*(\det \Gamma_{\ell{,}\, j_\ell})^{1/2}}, \] $\vec{r_{\ell{,}\, j_\ell}} \equiv \vec{r'_{\smash[t]{j_\ell}}} - \vec{r_\ell}$ and the covariance matrix $\Gamma_{\ell{,}\, j_\ell}$ of $\vec{r_{\ell{,}\, j_\ell}}$ is computed as detailed in App.~\ref{cov}. (Note that, in the several-to-one case considered here, the computation of $\Prob_{\!\sto}(C \mid C')$ is easier than that of $\Prob_{\!\sto}(C' \mid C)$: because several $M_\ell$ may be associated with the same $\MJ_{\smash[t]{k}}$, the latter would require calculating $\Prob_{\!\sto}(c'_{\smash[t]{k}} \mid \bigcap_{\ell=1{;}\, j_\ell=k}^n {[c_\ell \cap A_{\ell{,}\, j_\ell}]})$. This does not matter in the one-to-one case studied in Sect.~\ref{one}.) If $j_\ell = 0$, since $M_\ell$ is not associated with any source in $K'$ and clustering is neglected, \begin{align} \Prob_{\!\sto}\Bigl(c_\ell \Bigm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{\cramped{n'}} c'_{\smash[t]{k}} \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k}\Bigr) = \Prob_{\!\sto}(c_\ell \mid A_{\ell{,}\, 0}) = \xi_{\ell{,}\, 0}\*\mathrm{d}^2\vec{r_\ell}, \label{jl_nul} \end{align} where $\xi_{\ell{,}\, 0} \equiv 1/S$ if we assume a uniform prior distribution for $K$-sources without a counterpart.
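In practice, $\xi_{\ell{,}\, j_\ell}$ is simply a bivariate normal density evaluated at the offset $\vec{r_{\ell{,}\, j_\ell}}$. A minimal numerical transcription may help the implementation-minded reader; the isotropic covariance matrix below is an arbitrary stand-in for the $\Gamma_{\ell{,}\, j_\ell}$ of App.~\ref{cov}.

```python
import numpy as np

def xi(r_l, r_jl, gamma):
    """xi_{l,jl} = exp(-r^T Gamma^{-1} r / 2) / (2 pi sqrt(det Gamma)),
    with r = r'_{jl} - r_l the positional offset."""
    r = np.asarray(r_jl, float) - np.asarray(r_l, float)
    quad = r @ np.linalg.inv(gamma) @ r
    return float(np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(gamma))))

# Isotropic example: 5e-3 positional scatter in each coordinate.
Gamma = (5e-3) ** 2 * np.eye(2)
close = xi([0.100, 0.200], [0.102, 0.201], Gamma)  # counterpart-like offset
far = xi([0.100, 0.200], [0.500, 0.900], Gamma)    # chance-projection offset
print(close, far)
```

Since $\xi$ integrates to unity over the plane, its values are directly comparable to the uniform density $\xi_{\ell{,}\, 0} = 1/S$.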
From Eqs.~\eqref{prod_c_cp_A}, \eqref{jl_non_nul} and~\eqref{jl_nul}, it follows that \begin{align} \label{prod_xi_sto} \Prob_{\!\sto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) &= \lambda \* \prod_{k=1}^{n} \xi_{k{,}\, j_k}, \end{align} where $\lambda \equiv \prod_{k=1}^{n} \mathrm{d}^2\vec{r_k}$. We now compute $\Prob_{\!\sto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \mid C')$. Without any other assumption, $\Prob_{\!\sto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \mid C') = \Prob_{\!\sto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k}) $. Let $m \equiv \mathop{\#}\mathopen{} \{j_k > 0{;}\,\allowbreak k \in \IE[1, n]\}$. Since a given $\MJ_{\smash[t]{\ell}}$ may be the counterpart of several $M_k$ (i.e.\ the events $(A_{k{,}\, j_k})_{k\in\IE[1,n]}$ are independent whatever the values of the indices $j_k$), \[ \Prob_{\!\sto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k}\Bigr) = \prod_{k=1}^{n} \Prob_{\!\sto}(A_{k{,}\, j_k}). \] As $\Prob_{\!\sto}(A_{k{,}\, 0}) = 1-f$ and $\Prob_{\!\sto}(A_{k{,}\, j_k}) = f/\cramped{n'}$ for $j_k > 0$, \begin{align} \label{PstoA} \Prob_{\!\sto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k}\Bigr) = \Biggl(\frac{f}{\cramped{n'}}\Biggr)^m\*(1-f)^{n-m}. 
\end{align} Hence, from Eqs.~\eqref{Pstodef}, \eqref{PstoCCpgen}, \eqref{prod_xi_sto} and~\eqref{PstoA}, \begin{align} \label{PstoCCp} \Prob_{\!\sto}(C \mid C') = \lambda\*\sum_{j_1=0}^{\cramped{n'}} \sum_{j_2=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}} {\Biggl(\frac{f}{\cramped{n'}}\Biggr)^m\*(1-f)^{n-m} \*\prod_{k=1}^{n} \xi_{k{,}\, j_k}} = \lambda\*\Lh_\sto, \end{align} where \begin{align} \label{Lh_sto} \Lh_\sto \equiv \sum_{j_1=0}^{\cramped{n'}}\sum_{j_2=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}}\prod_{k=1}^n\zeta_{k{,}\, j_k} = \prod_{k=1}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} \end{align} is the likelihood to observe the $K$-sources at their positions if the positions of $K'$-sources are known, $\zeta_{k{,}\, 0} \equiv (1-f)\*\xi_{k{,}\, 0}$ and $\zeta_{k{,}\, j_k} \equiv f\*\xi_{k{,}\, j_k}/\cramped{n'}$ if $j_k > 0$. The computation of $\Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C')$ is similar to that of $\Prob_{\!\sto}(C \mid C')$: \begin{align} \Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C') &= \Prob_{\!\sto}\Bigl(C \cap A_{i{,}\, j} \cap \biguplus_{j_1=0}^{\cramped{n'}} \cdots \biguplus_{j_{i-1}=0}^{\cramped{n'}} \biguplus_{j_{i+1}=0}^{\cramped{n'}} \cdots \biguplus_{j_{n}=0}^{\cramped{n'}} \bigcap_{\substack{k=1\\ k\neq i}}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr) = \Prob_{\!\sto}\Bigl(C \cap \biguplus_{j_1=0}^{\cramped{n'}} \cdots \biguplus_{j_{i-1}=0}^{\cramped{n'}} \biguplus_{j_{i+1}=0}^{\cramped{n'}} \cdots \biguplus_{j_{n}=0}^{\cramped{n'}} \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr) \notag \\ &= \sum_{j_1=0}^{\cramped{n'}} \cdots\sum_{j_{i-1}=0}^{\cramped{n'}} \sum_{j_{i+1}=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}} \Prob_{\!\sto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) \* \Prob_{\!\sto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr), \label{PStoACCpgen} \end{align} where we have put $j_i \equiv j$. 
Let $m^\ast \equiv \mathop{\#}\mathopen{} \{j_k > 0{;}\, k \in \IE[1, n]\}$ (indices $j_k$ are those of Eq.~\eqref{PStoACCpgen}). As for $\Prob_{\!\sto}(C \mid C')$, \begin{align} \Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C') &= \lambda\*\sum_{j_1=0}^{\cramped{n'}} \cdots \sum_{j_{i-1}=0}^{\cramped{n'}} \sum_{j_{i+1}=0}^{\cramped{n'}} \cdots \sum_{j_{n}=0}^{\cramped{n'}} {\Biggl(\frac{f}{\cramped{n'}}\Biggr)^{m^\ast}\*(1-f)^{n-m^\ast} \prod_{k=1}^{n} \xi_{k{,}\, j_k}} = \lambda\*\zeta_{i{,}\, j_i}\*\sum_{j_1=0}^{\cramped{n'}}\cdots \sum_{j_{i-1}=0}^{\cramped{n'}}\sum_{j_{i+1}=0}^{\cramped{n'}}\cdots \sum_{j_{n}=0}^{\cramped{n'}}\prod_{\substack{k=1 \\ k\neq i}}^n \zeta_{k{,}\, j_k} \notag \\ &= \lambda\*\zeta_{i{,}\, j}\*\prod_{\substack{k=1 \\ k\neq i}}^{n} \sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}. \label{PstoACCp} \end{align} Finally, from Eqs.~\eqref{Pstodef}, \eqref{PstoCCp}, \eqref{Lh_sto} and~\eqref{PstoACCp}, \begin{align} \label{Psto} \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C') &= \frac{ \zeta_{i{,}\, j}\*\prod_{\leftsubstack{k=1 \\ k\neq i}}^{n} \sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} }{ \prod_{k=1}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} } = \frac{\zeta_{i{,}\, j}}{\sum_{k=0}^{\cramped{n'}}\zeta_{i{,}\, k}} \\ &= \label{Psto2} \mathopen{}\mathclose\bgroup\left\{ \begin{aligned} \frac{f\*\xi_{i{,}\, j}}{ (1-f)\*\cramped{n'}\!/S + f\*\sum_{k=1}^{\cramped{n'}}\xi_{i{,}\, k}} & \quad \text{if } j > 0,\\ \frac{(1-f)\*\cramped{n'}\!/S}{ (1-f)\*\cramped{n'}\!/S + f\*\sum_{k=1}^{\cramped{n'}}\xi_{i{,}\, k}} & \quad \text{if } j = 0. \end{aligned} \aftergroup\egroup\right.
\end{align} The probability $\Prob_{\!\sto}(A_{0{,}\, j} \mid C \cap C')$ that $\MJ_{\smash[t]{j}}$ has no counterpart in $K$ can be computed in this way: \begin{align*} \Prob_{\!\sto}(A_{0{,}\, j} \cap C \mid C') &= \Prob_{\!\sto}\Bigl(C \cap A_{0{,}\, j} \cap \biguplus_{j_1=0}^{\cramped{n'}} \biguplus_{j_2=0}^{\cramped{n'}} \cdots \biguplus_{j_{n}=0}^{\cramped{n'}} \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C'\Bigr) = \Prob_{\!\sto}\Bigl(C \cap \biguplus_{\substack{j_1=0 \\ j_1\neq j}}^{\cramped{n'}} \biguplus_{\substack{j_2=0\\ j_2\neq j}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{n}=0\\ j_n\neq j}}^{\cramped{n'}} \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C'\Bigr) \\ &= \sum_{\substack{j_1=0 \\ j_1\neq j}}^{\cramped{n'}} \sum_{\substack{j_2=0\\ j_2\neq j}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0\\ j_n\neq j}}^{\cramped{n'}} \Prob_{\!\sto}\Bigl(C \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C'\Bigr) = \lambda\*\sum_{\substack{j_1=0 \\ j_1\neq j}}^{\cramped{n'}} \sum_{\substack{j_2=0\\ j_2\neq j}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0\\ j_n\neq j}}^{\cramped{n'}} \prod_{k=1}^{n}\zeta_{k{,}\, j_k} = \lambda\*\prod_{k=1}^{n}\sum_{\substack{j_k=0\\ j_k\neq j}}^{\cramped{n'}}\zeta_{k{,}\, j_k} \end{align*} and \begin{align} \Prob_{\!\sto}(A_{0{,}\, j} \mid C \cap C') &= \frac{\Prob_{\!\sto}(A_{0{,}\, j} \cap C \mid C')}{\Prob_{\!\sto}(C \mid C')} = \frac{\lambda\*\prod_{k=1}^{n}\sum_{\leftsubstack{j_k=0\\ j_k\neq j}}^{\cramped{n'}} \zeta_{k{,}\, j_k}}{\lambda\*\prod_{k=1}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}} = \prod_{k=1}^{n}\frac{\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} - \zeta_{k{,}\, j}}{ \sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}} = \prod_{k=1}^{n}{\Biggl(1-\frac{\zeta_{k{,}\, j}}{\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}}\Biggr)} \notag \\ & = \prod_{k=1}^{n}{(1 - \Prob_{\!\sto}[A_{k{,}\, j} \mid C \cap C'])}.
\label{PstoA0j} \end{align} \subsection{Fraction of sources with a counterpart and other unknown parameters} \label{fractionsto} \subsubsection{Estimates} Besides $f$, the probabilities $P(A_{i{,}\, j} \mid C \cap C')$ may depend on other unknown parameters, e.g.\ $\mathring\sigma$ and $\mathring\nu$ (cf.~App.~\ref{cov}). Let us write them $x_1$, $x_2$, etc., and $\vec x \equiv (x_1, x_2, \ldots)$. An estimate $\hat{\vec x}$ of $\vec x$ may be obtained by maximizing the likelihood $L$ with respect to $\vec x$ (and with the constraint $\hatf_{{\text{s:o}}} \in[0, 1]$), or, equivalently, by finding the solution $\hat{\vec x}$ of \begin{align} \label{max_Lh} \frac{\partial\lnL}{\partial\vec x} = 0. \end{align} For any parameter $x_p$, as all the $\zeta_{i{,}\, j}$ are strictly positive and $\ln\Lh_\sto = \sum_{i=1}^{n} \ln\sum_{k=0}^{\cramped{n'}}\zeta_{i{,}\, k}$ (Eq.~\eqref{Lh_sto}), \begin{align} \frac{\partial\ln\Lh_\sto}{\partial x_p} &= \sum_{i=1}^{n}\frac{\partial\ln\sum_{k=0}^{\cramped{n'}}\zeta_{i{,}\, k}}{\partial x_p} = \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}\frac{\partial\zeta_{i{,}\, j}/\partial x_p}{ \sum_{k=0}^{\cramped{n'}}\zeta_{i{,}\, k}} = \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_p}\* \frac{\zeta_{i{,}\, j}}{\sum_{k=0}^{\cramped{n'}}\zeta_{i{,}\, k}} \notag \\ &= \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_p}\* \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C'). \label{der_Lh_sto} \end{align} Let us consider in particular the case $x_p = f$. Note that $\partial\ln\zeta_{i{,}\, 0}/\partialf = -1/(1-f)$ and $\partial\ln\zeta_{i{,}\, j}/\partialf = 1/f$ for $j > 0$. 
Since $\sum_{j=0}^{\cramped{n'}} \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C') = 1$, \begin{align} \sum_{j=0}^{\cramped{n'}}\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_p}\* \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C') &= -\frac{\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C')}{1-f} + \sum_{j=1}^{\cramped{n'}} \frac{\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')}{f} = -\frac{\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C')}{1-f} + \frac{1-\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C')}{f} \notag \\ &= \frac{(1-f) - \Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C') }{ f\*(1-f)}. \label{somme_j} \end{align} Summing on $i$, we obtain \begin{align} \frac{\partial\ln\Lh_\sto}{\partialf} &= \frac{n\*(1-f) - \sum_{i=1}^{n}\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C')}{ f\*(1-f)}. \end{align} So, as expected, an estimate of the probability that a source in $K$ has a counterpart in $K'$ is given by \begin{align} \hatf_{{\text{s:o}}} &= 1 - \frac{1}{n}\* \sum_{i=1}^{n}\expandafter\hat\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C'), \label{f_est} \\ &= \frac{1}{n}\* \sum_{i=1}^{n}\sum_{j=1}^{\cramped{n'}}\expandafter\hat\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C'). \label{f_est2} \end{align} Note that, since $\partial^2\zeta_{i{,}\, j}/\partialf^2 = 0$ for all $(i, j) \in \IE[1, n]\times\IE[0, \cramped{n'}]$, \begin{align} \label{concave} \frac{\partial^2\ln\Lh_\sto}{\partialf^2} = -\sum_{i=1}^n{\Biggl(\frac{ \sum_{j=0}^{\cramped{n'}}\partial\zeta_{i{,}\, j}/\partialf }{ \sum_{j=0}^{\cramped{n'}}\zeta_{i{,}\, j}}\Biggr)^2} < 0 \end{align} for all $f$, so $\partial\ln\Lh_\sto/\partialf$ has at most one zero in $[0, 1]$: $\hatf_{{\text{s:o}}}$ is unique. 
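Since the sums over assignments factorize, the closed forms above may be checked numerically against a direct enumeration for toy catalog sizes. A minimal sketch (the $\zeta_{i{,}\, j}$ are arbitrary positive toy values here, not derived from actual positions):

```python
import itertools
import random

random.seed(1)
n, np_ = 3, 4  # toy sizes; np_ stands for n'

# zeta[i][j] > 0 for i = 1..n (rows) and j = 0..n' (columns); arbitrary here.
zeta = [[random.uniform(0.1, 2.0) for _ in range(np_ + 1)] for _ in range(n)]

def p_assoc(i, j):
    """P_sto(A_{i,j} | C, C') = zeta_{i,j} / sum_k zeta_{i,k}."""
    return zeta[i][j] / sum(zeta[i])

def p_no_counterpart(j):
    """Factorized form of Eq. (PstoA0j): M'_j has no counterpart in K."""
    prod = 1.0
    for i in range(n):
        prod *= 1.0 - p_assoc(i, j)
    return prod

def p_no_counterpart_brute(j):
    """Same quantity by direct enumeration of all assignments (j_1..j_n):
    sum of prod_k zeta_{k,j_k} over tuples avoiding j, over the full sum."""
    num = den = 0.0
    for tup in itertools.product(range(np_ + 1), repeat=n):
        w = 1.0
        for i, jk in enumerate(tup):
            w *= zeta[i][jk]
        den += w
        if j not in tup:
            num += w
    return num / den
```

The enumeration also confirms that, for each $M_i$, the probabilities sum to $1$ over $j$.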
One may also compute an estimate of the fraction $f'$ of $K'$-sources with a counterpart from \begin{align} \hatf'_{{\text{s:o}}} = 1 - \frac{1}{\cramped{n'}}\* \sum_{j=1}^{\cramped{n'}}\expandafter\hat\Prob_{\!\sto}(A_{0{,}\, j} \mid C \cap C') \label{f'} \end{align} One can easily check from Eqs.~\eqref{f_est2}, \eqref{f'} and~\eqref{PstoA0j} that $\hatf_{{\text{s:o}}}/\cramped{n'} > \hatf'_{{\text{s:o}}}/n$ in the several-to-one case. \subsubsection{Uncertainties} It may be interesting to know the uncertainties on the unknown parameters. For large numbers of sources, the covariance matrix $V$ of $\hat{\vec x}$ is asymptotically given by \begin{align} \label{cov_x} \Bigl(V^{-1}\Bigr)_{p{,}\, q} = \Biggl(-\frac{\partial^2\lnL}{\partial x_p\*\partial x_q} \Biggr)_{\vec x=\hat{\vec x}} \end{align} \citep{KS}. Let us write with a circumflex accent all the quantities calculated at $\vec x = \hat{\vec x}$. From \[ \frac{\partial^2\lnL}{\partial x_p\*\partial x_q} = \frac{1}{\expandafter P(C \mid C')}\*\frac{\partial^2 P(C \mid C')}{ \partial x_p\*\partial x_q} - \frac{1}{P^2(C \mid C')} \*\frac{\partial P(C \mid C')}{\partial x_p}\* \frac{\partial P(C \mid C')}{\partial x_q}, \] one obtains \begin{align} \label{der2_Lh} \frac{\hat\partial^2\lnL}{\hat\partial x_p\*\hat\partial x_q} = \frac{1}{\expandafter\expandafter\hat P(C \mid C')}\* \frac{\hat\partial^2 P(C \mid C')}{ \hat\partial x_p\*\hat\partial x_q}. \end{align} One has \begin{align} \label{der2_Lh2} \frac{\partial^2 \Prob_{\!\sto}(C \mid C')}{\partial x_p\*\partial x_q} = \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\partial^2\ln\zeta_{i{,}\, j}}{ \partial x_p\*\partial x_q}\*\Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C')} + \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\partial\ln\zeta_{i{,}\, j}}{ \partial x_p}\*\frac{\partial \Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C')}{ \partial x_q}}. 
\end{align} For any product of strictly positive functions $g_k$ of some variable $y$, \begin{align} \label{der_prod} \frac{\partial\prod_{k=1}^{n} g_k}{\partial y} = \sum_{i=1}^{n}{\frac{\partial g_i}{\partial y}\* \prod_{\substack{k=1\\ k\neq i}}^{n} g_k} = \sum_{i=1}^{n}{\frac{\partial\ln g_i}{\partial y}\* \prod_{k=1}^{n} g_k}, \end{align} so, using Eq.~\eqref{PstoACCp}, \begin{align} \frac{\partial \Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C')}{\partial x_q} &= \lambda\*\frac{\partial\zeta_{i{,}\, j}}{\partial x_q}\* \prod_{\substack{k=1\\ k\neq i}}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} + \lambda\*\zeta_{i{,}\, j}\*\sum_{\substack{\ell=1\\\ell\neq i}}^{n}{ \frac{\partial\sum_{j_\ell=0}^{\cramped{n'}}\zeta_{\ell{,}\, j_\ell}}{\partial x_q}\* \prod_{\substack{k=1\\ k\not\in\{i{,}\,\ell\} }}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}} \notag \\ &= \lambda\*\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_q}\*\zeta_{i{,}\, j}\* \prod_{\substack{k=1\\ k\neq i}}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k} + \lambda\*\frac{\zeta_{i{,}\, j}}{\sum_{j_i=0}^{\cramped{n'}}\zeta_{i{,}\, j_i}}\* \sum_{\substack{\ell=1\\\ell\neq i}}^{n}\sum_{j_\ell=0}^{\cramped{n'}} {\frac{\partial\ln\zeta_{\ell{,}\, j_\ell}}{\partial x_q}\* \zeta_{\ell{,}\, j_\ell}\* \prod_{\substack{k=1\\ k\neq\ell}}^{n}\sum_{j_k=0}^{\cramped{n'}}\zeta_{k{,}\, j_k}} \notag \\ &= \frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_q}\* \Prob_{\!\sto}(A_{i{,}\, j} \cap C \mid C') + \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')\* \sum_{\substack{\ell=1\\\ell\neq i}}^{n}\sum_{j_\ell=0}^{\cramped{n'}} {\frac{\partial\ln\zeta_{\ell{,}\, j_\ell}}{\partial x_q}\* \Prob_{\!\sto}(A_{\ell{,}\, j_\ell} \cap C \mid C')}. 
\label{der_P} \end{align} For $\vec x = \hat{\vec x}$, \begin{align} \sum_{\substack{\ell=1\\\ell\neq i}}^{n}\sum_{j_\ell=0}^{\cramped{n'}} {\frac{\hat\partial\ln\zeta_{\ell{,}\, j_\ell}}{\hat\partial x_q}\* \expandafter\hat \Prob_{\!\sto}(A_{\ell{,}\, j_\ell} \cap C \mid C')} &= \sum_{\ell=1}^{n}\sum_{j_\ell=0}^{\cramped{n'}} {\frac{\hat\partial\ln\zeta_{\ell{,}\, j_\ell}}{\hat\partial x_q}\* \expandafter\hat \Prob_{\!\sto}(A_{\ell{,}\, j_\ell} \cap C \mid C')} - \sum_{j_i=0}^{\cramped{n'}} {\frac{\hat\partial\ln\zeta_{i{,}\, j_i}}{\hat\partial x_q}\* \expandafter\hat \Prob_{\!\sto}(A_{i{,}\, j_i} \cap C \mid C')} \notag \\ &= - \sum_{j_i=0}^{\cramped{n'}} {\frac{\hat\partial\ln\zeta_{i{,}\, j_i}}{\hat\partial x_q}\* \expandafter\hat \Prob_{\!\sto}(A_{i{,}\, j_i} \cap C \mid C')} \label{der_P2} \end{align} since the first term on the right-hand side of the first line is zero from Eq.~\eqref{der_Lh_sto}. Finally, combining Eqs.~\eqref{der2_Lh2}, \eqref{der_P}, \eqref{der_P2} and dividing by $\expandafter\hat \Prob_{\!\sto}(C \mid C')$, we obtain \begin{align} \label{der2_Lh_sto} \frac{\hat\partial^2\ln\Lh_\sto}{\hat\partial x_p\*\hat\partial x_q} &= \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\Biggl(\frac{\hat\partial^2\ln\zeta_{i{,}\, j}}{ \hat\partial x_p\*\hat\partial x_q} + \frac{\hat\partial\ln\zeta_{i{,}\, j}}{\hat\partial x_p}\* \frac{\hat\partial\ln\zeta_{i{,}\, j}}{\hat\partial x_q}\Biggr) \*\expandafter\hat \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')} \notag \\ &\mathrel{\phantom{=}}{}- \sum_{i=1}^{n}{\Biggl( \sum_{j=0}^{\cramped{n'}}{\frac{\hat\partial\ln\zeta_{i{,}\, j}}{\hat\partial x_p} \*\expandafter\hat \Prob_{\!\sto}[A_{i{,}\, j} \mid C \cap C']}\Biggr) \* \sum_{j=0}^{\cramped{n'}}{\frac{\hat\partial\ln\zeta_{i{,}\, j}}{\hat\partial x_q} \*\expandafter\hat \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')}}. 
\end{align} In particular, for $x_p = x_q = f$, $\partial^2\ln\zeta_{i{,}\, j}/\partialf^2 + (\partial\ln\zeta_{i{,}\, j}/\partialf)^2= 0$, whether $j=0$ or not. From Eqs.~\eqref{somme_j} and~\eqref{f_est}, \begin{align} \label{der2_Lh_sto2} \frac{\hat\partial^2\ln\Lh_\sto}{\hat\partialf^2} &= -\sum_{i=1}^{n}{\Biggl(\frac{1}{\hatf_{{\text{s:o}}}} - \frac{\expandafter\hat\Prob_{\!\sto}[A_{i{,}\, 0} \mid C \cap C']}{ \hatf_{{\text{s:o}}}\*(1-\hatf_{{\text{s:o}}})}\Biggr)^2} \notag \\ &= \frac{n}{\hatf_{{\text{s:o}}}^2} - \frac{\sum_{i=1}^{n} \expandafter\hat \Prob_{\!\sto}^2(A_{i{,}\, 0} \mid C \cap C')}{ \hatf_{{\text{s:o}}}^2\*(1-\hatf_{{\text{s:o}}})^2}. \end{align} \subsection{Probability of association: local computation} \label{local} In the several-to-one case, a purely local computation of the probability of association between a given $M_i$ and some $\MJ_{\smash[t]{j}}$ ($j > 0$), or of the probability that $M_i$ has no counterpart in $K'$, is also possible. Let us consider a region $D_i$ of area $S\!_i$ containing the position of $M_i$, and such that we can safely hypothesize that the $K'$-counterpart of $M_i$, if any, will be inside. We assume that the local surface density $\rho'_{\smash[t]{i}}$ of $K'$-sources unrelated to $M_i$ is uniform on $D_i$. To avoid biasing the estimate if $M_i$ has a counterpart, $\rho'_{\smash[t]{i}}$ may be computed from the number of $K'$-sources in a region surrounding, but not overlapping, $D_i$. Besides the $A_{i{,}\, j}$, we consider the following events: \begin{itemize} \item $N'_{\smash[t]{i}}$: $D_i$ contains $n'_{\smash[t]{i}}$ sources; \item $\COORD'_{\smash[t]{i}} \equiv \bigcap_{j \in I_i} \coord'_{\smash[t]{j}}$, where $I_i \equiv \{j \mid \MJ_{\smash[t]{j}} \in D_i\}$. \end{itemize} We want to compute the probability that a source $\MJ_{\smash[t]{j}}$ in $D_i$ is the counterpart of $M_i$, given the positions of the neighbors, i.e. $\Prob_{\!\loc}(A_{i{,}\, j} \mid \COORD'_{\smash[t]{i}}\cap N'_{\smash[t]{i}})$. 
We have \begin{align*} \Prob_{\!\loc}(A_{i{,}\, j} \mid \COORD'_{\smash[t]{i}}\cap N'_{\smash[t]{i}}) &= \frac{\Prob_{\!\loc}(A_{i{,}\, j} \cap \COORD'_{\smash[t]{i}}\cap N'_{\smash[t]{i}})}{ \Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \cap N'_{\smash[t]{i}})} = \frac{\Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \cap A_{i{,}\, j} \cap N'_{\smash[t]{i}})}{ \Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \cap \biguplus_{k\in I_i\cup\{0\}} A_{i{,}\, k} \cap N'_{\smash[t]{i}})} \\ &= \frac{\Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \cap A_{i{,}\, j} \cap N'_{\smash[t]{i}})}{ \sum_{k\in I_i\cup\{0\}} \Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \cap A_{i{,}\, k} \cap N'_{\smash[t]{i}})} = \frac{\Prob_{\!\loc}(\COORD'_{\smash[t]{i}} \mid A_{i{,}\, j} \cap N'_{\smash[t]{i}})\*\Prob_{\!\loc}(A_{i{,}\, j} \cap N'_{\smash[t]{i}})}{\sum_{k\in I_i\cup\{0\}} \Prob_{\!\loc}(\COORD'_{\smash[t]{i}}\mid A_{i{,}\, k} \cap N'_{\smash[t]{i}}) \*\Prob_{\!\loc}(A_{i{,}\, k} \cap N'_{\smash[t]{i}})} \\ &=\frac{\Prob_{\!\loc}(\COORD'_{\smash[t]{i}}\mid A_{i{,}\, j} \cap N'_{\smash[t]{i}})\*\Prob_{\!\loc}(A_{i{,}\, j} \mid N'_{\smash[t]{i}})}{ \sum_{k\in I_i\cup\{0\}} \Prob_{\!\loc}(\COORD'_{\smash[t]{i}}\mid A_{i{,}\, k} \cap N'_{\smash[t]{i}}) \*\Prob_{\!\loc}(A_{i{,}\, k} \mid N'_{\smash[t]{i}})}. \end{align*} If $j>0$, $\Prob_{\!\loc}(A_{i{,}\, j} \mid N'_{\smash[t]{i}})=\Prob_{\!\loc}(\bigcup_{k\in I_i}A_{i{,}\, k} \mid N'_{\smash[t]{i}})/n'_{\smash[t]{i}}$ (one sees here why the event $N'_{\smash[t]{i}}$ was defined: otherwise, $\Prob_{\!\loc}(A_{i{,}\, j})$ could not be computed as $\Prob_{\!\loc}(\bigcup_{k\in I_i}A_{i{,}\, k})/n'_{\smash[t]{i}}$ because $n'_{\smash[t]{i}}$ would be undefined). 
Now, \[\Prob_{\!\loc}\Bigl(\bigcup_{k\in I_i}A_{i{,}\, k} \mid N'_{\smash[t]{i}}\Bigr) = \frac{\Prob_{\!\loc}( N'_{\smash[t]{i}} \cap \bigcup_{k\in I_i}A_{i{,}\, k})}{\Prob_{\!\loc}(N'_{\smash[t]{i}})} = \frac{\Prob_{\!\loc}(N'_{\smash[t]{i}}\mid \bigcup_{k\in I_i}A_{i{,}\, k})\*\Prob_{\!\loc}(\bigcup_{k\in I_i}A_{i{,}\, k})} {\Prob_{\!\loc}(N'_{\smash[t]{i}}\mid A_{i{,}\, 0})\*\Prob_{\!\loc}(A_{i{,}\, 0})+\Prob_{\!\loc}(N'_{\smash[t]{i}}\mid \bigcup_{k\in I_i}A_{i{,}\, k})\*\Prob_{\!\loc}(\bigcup_{k\in I_i}A_{i{,}\, k})}.\] If clustering is negligible, the number of sources randomly distributed with a mean surface density $\rho'_{\smash[t]{i}}$ in an area $S\!_i$ follows a Poissonian distribution, so \[\Prob_{\!\loc}\Bigl(N'_{\smash[t]{i}}\mid \bigcup_{k\in I_i}A_{i{,}\, k}\Bigr) = \frac{(\rho'_{\smash[t]{i}}\*S\!_i)^{n'_{\smash[t]{i}}-1}\*\exp(-\rho'_{\smash[t]{i}}\*S\!_i)}{(n'_{\smash[t]{i}}-1)!} \quad\text{($n'_{\smash[t]{i}}-1$ random sources in $S\!_i$)}\] and \[\Prob_{\!\loc}(N'_{\smash[t]{i}}\mid A_{i{,}\, 0}) = \frac{(\rho'_{\smash[t]{i}}\*S\!_i)^{n'_{\smash[t]{i}}}\*\exp(-\rho'_{\smash[t]{i}}\*S\!_i)}{n'_{\smash[t]{i}}!} \quad\text{($n'_{\smash[t]{i}}$ random sources in $S\!_i$)}.\] Thus, \[ \Prob_{\!\loc}(A_{i{,}\, j} \mid N'_{\smash[t]{i}}) = \mathopen{}\mathclose\bgroup\left\{\begin{aligned} \frac{f}{n'_{\smash[t]{i}}\*f+(1-f)\*\rho'_{\smash[t]{i}}\*S\!_i} & \quad \text{if } j > 0,\\ \frac{(1-f)\*\rho'_{\smash[t]{i}}\*S\!_i}{n'_{\smash[t]{i}}\*f+(1-f)\*\rho'_{\smash[t]{i}}\*S\!_i} & \quad \text{if } j = 0. \end{aligned}\aftergroup\egroup\right. 
\] For $j > 0$, \[ \Prob_{\!\loc}(\COORD'_{\smash[t]{i}}\mid A_{i{,}\, j} \cap N'_{\smash[t]{i}}) = \xi_{i{,}\, j}\*\mathrm{d}^2\vec{r'_{\smash[t]{j}}}\* \prod_{\substack{k\in I_i\\ k\neq j}} \frac{\mathrm{d}^2\vec{r'_{\smash[t]{k}}}}{S\!_i} \] (rigorously, $\xi_{i{,}\, j}$ should be replaced by $\xi_{i{,}\, j}/\Prob_{\!\loc}(\MJ_{\smash[t]{j}}\in D_i\mid A_{i{,}\, j})$, but $\Prob_{\!\loc}(\MJ_{\smash[t]{j}}\not\in D_i \mid A_{i{,}\, j})$ is negligible), and \[ \Prob_{\!\loc}(\COORD'_{\smash[t]{i}}\mid A_{i{,}\, 0}\cap N'_{\smash[t]{i}}) = \prod_{k\in I_i} \frac{\mathrm{d}^2\vec{r'_{\smash[t]{k}}}}{S\!_i}. \] Finally, \begin{equation} \label{cpsto} \Prob_{\!\loc}(A_{i{,}\, j}\mid \COORD'_{\smash[t]{i}}\cap N'_{\smash[t]{i}}) = \mathopen{}\mathclose\bgroup\left\{\begin{aligned} \frac{f\*\ensuremath{\textsc{\large lr}}_{i{,}\, j}}{(1-f)+f\*\sum_{k\in I_i}\ensuremath{\textsc{\large lr}}_{i{,}\, k}} & \quad \text{if } j > 0, \\ \frac{(1-f)}{(1-f)+f\*\sum_{k\in I_i}\ensuremath{\textsc{\large lr}}_{i{,}\, k}} & \quad \text{if } j = 0, \end{aligned}\aftergroup\egroup\right. \end{equation} where $\ensuremath{\textsc{\large lr}}_{i{,}\, k} \equiv \xi_{i{,}\, k}/\rho'_{\smash[t]{i}}$ is the ``likelihood ratio''. \emph{Mutatis mutandis}, one obtains the same result as Eq.~(14) of \citet{Pineau} and the aforementioned authors. When extended to the whole sky (i.e.\ $S\!_i \to S$), $\rho'_{\smash[t]{i}}$ is replaced by $\cramped{n'}\!/S$ in Eq.~\eqref{cpsto}, $\sum_{k\in I_i}$ by $\sum_{k=1}^{\cramped{n'}}$ and one recovers Eq.~\eqref{Psto2}. The index $\check\jmath_i$ of the most likely counterpart $M'_{\smash[t]{\check\jmath_i}}$ of $M_i$ is the value of $j > 0$ maximizing $\ensuremath{\textsc{\large lr}}_{i{,}\, j}$.
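Eq.~\eqref{cpsto} is immediate to evaluate once the likelihood ratios of the neighbors in $D_i$ are known. A minimal sketch (the function name and the numerical values of $f$ and of the likelihood ratios are purely illustrative):

```python
def local_assoc_probabilities(f, lrs):
    """Eq. (cpsto): probability that M_i is associated with neighbour M'_k
    (k > 0, with likelihood ratio lrs[k] = xi_{i,k} / rho'_i), or with
    no neighbour at all (key 0)."""
    den = (1.0 - f) + f * sum(lrs.values())
    probs = {k: f * lr / den for k, lr in lrs.items()}
    probs[0] = (1.0 - f) / den  # M_i has no counterpart in D_i
    return probs

# Three neighbours in D_i, one with a much larger likelihood ratio:
p = local_assoc_probabilities(f=0.6, lrs={1: 50.0, 2: 0.2, 3: 0.01})
```

As expected, the probabilities sum to $1$ and the high-ratio neighbor dominates.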
Usually, $\sum_{k=1;\,k\neq \check\jmath_i}^{n'_{\smash[t]{i}}}\ensuremath{\textsc{\large lr}}_{i{,}\, k}\ll \ensuremath{\textsc{\large lr}}_{i{,}\, \check\jmath_i}$, so \[ \Prob_{\!\sto}(A_{i{,}\, \check\jmath_i}\mid C \cap C') \approx \frac{f\*\ensuremath{\textsc{\large lr}}_{i{,}\, \check\jmath_i}}{(1-f)+f\*\ensuremath{\textsc{\large lr}}_{i{,}\, \check\jmath_i}}. \] As a ``poor man's'' recipe, if the value of $f$ is unknown and not too close to either $0$ or $1$, an association may be considered as true if $\ensuremath{\textsc{\large lr}}_{i{,}\, \check\jmath_i}\gg 1$ and as false if $\ensuremath{\textsc{\large lr}}_{i{,}\, \check\jmath_i}\ll 1$. Where to set the boundary between true associations and false ones is somewhat arbitrary. For a large sample, however, $f$ can be determined from the distribution of the positions of all the sources, as shown in Sect.~\ref{fractionsto}. \section{One-to-one associations} \label{one} In Sect.~\ref{several}, a given $\MJ_{\smash[t]{j}}$ may be associated with several $M_i$: the probabilities are actually asymmetric in $M_i$ and $\MJ_{\smash[t]{j}}$ and, while $\sum_{j=0}^{\cramped{n'}} \Prob_{\!\sto}(A_{i{,}\, j}\mid C \cap C') = 1$ for all $M_i$, one may well have $\sum_{i=1}^{n} \Prob_{\!\sto}(A_{i{,}\, j}\mid C \cap C') > 1$ for some sources $\MJ_{\smash[t]{j}}$. Here, we assume not only that each $K$-source is associated with at most one $K'$-source, but also that each $K'$-source is associated with at most one $K$-source. We call this the ``one-to-one'' case and denote by $\Prob_{\!\oto}$ the probabilities calculated under this assumption. As far as we know, and despite an attempt by \citet{Rutledge}, this problem has not been solved previously. Since a potential $K'$-counterpart of $M_i$ within some neighborhood $D_i$ of $M_i$ might in fact be the true counterpart of another source $M_k$ outside of $D_i$, there is no obvious way to extend the exact local several-to-one computation of Sect.~\ref{local} to the one-to-one case.
We therefore have to consider either the whole sky, as in Sect.~\ref{all-sky}, or at least a region around both $M_i$ and $\MJ_{\smash[t]{j}}$ large enough to neglect edge effects. In the case of one-to-one associations, a source of $K$ and a source of $K'$ play symmetrical roles; in particular, $\Prob_{\!\oto}(A_{i{,}\, j}) = f/\cramped{n'} = f'\!/n$. However, for practical reasons (cf.~Eq.~\eqref{binom}), we take for $K$ the catalog with fewer objects and for $K'$ the other one, so $n \leqslant \cramped{n'}$ in the following. \subsection{Probability of association} We want to compute $\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')$ for $i > 0$. We still have \begin{align} \label{Potodef} \Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C') = \frac{\Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C')}{\Prob_{\!\oto}(C \mid C')} \end{align} and \begin{align*} \Prob_{\!\oto}(C \mid C') &= \Prob_{\!\oto}\Bigl(C \cap \bigcup_{j_1=0}^{\cramped{n'}}\bigcup_{j_2=0}^{\cramped{n'}}\cdots\bigcup_{j_{n}=0}^{\cramped{n'}} \bigcap_{k=1}^n A_{k{,}\, j_k} \Bigm| C' \Bigr). \end{align*} As $A_{i{,}\, j} \cap A_{k{,}\, \ell} = \varnothing$ if $i \neq k$ and $j = \ell > 0$, this reduces to \begin{align*} \Prob_{\!\oto}(C \mid C') &= \Prob_{\!\oto}\Bigl(C \cap \biguplus_{\substack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \biguplus_{\substack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} \bigcap_{k=1}^n A_{k{,}\, j_k} \Bigm| C' \Bigr), \end{align*} where $J_0 \equiv \varnothing$ and $J_k$ is defined iteratively for all $k \in \IE[1, n]$ by $J_{k} \equiv (J_{k-1} \cup \{j_k\}) \setminus \{0\}$.
Hence, \begin{align} \Prob_{\!\oto}(C \mid C') &= \sum_{\substack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} \Prob_{\!\oto}\Bigl(C \cap \bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr) \notag \\ &= \sum_{\substack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} \Prob_{\!\oto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) \* \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr). \label{PotoCCpgen} \end{align} As in the several-to-one case, \begin{align} \label{prod_xi_oto} \Prob_{\!\oto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C'\Bigr) &= \lambda\*\prod_{k=1}^{n} \xi_{k{,}\, j_k}. \end{align} We now have to compute $\Prob_{\!\oto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \mid C') = \Prob_{\!\oto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k})$. Let $m \equiv \mathop{\#}\mathopen{} J_{n}$ and $X$ be a random variable describing the number of associations between $K$ and $K'$: \[ \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k}\Bigr) = \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| X = m\Bigr) \* \Prob_{\!\oto}(X = m) + \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| X \neq m\Bigr) \* \Prob_{\!\oto}(X \neq m). \] Since $\Prob_{\!\oto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \mid X \neq m) = 0$, one just has to compute $\Prob_{\!\oto}(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \mid X = m)$ and $\Prob_{\!\oto}(X = m)$. There are $n!/(m!\*[n-m]!)$ choices of $m$ elements among $n$ in $K$, and $\cramped{n'}!/(m!\*[\cramped{n'}-m]!)$ of $m$ elements among $\cramped{n'}$ in $K'$. 
The number of permutations of $m$ elements is $m!$, so the total number of one-to-one associations of $m$ elements from $K$ to $m$ elements of $K'$ is \[ m!\*\frac{n!}{m!\*(n-m)!}\*\frac{\cramped{n'}!}{m!\*(\cramped{n'}-m)!}. \] The inverse of this number is \begin{align} \label{PotoAm} \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| X = m\Bigr) = \frac{m!\*(n-m)!\*(\cramped{n'}-m)!}{n!\*\cramped{n'}!}. \end{align} With our definition of $K$ and $K'$, $n \leqslant \cramped{n'}$, so all the elements of $K$ may have a counterpart in $K'$ jointly. Therefore, $\Prob_{\!\oto}(X = m)$ is given by the binomial law: \begin{align} \label{binom} \Prob_{\!\oto}(X = m) = \frac{n!}{m!\*(n-m)!}\*f^m\*(1-f)^{n-m}. \end{align} From Eqs.~\eqref{PotoCCpgen}, \eqref{prod_xi_oto}, \eqref{PotoAm} and \eqref{binom}, we obtain \begin{align} \Prob_{\!\oto}(C \mid C') &= \lambda\*\sum_{\substack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} {\frac{(\cramped{n'}-m)!}{\cramped{n'}!} \* f^m \* (1-f)^{n-m}\*\prod_{k=1}^{n} \xi_{k{,}\, j_k}} \notag \\ &= \lambda\*\Lh_\oto, \label{PotoCCp} \end{align} where \begin{align} \label{Lh_oto} \Lh_\oto \equiv \sum_{\substack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}}\cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} \prod_{k=1}^n \eta_{k{,}\, j_k}, \end{align} $\eta_{k{,}\, 0} \equiv \zeta_{k{,}\, 0}$ and $\eta_{k{,}\, j_k} \equiv f\*\xi_{k{,}\, j_k}/(\cramped{n'}-\mathop{\#}\mathopen{} J_{k-1})$ if $j_k > 0$. 
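The counting argument above can be verified by brute-force enumeration for toy values of $n$ and $\cramped{n'}$ (sizes chosen only to keep the enumeration small):

```python
import itertools
from math import comb, factorial

n, np_ = 3, 4  # toy sizes with n <= n'; np_ stands for n'

def count_assignments(m):
    """Number of assignments (j_1..j_n), j_k in {0..n'}, whose nonzero
    values are pairwise distinct and exactly m of which are nonzero."""
    count = 0
    for tup in itertools.product(range(np_ + 1), repeat=n):
        nz = [j for j in tup if j > 0]
        if len(nz) == m and len(set(nz)) == m:
            count += 1
    return count

# m! * C(n, m) * C(n', m): the closed-form count of one-to-one associations
counts_ok = all(count_assignments(m) == factorial(m) * comb(n, m) * comb(np_, m)
                for m in range(n + 1))

# Eq. (binom): the number X of associations is binomial, so P(X = m) sums to 1
f = 0.37
binom_total = sum(comb(n, m) * f**m * (1 - f)**(n - m) for m in range(n + 1))
```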
$\Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C')$ is computed in the same way as $\Prob_{\!\oto}(C \mid C')$: \begin{align*} \Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C') &= \Prob_{\!\oto}\Bigl(C \cap A_{i{,}\, j} \cap \biguplus_{\substack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \biguplus_{\substack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} \bigcap_{\substack{k=1\\ k\neq i}}^n A_{k{,}\, j_k} \Bigm| C' \Bigr) \\ &= \Prob_{\!\oto}\Bigl(C \cap \biguplus_{\substack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \biguplus_{\substack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \biguplus_{\substack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} \bigcap_{k=1}^n A_{k{,}\, j_k} \Bigm| C' \Bigr), \end{align*} where $j_i \equiv j$, $J^\ast_0 \equiv \{j\} \setminus \{0\}$ and $J^\ast_{k} \equiv (J^\ast_{k-1} \cup \{j_k\}) \setminus \{0\}$ for all $k \in \IE[1, n]$, so \begin{align*} \Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C') = \sum_{\substack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots\sum_{\substack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \sum_{\substack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} \Prob_{\!\oto}\Bigl(C \Bigm| \bigcap_{k=1}^{n} A_{k{,}\, j_k} \cap C' \Bigr) \* \Prob_{\!\oto}\Bigl(\bigcap_{k=1}^{n} A_{k{,}\, j_k} \Bigm| C' \Bigr). \end{align*} Let $m^\ast \equiv \mathop{\#}\mathopen{} J^\ast_{n}$. 
As for $\Prob_{\!\oto}(C \mid C')$, \begin{align} \Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C') &= \lambda\*\sum_{\substack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots \sum_{\substack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \sum_{\substack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} {\frac{(\cramped{n'}-m^\ast)!}{\cramped{n'}!} \* f^{m^\ast} \* (1-f)^{n-m^\ast}\*\prod_{k=1}^{n} \xi_{k{,}\, j_k}} \notag \\ &= \lambda\*\eta^\ast_{i{,}\, j}\* \sum_{\substack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots \sum_{\substack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \sum_{\substack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \sum_{\substack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} \prod_{\substack{k=1\\ k\neq i}}^n \eta^\ast_{k{,}\, j_k}, \label{PotoACCp} \end{align} where $ \eta^\ast_{k{,}\, j_k} \equiv f\*\xi_{k{,}\, j_k}/(\cramped{n'}-\mathop{\#}\mathopen{} J^\ast_{k-1}) $ if $k\neq i$ and $j_k > 0$, and $\eta^\ast_{k{,}\, j_k} = \zeta_{k{,}\, j_k}$ otherwise. Finally, from Eqs.~\eqref{Potodef}, \eqref{PotoCCp}, \eqref{Lh_oto} and~\eqref{PotoACCp}, \begin{align} \label{Poto} \Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C') &= \frac{ \zeta_{i{,}\, j}\* \sum_{\leftsubstack{j_1=0 \\ j_1\not\in J^\ast_0}}^{\cramped{n'}} \cdots \sum_{\leftsubstack{j_{i-1}=0 \\ j_{i-1}\not\in J^\ast_{i-2}}}^{\cramped{n'}} \sum_{\leftsubstack{j_{i+1}=0 \\ j_{i+1}\not\in J^\ast_{i}}}^{\cramped{n'}} \cdots \sum_{\leftsubstack{j_{n}=0 \\ j_{n}\not\in J^\ast_{n-1}}}^{\cramped{n'}} \prod_{\leftsubstack{k=1\\ k\neq i}}^n \eta^\ast_{k{,}\, j_k} }{ \sum_{\leftsubstack{j_1=0 \\ j_1\not\in J_0}}^{\cramped{n'}} \sum_{\leftsubstack{j_2=0 \\ j_2\not\in J_1}}^{\cramped{n'}} \cdots \sum_{\leftsubstack{j_{n}=0 \\ j_{n}\not\in J_{n-1}}}^{\cramped{n'}} \prod_{k=1}^n \eta_{k{,}\, j_k} }. 
\end{align} The probability that a source $\MJ_{\smash[t]{j}}$ has no counterpart in $K$ is simply given by \[ \Prob_{\!\oto}(A_{0{,}\, j} \mid C \cap C') = 1-\sum_{k=1}^{n} \Prob_{\!\oto}(A_{k{,}\, j}\mid C \cap C'). \] \subsection{Fraction of sources with a counterpart and other unknown parameters} \label{fractionoto} \subsubsection{Estimates} As in the several-to-one case, an estimate $\hat{\vec x}_{{\text{o:o}}}$ of the set $\vec x$ of unknown parameters may be obtained by solving Eq.~\eqref{max_Lh} (with the constraint $\hatf_{{\text{o:o}}} \in [0, n/\cramped{n'}]$). As the number of terms in $\Lh_\oto$ grows exponentially with $n$ and $\cramped{n'}$, Eq.~\eqref{Lh_oto} seems useless for this purpose. Fortunately, the computation of $\Lh_\oto$ is not necessary if the probabilities $\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')$ are known (we will see in Sect.~\ref{impl_oto} how to approximate these). Indeed, for any parameter $x_p$, let us show that we get the same result (Eq.~\eqref{der_Lh_sto}) as in the several-to-one case. Using Eq.~\eqref{der_prod}, we obtain \begin{equation} \label{der_Lh_x_gauche} \frac{\partial \Prob_{\!\oto}(C \mid C')}{\partial x_p} = \lambda \* \sum_{\substack{j_1=0\\j_1\notin J_0}}^{\cramped{n'}}\sum_{\substack{j_2=0\\j_2\notin J_1}}^{\cramped{n'}} \cdots\sum_{\substack{j_{n}=0\\j_{n}\notin J_{n-1}}}^{\cramped{n'}}\sum_{i=1}^{n}{ \frac{\partial\ln\eta_{i{,}\, j_i}}{\partial x_p}\* \prod_{k=1}^{n}\eta_{k{,}\, j_k}}. 
\end{equation} The expression of $\Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C')$ may also be written \[ \Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C') = \lambda \* \sum_{\substack{j_1=0\\j_1\notin J_0}}^{\cramped{n'}}\sum_{\substack{j_2=0\\j_2\notin J_1}}^{\cramped{n'}} \cdots\sum_{\substack{j_{n}=0\\j_{n}\notin J_{n-1}}}^{\cramped{n'}} {\mathbf{1}(j_i = j)\*\prod_{k=1}^{n} \eta_{k{,}\, j_k}}, \] where $\mathbf{1}$ is the indicator function (i.e.\ $\mathbf{1}(j_i = j) = 1$ if proposition ``$j_i = j$\kern1pt'' is true and $\mathbf{1}(j_i = j) = 0$ otherwise), so \begin{align} \label{der_Lh_x_droite} \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_p} \*\Prob_{\!\oto}(A_{i{,}\, j} \cap C \mid C')} &= \lambda\*\sum_{i=1}^{n}\sum_{\substack{j_1=0\\j_1\notin J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0\\j_2\notin J_1}}^{\cramped{n'}} \cdots\sum_{\substack{j_{n}=0\\j_{n}\notin J_{n-1}}}^{\cramped{n'}}\sum_{j=0}^{\cramped{n'}} {\mathbf{1}(j_i = j)\*\frac{\partial\ln\zeta_{i{,}\, j}}{\partial x_p}\* \prod_{k=1}^{n}\eta_{k{,}\, j_k}} \notag \\ &= \lambda\*\sum_{i=1}^{n}\sum_{\substack{j_1=0\\j_1\notin J_0}}^{\cramped{n'}} \sum_{\substack{j_2=0\\j_2\notin J_1}}^{\cramped{n'}} \cdots\sum_{\substack{j_{n}=0\\j_{n}\notin J_{n-1}}}^{\cramped{n'}} {\frac{\partial\ln\zeta_{i{,}\, j_i}}{\partial x_p}\* \prod_{k=1}^{n}\eta_{k{,}\, j_k}}. \end{align} If $j_i = 0$, $\eta_{i{,}\, j_i} = \zeta_{i{,}\, j_i}$; and if $j_i > 0$, the numerators of $\eta_{i{,}\, j_i}$ and $\zeta_{i{,}\, j_i}$ are the same and their denominators do not depend on $x_p$: in all cases, $\partial\ln\eta_{i{,}\, j_i}/\partial x_p = \partial\ln\zeta_{i{,}\, j_i}/\partial x_p$. The right-hand sides of Eqs.~\eqref{der_Lh_x_gauche} and~\eqref{der_Lh_x_droite} are therefore identical. 
Dividing their left-hand sides by $\Prob_{\!\oto}(C \mid C')$, one obtains again \begin{equation} \label{der_Lh_x} \frac{\partial\ln\Lh_\oto}{\partial x_p} = \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\partial\ln\zeta_{i{,}\, j}}{ \partial x_p}\*\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')}. \end{equation} For $x_p = f$, one still has $\partial\ln\zeta_{i{,}\, 0}/\partialf = -1/(1-f)$ and $\partial\ln\zeta_{i{,}\, j}/\partialf = 1/f$ if $j > 0$, so, as in the several-to-one case, \begin{align} \frac{\partial\ln\Lh_\oto}{\partialf} &= \frac{n\*(1-f) - \sum_{i=1}^{n} \Prob_{\!\oto}(A_{i{,}\, 0} \mid C \cap C')}{f\*(1-f)}. \end{align} \subsubsection{Uncertainties} Regarding uncertainties on the $x_p$, Eqs.~\eqref{cov_x}, \eqref{der2_Lh} and \eqref{der2_Lh2} are valid in the one-to-one case too, so, from Eq.~\eqref{der_Lh_x}, \[ \frac{\hat\partial^2 \ln\Lh_\oto}{\hat\partial x_p\*\hat\partial x_q} = \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\hat\partial^2\ln\zeta_{i{,}\, j}}{ \hat\partial x_p\*\hat\partial x_q}\* \expandafter\hat\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')} + \sum_{i=1}^{n}\sum_{j=0}^{\cramped{n'}}{\frac{\hat\partial\ln\zeta_{i{,}\, j}}{ \hat\partial x_p}\*\frac{\hat\partial \Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')}{ \hat\partial x_q}}. \] Contrary to the several-to-one case, no simple exact analytic expression of the terms $\hat\partial \Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')/\hat\partial x_q$ could be obtained. These derivatives may be computed numerically using finite differences; however, unless the fraction of sources having several likely counterparts is high, Eqs.~\eqref{der2_Lh_sto} and~\eqref{der2_Lh_sto2} should provide a more convenient approximation of the covariance matrix of $\hat{\vec x}_{{\text{o:o}}}$. 
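Although the one-to-one formulae are intractable for real catalogs, they can be evaluated exactly for toy sizes by enumerating all valid assignments, which is useful for testing approximations. A sketch (the $\xi_{i{,}\, j}$ are arbitrary positive toy values, with $\xi_{i{,}\, 0}$ playing the role of the no-counterpart density):

```python
import itertools
import random
from math import factorial

random.seed(2)
n, np_ = 3, 4  # toy sizes with n <= n'; np_ stands for n'
f = 0.5
xi = [[random.uniform(0.1, 2.0) for _ in range(np_ + 1)] for _ in range(n)]

def weight(tup):
    """Unnormalized probability of C and of the assignment (j_1..j_n):
    (n'-m)!/n'! * f^m * (1-f)^(n-m) * prod_k xi_{k, j_k}."""
    m = sum(1 for j in tup if j > 0)
    w = factorial(np_ - m) / factorial(np_) * f**m * (1.0 - f)**(n - m)
    for i, jk in enumerate(tup):
        w *= xi[i][jk]
    return w

# Valid one-to-one assignments: nonzero j_k pairwise distinct
valid = [t for t in itertools.product(range(np_ + 1), repeat=n)
         if len([j for j in t if j > 0]) == len({j for j in t if j > 0})]
total = sum(weight(t) for t in valid)

def p_oto(i, j):
    """Exact Eq. (Poto) by enumeration (feasible only for tiny n, n')."""
    return sum(weight(t) for t in valid if t[i - 1] == j) / total
```

In the one-to-one case, the probabilities must sum to $1$ over $j$ for each $M_i$ and to at most $1$ over $i$ for each $\MJ_{\smash[t]{j}}$, which the enumeration confirms.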
\section{Practical implementation} \label{practical} \subsection{Several-to-one case} \subsubsection{Neighbors only!} \label{neighbors} In the several-to-one case, the computation of the probability of association $\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')$ between $M_i$ and $\MJ_{\smash[t]{j}}$ from Eq.~\eqref{Psto} poses no problem if $f$ and the positional uncertainties are known. However, the number of calculations for the whole sample or for the determination of $\hat{\vec x}$ is of the order of $n\*\cramped{n'}{}^2$. As $\zeta_{i{,}\, k}$ rapidly tends to $0$ when the angular distance $r_{i{,}\, k}$ between $M_i$ and $\MJ_{\smash[t]{k}}$ increases, there is no need to sum from $k = 1$ to $\cramped{n'}$ in Eq.~\eqref{Psto}, nor to compute explicitly all the $\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')$. If $R$ is some angular distance above which $\xi_{i{,}\, k} \ll \cramped{n'}\!/S$, one may set $\xi_{i{,}\, k}$ to $0$ (and $\Prob_{\!\sto}(A_{i{,}\, k})$ too) if $r_{i{,}\, k} > R$ and replace the sums $\sum_{k=1}^{\cramped{n'}}$ by $\sum_{k=1;\, r_{i{,}\, k}\leqslant R}^{\cramped{n'}}$. In fact, for most $M_i$, one does not even need to test whether $r_{i{,}\, k} \leqslant R$ for each $\MJ_{\smash[t]{k}} \in K'$. Let us denote by $E_i$ the domain of right ascensions $\alpha'$ outside which no point $M'$ of declination $\delta'$ closer than $R$ to $M_i$ may be found. The angular distance $\psi$ between $M'$ and $M_i$ is given (cf.~Eq.~\eqref{psi}) by \[ \cos\psi = \cos(\alpha'-\alpha_i)\*\cos\delta_i\*\cos\delta' + \sin\delta_i\*\sin\delta'. \] If $\delta_i \in [-\pi/2+R, \pi/2-R]$, the minimum of $\cos(\alpha'-\alpha_i)$ under the constraint $\cos\psi \geqslant \cos R$ is reached when $\sin\delta' = \sin\delta_i/{\cos R}$ and \[ \cos(\alpha'-\alpha_i) = \frac{\!\sqrt{\cos^2 R - \sin\mathclose{}^2\,\delta_i}}{\cos\delta_i}. \] Let $\Delta_i \equiv \arccos\Bigl(\!\sqrt{\cos^2 R - \sin\mathclose{}^2\,\delta_i}/{\cos\delta_i}\Bigr)$.
The domain $E_i$ is given by \begin{align*} E_i = \mathopen{}\mathclose\bgroup\left\{ \begin{aligned} &[0, \alpha_i + \Delta_i - 2\*\pi] \cup [\alpha_i-\Delta_i, 2\*\pi] && \text{if } \alpha_i + \Delta_i > 2\*\pi, \\ &[0, \alpha_i + \Delta_i] \cup [\alpha_i-\Delta_i + 2\*\pi, 2\*\pi] && \text{if } \alpha_i - \Delta_i < 0, \\ &[\alpha_i - \Delta_i, \alpha_i+\Delta_i] && \text{otherwise}. \end{aligned} \aftergroup\egroup\right. \end{align*} If $\delta_i \in [-\pi/2, {-}\pi/2+R] \cup [\pi/2-R, \pi/2]$, one has $E_i = [0, 2\*\pi]$. For a catalog $K'$ ordered by increasing right ascension (if it is not, sorting it is the first step), one may easily find the subset of indices $k$ for which $\alpha'_{\smash[t]{k}} \in E_i$. For instance, if $E_i = [\alpha_i - \Delta_i, \alpha_i+\Delta_i]$, one just has to find by dichotomy (binary search) the indices $k^-$ and $k^+$ such that $\alpha'_{\smash[t]{k^--1}} < \alpha_i - \Delta_i \leqslant \alpha'_{\smash[t]{k^-}}$ and $\alpha'_{\smash[t]{k^+}} \leqslant \alpha_i + \Delta_i < \alpha'_{\smash[t]{k^++1}}$. The sums $\sum_{k=1;\, r_{i{,}\, k}\leqslant R}^{\cramped{n'}}$ may then be replaced by $\sum_{k=k^-;\, r_{i{,}\, k}\leqslant R}^{k^+}$. In all cases, the sum may be further restricted to sources with a declination $\delta'_{\smash[t]{k}} \in [\delta_i-R, \delta_i+R] \cap [-\pi/2, \pi/2]$. \subsubsection{Fraction of sources with a counterpart} All the probabilities depend on $f$ and, possibly, other unknown parameters like $\mathring\sigma$ and $\mathring\nu$. These parameters may be found by solving Eq.~\eqref{max_Lh} using Eq.~\eqref{der_Lh_sto}. If the fraction of sources with a counterpart is the only unknown, the $\xi_{i{,}\, j}$ need to be computed only once and $f$ may be easily determined from Eq.~\eqref{f_est}. Denote by $g$ the function \begin{align*} g\colon [0, 1] & \to \mathbb{R}, \\ f &\mapsto 1-\frac{1}{n}\*\sum_{i=1}^n\Prob_{\!\sto}(A_{i{,}\, 0} \mid C \cap C').
\end{align*} Let us show that, for any $f_0 \in \mathopen]0, 1\mathclose[$, the sequence $(f_k)_{k\in\mathbb{N}}$ defined by $f_{k+1} \equiv g(f_k)$ tends to $\hatf$. First, note that \[ g(f) = f + \frac{f\*(1-f)}{n} \* \frac{\partial\ln\Lh_\sto}{ \partialf}. \] The only fixed points of $g$ are hence $0$, $1$ and $\hatf$. As $\partial^2\ln\Lh_\sto/\partialf^2 < 0$ (Eq.~\eqref{concave}), one has $\partial\ln\Lh_\sto/\partialf \geqslant 0$ and thus $g(f) \geqslant f$ for $f \in [0, \hatf]$; similarly, $\partial\ln\Lh_\sto/\partialf \leqslant 0$ and $g(f) \leqslant f$ for $f \in [\hatf, 1]$. Because \[ \frac{\mathrm{d} g}{\mathrm{d} f} = \frac{1}{n\*\cramped{n'}}\*\sum_{i=1}^n \frac{\xi_{i{,}\, 0} \* \sum_{k=1}^{\cramped{n'}} \xi_{i{,}\, k}}{ (\sum_{k=0}^{\cramped{n'}} \zeta_{i{,}\, k})^2} > 0, \] $g$ is also an increasing function. Let us consider the case $f_0 \in [0, \hatf]$. If $f_k \leqslant \hatf$, then $g(f_k) \geqslant f_k$ and $g(f_k) \leqslant g(\hatf) = \hatf$. As $g(f_k) = f_{k+1}$, $(f_k)_{k\in\mathbb{N}}$ is an increasing sequence bounded from above by $\hatf$: it therefore converges in $[f_0, \hatf]$. Because $g$ is continuous and $\hatf$ is the only fixed point in this interval, $(f_k)_{k\in\mathbb{N}}$ tends to $\hatf$. Similarly, if $f_0 \in [\hatf, 1]$, $(f_k)_{k\in\mathbb{N}}$ is a decreasing sequence converging to $\hatf$. \subsection{One-to-one case} \label{impl_oto} Everything said for the several-to-one case still holds in the one-to-one case.
Incidentally, as the former is computationally much simpler than the latter, it is a good idea to compute first $\hat{\vec x}_{\text{s:o}}$ and the probabilities $\expandafter\hat\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')$: as $\hatf_{{\text{s:o}}}/\cramped{n'} > \hatf'_{{\text{s:o}}}/n$ and $\hatf_{{\text{o:o}}}/\cramped{n'} = \hatf'_{{\text{o:o}}}/n$, the several-to-one assumption is probably correct if $\hatf_{{\text{s:o}}}/\cramped{n'} \gg \hatf'_{{\text{s:o}}}/n$; and if not, one may first test the one-to-several (subscript ``${\text{o:s}}$'' hereafter) assumption, i.e. reverse the roles of $K$ and $K'$ in all the formulae of Sect.~\ref{several}, and adopt it if $\hatf_{{\text{o:s}}}/\cramped{n'} \ll \hatf'_{{\text{o:s}}}/n$. Ideally, one would compare the likelihood of each assumption and adopt the most likely one. While $\expandafter\hat\Lh_\sto$ and $\hatL_{{\text{o:s}}}$ are easily computed, no convenient expression was found for $\expandafter\hat\Lh_\oto$. However, if $\ln\expandafter\hat\Lh_\sto$ and $\ln\hatL_{{\text{o:s}}}$ are of the same order, this provides some hint that the one-to-one case (or maybe the several-to-several one!) should be considered. Even then, $\hat{\vec x}_{\text{s:o}}$ will still be a good starting point to find $\hat{\vec x}_{\text{o:o}}$ and there will be no need to compute $\expandafter\hat\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')$ for all couples $(i, j)$ such that $\expandafter\hat\Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C') \approx \hatP_{{\text{o:s}}}(A_{i{,}\, j} \mid C \cap C') \approx 1$. The results of Sect.~\ref{fractionoto} are given in terms of $\Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')$. The only difficulty is to estimate this probability from Eq.~\eqref{Poto}. Because of the combinatorial explosion of the number of terms, an exact computation is hopeless. An approximate value might however be obtained in the following way. 
For any $M_i$, let $\phi$ be a permutation on $K$ ordering the elements $M_{\phi(1)}$, $M_{\phi(2)}$, \textellipsis, $M_{\phi(n)}$ by increasing angular distance to $M_i$. For $j=0$ or $\MJ_{\smash[t]{j}}$ in the neighborhood of $M_i$, and for any $\ell \in \IE[1, n]$, define \begin{align} \label{Pl} P_\ell(A_{i{,}\, j} \mid C \cap C') &\equiv \frac{\zeta_{i{,}\, j}\* \sum_{\leftsubstack{j_2=0 \\ j_2\not\in J^{\phi{,}\,\ast}_1}}^{\cramped{n'}} \cdots \sum_{\leftsubstack{j_\ell=0 \\ j_\ell\not\in J^{\phi{,}\,\ast}_{\ell-1}}}^{\cramped{n'}} \prod_{k=2}^\ell \eta^{\phi{,}\,\ast}_{k{,}\, j_k} }{ \sum_{\leftsubstack{j_1=0 \\ j_1\not\in J^\phi_0}}^{\cramped{n'}} \sum_{\leftsubstack{j_2=0 \\ j_2\not\in J^\phi_1}}^{\cramped{n'}} \cdots \sum_{\leftsubstack{j_\ell=0 \\ j_\ell\not\in J^\phi_{\ell-1}}}^{\cramped{n'}} \prod_{k=1}^\ell \eta^\phi_{k{,}\, j_k} }, \end{align} where $J^{\phi{,}\,\ast}_1 \equiv \{j\} \setminus \{0\}$, $J^{\phi{,}\,\ast}_{k} \equiv (J^{\phi{,}\,\ast}_{k-1} \cup \{j_k\}) \setminus \{0\}$ for all $k \in \IE[2, n]$, $J^\phi_k \equiv J_k$ for all $k$, \[ \eta^\phi_{k{,}\, j_k} \equiv \frac{f\*\xi_{\phi(k){,}\, j_k}}{\cramped{n'}-\mathop{\#}\mathopen{} J^\phi_{k-1}} \quad \text{and}\quad \eta^{\phi{,}\,\ast}_{k{,}\, j_k} \equiv \frac{f\*\xi_{\phi(k){,}\, j_k}}{\cramped{n'}-\mathop{\#}\mathopen{} J^{\phi{,}\,\ast}_{k-1}} \quad \text{if } j_k > 0, \] and $\eta^\phi_{k{,}\, 0} \equiv \eta^{\phi{,}\,\ast}_{k{,}\, 0} \equiv \zeta_{\phi(k){,}\, 0}$. As $\phi(1) = i$, $P_1(A_{i{,}\, j} \mid C \cap C') = \Prob_{\!\sto}(A_{i{,}\, j} \mid C \cap C')$ (cf.~Eq.~\eqref{Psto}): at first order, we obtain the same result as in the several-to-one case. 
Since the influence of other $K$-sources on the result decreases very fast with their angular distance to $M_i$ and $\MJ_{\smash[t]{j}}$ if $M_i$ and $\MJ_{\smash[t]{j}}$ are close to each other, $P_\ell(A_{i{,}\, j} \mid C \cap C')$ should rapidly converge to $P_{n}(A_{i{,}\, j} \mid C \cap C') = \Prob_{\!\oto}(A_{i{,}\, j} \mid C \cap C')$, even for small values of $\ell$. Because of the recursive sums in Eq.~\eqref{Pl}, the computation must in practice be further restricted to sources $\MJ_{\smash[t]{k}}$ in the neighborhood of $M_i$ and $\MJ_{\smash[t]{j}}$, as explained in Sect.~\ref{neighbors}.
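As an illustration of the neighbor search of Sect.~\ref{neighbors}, the computation of $\Delta_i$ and of the subset of indices $k$ with $\alpha'_{\smash[t]{k}} \in E_i$ may be sketched as follows (a sketch only, not a reference implementation: function names are ours, angles are in radians, and \texttt{alpha\_sorted} contains the right ascensions of $K'$ in increasing order):

```python
import bisect
import math

def half_width(delta_i, R):
    """Right-ascension half-width Delta_i of the window E_i around
    declination delta_i, for a search radius R (all in radians).
    Inside the polar caps, |delta_i| >= pi/2 - R, E_i is the whole circle."""
    if abs(delta_i) >= math.pi / 2 - R:
        return math.pi
    # Delta_i = arccos( sqrt(cos^2 R - sin^2 delta_i) / cos(delta_i) )
    c = math.sqrt(math.cos(R) ** 2 - math.sin(delta_i) ** 2) / math.cos(delta_i)
    return math.acos(min(1.0, c))

def window_indices(alpha_sorted, alpha_i, delta_i, R):
    """Indices k with alpha_sorted[k] in E_i, handling the wrap-around
    cases at 0 and 2*pi; the retained candidates must still be checked
    against r_{i,k} <= R (and may be pre-filtered on declination)."""
    two_pi = 2 * math.pi
    d = half_width(delta_i, R)
    lo, hi = alpha_i - d, alpha_i + d
    if hi - lo >= two_pi:  # polar caps: E_i = [0, 2*pi]
        return list(range(len(alpha_sorted)))
    if lo < 0:             # E_i = [0, hi] U [lo + 2*pi, 2*pi]
        return (list(range(bisect.bisect_right(alpha_sorted, hi))) +
                list(range(bisect.bisect_left(alpha_sorted, lo + two_pi),
                           len(alpha_sorted))))
    if hi > two_pi:        # E_i = [0, hi - 2*pi] U [lo, 2*pi]
        return (list(range(bisect.bisect_right(alpha_sorted, hi - two_pi))) +
                list(range(bisect.bisect_left(alpha_sorted, lo),
                           len(alpha_sorted))))
    return list(range(bisect.bisect_left(alpha_sorted, lo),
                      bisect.bisect_right(alpha_sorted, hi)))
```

The two \texttt{bisect} calls play the role of the binary searches for $k^-$ and $k^+$, so the cost per $K$-source is logarithmic in $\cramped{n'}$; the candidates returned must then be filtered on $\delta'_{\smash[t]{k}}$ and on $r_{i{,}\, k} \leqslant R$.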
\section{Supplemental Material} \label{supplement} \subsection{Experimental Methods} In the experiments, we employ an optically-trapped cloud of $^6$Li atoms in a 50-50 mixture of the two lowest hyperfine states, which is tuned to a broad collisional (Feshbach) resonance in a bias magnetic field of 834 G, and cooled by evaporation. The initial energy per particle $\widetilde{E}$ is measured from the trapped cloud profile, as discussed below. A focused CO$_2$ laser beam forms the cigar-shaped optical trap with a transverse aspect ratio of 1:2.7, which enables an observation of transverse elliptic flow on a short time scale and a precise measurement of the static shear viscosity even at low temperature, where the shear viscosity is small. The transverse aspect ratio is controlled by using two sets of cylindrical ZnSe lenses. One set is placed just after the acousto-optic modulator that controls the laser intensity. A second set is placed just before an expansion telescope. The telescope increases the trapping beam radii before focusing into a high vacuum chamber, where the optical trap is loaded from a standard magneto-optical trap. The first set of cylindrical lenses adjusts the transverse aspect ratio of the focused beam, while the second set matches the beam curvatures to achieve a common focal plane. To observe the expansion dynamics, the cloud is released from the trap and the cloud profile is measured as a function of time after release in all three dimensions using two identical CCD cameras, which simultaneously image different spin states to avoid cross-saturation. The magnifications of the imaging systems are measured by translating the trap focus. The measured magnifications yield average axial dimensions $\sigma_z$ that are consistent within 1\%. To obtain the most precise measurements of the cloud profile, we adjust the effective magnification of one camera so that the average axial dimensions precisely agree.
In this way, the cloud radii $\sigma_i$ in all three dimensions are consistently measured, to determine the aspect ratios $\sigma_x/\sigma_y$, $\sigma_x/\sigma_z$, and $\sigma_y/\sigma_z$, as well as the mean square cloud radius, $\langle {\mathbf{r}}^2\rangle$. Two-dimensional density distributions are fit to the cloud profiles to extract the cloud radii. For fast data handling, gaussian profiles are assumed for most of the data and a zero-temperature Thomas-Fermi profile is assumed for the lowest energies. Both types of fit profiles are compared to full finite-temperature Thomas-Fermi profiles to estimate multiplicative corrections to the cloud radii, which are needed to correct for the small error arising from the form of the fit functions. We derive an exact, model-independent evolution equation for $\langle {\mathbf{r}}^2\rangle$ based on hydrodynamics and energy conservation in \S~\ref{sec:rsq}. This enables precise characterization of scale-invariance and local thermodynamic equilibrium in an expanding cloud. The primary result, Eq.~\ref{eq:3.1}, is independent of the shear viscosity and includes the corrections to the flow arising from the bulk viscosity and the deviation $\Delta p\equiv p-\frac{2}{3}\,{\cal E}$ of the pressure from the scale invariant equation of state, $p=\frac{2}{3}\,{\cal E}$. We also include the potential energy arising from the finite bias magnetic field curvature, as required for our precision measurements. The pressure change $\Delta p$ is determined for the high temperature limit in \S~\ref{sec:highT}. Then the method of estimating the bulk viscosity is described in \S~\ref{sec:bulk}. Finally, in \S~\ref{sec:parameters}, we discuss the method of fitting the mean square size $\langle {\mathbf{r}}^2\rangle$ data, when the bias field is tuned both to resonance and off-resonance in the large scattering length regime. 
\subsection{Expansion of the Mean-Square Cloud Radius} \label{sec:rsq} We employ a hydrodynamic description for a single component fluid~\cite{CaoViscosity,CaoNJP}, where the velocity field $\mathbf{v}(\mathbf{x},t)$ is determined by the scalar pressure and the viscosity pressure tensor, \begin{eqnarray} n\,m\left(\partial_t +\mathbf{v}\cdot\nabla\right)v_i&=&-\partial_i p + \sum_j \partial_j (\eta\,\sigma_{ij}+\zeta_B\,\sigma^{'}\delta_{ij})\nonumber \\ & &-n\,\partial_i U_{total}. \label{eq:force} \end{eqnarray} Here $p$ is the scalar pressure and $m$ is the atom mass. $U_{total}$ is the total trapping potential energy arising from the optical trap and the bias magnetic field curvature. The second term on the right describes the friction forces arising from both shear $\eta$ and bulk $\zeta_B$ viscosities, where $\sigma_{ij}=\partial v_i/\partial x_j+\partial v_j/\partial x_i-2\delta_{ij}\nabla\cdot\mathbf{v}/3$ and $\sigma^{'}\equiv\nabla\cdot\mathbf{v}$. Current conservation for the density $n(\mathbf{x},t)$ requires \begin{equation} \frac{\partial n}{\partial t}+\nabla\cdot(n{\mathbf{v}})=0. \label{eq:ncons} \end{equation} Finally, consistent with Eq.~\ref{eq:force} and Eq.~\ref{eq:ncons}, conservation of total energy is described by \begin{equation} \frac{d}{d t}\int d^3{\mathbf{r}}\left(n\frac{1}{2}m{\mathbf{v}}^2+{\cal E}+n\,U_{total}\right)=0. \label{eq:energycons} \end{equation} Here, the first term is the kinetic energy arising from the velocity field and ${\cal E}$ is the internal energy density of the gas. As shown below, Eq.~\ref{eq:energycons} will play an important role in determining a general evolution equation for the volume integral of the pressure in both the scale-invariant regime and away from scale-invariance. To explore scale invariance for an expanding cloud without creating a spherical trap, we measure the mean-square cloud radius, $\langle{\mathbf{r}}^2\rangle$, which is a scalar quantity. 
In this section, we derive generally the equation of motion for $\langle{\mathbf{r}}^2\rangle$, with no simplifying assumptions, except that of a single component hydrodynamically expanding fluid. This approach is appropriate in the normal fluid regime above the superfluid transition temperature as well as in the superfluid regime when the normal and superfluid components move together~\cite{StringariBulk}. We show that in the scale-invariant regime at a Feshbach resonance, where $p-\frac{2}{3}\,{\cal E}$ and $\zeta_B$ should be $0$, conservation of total energy leads to {\it ballistic} expansion of $\langle{\mathbf{r}}^2\rangle$ for a hydrodynamic gas. Away from resonance, the departure from scale-invariance is determined by the change in the equation of state, characterized by the conformal symmetry breaking pressure $\Delta p\equiv p-\frac{2}{3}{\cal E}$ and a finite bulk viscosity $\zeta_B$. We begin by noting that for each direction $i=x,y,z$, the mean square size $\langle x_i^2\rangle\equiv\frac{1}{N}\int d^3{\mathbf{r}}\,n({\mathbf{r}},t)\,x_i^2$ obeys \begin{eqnarray} \frac{d\langle x_i^2\rangle}{d t}&=&\frac{1}{N}\int d^3{\mathbf{r}}\,\frac{\partial n}{\partial t}x_i^2=\frac{1}{N}\int d^3{\mathbf{r}}\,[-\nabla\cdot(n{\mathbf{v}})]x_i^2\nonumber\\ &=&\frac{1}{N}\int d^3{\mathbf{r}}\,n\,{\mathbf{v}}\cdot\nabla x_i^2=2\langle x_i\,v_i\rangle, \label{eq:xsq} \end{eqnarray} where $N$ is the total number of atoms. We have used integration by parts and $n=0$ for $x_i\rightarrow\pm\infty$ to obtain the second line. Similarly, \begin{eqnarray} \frac{d\langle x_iv_i\rangle}{d t}&=&\frac{1}{N}\int d^3{\mathbf{r}}\,n\,x_i\frac{\partial v_i}{\partial t}+\frac{1}{N}\int d^3{\mathbf{r}}\,\frac{\partial n}{\partial t}\,x_i v_i\nonumber\\ &=&\frac{1}{N}\int d^3{\mathbf{r}}\,n\,x_i\frac{\partial v_i}{\partial t}+\frac{1}{N}\int d^3{\mathbf{r}}\,n\,{\mathbf{v}}\cdot\nabla (x_iv_i)\nonumber\\ &=&\langle x_i(\partial_t+{\mathbf{v}}\cdot\nabla)v_i\rangle+\langle v_i^2\rangle. 
\label{eq:xv} \end{eqnarray} Combining Eq.~\ref{eq:xsq} and Eq.~\ref{eq:xv}, we obtain, \begin{equation} \frac{d^2}{d t^2}\frac{\langle x_i^2\rangle}{2}=\langle x_i(\partial_t+{\mathbf{v}}\cdot\nabla)v_i\rangle+\langle v_i^2\rangle. \label{eq:xsqddot1} \end{equation} To proceed, we use Eq.~\ref{eq:force}, which yields \begin{eqnarray*} \int d^3{\mathbf{r}}\,n\,x_i(\partial_t+{\mathbf{v}}\cdot\nabla)v_i&=&\nonumber\\ & &\hspace*{-0.5in}\frac{1}{m}\int d^3{\mathbf{r}}\,x_i(-\partial_i p-n\,\partial_i U_{total})\nonumber\\ & &\hspace*{-0.75in}+\frac{1}{m}\sum_j\int d^3{\mathbf{r}}\,x_i\partial_j (\eta\,\sigma_{ij}+\zeta_B\,\sigma^{'}\delta_{ij}) \end{eqnarray*} Integrating by parts on the right hand side, assuming that the surface terms vanish, we obtain \begin{eqnarray} \langle x_i(\partial_t+{\mathbf{v}}\cdot\nabla)v_i\rangle&=&\frac{1}{Nm}\int d^3{\mathbf{r}}\,p-\frac{1}{m}\langle x_i\partial_iU_{total}\rangle \nonumber \\ & &\hspace*{-0.25in}-\frac{1}{Nm}\int d^3{\mathbf{r}}\,(\eta\,\sigma_{ii}+\zeta_B\,\sigma') \end{eqnarray} with $\sigma'\equiv\nabla\cdot{\mathbf{v}}$. Defining the viscosity coefficients $\alpha_S$ and $\alpha_B$ by $\eta\equiv\alpha_S\,\hbar\,n$ and $\zeta_B\equiv\alpha_B\,\hbar\,n$, respectively, we can write, \begin{eqnarray} \langle x_i(\partial_t+{\mathbf{v}}\cdot\nabla)v_i\rangle&=&\frac{1}{Nm}\int d^3{\mathbf{r}}\,p-\frac{1}{m}\langle x_i\partial_iU_{total}\rangle \nonumber \\ & &-\frac{\hbar}{m}\langle\alpha_S\,\sigma_{ii}+\alpha_B\,\sigma'\rangle, \label{eq:2.4a} \end{eqnarray} where \begin{equation} \langle\alpha_S\,\sigma_{ii}+\alpha_B\,\sigma'\rangle\equiv\frac{1}{N}\int d^3{\mathbf{r}}\,n\,(\alpha_S\,\sigma_{ii}+\alpha_B\,\sigma'). 
\end{equation} Using Eq.~\ref{eq:2.4a} in Eq.~\ref{eq:xsqddot1}, we then obtain for one direction $x_i$, \begin{eqnarray} \frac{d^2}{dt^2}\frac{\langle x_i^2\rangle}{2}&=&\frac{1}{Nm}\int d^3{\mathbf{r}}\,p+\langle v_i^2\rangle-\frac{1}{m}\langle x_i\partial_iU_{total}\rangle \nonumber \\ & &-\frac{\hbar}{m}\langle\alpha_S\,\sigma_{ii}+\alpha_B\,\sigma'\rangle. \label{eq:xsqddot2} \end{eqnarray} Eq.~\ref{eq:xsqddot2} determines the evolution of the mean square cloud radii along each axis, $\langle x_i^2\rangle$, which depends on the conservative forces arising from the scalar pressure and the trap potential, as well as the viscous forces arising from the shear and bulk viscosities. Summing Eq.~\ref{eq:xsqddot2} over all three directions, the shear viscosity term vanishes, since $\sigma_{ij}$ is traceless, yielding \begin{eqnarray} \frac{d^2}{d t^2}\frac{\langle {\mathbf{r}}^2\rangle}{2}&=&\frac{3}{Nm}\int d^3{\mathbf{r}}\,p+\langle {\mathbf{v}}^2\rangle-\frac{1}{m}\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle \nonumber \\ & &-\frac{3\hbar}{m}\,\langle\alpha_B\,\nabla\cdot{\mathbf{v}}\rangle. \label{eq:1.1} \end{eqnarray} At $t=0^-$, {\it before} release from the trap, ${\mathbf{v}}=0$, Eq.~\ref{eq:1.1} shows that the volume integral of the pressure is \begin{equation} \frac{3}{N}\int d^3{\mathbf{r}}\,p_{\,0}=\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle_0, \label{eq:1.2} \end{equation} where the subscript $(\,)_0$ denotes the initial condition. Here $U_{total}=U_{opt}+U_{mag}$ is the total trapping potential, comprising an optical component from the laser trap and a magnetic component arising from the curvature of the bias magnetic field used in the experiments, as described further below. 
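As an illustration of Eq.~\ref{eq:1.2}, if the total potential is approximated as harmonic, $U_{total}=\frac{1}{2}\,m\sum_i\omega_i^2\,x_i^2$ (an estimate only, since the optical potential is anharmonic away from the trap center), then ${\mathbf{r}}\cdot\nabla U_{total}=\sum_i m\,\omega_i^2\,x_i^2$ and \begin{equation*} \frac{3}{N}\int d^3{\mathbf{r}}\,p_{\,0}=\sum_i m\,\omega_i^2\,\langle x_i^2\rangle_0=2\,\langle U_{total}\rangle_0, \end{equation*} so that the initial pressure integral follows directly from the trap frequencies and the measured mean-square cloud radii.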
It will be convenient to rewrite Eq.~\ref{eq:1.1} in terms of $\Delta p\equiv p-\frac{2}{3}\,{\cal E}$ using \begin{equation} p= \frac{2}{3}\,{\cal E}+\Delta p, \label{eq:1.4} \end{equation} where the first term defines the equation of state for the pressure in the scale-invariant regime, and the second term is the conformal symmetry breaking pressure change. Then, \begin{eqnarray} \frac{d^2}{dt^2}\frac{\langle {\mathbf{r}}^2\rangle}{2}&=&\frac{2}{Nm}\int d^3{\mathbf{r}}\,{\cal E}+\langle {\mathbf{v}}^2\rangle+\frac{3}{Nm}\int d^3{\mathbf{r}}\Delta p\nonumber\\ & &\hspace*{-0.125in}-\frac{1}{m}\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle-\frac{3\hbar}{m}\,\langle\alpha_B\,\nabla\cdot{\mathbf{v}}\rangle. \label{eq:1.5} \end{eqnarray} Just after release from the optical trap, $t\geq 0^+$, the trapping potential changes abruptly, $U_{total}\rightarrow U_{mag}$. To determine the evolution of $\langle {\mathbf{r}}^2\rangle$ after release, we use total energy conservation to eliminate $\langle {\mathbf{v}}^2\rangle$ from Eq.~\ref{eq:1.5}. From Eq.~\ref{eq:energycons}, the final total energy per particle is equal to the initial total energy, \begin{eqnarray} \frac{1}{N}\int d^3{\mathbf{r}}\,{\cal E}+\frac{m}{2}\langle{\mathbf{v}}^2\rangle+\langle U_{mag}\rangle &=&\nonumber\\ & &\hspace*{-1.25in}\frac{1}{N}\int d^3{\mathbf{r}}\,{\cal E}_0+\langle U_{mag}\rangle_0. \end{eqnarray} To determine the initial internal energy, $\frac{1}{N}\int d^3{\mathbf{r}}\,{\cal E}_0$, we use ${\cal E}_0=\frac{3}{2}p_{\,0}-\frac{3}{2}\Delta p_{\,0}$. With Eq.~\ref{eq:1.2}, this yields \begin{equation} \frac{1}{N}\int d^3{\mathbf{r}}\,{\cal E}_0=\frac{1}{2}\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle_0-\frac{3}{2N}\int d^3{\mathbf{r}}\Delta p_{\,0}. \label{eq:2.2} \end{equation} We have $\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle_0=\langle {\mathbf{r}}\cdot\nabla U_{opt}\rangle_0+\langle {\mathbf{r}}\cdot\nabla U_{mag}\rangle_0$ in Eq.~\ref{eq:2.2}.
Multiplying Eq.~\ref{eq:2.2} by $2/m$ determines the first two terms in Eq.~\ref{eq:1.5}. Then, with $\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle\rightarrow\langle {\mathbf{r}}\cdot\nabla U_{mag}\rangle$ for $t\geq 0^+$ in Eq.~\ref{eq:1.5}, we finally obtain our central result for studying scale invariance, \begin{eqnarray} \frac{d^2}{dt^2}\frac{m\langle {\mathbf{r}}^2\rangle}{2}&=&\langle {\mathbf{r}}\cdot\nabla U_{opt}\rangle_0 +\frac{3}{N}\int d^3{\mathbf{r}}\,[\Delta p-\Delta p_{\,0}] \nonumber \\ & &-3\,\hbar\,\langle\alpha_B\,\nabla\cdot{\mathbf{v}}\rangle+\Delta U_{mag}, \label{eq:3.1} \end{eqnarray} where we define the conformal symmetry breaking pressure \begin{equation} \Delta p\equiv p-\frac{2}{3}\,{\cal E}, \label{eq:Deltap} \end{equation} which describes the departure of the pressure from the scale-invariant regime. We also define \begin{eqnarray} \Delta U_{mag}&\equiv& 2\langle U_{mag}\rangle_0+\langle {\mathbf{r}}\cdot\nabla U_{mag}\rangle_0\nonumber\\ & &-2\langle U_{mag}\rangle-\langle {\mathbf{r}}\cdot\nabla U_{mag}\rangle, \label{eq:DeltaU} \end{eqnarray} which corrects for the small potential energy arising from the bias magnetic field curvature. As the bias coils are oriented along the $x$ direction, the effective potential is repulsive along the $x$ axis, with twice the magnitude of the attractive potential along $y$ and $z$, \begin{equation} U_{mag}({\mathbf{r}})=\frac{1}{2}\,m\,\omega_{mag}^2\left(y^2+z^2-2x^2\right), \label{eq:Mag} \end{equation} where $\omega_{mag}=2\pi\times 21.5(0.25)$ Hz at 834 G is measured from the oscillation frequency of the cloud in the $y-z$ plane. Note that $\omega_{mag}^2[B]$ is proportional to $B$. The first three terms in Eq.~\ref{eq:3.1} reproduce Eq.~1 of the main text, where the small magnetic contribution was neglected for brevity.
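Given models for the time-dependent terms on the right-hand side, Eq.~\ref{eq:3.1} is a second-order ordinary differential equation for $\langle {\mathbf{r}}^2\rangle$ and may be integrated with any standard scheme; a minimal fixed-step sketch in Python (function names are ours; the actual analysis uses the two-step procedure described below):

```python
def integrate_rsq(rsq0, rhs, t_max, m=1.0, dt=1e-4):
    """Integrate (m/2) d^2<r^2>/dt^2 = rhs(t) -- the right-hand side of
    Eq. (3.1) -- with initial conditions <r^2>(0) = rsq0 and
    d<r^2>/dt(0) = 0, using semi-implicit Euler steps."""
    rsq, vel, t = rsq0, 0.0, 0.0
    while t < t_max - 0.5 * dt:
        vel += (2.0 * rhs(t) / m) * dt   # update d<r^2>/dt
        rsq += vel * dt                  # update <r^2>
        t += dt
    return rsq

# A constant right-hand side <r . grad U_opt>_0 alone reproduces the
# ballistic t^2 law: <r^2>(t) = <r^2>_0 + t^2 <r . grad U_opt>_0 / m.
```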
The potential energy arising from the magnetic field curvature depends on the mean square cloud radii, $\langle x_i^2\rangle$, which are determined as a function of time by fitting the aspect ratio data using a scaling approximation for the density profile. The cloud radii for $t=0^+$ are dominated by $\langle z^2\rangle_0$, the longest direction of the cigar-shaped cloud in the trap. This is consistently measured both by in-situ imaging and by measurements after expansion, using the calculated expansion factor, which is close to unity. $\langle {\mathbf{r}}\cdot\nabla U_{opt}\rangle_0$ is determined by the trap parameters and measurements of the cloud radii, \S~\ref{sec:xdotgradU}. We determine the harmonic oscillation frequencies $\omega_i$ by parametric resonance methods. We subtract off the contribution from the magnetic potential and extrapolate to the harmonic values of the optical frequencies by correcting for trap anharmonicity. We obtain $\omega_x=2\pi\times 2210(4)$ Hz, $\omega_y=2\pi\times 830(2)$ Hz, and $\omega_{z\,opt}=2\pi\times 60.7(0.1)$ Hz. The corresponding trap depth is $U_0=60.3(0.2)\,\mu$K. The Fermi energy of an ideal gas at the trap center is $E_F=(3N)^{1/3}\hbar\bar{\omega}$, where $\bar{\omega}\equiv (\omega_x\omega_y\omega_z)^{1/3}$. With a typical total number of atoms $N\simeq 2.5\times 10^5$, $E_F\simeq k_B\times 2.0\,\mu$K. Given the time-dependent volume integrals of $\Delta p$ and $\zeta_B$, Eq.~\ref{eq:3.1} is easily integrated using the initial conditions $\langle {\mathbf{r}}^2\rangle_0$ and $\partial_t\langle {\mathbf{r}}^2\rangle_0=0$. To clearly demonstrate scale-invariant expansion after the optical trap is extinguished, we integrate Eq.~\ref{eq:3.1} in two steps.
First, we integrate the magnetic contribution, $\Delta\langle {\mathbf{r}}^2\rangle_{mag}$, determined from \begin{eqnarray} \frac{d^2}{dt^2}\frac{m\Delta\langle {\mathbf{r}}^2\rangle_{mag}}{2}&=&-2\,m\,\omega_{mag}^2 \left\{\langle y^2\rangle_0 \,[b_y^2(t)-1]+\right.\nonumber\\ & &\hspace*{-0.70in}\left.\langle z^2\rangle_0\,[ b_z^2(t)-1]-2\langle x^2\rangle_0\,[b_x^2(t)-1]\right\}. \label{eq:3.1Mag} \end{eqnarray} We employ homogeneous initial conditions for the magnetic contribution, $\Delta\langle {\mathbf{r}}^2\rangle_{0\,mag}=0$ (which does not change $\langle {\mathbf{r}}^2\rangle_0$) and $\partial_t\Delta\langle {\mathbf{r}}^2\rangle_{0\,mag}=0$. The time-dependent expansion scale factors $b_i(t)$ are determined by fitting the measured cloud radii as a function of time after release, using the hydrodynamic equations, Eq.~\ref{eq:xsqddot2} in a scaling approximation~\cite{CaoViscosity,CaoNJP}, i.e., $\langle x_i^2\rangle=\langle x_i^2\rangle_0\,b_i^2(t)$ and $\langle v_i^2\rangle=\langle x_i^2\rangle_0\,\dot{b}_i^2(t)$. As the magnetic contribution to $\langle {\mathbf{r}}^2\rangle$ arising from Eq.~\ref{eq:3.1Mag} is only a few percent, the expansion factors are readily determined with sufficient precision. After integration, the quantity $\Delta\langle {\mathbf{r}}^2\rangle_{mag}$ is subtracted from the measured $\langle {\mathbf{r}}^2\rangle$ data to determine the effective $\langle {\mathbf{r}}^2\rangle$, which then expands according to the remaining terms in Eq.~\ref{eq:3.1}, \begin{eqnarray} \frac{d^2}{dt^2}\frac{m\langle {\mathbf{r}}^2\rangle}{2}&=&\langle {\mathbf{r}}\cdot\nabla U_{opt}\rangle_0 +\frac{3}{N}\int d^3{\mathbf{r}}\,[\Delta p-\Delta p_{\,0}]\nonumber \\ & &-3\,\hbar\,\langle\alpha_B\,\nabla\cdot{\mathbf{v}}\rangle.
\label{eq:3.1Opt} \end{eqnarray} The first term is the optical trap contribution, which is dominant and leads to a $t^2$ scaling for the mean square cloud radius of a resonantly interacting gas or for a ballistic gas, with the initial condition $\langle {\mathbf{r}}^2\rangle_0$. The remaining small pressure change and bulk viscosity terms can be integrated separately, using the scale factors $b_i(t)$ and the same homogeneous initial conditions as for the magnetic contribution. These contributions are described in more detail below. \subsection{High Temperature Approximation to $\Delta p$} \label{sec:highT} We study the regime away from the Feshbach resonance by including in Eq.~\ref{eq:3.1} a nonzero pressure correction $\Delta p$. We determine the energy $\widetilde{E}$ and initial cloud temperature $T_0$ using \begin{equation} \widetilde{E}=3\,k_B T_0=3\left\langle z\frac{\partial U_{total}}{\partial z}\right\rangle_0=\langle {\mathbf{r}}\cdot\nabla U_{total}\rangle_0, \label{eq:temperature} \end{equation} which follows from force balance in the trapping potential and $p=n\,k_B T_0$ in the high temperature limit. This approximation is adequate for evaluating $\Delta p$ in the second virial approximation, as described below. We discuss the measurement of $\langle z\partial_zU_{total}\rangle_0$ using the axial cloud profile in \S~\ref{sec:parameters}. As a consequence of energy conservation, only the {\it difference} between the pressure at time $t$ and at time $t=0$, i.e., $\Delta p-\Delta p_{\,0}$, appears in Eq.~\ref{eq:3.1}. Hence, any {\it static} contribution to $\Delta p$ has no effect. In the high-temperature limit, we can evaluate $\Delta p$ using a virial expansion~\cite{HoMuellerHighTemp}.
To determine $\Delta p$, we make the assumption that contributions to $\Delta p$ that require three-body and higher order processes to maintain equilibrium are {\it frozen} at their initial values over the time scale of the expansion, and do not contribute to $\Delta p-\Delta p_{\,0}$. In particular, the molecular contribution to the second virial coefficient~\cite{HoMuellerHighTemp} requires three-body collisions to populate and depopulate the molecular state as the gas expands and cools in the translational degrees of freedom. Therefore, the molecular contribution is expected to be negligible. We evaluate $\Delta p$ in the high temperature limit to second order in the fugacity~\cite{HoMuellerHighTemp}, where $\Delta p$ is given by \begin{equation} \Delta p=p-\frac{2}{3}\,{\cal E}=-\frac{\sqrt{2}}{3}\,n\,k_BT\left(T\frac{\partial b_2}{\partial T}\right)(n\lambda_T^3). \label{eq:6.3c} \end{equation} Here $\lambda_T\equiv h/\sqrt{2\pi mk_BT}$ is the thermal wavelength and $b_2$ is the part of the second virial coefficient that describes two-body interactions. Ignoring the molecular contribution, which is frozen on the short time scale of the expansion as discussed above, we take \begin{equation} b_2(x)=-\frac{sgn[a_S]}{2}\,e^{x^2}{\rm erfc}(x), \label{eq:10.1} \end{equation} where ${\rm erfc}(x)=1-\frac{2}{\sqrt{\pi}}\int_0^x dx'\,e^{-x'^2}$ and $x=\frac{\lambda_T}{|a_S|\sqrt{2\pi}}$, with $a_S$ the s-wave scattering length. As $\Delta p$ causes only a small perturbation to the flow, we make an adiabatic approximation for the temperature, $T=T_0\,\Gamma^{-2/3}$, where $T_0$ is the initial temperature of the trapped cloud and $\Gamma=b_xb_yb_z$ is the volume scale factor, i.e., the density $n$ of the expanding gas scales as $1/\Gamma$. We determine $\Gamma$ by fitting the aspect ratio data with a scaling approximation to the hydrodynamics~\cite{CaoViscosity,CaoNJP}, using the shear viscosity as the only free parameter.
Then, $x=x_0\,\Gamma^{1/3}$, where $x_0=\frac{\lambda_{T_0}}{|a_S|\sqrt{2\pi}}$. Using the high temperature and harmonic approximation for the energy per particle, $\widetilde{E}=3\,k_B\,T_0$ and $E_F=\frac{\hbar^2 k_{FI}^2}{2m}=(3N)^{1/3}\hbar\bar{\omega}$, the Fermi energy of an ideal gas at the cloud center, we have \begin{eqnarray} x&=&x_0\,\Gamma^{1/3}\nonumber\\ x_0&=&\frac{\sqrt{6}}{k_{FI}|a_S|}\left(\frac{E_F}{\widetilde{E}}\right)^{1/2}, \label{eq:10.3} \end{eqnarray} where $k_{FI}=\sqrt{2m\,E_F/\hbar^2}$ is the Fermi wavevector of an ideal gas at the trap center. Note that $E_F$ is measured using the total atom number and the oscillation frequencies in the trap, $\bar{\omega}\equiv(\omega_x\omega_y\omega_z)^{1/3}$, where the $\omega_i$ are given in \S~\ref{sec:parameters}. Now, $T\frac{\partial b_2}{\partial T}=-xb'_2/2$, where $b_2'(x)\equiv sgn[a_S]\,f_2'(x)$ with \begin{equation} f_2'(x)\equiv\frac{1}{\sqrt{\pi}}-xe^{x^2}{\rm erfc}(x). \label{eq:10.2} \end{equation} Integrating over the trap volume, and using the adiabatic approximation for the temperature and a scaling approximation for the density, we obtain \begin{equation} \frac{1}{N}\int d^3{\mathbf{r}}\,\Delta p=\frac{\sqrt{2}}{3}\frac{k_B T_0}{\Gamma^{2/3}}\bar{z}\,x\,b_2'(x), \label{eq:7.4} \end{equation} where the trap-averaged fugacity $\bar{z}$ is an adiabatic invariant. For a gaussian density profile, \begin{equation} \bar{z}\equiv\frac{1}{N}\int d^3{\mathbf{r}}\,n\left(\frac{n\lambda_{T_0}^3}{2}\right)=\frac{9}{4\sqrt{2}}\left(\frac{E_F}{\widetilde{E}}\right)^3. \label{eq:7.5} \end{equation} In Eq.~\ref{eq:10.3} and Eq.~\ref{eq:7.5}, we have made the harmonic approximation $\omega_x^2\langle x^2\rangle_0=\omega_y^2\langle y^2\rangle_0=\omega_z^2\langle z^2\rangle_0$ and we have used the high temperature approximation $\widetilde{E}= 3\,k_B\,T_0$.
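Numerically, Eqs.~\ref{eq:10.2}, \ref{eq:10.3}, \ref{eq:7.4} and \ref{eq:7.5} combine into a closed-form estimate of the volume integral of $\Delta p$; a sketch in Python (function names and the choice of $\widetilde{E}$ as the energy unit are ours; for very large $x$, a scaled complementary error function should be used to avoid overflow of $e^{x^2}$):

```python
from math import erfc, exp, pi, sqrt

def f2_prime(x):
    """Eq. (10.2): f2'(x) = 1/sqrt(pi) - x e^{x^2} erfc(x)."""
    return 1.0 / sqrt(pi) - x * exp(x * x) * erfc(x)

def delta_p_integral(ef_over_e, inv_kf_a, gamma):
    """(1/N) int d^3r Delta p, in units of E-tilde, combining Eqs. (7.4),
    (7.5) and (10.3); inv_kf_a = 1/(k_FI a_S) carries the sign of a_S,
    and gamma is the volume scale factor."""
    x0 = sqrt(6.0) * abs(inv_kf_a) * sqrt(ef_over_e)   # Eq. (10.3)
    x = x0 * gamma ** (1.0 / 3.0)
    return (sqrt(6.0) / 4.0) * ef_over_e ** 3.5 \
        * gamma ** (-1.0 / 3.0) * inv_kf_a * f2_prime(x)
```

The integral is odd in $1/(k_{FI}a_S)$ and decays as the cloud expands, both through the $\Gamma^{-1/3}$ prefactor and through the decrease of $f_2'$ with $x=x_0\,\Gamma^{1/3}$.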
To use Eq.~\ref{eq:3.1} to determine $\langle {\mathbf{r}}^2\rangle$ as a function of time, the volume integral Eq.~\ref{eq:7.4} is written as \begin{equation} \frac{1}{N}\int d^3{\mathbf{r}}\,\Delta p= \frac{\widetilde{E}\sqrt{6}}{4}\left(\frac{E_F}{\widetilde{E}}\right)^{7/2} \frac{\Gamma^{-1/3}}{k_{FI}a_S}\,f_2'(x), \label{eq:8.2} \end{equation} where the time dependence of $x$ is determined by Eq.~\ref{eq:10.3} and $f_2'(x)$ is given by Eq.~\ref{eq:10.2}. We note that the leading dependence of $\Delta p$ on the Fermi energy is $E_F^{7/2}/k_{FI}\propto E_F^3$, which is proportional to the total atom number $N$, as it should be. We have used $sgn[a_S]/|a_S|=1/a_S$ to explicitly show that the volume integral of $\Delta p$ changes sign with the scattering length $a_S$ as the bias magnetic field is tuned across the Feshbach resonance. As discussed above, the net pressure correction in Eq.~\ref{eq:3.1} is $\Delta p-\Delta p_{\,0}$. Hence, we also evaluate Eq.~\ref{eq:8.2} in the limit $t=0$, where $\Gamma\rightarrow 1$ and $x\rightarrow x_0$. As $|\Delta p|\leq |\Delta p_{\,0}|$ for all $t$, the net pressure correction is positive (negative) when $\Delta p$ is negative (positive). Then, compared to the resonant case, where $1/(k_{FI}a_S)=0$, the cloud is expected to expand more rapidly when $1/(k_{FI}a_S)<0$ and more slowly when $1/(k_{FI}a_S)>0$, as observed in the experiments (see the main text). \subsection{Bulk Viscosity} \label{sec:bulk} The bulk viscosity is positive for finite $a_S$ and must vanish in the scale-invariant regime, where $|a_S|\rightarrow \infty$. Hence, to leading order in $1/a_S$, the bulk viscosity must be {\it quadratic} in $\frac{1}{k_Fa_S}$.
Using dimensional analysis, the bulk viscosity then takes the general form \begin{equation} \zeta_B=\hbar\,n\,\frac{f_B(\theta)}{(k_Fa_S)^2}, \label{eq:bulk1} \end{equation} where $\hbar\,n$ is the natural unit of viscosity, $k_F\propto n^{1/3}$ is the local Fermi wavevector and $f_B(\theta)$ is a dimensionless function of the reduced temperature $\theta\equiv T/T_F(n)$, where $k_B\,T_F(n)\equiv\epsilon_F(n)\propto n^{2/3}$ is the local Fermi energy. As discussed in the main text, by using an adiabatic approximation for $\theta$, one obtains the trap-averaged bulk viscosity coefficient, which takes the form \begin{equation} \bar{\alpha}_B(t)=\bar{\alpha}_B(0)\,\Gamma^{2/3}(t), \label{eq:bulk2} \end{equation} where the $\Gamma^{2/3}$ factor arises from the $1/k_F^2$ scaling. The bulk viscosity provides the {\it only} contribution proportional to $\frac{1}{a_S^2}$, while the $\frac{1}{a_S^2}$ contribution to the volume integral of $\Delta p-\Delta p_{\,0}$ generally vanishes, as we now show. Using dimensional analysis, the most general $1/(k_Fa_S)^2$ contribution to $\Delta p$, which we define as $\Delta p_2$, must be of the form $$\Delta p_2=n\,\epsilon_F(n)\,\frac{f_p(\theta)}{(k_Fa_S)^2},$$ \noindent where $\epsilon_F(n)$ is the local Fermi energy and $f_p(\theta)$ is a dimensionless function of the reduced temperature. As $\frac{\epsilon_F(n)}{(k_Fa_S)^2}=\frac{\hbar^2}{2ma_S^2}$, $\frac{1}{N}\int d^3{\mathbf{r}}\,\Delta p_2=\frac{\hbar^2}{2ma_S^2}\langle f_p(\theta)\rangle$ is time-independent, since the number of atoms in a volume element is conserved during the expansion and $\frac{1}{N}\int d^3{\mathbf{r}}\,n\,f_p(\theta)$ is constant in the adiabatic approximation. Hence, the $\frac{1}{a_S^2}$ part of $\Delta p-\Delta p_{\,0}$ in Eq.~\ref{eq:3.1} vanishes and the bulk viscosity provides the only time-dependent $\frac{1}{a_S^2}$ contribution, which increases as $\Gamma^{2/3}$, according to Eq.~\ref{eq:bulk2}.
The form of $\bar{\alpha}_B(0)$ in Eq.~\ref{eq:bulk2} is obtained from Ref.~\cite{DuslingSchaferBulk}, where the bulk viscosity is predicted in the high-temperature limit. To second order in the fugacity $z$, \begin{equation} \zeta_B=\widetilde{c}_B\,\left(\frac{\lambda_T}{a_S}\right)^2\,\frac{\hbar}{\lambda_T^3}\,z^2, \label{eq:bulkpredict} \end{equation} where we can approximate $z=n\,\lambda_T^3/2$. Dusling and Sch\"{a}fer~\cite{DuslingSchaferBulk} give $\widetilde{c}_B=\frac{1}{24\pi\sqrt{2}}$. Integrating over the trap volume, we obtain $\bar{\alpha}_B(t)=\bar{\alpha}_B(0)\,\Gamma^{2/3}(t)$ as in Eq.~\ref{eq:bulk2} with \begin{equation} \bar{\alpha}_B(0)=\widetilde{c}_B\,\frac{27\pi}{2\sqrt{2}}\frac{1}{(k_{FI}a_S)^2} \left(\frac{E_F}{\widetilde{E}}\right)^4\equiv c_B\,\left(\frac{E_F}{\widetilde{E}}\right)^4. \label{eq:bulkviscoef} \end{equation} Here, the value of $c_B$ based on the high-temperature prediction is \begin{equation} c_B=\frac{9}{32}\frac{1}{(k_{FI}a_S)^2}. \label{eq:bulkpred} \end{equation} \subsection{Measuring $\Delta p$ and the Bulk Viscosity} \label{sec:parameters} We measure the effect of $\Delta p$ on the flow at finite scattering length and estimate the bulk viscosity by tuning the bias magnetic field $B$ away from resonance and observing the departure of $\langle{\mathbf{r}}^2\rangle$ from ballistic flow. For ballistic flow, \begin{equation} \langle{\mathbf{r}}^2\rangle=\langle{\mathbf{r}}^2\rangle_0+\frac{t^2}{m}\,\langle{\mathbf{r}}\cdot\nabla U_{opt}\rangle_0. \label{eq:scaleinv2} \end{equation} The $t^2$ form of Eq.~\ref{eq:fitc1c0} is exactly valid for a resonantly interacting gas and for a ballistic (noninteracting) gas. However, for finite scattering length at an arbitrary bias magnetic field $B$, Eq.~\ref{eq:3.1Opt} shows that $\langle {\mathbf{r}}^2\rangle$ does not expand as $t^2$.
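As a quick cross-check of the numerical coefficient in Eq.~\ref{eq:bulkpred} above, one can verify that $\widetilde{c}_B\cdot 27\pi/(2\sqrt{2})=9/32$ directly; the following standalone snippet (not part of the analysis code) performs the arithmetic:

```python
import math

# Dusling-Schaefer coefficient and the trap-volume integration prefactor.
c_tilde_B = 1.0 / (24.0 * math.pi * math.sqrt(2.0))
prefactor = 27.0 * math.pi / (2.0 * math.sqrt(2.0))

# Their product is the numerical coefficient of 1/(k_FI a_S)^2 in c_B.
c_B_coefficient = c_tilde_B * prefactor
print(c_B_coefficient)  # 9/32 = 0.28125, up to floating-point rounding
```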
However, for the range of interaction strengths studied in our experiments, $\Delta p$ and the bulk viscosity produce only small perturbations to the flow. For this reason, we can continue to parameterize the time evolution of $\langle{\mathbf{r}}^2\rangle$ using \begin{equation} \langle {\mathbf{r}}^2\rangle=c_0+c_1\,t^2. \label{eq:fitc1c0} \end{equation} We determine $c_1=c_1[B]$ in Eq.~\ref{eq:fitc1c0} from the fit to the data and compare the ratio $c_1[B]/c_0$ for finite $1/a_S$ to that obtained at resonance. This method avoids utilizing model-dependent expansion factors for the cloud radii in the data analysis. All of the measured $\langle {\mathbf{r}}^2\rangle$ are corrected for the effective potential arising from magnetic field curvature by subtracting the magnetic field contribution, Eq.~\ref{eq:3.1Mag}. Then we determine the {\it effective} ratio $(c_1[B]/c_0)/(c_1[834]/c_0)$, described in more detail below and shown in Fig.~\ref{fig:c1c0}. If $\Delta p$ and $\zeta_B$ were zero at all magnetic fields, then this ratio would be unity everywhere, corresponding to the black line in the figure. The systematic deviation from unity arises from finite $\Delta p$ and $\zeta_B$, where the red dots (top) show data at 986 G where $a_S<0$ and the blue dots (bottom) show data for 760 G, where $a_S>0$. In the following sections, we describe the analysis in more detail. \subsubsection{Energy Scale} We begin by noting that the ratios $c_1/c_0$ are energy dependent. To provide an energy scale for all of the experiments, we use \begin{equation} \widetilde{E}\equiv\langle{\mathbf{r}}\cdot\nabla U_{total}\rangle_0. \label{eq:energy} \end{equation} $\widetilde{E}$ is twice the potential energy per particle in the harmonic oscillator approximation. For the resonantly interacting gas, where the virial theorem holds, $\widetilde{E}$ is precisely the energy of the cloud for a harmonic trapping potential. 
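The extraction of $c_0$ and $c_1$ from Eq.~\ref{eq:fitc1c0} is a linear least-squares problem in the variable $t^2$. A minimal sketch, with synthetic values standing in for measured $\langle{\mathbf{r}}^2\rangle$ data:

```python
import numpy as np

# Synthetic expansion data obeying <r^2>(t) = c0 + c1 t^2 (illustrative units).
t = np.linspace(0.0, 2.0e-3, 20)      # expansion times (s)
c0_true, c1_true = 4.0e-8, 1.5e-2     # hypothetical values (m^2, m^2/s^2)
r2 = c0_true + c1_true * t**2

# Linear least squares with basis {1, t^2} recovers c0 and c1.
design = np.vstack([np.ones_like(t), t**2]).T
(c0_fit, c1_fit), *_ = np.linalg.lstsq(design, r2, rcond=None)
ratio = c1_fit / c0_fit               # the quantity compared across fields
```

Noise and the magnetic-curvature subtraction are omitted here; only the fit structure is illustrated.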
For an anharmonic trap, the virial theorem gives the total energy for the resonantly interacting gas in terms of the trapping potential~\cite{ThermoLuo}, but at finite scattering length, the relation between the total energy and the trapping potential energy is scattering length dependent~\cite{WernerVirial}. By using $\widetilde{E}$ to characterize the energy, we avoid the scattering length dependence. As discussed in \S~\ref{sec:highT}, for evaluating $\Delta p$ and $\zeta_B$ in the high temperature limit, we determine the initial temperature of the cloud $T_0$ from $\widetilde{E}=3\,k_B\,T_0$. \subsubsection{Determining $\langle{ \mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0$} \label{sec:xdotgradU} For precise measurements, it is necessary to determine both the harmonic value and the anharmonic corrections to $\langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0$. We recall that the total trapping potential takes the form \begin{equation} U_{total}({\mathbf{r}})= U_{opt}({\mathbf{r}})+U_{mag}({\mathbf{r}}), \label{eq:totalU} \end{equation} where $U_{opt}$ arises from the optical trap and $U_{mag}$ arises from the magnetic field curvature, Eq.~\ref{eq:Mag}. We note that for a scalar pressure $p$, force balance for the trapped cloud requires $\int d^3{\mathbf{r}}\, p=\langle x\partial_x U_{total}\rangle_0 =\langle y\partial_y U_{total}\rangle_0=\langle z\partial_z U_{total}\rangle_0$. Then, \begin{equation} \langle {\mathbf{r}}\cdot\nabla U_{total}({\mathbf{r}})\rangle_0=3\,\left\langle z\frac{\partial U_{total}}{\partial z}\right\rangle_0 \end{equation} and \begin{equation} \langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0=3\,\left\langle z\frac{\partial U_{total}}{\partial z}\right\rangle_0 -\langle {\mathbf{r}}\cdot\nabla U_{mag}({\mathbf{r}})\rangle_0 . 
\label{eq:xdotgradU} \end{equation} Since $\langle x^2\rangle_0$ and $\langle y^2\rangle_0$ are small compared to $\langle z^2\rangle_0$, the last term is just $m\,\omega_{mag}^2\,\langle z^2\rangle_0$. Using Eq.~\ref{eq:totalU} in Eq.~\ref{eq:xdotgradU}, we then have \begin{equation} \langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0=3\,\left\langle z\frac{\partial U_{opt}}{\partial z}\right\rangle_0 +2\,m\,\omega_{mag}^2\,\langle z^2\rangle_0 . \label{eq:xdotgradUopt} \end{equation} The harmonic oscillator frequency for atoms in the optical trapping potential is generally energy dependent, because the optical trap is less confining at higher energy, causing the frequency to decrease. We model this by writing \begin{equation} \left\langle z\frac{\partial U_{opt}}{\partial z}\right\rangle_0=m\,\omega^2_{z\,opt}\,\langle z^2\rangle_0\,h_A(\langle z^2\rangle_0), \end{equation} where $m\,\omega^2_{z\,opt}\,\langle z^2\rangle_0$ arises from the harmonic trapping potential. Here $h_A$ is the anharmonic correction factor, \begin{equation} h_A(\langle z^2\rangle_0)\equiv 1-\lambda_1\,\langle z^2\rangle_0. \label{eq:hA834} \end{equation} Hence, \begin{eqnarray} \frac{\langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0}{m}&=& 3\,\omega^2_{z\,opt}\,\langle z^2\rangle_0\,h_A(\langle z^2\rangle_0)\nonumber\\ & &+2\,\omega_{mag}^2\,\langle z^2\rangle_0 . \label{eq:xdotgradUopt2} \end{eqnarray} \subsubsection{Determining the Anharmonic Correction} \label{sec:anharmonic} We use Eq.~\ref{eq:xdotgradUopt2} to measure the anharmonic correction factor $h_A=1-\lambda_1\langle z^2\rangle_0$, by making measurements of $\langle {\mathbf{r}}^2\rangle=c_0+c_1[B]\,t^2$ for both an ideal gas at $B=528$ G and a resonantly interacting gas at $B=834$ G for several different energies.
For these two cases, $c_1=\langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0/m$ and $c_0$ determines $\langle z^2\rangle_0$ from \begin{equation} c_0=\langle {\mathbf{r}}^2\rangle_0=\langle z^2\rangle_0\,\left(1+\frac{\omega_z^2}{\omega_x^2} +\frac{\omega_z^2}{\omega_y^2}\right), \label{eq:c0} \end{equation} where $\omega_z^2\equiv \omega_{z\,opt}^2+\omega_{mag}^2$ and the quantity in parentheses is close to unity in our experiments. Solving Eq.~\ref{eq:xdotgradUopt2} for $h_A$, we obtain \begin{equation} h_A[B,\langle z^2\rangle_0]=\frac{c_1[B]}{3\,\omega^2_{z\,opt}\langle z^2\rangle_0}-\frac{2\,\omega_{mag}^2[B]}{3\,\omega^2_{z\,opt}}. \label{eq:anharmonicfit} \end{equation} For a resonantly interacting cloud or an ideal gas in a harmonic optical trap, we would have $h_A[834,\langle z^2\rangle_0]=1$, by construction. Instead we find that $h_A[834,\langle z^2\rangle_0]$ decreases with increasing $\langle z^2\rangle_0$, i.e., $\lambda_1>0$, as expected for a correction arising from trap anharmonicity, where the quartic terms in the optical trapping potential decrease the average oscillation frequency. The optical frequency $\omega_{z\,opt}$ in Eq.~\ref{eq:anharmonicfit} is most precisely determined at 834 G by demanding $h_A=1$ for energies close to the ground state, where the anharmonic correction is small and the resonantly interacting gas is nearly a pure superfluid. The slope $\lambda_1$ is determined by measuring the ballistic expansion of the noninteracting Fermi gas at 528 G as a function of initial cloud size, using the same method as for the resonantly interacting gas. In the experiments, we find that the $\lambda_1$ obtained from the $h_A$ data for ballistic expansion at 528 G is within 10\% of that obtained from the $h_A$ data for the highest energies of the resonantly interacting gas.
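As an illustration of Eqs.~\ref{eq:c0} and \ref{eq:anharmonicfit}, the following sketch computes $\langle z^2\rangle_0$ and $h_A$ from a hypothetical fit result; all numerical values are invented for illustration and are not the measured trap parameters:

```python
import numpy as np

# Hypothetical fit results and trap frequencies (illustrative values only).
c0 = 5.0e-8                    # <r^2>_0 from the fit (m^2)
c1 = 3.0e-2                    # m^2/s^2, from <r^2> = c0 + c1 t^2
w_z_opt = 2 * np.pi * 70.0     # optical axial frequency (rad/s)
w_mag = 2 * np.pi * 21.0       # magnetic-curvature frequency (rad/s)
w_x, w_y = 2 * np.pi * 2500.0, 2 * np.pi * 800.0  # transverse frequencies

# Eq. (c0): <z^2>_0, with w_z^2 = w_z_opt^2 + w_mag^2.
w_z_sq = w_z_opt**2 + w_mag**2
z2_0 = c0 / (1.0 + w_z_sq / w_x**2 + w_z_sq / w_y**2)

# Eq. (anharmonicfit): anharmonic correction factor h_A.
h_A = c1 / (3.0 * w_z_opt**2 * z2_0) - 2.0 * w_mag**2 / (3.0 * w_z_opt**2)
# For these inputs h_A comes out slightly below unity, as for a trap that
# softens at larger cloud size.
```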
By construction, the quantity $h_A[834,\langle z^2\rangle_0]/(1-\lambda_1\langle z^2\rangle_0)$ is then unity at all energies $\widetilde{E}$, corresponding to the black horizontal line in Fig.~\ref{fig:c1c0Resonance}. \subsubsection{Results} \begin{figure} \begin{center}\ \includegraphics[width=3.0in]{fig6a.eps} \caption{Resonantly interacting Fermi gas. The black dots are obtained from the fit to individual expansion curves using $\langle {\mathbf{r}}^2\rangle=c_0+c_1\,t^2$ to determine $Q_B$, Eq.~\ref{eq:ratio}. The black horizontal line denotes the ideal value of unity. \label{fig:c1c0Resonance}} \end{center} \end{figure} For the resonantly interacting gas at all initial energies, we use the linear anharmonic correction $h_A$ and $\omega_{z\,opt}$, determined as described above, to predict $\langle {\mathbf{r}}\cdot\nabla U_{opt}({\mathbf{r}})\rangle_0$ according to Eq.~\ref{eq:xdotgradUopt2}. Using this in Eq.~\ref{eq:3.1Opt} and $\Delta p=0$, we fit the expansion data for each energy $\widetilde{E}$ and find that the corresponding bulk viscosity is very small. The energy-averaged bulk viscosity coefficient is consistent with zero, as described in the main text. \begin{figure} \begin{center} \includegraphics[width=3.0in]{fig7a.eps} \end{center} \caption{Pressure change $\Delta p$ and bulk viscosity $\zeta_B$ contributions to conformal symmetry breaking as a function of energy. The data are fit with $\langle {\mathbf{r}}^2\rangle=c_0+c_1\,t^2$. The effective ratio $(c_1/c_0)/(c_1/c_0)_{834}$ (given in detail by Eq.~\ref{eq:ratio}) is shown for the resonantly interacting gas $1/(k_{FI}a_S)=0$ (black line, theory), for $1/(k_{FI}a_S)=-0.59$ (top, red dots) and for $1/(k_{FI}a_S)=+0.61$ (bottom, blue dots). Solid curves (top and bottom) show the best fit, where $\lambda_p=1.07(0.25)$ and $\lambda_B=0.20(0.55)$, see Fig.~\ref{fig:Contour}.
The dashed (dotted) curves show the predictions for $\lambda_p=1.07$ and $\lambda_B=0$ ($\lambda_B=1$), to illustrate the effect of the bulk viscosity. \label{fig:c1c0}} \end{figure} For the off-resonant studies at a bias field $B$, we again fit $\langle {\mathbf{r}}^2\rangle=c_0+c_1\,t^2$ to the expansion data. We can still determine $h_A[B,\langle z^2\rangle_0]$ from Eq.~\ref{eq:anharmonicfit}. However, since $c_1[B]$ is modified by the nonzero $\Delta p$ and $\zeta_B$, $h_A[B,\langle z^2\rangle_0]$ deviates from $1-\lambda_1\langle z^2\rangle_0$. Therefore, we characterize the flow using the ratio, \begin{equation} Q_B\equiv \frac{h_A[B,\langle z^2\rangle_0]}{1-\lambda_1\langle z^2\rangle_0} \label{eq:ratio} \end{equation} for each energy $\widetilde{E}$. By construction, this ratio is unity for $B=834$ G, corresponding to the black horizontal line in Fig.~\ref{fig:c1c0}. For $B\neq 834$ G, where $\Delta p\neq 0$ and $\zeta_B\neq 0$, the ratio deviates systematically from unity. \begin{figure} \begin{center} \vspace*{0.25in}\includegraphics[width=3.0in]{fig8a.eps} \end{center} \caption{Contour plot of $\chi^2$ for all of the off-resonance data as a function of $\lambda_B$ and $\lambda_p$. The data shown in Fig.~\ref{fig:c1c0} are compared to the high temperature predictions of \S~\ref{sec:highT} and \S~\ref{sec:bulk} using two scaling parameters, $\lambda_p$ for $\Delta p$ and $\lambda_B$ for the bulk viscosity. \label{fig:Contour}} \end{figure} Fig.~\ref{fig:c1c0} shows the ratio $Q_B$ of Eq.~\ref{eq:ratio} as a function of the energy $\widetilde{E}$. For comparison, we use Eq.~\ref{eq:3.1Opt} to predict the corresponding ratios for each energy as a function of two scaling parameters, $\lambda_p$ for $\Delta p$ calculated in the high temperature limit and $\lambda_B$ for the predicted high temperature bulk viscosity.
The solid lines (top and bottom) in Fig.~\ref{fig:c1c0} show the best fit to the data (see the contour plot, Fig.~\ref{fig:Contour}), where $\lambda_p=1.07(0.25)$ and $\lambda_B=0.20(0.55)$. The contribution of the bulk viscosity appears smaller than predicted. To show the relative scale of the bulk viscosity and the $\Delta p$ corrections, the predictions for $\lambda_p=1.07$ and $\lambda_B=0$ are shown as dashed curves, while the dotted curves show the predictions for $\lambda_p=1.07$ and $\lambda_B=1$. As our $\Delta p$ model adequately describes the data, it appears that the pressure in the expanding cloud is not far from local equilibrium in the translational degrees of freedom, and the observed breaking of scale invariance is primarily due to the direct change in the pressure $\Delta p=p-\frac{2}{3}{\cal E}\neq 0$. \end{document}
\section{Introduction} Associated strangeness ($KY$) photoproduction is a crucial area of study to elucidate the nucleon excitation spectrum and the relevant degrees of freedom. There remain many resonances predicted by constituent quark models (CQMs)~\cite{capstick86,capstick92,capstick94,riska01}, lattice QCD calculations~\cite{edwards11}, harmonic oscillator and hypercentral CQMs~\cite{klempt12,giannini15} and Dyson-Schwinger equations of QCD~\cite{roberts11} that have not been observed experimentally. Significant advancements, however, have been made, both in the understanding of known resonance properties and in new resonance discoveries\footnote{The Particle Data Group, for example, recognised 10 \textit{four star} and 3 \textit{three star} $N^*$ resonances above ground state in 2010, compared to 13 and 7 in 2020~\cite{pdg2010,pdg20}.}. A main motivation of the study of $KY$ photoproduction channels over the last 15 years has been to search for these ``missing resonances'' which may only couple weakly to $N\pi$ final states~\cite{capstick00,loering01}. The ensuing wealth of high statistics data from the Crystal Ball @ MAMI~\cite{jude14}, CLAS~\cite{bradford06,mccracken10,dey10,bradford07,mcnabb04,carman09}, SAPHIR~\cite{glander04}, LEPS~\cite{sumihama06,shiu18} and GRAAL~\cite{lleres07} collaborations has rendered the $KY$ channels the closest to a ``complete experiment'', where a judiciously selected set of polarisation observables permits a complete description of the photoproduction mechanism~\cite{barker75}. This is partly due to the weak, self analysing decay of the $\Lambda$ enabling easier access to the recoiling baryon (single and double) polarisation observables.
Despite these data and support from partial wave analyses (PWA) with dynamical coupled-channel frameworks~\cite{anisovich07,anisovich14,muller19,roenchen18}, isobar models~\cite{skoupil16,skoupil18,mart99,clymton17,lee01,janssen01,janssen01EPJA,janssen03}, and models incorporating Regge trajectories~\cite{cruz12a,cruz12b,bydovsky19} to fix $t$-channel contributions using data above the resonance region (photon beam energies larger than 4\,GeV), a mutually consistent description between theory and data of $KY$ photoproduction channels has not been realised. The $K^+\Lambda$ threshold, at a centre of mass energy of 1609\,MeV, is in the third resonance region where an abundance of $s$-channel resonances up to high spin states, $u$-channel hyperon resonances and $t$-channel $K$, $K^*$ and $K_1$ exchanges contribute. The isospin singlet $\Lambda$, however, acts as a filter to remove intermediate $\Delta^*$ states which are present in $K\Sigma$ channels, enabling a ``cleaner'' study of $t$-channel processes. At forward angles, where the cosine of the centre of mass $K^+$ polar angle, $\cos\theta_\mathrm{CM}^{K}$, exceeds 0.9, there is a paucity of data to constrain the reaction mechanism, and the existing cross section data of SAPHIR~\cite{glander04} and CLAS~\cite{bradford06,mccracken10,mcnabb04} have pronounced inconsistencies\footnote{The LEPS collaboration data~\cite{sumihama06,shiu18} starts at a photon beam energy of 1.5\,GeV and is generally in agreement with CLAS data.}. This has led to a poor understanding of the dynamics of the Born terms and $t$-channel $K^+$ and $K^*$ exchanges which dominate at forward angles (see for example ref.~\cite{bydzovsky12}). PWA solutions have also included different $s$-channel resonance contributions, depending on whether the fits used the SAPHIR or CLAS datasets (see for example ref.~\cite{mart06}).
Data with high $\cos\theta_\mathrm{CM}^{K}${} resolution at forward (and backward) angles are also sensitive to high-spin intermediate states, where the corresponding Legendre polynomials change quickly with respect to $\cos\theta_\mathrm{CM}^{K}$. States with spin 5/2 and 7/2 have been incorporated in previous PWA and isobar model solutions (see for example refs.~\cite{anisovich07,anisovich14,mart06}). Forward angle kinematics also enables access to a regime where the momentum transfer to the recoiling hyperon is minimised. This is a vital input for the description of hypernuclei electroproduction at low $Q^2$~\cite{achenbach12,achenbach12b,garibaldi19,motoba10,bydovsky07,bydzovsky12hypernuclei}. Studying the $Y$-$N$ interaction is crucial for an SU(3)$_\mathrm{flavour}$ description of baryon interactions and provides important astrophysical constraints, for example upon the equation of state for neutron stars (see ref.~\cite{haidenbauer17} and references therein). The BGOOD experiment~\cite{technicalpaper} (shown in fig.~\ref{fig:BGOODsetup}) at the ELSA facility~\cite{hillert06,hillert17} in Bonn, Germany, is ideally suited for $\gamma p \rightarrow K^+\Lambda$ measurements at forward angles. BGOOD is composed of two distinct parts: a forward magnetic spectrometer, ideal for the detection of forward going $K^+$, and a central calorimeter, suited for the identification of hyperons at low momentum, decaying almost isotropically. The presented data resolve discrepancies in existing datasets for $\cos\theta_\mathrm{CM}^{K}$$ > 0.9$ from threshold to a centre of mass energy, $W = 1870$\,MeV. Due to the high $\cos\theta_\mathrm{CM}^{K}${} resolution, the cross section as the minimum momentum transfer is approached can be determined in 0.02 $\cos\theta_\mathrm{CM}^{K}${} intervals. \begin{figure} [htp] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics{BGOODsetup_Rev2.pdf} } \caption{Overview of the BGOOD setup.
The central detector region consists of the BGO Rugby Ball, enclosing the MWPCs, Plastic Scintillating Barrel and the target. Figure taken from ref.~\cite{technicalpaper}.} \label{fig:BGOODsetup} \end{figure} This paper is organised as follows: sect.~\ref{sec:detector} describes the BGOOD experiment and the running conditions during the data taking. Section~\ref{sec:selectevents} explains the identification of the reaction channel and corresponding systematic uncertainties. Differential cross sections and recoil polarisation measurements are presented and discussed in sect.~\ref{sec:results}. Concluding remarks are made in sect.~\ref{sec:conclusions}. \section{BGOOD setup and experimental running conditions}\label{sec:detector} A detailed description of the experimental setup, performance and analysis procedures is given in ref.~\cite{technicalpaper}. The data were taken during a 22 day beam time, using an incident ELSA electron beam energy of 3.2\,GeV and a 6\,cm long liquid hydrogen target. The electron beam was incident upon a thin crystal radiator to produce a continuous spectrum of bremsstrahlung photons. The orientation of the crystal was such that a coherent, polarised peak was set at a photon beam energy ($E_\gamma$) of 1440\,MeV; however, the polarisation was not required for the presented analysis. The energy of each photon was determined by momentum analysing the post-bremsstrahlung electron in the \textit{Photon Tagger}. This consists of a dipole magnet and a hodoscope of plastic scintillators to detect the deflection angle of the electron. Photon energies were measured from 10\,\% to 90\,\% of the extracted ELSA electron beam energy. The photon beam passed through a 7\,mm diameter collimator, with approximately 80\,\% of the bremsstrahlung photons impinging upon the target (referred to as the \textit{tagging efficiency}). The photon flux was determined continually during the data taking using the \textit{Flumo} detector downstream from the experiment.
This consists of two sets of three plastic scintillators arranged downstream from each other to detect electron-positron pairs from pair production in the beam. Flumo was calibrated to the photon flux by taking separate, low rate runs using a lead glass scintillator, \textit{GIM}, with 100\,\% photon detection efficiency. The integrated photon flux from 900 to 1500\,MeV photon beam energy (the approximate region of the data shown) was $8.4\times10^{12}$ photons. The \textit{BGO Rugby Ball}, comprised of 480 BGO crystals individually coupled to photomultipliers, covers polar angles 25$^\circ$ to 155$^\circ$. The fast time read out per crystal allows clean identification of neutral meson decays to photons. A set of two coaxial and cylindrical multiwire proportional chambers (\textit{MWPCs}) and a \textit{Plastic Scintillating Barrel} surround the target within the BGO Rugby Ball and are used for charged particle identification and reaction vertex reconstruction. The \textit{Forward Spectrometer} is a combination of tracking detectors, an open dipole magnet and time of flight walls. Two scintillating fibre detectors, \textit{MOMO} and \textit{SciFi}, track particles from the reaction vertex in the target. Downstream from these is the \textit{Open Dipole Magnet}, operating at an integrated field strength of 0.7\,Tm and covering polar angles 1$^\circ$ to 12$^\circ$ or 8$^\circ$ in the horizontal or vertical planes respectively. Particle trajectories downstream from the Open Dipole Magnet are determined using eight double layered drift chambers, and particle momentum is subsequently determined by the deflection of the trajectory in the magnetic field. Three time of flight (\textit{ToF}) walls at the end of the spectrometer measure particle $\beta$. The region between the BGO Rugby Ball and the Forward Spectrometer is covered by the \textit{SciRi} detector, which is composed of three segmented rings of plastic scintillators for charged particle detection.
SciRi covers a polar angle range of 10$^\circ$ to 25$^\circ$. \section{Event selection}\label{sec:selectevents} $K^+$ were identified in the Forward Spectrometer from spatial coincidences between MOMO, SciFi, the Drift Chambers and the ToF walls. The momentum calculation used a three dimensional magnetic field description, including fringe fields extending beyond the magnet yoke, and particle energy loss from the target, air and detector materials. The particle trajectory was ``stepped through'' in discrete intervals, applying the expected acceleration due to the Lorentz force and material energy loss. The interval lengths were dynamically determined to optimise accuracy and computational time depending upon the magnitude of the energy loss and Lorentz force per interval. An iterative approach was used to determine the optimum trajectory and momentum, given the hit positions in the detectors and weighted by their spatial resolutions. A momentum resolution of approximately 5\,\% of the measured momentum was achieved. See ref.~\cite{technicalpaper} for details. Particle $\beta$ was determined by time measurements in the ToF walls, accounting for the trajectory length and particle energy loss. In contrast to the default track finding routine described in ref.~\cite{technicalpaper}, a cluster in MOMO was not required to form a forward track due to an efficiency of only 80\,\%. If no MOMO cluster was identified, it was sufficient to use only a SciFi cluster and the target centre as a space point. The increase in background and reduction in spatial resolution were shown to be negligible. The mass of forward particles was calculated from momentum and $\beta$. Figure~\ref{fig:massselection} shows two examples of the reconstructed $K^+$ mass for different momentum intervals, with good agreement between real and simulated events.
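The stepping and mass reconstruction described above can be sketched as follows. This is an illustrative toy (uniform step length, no energy loss, no iteration over measured hits), not the BGOOD reconstruction code:

```python
import numpy as np

C = 299792458.0               # speed of light (m/s)
Q = 1.602176634e-19           # elementary charge (C)
M_K = 493.677e6 * Q / C**2    # K+ mass, converted from MeV/c^2 to kg

def step_through(p0, r0, b_field, ds=1.0e-3, n_steps=2000):
    """Propagate momentum p0 (kg m/s) from position r0 in steps of length ds,
    applying the Lorentz force from the field map b_field(r)."""
    p, r = np.asarray(p0, float), np.asarray(r0, float)
    for _ in range(n_steps):
        v = p / np.sqrt(M_K**2 + (p @ p) / C**2)  # relativistic velocity
        dt = ds / np.linalg.norm(v)
        p = p + Q * np.cross(v, b_field(r)) * dt  # material energy loss omitted
        r = r + v * dt
    return p, r

def reconstructed_mass(p, beta):
    """Particle mass (MeV/c^2) from momentum (MeV/c) and beta, with c = 1."""
    return p * np.sqrt(1.0 - beta**2) / beta
```

In the real analysis the interval length is adapted to the local energy loss and field strength, and the momentum is obtained iteratively from the measured hit positions.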
The rising structure towards low masses at 300\,MeV/c$^2$ in the real data is from $\pi^+$ from other hadronic reactions, and positrons from pair production in the beam. The small peak at 360\,MeV/c$^2$ in the lower momentum interval is from pair production in the beam from an ELSA electron bunch adjacent in time (every 2\,ns) to the bunch containing the electron responsible for the triggered event. Timing cuts with respect to particle $\beta$ remove most of these events; however, these selection cuts are very conservative with respect to detector time resolutions to avoid removing any particles from triggered hadronic reactions. \begin{figure} [h] \centering \resizebox{0.8\columnwidth}{!}{ \includegraphics{MassSelection-eps-converted-to.pdf} } \caption{Mass reconstruction for $K^+$ candidates in the forward spectrometer for real and simulated data (red and blue lines respectively). The $K^+$ momentum, $p_{K^+}$, intervals are labelled inset. The dashed lines indicate the selection cut for the median value of $p_{K^+}$ described in the text.} \label{fig:massselection} \end{figure} Candidate events were selected over $\pm2\sigma$ of the reconstructed $K^+$ mass by approximately fitting a Gaussian function to the mass distribution. This varied with $K^+$ momentum, from $\pm 47$\,MeV/c$^2$ at 450\,MeV/c to $\pm 106$\,MeV/c$^2$ at 1000\,MeV/c. Due to the relatively small cross section compared to non-strange channels, identification of the decay $\Lambda\rightarrow \pi^0 n$ was required to enhance the signal relative to background. $\pi^0$ were identified in the BGO Rugby Ball via the two photon decay, where the measured invariant mass was required to be within $\pm 30$\,MeV/c$^2$ of the accepted $\pi^0$ mass, corresponding to $\pm2\sigma$. Figure~\ref{fig:pionselection} shows the missing mass from the $K^+\pi^0$ system corresponding to the neutron mass for the $K^+\Lambda$ channel, plotted against the missing mass from the forward $K^+$.
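Both missing-mass quantities follow from four-momentum conservation. A minimal sketch in natural units (MeV, $c=1$), with illustrative kinematics; the real analysis of course uses the measured beam energy and reconstructed momenta:

```python
import numpy as np

M_P = 938.272  # proton mass (MeV/c^2)

def missing_mass(e_gamma, four_momenta):
    """Mass of the unobserved system X recoiling from a set of detected
    particles, each given as (E, px, py, pz); photon beam along z."""
    e_x = e_gamma + M_P - sum(fm[0] for fm in four_momenta)
    p_x = np.array([0.0, 0.0, e_gamma]) - sum(np.asarray(fm[1:], float)
                                              for fm in four_momenta)
    return np.sqrt(e_x**2 - p_x @ p_x)

# Example: missing mass from a forward K+ alone, with invented kinematics
# (for signal events this variable peaks at the Lambda or Sigma0 mass).
M_K = 493.677
p_k = np.array([0.0, 0.0, 600.0])
e_k = np.sqrt(p_k @ p_k + M_K**2)
mm = missing_mass(1000.0, [(e_k, *p_k)])
```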
Events were selected above the red line. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{0.8\columnwidth}{!}{% \includegraphics{PionSelection.pdf} } \caption{Missing mass recoiling from the $K^+\pi^0$ system versus the missing mass from the $K^+$. (a) Real data. (b) Simulated $K^+\Lambda$ and $K^+\Sigma^0$ events, approximately weighted to the measured ratio. Events were selected above the red line. } \label{fig:pionselection} \end{figure} Events were rejected if a charged particle was identified in either the BGO Rugby Ball (via coincidence with the plastic scintillating barrel) or the intermediate SciRi detector. The total energy deposition in the BGO Rugby Ball was also required to be lower than 250\,MeV. The simulated data shown in fig.~\ref{fig:esum} demonstrate that this removes approximately half of the most significant background from falsely identified $\pi^+$ from $\Delta^0\pi^+$ events. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics{BGOEnergySum-eps-converted-to.pdf} } \caption{Total energy deposition in the BGO Rugby Ball for simulated $\gamma p \rightarrow K^+\Lambda$ and $\gamma p \rightarrow \Delta^0\pi^+$ events (red and blue lines respectively) when a $K^+$ candidate was identified in the forward spectrometer and the $\pi^0$ from the $\Lambda$ decay in the BGO Rugby Ball. The dashed black line indicates the maximum energy deposition allowed when selecting $K^+\Lambda$ events. } \label{fig:esum} \end{figure} Figure~\ref{fig:missingmass} shows the $K^+$ missing mass for different photon beam energy intervals. The distribution of the $\pi^+$ and $e^+$ background was described by an equivalent analysis of negatively charged particles, where $\pi^-$ and $e^-$ have similar kinematics. Simulated data were used to describe the $K^+\Lambda$ signal and the $K^+\Sigma^0$ background.
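These simulated spectra serve as fit templates for the measured missing-mass distribution. A minimal sketch of such a template fit, with synthetic template shapes and yields in place of the simulated spectra and data (a real fit would also include statistical weights):

```python
import numpy as np

# Synthetic missing-mass axis and template shapes (illustrative only).
edges = np.linspace(900.0, 1400.0, 51)
x = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

templates = np.vstack([
    gauss(x, 1115.7, 25.0),        # K+ Lambda signal shape
    gauss(x, 1192.6, 30.0),        # K+ Sigma0 shape
    1.0 + 0.002 * (x - 900.0),     # smooth e+/pi+ background shape
]).T

true_yields = np.array([1000.0, 300.0, 400.0])
data = templates @ true_yields     # noiseless pseudo-data

# One scaling factor per template, from linear least squares.
fitted_yields, *_ = np.linalg.lstsq(templates, data, rcond=None)
```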
The simulations followed energy and angular distributions from previously measured cross sections~\cite{mccracken10,dey10}; however, the intervals in $\cos\theta_\mathrm{CM}^{K}${} and energy were sufficiently small so that the missing mass spectra could be considered fixed across each interval. The spectra therefore depended solely on the experimental energy and spatial resolutions, and accurately described the real data. A fit was subsequently applied using the three missing mass spectra as templates with separate scaling factors in order to extract the $K^+\Lambda$ yield. To fully understand background contributions, missing mass spectra from additional simulated channels were included in the fit. The only significantly contributing channel proved to be $\gamma p \rightarrow \Delta^0\pi^+$, where the $\pi^+$ was mistaken for a $K^+$. This was already included in the $e^+/\pi^+$ background (the cyan line in fig.~\ref{fig:missingmass}); however, the inclusion of this simulated channel allowed the relative contributions of misidentified $e^+$ and $\pi^+$ to vary. This channel only contributed in the highest four energy intervals, and did not significantly change the extracted $K^+\Lambda$ yield. For these intervals, the fit including the additional $\Delta^0\pi^+$ missing mass spectrum was used for the $K^+\Lambda$ yield extraction if the reduced $\chi^2$ of the fit was improved. This occurred for the highest two data points, where the reduced $\chi^2$ were 2.47 and 2.50 without including the $\Delta^0\pi^+$ spectra, and 1.45 and 1.42 when including it. Fig.~\ref{fig:compareyields} shows the extracted yields with and without the simulated $\Delta^0\pi^+$ data. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics{FitExample-eps-converted-to.pdf} } \caption{Missing mass from forward $K^+$ candidates after selection criteria described in the text.
Every other photon beam energy bin ($E_\gamma$) is shown and labelled in units of MeV, with corresponding reduced $\chi^2$ for the fit. The data are the black points, with fitted spectra from simulated $K^+\Lambda$, $K^+\Sigma^0$ and $e^+$/$\pi^+$ background (red, green and cyan lines respectively). The blue line is the summed total fit. The highest energy bin, $E_\gamma = 1370$\,MeV, also includes the simulated $\Delta^0\pi^+$ contribution (purple line).} \label{fig:missingmass} \end{figure} \begin{figure} [h] \centering \vspace*{0cm} \resizebox{0.9\columnwidth}{!}{% \includegraphics{CompareYields-eps-converted-to.pdf} } \caption{The extracted yields for the $K^+\Lambda$ signal and background from $K^+\Sigma^0$ and $e^+$/$\pi^+$ misidentification (red circles, green triangles and cyan squares respectively). The solid filled data points are without the simulated $\Delta^0\pi^+$ background; the open data points include this additional background.} \label{fig:compareyields} \end{figure} \subsection{Detection efficiency calculations} The detection efficiency was determined using a Geant4~\cite{geant4} simulation of the experimental setup. This included all spatial, energy and time resolutions, efficiencies for all detectors in the forward spectrometer (described in ref.~\cite{technicalpaper}) and the modelling of the hardware triggers described below. Three hardware trigger conditions, listed in table~\ref{table:triggers}, were implemented for a broad range of experimental requirements. Trigger 4 was used for this analysis, where approximately 80\,MeV minimum energy deposition was required in the BGO Rugby Ball and a signal in the SciFi and ToF detectors, described in table~\ref{table:triggers} as a \textit{Forward Track}.
\begin{table}[h] \begin{tabular}{c l} \hline\hline Trigger & Description \\ \hline 0 & High BGO energy sum ($\sim 200$\,MeV) \\ 1 & Low BGO energy sum ($\sim 80$\,MeV) \& SciRi\\ 3 & SciRi \& Forward Track\\ 4 & Low BGO energy sum \& Forward Track\\ \hline\hline \end{tabular} \caption{BGOOD hardware triggers. Each trigger also required a cluster in the Photon Tagger. Trigger 2 is obsolete.} \label{table:triggers} \end{table} The efficiencies of the BGO Rugby Ball energy sum triggers, shown in fig.~\ref{fig:triggereff}(a), were determined via a ratio of events passing different trigger combinations. The high energy sum efficiency was determined from the ratio of all events passing both triggers 0 and 3 to all events passing trigger 3. The low energy sum efficiency used in this analysis was determined from the ratio of all events passing both triggers 1 and 4 to all events passing trigger 3. This ensured that the result depended only upon the low energy sum efficiency and was not specific to the reaction or event topology. These distributions were implemented in simulated data for an accurate determination of detection efficiencies. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{0.9\columnwidth}{!}{% \includegraphics{TriggerEff.pdf} } \caption{Modelling of the hardware triggers. (a) The fraction of events passing the low and high BGO energy sum triggers (blue and red respectively). (b) The efficiency of trigger 4 as a function of the forward-going particle $\beta$.} \label{fig:triggereff} \end{figure} Due to the small misalignment of trigger timing windows and the large time range for forward-going particles, the efficiency of trigger 4 also had a small dependence upon the particle $\beta$. Fig.~\ref{fig:triggereff}(b) shows this efficiency, determined from a clean selection of forward-going protons.
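The ratio method above amounts to counting events that pass the trigger under study among events already selected by a reference trigger. A minimal sketch with hypothetical counts; the binomial uncertainty is a standard choice for such a ratio and is not specified in the text:

```python
import math

def trigger_efficiency(n_both, n_ref):
    """Efficiency of a trigger, estimated from the number of events
    passing both it and a reference trigger, divided by the number
    passing the reference trigger alone."""
    eff = n_both / n_ref
    # Binomial uncertainty on the efficiency (hypothetical choice).
    err = math.sqrt(eff * (1.0 - eff) / n_ref)
    return eff, err

# Hypothetical counts: events passing triggers 1 and 4, and the
# reference trigger 3 alone.
eff_low, err_low = trigger_efficiency(n_both=9200, n_ref=10000)
```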
For forward $K^+$ from $K^+\Lambda$, $\beta$ is approximately 0.65 and 0.90 at $W = 1680$ and 1900\,MeV, corresponding to correction factors of 1.09 and 1.06 to the event yields respectively. Modelling the trigger efficiency as a function of both the BGO energy deposition and the $\beta$ of forward-going particles successfully described the well-known $\gamma p \rightarrow \eta p$ differential cross section; the results are presented in ref.~\cite{technicalpaper}. Shown in fig.~\ref{fig:deteff}, the detection efficiency was approximately 2.4\,\% at threshold, rising smoothly to 5\,\% at 1400\,MeV. The efficiency also increased at more forward angles. These efficiencies also account for the $\pi^0$ detection, the $\Lambda \rightarrow \pi^0 n$ branching ratio of 36\,\%, and the approximately 50\,\% of $K^+$ decaying in flight. These three factors alone limit the detection efficiency to 13\,\%. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics[width=\columnwidth,trim={0cm 0cm 1.5cm 1.0cm},clip=true]{KLDetectionEfficiency1Row_Less-eps-converted-to.pdf} } \caption{Detection efficiency for: (a) $\cos\theta_\mathrm{CM}^{K}$ $> 0.9$ versus photon beam energy and (b) versus $\cos\theta_\mathrm{CM}^{K}$ for selected photon energy intervals labelled inset. The connecting lines are an aid to guide the eye.} \label{fig:deteff} \end{figure} \subsection{Systematic uncertainties}\label{sec:syserror} Systematic uncertainties are divided into two components. The \textit{scaling uncertainty}, the sources of which are listed in table~\ref{table:syserror}, is a constant fraction of the measured cross section. The position of the beam impinging upon the target was the largest source, due to its influence on the measured production angle and the forward acceptance. This was determined using simulated data. The absolute photon flux determination is the second largest uncertainty.
This was estimated by measuring well known photoproduction cross sections (for example $\gamma p \rightarrow \pi^0p$ and $\eta p$, shown in ref.~\cite{technicalpaper}), and by comparing flux measurements using the tagging efficiency calculations from the Flumo and GIM detectors. Flumo measured the tagging efficiency continuously during the data taking, whereas GIM measured the tagging efficiency every 12 hours at low rates (an extracted electron beam of 40\,pA compared to 1420\,pA). Despite the different beam conditions, an agreement of the flux normalisation to within 3\,\% was achieved. The electron beam position upon the diamond radiator was also closely monitored by a continuous study of the coherent edge of the linearly polarised bremsstrahlung photon energy distribution. \begin{table}[h] \centering \begin{tabular}{l c} \hline\hline Source & \% error \\ \hline Beam spot alignment & 4.0 \\ Photon flux & 4.0\\ $K^+$ selection & 2.0 \\ SciFi efficiency & 3.0 \\ Target wall contribution & 2.0 \\ Track time selection & 2.0 \\ Target length & 1.7 \\ ToF wall efficiency & 1.5 \\ MOMO efficiency & 1.0 \\ Drift chamber efficiency & 1.0 \\ Beam energy calibration & 1.0 \\ Modelling of hardware triggers & 1.0 \\ $\pi^0$ identification & 1.0 \\ Forward track geometric selection & 1.0 \\ \hline Summed in quadrature & 8.0 \\ \hline\hline \end{tabular} \caption{Systematic uncertainties contributing to the constant fractional error.} \label{table:syserror} \end{table} The \textit{fitting uncertainty}, which arises from extracting the number of events from the missing mass spectra, allows individual data points to move independently. This was estimated from the difference observed when including the additional simulated $\Delta^0\pi^+$ events in the background distribution, and by also varying the fit range. An exponential function was fitted to the difference in the cross section to describe the general trend.
The only significant differences were at the four data points at the highest energies, where the signal yield begins to reduce compared to the background and the $K^+$ missing mass distribution becomes broader. This gave an uncertainty of 0.022 and 0.042\,$\mu$b/sr at centre of mass energies 1831 and 1858\,MeV respectively. The data stop at 1858\,MeV, as this uncertainty becomes very large at higher energies. To check the consistency of the fitting procedure, the data were also binned into both 0.03 and 0.02 $\cos\theta_\mathrm{CM}^{K}$ intervals, where the yield was summed and compared to the total over the full 0.1 $\cos\theta_\mathrm{CM}^{K}$ interval. This showed good agreement within the systematic errors. The same fitting systematic uncertainty was assumed for the data binned in smaller $\cos\theta_\mathrm{CM}^{K}${} intervals, where the reduced statistics prevented an accurate determination. \section{Results and discussion}\label{sec:results} All presented data are tabulated in the appendix. The data extend to a photon beam energy of 1400\,MeV, corresponding to a centre of mass energy of 1858\,MeV. Above this energy the systematic uncertainty in separating the signal from background increases very quickly. \subsection{$\gamma p \rightarrow K^+\Lambda$ differential cross section} The differential cross section for $\cos\theta_\mathrm{CM}^{K} > 0.9$ is shown in fig.~\ref{fig:cstotal}. The interval range in $W$ is typically 14\,MeV, determined by the width of the Photon Tagger channels. This is comparable to the previous data shown from the CLAS collaboration~\cite{bradford06,mccracken10} and half the size of that of the SAPHIR collaboration~\cite{glander04}. It should be noted that the CLAS data are at the more backward angle of $0.85 < \cos\theta_\mathrm{CM}^{K} < 0.95$, and the SAPHIR data are the only other dataset at this most forward $\cos\theta_\mathrm{CM}^{K}${} interval.
The statistical error, as a fraction of the measured data, is improved by approximately a factor of two over most of the measured energy range. The available datasets at these forward $\cos\theta_\mathrm{CM}^{K}${} intervals exhibit discrepancies: the SAPHIR data are consistently lower than the CLAS data, and the two CLAS datasets also deviate from each other. These new data are in agreement with the CLAS data of McCracken~\cite{mccracken10}. The CLAS data of Bradford~\cite{bradford06} appear (by eye) approximately 20\,\% lower for energies below 1850\,MeV, and the SAPHIR data~\cite{glander04} are lower over the full energy range by of the order of 30 to 40\,\%. \begin{figure} [htb] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics{KLCSPostComments_Updated-eps-converted-to.pdf} } \caption{$\gamma p \rightarrow K^+\Lambda$ differential cross section for $\cos\theta_\mathrm{CM}^{K}$ $>0.90$ (black filled circles). The systematic uncertainties shown along the abscissa are in three components: the shaded blue and red bars are the \textit{scaling} and \textit{fitting} uncertainties respectively, described in sec.~\ref{sec:syserror}, and the grey bars are the total. Previous data (statistical errors only) are shown from McCracken \textit{et al.} (CLAS)~\cite{mccracken10} (blue open squares), Bradford \textit{et al.} (CLAS)~\cite{bradford06} (red open triangles), Glander \textit{et al.} (SAPHIR)~\cite{glander04} (green open diamonds), Shiu \textit{et al.} (LEPS)~\cite{shiu18} (orange filled triangle) and Sumihama \textit{et al.}~\cite{sumihama06} (orange filled squares). The CLAS data are at the more backward angle of 0.85 $<$$\cos\theta_\mathrm{CM}^{K}${} $<$ 0.95. The Regge plus resonant model \cite{bydovsky19} and isobar models BS1 and BS3 \cite{skoupil16,skoupil18} of Skoupil and Byd\v{z}ovsk\'{y} are the solid red, dotted green and dotted blue lines respectively.
The Bonn-Gatchina PWA~\cite{muller19} solutions with and without the inclusion of the new data are the dashed cyan and dashed magenta lines respectively.} \label{fig:cstotal} \end{figure} The isobar models of Skoupil and Byd\v{z}ovsk\'{y} \cite{skoupil16,skoupil18}, BS1 and BS3 (green and blue dotted lines), also plotted in fig.~\ref{fig:cstotal}, show good agreement with the peak structure around $1720$\,MeV. The data exhibit a flatter structure from 1800 to 1850\,MeV, which the BS3 model appears to reproduce well. A peak is evident in both the BS1 and BS3 models at this energy but at a more backward angle of $\cos\theta_\mathrm{CM}^{K}$\,$\approx 0.4$, which is not covered by these new data. The Regge plus resonant (RPR) model of Skoupil and Byd\v{z}ovsk\'{y} \cite{bydovsky19} (red line) fails to reproduce the bump at $1720$\,MeV, where it is considered that the $S_{11}(1650)$ would need to contribute more to describe the data. These new data, with improved statistics, will help constrain the RPR model, which was previously fitted to the less precise CLAS and LEPS datasets within this forward region~\cite{dilaborprivate}. There is an improved agreement with the RPR model for energies beyond $1800$\,MeV, where the rise is due to the constructive interference of the $D_{13}$(1700) and $D_{15}$(1675); however, the data exhibit a flatter distribution. Neither resonance is included in the BS1 or BS3 isobar models, which may cause the discrepancies at these energies~\cite{dilaborprivate}. The flatter distribution of the cross section for energies greater than $1800$\,MeV in these data, the CLAS Bradford data and the LEPS data~\cite{shiu18,sumihama06} is inherent to Regge-based models, which, unlike isobar models, cannot introduce such structure. The RPR model amplitude within this region is, however, still strongly influenced by the parameters from the $s$-channel contributions, with the Regge region only applicable above 3\,GeV~\cite{dilaborprivate}.
The Bonn-Gatchina BG2019 solution~\cite{muller19}, fitted simultaneously to both CLAS datasets, is also shown in fig.~\ref{fig:cstotal} as the magenta line. There is a reduced $\chi^2$ of 2.99 between the fit and these data. The fit describes these data well below 1800\,MeV; however, above this energy the fit reduces in strength and does not reproduce the slight rise of the data points. A new fit additionally including these data is shown as the cyan line. The fit optimised all $K^+\Lambda$ and $K^+\Sigma^0$ couplings for the resonant contributions and the $t$ and $u$ channel exchange amplitudes with $K^+\Lambda$ and $K^+\Sigma^0$ final states. Only reactions with two-body final states were fitted. A full parameter optimisation was then made, fitting all reactions from the Bonn-Gatchina PWA database. Finally, all three-body couplings were fixed. The reduced $\chi^2$ between this new fit and the data improved to 2.41. The only significant changes occurred in the forward region, with negligible changes to the more backward region covered by the CLAS data. The inclusion of these data changed contributions from the non-resonant amplitudes defined by the $K^0$(1430) and $\Sigma$ exchanges. For the resonant couplings, the solution readjusted the $K\Sigma$ couplings of the highest $P_{11}$ states. However, these readjustments did not significantly change the absolute values of the couplings calculated as residues at the pole position, where only relative phases changed by one standard deviation. The most notable changes were found in the $A^{1/2}$ helicity couplings for the P$_{33}$(1920) and the helicity couplings of the P$_{13}$(1900), although in both cases these changed by less than two standard deviations. The fit was repeated by iteratively adding resonant contributions with different quantum numbers. Only a small improvement of the description could be achieved.
The most notable changes are observed for resonances with $J^P = 5/2^-$, which provided the best overall improvement without making any significant change to the more backward CLAS data. Figures \ref{fig:csvsangle} and \ref{fig:csvsenergyfine} show the differential cross section in 0.02 $\cos\theta_\mathrm{CM}^{K}${} intervals versus $\cos\theta_\mathrm{CM}^{K}${} and $W$ respectively. Near threshold, the distribution is flat, suggesting dominating $s$-channel components of the reaction mechanism. As $W$ increases, the cross section becomes more forward peaked, consistent with increasing $t$-channel $K$ and $K^*$ exchange processes. In fig.~\ref{fig:csvsenergyfine}, the peak at $1720$\,MeV remains approximately constant in strength over the $\cos\theta_\mathrm{CM}^{K}${} range. \begin{figure} [htb] \includegraphics[width=\columnwidth,trim={0cm 2.5cm 0cm 0cm},clip=true]{CSVsAngle_Updated-eps-converted-to.pdf} \caption{$\gamma p \rightarrow K^+\Lambda$ differential cross section versus $\cos\theta_\mathrm{CM}^{K}${} for each centre of mass energy, $W$, labelled inset in MeV. Filled black circles are these data binned into 0.02 $\cos\theta_\mathrm{CM}^{K}${} intervals, and other data points and model fits are the same as described in fig.~\ref{fig:cstotal}. } \label{fig:csvsangle} \end{figure} \begin{figure} [htb] \centering \vspace*{0cm} \resizebox{\columnwidth}{!}{% \includegraphics[width=\columnwidth,trim={0cm 2cm 0.5cm 0.6cm},clip=true]{KLCSVsEnergy_Updated-eps-converted-to.pdf} } \caption{$\gamma p \rightarrow K^+\Lambda$ differential cross section for intervals of 0.02 in $\cos\theta_\mathrm{CM}^{K}$ (filled black circles). Other data points and model fits are the same as described in fig.~\ref{fig:cstotal}.
} \label{fig:csvsenergyfine} \end{figure} The data binned finely into 0.02 $\cos\theta_\mathrm{CM}^{K}$~intervals were used to determine the differential cross section with respect to the Mandelstam variable, $t = (p_\gamma - p_K)^2$, where $p_\gamma$ and $p_K$ are the four-momenta of the photon beam and $K^+$ respectively. To account for the distribution of $t$ within each two-dimensional $W$ and $\cos\theta_\mathrm{CM}^{K}${} interval, events were generated assuming the differential cross section of the McCracken CLAS data~\cite{mccracken10}. For each interval of the BGOOD data in $W$ and $\cos\theta_\mathrm{CM}^{K}$, the mean value of $t$ was used as the central value, and the width was taken as $\sqrt{12}\times\mathrm{RMS}$, the full width of a uniform distribution with the same RMS. The BGOOD differential cross section data with respect to $t$ are shown for each $W$ interval in fig.~\ref{fig:fittingslope}. The function in eq.~\ref{eq:fitfunction} was fitted to the data to extrapolate the cross section to the minimum value of $t$ achievable for the given $W$ interval, $t_\mathrm{min}$ (occurring at $\cos\theta_\mathrm{CM}^{K}$ $= 1$), and to extract the slope parameter, $S$. \begin{equation}\label{eq:fitfunction} \frac{\mathrm{d}\sigma}{\mathrm{d}t} = \frac{\mathrm{d}\sigma}{\mathrm{d}t}\Big|_{t=t_\mathrm{min}}e^{S|t-t_\mathrm{min}|} \end{equation} \begin{figure} [h] \centering \vspace*{0cm} \resizebox{0.5\textwidth}{!}{% \includegraphics[width=\columnwidth,trim={0cm 2.5cm 0cm 3.5cm},clip=true]{KLambdaVstXFigged-eps-converted-to.pdf} } \caption{d$\sigma$/d$t$ versus $|t - t_\mathrm{min}|$ for intervals of centre of mass energy, $W$, labelled inset in MeV. Only the statistical error is shown and included in the fit. The red line is eq.~\ref{eq:fitfunction} fitted to the data. } \label{fig:fittingslope} \end{figure} Fig.~\ref{fig:cstmin} shows the differential cross section at $t_\mathrm{min}$ and the slope parameter $S$ versus $W$.
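Because the exponential fit function above is log-linear in $|t - t_\mathrm{min}|$, extracting the slope parameter $S$ and the extrapolated cross section at $t_\mathrm{min}$ reduces to a straight-line fit in log space. A minimal sketch on synthetic points (the values of $A$ and $S$ below are illustrative only, not fit results):

```python
import numpy as np

# Synthetic d(sigma)/dt points following the exponential fit function:
# dsigma/dt = A * exp(S * |t - t_min|), with hypothetical A and S.
A_true, S_true = 2.0, -2.5          # illustrative values only
t_rel = np.linspace(0.0, 0.6, 10)   # |t - t_min| in GeV^2
dsdt = A_true * np.exp(S_true * t_rel)

# The model is log-linear, so a least-squares straight line through
# log(dsigma/dt) recovers S (slope) and A (intercept).
S_fit, logA_fit = np.polyfit(t_rel, np.log(dsdt), 1)
A_fit = np.exp(logA_fit)            # cross section extrapolated to t_min
```

With measured points one would weight the fit by the statistical errors, as stated in the figure caption.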
The shape of the cross section is similar to that of the most forward $\cos\theta_\mathrm{CM}^{K}${} interval, with a dominant peak at 1720\,MeV. For the first 100\,MeV above threshold, $S$ remains positive. At higher energies, $S$ becomes increasingly negative, indicating the onset of $t$-channel $K$ exchange dominating the reaction mechanism. \begin{figure} [h] \centering \includegraphics[width=\columnwidth,trim={0cm 1.7cm 0cm 0.5cm},clip=true]{KL_CSAttmin_Updated-eps-converted-to.pdf} \caption{(a) $K^+\Lambda$ differential cross section, d$\sigma$/d$t$ extrapolated to $t_\mathrm{min}$ versus $W$. (b) The slope parameter $S$ versus $W$. } \label{fig:cstmin} \end{figure} \subsection{$\gamma p \rightarrow K^+\Lambda$ recoil polarisation} The weak decay of the $\Lambda$ allows access to the recoil polarisation via the decay distribution. The $\pi^0$ four-momentum from $\Lambda \rightarrow \pi^0 n$ was boosted into the $\Lambda$ rest frame and the $\pi^0$ direction relative to the reaction plane was determined (with the event counts denoted $N_{\uparrow/\downarrow}$). The recoil polarisation was measured according to eq.~\ref{eq:recpol}. The $\Lambda$ decay parameter used, $\alpha = 0.642 \pm 0.04$~\cite{pdg2018}, is the average value cited by the Particle Data Group prior to 2019\footnote{This older value of $\alpha$ was chosen for consistency, as the isobar models of Skoupil and Byd\v{z}ovsk\'{y}~\cite{skoupil16,skoupil18} shown in fig.~\ref{fig:recpol} are fitted to a combination of data which used this value. The value since 2019, $\alpha = 0.732 \pm 0.014$~\cite{pdg20}, would reduce all data points shown and associated errors by a factor of 0.877.}. \begin{equation}\label{eq:recpol} P_\Lambda = \frac{2}{\alpha}\frac{N_\uparrow - N_\downarrow}{N_\uparrow + N_\downarrow} \end{equation} Simulated data were used to determine the success rate of correctly determining $N_{\uparrow/\downarrow}$ per event, to quantify dilution effects which may have occurred due to the limited azimuthal angular resolution at forward angles.
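The recoil-polarisation estimator defined above, and the rescaling factor of 0.877 quoted in the footnote, can be checked numerically; the decay counts below are purely hypothetical:

```python
# Recoil polarisation from up/down decay counts, as in the estimator above.
def recoil_polarisation(n_up, n_down, alpha):
    return (2.0 / alpha) * (n_up - n_down) / (n_up + n_down)

ALPHA_OLD = 0.642   # PDG average before 2019, used in the text
ALPHA_NEW = 0.732   # PDG value since 2019

# Hypothetical counts purely for illustration.
p_old = recoil_polarisation(n_up=450, n_down=550, alpha=ALPHA_OLD)
p_new = recoil_polarisation(n_up=450, n_down=550, alpha=ALPHA_NEW)

# Since P is proportional to 1/alpha, switching alpha rescales every
# data point by alpha_old / alpha_new, the quoted factor of ~0.877.
scale = ALPHA_OLD / ALPHA_NEW
```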
A small correction as a function of $E_\gamma$ was determined. This was 5\,\% and 7\,\% at $E_\gamma = 914$\,MeV (threshold) and 1400\,MeV respectively. The recoil polarisation data are shown in fig.~\ref{fig:recpol}. The systematic uncertainties shown in table~\ref{table:syserror} and the fitting uncertainty mostly cancel in the asymmetry ratio. The remaining dominant uncertainty is the 6.2\,\% accuracy of $\alpha$. \begin{figure} [h] \centering \vspace*{0cm} \resizebox{0.9\columnwidth}{!}{% \centering \includegraphics{RecPol_Revised-eps-converted-to.pdf} } \caption{Recoil polarisation, $P_\Lambda$ for 0.9 $<$$\cos\theta_\mathrm{CM}^{K}$$ < 1.0$ (black circles). Previous data (statistical errors only) of McCracken \textit{et al.} (CLAS)~\cite{mccracken10} for $0.85 <$ $\cos\theta_\mathrm{CM}^{K}$ $< 0.95$ and Lleres \textit{et al.} (GRAAL)~\cite{lleres07} for approximately $0.77 <$ $\cos\theta_\mathrm{CM}^{K}$$ < 0.94$ shown as blue open squares and magenta open circles respectively. The two isobar models, BS1 and BS3, of Skoupil and Byd\v{z}ovsk\'{y}~\cite{skoupil16,skoupil18} are the dotted green and blue lines respectively.} \label{fig:recpol} \end{figure} This is the first data for $P_\Lambda$ in this most forward $\cos\theta_\mathrm{CM}^{K}${} interval (the previous data shown are at more backward angles described in the figure caption). $P_\Lambda$ is consistent with zero at threshold and becomes negative at higher energies, consistent with the isobar models BS1 and BS3~\cite{skoupil16,skoupil18}. The Bonn-Gatchina BG2019 solution prior to including this data gives a $\chi^2$ of 0.98 for the recoil asymmetry. When refitting with the new data as described above, the $\chi^2$ changes to 0.95. \section{Conclusions}\label{sec:conclusions} Differential cross sections for $\gamma p \rightarrow K^+\Lambda$ for $\cos\theta_\mathrm{CM}^{K}${} $> 0.9$ have been measured with high polar angle resolution from threshold to $W = 1870$\,MeV.
Consistency is observed between these data and the CLAS data of McCracken \textit{et al.}~\cite{mccracken10}, which is also supported by a dedicated Bonn-Gatchina PWA analysis. The high statistics provide constraints in determining the dominating $t$-channel $K$ and $K^*$ exchange at forward angles and low momentum transfer, and the $\cos\theta_\mathrm{CM}^{K}${} resolution renders the data particularly sensitive to intermediate high-spin states. Additionally, the recoil polarisation data for $K^+\Lambda$ are the first dataset at this most forward $\cos\theta_\mathrm{CM}^{K}${} interval. \section*{Acknowledgements} We thank the staff and shift-students of the ELSA accelerator for providing an excellent beam. We thank Dalibor Skoupil and Petr Byd\v{z}ovsk\'{y} for insightful input and comparison of the data to their isobar and RPR models, and Eberhard Klempt for help with the Bonn-Gatchina PWA. This work is supported by SFB/TR-16, DFG project numbers 388979758 and 405882627, the RSF grant numbers 19-42-04132 and 19-12-04132, the Third Scientific Committee of the INFN and the European Union’s Horizon 2020 research and innovation programme under grant agreement number 824093. P.L. Cole gratefully acknowledges the support from both the U.S. National Science Foundation (NSF-PHY-1307340, NSF-PHY-1615146, and NSF-PHY-2012826) and the Fulbright U.S. Scholar Program (2014/2015). \bibliographystyle{unsrt}
\section{Introduction} To accurately plan and evaluate safety measures in buildings and transportation systems, knowledge of the behavior of pedestrians in emergency situations is required. There are two major ways to acquire this information: conducting experiments or performing computer simulations. The problem with experiments is the immense effort necessary to recreate an appropriate situation. This is even more difficult in the planning phase, when a prototype would have to be constructed. In addition, many situations cannot be evaluated experimentally because the safety of the test subjects cannot be guaranteed. Simulations, on the other hand, are usually based on relatively simple models of pedestrian behavior and cannot capture the complexities of human thought processes and decision making. A common solution is to conduct simple experiments and simulations and attempt to derive generally applicable rules from them \cite{Chattaraj2009}. The generality makes these rules necessarily crude; an example would be the required exit width depending on the number of people in a building. Kobes et al. \cite{Kobes2009} propose the usage of so-called serious games to combine the benefits of simulations and experiments. Their system allows a test subject to try to escape from a simulated emergency situation using common video game techniques. The problem with this approach is that the user does not actually move, even though proprioception (i.e., the self-perception of motion) has been shown to improve navigational abilities. We therefore introduce a combination of an extended range telepresence system and a pedestrian simulation. The effectiveness of the system is shown by comparing different aids for finding fire exits. Such aids, like escape exit signs or guiding lines, can be seen and followed with varying ease.
Since, for example, guiding lines are visually intrusive and may be considered unaesthetic, a quantitative evaluation of the usefulness of different signage is desirable. \section{Combined System} This section presents the components of our experimental setup; the extended range telepresence system and the pedestrian simulation software. \subsection{Extended Range Telepresence} The extended range telepresence system allows a user to feel present in a remote or virtual environment (called the target environment) by locally reproducing perception for the user and remotely reproducing actions by the user. To achieve this, the user is wearing a head-mounted display (HMD) capable of displaying the target environment in 3D and playing back sound (cf. Figure \ref{fig:1}). The head-mounted display is fitted with additional sensors that allow its position and orientation in the room to be tracked. When the user takes a step forward, this movement will be registered and transferred to the target environment, and what the user sees will change accordingly. This process is augmented by an algorithm called Motion Compression \cite{Nitzsche2002,Nitzsche2004,Roessler2004}, which allows the target environment to be much larger than the user environment. The path of the user is curved to require less space while keeping the lengths and turning angles of the paths in both environments identical. It has been shown that users do not notice slight changes in curvature \cite{Nitzsche2003}. The image the user sees is then rotated slightly to steer the user along the calculated user path. \subsection{VISSIM} We have connected the pedestrian and vehicle simulation software VISSIM \cite{Fellendorf2010,VISSIM2010} to the extended range telepresence system, allowing us to simulate environments that include virtual agents (cf. Figure \ref{fig:2}). These virtual (simulated) agents react to the telepresent user as if he were a simulated agent, allowing him to become part of the simulation and interact with it.
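As a toy illustration of the idea behind Motion Compression (not the published algorithm), a straight target-path segment can be walked along a circular arc of identical arc length: the walked distance and turning are preserved in the target environment while the straight-line extent required in the user environment shrinks. All numbers below are illustrative assumptions:

```python
import math

def arc_endpoint(length, curvature):
    """Endpoint of a circular arc of given length and constant
    curvature, starting at the origin heading along +x.
    Toy model only, not the real Motion Compression optimisation."""
    if curvature == 0.0:
        return (length, 0.0)
    r = 1.0 / curvature
    phi = length * curvature          # angle turned along the arc
    return (r * math.sin(phi), r * (1.0 - math.cos(phi)))

# A 10 m straight target path walked on an arc of curvature 0.15 1/m:
# the endpoint lies closer than 10 m to the start, although the
# walked distance is unchanged.
x, y = arc_endpoint(10.0, 0.15)
chord = math.hypot(x, y)
```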
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure01.jpg} \caption{User in the telepresence system, wearing HMD and backpack computer for processing.} \label{fig:1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure02.png} \caption{Screenshot from the VISSIM microscopic traffic simulator.} \label{fig:2} \end{center} \end{figure} \subsection{Benefits} This setup has several benefits apart from proprioception. Three-dimensional vision with the head-mounted display allows distances to be judged naturally and increases realism. The user position is tracked, which allows the simple creation of detailed records of his movement. The orientation of the head-mounted display is tracked as well, which can be used to extract coarse focus-of-attention information, e.g., whether a fire exit is visible. Using a pedestrian simulation enables us to populate the simulated world with other humans without requiring further test subjects. This allows conducting experiments concerning the behavior of individuals in large crowds quickly and cheaply. \section{Route Choice Behavior in a Hotel Evacuation} \subsection{Scenario} The combined VISSIM-telepresence system is used to study the influence of different signage for finding fire exits in a virtual hotel scenario (Fig. \ref{fig:3}), which has the same layout as the scenario used by Kobes et al. in \cite{Kobes2007fire,Kobes2009,Kobes2010case,Kobes2010way}. The layout was enhanced with textures and furniture in order to realistically reproduce a typical hotel scenario. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure03.png} \caption{Orthogonal view of the 3D scenario.} \label{fig:3} \end{center} \end{figure} Using this scenario has two advantages: First, the layout of the chosen hotel is classified as complex \cite{Kobes2010case} and the choice of the nearest exit is not trivial.
Second, this scenario reproduces a real hotel, and data from a case study performed in the actual hotel are available in \cite{Kobes2010case}, so that we can compare our results with the real data. \subsection{Experiment Description} Preliminary experiments were conducted to check the user behavior concerning the virtual walls. In principle, the user could consciously decide to move through the virtual walls. Unlike an experiment designed to be done in front of a screen and controlled by keyboard, mouse or joystick, the telepresence system has no mechanism to prevent this. The preliminary experiments showed that the visual information that the user receives via the head-mounted display is sufficient to navigate in the virtual environment without colliding with the virtual walls (Fig. \ref{fig:4} shows exemplary trajectories of these experiments). However, without supporting information (cf. Fig. \ref{fig:5}), most test subjects were unable to find the nearest exit in these experiments. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure04.png} \caption{Example trajectories. Base picture adapted from \cite{Kobes2007fire}.} \label{fig:4} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure05.png} \caption{Screenshot of evacuation scenario without supporting information.} \label{fig:5} \end{center} \end{figure} A case study was designed to investigate the influence of the signage on finding the nearest exit in case of a hotel evacuation. As part of this study, we compared exit choice, travel times, walking distances, and walking speeds towards exits under the following conditions: There is a guiding line on the floor (Fig. \ref{fig:6}(a)). There are other persons (simulated agents) walking to the exit (Fig. \ref{fig:6}(b)). There is standard escape exit signage above head (Fig. \ref{fig:6}(c)). There is an evacuation floor plan in the room where the evacuation starts (Fig.
\ref{fig:6}(d)). \begin{figure}[htbp] \begin{center} \includegraphics[height=75pt]{figure06a.png} \hspace{6pt} \includegraphics[height=75pt]{figure06b.png} \hspace{6pt} \includegraphics[height=75pt]{figure06c.png} \hspace{6pt} \includegraphics[height=75pt]{figure06d.png} \caption{Screenshots of scenarios with supporting information: (a) guiding lines, (b) simulated agents, (c) escape exit signs, and (d) floor plan.} \label{fig:6} \end{center} \end{figure} \subsection{Experiment Participants} We introduced 20 participants, all male, between 21 and 32 years old, to the scenario. The participants had the opportunity to familiarize themselves with the telepresence system in a simpler scenario. All participants tested all four conditions. The participants started each test run at different positions to avoid learning effects. Moreover, five participants started with condition 1, five with condition 2, and so on, so that we could analyze the performance under the different signage conditions with and without learning effects separately. The participants were instructed to leave the building as fast as possible, as they would in a real evacuation. \subsection{Performance Measures} In order to evaluate the efficiency of the supporting information, exit choice, travel times, walking distances, and walking speeds towards exits were recorded. To quantify the subjective preference of the participants for each type of signage, a questionnaire was used. \section{Results} The participants were not able to find the nearest exit without supporting information. Therefore, the following results report the performance measures only for the scenarios with supporting information. \subsection{Objective Measures} \subsubsection{Nearest Exit Choice} The evaluation of the exit choice in Fig. \ref{fig:7} shows that the guiding lines are the most efficient signage for finding the nearest exit. With the other signage types, the nearest exit is chosen in only about 50\% of the test runs.
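Since the tracking system records the user position continuously, the walking distance and average speed listed among the performance measures above can be computed directly from the logged trajectory; a minimal sketch on a hypothetical $(t, x, y)$ log:

```python
import math

def distance_and_speed(track):
    """Walking distance and average speed from (t, x, y) samples."""
    dist = sum(
        math.hypot(x1 - x0, y1 - y0)
        for (_, x0, y0), (_, x1, y1) in zip(track, track[1:])
    )
    duration = track[-1][0] - track[0][0]
    return dist, dist / duration

# Hypothetical trajectory: 4 m along x, then 3 m along y, in 10 s.
track = [(0.0, 0.0, 0.0), (5.0, 4.0, 0.0), (10.0, 4.0, 3.0)]
dist, speed = distance_and_speed(track)
```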
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure07.png} \caption{Percentage of correct exit choice for each signage condition.} \label{fig:7} \end{center} \end{figure} \subsubsection{Travel Time} The time needed to find the exit in the evacuation scenario is shown in Fig.~\ref{fig:9a}. When only the first test runs are evaluated (i.e., the participants do not know the hotel scenario in advance), the guiding lines and the presence of other pedestrians lead to the shortest travel times, with average values of 91.6 sec. and 87.2 sec., respectively. The escape exit signs and the escape floor plan have the longest travel times. Note that there is a higher standard deviation across the participants when using the exit signs, whereas the floor plan leads to longer times for most participants. \begin{figure}[htbp] \begin{center} \subfigure[ ]{\includegraphics[width=0.450\columnwidth]{figure08.png} \hspace{6pt} \label{fig:9a} } \subfigure[ ]{\includegraphics[width=0.450\columnwidth]{figure09.png} \label{fig:9b} } \caption{Average duration for each signage condition using only the first runs of the participants \subref{fig:9a}. Average duration using all runs of the participants \subref{fig:9b}.} \end{center} \end{figure} The guiding lines and the presence of other pedestrians also lead to faster evacuations when considering the average duration over all test runs (Fig.~\ref{fig:9b}), although the duration of the evacuation using the exit signs and the floor plan is, as expected, shorter when the user knows the building in advance. \subsubsection{Walking Distance} Fig.~\ref{fig:10a} shows the average of the covered distances to the exits using only the first runs of the participants. The guiding lines and the presence of other pedestrians again lead to shorter walking distances than the exit signs and the floor plan. The same trend is observed when considering all test runs of the participants (Fig.~\ref{fig:10b}).
\begin{figure}[htbp] \begin{center} \subfigure[ ]{\includegraphics[width=0.450\columnwidth]{figure10.png} \label{fig:10a}} \subfigure[ ]{\includegraphics[width=0.475\columnwidth]{figure11.png} \label{fig:10b}} \caption{Average walking distance using only the first runs of the participants \subref{fig:10a}. Average walking distance using all runs of the participants \subref{fig:10b}.} \end{center} \end{figure} \subsubsection{Walking Speed} The learning effect is clearly visible in the average velocity of the participants at each test run in Fig. \ref{fig:12}. The average velocity in the first test run is significantly lower than in the other runs. However, no significant difference is observed between the second and subsequent runs. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.500\columnwidth]{figure12.png} \caption{Average velocity of participants at each run.} \label{fig:12} \end{center} \end{figure} \subsection{Subjective Measures} The evacuation scenario was found to be modeled realistically by almost all (19) participants. Most participants found the guiding lines to be the most efficient signal. Moreover, in case of fire, 19 participants would prefer the guiding lines and 7 participants would prefer a combination of guiding lines and exit signs (Table \ref{tab:1}). \begin{table}[htbp] \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline &Guiding &Exit &Sim. &Floor \\ & Lines & Signs & Agents & Plan \\ \hline Which aid seemed to provide the fastest way out? & 9& 7& 2& 2\\ \hline Which aid seemed the most useful? & 17& 2& 1& 0\\ \hline Which aid would you prefer in case of fire? & 19& 5& 2& 0\\ (Multiple answers possible) & & & & \\ \hline \end{tabular} \caption{Questionnaire used to evaluate the preference of the participants.} \label{tab:1} \end{center} \end{table} \subsection{Discussion} In our route choice behavior study, the guiding lines turned out to be the most efficient signage method for finding the nearest exit.
This signage condition achieved shorter times and walking distances than exit signs or a floor plan hanging on the wall. The evaluation of the questionnaires also showed that participants have a clear preference for this signage condition, especially in the case of a fire evacuation. These results are in agreement with studies reported in \cite{Kobes2007fire, Proulx2000, Quellette1993}, which indicate that photoluminescent low-level exit path markings are likely to be more effective than conventional escape route signs. The presence of other pedestrians turned out to be very beneficial for the evacuation in our study, leading to exit times close to those of the guiding lines. However, following other agents did not always lead the user, who may have started in a different room, to the nearest exit. In order to validate our extended range telepresence system as an adequate tool to perform such a route choice study, a comparison of our results with real data is necessary. For this purpose, we use the results from the case study in the hotel scenario presented in \cite{Kobes2010case}. In our experiments, not all participants chose the nearest exit; neither did they in the real experiments. The mean value of the covered distance to the chosen exit is 48.8 m in the real experiment, with a minimum value of 13.5 m and a maximum value of 83.2 m \cite{Kobes2010case}. The mean value of the covered distance in our experiments (considering only the first run) is 26.7 m, with minimum and maximum values of 13.5 m (following simulated pedestrians) and 54.5 m (following the exit signs), respectively. The mean and the maximum covered distance in extended range telepresence are somewhat lower than in the real experiments. However, all the values are within the range of distances achieved in the real experiments. The mean value of the walking velocity in the real experiments \cite{Kobes2010case} is 1.03 m/s, which is higher than the mean velocity in our experiments.
This difference may be due to the user being afraid of running outside the borders of the user environment or of damaging the carried equipment. The difference is irrelevant for our evaluation of route choice, as there are no differences in speed-limiting factors along the different exit paths. \section{Summary} A combination of a telepresence system and a microscopic traffic simulator has been introduced, and its efficacy for evaluating evacuation scenarios has been shown. As a first test scenario, the evacuation of a hotel using different kinds of signage has been evaluated. The results indicate that low-level exit path markings are the most efficient way of guiding people to an emergency exit, but also that following others is efficient as well. These results are consistent with previously performed real and virtual experiments, which validates the use of our telepresence system in evacuation studies, and also shows the extended possibilities of using pedestrian simulation software to add virtual agents. \section{Acknowledgements} This work was supported by the research project ``The Pedestrian Simulation VISSIM within a Telepresence System'' within the Central Innovation Programme for Small and Medium-sized Enterprises (ZIM) of the German Federal Ministry of Economics and Technology (BMWi). \nocite{_PED2008} \bibliographystyle{utphys2011b}
\section*{Methods} \textbf{Capacity of a memory unit:} Here we discuss the technical details of the quantum memory unit and how it uses topological planar codes to achieve a long encoded coherence time. These codes are amenable to experimental implementation, as they can be defined locally on a two-dimensional nearest-neighbour qubit array and have one of the highest fault-tolerant thresholds of any code. Illustrated in Figure \ref{fig:memorystick} is the structure and performance of a memory unit. The device encodes a single logical qubit of memory [Figure \ref{fig:memorystick}{\bf a.}]. The logical Pauli operators are chains of physical $X$ and $Z$ operations that connect the top and bottom (logical $X$) or the left and right (logical $Z$) edges of the lattice. Through simulation, we numerically determine both the fault-tolerant threshold for the memory unit [Figure \ref{fig:memorystick}{\bf b.}] and the expected failure rate as a function of QEC strength at a fixed physical error rate, $p$ [Figure \ref{fig:memorystick}{\bf d.}]. From the behaviour of the code for low values of the physical error rate, $p$, we can estimate the probability that a memory unit fails during one error correction cycle, $P_L$, as a function of the distance of the topological planar code, $d$ (an error correction cycle requires $d$ rounds of error correction). For an operational device, we assume the error rate for each physical gate in the quantum memory is $p$. The functional form for the failure of the code is given by \begin{equation} P_L \approx \alpha (\beta p)^{\frac{d+1}{2}}. \end{equation} We use the data from Figure \ref{fig:memorystick}, which simulates a full $d$ rounds of error correction, to estimate $\alpha \approx 0.3$ and $\beta \approx 70$.
The total number of physical qubits in the memory unit is $N = (2d-1)^2$ and the total time of a memory correction cycle is $T_{\text{corr}} = 6t d$, where $t$ is the operational time of a \emph{physical} quantum gate (initialisation, measurement or \textsc{cnot}), the factor of 6 comes from the six elementary gates necessary to perform syndrome extraction in the topological code, and we require $d$ rounds of error correction to correct for measurement errors. The total memory time of the unit, $T_{\text{mem}}$, is related to the per-error-correction-cycle failure probability, $P_L$, and the chosen permissible infidelity of the final entangled link, $P_{\text{link}} = 1-F$, where $F$ is the link fidelity (between memory units), \begin{equation} \begin{aligned} T_{\text{mem}} &= \frac{\log(1-P_{\text{link}})T_{\text{corr}}}{\log(1-P_L)} \\ &\approx \frac{6t(\sqrt{N}+1)P_{\text{link}}}{2\alpha}(\beta p)^{-(\frac{\sqrt{N}+3}{4})} \end{aligned} \label{eq:mem} \end{equation} Figure \ref{fig:memorystick}{\bf b.} shows the memory time for a device that has a $t=3.5\mu$s physical gate time (appropriate for optically coupled NV$^-$ \cite{N14}), as a function of the total number of physical qubits, $N$, and the desired final link infidelity, $P_{\text{link}}$. The contour where the memory unit can maintain coherence for one year and achieve the desired link fidelity is plotted with a heavy line. Similar plots can easily be obtained from Eqn.~\ref{eq:mem} for different physical gate times, $t$. As it is assumed that the physical system is at a fixed physical error rate $p=0.1$\% (or $p=0.001$\%) regardless of the intrinsic gate speed of the system, $t$, memory times will increase for slower systems. Ion trap computers will have a {\em longer} memory time than donor-based systems, as we assume {\em both} technologies can achieve a $p=0.1\%$ (0.001\%) error rate on all fundamental gates. In the main text we assume an $N=4225$ qubit memory unit for the optically coupled NV$^-$ system.
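Eqn.~\ref{eq:mem} is straightforward to evaluate numerically. The short sketch below is our own illustration, not the authors' code: it plugs the fitted constants $\alpha \approx 0.3$ and $\beta \approx 70$ and the NV$^-$ parameters quoted in the text into the exact (first) line of Eqn.~\ref{eq:mem}.

```python
import math

# Fitted constants transcribed from the text: alpha ~ 0.3, beta ~ 70.
ALPHA, BETA = 0.3, 70.0

def memory_time(N, p, t, P_link):
    """Memory time in seconds for an N-qubit planar-code memory unit."""
    d = (math.sqrt(N) + 1) / 2                   # code distance: N = (2d-1)^2
    P_L = ALPHA * (BETA * p) ** ((d + 1) / 2)    # failure per correction cycle
    T_corr = 6 * t * d                           # one cycle: d rounds x 6 gates
    # log1p keeps precision: P_L here is ~1e-20, far below double epsilon.
    return math.log1p(-P_link) / math.log1p(-P_L) * T_corr

# N = 4225 (d = 33), p = 0.1%, t = 3.5 us, P_link ~ 1e-10, as in the text.
days = memory_time(4225, 1e-3, 3.5e-6, 1e-10) / 86400
print(f"memory time ~ {days:.0f} days")
```

With these illustrative inputs the estimate comfortably exceeds the 40-day target memory time used for the sizing argument.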
Taking $T_{\text{mem}} \approx 40$ days as our target memory time, we find that the achievable link infidelity is approximately $P_{\text{link}} \approx 10^{-10}$ \footnote{We assume a 40 day storage time for a 20 day transit time to account for preparing and consuming the Bell states at the source or destination. This in practice can be reduced to 20 days by strategic choices of which Bell states to prepare and consume at given points in time, but this does not significantly change the size of each memorystick}. For all other technologies we recalculate the size of the memory unit to achieve the same infidelity and memory time given the physical gate time, $t$, and the physical error rate, $p$. \noindent \textbf{Lattice surgery operations.} The planar code (and all toric code derivatives) allows a logical two-qubit \textsc{cnot} gate to be executed as a transversal operation using individual \textsc{cnot} gates applied between corresponding qubits in each memory unit. While fault tolerant, this method may be difficult to implement owing to the difficulty of ensuring that each qubit in the 2D memory cell can interact with the corresponding qubit in another cell. A different approach, called \emph{lattice surgery}, partially solves this problem by realising a fault-tolerant \textsc{cnot} gate between two memory units using interactions only between qubits along an edge of each memory unit. Lattice surgery works by merging two separate lattices, each containing a single logical qubit encoded in the planar code, into a single oblong lattice, and then splitting this single planar code up again. The merging operation is performed by matching the edges of two distinct logical qubits and measuring code stabilisers spanning the lattice cells. This effectively reduces a two-qubit encoded system to a single encoded qubit.
This merging takes the state $\ket{\psi}_L\otimes \ket{\phi}_L = (\alpha\ket{0}_L+\beta\ket{1}_L)\otimes (\alpha'\ket{0}_L+\beta'\ket{1}_L)$ to $\alpha\ket{\phi}_L+\beta\ket{\bar{\phi}}_L=\alpha'\ket{\psi}_L+\beta'\ket{\bar{\psi}}_L$, where $\ket{\bar{A}} = \sigma_x\ket{A}$. The measurement of the stabilisers to perform a merge must occur $d$ times, where $d$ is the effective code distance of each planar code. This protects against faulty qubit measurements for each stabiliser measurement. Given that the quantum circuit required to measure the stabilisers for the planar code requires 6 physical gates, the merge operation requires a time of $T=6td$, for physical gate times $t$. The splitting operation is executed by physically measuring the qubits along the merged edge to divide the single lattice back into two individual lattices. The effect of a split operation is to take the single logical state encoded in the joint lattice, $\alpha\ket{0}_L+\beta\ket{1}_L$ to the two-qubit state, $\alpha\ket{00}_L+\beta\ket{11}_L$. Once again to protect against measurement errors, error correction of both lattices must be run for a total of $d$ cycles, requiring a total time of $T=6dt$ for the split operation. Given these transformations, we can construct a Bell state between two encoded memory units by initialising a $d\times d$ lattice holding a logical qubit in one memory unit in the $\ket{+}_L$ state and a logical qubit in the other memory unit in the $\ket{0}_L$ state, merge the edges of the lattices across the optical interface between units to form a single state $\ket{+}_L$ in a $2d\times d$ distance lattice, and then split them again to create the state $\left(\ket{00}_L+\ket{11}_L\right)/\sqrt{2}$, with one logical qubit held in each memory unit. This state can be manipulated through transversal Hadamard operations on each memory cell and/or $X_L$ and $Z_L$ to any of the three other Bell states in either the $X$- or $Z$-basis. 
The total time for the split/merge operation will be $T=12dt= 6(\sqrt{N}+1)t$ for a physical gate time of $t$ and a memory cell containing $N$ qubits. For the NV$^-$ design described in the main text, $t=3.5\mu$s and for $N=4225$, $T\approx 1.4$ms. \noindent \textbf{Network operational procedures.} The fixed constraints on the bandwidth of a link are the latency of the ship, the capacity of a memory unit, and the physical gate time, from which we can derive additional operational procedures and hardware development goals. A total of seven shipping containers are utilised for each ``online'' pair. Two units are permanently located at each shipping terminal. Three mobile units rotate locations; at any point in time, one is at each terminal ($A$ or $B$) and the third is aboard ship. A stationary unit sitting at terminal $A$ is entangled with a mobile memory unit at terminal $B$, and this pair is used as the online pair for supplying terminal-to-terminal entanglement to other parts of the network. After the remote entanglement supply in the mobile unit is exhausted, it will be re-entangled with another stationary memory unit at its current location. A second mobile memory unit is aboard ship, entangled with a stationary memory unit at the shipping terminal from which it departed, carrying entanglement from $A$ to $B$. A third mobile memory unit at $B$ is creating entanglement with a stationary local partner in preparation for shipping. This ensures that ships are never transporting inactive (unentangled) memory units. The fixed 20-day transit time for the Japan-U.S. link serves as an upper bound for completing the entanglement of a mobile memory unit with a stationary memory unit, and for consuming the entanglement after shipping.
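The quoted timings can be cross-checked with one line of arithmetic each. This is our own consistency check, not material from the paper; the Bell-pair time $T = 12dt$ and the memorystick capacity of roughly 12.7 KEb are taken from the text.

```python
d, t = 33, 3.5e-6          # code distance and physical gate time (NV-)
T_bell = 12 * d * t        # merge (6dt) plus split (6dt), in seconds
pairs = 12.7e3             # ~12.7 KEb of entanglement per memory unit

print(f"{T_bell * 1e3:.2f} ms per logical Bell pair")   # ~1.4 ms
print(f"{pairs * T_bell:.0f} s per memorystick")        # ~18 s
```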
The $T = 1.4$ms logical Bell pair creation time in the main text arises from a surface code distance $d = 33$ and gate operation time $t = 3.5\mu$s, for an NV$^-$ optical implementation \cite{N14}, and assumes that inter-container operations can be executed at the same rate as operations local to each memory unit. At this operation rate, the entanglement of a memorystick containing approximately 12.7KEb (Kilo-Entangled-bit) in the NV$^-$ optical approach is created or consumed in 18s, and at full rate an entire shipload of entanglement would be consumed in about a day. Thus, inter-container operations may be $100\times$ slower without impacting the performance even if only one container at a time out of an entire shipload is used online. The denser memory subsystems, differing physical gate times, and varying code distances for other options in Table 1 will result in different demands on the inter-container interfaces. Because of the generic nature of the created entanglement, slow inter-container interfaces can be compensated for by having more than a single container online. \section*{Funding} SJD acknowledges support from the JSPS Grant-in-aid for Challenging Exploratory Research and NICT, Japan. RV and SJD acknowledge support from JSPS KAKENHI Kiban B 25280034. RV acknowledges that this project has been made possible in part by a gift from the Cisco University Research Program Fund, a corporate advised fund of Silicon Valley Community Foundation. ADG acknowledges the ARC for financial support (DP130104381). AMS acknowledges support from NICT, Japan. \section*{Author contributions} SJD and ADG conceived the idea. AMS undertook the numerical simulations. RV devised the network protocol. All authors contributed to the writing of the manuscript. \section*{Competing financial interests} The authors declare no competing financial interests. 
\section*{Data and materials availability: } All data needed to evaluate the conclusions in the paper are present in the paper and Supplementary Materials. Further information pertaining to the theoretical calculations reported in this work will be made available by the authors upon request. \bibliographystyle{unsrt}
\section{Introduction} The dynamical behavior of crowds has attracted many physicists over the last decades for its nontrivial characteristics \cite{ped,ped2}. The motion of pedestrians can be regarded as a problem of a many-body system of ``self-driven'' particles. In order to investigate the collective phenomena of the system, many microscopic models have been developed: the social force model \cite{SF}, the floor field (FF) model \cite{FF,FFc,FF2,FFy,FFy2,FFy3,PFF,AFF,FFF}, the lattice gas model \cite{LG}, etc. In addition to the simulations, much effort has also been devoted to experimental studies \cite{ex,ex2}. In this research field, the evacuation of crowds has been vigorously studied, since it is of great importance for designing buildings properly for the case of emergency, in the context of risk management \cite{man}. One remarkable phenomenon observed during evacuation is the clustering of pedestrians at bottlenecks such as exits (\textit{arching}). When more than one pedestrian tries to move to the same place, a \textit{conflict} occurs, decreasing the outflow of pedestrians. The effect of the conflict is a dominant factor in the total evacuation time; however, only a few theoretical analyses have been performed so far \cite{FF2,FFy,FFy2,FFy3}. In previous studies, the evacuation process from a single room has mainly been investigated. However, in an actual emergency, the pedestrian flow passes many bottlenecks and merges together toward the exit. To the authors' knowledge, no systematic approach to evacuation from complex buildings has been proposed. This study provides a first step toward the understanding of this problem. First, as illustrated in Fig. \ref{concept}, we abstract the most important factors, i.e., the bottlenecks and their connectivity, as a network. By considering such a general network, our results can be applied to a broad range of practical problems.
In this study, in contrast to most other studies of networks, each node itself has complex dynamics; we therefore begin by focusing on a single segment of the network. \begin{figure}[htbp] \begin{center} \includegraphics[width=70mm]{concept.eps} \end{center} \caption{Schematic diagram of pedestrian evacuation from a building. A single segment enclosed with dotted lines is mainly focused on.} \label{concept} \end{figure} In this paper, the overall argument is based on the FF model, which is one of the well-established cellular automaton models for describing pedestrian dynamics. The model is convenient not only for its simplicity and ease of use, but also for its extensibility \cite{FFc,FF2,FFy,FFy2,FFy3,PFF,AFF,FFF}. The effect of the conflict was first implemented in the FF model by introducing the friction parameter in Refs. \cite{FFc,FF2}. In the present study, we use a more general version of the friction parameter, namely, the \textit{friction function}, which was first proposed in Ref. \cite{FFy2}. The motion of pedestrians involved in a conflict is canceled with a certain probability determined by the friction function, which controls the strength of clogging and sticking among pedestrians. In addition to exits, we set an entrance providing the system with pedestrians with a certain probability every time step. The stochastic entrance allows us to control the inflow of pedestrians \cite{FFy3,PFF}, and by regarding it as the inflow from an exit of the previous bottleneck, we can evaluate the effect of the connectivity of the bottlenecks. The rest of this paper is organized as follows. Section \ref{mod} gives the definition of the model. In Sec. \ref{sim}, simulation results are shown. To explain the phenomenon analytically, we propose the second-order cluster approximation in Sec. \ref{theo}. Finally, we summarize the argument in Sec. \ref{con}.
\section{Model}\label{mod} \subsection{Floor field model} We consider a two-dimensional lattice representing a room with an entrance and an exit, consisting of $N \times N$ sites labeled ($i,j$) ($i,j=1,2,\cdots,N$). Each site can contain at most one pedestrian. In every time step, pedestrians choose one destination site out of the five neighboring sites, including the present site: ($i,j$), ($i+1,j$), ($i-1,j$), ($i,j+1$), and ($i,j-1$) (see Fig. \ref{jump}), according to two types of FFs. One of the FFs is the static FF ($S_{ij}$), describing the shortest distance to the exit site, and the other is the dynamic FF ($D_{ij}$), expressing the total number of pedestrians who have visited the site. The dynamic FF has the dynamics of diffusion and decay, unlike the static FF \cite{FF}. The transition probability $p_{ij}$ for a jump to the neighboring site ($i,j$) is determined by the following expression: \begin{equation} p_{ij} = Z \xi_{ij}\exp{(-k_sS_{ij}+k_dD_{ij})}, \end{equation} where $k_s$ and $k_d$ are non-negative sensitivity parameters, and $Z$ stands for the normalization factor. $\xi_{ij}$ returns $0$ for forbidden transitions, such as to a wall, an obstacle, or a neighboring occupied site, and returns $1$ for other transitions. In this paper, the static FF is given by the $L^2$ norm as \begin{equation} S_{ij} = \sqrt{(x_{ij}-x_{\rm{ex}})^2+(y_{ij}-y_{\rm{ex}})^2}, \end{equation} where ($x_{ij},y_{ij}$) and ($x_{\rm{ex}},y_{\rm{ex}}$) are the coordinates of the site ($i,j$) and the exit site, respectively. On the other hand, we ignore the effect of the dynamic FF ($k_d=0$), since it does not greatly affect the arguments in this work \cite{ig}. \begin{figure}[htbp] \begin{center} \includegraphics[width=70mm]{jump.eps} \end{center} \caption{Update rules. Each pedestrian can hop to its neighboring sites or stay at its present site in a time step.
Pedestrians enter the area from the entrance with the probability $\alpha$, and leave the area from the exit with the probability $\beta=1$.} \label{jump} \end{figure} \subsection{Conflict resolution} Owing to the use of parallel update, it can happen that more than one pedestrian tries to choose the same site, which is called a conflict. The simplest solution of the conflict is to choose one pedestrian randomly to move to the site and keep the other pedestrians at their present sites. However, in actual situations, pedestrian flow is often clogged by more than one pedestrian moving at the same time. To model this effect, the friction parameter has been introduced \cite{FFc,FF2}, and many significant results have been obtained so far. In a recent study \cite{FFy2}, the friction function has been proposed to describe the effect more precisely. In the friction function, the number of pedestrians involved in the conflict, $k$, is reflected in the resolution probability. In this paper, we assume $\phi$ in the following form, as in Refs. \cite{FFy2,FFy3}: \begin{equation} \phi(\zeta,k) = 1 - (1-\zeta)^k-k\zeta(1-\zeta)^{k-1}. \end{equation} Here, $\zeta \in [0,1]$ is the friction coefficient representing the strength of the clogging irrespective of $k$. This $\phi$ is a monotonically increasing function of $k$ and $\zeta$. Note that this choice of $\phi$ is one of several possible expressions. If one takes the friction function to be independent of $k$, it coincides with the friction parameter. Each conflict is resolved with probability $1-\phi(\zeta,k)$, and one of the $k$ pedestrians is randomly selected to move to the site; otherwise, the conflict remains. \subsection{Entrance and exit} In each time step, a pedestrian is provided to the entrance site with probability $\alpha \in [0,1]$ if the site is empty, and removed from the exit site with probability $\beta$. (See Fig. \ref{jump}.)
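For concreteness, the friction function and one conflict-resolution step can be written out as a short sketch. This is our own minimal illustration of Eq. (3) and the rule above, not the simulation code of Refs. \cite{FFc,FF2,FFy2}:

```python
import random

def phi(zeta, k):
    """Friction function of Eq. (3): probability that a k-way conflict persists."""
    return 1 - (1 - zeta) ** k - k * zeta * (1 - zeta) ** (k - 1)

def resolve_conflict(zeta, contenders, rng=random):
    """Return the pedestrian allowed to move, or None if the conflict remains."""
    k = len(contenders)
    if k > 1 and rng.random() < phi(zeta, k):
        return None                      # clogging: nobody moves this step
    return rng.choice(contenders)        # one pedestrian is chosen at random

# Limiting cases: no friction for zeta = 0, no conflict for a single mover,
# and certain clogging for zeta = 1 with k >= 2.
assert phi(0.0, 3) == 0.0
assert phi(0.8, 1) == 0.0
assert phi(1.0, 2) == 1.0
```

Note that $\phi(\zeta,1)=0$ for any $\zeta$, so a lone pedestrian always moves, consistent with the definition of a conflict.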
In this paper, we assume that each room has enough space that pedestrians at the exit site are smoothly accepted into the next room, namely, $\beta=1$. \begin{figure*}[tbp] \begin{center} \includegraphics[width=160mm]{fd.eps} \end{center} \caption{(Color online.) Fundamental diagram of the system in the competitive parameter regime [(a) $\zeta=0.8$] and the cooperative parameter regime [(b) $\zeta=0.0$]. Each plot is obtained by setting the inflow probability $\alpha=0.1,0.2,\cdots,0.6$ and averaging the flux and density over 100 time steps.} \label{FD} \end{figure*} \begin{figure}[tbp] \begin{center} \includegraphics[width=70mm]{sche.eps} \end{center} \caption{(Color online.) Snapshots of the simulations. A green site (at the top) and a red site (at the bottom) indicate the entrance and exit sites, respectively. With the same inflow rate, one can observe the free-flow phase (left) and the congestion phase (right) of pedestrians, represented by blue circles.} \label{sch} \end{figure} \section{Simulations}\label{sim} In the following, we set $k_s=10$ \cite{ks}. The dimensions of the simulation area are $25\times 25$, and one entrance site and one exit site are set at $(13,25)$ and $(13,1)$, respectively. In this section we show some simulation results, varying the inflow probability $\alpha$ and the friction coefficient $\zeta$. \subsection{Metastability of pedestrian flow} In Fig. \ref{FD}, the average pedestrian outflow through the exit, $q$, is plotted. Here, the density $\rho$ is defined as the number of pedestrians in the room divided by the number of sites in the area, $25 \times 25$. We performed simulations for 100000 time steps for each inflow probability $\alpha = 0.1, 0.2, \cdots, 0.6$, with the initial condition that no pedestrian is in the system. The simulation results for the flux and density are averaged over every $100$ time steps.
The relation between flux and density is often referred to as the \textit{fundamental diagram} in the context of traffic flow, and in vehicular traffic a \textit{metastable state} is observed in the fundamental diagram. The metastable state is an unstable state with high flux and density that eventually falls to the jammed state; in a certain density regime, two fluxes can be observed at the same density, corresponding to the metastable state and the jammed state. Interestingly, the metastability is observed in this problem as well. While in vehicular traffic the metastability comes from the inertia of vehicles, here the conflict plays the essential role. If a conflict occurs at the exit, the outflow decreases; therefore, even for a small density of pedestrians in the system, the jammed flux can be observed owing to the concentration of pedestrians at the exit. On the other hand, if the pedestrians are dispersed and in the free-flow state, the system can maintain a large flux. This is the mechanism by which the metastability is induced at the pedestrian bottleneck. Consistent with these arguments, the metastability cannot be observed when $\zeta=0$ [Fig. \ref{FD}(b)]. Next, let us explain the fundamental behavior of the system. First, the free-flow phase (see Fig. \ref{sch}) can be observed for every $\alpha$ and $\zeta$. In this phase, a large $\alpha$ directly leads to a large flux, corresponding to the linear relation between $q$ and $\rho$. On the other hand, when conflicts occur at the exit, the rate of outflow shrinks. If this outflow is smaller than the inflow (for large $\alpha$ and $\zeta$), the density of pedestrians increases, leading to congestion [the congestion phase shown in Fig. \ref{sch} (right)]. In contrast, if the inflow is not large enough, the conflicts disappear and the free flow is recovered.
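The free-flow/congestion mechanism can be illustrated with a deliberately crude toy model. The sketch below is entirely our own simplification, not the FF simulation of this paper: the room is collapsed to a single exit cell fed by its three neighbours, upstream dynamics are replaced by independent arrivals with probability $\alpha$, and conflicts at the exit are resolved with the friction function. Even this caricature shows strong friction ($\zeta=0.8$) throttling the outflow at high inflow, while the frictionless system does not.

```python
import random

def phi(zeta, k):
    # Friction function of Eq. (3).
    return 1 - (1 - zeta) ** k - k * zeta * (1 - zeta) ** (k - 1)

def toy_outflow(alpha, zeta, steps=20000, seed=1):
    """Mean outflow per step of one exit cell fed by three neighbouring cells."""
    rng = random.Random(seed)
    occupied = [False, False, False]     # the cells adjacent to the exit
    out = 0
    for _ in range(steps):
        # crude inflow: each empty neighbour fills with probability alpha
        occupied = [o or (rng.random() < alpha) for o in occupied]
        movers = [i for i, o in enumerate(occupied) if o]
        k = len(movers)
        if k == 0:
            continue
        if k == 1 or rng.random() >= phi(zeta, k):
            occupied[rng.choice(movers)] = False   # winner exits (beta = 1)
            out += 1
    return out / steps

q_free = toy_outflow(alpha=0.6, zeta=0.0)   # frictionless: high outflow
q_clog = toy_outflow(alpha=0.6, zeta=0.8)   # strong friction: clogged exit
assert q_free > q_clog
```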
Here, if we take $\alpha$ as a controllable parameter for a given $\zeta$, how can we optimize the inflow parameter? The answer is to keep the inflow lower than the critical rate, which prevents the conflict at the exit from growing. We discuss this issue in detail in the following sections. \begin{figure*}[htbp] \begin{center} \includegraphics[width=160mm]{randf.eps} \end{center} \caption{(Color online.) Pedestrian flux and density in the steady state (st.) and transient state (tr.) vs $\alpha$. Equations (5) and (12) derived in Sec. \ref{theo} are also shown as $q_f$ theo. and $q_c$ theo., respectively. The curves are added to improve visibility in (b). } \label{RF} \end{figure*} \begin{figure}[htbp] \begin{center} \includegraphics[width=70mm]{acr.eps} \end{center} \caption{(Color online.) Phase diagram obtained by simulations (st.). The red line indicates the theoretical critical condition (\ref{a}).} \label{acr} \end{figure} \subsection{Critical phenomenon related to $\alpha$ and $\zeta$} In this subsection, we analyze the phase transition from free flow to congestion in detail. Figure \ref{RF} shows the density and flux in the steady state and the \textit{transient} state. Here, the transient state is defined as the state of the system, started from the free-flow initial condition (a room with no pedestrians), after a finite number of time steps; it is introduced because, even in the congestion parameter regime, the system may remain in the metastable state for a finite number of time steps before the phase transition occurs. Hence the expected value of the flux (density) in the transient state is larger (smaller) than that in the steady state. These kinds of quantities are also important because the system does not always reach the stationary state in an actual evacuation of crowds. Here, we summarize the simulation conditions in Table \ref{simc}. In the simulations, each plot of the transient state (tr.)
is obtained by averaging the flux or density over $t_{\rm{max}}=100000$ time steps and $100$ samples \cite{tr}. On the other hand, plots for the steady state (st.) are calculated by averaging over 1000000 time steps, from $t=100000$ to $t_{\rm{max}}=1100000$. For the st. plots, we adopted the initial condition that pedestrians occupy all the available sites, to ensure the occurrence of congestion in the corresponding parameter regime. In Fig. \ref{RF} (b), the density curves jump at each critical $\alpha$, corresponding to the onset of congestion. Correspondingly, the pedestrian flux [Fig. \ref{RF} (a)] follows two distinct branches: in the free-flow phase, the flux is determined only by the inflow probability, whereas in the congested parameter regime, the flux is determined by the outflow. These two pedestrian fluxes are evaluated theoretically in the next section. Moreover, the simulation results for the steady states imply that the situation near the exit can be assumed to be independent of the inflow probability in the congested situation. On the other hand, one can see a peak of the flux in the transient state. Since the system does not always fall into congestion even in the congested parameter regime (metastability), the expected flux is higher than the congested flux for $\alpha$ around the critical value. If the inflow rate is large enough, the probability of falling into the congested situation increases, and thus the average flux decreases. Figure \ref{acr} shows the phase diagram of the system. For each $\zeta$, the upper limit of the inflow rate that maintains free flow is depicted, together with the theoretical line derived in Sec. \ref{cc}. From a practical standpoint, it corresponds to the criterion for avoiding congestion. \begin{table}[htb] \begin{center} \caption{Simulation conditions.} \begin{tabular}{c||c|c} \hline\hline &st. & tr.
\\ \hline averaging time steps&$t=100000-1100000$ & $t=1-100000$ \\ number of samples&$1$ & $100$ \\ initial condition&$\rho=1$ & $\rho=0$\\ \hline\hline \end{tabular} \label{simc} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=70mm]{mer.eps} \end{center} \caption{(Color online.) Snapshots of the merging. (a) Center (ce) connection, (b) corner (co) connection.} \label{ma} \end{figure} \subsection{Merging} Buildings have branching passages, and the merging of pedestrian flows is an important and complex problem in architectural design. In this subsection, we present simulation results that reveal a paradoxical effect of the local improvement of pedestrian flow. As systems more macroscopic than those in the previous subsections, we consider systems with three rooms and two types of connections: the center (ce) connection [Fig. \ref{ma}(a)] and the corner (co) connection [Fig. \ref{ma}(b)]. Here, two rooms are connected to a room with an exit site, like an entrance hall, and we do not supply pedestrians to the rooms after setting the initial conditions. In Ref. \cite{FFy}, it has been demonstrated that exits at a corner have a larger outflow capacity than those in the center, because they have only two neighboring sites and conflicts are reduced. Therefore, at first sight, the corner connection seems better in the sense of swift evacuation. In fact, however, the opposite is true, as shown in Fig. \ref{merging}. The simulations were performed with an initial condition in which 50 pedestrians are randomly distributed in each of the two rooms other than the entrance hall. As expected, the evacuation time from each room to the entrance hall is improved by setting the connection at the corner, namely, $t_{co}<t_{ce}$; however, the total evacuation time worsens as $\zeta$ increases. Accordingly, the ratio of the local evacuation times $t_{co}/t_{ce}$ decreases, while that of the total evacuation times $T_{co}/T_{ce}$ increases.
This phenomenon is explained as follows. The total evacuation time is determined only by the outflow from the exit, and the local pedestrian flow into the entrance hall does not influence the total outflow directly. Furthermore, a large inflow into the entrance hall increases the pedestrian density near the exit, which decreases the outflow. In other words, if the local inflow is reduced by a strong bottleneck, the total outflow is, conversely, improved. Thus, we can conclude that pedestrian flow should be dispersed not only spatially but also temporally, and strong bottlenecks might be used for the total optimization. Note that here we considered situations in which the effective inflow rate into the entrance hall is larger than the exit capacity, namely, the system as a whole is in the congestion phase. If the exit is wide enough, the local optimization directly improves the total evacuation time. \begin{figure*}[tp] \begin{center} \includegraphics[width=160mm]{merge.eps} \end{center} \caption{(Color online.) (a) Average evacuation time and (b) ratios of evacuation times. $T$ and $t$ are the average total and local evacuation times, and the subscripts ``co" and ``ce" stand for the corner connection and the center connection, respectively. Each plot is calculated by averaging over 1000 samples.} \label{merging} \end{figure*} \section{Theoretical Analyses}\label{theo} Let us analyze the pedestrian outflow in each phase and its critical conditions, focusing on a single bottleneck. In this section, we consider the limit $k_s\rightarrow \infty$, where pedestrians surely move to the most desirable site. \subsection{Free-flow phase} In the free-flow phase, the pedestrian flux is determined by the inflow. First, we consider a balance equation at the entrance site: \begin{equation} \alpha (1-\rho_{\rm{en}}) = \rho_{\rm{en}}. \end{equation} Here, $\rho_{\rm{en}}$ is the probability of finding a pedestrian at the entrance site.
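As a cross-check of the balance equation above, the entrance site in the $k_s\rightarrow\infty$ limit can be mimicked by a toy two-state Markov chain (a minimal Python sketch, not the authors' full floor-field model): the site fills with probability $\alpha$ when empty and empties with probability $1$ when occupied, so the stationary occupancy should solve $\alpha(1-\rho_{\rm en})=\rho_{\rm en}$, i.e. $\rho_{\rm en}=\alpha/(1+\alpha)$.

```python
import random

# Toy two-state Markov chain for the entrance site (illustrative sketch):
#   empty    -> occupied with probability alpha (a pedestrian enters),
#   occupied -> empty with probability 1 (k_s -> infinity: the pedestrian
#               surely leaves the entrance site in the next time step).
def entrance_site_stats(alpha, steps, seed=0):
    rng = random.Random(seed)
    occupied = False
    occ_count = 0   # time steps in which the site is occupied
    inflow = 0      # number of pedestrians that entered
    for _ in range(steps):
        if occupied:
            occupied = False            # leaves with probability 1
        elif rng.random() < alpha:
            occupied = True             # enters with probability alpha
            inflow += 1
        occ_count += occupied
    return occ_count / steps, inflow / steps

alpha = 0.5
rho_sim, q_sim = entrance_site_stats(alpha, 200000)
rho_theory = alpha / (1.0 + alpha)      # stationary occupancy, = 1/3 here
q_theory = alpha / (1.0 + alpha)        # flux q_f = alpha*(1 - rho_en)
```

The simulated occupancy and flux both converge to $\alpha/(1+\alpha)$, matching the stationary solution of the balance equation.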
Since we assume $k_s\rightarrow \infty$, a pedestrian at the entrance site surely leaves the site (with probability $1$) in the next time step. By solving the equation, we can evaluate the flux $q_f$: \begin{equation} q_{f} = \alpha(1-\rho_{\rm{en}})=\frac{\alpha}{1+\alpha}. \end{equation} This expression is shown in Fig. \ref{RF} (a) and agrees well with the simulation results. \subsection{Congestion phase} In the congested situation, the area near the exit is almost fully occupied. This fact enables us to estimate the outflow in the steady state. In previous studies, the pedestrian density at the neighboring sites of the exit was approximately assumed to be $1$ \cite{FF2,FFy,FFy2}. However, when the effect of the conflict is strong, this assumption does not give a good estimate. In particular, since in this paper the effect of the conflict depends on the number of pedestrians involved, we have to take the configurations of pedestrians at the exit into consideration. Hence, we adopt a second-order approximation here. In this approximation, we assume the probability of finding a pedestrian at the sites from which the exit is reachable in two hops (the second neighboring sites) to be $1$ (see Fig. \ref{appr}). Then, the states of the neighboring sites are characterized by four occupation numbers $A,B,C,$ and $E$, which take the value $0$ (empty) or $1$ (occupied). By considering the transition probabilities among these states, we can obtain the probability distribution $P^{E}_{ABC}$ of finding each state in the steady state. Here, the superscript corresponds to the occupation number of the exit, and the subscripts indicate those of its neighboring sites, as shown in Fig. \ref{appr}. To reduce the dimension of the transition matrix, we use the following facts. First, we can easily find $P^{0}_{000}=0$ and $P^{1}_{111}=0$: since no configuration of pedestrians can result in these states in the next time step, they are not realized in the stationary state.
Furthermore, by symmetry of the system, the equations \begin{eqnarray} P^{0}_{100}&=&P^{0}_{001},\\ P^{0}_{110}&=&P^{0}_{011},\\ P^{1}_{100}&=&P^{1}_{001},\\ P^{1}_{110}&=&P^{1}_{011}, \end{eqnarray} are satisfied. With the normalization condition, the transition matrix is summarized as follows: \begin{widetext} \begin{equation} \left( \begin{array}{c} P^{0}_{100} \\ P^{0}_{010} \\ P^{0}_{110} \\ P^{0}_{101} \\ P^{0}_{111} \\ P^{1}_{000} \\ P^{1}_{100} \\ P^{1}_{010} \\ P^{1}_{110} \\ P^{1}_{101} \\ \end{array} \right) = \left( \begin{array}{ccccccccccc} 0 &0 &0 &0 &0 &\frac{1}{4}\phi_2^2 &\frac{1}{2}\phi_2^2 &0 & 0&0&\\ 0 & 0& 0& 0& 0&\frac{1}{4}\phi_2^2 &0 &\phi_2^2 &0 &0 &\\ 0 &0 & \phi_2^2& 0& 0&\frac{1}{2}\phi_2\tilde{\phi}_2 & \frac{1}{2}\phi_2\tilde{\phi}_2&\phi_2\tilde{\phi}_2 & \phi_2&0 &\\ 0 &0 &0 &\phi_2\phi_3 &0 &\frac{1}{4}\phi_3+\frac{1}{2}\phi_2\tilde{\phi}_2 & \phi_3+\phi_2\tilde{\phi_2}&0 &0 &\phi_3 &\\ 0 &0 &2\phi_2 \tilde{\phi}_2 & \phi_2\tilde{\phi}_3 &\phi_3 &\frac{3}{4}\tilde{\phi}_2^2+\frac{1}{4}\tilde{\phi}_3 &\tilde{\phi}_2^2+\tilde{\phi}_3 &\tilde{\phi}_2^2 &2\tilde{\phi}_2 &\tilde{\phi}_3 &\\ \phi_2^2 &\phi_2^2&0 &0 &0 &0 &0 &0 & 0 &0&\\ \frac{1}{2}\phi_3+\frac{1}{2}\phi_2\tilde{\phi}_2 & \phi_2\tilde{\phi_2}&\frac{1}{2}\phi_2\tilde{\phi_2}&\frac{1}{2}\tilde{\phi}_2\phi_3 &0 &0 &0 &0 & 0& 0&\\ \phi_2\tilde{\phi}_2 &0 &\phi_2\tilde{\phi}_2 &0 & 0& 0& 0& 0& 0&0 &\\ \frac{1}{2}\tilde{\phi}_2^2+\frac{1}{2}\tilde{\phi}_3 &0 &\frac{1}{2}\tilde{\phi}_2^2&\frac{1}{2}\tilde{\phi}_2\tilde{\phi}_3 & \frac{1}{3}\tilde{\phi}_3 & 0& 0& 0& 0&0 &\\ 0 &\tilde{\phi}_2^2 &\tilde{\phi}_2^2 &0 &\frac{1}{3}\tilde{\phi_3} &0 & 0& 0& 0&0 & \end{array} \right) \left( \begin{array}{c} P^{0}_{100} \\ P^{0}_{010} \\ P^{0}_{110} \\ P^{0}_{101} \\ P^{0}_{111} \\ P^{1}_{000} \\ P^{1}_{100} \\ P^{1}_{010} \\ P^{1}_{110} \\ P^{1}_{101} \\ \end{array} \right)\label{trM} \end{equation} \begin{equation} 
2P^{0}_{100}+P^{0}_{010}+2P^{0}_{110}+P^{0}_{101}+P^{0}_{111}+P^{1}_{000}+2P^{1}_{100}+P^{1}_{010}+2P^{1}_{110}+P^{1}_{101}=1.\label{nc} \end{equation} Note that the abbreviated notations $\phi_2 = \phi(\zeta,2)$, $\phi_3 = \phi(\zeta,3)$, $\tilde{\phi}_2=1-\phi_2$, and $\tilde{\phi}_3=1-\phi_3$ are used. Then, the pedestrian flux is given by \end{widetext} \begin{equation} q_{c} = P^{1}_{000}+2P^{1}_{100}+P^{1}_{010}+2P^{1}_{110}+P^{1}_{101}. \end{equation} This expression is compared with simulation results in Fig. \ref{RF} (a). When the effect of the conflict is dominant, the error becomes large: in this parameter region, pedestrians are not supplied smoothly to the second neighboring sites because of the strong friction, and thus the assumption of the approximation is not satisfied. On the other hand, when the effect of the conflict is not strong, the approximation gives a good estimate. \begin{figure}[htbp] \begin{center} \includegraphics[width=50mm]{appr.eps} \end{center} \caption{Cluster approximation. The probability of finding a pedestrian on the second neighboring sites of the exit is assumed to be $1$.} \label{appr} \end{figure} \subsection{Critical conditions}\label{cc} In the previous subsections, we evaluated the pedestrian flux in the free-flow situation, $q_f$, and in the congested situation, $q_c$. When $q_c$ becomes smaller than $q_f$, the system cannot maintain the free-flow situation; therefore, we can obtain the critical condition on the inflow probability for a given $\zeta$ from the condition $q_c = q_f$: \begin{equation} \alpha_{\rm{cr}} = \frac{q_c}{1-q_c}.\label{a} \end{equation} This critical condition is compared with simulation results in Fig. \ref{acr} and describes the phase transition well. Using this expression, we can control the inflow to prevent the occurrence of congestion.
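The critical condition can be illustrated with a few lines of Python (a sketch; the value $q_c=0.4$ is a made-up stand-in for the cluster-approximation result): $\alpha_{\rm cr}=q_c/(1-q_c)$ is exactly the inflow at which the free-flow flux $q_f(\alpha)=\alpha/(1+\alpha)$ equals the congested flux, and since $q_f$ is increasing in $\alpha$, free flow is sustainable below $\alpha_{\rm cr}$ and breaks down above it.

```python
# Free-flow flux as a function of the inflow probability alpha (Eq. (5)).
def q_free(alpha):
    return alpha / (1.0 + alpha)

# Critical inflow probability for a given congested flux q_c (Eq. (12)/(\ref{a})).
def alpha_critical(q_c):
    return q_c / (1.0 - q_c)

q_c = 0.4                      # made-up congested-flux value for illustration
a_cr = alpha_critical(q_c)     # = 0.4/0.6 = 2/3
```

At $\alpha=\alpha_{\rm cr}$ the two branches intersect; for $\alpha$ below (above) this value, $q_f(\alpha)$ lies below (above) $q_c$.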
For example, by securing additional routes and keeping the total inflow into one bottleneck lower than the outflow in the congested situation, clustering at the exits can be avoided, which may shorten the total evacuation time. \section{Conclusions}\label{con} In this paper, we have taken a first step toward treating the problem of multiple bottlenecks in the evacuation process. To consider this problem, we focused on a part of the bottlenecks by introducing a stochastic entrance. Through simulations, we have demonstrated the metastable state of pedestrian flow arising from the effect of pedestrian conflicts. Supported by approximate analyses, we have derived expressions for the pedestrian flux in the free-flow and congestion phases, and a critical condition on the inflow to prevent congestion. Furthermore, interesting phenomena related to the merging of pedestrian flows have been reported: the local improvement of pedestrian flow sometimes causes more serious congestion. We believe this study offers some hints for designing evacuation routes from a theoretical point of view. To validate these characteristics, experimental studies would also be necessary. In this paper, we have focused only on simple situations to concentrate on the essence of the problem. However, other factors, such as the width of exits, multiple entrances and/or exits, route choice of pedestrians, obstacles, etc., should also be investigated in combination with our study in future works. \begin{widetext} \section*{Appendix: Explicit expression of $q_c$ for a simple case} Although we can obtain the explicit expression of $q_c$ by solving Eqs. (\ref{trM}) and (\ref{nc}), the expression is very complicated in general. If we use the friction parameter $\mu$ \cite{FFc,FF2,FFy}, which is independent of the pedestrian number, namely, $\phi_2=\phi_3=\mu$, the expression becomes relatively simple.
Here we show the result of the second-order approximation for this case, $q_2(\mu)$, and compare it with the previous result derived by the first-order approximation \cite{FF2}. From the simultaneous Eqs. (\ref{trM}) and (\ref{nc}), we can derive \begin{equation} q_2 (\mu) = \frac{48+72\mu-132\mu^2-28\mu^3+140\mu^4-236\mu^5+131\mu^6+49\mu^7-91\mu^8+125\mu^{9}-126\mu^{10}+57\mu^{11}-9\mu^{12}} {96+192\mu-144\mu^2-68\mu^3+240\mu^4-404\mu^5+78\mu^6+129\mu^7-166\mu^8+185\mu^{9}-117\mu^{10}+48\mu^{11}-9\mu^{12}}. \end{equation} On the other hand, Ref. \cite{FF2} presented the expression derived by the first-order approximation, $q_1(\mu)=\frac{1-\mu}{2-\mu}$. The difference between these expressions is \begin{eqnarray} &&q_2(\mu)-q_1(\mu)\nonumber\\ &=& \frac{\mu^5(32+16\mu-84\mu^2+64\mu^3-10\mu^4-75\mu^5+75\mu^6-18\mu^7)} {(2-\mu)(96+192\mu-144\mu^2-68\mu^3+240\mu^4-404\mu^5+78\mu^6+129\mu^7-166\mu^8+185\mu^{9}-117\mu^{10}+48\mu^{11}-9\mu^{12})}\nonumber\\ &=&\frac{1}{6}\mu^5 - \frac{1}{6}\mu^6+\cdots \qquad (\mu \ll 1). \end{eqnarray} Hence $q_2(\mu)=\frac{1-\mu}{2-\mu}+ \frac{1}{6}\mu^5 + O(\mu^6)$ approaches $q_1(\mu)$ in the $\mu\rightarrow 0$ limit, and we can conclude that the second-order approximation presented in this paper improves the prediction, especially when the effect of the conflicts is strong. \end{widetext}
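The small-$\mu$ behavior claimed in the Appendix can be checked numerically (a minimal sketch evaluating the factored difference expression as printed): at $\mu=0.01$ the difference $q_2(\mu)-q_1(\mu)$ should agree with the leading term $\mu^5/6$, with the relative deviation given by the next-order term $-\mu$.

```python
# Evaluate the factored form of q_2(mu) - q_1(mu) from the Appendix and
# compare it with the leading term mu^5/6 of its small-mu expansion.
def q2_minus_q1(mu):
    num = mu**5 * (32 + 16*mu - 84*mu**2 + 64*mu**3 - 10*mu**4
                   - 75*mu**5 + 75*mu**6 - 18*mu**7)
    den = (2 - mu) * (96 + 192*mu - 144*mu**2 - 68*mu**3 + 240*mu**4
                      - 404*mu**5 + 78*mu**6 + 129*mu**7 - 166*mu**8
                      + 185*mu**9 - 117*mu**10 + 48*mu**11 - 9*mu**12)
    return num / den

mu = 0.01
diff = q2_minus_q1(mu)
leading = mu**5 / 6.0          # first term of the expansion
```

The ratio `diff/leading` is close to $1-\mu$, confirming the expansion $\frac{1}{6}\mu^5-\frac{1}{6}\mu^6+\cdots$.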
\section*{References}} \usepackage[paperwidth=7.0in, paperheight=10.0in, margin=.875in]{geometry} \usepackage[utf8]{inputenc} \usepackage{graphicx} \usepackage{float} \usepackage{color} \usepackage{microtype} \usepackage{hyperref} \usepackage{subfigure} \usepackage{subfigmat} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \begin{document} \begin{frontmatter} \title{Use of Jordan forms for convection-pressure split Euler solvers} \author[1]{Naveen Kumar Garg\fnref{a}} \fntext[a]{Current address: Post Doctoral Fellow, TIFR Center for Applicable Mathematics, Bangalore, India} \ead{garg.naveen70@gmail.com, naveen@tifrbng.res.in} \author[2]{S.V. Raghurama Rao} \ead{raghu@aero.iisc.ernet.in} \author[3]{M. Sekhar} \ead{muddu@civil.iisc.ernet.in} \address[1]{Department of Mathematical Sciences, Indian Institute of Science (IISc), Bangalore, India} \address[2]{Department of Aerospace Engineering, IISc, Bangalore, India} \address[3]{Department of Civil Engineering, IISc, Bangalore, India} \begin{abstract} In this study, we analyze convection-pressure split Euler flux functions which contain genuinely weakly hyperbolic convection subsystems. A system is said to be genuinely weakly hyperbolic if all eigenvalues are real but there is no complete set of linearly independent (LI) eigenvectors. To construct an upwind solver in the flux difference splitting (FDS) framework, we need to generate a complete set of LI eigenvectors. This can be done through the addition of generalized eigenvectors, which can be computed from the theory of Jordan canonical forms. Once we have a complete set of LI generalized eigenvectors, we construct upwind solvers in the convection-pressure splitting framework. Since generalized eigenvectors are not unique, we take extra care to ensure that they make no direct contribution to the final formulation of either of the two newly developed numerical schemes.
The first scheme is based on the Zha and Bilgen type splitting approach, while the second is based on the Toro \& V\'azquez splitting. Both schemes are tested on several benchmark test problems in 1-D, and one of them is tested on some typical 2-D test problems which involve shock instabilities. The concept of generalized eigenvectors based on Jordan forms is found to be useful in dealing with the genuinely weakly hyperbolic parts of the considered Euler systems. \end{abstract} \begin{keyword} {Convection-pressure splittings} \sep{Jordan forms} \sep{Upwind schemes} \end{keyword} \end{frontmatter} \section{Introduction} \label{} Numerical algorithms for the compressible Euler equations are of great importance and are frequently used in simulations. These algorithms can be broadly divided into two major categories, namely, central discretization methods and upwind discretization methods. In this study we focus mainly on upwind methods and, in particular, on Flux Difference Splitting (FDS) based upwind schemes. Another class of upwind methods, based on Flux Vector Splitting (FVS), like the Steger \& Warming \cite{Steger_&_Warming} and van Leer \cite{Leer} schemes, is constructed using the eigenvector structure of the Euler system. But these schemes are quite diffusive and cannot capture isolated contact discontinuities crisply. Schemes based on the FDS framework, such as the approximate Riemann solvers of Roe \cite{Roe} and Osher \cite{Osher}, also depend on the eigenstructure (both eigenvalues and eigenvectors) but are quite accurate. The Osher scheme captures expansion waves well but is computationally expensive. Roe's approximate Riemann solver is accurate and can capture steady discontinuities either exactly or with a single interior point, without being as expensive.
Because of its low numerical diffusion, the Roe scheme tends to produce unphysical expansion shocks \cite{wesseling} and post-shock oscillations \cite{Stiriba_Donat_postshock_oscillations}, and it is non-trivial to avoid instability problems \cite{Quirk} in its application. Similarly, the HLL \cite{HLL_SIAM_1983} and HLLC \cite{Toro_Spruce} schemes are special upwind schemes that depend mainly on the structure of the eigenvalues. The HLLC scheme is a modified version of the HLL scheme and can capture an isolated contact discontinuity exactly, but it suffers from the infamous carbuncle phenomenon and some other shock instabilities in 2-D, as shown in \cite{Huang_Wu_Yan}, \cite{Shen_Yan}. There is a different class of upwind schemes based on splitting the Euler flux function, with several possible splittings; we consider here the popular splittings, which are usually done in three distinct ways. In the first category, we have the AUSM family of schemes \cite{Meng_Sing_Liou_Review}, \cite{Liou_&_Steffen_1991} and the CUSP scheme of Jameson \cite{Jameson_1}, in which the flux vector is split into a convection part and a pressure part. Liou and Steffen first introduced the AUSM family of schemes; in their convection-pressure splitting, the pressure term alone constitutes the pressure part of the split flux vector. The second category of convection-pressure split schemes was initially proposed by Steger and Warming in \cite{Steger_&_Warming}, but was thoroughly explored by Jameson \cite{Jameson_2}, Zha \& Bilgen \cite{Zha_Bilgen}, Balakrishnan \& Deshpande \cite{Balakrishnan_and_Deshpande} and by Raghurama Rao \& Deshpande \cite{Rao_PVU}. The structure of this splitting is such that the pressure term of the momentum equation and the term containing the product of pressure and velocity in the energy equation constitute the pressure part of the convection-pressure split Euler flux function.
The main feature of this type of convection-pressure splitting is that the eigenvalues corresponding to the Jacobian of the pressure subsystem become completely free of the fluid velocity $u$, unlike in the splitting utilized by Meng-Sing Liou and others. Another interesting and more recent convection-pressure splitting was proposed by Toro \& V\'azquez \cite{Toro_Vazquez}. In this category, the authors split the convection and pressure parts in such a way that the convection flux becomes completely free of pressure terms. For all three splittings, the convection part corresponds to a genuinely weakly hyperbolic system. Since each convection part is weakly hyperbolic, we can utilize the theory of Jordan canonical forms to recover a complete set of linearly independent (LI) generalized eigenvectors. In contrast, each pressure part corresponds to a strictly or non-strictly hyperbolic system. In the first category of convection-pressure splitting, although the convection part contains the contribution of two different eigenvalues, namely $u$ and $\gamma u$, with $u$ as a repeated eigenvalue, the eigenvalues of the pressure part do not contain any contribution of the acoustic speed, and this may result in an unstable scheme if used in the FDS framework. As the pressure parts of the other two splittings do contain the contribution of the acoustic speed, we propose two numerical schemes, namely, the Zha and Bilgen split - flux difference splitting (ZBS-FDS) scheme and the Toro and V\'azquez split - flux difference splitting (TVS-FDS) scheme. The main idea of each scheme is first to construct a traditional FDS scheme for the pressure subsystem and then to utilize the resulting averaged values of all variables, together with the theory of Jordan forms, for the convection part.
Our motivation is to develop efficient and workable flux difference split schemes based on convection-pressure splitting, together with the use of Jordan forms for the convection subsystems, rather than focusing on reproducing an ideal approximate Riemann solver in this framework. Both schemes are tested on various shock tube problems in 1-D and are found to require no entropy fix at sonic points or in strong expansion regions. Both schemes capture isolated and steady contact discontinuities exactly. Of the two, the ZBS-FDS scheme is extended to two dimensions and is further tested on a variety of shock instability problems, including shock diffraction around a corner, flow over a half cylinder and reflection of a plane shock from a wedge. \section{Convection-Pressure splittings for Euler flux function} Consider the one-dimensional inviscid Euler system \begin{equation} \frac{\partial\boldsymbol{U}}{\partial t} \ + \ \frac{\partial \boldsymbol{F} \left( \boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0}, \ \ \ \ (x, t) \in {\rm I\!R}\times [0,\infty), \end{equation} where $\boldsymbol{U}:({\rm I\!R}\times{\rm I\!R^{+}})\longmapsto \Omega\subseteq{\rm I\!R^{3}}$, with $\Omega$ an open subset, is the conserved variable vector and $\boldsymbol{F}:\Omega\longmapsto {\rm I\!R^{3}}$ is the flux vector, defined by \begin{equation} \boldsymbol{U} = \begin{bmatrix} \rho \\[0.3em] \rho u \\[0.5em] \rho E \end{bmatrix} \ \mbox{and} \ \boldsymbol{F} \left( \boldsymbol{U} \right) = \begin{bmatrix} \rho u \\[0.3em] p + \rho u^{2} \\[0.3em] p u + \rho u E \end{bmatrix} \end{equation} Here the total energy $E$ is defined as the sum of the internal energy ($e$) and the kinetic energy ($\frac{1}{2}u^{2}$): $E = e + \frac{1}{2} u^{2} = \frac{p}{\rho \left(\gamma - 1\right)} + \frac{1}{2} u^{2}$. So far, three distinct convection-pressure splittings have been proposed; they are described in the following.
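Before examining the splittings, it is worth recalling numerically what the splittings will redistribute: the Jacobian $\partial\boldsymbol{F}/\partial\boldsymbol{U}$ of the full flux $(2)$ has the familiar eigenvalues $u-a$, $u$, $u+a$, with $a=\sqrt{\gamma p/\rho}$. The following sketch (made-up state values, Jacobian by central finite differences with NumPy) verifies this.

```python
import numpy as np

gamma = 1.4

# Full 1-D Euler flux F(U) in conserved variables U = (rho, rho*u, rho*E).
def euler_flux(U):
    rho, m, eps = U
    u = m / rho
    p = (gamma - 1.0) * (eps - 0.5 * m * u)   # p from E = p/((gamma-1)rho) + u^2/2
    return np.array([m, p + m * u, (eps + p) * u])

# Central finite-difference approximation of the flux Jacobian dF/dU.
def jacobian_fd(U, h=1e-7):
    n = len(U)
    J = np.zeros((n, n))
    for j in range(n):
        dU = np.zeros(n)
        dU[j] = h
        J[:, j] = (euler_flux(U + dU) - euler_flux(U - dU)) / (2.0 * h)
    return J

# Made-up state: rho = 1, u = 0.5, p = 1.
rho, u, p = 1.0, 0.5, 1.0
U = np.array([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u**2])
a = np.sqrt(gamma * p / rho)
eigvals = np.sort(np.linalg.eigvals(jacobian_fd(U)).real)
expected = np.sort(np.array([u - a, u, u + a]))
```

The numerically computed spectrum matches $\{u-a,\,u,\,u+a\}$ to finite-difference accuracy; the splittings below divide this information between a convection part and a pressure part.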
\subsection{Liou and Steffen splitting procedure} Liou and Steffen, in formulating their upwind scheme \cite{Liou_&_Steffen_1991}, introduced a unique convection-pressure splitting by taking the pressure term out of the momentum equation of the full Euler system: \begin{equation} \boldsymbol{F} \ =\ \boldsymbol{F}_{c}^{\boldsymbol{LS}} + \boldsymbol{F}_{p}^{\boldsymbol{LS}} \end{equation} where \begin{equation} \boldsymbol{F}_{c}^{\boldsymbol{LS}} = \begin{bmatrix} \rho u \\[0.3em] \rho u^{2} \\[0.3em] \rho u E + p u \end{bmatrix} \ \mbox{and} \ \boldsymbol{F}_{p}^{\boldsymbol{LS}} = \begin{bmatrix} 0 \\[0.3em] p \\[0.3em] 0 \end{bmatrix} \end{equation} Let us split the system $(1)$ into Liou and Steffen type convection and pressure subsystems, to gain better insight by analyzing each part separately: \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} \ + \ \frac{\partial \boldsymbol{F}_{c}^{\boldsymbol{LS}} \left(\boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} \mbox{and} \ \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} \ + \ \frac{\partial \boldsymbol{F}_{p}^{\boldsymbol{LS}} \left( \boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} Both subsystems can also be written in quasilinear form as follows.
\begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} \ + \ \boldsymbol{A}_{c}^{\boldsymbol{LS}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} \begin{equation} \frac{\partial\boldsymbol{U}}{\partial t} \ + \ \boldsymbol{A}_{p}^{\boldsymbol{LS}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} Here $\boldsymbol{A}_{c}^{\boldsymbol{LS}}$ and $\boldsymbol{A}_{p}^{\boldsymbol{LS}}$ are the Jacobian matrices for the convection and pressure parts, respectively, and are given by \begin{equation*} \boldsymbol{A}_{c}^{\boldsymbol{LS}} = \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2 u && 0 \\[0.3em] \ -\gamma u E + (\gamma - 1) u^{3} && \gamma E -\frac{3}{2}(\gamma - 1) u^{2} && \gamma u \end{bmatrix} \ \end{equation*} \mbox{and} \ \begin{equation*} \boldsymbol{A}_{p}^{\boldsymbol{LS}} = \begin{bmatrix} \ 0 && 0 && 0 \\[0.3em] \frac{1}{2} (\gamma -1) {u^2} && -(\gamma -1) u && (\gamma -1) \\[0.3em] \ 0 && 0 && 0 \end{bmatrix} \end{equation*} The eigenvalues corresponding to the convective Jacobian matrix $\boldsymbol{A}_{c}^{\boldsymbol{LS}}$ are $ \lambda_{c,1}^{\boldsymbol{LS}} = \gamma u, \ \ \lambda_{c,2}^{\boldsymbol{LS}} = \lambda_{c,3}^{\boldsymbol{LS}} = u$, so the algebraic multiplicity (AM) of the eigenvalue $u$ is 2. Similarly, the eigenvalues corresponding to the pressure Jacobian matrix $\boldsymbol{A}_{p}^{\boldsymbol{LS}}$ are $ \lambda_{p,1}^{\boldsymbol{LS}} = -(\gamma - 1) u, \ \ \lambda_{p,2}^{\boldsymbol{LS}} = \lambda_{p,3}^{\boldsymbol{LS}} = 0$. Since the AM of $u$ is 2, we have to examine its eigenvector space to see whether $\boldsymbol{A}_{c}^{\boldsymbol{LS}}$ has a complete set of linearly independent eigenvectors or not. The analysis of the matrix $\boldsymbol{A}_{c}^{\boldsymbol{LS}}$ shows that the convection subsystem is weakly hyperbolic, as there is no complete set of linearly independent eigenvectors.
Indeed, its eigenvectors are \begin{equation} \boldsymbol{R}_{c,1}^{\boldsymbol{LS}} = \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} ~~\mbox{and}~~ \ \boldsymbol{R}_{c,2}^{\boldsymbol{LS}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ \frac{1}{2}u^2 \end{bmatrix} \ \end{equation} Similarly, the eigenvectors corresponding to $ \lambda_{p,1}^{\boldsymbol{LS}} = -(\gamma - 1) u, \ \ \lambda_{p,2}^{\boldsymbol{LS}} = \lambda_{p,3}^{\boldsymbol{LS}} = 0$ of the pressure subsystem are: \begin{equation} \boldsymbol{R}_{p,1}^{\boldsymbol{LS}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ 0 \end{bmatrix} \ , \ \boldsymbol{R}_{p,2}^{\boldsymbol{LS}} = \begin{bmatrix} \ 1 \\[0.3em] \ 0 \\[0.3em] \ -\frac{1}{2}u^2 \end{bmatrix} \ , \ \boldsymbol{R}_{p,3}^{\boldsymbol{LS}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ u \end{bmatrix} \end{equation} The convection subsystem thus turns out to be weakly hyperbolic, and Jordan theory can be applied to explore it further, whereas the pressure subsystem is non-strictly hyperbolic. Apart from the eigenvectors, traditional FDS solvers also depend heavily on the eigenvalues, but in the present case all eigenvalues are either $u$ or a constant times $u$. In other words, there is no direct or indirect contribution of the acoustic speed $a$ as an eigenvalue in either subsystem. This is a serious issue, as $u$ frequently becomes zero or nearly zero somewhere in the flow field, which results in zero or near-zero numerical diffusion in those parts of the flow. The scheme is then effectively reduced to the forward in time and central in space (FTCS) framework there, and since FTCS is unstable for hyperbolic systems, the solution \textit{blows up}. In fact, we constructed an FDS scheme for the present splitting but, unfortunately, it led to a blow-up of the solution for almost all problems. Note that we are only considering the application of flux difference splitting to the Liou and Steffen splitting here, and not their alternative upwinding procedure.
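The defect of $\boldsymbol{A}_{c}^{\boldsymbol{LS}}$ and its repair via a Jordan chain can be illustrated numerically (a sketch with made-up state values $u=1$, $\gamma=1.4$, $E=2$, using NumPy): the repeated eigenvalue $u$ has algebraic multiplicity 2 but a one-dimensional eigenspace, and a generalized eigenvector $\boldsymbol{w}$ solving $(\boldsymbol{A}-u\boldsymbol{I})\boldsymbol{w}=\boldsymbol{R}_{c,2}^{\boldsymbol{LS}}$ completes the basis.

```python
import numpy as np

# Liou-Steffen convection Jacobian at a made-up state u = 1, gamma = 1.4, E = 2.
u, gamma, E = 1.0, 1.4, 2.0
A = np.array([
    [0.0, 1.0, 0.0],
    [-u**2, 2*u, 0.0],
    [-gamma*u*E + (gamma - 1)*u**3, gamma*E - 1.5*(gamma - 1)*u**2, gamma*u],
])

eigvals = np.sort(np.linalg.eigvals(A).real)       # expect u, u, gamma*u

# Geometric multiplicity of u: 3 - rank(A - u I) -> only one eigenvector.
gm_u = 3 - np.linalg.matrix_rank(A - u*np.eye(3))

# Jordan chain: r is the lone eigenvector for u; w is a generalized
# eigenvector obtained by solving the singular system (A - u I) w = r.
r = np.array([1.0, u, 0.5*u**2])
w, *_ = np.linalg.lstsq(A - u*np.eye(3), r, rcond=None)
residual = np.linalg.norm((A - u*np.eye(3)) @ w - r)

# Together with the eigenvector of gamma*u, {r, w, r3} span R^3.
r3 = np.array([0.0, 0.0, 1.0])
full_rank = np.linalg.matrix_rank(np.column_stack([r, w, r3]))
```

The least-squares solve returns one particular generalized eigenvector (they are not unique, as noted in the abstract), and the stacked basis has full rank, i.e. the Jordan chain restores a complete set of LI (generalized) eigenvectors.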
\subsection{Zha and Bilgen splitting procedure} Another type of flux splitting is given by Zha and Bilgen \cite{Zha_Bilgen}, in which the full Euler flux function is split into convection and pressure fluxes in such a way that the eigenvalues corresponding to the Jacobian of the pressure flux $\boldsymbol{A_{p}^{ZB}}$ contain no contribution of the fluid velocity $u$, unlike in the Liou and Steffen splitting. Their convection-pressure splitting is as follows. \begin{equation} \boldsymbol{F} \ =\ \boldsymbol{F}_{c}^{\boldsymbol{ZB}} + \boldsymbol{F}_{p}^{\boldsymbol{ZB}} \end{equation} where \begin{equation} \boldsymbol{F}_{c}^{\boldsymbol{ZB}} = \begin{bmatrix} \rho u \\[0.3em] \rho u^{2} \\[0.3em] \rho u E \end{bmatrix} \ \mbox{and} \ \boldsymbol{F}_{p}^{\boldsymbol{ZB}} = \begin{bmatrix} 0 \\[0.3em] p \\[0.3em] pu \end{bmatrix} \end{equation} As done earlier, we split the system $(1)$ into convection and pressure subsystems, using the Zha and Bilgen type flux splitting, as \begin{equation}\label{conservation_ZB_c_part} \frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F}_{c}^{\boldsymbol{ZB}} \left(\boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} \mbox{and} \ \begin{equation}\label{conservation_ZB_p_part} \frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F}_{p}^{\boldsymbol{ZB}} \left( \boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} Again, both subsystems can also be written in quasilinear form as follows.
\begin{equation}\label{quasi_form_ZB_c_part} \frac{\partial \boldsymbol{U}}{\partial t} + \boldsymbol{A}_{c}^{\boldsymbol{ZB}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} \begin{equation}\label{quasi_form_ZB_p_part} \frac{\partial\boldsymbol{U}}{\partial t} + \boldsymbol{A}_{p}^{\boldsymbol{ZB}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} where $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ and $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ are the Jacobian matrices for the convection and pressure parts, respectively, and are given by \begin{equation*} \boldsymbol{A}_{c}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2 u && 0 \\[0.3em] \ -u E && E && u \end{bmatrix} \ \end{equation*} \mbox{and} \ \begin{equation*} \boldsymbol{A}_{p}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 && 0 && 0 \\[0.3em] \frac{1}{2} (\gamma -1) {u^2} && -(\gamma -1) u && (\gamma -1) \\[0.3em] \ - \frac{a^{2}u}{\gamma} + \frac{(\gamma -1)}{2} u^{3} && \frac{a^{2}}{\gamma} - (\gamma - 1) u^{2} && (\gamma - 1) u \end{bmatrix} \end{equation*} Now, the eigenvalues corresponding to the convective Jacobian matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ are $ \lambda_{c,1}^{\boldsymbol{ZB}} = \lambda_{c,2}^{\boldsymbol{ZB}} = \lambda_{c,3}^{\boldsymbol{ZB}} = u$; thus, the algebraic multiplicity (AM) of the eigenvalue $u$ is 3. Similarly, the eigenvalues corresponding to the pressure Jacobian matrix $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ are $ \lambda_{p,1}^{\boldsymbol{ZB}} = -\sqrt{\frac{(\gamma - 1)}{\gamma}} a$, $ \lambda_{p,2}^{\boldsymbol{ZB}} = 0$, and $ \lambda_{p,3}^{\boldsymbol{ZB}} = \sqrt{\frac{(\gamma - 1)}{\gamma}} a$. Since the AM of $u$ is 3, we have to examine its eigenvector space to see whether $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ has a complete set of linearly independent eigenvectors or not.
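This question can be answered numerically before the analytical treatment (a sketch with made-up state values $u=1$, $E=2$, $a=1.2$, $\gamma=1.4$, using NumPy): the rank of $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}-u\boldsymbol{I}$ gives the geometric multiplicity of $u$, and the spectrum of $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ confirms the three distinct, $u$-free eigenvalues quoted above.

```python
import numpy as np

# Made-up state values for illustration.
u, E, a, gamma = 1.0, 2.0, 1.2, 1.4

# Zha-Bilgen convection Jacobian: triple eigenvalue u.
Ac = np.array([
    [0.0, 1.0, 0.0],
    [-u**2, 2*u, 0.0],
    [-u*E, E, u],
])
# Geometric multiplicity of u = 3 - rank(Ac - u I); expect 2 (< AM = 3),
# i.e. the convection subsystem is weakly hyperbolic.
gm_u = 3 - np.linalg.matrix_rank(Ac - u*np.eye(3))

# Zha-Bilgen pressure Jacobian: eigenvalues -lam, 0, +lam with
# lam = a*sqrt((gamma-1)/gamma), independent of u.
Ap = np.array([
    [0.0, 0.0, 0.0],
    [0.5*(gamma - 1)*u**2, -(gamma - 1)*u, gamma - 1],
    [-a**2*u/gamma + 0.5*(gamma - 1)*u**3,
     a**2/gamma - (gamma - 1)*u**2, (gamma - 1)*u],
])
lam = a*np.sqrt((gamma - 1)/gamma)
p_eigs = np.sort(np.linalg.eigvals(Ap).real)
```

The check shows a two-dimensional eigenspace for the triple eigenvalue $u$ (one generalized eigenvector is needed), while the pressure spectrum $\{-\lambda,0,\lambda\}$ is strictly hyperbolic and carries the acoustic-speed information.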
The analysis of matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ shows that the convective subsystem is only weakly hyperbolic, as it does not possess a complete set of linearly independent eigenvectors. Indeed, its eigenvectors are \begin{equation} \boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ 0 \end{bmatrix} ~~\mbox{and}~~ \ \boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \ \end{equation} Since all eigenvalues of the pressure part are real and distinct, the pressure subsystem is strictly hyperbolic. Analysis of the flux Jacobian matrix for the pressure part yields a complete set of eigenvectors, as given below. \begin{equation} \boldsymbol{R}_{p,1}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ u - \frac{a}{\sqrt{\gamma (\gamma - 1)}} \end{bmatrix} \ , \ \boldsymbol{R}_{p,2}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ \frac{1}{2}u^2 \end{bmatrix} \ , \ \boldsymbol{R}_{p,3}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ u + \frac{a}{\sqrt{\gamma (\gamma - 1)}} \end{bmatrix} \end{equation} \subsection{Toro and V\'azquez splitting procedure} More recently, Toro \& V\'azquez-Cend\'on \cite{Toro_Vazquez} presented a flux splitting in which the convection part contains no pressure term at all, leading to the following splitting. \begin{equation} \boldsymbol{F} \ =\ \boldsymbol{F}_{c}^{\boldsymbol{TV}} + \boldsymbol{F}_{p}^{\boldsymbol{TV}} \end{equation} where \begin{equation} \boldsymbol{F}_{c}^{\boldsymbol{TV}} = \begin{bmatrix} \rho u \\[0.3em] \rho u^{2} \\[0.3em] \frac{1}{2} \rho u^{3} \end{bmatrix} \ \mbox{and} \ \boldsymbol{F}_{p}^{\boldsymbol{TV}} = \begin{bmatrix} 0 \\[0.3em] p \\[0.3em] \frac{\gamma}{\gamma - 1} p u \end{bmatrix} \end{equation} Let us split the system $(1)$ into convection and pressure subsystems, to gain better insight by analysing each part separately.
\begin{equation}\label{conservation_TV_c_part} \frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F}_{c}^{\boldsymbol{TV}} \left(\boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} \mbox{and} \ \begin{equation}\label{conservation_TV_p_part} \frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F}_{p}^{\boldsymbol{TV}} \left( \boldsymbol{U} \right)} {\partial x} \ = \ \boldsymbol{0} \end{equation} Again, both subsystems can be written in quasilinear form as follows. \begin{equation}\label{quasi_form_TV_c_part} \frac{\partial \boldsymbol{U}}{\partial t} + \boldsymbol{A}_{c}^{\boldsymbol{TV}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} \begin{equation}\label{quasi_form_TV_p_part} \frac{\partial\boldsymbol{U}}{\partial t} + \boldsymbol{A}_{p}^{\boldsymbol{TV}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ \boldsymbol{0} \end{equation} Here $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ and $\boldsymbol{A}_{p}^{\boldsymbol{TV}}$ are the Jacobian matrices for the convection and pressure parts respectively, given by \begin{equation*} \boldsymbol{A}_{c}^{\boldsymbol{TV}} = \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2 u && 0 \\[0.3em] \ -u^{3} && \frac{3}{2}u^{2} && 0 \end{bmatrix} \ \end{equation*} \mbox{and} \ \begin{equation*} \boldsymbol{A}_{p}^{\boldsymbol{TV}} = \begin{bmatrix} \ 0 && 0 && 0 \\[0.3em] \frac{1}{2} (\gamma -1) {u^2} && -(\gamma -1) u && (\gamma -1) \\[0.3em] \ -\frac{u a^{2}}{(\gamma - 1)} + \frac{1}{2}\gamma u^{3} && \frac{a^{2}}{(\gamma - 1)} - \gamma u^{2} && \gamma u \end{bmatrix} \end{equation*} The eigenvalues of the convective Jacobian matrix $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ are $ \lambda_{c,1}^{\boldsymbol{TV}} = 0, \ \ \lambda_{c,2}^{\boldsymbol{TV}} = \lambda_{c,3}^{\boldsymbol{TV}} = u$, and the algebraic multiplicity (AM) of the eigenvalue $u$ is 2, so we have to find its eigenvector space to see whether
$\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ has a complete set of linearly independent eigenvectors or not. The analysis of matrix $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ shows that the convective subsystem is only weakly hyperbolic, as it does not possess a complete set of linearly independent eigenvectors. Indeed, its eigenvectors are \begin{equation} \boldsymbol{R}_{c,1}^{\boldsymbol{TV}} = \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} ~~\mbox{and}~~ \ \boldsymbol{R}_{c,2}^{\boldsymbol{TV}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ \frac{1}{2}u^2 \end{bmatrix} \ \end{equation} Similarly, the eigenvalues of the pressure Jacobian matrix $\boldsymbol{A}_{p}^{\boldsymbol{TV}}$, when evaluated, are found to be $\lambda_{p,1}^{\boldsymbol{TV}} = \frac{1}{2}(u - \beta), \ \ \lambda_{p,2}^{\boldsymbol{TV}} = 0, \ \ \lambda_{p,3}^{\boldsymbol{TV}} = \frac{1}{2}(u + \beta) $, where $\beta$ \ = \ $\sqrt{u^2 + 4a^2}$. All eigenvalues of the pressure part are real and distinct, which makes the pressure subsystem strictly hyperbolic. Analysis of the flux Jacobian matrix for the pressure part yields a complete set of eigenvectors, as given below. \begin{equation} \boldsymbol{R}_{p,1}^{\boldsymbol{TV}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ u + \frac{1}{2}(\frac{u - \beta}{\gamma - 1}) \end{bmatrix} \ , \ \boldsymbol{R}_{p,2}^{\boldsymbol{TV}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ \frac{1}{2}u^2 \end{bmatrix} \ , \ \boldsymbol{R}_{p,3}^{\boldsymbol{TV}} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ u + \frac{1}{2}(\frac{u + \beta}{\gamma - 1}) \end{bmatrix} \end{equation} Since the convective subsystems for both Zha-Bilgen splitting and Toro-V\'azquez splitting have an incomplete set of linearly independent (LI) eigenvectors, it is nontrivial to construct an upwind scheme based on the eigenvector structure.
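The weak hyperbolicity claim can be cross-checked by computing the geometric multiplicity of $u$ for $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ and the characteristic polynomial of $\boldsymbol{A}_{p}^{\boldsymbol{TV}}$; a SymPy sketch (ours, illustrative only):

```python
# Symbolic check: A_c^TV is defective (GM of u < AM of u), A_p^TV is strictly hyperbolic.
import sympy as sp

u, a, g = sp.symbols('u a gamma', positive=True)

Ac = sp.Matrix([[0, 1, 0],
                [-u**2, 2*u, 0],
                [-u**3, sp.Rational(3, 2)*u**2, 0]])
Ap = sp.Matrix([[0, 0, 0],
                [(g - 1)*u**2/2, -(g - 1)*u, g - 1],
                [-u*a**2/(g - 1) + g*u**3/2, a**2/(g - 1) - g*u**2, g*u]])

eigs_c = Ac.eigenvals()                     # expect {0: 1, u: 2}
gm_u = len((Ac - u*sp.eye(3)).nullspace())  # geometric multiplicity of u: only one eigenvector

lam = sp.symbols('lam')
char_p = sp.expand(Ap.charpoly(lam).as_expr())
# expect lam*(lam**2 - u*lam - a**2): roots 0 and (u -/+ sqrt(u**2 + 4*a**2))/2
```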
\section{Addition of generalized eigenvectors} First we consider the Jacobian matrix $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$ corresponding to the Toro-V\'azquez convective subsystem, for which our aim is to obtain a complete set of linearly independent generalized eigenvectors. For this system, we have two different sets of eigenvalues. Here, we briefly discuss a procedure to find generalized eigenvectors for cases where the resultant Jordan matrix possesses exactly one Jordan block for each distinct eigenvalue. Let \begin{equation*} {\boldsymbol J} = \begin{bmatrix} \ {\boldsymbol J}(\lambda_{1}) & \boldsymbol{0} & \cdots & \boldsymbol{0} \\[0.3em] \ \boldsymbol{0} & {\boldsymbol J}(\lambda_{2}) & \cdots & \boldsymbol{0} \\[0.3em] \ \vdots & \vdots & \ddots & \vdots \\[0.3em] \boldsymbol{0} & \boldsymbol{0} & \cdots & {\boldsymbol J}(\lambda_{p}) \end{bmatrix}, \ \ \textrm{where} \ {\lambda_{1}, \lambda_{2}, \cdots , \lambda_{p}} \in \sigma(\boldsymbol{A}) \end{equation*} is the set of distinct eigenvalues, some or all of them with algebraic multiplicity greater than one. Moreover, assume there exists a single Jordan block for each $\lambda_{i}$. Let us focus on one such $ \lambda_{i}$, with AM equal to $m>1$. Then \begin{equation*} {\boldsymbol J(\lambda_{i})} = \begin{bmatrix} \ \lambda_{i} & 1 & \\[0.3em] \ & \ddots & \ddots & \\[0.3em] \ & & \ddots & 1 \\[0.3em] \ & & & \lambda_{i} \end{bmatrix}_{m\times m} \ \end{equation*} \\ In order to find the set of generalized eigenvectors corresponding to $\lambda_{i}$, we need to focus on the portion $\boldsymbol{P}^{*} = \big[\boldsymbol{X}_1, \boldsymbol{X}_2, \boldsymbol{X}_3,........., \boldsymbol{X}_m\big]$ of $\boldsymbol{P} \ = \ [... \boldsymbol{P}^{*}...]$ that corresponds to the position of $\boldsymbol J(\lambda_{i})$ in $\boldsymbol J$.
Now $\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}\boldsymbol{J}$ implies $\boldsymbol{A}\boldsymbol{P}^{*} = \boldsymbol{P}^{*}\boldsymbol J(\lambda_{i})$, {\em i.e.}, \begin{equation*} \boldsymbol{A}\big[\boldsymbol{X}_1, \boldsymbol{X}_2, \boldsymbol{X}_3,........., \boldsymbol{X}_{m}\big] \ = \ \big[\boldsymbol{X}_1, \boldsymbol{X}_2, \boldsymbol{X}_3,........., \boldsymbol{X}_{m}\big] \begin{bmatrix} \ \lambda_{i} & 1 & \\[0.3em] \ & \ddots & \ddots & \\[0.3em] \ & & \ddots & 1 \\[0.3em] \ & & & \lambda_{i} \end{bmatrix}_{m\times m} \ \end{equation*} On equating columns on both sides, we get \begin{align} \begin{split} \boldsymbol{A}\boldsymbol{X}_{1} \ &= \ \lambda_{i} \boldsymbol{X}_{1} \\ \boldsymbol{A}\boldsymbol{X}_{2} \ &= \ \lambda_{i} \boldsymbol{X}_{2} \ + \ \boldsymbol{X}_{1} \\ \boldsymbol{A}\boldsymbol{X}_{3} \ &= \ \lambda_{i} \boldsymbol{X}_{3} \ + \ \boldsymbol{X}_{2} \\ \vdots \\ \boldsymbol{A}\boldsymbol{X}_{m} \ &= \ \lambda_{i} \boldsymbol{X}_{m} \ + \ \boldsymbol{X}_{m-1} \end{split} \end{align} \\ Now $u$ is a repeated eigenvalue of matrix $\boldsymbol{A}_c^{\boldsymbol{TV}}$ with AM equal to two, and the other eigenvalue is zero with multiplicity one. First we have to compute the ranks of the matrices $(\boldsymbol{A}_{c}^{\boldsymbol{TV}}- u\boldsymbol{I})$, \ \ $(\boldsymbol{A}_{c}^{\boldsymbol{TV}}- u\boldsymbol{I})^2$, $\cdots$. It turns out that $rank(\boldsymbol{A}_{c}^{\boldsymbol{TV}}- u\boldsymbol{I})^2 \ = \ 1 \ = \ rank(\boldsymbol{A}_{c}^{\boldsymbol{TV}}- u\boldsymbol{I})^{3}$, which means there should be a Jordan block of order $2$. Therefore, there is a single Jordan block of order two corresponding to the eigenvalue $u$.
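The rank computation quoted above can be reproduced directly; a SymPy sketch of ours (illustrative only):

```python
# Ranks of (A_c^TV - u*I)^k, confirming a single Jordan block of order 2 for u.
import sympy as sp

u = sp.symbols('u', positive=True)
Ac = sp.Matrix([[0, 1, 0],
                [-u**2, 2*u, 0],
                [-u**3, sp.Rational(3, 2)*u**2, 0]])

N = Ac - u*sp.eye(3)
ranks = (N.rank(), (N**2).rank(), (N**3).rank())
# expect (2, 1, 1): the rank stabilises at 1, so the largest Jordan block for u has order 2
```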
Thus, a Jordan chain of order two will be formed by matrix $\boldsymbol{A}_{c}^{\boldsymbol{TV}}$, {\em i.e.}, \begin{align}\label{gen_for_TV} \begin{split} \boldsymbol{A}_{c}^{\boldsymbol{TV}}\boldsymbol{X}_{1} \ &= \ u\boldsymbol{X}_{1} \ \textrm{and} \ \\ \boldsymbol{A}_{c}^{\boldsymbol{TV}}\boldsymbol{X}_{2} \ &= \ u\boldsymbol{X}_{2} \ + \ \boldsymbol{X}_{1} \end{split} \end{align} should hold. From the first relation we get \begin{equation} \boldsymbol{X}_{1} \ = \ \boldsymbol{R}_{c,2}^{\boldsymbol{TV}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ \frac{1}{2} u^2 \end{bmatrix} \end{equation} and on using $\boldsymbol{X}_{1}$ in the second relation of (\ref{gen_for_TV}), we can find the required generalized eigenvector $\boldsymbol{X}_{2}$, which is given below. \begin{equation} \boldsymbol{X}_{2} \ = \ \boldsymbol{R}_{c,3}^{\boldsymbol{TV}} = \begin{bmatrix} \ x_{1} \\[0.3em] \ 1 + ux_{1} \\[0.3em] \ u + \frac{1}{2}u^{2}x_{1} \end{bmatrix} \end{equation} Here, $x_{1} \in {\rm I\!R}$ is an arbitrary real constant, and the modal matrix $\boldsymbol{P}$ formed below has $\det(\boldsymbol{P})$ equal to one.
If we take $\boldsymbol{P}$ equal to \begin{equation} \begin{bmatrix} \ 0 && 1 && x_{1} \\[0.3em] \ 0 && u && 1 + ux_{1} \\[0.3em] \ 1 && \frac{1}{2}u^{2} && u + \frac{1}{2}u^{2}x_{1} \end{bmatrix} \ \textrm{then} \ \boldsymbol{P}^{-1}\boldsymbol{A}_{c}^{\boldsymbol{TV}}\boldsymbol{P} \ = \ \begin{bmatrix} \ 0 && 0 && 0 \\[0.3em] \ 0 && u && 1 \\[0.3em] \ 0 && 0 && u \end{bmatrix} \ \ = \ \boldsymbol{J}_1 \end{equation} and if we take $\boldsymbol{P}$ equal to \begin{equation} \begin{bmatrix} \ 1 && x_{1} && 0 \\[0.3em] \ u && 1 + ux_{1} && 0 \\[0.3em] \ \frac{1}{2}u^{2} && u + \frac{1}{2}u^{2}x_{1} && 1 \end{bmatrix} \ \textrm{then} \ \boldsymbol{P}^{-1}\boldsymbol{A}_{c}^{\boldsymbol{TV}}\boldsymbol{P} \ = \ \begin{bmatrix} \ u && 1 && 0 \\[0.3em] \ 0 && u && 0 \\[0.3em] \ 0 && 0 && 0 \end{bmatrix} \ \ = \ \boldsymbol{J}_2 \end{equation} \\ Next, we have to find the generalized eigenvectors corresponding to the Jacobian matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ for the convective subsystem of the Zha and Bilgen type splitting. As explained earlier, the eigenvalues of $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ are $u, u, u$ and the set of LI eigenvectors is \begin{equation} \boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ 0 \end{bmatrix} ~~\mbox{and}~~ \ \boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} = \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \ \end{equation} On computing the ranks of the matrices $(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u\boldsymbol{I})$, \ \ $(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u\boldsymbol{I})^2$, $\cdots$, we find $rank(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u\boldsymbol{I})^2 \ = \ 0 \ = \ rank(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u\boldsymbol{I})^{3}$. Thus there will be one Jordan block of order $2$ and, since all eigenvalues are equal, there must be another Jordan block of order one.
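Both computations above, the Toro-V\'azquez similarity transform and the Zha-Bilgen rank calculation, can be confirmed symbolically; an illustrative sketch of ours:

```python
# Check P^{-1} A_c^TV P = J_1 and the ranks of (A_c^ZB - u*I)^k.
import sympy as sp

u, E, x1 = sp.symbols('u E x1')

Ac_tv = sp.Matrix([[0, 1, 0],
                   [-u**2, 2*u, 0],
                   [-u**3, sp.Rational(3, 2)*u**2, 0]])
P = sp.Matrix([[0, 1, x1],
               [0, u, 1 + u*x1],
               [1, u**2/2, u + u**2*x1/2]])
J1 = sp.simplify(P.inv()*Ac_tv*P)     # expect [[0,0,0],[0,u,1],[0,0,u]] for any x1

Ac_zb = sp.Matrix([[0, 1, 0],
                   [-u**2, 2*u, 0],
                   [-u*E, E, u]])
N = Ac_zb - u*sp.eye(3)
ranks_zb = (N.rank(), (N**2).rank())  # expect (1, 0): one order-2 block, one order-1 block
```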
In short, the Jordan matrix $\boldsymbol{J}$ corresponding to matrix $\boldsymbol{A}_c^{\boldsymbol{ZB}}$ is made up of two Jordan blocks, which clearly shows that for the present case there is no single Jordan block for the given set of eigenvalues. Thus, the earlier theory may not be directly applicable to this case. But a Jordan chain of order two should still be formed by matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$. If possible, without loss of generality, let us first assume that \begin{align} \begin{split} \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} \ &= \ u\boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} \\ \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X} \ &= \ u\boldsymbol{X} + \boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} \end{split} \end{align} holds. On expanding the second relation \begin{equation*} \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2 u && 0 \\[0.3em] \ -u E && E && u \end{bmatrix} \ \begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ \textrm{=} \ u \begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ + \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ 0 \end{bmatrix} \end{equation*} and from the first two equations, we get \begin{align} \begin{split} x_{2} \ &= \ ux_{1} + 1 \end{split} \end{align} Similarly, from the third equation \begin{align} \begin{split} -u Ex_{1} \ + \ Ex_{2} \ + \ ux_{3} \ &= \ ux_{3} \\ \Rightarrow x_{2} \ &= \ ux_{1}. \end{split} \end{align} We get two different expressions for the real constant $x_{2}$, which is a contradiction. Thus eigenvector $\boldsymbol{R_{c,1}^{ZB}}$ cannot form a Jordan chain of order two corresponding to matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$.
If possible, let us now assume that $\boldsymbol{R}_{c,2}^{\boldsymbol{ZB}}$ forms a Jordan chain of order two, {\em i.e.}, \begin{align} \begin{split} \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} \ &= \ u\boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} \\ \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X} \ &= \ u\boldsymbol{X} + \boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} \end{split} \end{align} holds. Again, after expanding the second relation, we have \begin{equation*} \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2 u && 0 \\[0.3em] \ -u E && E && u \end{bmatrix} \ \begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ \textrm{=} \ u \begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ + \ \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \end{equation*} and on solving the first two equations of the expanded system we get \begin{align} \begin{split} x_{2} \ &= \ ux_{1}. \end{split} \end{align} From the third equation we have \begin{align} \begin{split} -u Ex_{1} \ + \ Ex_{2} \ + \ ux_{3} \ &= \ ux_{3} + 1 \\ \Rightarrow x_{2} \ &= \ \frac{1 + u Ex_{1}}{E} \end{split} \end{align} If we compare both values of $x_{2}$, we get $\frac{1}{E} \ = \ 0$, which is impossible since $E \neq 0$, hence a contradiction. Therefore, neither $\boldsymbol{R}_{c,1}^{\boldsymbol{ZB}}$ nor $\boldsymbol{R}_{c,2}^{\boldsymbol{ZB}}$ helps in forming a Jordan chain of order two corresponding to matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$. Thus, we need to go deeper into the theory of Jordan canonical forms to obtain proper generalized eigenvectors. Since there will be a Jordan block of order 2, we need to construct a generalized eigenvector which generates a {\em Jordan chain}. Let $R(\boldsymbol{A})$ denote the space spanned by the columns of matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}} -u\boldsymbol{I}$.
Then a general element of $R(\boldsymbol{A})$ is \begin{equation} R(\boldsymbol{A}) \ = \ x_{1}\boldsymbol{A}_1 \ + \ x_{2}\boldsymbol{A}_2 \ + \ x_{3}\boldsymbol{A}_3 \end{equation} where $\boldsymbol{A}_1,\boldsymbol{A}_2,\boldsymbol{A}_3$ are the column vectors of $\boldsymbol{A_{c}^{ZB}} -u\boldsymbol{I}$. Now \begin{equation} R(\boldsymbol{A}) \ = \ x_{1}\begin{bmatrix} \ -u \\[0.3em] \ -u^{2} \\[0.3em] \ -u E \end{bmatrix} \ + \ x_{2}\begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ + \ x_{3}\begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 0 \end{bmatrix} \end{equation} or \begin{equation} R(\boldsymbol{A}) \ = \ -u x_{1}\begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ + \ x_{2}\begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ = \ (-u x_{1} + x_{2}) \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \end{equation} Therefore, the column vector $\boldsymbol{X} \ = \ (1,u,E)^{t}$ spans the range space $R(\boldsymbol{A})$. Next, let $N(\boldsymbol{AX})$ be the null space of the matrix $(\boldsymbol{A}_{c}^{\boldsymbol{ZB}} -u\boldsymbol{I})\boldsymbol{X}$. By definition \begin{equation} N(\boldsymbol{AX}) \ = \ \big\{\ \boldsymbol{v} \in {\rm I\!R^n} ; (\boldsymbol{AX})\boldsymbol{v} = \boldsymbol{0} \big\} \end{equation} and the dimension of $\boldsymbol{v}$ is equal to the number of entries in each row of matrix $\boldsymbol{AX}$. Now \begin{equation} \boldsymbol{AX} \ = \ (\boldsymbol{A_{c}^{ZB}} -u\boldsymbol{I})\boldsymbol{X} \ = \ \begin{bmatrix} \ -u && 1 && 0 \\[0.3em] \ -u^{2} && u && 0 \\[0.3em] \ - u E && E && 0 \end{bmatrix} \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ = \ \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 0 \end{bmatrix}. \ \end{equation} For the present case, $\boldsymbol{AX}$ is just a null vector, so $\boldsymbol{v}$ reduces to a scalar coefficient.
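This range-space computation can be checked directly: $\boldsymbol{A}_{c}^{\boldsymbol{ZB}} - u\boldsymbol{I}$ has rank one, its column space is spanned by a multiple of $(1,u,E)^{t}$, and that vector is annihilated by the matrix. A SymPy sketch (illustrative only):

```python
# Verify R(A) = span{(1,u,E)^t} and (A_c^ZB - u*I)(1,u,E)^t = 0.
import sympy as sp

u, E = sp.symbols('u E')
N = sp.Matrix([[-u, 1, 0],
               [-u**2, u, 0],
               [-u*E, E, 0]])      # A_c^ZB - u*I

rank_N = N.rank()                  # expect 1
basis = N.columnspace()            # one basis vector, a multiple of (1, u, E)^t
X = sp.Matrix([1, u, E])
NX = sp.expand(N*X)                # expect the zero vector
```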
By definition of the null space of column vectors we have \begin{equation} \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 0 \end{bmatrix}v \ = \ \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 0 \end{bmatrix} \ \end{equation} which holds for any $v \in {\rm I\!R}$. By definition, $\boldsymbol{X}\boldsymbol{v}$, which is equal to $\boldsymbol{X}v$, should form a basis for $ R(\boldsymbol{A}_{c}^{\boldsymbol{ZB}} -u\boldsymbol{I})$ $\cap$ $N(\boldsymbol{A}_{c}^{\boldsymbol{ZB}} -u\boldsymbol{I})$. Thus, $\boldsymbol{X}_1 \ = \ (1,u,E)^{t}$ should be a generalized eigenvector, and to check that, we need to see whether the relation $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X} \ = \ u\boldsymbol{X}$ holds for $\boldsymbol{X} \ = \ \boldsymbol{X}_1$. Now \begin{equation} \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1 \ = \ \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2u && 0 \\[0.3em] \ - u E && E && u \end{bmatrix} \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ = \ u \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ = \ u \boldsymbol{X}_1 \ \end{equation} This generalized eigenvector is expected to form a Jordan chain of order two corresponding to matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$, {\em i.e.}, \begin{align} \begin{split} \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1 \ &= \ u\boldsymbol{X}_1 \\ \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_2 \ &= \ u\boldsymbol{X}_2 \ + \ \boldsymbol{X}_1 \end{split} \end{align} Eigenvector $\boldsymbol{X}_2$ can be found from the second relation, and in expanded form it is written as \begin{equation} \begin{bmatrix} \ 0 && 1 && 0 \\[0.3em] \ -u^{2} && 2u && 0 \\[0.3em] \ - u E && E && u \end{bmatrix} \begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ = \ u\begin{bmatrix} \ x_{1} \\[0.3em] \ x_{2} \\[0.3em] \ x_{3} \end{bmatrix} \ + \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \end{equation} After a little algebra, $\boldsymbol{X}_2$ comes
out as \begin{equation} \boldsymbol{X}_2 \ = \ \begin{bmatrix} \ x_{1} \\[0.3em] \ 1 + ux_{1} \\[0.3em] \ x_{3} \end{bmatrix} \ = \ \boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} \end{equation} where $x_1, x_3 \in {\rm I\!R}$ are arbitrary real constants. The corresponding eigenvector is \begin{equation} \boldsymbol{X}_1 \ = \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ E \end{bmatrix} \ = \ \boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} \end{equation} and we can take $\boldsymbol{R}_{c,3}^{\boldsymbol{ZB}}$ either equal to \begin{equation} \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \ \ \textrm{or} \ \ \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ 0 \end{bmatrix}. \end{equation} If we take \begin{equation} \boldsymbol{R}_{c,3}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \ \ \textrm{then} \ \boldsymbol{P} \ = \ \begin{bmatrix} \ 1 & x_{1} & 0 \\[0.3em] \ u & 1 + ux_{1} & 0 \\[0.3em] \ E & x_{3} & 1 \end{bmatrix} \end{equation} and $det(\boldsymbol{P}) \ = \ 1$. Further, \begin{equation} \boldsymbol{P}^{-1}\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{P} \ = \ \ \begin{bmatrix} \ u & 1 & 0 \\[0.3em] \ 0 & u & 0 \\[0.3em] \ 0 & 0 & u \end{bmatrix} \ = \ \boldsymbol{J}_{1} \end{equation} Similarly, if we take \begin{equation} \boldsymbol{P} \ = \ \begin{bmatrix} \ 0 & 1 & x_{1} \\[0.3em] \ 0 & u & 1 + ux_{1} \\[0.3em] \ 1 & E & x_{3} \end{bmatrix} \ \textrm{then} \ \boldsymbol{P}^{-1}\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{P} \ = \ \ \begin{bmatrix} \ u & 0 & 0 \\[0.3em] \ 0 & u & 1 \\[0.3em] \ 0 & 0 & u \end{bmatrix} \ = \ \boldsymbol{J}_{2} \end{equation} Similarly, let \begin{equation} \boldsymbol{P} \ = \ \begin{bmatrix} \ 1 & x_{1} & 1 \\[0.3em] \ u & 1 + ux_{1} & u \\[0.3em] \ E & x_{3} & 0 \end{bmatrix} \end{equation} then $det(\boldsymbol{P}) \ = \ -E \neq 0$, so this choice of $\boldsymbol{R}_{c,3}^{\boldsymbol{ZB}}$ also yields an invertible modal matrix.
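The full Jordan chain for $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ and the similarity transform can likewise be verified for arbitrary $x_{1}, x_{3}$; an illustrative SymPy sketch:

```python
# Verify the Zha-Bilgen Jordan chain and P^{-1} A_c^ZB P = J_1.
import sympy as sp

u, E, x1, x3 = sp.symbols('u E x1 x3')
Ac = sp.Matrix([[0, 1, 0],
                [-u**2, 2*u, 0],
                [-u*E, E, u]])

X1 = sp.Matrix([1, u, E])              # eigenvector from the range/null-space intersection
X2 = sp.Matrix([x1, 1 + u*x1, x3])     # generalized eigenvector

chain1 = sp.expand(Ac*X1 - u*X1)       # expect zero: A X1 = u X1
chain2 = sp.expand(Ac*X2 - u*X2 - X1)  # expect zero: A X2 = u X2 + X1

P = sp.Matrix([[1, x1, 0],
               [u, 1 + u*x1, 0],
               [E, x3, 1]])
detP = sp.simplify(P.det())            # expect 1
J = sp.simplify(P.inv()*Ac*P)          # expect [[u,1,0],[0,u,0],[0,0,u]]
```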
\section{Formulation of ZBS-FDS and TVS-FDS schemes} \subsection{ZBS-FDS scheme} We first consider the pressure subsystem of the Zha and Bilgen type splitting and, on comparing (\ref{conservation_ZB_p_part}) and (\ref{quasi_form_ZB_p_part}), we get \begin{equation} d\boldsymbol{F}_{p}^{\boldsymbol{ZB}} \ = \ \boldsymbol{A}_{p}^{\boldsymbol{ZB}}d\boldsymbol{U} \end{equation} The finite difference analogue of the above differential relation is \begin{equation}\label{FDS_ZBS_p_part} \Delta{\boldsymbol{F}_{p}^{\boldsymbol{ZB}}} \ = \ \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}}\Delta{\boldsymbol{U}} \end{equation} where $\boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}}$ is now a function of the left and right states, {\em i.e.}, $\boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}} = \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}}(\boldsymbol{U}_L,\boldsymbol{U}_R)$. Since $\Delta{\boldsymbol{U}}$ is a column vector, it can be written as a linear combination of the LI eigenvectors. \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\boldsymbol{\bar{R}}_{p,i}^{\boldsymbol{ZB}} \end{equation} On using the above expression in (\ref{FDS_ZBS_p_part}), \begin{equation} \\ \Delta{\boldsymbol{F}_{p}^{\boldsymbol{ZB}}} \ = \ \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}} \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}} \end{equation} or \begin{equation} \Delta{\boldsymbol{F}_p^{\boldsymbol{ZB}}} \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,3}^{\boldsymbol{ZB}} \end{equation} which is further equal to \begin{equation} \Delta{\boldsymbol{F}_p^{\boldsymbol{ZB}}} \ = \ 
\bar{\alpha}_{p,1}^{\boldsymbol{ZB}}\bar \lambda_{p,1}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \bar \lambda_{p,2}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \bar \lambda_{p,3}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,3}^{\boldsymbol{ZB}} \end{equation} Now, $\Delta{\boldsymbol{F}_p^{+\boldsymbol{ZB}}}$ must have contributions from the positive parts of the eigenvalues only, {\em i.e.}, \begin{align}\label{ZBS-FDS_positive_p_part} \begin{split} \Delta{\boldsymbol{F}_p^{+\boldsymbol{ZB}}} \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}\bar \lambda_{p,1}^{+\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,1}^{\boldsymbol{ZB}} \ &+ \ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \bar \lambda_{p,2}^{+\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \bar \lambda_{p,3}^{+\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,3}^{\boldsymbol{ZB}} \end{split} \end{align} Similarly, \begin{align}\label{ZBS-FDS_negative_p_part} \begin{split} \Delta{\boldsymbol{F}_p^{-\boldsymbol{ZB}}} \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}\bar \lambda_{p,1}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,1}^{\boldsymbol{ZB}} \ &+ \ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \bar \lambda_{p,2}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \bar \lambda_{p,3}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{p,3}^{\boldsymbol{ZB}} \end{split} \end{align} We now define the standard Courant splitting for the eigenvalues as \begin{equation} \bar \lambda^{\pm}_{p,i} = \frac{\bar \lambda_{p,i} \pm |\bar \lambda_{p,i}|}{2} \end{equation} On using standard upwinding along with the above definition, we finally get \begin{equation}\label{ZBS-FDS-p-part} \Delta{\boldsymbol{F}_p^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_p^{-\boldsymbol{ZB}}} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\left|\bar
\lambda_{p,i}^{\boldsymbol{ZB}}\right| \boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}} \end{equation} To determine the right side of (\ref{ZBS-FDS-p-part}) completely, we need to find the average values of the eigenvalues along with the coefficients attached to the LI eigenvectors. First we consider the linearization equation, $\Delta \boldsymbol{F}_{p}^{\boldsymbol{ZB}} \ = \ \boldsymbol{\bar{A}}_{p}^{\boldsymbol{ZB}}\Delta{\boldsymbol{U}}$, of the pressure subsystem of the ZBS-FDS scheme. In expanded form it can be written as \begin{equation} \begin{bmatrix} 0 \\[0.3em] \Delta(p) \\[0.3em] \Delta(pu) \end{bmatrix} \ = \ \begin{bmatrix} \ 0 & 0 & 0 \\[0.3em] \frac{1}{2} (\gamma -1) \bar u^2 & -(\gamma -1)\bar u & (\gamma -1) \\[0.3em] \ -\frac{\bar u \bar a^2}{\gamma} + \frac{(\gamma - 1)}{2} \bar u^3 & \frac{\bar a^2}{\gamma} - (\gamma - 1) \bar u^2 & (\gamma - 1)\bar u \end{bmatrix} \begin{bmatrix} \Delta (\rho) \\[0.3em] \Delta (\rho u) \\[0.3em] \Delta (\rho E) \end{bmatrix} \end{equation} From the second equation, we get \begin{equation} \Delta p \ = \ \frac{1}{2} (\gamma -1) \bar u^2 \Delta \rho \ - \ (\gamma -1)\bar u \Delta (\rho u) \ + \ (\gamma -1) \Delta (\rho E) \end{equation} or \begin{align} \begin{split} \Delta p \ = \ \frac{1}{2} (\gamma -1) \bar u^2 \Delta \rho \ - \ (\gamma -1)\bar u \Delta (\rho u) \ + \ (\gamma -1) \Delta(\frac{p}{\gamma - 1}) \\ \ + \ \frac{1}{2} (\gamma - 1) \Delta(\rho u^2) \end{split} \end{align} It further reduces to \begin{align}\label{average_rel_1} \begin{split} {\bar{u}}^2 \Delta (\rho) \ - \ 2 \bar{u} \Delta (\rho u) \ + \ \Delta (\rho u^{2})\ = \ 0 \end{split} \end{align} which gives the average value of the velocity $\bar u$ as given below. \begin{equation} \\ \ \bar{u} \ = \ \frac{\sqrt{\rho_L} u_L \ + \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \end{equation} The other root is neglected as it contains a difference of square roots in the denominator, which vanishes when $\rho_L = \rho_R$.
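The Roe-type average $\bar{u}$ satisfies the quadratic relation (\ref{average_rel_1}) exactly, not just approximately. A quick numerical sketch with arbitrary (hypothetical) left and right states:

```python
# Numerical check (hypothetical states) that the Roe-type average u_bar
# annihilates u_bar^2*D(rho) - 2*u_bar*D(rho*u) + D(rho*u^2).
import math

rhoL, uL = 1.0, 0.75      # assumed left state
rhoR, uR = 0.125, -0.4    # assumed right state

sL, sR = math.sqrt(rhoL), math.sqrt(rhoR)
ubar = (sL*uL + sR*uR)/(sL + sR)

d_rho   = rhoR - rhoL
d_rhou  = rhoR*uR - rhoL*uL
d_rhouu = rhoR*uR**2 - rhoL*uL**2

residual = ubar**2*d_rho - 2*ubar*d_rhou + d_rhouu   # should vanish to round-off
```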
Let us consider the relation \begin{equation}\label{average_rel_2} \Delta(\rho u) \ = \ \bar{\rho} \Delta{u} \ + \ \bar{u} \Delta{\rho} \end{equation} which, in expanded form, is written as \begin{equation} \rho_R u_R - \rho_L u_L \ = \ \bar{\rho}(u_R - u_L) \ + \ \bar{u}(\rho_R - \rho_L) \end{equation} On using $\bar{u}$ in the above relation we get $\bar{\rho} = \sqrt{\rho_L\rho_R}$. Similarly, the third equation can be written as \begin{eqnarray} \Delta(pu) \ & = & \ \left(-\frac{\bar u \bar a^2}{\gamma} + \frac{(\gamma - 1)}{2} \bar u^3\right)\Delta\rho \ \nonumber \\ & & + \ \left(\frac{\bar a^2}{\gamma} - (\gamma - 1) \bar u^2\right)\Delta (\rho u) + \ (\gamma - 1)\bar u \Delta( \rho E) \end{eqnarray} On expanding further, the above equation becomes \begin{align} \begin{split} \Delta(pu) \ &= \ \left(-\frac{\bar u \bar a^2}{\gamma} + \frac{(\gamma - 1)}{2} \bar u^3\right)\Delta\rho \ + \ \left(\frac{\bar a^2}{\gamma} - (\gamma - 1) \bar u^2\right)\Delta{(\rho u)} \\ \ &+ \ \bar u\Delta(p) + \frac{1}{2}(\gamma - 1)\bar{u}\Delta{(\rho u^{2})} \end{split} \end{align} We next use (\ref{average_rel_1}) and (\ref{average_rel_2}) in the above equation, and after cancellation of some terms we get \begin{equation} \Delta (pu) \ = \ \bar u \Delta p \ + \ \bar p \Delta u \end{equation} On using $\bar p = \dfrac{\bar{a}^{2}\bar{\rho}}{\gamma}$, we have \begin{equation}\label{average_rel_3} \Delta(a^2 \rho u) \ - \ \bar{u}\Delta(a^2\rho) \ = \ \bar{a}^2\bar{\rho} \Delta{u} \end{equation} Let $\eta$ be any flow variable.
Consider the relation \begin{equation} \Delta (\rho u \eta) \ - \ \bar u \Delta (\rho \eta) \ = \ \bar \rho \bar \eta \Delta u \end{equation} which can be written as \begin{equation} \Delta (\rho u \eta) \ - \ \bar u \Delta (\rho \eta) \ = \ \sqrt{\rho_L \rho_R}( \frac{\sqrt{\rho_L} \eta_L \ + \ \sqrt{\rho_R} \eta_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}}) \Delta u \end{equation} We put $\eta = a^2$; then the above relation becomes an equation, and on comparing it with (\ref{average_rel_3}) we get the average value of the acoustic speed from \begin{equation} \bar{a}^2 \ = \ \frac{\sqrt{\rho_L} a^{2}_L \ + \ \sqrt{\rho_R} a^{2}_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \end{equation} Similarly, we find the coefficients attached to the LI eigenvectors of the pressure subsystem for the ZBS-FDS scheme as follows. \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}} \end{equation} In expanded form, \begin{equation} \begin{bmatrix} \Delta(\rho) \\[0.3em] \Delta(\rho u) \\[0.3em] \Delta(\rho E) \end{bmatrix} = \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}\begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ \bar{u} - \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}} \end{bmatrix} + \bar{\alpha}_{p,2}^{\boldsymbol{ZB}}\begin{bmatrix} \ 1 \\[0.3em] \ \bar{u} \\[0.3em] \ \frac{1}{2}\bar{u}^{2} \end{bmatrix} + \bar{\alpha}_{p,3}^{\boldsymbol{ZB}}\begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ \bar{u} + \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}} \end{bmatrix} \end{equation} On comparing the first equation we have \begin{equation}\label{eq_1_ZBS-FDS wave_strenghts_p_part} \\ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \ = \ \Delta{\rho} \end{equation} From the second equation, we get \begin{equation} \Delta(\rho u) \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ + \ \bar{u} \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \end{equation} On using (\ref{eq_1_ZBS-FDS wave_strenghts_p_part}) in the above equation we get
the expression \begin{equation}\label{eq_2_ZBS-FDS wave_strenghts_p_part} \\ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ = \ \bar{\rho}\Delta{u} \end{equation} Similarly, from the third equation \begin{equation} \Delta(\rho E) \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}\bigg\{\bar{u} - \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}}\bigg\} \ + \ \frac{1}{2}\bar{u}^{2}\bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}}\bigg\{\bar{u} + \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}}\bigg\} \end{equation} or \begin{align} \begin{split} \Delta\Big(\frac{p}{(\gamma - 1)} \ + \ \frac{1}{2} \rho u^2\Big) \ &= \ \bar{u}(\bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}}) + \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}}(\bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ - \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}) \\ \ &+ \ \frac{1}{2}\bar{u}^{2}\bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \end{split} \end{align} On using (\ref{eq_2_ZBS-FDS wave_strenghts_p_part}) in the above equation, we get \begin{equation} \\ \frac{1}{(\gamma - 1)}\Delta{p} \ + \ \frac{1}{2}\Delta(\rho u^2) \ = \ \bar{\rho}\bar{u}\Delta{u} \ + \ \frac{\bar{a}}{\sqrt{\gamma(\gamma - 1)}}(\bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ - \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}}) \ + \ \frac{1}{2}\bar{u}^{2}\Delta{\rho} \end{equation} Now, $\frac{1}{2}\Delta(\rho u^2) \ = \ \bar{\rho}\bar{u}\Delta{u} \ + \ \frac{1}{2}\bar{u}^{2}\Delta{\rho} $ is an exact identity for the above-defined average values of $\bar{\rho} \ \textrm{and} \ \bar{u}$.
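The average values derived so far ($\bar{u}$, $\bar{\rho}$, $\bar{a}^{2}$) turn the linearization relations into exact identities. The following numerical sketch (with hypothetical states) checks the generic relation $\Delta(\rho u \eta) - \bar{u}\Delta(\rho\eta) = \bar{\rho}\bar{\eta}\Delta u$ for $\eta = a^{2}$:

```python
# Numerical check of D(rho*u*eta) - u_bar*D(rho*eta) = rho_bar*eta_bar*D(u)
# with eta = a^2 and Roe-type averages; the states below are hypothetical.
import math

gam = 1.4
rhoL, uL, pL = 1.0, 0.75, 1.0       # assumed left state
rhoR, uR, pR = 0.125, -0.4, 0.1     # assumed right state
aL2, aR2 = gam*pL/rhoL, gam*pR/rhoR # squared sound speeds

sL, sR = math.sqrt(rhoL), math.sqrt(rhoR)
ubar  = (sL*uL + sR*uR)/(sL + sR)
rbar  = math.sqrt(rhoL*rhoR)
a2bar = (sL*aL2 + sR*aR2)/(sL + sR)  # Roe-type average of a^2

lhs = (rhoR*uR*aR2 - rhoL*uL*aL2) - ubar*(rhoR*aR2 - rhoL*aL2)
rhs = rbar*a2bar*(uR - uL)
```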
Thus we are left with \begin{equation}\label{eq_3_ZBS-FDS wave_strenghts_p_part} \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ - \ \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ = \ \sqrt{\frac{\gamma}{\gamma - 1}}\frac{\Delta{p}}{\bar{a}} \end{equation} On solving (\ref{eq_2_ZBS-FDS wave_strenghts_p_part}) and (\ref{eq_3_ZBS-FDS wave_strenghts_p_part}) simultaneously, we get \begin{align} \begin{split} \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ &= \ \frac{\bar{\rho}\Delta{u}}{2} \ - \ \sqrt{\frac{\gamma}{\gamma - 1}}\frac{\Delta{p}}{2\bar{a}} \ \ \ \textrm{and} \ \\ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ &= \ \frac{\bar{\rho}\Delta{u}}{2} \ + \ \sqrt{\frac{\gamma}{\gamma - 1}}\frac{\Delta{p}}{2\bar{a}} \end{split} \end{align} Let us now consider the convective subsystem of the Zha and Bilgen type splitting. On comparing (\ref{conservation_ZB_c_part}) and (\ref{quasi_form_ZB_c_part}) and writing the finite difference analogue, we have \begin{equation}\label{FDS_ZBS_c_part} \Delta{\boldsymbol{F}_{c}^{\boldsymbol{ZB}}} \ = \ \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}}\Delta{\boldsymbol{U}} \end{equation} where $\boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}}$ is now a function of the left and right states, {\em i.e.}, $\boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}} = \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}}(\boldsymbol{U}_L,\boldsymbol{U}_R)$. It is worth noting that (\ref{FDS_ZBS_c_part}) is just a relation and may not become an equation for the already defined average values. Further, $\Delta{\boldsymbol{U}}$ is a column vector and, from the theory of Jordan forms, we are able to get a complete set of LI generalized eigenvectors.
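The averages and wave strengths derived above are straightforward to check numerically. The following minimal Python sketch (the function and variable names are our own, not from any reference implementation) computes $\bar{\rho}$, $\bar{u}$, $\bar{a}$ and $\bar{\alpha}_{p,i}^{\boldsymbol{ZB}}$ for a given pair of states:

```python
import math

def zb_pressure_wave_strengths(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    # Density-weighted (Roe-type) averages derived in the text.
    sL, sR = math.sqrt(rhoL), math.sqrt(rhoR)
    rho_bar = sL * sR
    u_bar = (sL * uL + sR * uR) / (sL + sR)
    a2L, a2R = gamma * pL / rhoL, gamma * pR / rhoR   # a^2 = gamma p / rho
    a_bar = math.sqrt((sL * a2L + sR * a2R) / (sL + sR))
    # Jumps across the interface.
    du, dp, drho = uR - uL, pR - pL, rhoR - rhoL
    k = math.sqrt(gamma / (gamma - 1.0))
    # Wave strengths of the ZBS pressure subsystem.
    alpha1 = 0.5 * rho_bar * du - k * dp / (2.0 * a_bar)
    alpha2 = drho
    alpha3 = 0.5 * rho_bar * du + k * dp / (2.0 * a_bar)
    return rho_bar, u_bar, a_bar, (alpha1, alpha2, alpha3)
```

With these values, the expansion $\Delta \boldsymbol{U} = \sum_{i} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}}$ reproduces all three components of $\Delta \boldsymbol{U}$ to machine precision.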
Thus we can form a generalized basis for $\Delta{\boldsymbol{U}}$, {\em i.e.}, \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{ZB}}\boldsymbol{\bar{R}}_{c,i}^{\boldsymbol{ZB}} \end{equation} On using the above expression in (\ref{FDS_ZBS_c_part}), we get \begin{equation} \Delta{\boldsymbol{F}_{c}^{\boldsymbol{ZB}}} \ = \ \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}} \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{ZB}} \end{equation} or \begin{equation} \Delta{\boldsymbol{F}_c^{\boldsymbol{ZB}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{ZB}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{ZB}} \end{equation} which is further equal to \begin{equation} \Delta{\boldsymbol{F}_c^{\boldsymbol{ZB}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{ZB}}\bar \lambda_{c,1}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}}\big( \bar \lambda_{c,2}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{ZB}} + \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}}\big) \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{ZB}} \bar \lambda_{c,3}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{ZB}} \end{equation} Now, $\Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}}$ must have contributions of the positive parts of the eigenvalues only, {\em i.e.}, \begin{align}\label{ZBS-FDS_positive_c_part} \begin{split} \Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} \ &= \ \bar{\alpha}_{c,1}^{\boldsymbol{ZB}}\bar \lambda_{c,1}^{+\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}} \bar \lambda_{c,2}^{+\boldsymbol{ZB}}
\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{ZB}} \bar \lambda_{c,3}^{+\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{ZB}} \\ \ &+ \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \end{split} \end{align} Similarly, \begin{align}\label{ZBS-FDS_negative_c_part} \begin{split} \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}} \ &= \ \bar{\alpha}_{c,1}^{\boldsymbol{ZB}}\bar \lambda_{c,1}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}} \bar \lambda_{c,2}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{ZB}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{ZB}} \bar \lambda_{c,3}^{-\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{ZB}} \\ \ &+ \ \bar{\alpha}_{c,2}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}} \end{split} \end{align} Again, we define the standard Courant splitting for the eigenvalues as \begin{equation} \bar \lambda^{\pm}_{c,i} = \frac{\bar \lambda_{c,i} \pm |\bar \lambda_{c,i}|}{2} \end{equation} On using standard upwinding along with the above definition, we finally get \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{ZB}}\left|\bar \lambda_{c,i}^{\boldsymbol{ZB}}\right| \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{ZB}} \end{equation} As we can see, the resultant of the extra contribution, which arises because of the weak hyperbolicity of the convective subsystem, turns out to be equal to zero.
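The Courant splitting and the resulting cancellation can be illustrated with a couple of lines of Python (a minimal sketch):

```python
def courant_split(lam):
    # Standard Courant splitting of an eigenvalue:
    # lam_plus + lam_minus = lam  and  lam_plus - lam_minus = |lam|.
    lam_plus = 0.5 * (lam + abs(lam))
    lam_minus = 0.5 * (lam - abs(lam))
    return lam_plus, lam_minus
```

Since the extra term $\bar{\alpha}_{c,2}^{\boldsymbol{ZB}}\boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{ZB}}$ enters $\Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}}$ and $\Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}}$ identically, it cancels in their difference, leaving only the $|\bar \lambda_{c,i}^{\boldsymbol{ZB}}|$ terms.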
Unlike for the pressure subsystem, here we do not need to find the wave strengths because all the eigenvalues corresponding to the convective subsystem are equal, which leads to \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}} \ = \ \left|\bar \lambda_{c}^{\boldsymbol{ZB}}\right| \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{ZB}} \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{ZB}} \end{equation} or \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}} \ = \ \left|\bar \lambda_{c}^{\boldsymbol{ZB}}\right| \Delta{\boldsymbol{U}} \end{equation} Now, \begin{equation} \Delta{\boldsymbol{U}} \ = \ \begin{bmatrix} \Delta U_1 \\[0.3em] \Delta U_2 \\[0.3em] \Delta U_3 \end{bmatrix} \ = \ \begin{bmatrix} \Delta (\rho) \\[0.3em] \Delta (\rho u) \\[0.3em] \Delta (\rho E) \end{bmatrix} \end{equation} where \begin{align} \begin{split} \Delta{U_1} \ &= \ \rho_R - \rho_L \\ \Delta{U_2} \ &= \ \Delta(\rho u) \ = \ \bar{\rho} \Delta{u} \ + \ \bar{u} \Delta{\rho} \ \textrm{and} \ \\ \Delta{U_3} \ &= \ \Delta(\rho E) \ = \ \Delta \Big(\frac{p}{(\gamma - 1)} \ + \ \frac{1}{2} \rho u^{2}\Big) \\ \ &= \ \frac{1}{\gamma -1} \Delta{p} \ + \ \frac{1}{2} \big(\bar{u}^2 \Delta{\rho} \ + \ 2 \bar{\rho} \bar{u} \Delta{u}\big) \end{split} \end{align} Here, we performed a numerical experiment to check whether $\Delta{(\rho E)} \ = \ \frac{1}{\gamma -1} \Delta{p} \ + \ \frac{1}{2} \big(\bar{u}^2 \Delta{\rho} \ + \ 2 \bar{\rho} \bar{u} \Delta{u}\big)$ holds or not. As we know from the theory of gas dynamics \cite{Zucrow_Gas_dynamics}, the ratio of densities, {\em i.e.}, $ (\frac{\rho_r}{\rho_l})$, attains a constant value of $6$ (for $\gamma = 1.4$) as the Mach number $M \rightarrow \infty$.
Next, we define \begin{equation} \textrm{error}_3 \ = \ \Delta{(\rho E)} \ - \ \frac{1}{\gamma -1} \Delta{p} \ - \ \frac{1}{2} \big(\bar{u}^2 \Delta{\rho} \ + \ 2 \bar{\rho} \bar{u} \Delta{u}\big) \end{equation} and we consider a test case with variable Mach number taken from \cite{steady_shock_problem}, which is given below. \begin{equation*} \begin{bmatrix} p_l \\[0.3em] \rho_l \\[0.3em] u_l \end{bmatrix} \ = \ \begin{bmatrix} \frac{1}{\gamma M^2} \\[0.3em] 1.0 \\[0.3em] 1.0 \end{bmatrix} \ \textrm{and} \ \begin{bmatrix} p_r \\[0.3em] \rho_r \\[0.3em] u_r \end{bmatrix} \ = \ \begin{bmatrix} p_l \frac{2 \gamma M^{2} \ - \ (\gamma - 1)}{\gamma + 1} \\[0.8em] \dfrac{\frac{\gamma + 1}{\gamma - 1} \frac{p_r}{p_l} + 1}{\frac{\gamma + 1}{\gamma - 1} + \frac{p_r}{p_l}} \\[1.5em] \sqrt {\gamma \frac{(2+(\gamma - 1)M^{2})p_r}{(2\gamma M^{2} + (1 - \gamma ))\rho_r}} \end{bmatrix} \end{equation*} As per expectations, the density ratio approaches the limit $6$, as shown in Figure \ref{ZBS-FDS_d_ratio.eps}. The quantity error$_3$, which is given in Figure \ref{ZBS-FDS_error-3.pdf}, is not exactly zero; it fluctuates between the limits $-10^{-15}$ and $10^{-15}$, which is nevertheless very small. The likely reason for this is round-off error, and on a macroscopic scale the error looks almost equal to zero, as shown in Figure (\ref{ZBS-FDS_error-3_macro.eps}).
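The experiment above can be reproduced with a short script. The sketch below (assuming $\gamma = 1.4$; the helper names are ours) builds the left and right shock states from the Mach number and evaluates error$_3$:

```python
import math

def shock_states(M, gamma=1.4):
    # Left/right states of the variable-Mach steady shock test case.
    pl, rhol, ul = 1.0 / (gamma * M ** 2), 1.0, 1.0
    pr = pl * (2.0 * gamma * M ** 2 - (gamma - 1.0)) / (gamma + 1.0)
    g = (gamma + 1.0) / (gamma - 1.0)
    rhor = (g * pr / pl + 1.0) / (g + pr / pl)
    ur = math.sqrt(gamma * (2.0 + (gamma - 1.0) * M ** 2) * pr
                   / ((2.0 * gamma * M ** 2 + 1.0 - gamma) * rhor))
    return (rhol, ul, pl), (rhor, ur, pr)

def error3(left, right, gamma=1.4):
    # error_3 = D(rho E) - Dp/(gamma-1) - (u_bar^2 D rho + 2 rho_bar u_bar D u)/2
    (rl, ul, pl), (rr, ur, pr) = left, right
    sl, sr = math.sqrt(rl), math.sqrt(rr)
    rho_bar, u_bar = sl * sr, (sl * ul + sr * ur) / (sl + sr)
    dE = (pr / (gamma - 1.0) + 0.5 * rr * ur ** 2) \
       - (pl / (gamma - 1.0) + 0.5 * rl * ul ** 2)
    return dE - (pr - pl) / (gamma - 1.0) \
              - 0.5 * (u_bar ** 2 * (rr - rl) + 2.0 * rho_bar * u_bar * (ur - ul))
```

Sweeping the Mach number shows the density ratio approaching $(\gamma+1)/(\gamma-1) = 6$ while error$_3$ stays at round-off level, consistent with the behaviour reported above.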
\begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/ZBS-FDS_d_ratio.pdf}% \label{ZBS-FDS_d_ratio.eps}}% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/ZBS-FDS_error-3.pdf}% \label{ZBS-FDS_error-3.pdf} } }% \caption{(a) Ratio of densities and (b) error$_3$ results at the microscopic level} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[trim=5 5 35 5, clip, width=0.6\textwidth]{Results/ZBS-FDS_error-3_macro.pdf} \caption{error$_3$ results at the macroscopic level} \label{ZBS-FDS_error-3_macro.eps} \end{center} \end{figure} The final expressions in the finite volume framework with the ZBS-FDS scheme for the Euler equations are as follows. \begin{equation} \boldsymbol{U}^{n+1}_{j} \ = \ \boldsymbol{U}^{n}_{j} - \frac{\Delta t}{\Delta x} \left[ \boldsymbol{F}^{n}_{j+\frac{1}{2}} \ - \ \boldsymbol{F}^{n}_{j-\frac{1}{2}} \right] \end{equation} where the cell-interface fluxes, $\boldsymbol{F}_{I} \ = \ \boldsymbol{F}_{j\pm\frac{1}{2}}$, are defined by \begin{equation} \boldsymbol{F}_{I} = \frac{1}{2} \left[\boldsymbol{F}_{L} \ + \ \boldsymbol{F}_{R} \right] - \frac{1}{2} \left[\left(\Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}}\right) \ + \ \left(\Delta{\boldsymbol{F}_p^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_p^{-\boldsymbol{ZB}}}\right) \right] \end{equation} where \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{ZB}}} \ = \ \left|\bar \lambda_{c}^{\boldsymbol{ZB}}\right|\begin{bmatrix} \rho_R - \rho_ L \\[0.8em] \bar{\rho} \Delta{u} \ + \ \bar{u} \Delta{\rho} \\[0.8em] \frac{1}{\gamma -1} \Delta{p} \ + \ \frac{1}{2} \big(\bar{u}^2 \Delta{\rho} \ + \ 2 \bar{\rho} \bar{u} \Delta{u}\big) \end{bmatrix} \end{equation} and \begin{equation} \Delta{\boldsymbol{F}_p^{+\boldsymbol{ZB}}} - \Delta{\boldsymbol{F}_p^{-\boldsymbol{ZB}}} \ = \ \sum_{i =
1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\left|\bar \lambda_{p,i}^{\boldsymbol{ZB}}\right| \boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}} \end{equation} \subsection{Formulation of TVS-FDS scheme} Let us consider the pressure subsystem corresponding to the Toro and V\'azquez type splitting. On comparing (\ref{conservation_TV_p_part}) and (\ref{quasi_form_TV_p_part}), and on using the finite difference analogue, we have \begin{equation} \Delta{\boldsymbol{F}_{p}^{\boldsymbol{TV}}} \ = \ \boldsymbol{\bar{A}}_{p}^{\boldsymbol{TV}}\Delta{\boldsymbol{U}} \end{equation} A procedure similar to the one followed for the pressure subsystem of the Zha and Bilgen type splitting is applied here, and finally we get \begin{equation}\label{TVS-FDS-p-part} \Delta{\boldsymbol{F}_p^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_p^{-\boldsymbol{TV}}} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{TV}}\left|\bar \lambda_{p,i}^{\boldsymbol{TV}}\right| \boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{TV}} \end{equation} To determine the right side of (\ref{TVS-FDS-p-part}) completely, we need to find the average values of the eigenvalues along with the wave strengths.
Consider the linearization equation of the pressure subsystem for the TVS-FDS scheme, $\Delta \boldsymbol{F}_{p}^{\boldsymbol{TV}} \ = \ \boldsymbol{\bar{A}}_{p}^{\boldsymbol{TV}}\Delta{\boldsymbol{U}}$, {\em i.e.}, \begin{equation} \begin{bmatrix} 0 \\[0.3em] \Delta p \\[0.3em] (\frac{\gamma}{\gamma-1}) \Delta (p u) \end{bmatrix} \ = \ \begin{bmatrix} \ 0 & 0 & 0 \\[0.3em] \frac{1}{2} (\gamma -1) \bar u^2 & -(\gamma -1)\bar u & (\gamma -1) \\[0.3em] \ -\frac{\bar u \bar a^2}{(\gamma - 1)} + \frac{1}{2}\gamma \bar u^3 & \frac{\bar a^2}{(\gamma - 1)} - \gamma \bar u^2 & \gamma \bar u \end{bmatrix} \begin{bmatrix} \Delta (\rho) \\[0.3em] \Delta (\rho u) \\[0.3em] \Delta (\rho E) \end{bmatrix} \end{equation} From the second equation, we get \begin{equation} \Delta p \ = \ \frac{1}{2} (\gamma -1) \bar u^2 \Delta \rho \ - \ (\gamma -1)\bar u \Delta (\rho u) \ + \ (\gamma -1) \Delta (\rho E) \end{equation} or \begin{align} \begin{split} \Delta p \ &= \ \frac{1}{2} (\gamma -1) \bar u^2 \Delta \rho \ - \ (\gamma -1)\bar u \Delta (\rho u) \\ \ &+ \ (\gamma -1) \Delta(\frac{p}{\gamma - 1}) \ + \ \frac{1}{2} (\gamma - 1) \Delta(\rho u^2) \end{split} \end{align} which further reduces to \begin{equation} \ {\bar{u}}^2 \Delta (\rho) \ - \ 2 \bar{u} \Delta (\rho u) \ + \ \Delta (\rho u^{2})\ = \ 0 \end{equation} \begin{equation} \Rightarrow \bar{u} \ = \ \frac{\sqrt{\rho_L} u_L \ + \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \end{equation} The average density can be found by substituting $\bar{u}$ in the relation \begin{equation} \rho_R u_R - \rho_L u_L \ = \ \bar{\rho}(u_R - u_L) \ + \ \bar{u}(\rho_R - \rho_L) \end{equation} and after some simple calculation, we get $\bar{\rho} = \sqrt{\rho_L\rho_R}$.
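These averages make the linearization of the convective terms exact, which a short numerical check confirms (a minimal sketch in Python):

```python
import math

def roe_averages(rhoL, uL, rhoR, uR):
    # rho_bar = sqrt(rhoL rhoR); u_bar is the sqrt-of-density weighted velocity.
    sL, sR = math.sqrt(rhoL), math.sqrt(rhoR)
    return sL * sR, (sL * uL + sR * uR) / (sL + sR)
```

With these values, both $\bar{u}^2 \Delta\rho - 2\bar{u}\Delta(\rho u) + \Delta(\rho u^2) = 0$ and $\Delta(\rho u) = \bar{\rho}\Delta u + \bar{u}\Delta\rho$ hold to machine precision for arbitrary left and right states.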
Similarly, the third equation can be written as \begin{align} \begin{split} \left(\frac{\gamma}{\gamma-1}\right) \Delta( p u ) \ &= \ \left(-\frac{\bar u \bar a^2}{(\gamma - 1)} \ + \ \frac{1}{2}\gamma \bar u^3\right) \Delta \rho \\ \ &+ \ \left(\frac{\bar a^2}{(\gamma - 1)} \ - \ \gamma \bar u^2\right) \Delta (\rho u) \ + \ \gamma \bar u \Delta( \rho E) \end{split} \end{align} On expanding, we have \begin{align} \begin{split} \left(\frac{\gamma}{\gamma-1}\right) \Delta( p u ) \ &= \ \left(-\frac{\bar u \bar a^2}{(\gamma - 1)} \ + \ \frac{1}{2}\gamma \bar u^3\right) \Delta \rho \ + \ \left(\frac{\bar a^2}{(\gamma - 1)} \ - \ \gamma \bar u^2\right) \Delta (\rho u) \\ \ &+ \ \gamma \bar u \Delta\Big(\frac{p}{\gamma - 1} + \frac{1}{2} \rho u^{2} \Big) \end{split} \end{align} After further simplifications, the above relation reduces to \begin{equation}\label{recall_1} \gamma \Delta(pu) \ = \ \bar{a}^2 \bar{\rho}\Delta{u} \ + \ \gamma \bar{u}\Delta{p} \end{equation} On using $p = \dfrac{a^2\rho}{\gamma}$, we have \begin{equation} \Delta(a^2 \rho u) \ = \ \bar{u}\Delta(a^2\rho) \ + \ \bar{a}^2\bar{\rho} \Delta{u} \end{equation} As explained earlier, $\bar{a}^2$ comes out as \begin{equation} \bar{a}^2 \ = \ \frac{\sqrt{\rho_L} a^{2}_L \ + \ \sqrt{\rho_R} a^{2}_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \end{equation} Similarly, the wave strengths for the pressure subsystem of the TVS-FDS scheme can be calculated from the relation \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{p,i}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{TV}} \end{equation} {\em i.e.}, \begin{equation} \begin{bmatrix} \Delta(\rho) \\[0.3em] \Delta(\rho u) \\[0.3em] \Delta(\rho E) \end{bmatrix} = \bar{\alpha}_{p,1}^{\boldsymbol{TV}}\begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ \bar u + \frac{1}{2}(\frac{\bar u - \bar{\beta}}{\gamma - 1}) \end{bmatrix} + \bar{\alpha}_{p,2}^{\boldsymbol{TV}} \begin{bmatrix} \ 1 \\[0.3em] \ \bar u \\[0.3em] \ \frac{1}{2} \bar u^2 \end{bmatrix} +
\bar{\alpha}_{p,3}^{\boldsymbol{TV}} \begin{bmatrix} \ 0 \\[0.3em] \ 1 \\[0.3em] \ \bar u + \frac{1}{2}(\frac{\bar u + \bar {\beta}}{\gamma - 1}) \end{bmatrix} \end{equation} From the first equation, we get \begin{equation} \bar{\alpha}_{p,2}^{\boldsymbol{TV}} \ = \ \Delta \rho \end{equation} Similarly, the second equation gives \begin{equation} \Delta(\rho u) \ = \ \bar{\alpha}_{p,1}^{\boldsymbol{TV}} \ + \ \bar{u} \bar{\alpha}_{p,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{TV}} \end{equation} \begin{equation} \label{momentum_p} \Rightarrow \bar{\alpha}_{p,1}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{p,3}^{\boldsymbol{TV}} \ = \ \bar{\rho}\Delta{u} \end{equation} The third equation implies \begin{equation} \Delta (\rho E) \ = \ \{ \bar u + \frac{1}{2}(\frac{\bar u - \bar{\beta}}{\gamma - 1})\}\bar{\alpha}_{p,1}^{\boldsymbol{TV}} \ + \ \frac{1}{2}\bar u^2\bar{\alpha}_{p,2}^{\boldsymbol{TV}} \ + \ \{ \bar u + \frac{1}{2}(\frac{\bar u + \bar{\beta}}{\gamma - 1})\}\bar{\alpha}_{p,3}^{\boldsymbol{TV}} \end{equation} On rearrangement of terms and after some algebra, the above equation reduces to \begin{equation} \label{internal_p} \bar{\alpha}_{p,3}^{\boldsymbol{TV}} \ - \ \bar{\alpha}_{p,1}^{\boldsymbol{TV}} \ = \ \frac{1}{\bar{\beta}} \{ 2 \Delta p \ - \ \bar u \bar \rho \Delta u \} \end{equation} Finally, on solving (\ref{momentum_p})\ and\ (\ref{internal_p}) simultaneously, the wave strengths are given by \begin{align} \begin{split} \bar{\alpha}_{p,1}^{\boldsymbol{TV}} \ &= \ \frac{1}{2} \bar \rho \Delta u \ + \ \frac{1}{2 \bar \beta} \bar \rho
\bar u \Delta u \ - \frac{\Delta p}{\bar \beta} \ , \ \bar{\alpha}_{p,2}^{\boldsymbol{TV}} \ = \ \Delta \rho \\ \ \ \textrm{and} \ \ \bar{\alpha}_{p,3}^{\boldsymbol{TV}} \ &= \ \frac{1}{2} \bar \rho \Delta u \ - \ \frac{1}{2 \bar \beta} \bar \rho \bar u \Delta u \ + \ \frac{\Delta p}{\bar \beta} \end{split} \end{align} We know that the convective subsystem is weakly hyperbolic, but we can still form a basis of generalized eigenvectors, {\em i.e.}, \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation} and on comparing (\ref{conservation_TV_c_part}) and (\ref{quasi_form_TV_c_part}), and after writing in a finite difference form, we have the relation \begin{equation} \Delta{\boldsymbol{F}_{c}^{\boldsymbol{TV}}} \ = \ \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation} or \begin{equation} \label{full_convection_flux_for_C_part} \Delta{\boldsymbol{F}_c^{\boldsymbol{TV}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \end{equation} Since $\boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}}$ is non-diagonalizable, this means \begin{equation*} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \ \neq \ \bar \lambda_{c,i}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation*} for some $i$.
We know that $\boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}}$ is a generalized eigenvector, and together with $\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}}$ it forms a Jordan chain of order two, {\em i.e.}, \begin{equation} \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ = \ \bar \lambda_{c,2}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}}\ ~~\mbox{and}~~ \boldsymbol{\bar{A}}_{c}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \ = \ \bar \lambda_{c,3}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \ + \ \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \end{equation} On using the above relations in (\ref{full_convection_flux_for_C_part}), we get \begin{equation} \Delta{\boldsymbol{F}_c^{\boldsymbol{TV}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}}\bar \lambda_{c,1}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \bar \lambda_{c,2}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \bar \lambda_{c,3}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \ + \ \bar \alpha_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \end{equation} After using the standard Courant splitting for the eigenvalues, $\Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}}$ and $\Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}}$ are given by \begin{align}\label{flux_c_plus} \begin{split} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}}\bar \lambda_{c,1}^{+\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}} \ &+ \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \bar \lambda_{c,2}^{+\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \bar \lambda_{c,3}^{+\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \\ \ &+ \ \bar
\alpha_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \end{split} \end{align} and \begin{align}\label{flux_c_minus} \begin{split} \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}}\bar \lambda_{c,1}^{-\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}} \ &+ \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \bar \lambda_{c,2}^{-\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \bar \lambda_{c,3}^{-\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}} \\ \ &+ \ \bar \alpha_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \end{split} \end{align} Therefore, we have \begin{align} \begin{split} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ &= \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\bar \lambda_{c,i}^{+\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \ + \ \bar \alpha_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \\ \ &- \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\bar \lambda_{c,i}^{-\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \ - \ \bar \alpha_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \end{split} \end{align} \begin{equation} \Rightarrow \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\left(\bar \lambda_{c,i}^{+\boldsymbol{TV}} \ - \ \bar \lambda_{c,i}^{-\boldsymbol{TV}}\right) \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation} or \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\left|\bar \lambda_{c,i}^{\boldsymbol{TV}}\right| \boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation} As in the previous case, the resultant of the extra contribution
comes out to be equal to zero. The above relation can be further written as \begin{align} \begin{split} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ &= \ \left|\bar \lambda_{c}^{\boldsymbol{TV}}\right|\big(\bar{\alpha}_{c,2}^{\boldsymbol{TV}} \boldsymbol{{\bar{R}}}_{c,2}^{\boldsymbol{TV}} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,3}^{\boldsymbol{TV}}\big) \\ \textrm{where} \ \left|\bar \lambda_{c}^{\boldsymbol{TV}}\right| \ &= \ \left|\bar \lambda_{c,2}^{\boldsymbol{TV}}\right| = \left|\bar \lambda_{c,3}^{\boldsymbol{TV}}\right|. \end{split} \end{align} This can be further written as \begin{equation}\label{TVS-FDS_c_part} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ = \ \left|\bar \lambda_{c}^{\boldsymbol{TV}}\right|\big[\Delta \boldsymbol{U} \ - \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}}\big] \end{equation} In order to determine (\ref{TVS-FDS_c_part}) fully, we need to find the wave strengths, which can be calculated from the relation \begin{equation} \Delta \boldsymbol{U} \ = \ \sum_{i = 1}^{3} \bar{\alpha}_{c,i}^{\boldsymbol{TV}}\boldsymbol{{\bar{R}}}_{c,i}^{\boldsymbol{TV}} \end{equation} or \begin{equation} \begin{bmatrix} \Delta(\rho) \\[0.3em] \Delta(\rho u) \\[0.3em] \Delta(\rho E) \end{bmatrix} \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \begin{bmatrix} \ 0 \\[0.3em] \ 0 \\[0.3em] \ 1 \end{bmatrix} \ + \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \begin{bmatrix} \ 1 \\[0.3em] \ \bar{u} \\[0.3em] \ \frac{1}{2} \bar{u}^{2} \end{bmatrix} \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}}\begin{bmatrix} \ x_{1} \\[0.3em] \ 1 + \bar{u}x_{1} \\[0.3em] \ \bar u + \frac{1}{2}\bar{u}^{2}x_{1} \end{bmatrix} \end{equation} From the first equation, we get \begin{equation}\label{eq_1_TVFS_wave_strenghts} \Delta(\rho) \ = \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ + \ x_{1}\bar{\alpha}_{c,3}^{\boldsymbol{TV}} \end{equation} From the second
equation, we have \begin{equation} \Delta(\rho u) \ = \ \bar{u}\bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ + \ (1+\bar{u}x_{1})\bar{\alpha}_{c,3}^{\boldsymbol{TV}} \end{equation} \begin{equation} \Rightarrow \bar{\rho} \Delta u \ + \ \bar{u} \Delta \rho \ = \ \bar{u}\big(\bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ + \ x_{1}\bar{\alpha}_{c,3}^{\boldsymbol{TV}}\big) \ + \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \end{equation} On using (\ref{eq_1_TVFS_wave_strenghts}) in the above equation and after cancellation of some terms, we get \begin{equation}\label{eq_2_TVFS_wave_strenghts} \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \ = \ \bar{\rho}\Delta{u} \end{equation} On using (\ref{eq_2_TVFS_wave_strenghts}) in (\ref{eq_1_TVFS_wave_strenghts}), we get \begin{equation} \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ = \ \Delta{\rho} \ - \ x_{1}\bar{\rho}\Delta{u} \end{equation} Similarly, from the third equation we have \begin{equation} \Delta(\rho E) \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \ + \ \frac{1}{2}{\bar{u}}^2 \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ + \ \big(\bar{u} + x_{1}\frac{1}{2}{\bar{u}}^2\big)\bar{\alpha}_{c,3}^{\boldsymbol{TV}} \end{equation} \begin{equation} \Rightarrow \Delta(\frac{p}{\gamma - 1} \ + \ \frac{1}{2} \rho u^2) \ = \ \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \ + \ \frac{1}{2}{\bar{u}}^2 \Big(\Delta{\rho} \ - \ x_{1}\bar{\rho}\Delta{u}\Big) \ + \ \big(\bar{u} + x_{1}\frac{1}{2}{\bar{u}}^2\big) \bar{\rho}\Delta{u} \end{equation} After a little algebra, we get \begin{equation} \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \ = \ \frac{1}{(\gamma - 1)}\Delta{p} \end{equation} Therefore, the wave strengths are finally given by \begin{equation} \bar{\alpha}_{c,1}^{\boldsymbol{TV}} \ = \ \frac{1}{(\gamma - 1)}\Delta{p} \ , \ \bar{\alpha}_{c,2}^{\boldsymbol{TV}} \ = \ \Delta{\rho} \ - \ x_{1}\bar{\rho}\Delta{u} \ \ \textrm{and} \ \ \bar{\alpha}_{c,3}^{\boldsymbol{TV}} \ = \ \bar{\rho}\Delta{u} \end{equation} On using the above calculated wave strength
$\bar{\alpha}_{c,1}^{\boldsymbol{TV}}$ along with the eigenvector $\boldsymbol{{\bar{R}}}_{c,1}^{\boldsymbol{TV}} \ = \ \boldsymbol{e}_3$ in (\ref{TVS-FDS_c_part}), we get \begin{equation} \Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}} \ = \ \left|\bar \lambda_{c}^{\boldsymbol{TV}}\right|\begin{bmatrix} \rho_R - \rho_ L \\[0.8em] \bar{\rho} \Delta{u} \ + \ \bar{u} \Delta{\rho} \\[0.9em] \frac{1}{2} \big(\bar{u}^2 \Delta{\rho} \ + \ 2 \bar{\rho} \bar{u} \Delta{u}\big) \end{bmatrix} \end{equation} Thus, the final expressions for the TVS-FDS scheme are written as \begin{equation} \boldsymbol{U}^{n+1}_{j} \ = \ \boldsymbol{U}^{n}_{j} - \frac{\Delta t}{\Delta x} \left[ \boldsymbol{F}^{n}_{j+\frac{1}{2}} \ - \ \boldsymbol{F}^{n}_{j-\frac{1}{2}} \right] \end{equation} where the cell-interface fluxes, $\boldsymbol{F}_{I} \ = \ \boldsymbol{F}_{j\pm\frac{1}{2}}$, are defined by \begin{align} \begin{split} \boldsymbol{F}_{I} \ = \ \frac{1}{2} \left[\boldsymbol{F}_{L} \ + \ \boldsymbol{F}_{R} \right] \ - \ \frac{1}{2} \left[ \left(\Delta{\boldsymbol{F}_c^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_c^{-\boldsymbol{TV}}}\right) + \left(\Delta{\boldsymbol{F}_p^{+\boldsymbol{TV}}} - \Delta{\boldsymbol{F}_p^{-\boldsymbol{TV}}}\right) \right] \end{split} \end{align} \section{Results and discussion} We first consider a smooth solution problem with periodic boundary conditions to check the experimental order of convergence of both constructed upwind schemes. After that, both the ZBS-FDS and TVS-FDS schemes are tested on various one-dimensional Riemann problems of gas dynamics. For most of the numerical examples, the computational domain lies between $0$ and $1$, {\em i.e.}, $0\leq x \leq 1.0$, except for the Sod's shock tube and shock-entropy test problems.
For each problem, the computational domain is divided into 100 equally spaced cells, except for the shock-entropy interaction and blast problem test cases, in which the computational domain is divided into $800$ and $3000$ equally spaced cells, respectively. \subsection{Experimental order of convergence (EOC)} The order of accuracy of both constructed upwind schemes can be determined using the EOC analysis, as given below. \begin{equation}\label{EOC_formula_1} E \ = \ C(\Delta{x})^{s} \end{equation} where $E$ is the error between the exact solution and the numerical solution in some appropriate norm. In particular, we take three norms, namely $L_{1}, L_{2}$ and $L_{\infty}$. Here, $C$ is a constant, $\Delta{x}$ is the grid spacing and $s$ is the order of accuracy, which needs to be calculated. On taking logarithms on both sides of (\ref{EOC_formula_1}), we get \begin{equation}\label{EOC_formula_2} \log \ {E} \ = \ \log \ {C} \ + \ s \ \log \ {\Delta{x}} \end{equation} which is the equation of a straight line with slope $s$. For a given norm, let us initially take $\Delta{x} \ = \ h_{1}$ and, on using this in (\ref{EOC_formula_2}), we get \begin{equation}\label{EOC_formula_3} \log \ {E}_{norm,h_{1}} \ = \ \log \ {C} \ + \ s \ \log \ h_1 \end{equation} Next we take $\Delta{x} \ = \ h_{2}$, preferably $h_{2} \ = \ \frac{h_{1}}{2}$, with the same norm in (\ref{EOC_formula_2}), and we get \begin{equation}\label{EOC_formula_4} \log \ {E}_{norm,h_{2}} \ = \ \log \ {C} \ + \ s \ \log \ h_2 \end{equation} On subtracting (\ref{EOC_formula_4}) from (\ref{EOC_formula_3}), the formula for finding the experimental order of convergence $s$ comes out as follows.
\begin{equation} s \ = \ \frac{\left(\log \ {E}_{norm,h_{1}} \ - \ \log \ {E}_{norm,h_{2}}\right)}{\left(\log \ h_1 \ - \ \log \ h_2\right)} \end{equation} To check the performance of both schemes in terms of the errors associated with each norm for different grid sizes, we choose a test case from \cite{Arun_et_al} with smooth initial conditions for the one-dimensional Euler system, which are given below. \begin{equation*} \rho{(x,t)} \ = \ 1.0 + 0.2 \sin(\pi(x-ut)), \ \ u{(x,t)} \ = \ 0.1, \ \ p{(x,t)} \ = \ 0.5. \end{equation*} For the present case, the final solutions remain smooth, and periodic boundary conditions are employed to get meaningful solutions. The computational domain is chosen as $[0, 2]$, {\em i.e.}, $0\leq x \leq 2$, and all solutions are obtained at final time $t = 0.5$. The $L_{1}$ error norms for both schemes are given in Table \ref{norm-1_smooth_prob} and, as per expectations, there is a reduction in the error on refinement of the mesh size. Similarly, the $L_{2}$ and $L_{\infty}$ error norms are given in Table \ref{norm-2_smooth_prob} and Table \ref{norm-3_smooth_prob}. It is clear from all three tables that the performance of both schemes is similar if no discontinuity is present in the solution.
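The EOC formula and the three error norms translate directly into code; a minimal Python sketch (the grid-weighted norm convention is our assumption, as the paper does not spell out its scaling):

```python
import math

def eoc(e_coarse, e_fine, h_coarse, h_fine):
    # s = (log E_{h1} - log E_{h2}) / (log h1 - log h2)
    return (math.log(e_coarse) - math.log(e_fine)) \
         / (math.log(h_coarse) - math.log(h_fine))

def error_norms(numerical, exact, dx):
    # Discrete L1, L2 and L_inf norms of the pointwise error.
    diffs = [abs(a - b) for a, b in zip(numerical, exact)]
    return (dx * sum(diffs),
            (dx * sum(d * d for d in diffs)) ** 0.5,
            max(diffs))
```

For instance, applying `eoc` to the 160- and 320-cell rows of the $L_{1}$ table below reproduces the tabulated value of about $1.011$.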
\begin{table}[!ht] \caption{$L_{1}$ error norm for smooth solution problem corresponding to both schemes}\label{norm-1_smooth_prob} \centerline{% \begin{tabular}{|c|c|c|c|c|} \hline \textrm{grid points}{}~~ & ZBS-FDS scheme & EOC & TVS-FDS scheme & EOC \\ \hline $40$ & 0.004476 & - & 0.004476 & - \\ \hline $80$ & 0.002529 & 0.82 & 0.002529 & 0.82 \\ \hline $160$ & 0.001258 & 1.007 & 0.001258 & 1.007 \\ \hline $320$ & 0.000624 & 1.011 & 0.000624 & 1.011 \\ \hline $640$ & 0.000308 & 1.018 & 0.000308 & 1.018\\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{2}$ error norm for smooth solution problem corresponding to both schemes}\label{norm-2_smooth_prob} \centerline{% \begin{tabular}{|c|c|c|c|c|} \hline \textrm{grid points}{}~~ & ZBS-FDS scheme & EOC & TVS-FDS scheme & EOC \\ \hline $40$ & 0.005238 & - & 0.005238 & - \\ \hline $80$ & 0.003227 & 0.6988 & 0.003227 & 0.6988 \\ \hline $160$ & 0.001707 & 0.9187 & 0.001707 & 0.9187 \\ \hline $320$ & 0.000885 & 0.9477 & 0.000885 & 0.9477 \\ \hline $640$ & 0.000452 & 0.9693 & 0.000452 & 0.9693\\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{\infty}$ error norm for smooth solution problem corresponding to both schemes}\label{norm-3_smooth_prob} \centerline{% \begin{tabular}{|c|c|c|c|c|} \hline \textrm{grid points}{}~~ & ZBS-FDS scheme & EOC & TVS-FDS scheme & EOC \\ \hline $40$ & 0.019067 & - & 0.019067 & - \\ \hline $80$ & 0.013968 & 0.44 & 0.013968 & 0.44 \\ \hline $160$ & 0.007847 & 0.83 & 0.007847 & 0.83 \\ \hline $320$ & 0.003994 & 0.974 & 0.003994 & 0.974 \\ \hline $640$ & 0.002006 & 0.9935 & 0.002006 & 0.9935\\ \hline \end{tabular} }% \end{table} \subsection{Sod's shock tube problem and Lax problem} First, we consider Sod's shock tube problem, in which an initial discontinuity in the middle evolves into, going from right to left as we observe, a shock, a contact discontinuity and an expansion fan.
The initial conditions \cite{Laney_CFD} are $(\rho_L, u_L, p_L) = (1.0, 0.0, 100000.0)$, $(\rho_R, u_R, p_R) = (0.125, 0.0, 10000.0)$ with the initial discontinuity at $x_{o} = 0.0$ and the computational domain lying between $-10$ and $10$. All numerical results are obtained at final time $t = 0.01$. For this test case, the ZBS-FDS and TVS-FDS schemes exhibit very similar results except near the normal shock region, where the ZBS-FDS scheme performs slightly better than the TVS-FDS scheme. Results corresponding to the density variable are presented in Figure \ref{Laney1}. We also present error analysis of both schemes corresponding to the $L_{1}$-norm and $L_{2}$-norm, with results given in Table \ref{norm-1_Laney1} and Table \ref{norm-2_Laney1} respectively. The error analysis indicates that the ZBS-FDS scheme is a little more accurate than the TVS-FDS scheme. Next we consider the Lax test case, for which the initial conditions are given as $(\rho_L, u_L, p_L) = (0.445, 0.698, 3.528)$, $(\rho_R, u_R, p_R) = (0.5, 0.0, 0.571)$ with $x_{o} = 0.5$, and all numerical results are obtained at final time $t=0.15$. The contact and shock discontinuities here are stronger than those in Sod's shock tube problem. Results of both the ZBS-FDS and TVS-FDS schemes are given in Figure \ref{wes2_d}. For this problem too, the ZBS-FDS scheme performs slightly better than the TVS-FDS scheme.
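Both Riemann problems above are specified by piecewise-constant primitive states; as a minimal illustration of how such data are laid out on a cell-centred grid (the cell count of 400 is our arbitrary choice, not a value from this work):

```python
def riemann_initial_state(x, x0, left, right):
    """Piecewise-constant primitive state (rho, u, p) of a 1-D Riemann problem."""
    return left if x < x0 else right

# Sod-type data used above, with the discontinuity at x0 = 0.0 on [-10, 10]
sod_L = (1.0, 0.0, 100000.0)
sod_R = (0.125, 0.0, 10000.0)
ncells = 400  # illustrative resolution
centres = [-10.0 + 20.0 * (i + 0.5) / ncells for i in range(ncells)]
states = [riemann_initial_state(x, 0.0, sod_L, sod_R) for x in centres]
```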
\begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/Laney1.pdf}% \label{Laney1} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/wes2_d.pdf}% \label{wes2_d} } }% \caption{(a) Density plots for Sod's shock tube problem and (b) density plots for the Lax problem.} \end{figure} \begin{table}[!ht] \caption{$L_{1}$ error norm for the Sod's shock tube problem for both schemes}\label{norm-1_Laney1} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ & 0.502947 & 0.582406 \\ \hline $80$ & 0.352076 & 0.397561 \\ \hline $160$ & 0.235865 & 0.268590 \\ \hline $320$ & 0.156230 & 0.176909 \\ \hline $640$ & 0.101461 & 0.114140 \\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{2}$ error norm for the Sod's shock tube problem for both schemes}\label{norm-2_Laney1} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ & 0.177260 & 0.196831 \\ \hline $80$ & 0.134255 & 0.144438 \\ \hline $160$ & 0.097736 & 0.105535 \\ \hline $320$ & 0.073471 & 0.078268 \\ \hline $640$ & 0.055553 & 0.058590 \\ \hline \end{tabular} }% \end{table} \subsection{Sonic point problem and strong shock problem} Next, we present numerical results of both schemes for a modified version of Sod's problem. For this problem, the solution has a right shock wave, a right travelling contact discontinuity and a left sonic rarefaction wave. This test case is useful in assessing the entropy condition satisfaction property of numerical methods. The initial conditions for this problem are given as $(\rho_L, u_L, p_L) = (1.0, 0.75, 1.0)$, $(\rho_R, u_R, p_R) = (0.125, 0.0, 0.1)$ with the initial discontinuity at $x_{o} = 0.3$, and all numerical solutions are obtained at final time $t = 0.2$.
For this test case, low diffusive schemes like the Roe scheme may violate the entropy condition and give unphysical rarefaction shocks in the expansion region at sonic points. To avoid this drawback, additional numerical diffusion is typically required, usually introduced as an entropy fix; one such famous fix is given by Harten \cite{Harten_entropy_fix}. Because of sufficient inbuilt numerical diffusion, both the ZBS-FDS and TVS-FDS schemes are seen to satisfy the entropy condition, as can be seen in the results shown in Figure \ref{Toro1_d.eps}, with no rarefaction shock or non-smoothness present in the solution. For this problem, error analysis in the $L_1$-norm and $L_2$-norm shows that the TVS-FDS scheme is slightly more accurate, as given in Table \ref{norm-1_Toro1} and Table \ref{norm-2_Toro1}. The second test case is taken from \cite{Toro_Book_on_Riemann_Solvers} and is designed to assess the robustness and accuracy of numerical methods. Its solution consists of a strong shock wave with Mach number $198$, a contact discontinuity and a left rarefaction wave. The initial conditions are given as $(\rho_L, u_L, p_L) = (1.0, 0.0, 1000.0)$, $(\rho_R, u_R, p_R) = (1.0, 0.0, 0.01)$ with $x_{o}=0.5$, and all solutions are obtained at time $t=0.012$. Both schemes work well and results are given in Figure \ref{toro3_d.eps}. Error analysis in the $L_1$-norm and $L_2$-norm shows that the ZBS-FDS scheme is a little more accurate; results are given in Table \ref{norm-1_strong_shock} and Table \ref{norm-2_strong_shock}.
\begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/Toro1_d.pdf}% \label{Toro1_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/toro3_d.pdf}% \label{toro3_d.eps} } }% \caption{(a) Density plots for the sonic point problem and (b) density plots for the strong shock problem.} \end{figure} \begin{table}[!ht] \caption{$L_{1}$ error norm for the sonic point problem for both schemes}\label{norm-1_Toro1} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ & 0.038718 & 0.036894 \\ \hline $80$ & 0.028065 & 0.026387 \\ \hline $160$ & 0.019058 & 0.017863 \\ \hline $320$ & 0.012698 & 0.011879 \\ \hline $640$ & 0.008396 & 0.007822 \\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{2}$ error norm for sonic point problem for both schemes}\label{norm-2_Toro1} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ & 0.053061 & 0.051063 \\ \hline $80$ & 0.043452 & 0.040727 \\ \hline $160$ & 0.033069 & 0.031264 \\ \hline $320$ & 0.025245 & 0.024187 \\ \hline $640$ & 0.019600 & 0.018961 \\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{1}$ error norm for the strong shock problem for both schemes}\label{norm-1_strong_shock} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ & 0.317106 & 0.334709 \\ \hline $80$ & 0.241142 & 0.258266 \\ \hline $160$ & 0.180898 & 0.192025 \\ \hline $320$ & 0.131432 & 0.138044 \\ \hline $640$ & 0.088449 & 0.092496 \\ \hline \end{tabular} }% \end{table} \begin{table}[!ht] \caption{$L_{2}$ error norm for strong shock problem for both schemes}\label{norm-2_strong_shock} \centerline{% \begin{tabular}{|c|c|c|} \hline \textrm{grid points} & ZBS-FDS scheme & TVS-FDS scheme \\ \hline $40$ &
0.824979 & 0.856110 \\ \hline $80$ & 0.665983 & 0.699312 \\ \hline $160$ & 0.558651 & 0.574795 \\ \hline $320$ & 0.473423 & 0.479052 \\ \hline $640$ & 0.366668 & 0.372136 \\ \hline \end{tabular} }% \end{table} \subsection{Stationary contact discontinuity} A contact discontinuity occurs where a family of characteristics is parallel in the $x-t$ domain. Since the fluid velocity is the same on both sides, contact discontinuities move with the fluid. The initial conditions, as given in \cite{Toro_Book_on_Riemann_Solvers}, are $(\rho_L, u_L, p_L) = (1.4, 0.0, 1.0)$ and $(\rho_R, u_R, p_R) = (1.0, 0.0, 1.0)$. The initial discontinuity is present at $x_{o} = 0.5$. Both ZBS-FDS and TVS-FDS schemes capture the steady contact discontinuity exactly, as shown in Figure \ref{contact_d.eps}. \subsection{Strong shock problem with slowly moving contact discontinuity} This test case is also devised to test the robustness of numerical methods, but its main purpose is to assess the ability of numerical methods to resolve slowly moving contact discontinuities. The exact solution of this test consists of a left rarefaction wave, a right-travelling shock wave and a slowly moving contact discontinuity. The initial conditions are given as $(\rho_L, u_L, p_L) \ = \ (1.0, -19.59745, 1000.0)$, $(\rho_R, u_R, p_R) = (1.0, -19.59745, 0.01)$ with $x_{o}=0.8$, and all numerical solutions are obtained at time $t=0.012$. In the case of the ZBS-FDS scheme, the numerical solution approaches the top portion of the slowly moving contact wave whereas, for the TVS-FDS scheme, it stays a little below, as shown in Figure \ref{toro5_d.eps}.
\begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/contact_d.pdf}% \label{contact_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/toro5_d.pdf}% \label{toro5_d.eps} } }% \caption{(a) Density plot for the stationary contact discontinuity problem and (b) density plots for the strong shock problem with slowly moving contact discontinuity.} \end{figure} \subsection{Slowly moving shock} Numerical methods sometimes produce oscillations near shock regions, which are completely unphysical. The oscillations associated with slowly moving shock problems are usually linked with a lack of sufficient numerical diffusion in the scheme. We take a test case from \cite{Stiriba_Donat_postshock_oscillations} with initial conditions $(\rho_L, m_L, E_L) = (3.86, -3.1266, 27.0913)$ and $(\rho_R, m_R, E_R) = (1.0, -3.44, 8.4168)$, where $m = \rho u$ is the momentum and $E$ is the total energy. Final solutions are obtained at $t = 4$ units and results are given in Figure \ref{MS_d.eps}. \begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/MS_d.pdf}% \label{MS_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/wes3_d.pdf}% \label{wes3_d.eps} } }% \caption{(a) Density plots for the slowly moving shock problem and (b) density plots for the Mach $3$ problem.} \end{figure} \subsection{Mach 3 problem} The initial conditions for this problem are $(\rho_L, u_L, p_L) = (3.857, 0.92, 10.333)$ and $(\rho_R, u_R, p_R) = (1.0, 3.55, 1.0)$ with $x_{o} = 0.4$, and all solutions are obtained at $t = 0.1$ units. This problem consists of a supersonic flow with Mach number $3$ in the expansion region, and it produces a strong expansion fan. Low diffusive upwind schemes such as Roe's approximate solver fail for this problem and require an entropy fix.
According to Wesseling \cite{wesseling}, even after the use of Harten's entropy fix, the Roe scheme still gives a sonic glitch. Both the ZBS-FDS and TVS-FDS schemes perform well without needing any entropy fix, and results are given in Figure \ref{wes3_d.eps}. \subsection{Interacting blast wave problem}\label{blast_prob} This is one of the most severe test cases used to assess the performance of a numerical algorithm and is taken from Woodward and Colella \cite{Woodward_colella_JCP_1984}. The computational domain is divided into $3000$ equally spaced finite volumes. The initial density and velocity are constant, given by $\rho = 1.0$, $u = 0$. For the pressure variable, two discontinuities are present at positions $x = 0.1 \ \textrm{and} \ 0.9$. Initially, $p_{L}$ = $1000$ if $x \in [0.0 , 0.1]$, $p_{M}$ = $0.01$ if $x \in [0.1,0.9]$ and $p_{R}$ = $100$ if $x \in [0.9 , 1.0]$. The solution of this problem consists of multiple shocks, contact discontinuities and expansion waves. Results for the ZBS-FDS scheme are given at two different time levels, as shown in Figures \ref{blast1_d.eps} and \ref{blast2_d.eps}. For this test case, the TVS-FDS scheme {\em blew up} in our simulations. \begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/blast1_d.pdf}% \label{blast1_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/blast2_d.pdf}% \label{blast2_d.eps} } }% \caption{(a) Density plot for the blast wave problem at time t = 0.026 units and (b) density plot for the same problem at time t = 0.038 units.} \end{figure} \subsection{Shock-entropy wave interaction} The shock-entropy wave interaction test case considered here for testing the present schemes is taken from \cite{Balsara and Shu}, with the computational domain $x \in [-1,1]$ divided into $800$ equally spaced finite volumes; all solutions are obtained at final time $t = 0.47$. The initial conditions are given below.
\begin{align} \begin{split} (\rho_{L}, u_{L}, p_{L}) \ &= \ \big[3.857143, 2.629369, 10.3333\big] \ \ \textrm{if} \ \ x \ < \ -0.8 \\ (\rho_{R}, u_{R}, p_{R}) \ &= \ \big[1 + 0.2\sin(5\pi x), 0, 1\big] \ \ \textrm{if} \ \ x \ > \ -0.8. \end{split} \end{align} In this problem, a Mach 3 shock wave interacts with density disturbances created by perturbing the initial density. This initial disturbance gives rise to the continuous interaction of smooth flow with the discontinuities. A similar kind of interaction can be observed in compressible turbulence. Therefore, this is a suitable test case for assessing the ability of the scheme to resolve complex interactions, as needed in turbulence computations. First order results for the ZBS-FDS scheme are presented in Figure \ref{ZBS-FDS_shock_entropy_d.eps}. To achieve second order accuracy, we use Venkatakrishnan's limiter, a modified version of the van Albada limiter \cite{Venkatakrishnan_limiter}, with piecewise linear reconstruction of primitive variables. As an example, consider the piecewise linear reconstruction for the density variable, {\em i.e.}, to obtain \begin{equation} \rho_{i+1/2}^{L} \ = \ \rho_{i} \ + \ \frac{1}{2} \frac{\left(\Delta_{+}^{2} + \epsilon^{2}\right)\Delta_{-} \ + \ \left(\Delta_{-}^{2} + \epsilon^{2}\right)\Delta_{+}}{\Delta_{+}^{2} + \Delta_{-}^{2} + 2\epsilon^{2}} \end{equation} where \begin{align} \begin{split} \Delta_{+} \ &= \ \rho_{i+1} - \rho_{i} \\ \Delta_{-} \ &= \ \rho_{i} - \rho_{i-1} \end{split} \end{align} and \begin{equation} \epsilon^{2} \ = \ (K\Delta{x})^{3} \end{equation} Similarly, the other primitive variables can be reconstructed. Here, $K$ is a constant and $\Delta{x}$ is the grid spacing. Large values of $K$ correspond to no limiting and, in the present case, we take $K = 0.1$. For this problem, second order results for the ZBS-FDS scheme are computed and given in Figure \ref{ZBS-FDS_shock_entropy_2nd_order_d.eps}.
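The limited reconstruction above translates directly into code; a minimal Python sketch for a single left interface state (variable names are ours):

```python
def venkat_left_state(q_im1, q_i, q_ip1, dx, K=0.1):
    """Venkatakrishnan-limited left state at interface i+1/2:
    q_L = q_i + ((dp^2 + e2)*dm + (dm^2 + e2)*dp) / (2*(dp^2 + dm^2 + 2*e2)),
    with e2 = (K*dx)^3."""
    dp = q_ip1 - q_i       # Delta_plus
    dm = q_i - q_im1       # Delta_minus
    e2 = (K * dx) ** 3     # epsilon^2
    num = (dp * dp + e2) * dm + (dm * dm + e2) * dp
    den = dp * dp + dm * dm + 2.0 * e2
    return q_i + 0.5 * num / den
```

In smooth monotone regions ($\Delta_+ \approx \Delta_-$) the limited slope reduces to the central one, preserving second order accuracy, while at a local extremum ($\Delta_+ = -\Delta_-$) the correction vanishes.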
Comparison of the first order and second order results for the ZBS-FDS scheme is given in Figure \ref{ZBS-FDS_shock_entropy_comp_d.eps}. Results for the TVS-FDS scheme are presented in Figures \ref{TVS-FDS_shock_entropy_d.eps} and \ref{shock_entropy_2nd_TVS_d.eps}. Both schemes produce results of nearly similar accuracy for this test case. In both cases, the second order accurate results are substantially better than the first order accurate results. \begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/shock_entropy_ZBS_d.pdf}% \label{ZBS-FDS_shock_entropy_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/shock_entropy_2nd_ZBS_d.pdf}% \label{ZBS-FDS_shock_entropy_2nd_order_d.eps} } }% \caption{(a) 1st-order results of the ZBS-FDS scheme for the shock-entropy wave interaction problem and (b) 2nd-order results of the ZBS-FDS scheme for the same problem.} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[trim=5 5 35 5, clip, width=0.7\textwidth]{Results/ZBS-FDS_shock_entropy_comp_d.pdf} \caption{Comparison of 1st-order and 2nd-order numerical results of the ZBS-FDS scheme for the shock-entropy wave interaction problem.} \label{ZBS-FDS_shock_entropy_comp_d.eps} \end{center} \end{figure} \begin{figure}[!ht] \centerline{% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/shock_entropy_TVS_d.pdf}% \label{TVS-FDS_shock_entropy_d.eps} }% \subfigure[]{% \includegraphics[trim=0 5 35 5, clip, width=0.55\textwidth]{Results/shock_entropy_2nd_TVS_d.pdf}% \label{shock_entropy_2nd_TVS_d.eps} } }% \caption{(a) Density plot for the 1st-order TVS-FDS scheme for the shock-entropy wave interaction problem and (b) density plot for the 2nd-order TVS-FDS scheme for the same problem.} \end{figure} \section{Two-dimensional Euler system} The 2-D Euler equations form a system of four coupled non-linear hyperbolic PDEs with independent space variables $x,y$ and
independent time variable $t$. In the differential, as well as conservative, form the 2-D Euler system can be written as \begin{equation}\label{2-D_differential_form} \dfrac{\partial \boldsymbol{U}}{\partial{t}} \ + \ \dfrac{\partial \boldsymbol{F}_{1}}{\partial{x}} \ + \ \dfrac{\partial \boldsymbol{F}_{2}}{\partial{y}} \ = \ \boldsymbol{0} \end{equation} where $\boldsymbol{U}$ is the vector of conserved variables and $\boldsymbol{F}_{1}$, $\boldsymbol{F}_{2}$ are the flux vectors, given as follows. \begin{equation} \boldsymbol{U} = \begin{bmatrix} \rho \\[0.4em] \rho u \\[0.4em] \rho v \\[0.4em] \rho E \end{bmatrix} \ \textrm{,} \ \boldsymbol{F}_{1} = \begin{bmatrix} \rho u \\[0.4em] \rho u^{2} + p \\[0.4em] \rho uv \\[0.4em] \rho u E + p u \end{bmatrix} \ \textrm{and} \ \boldsymbol{F}_{2} = \begin{bmatrix} \rho v \\[0.4em] \rho uv \\[0.4em] \rho v^{2} + p \\[0.4em] \rho v E + p v \end{bmatrix} \end{equation} Using the divergence form, equation (\ref{2-D_differential_form}) can be written as \begin{equation}\label{FV_2d_Euler} \dfrac{\partial \boldsymbol{U}}{\partial{t}} \ + \ {\nabla}\boldsymbol{.F} \ = \ \boldsymbol{0} \end{equation} where \begin{equation} \boldsymbol{F} = \begin{bmatrix} \rho u_{\bot} \\[0.4em] \rho u u_{\bot} + p n_{x} \\[0.4em] \rho v u_{\bot} + p n_{y} \\[0.4em] \rho E u_{\bot} + p u_{\bot} \end{bmatrix} \end{equation} is the flux vector, and the scalar $u_{\bot}$ is defined as the scalar product of the velocity vector $\boldsymbol{u}$ and the unit normal vector $\boldsymbol{n}$, {\em i.e.}, \begin{equation} u_{\bot} \ = \ \boldsymbol{u.} \boldsymbol{n} \ = \ n_{x} u + n_{y} v \end{equation} where $n_x$ and $n_y$ represent the direction cosines of the unit normal $\boldsymbol{n}$ to the cell-interface and are given by \begin{equation} n_x \ = \ \dfrac{\Delta{y}}{\Delta{s}} \ , \ n_y \ = \ -\dfrac{\Delta{x}}{\Delta{s}} \end{equation} On integrating (\ref{FV_2d_Euler}) over domain $\varOmega$ with boundary $\partial {\varOmega}$ and on further using
Green's theorem, we get \begin{equation} \dfrac{\partial}{\partial t} \int_{\varOmega} \boldsymbol{U} d\varOmega \ + \ \oint_{\partial {\varOmega}} \boldsymbol{F} ds \ = \ \boldsymbol{0} \end{equation} On introducing the average value of $\boldsymbol{U}$ over $\varOmega$, the first integral can be re-written and the above equation becomes \begin{equation} \dfrac{\partial \boldsymbol{\bar{U}}}{\partial t} \ = \ - \dfrac{1}{A} \oint_{\partial {\varOmega}} \boldsymbol{F} ds \end{equation} where $A$ is the area enclosed by $\varOmega$. For a typical two-dimensional finite volume on general quadrilaterals, the boundary integral in the second term of the above semi-discrete form can be approximated by line integrals. After a little algebra, the discretized finite volume form for the 2-D Euler system is given by \begin{equation} \dfrac{\partial \boldsymbol{\bar{U}}_{m}}{\partial t} \ = \ - \frac{1}{A_m} \ \sum_{k} \boldsymbol{F}^{k} \Delta{s}^{k} \end{equation} \begin{figure}[!ht] \begin{center} \includegraphics[trim=0 0 0 0, clip, width=0.55\textwidth]{Results/gen_quard_1-crop.pdf} \caption{Schematic representation of general quadrilateral.} \label{gen_quard-crop} \end{center} \end{figure} where the subscript $m$ denotes the cell number and $k$ is the cell-interface index of the $m^{th}$ cell. Similarly, $\boldsymbol{F}^{k}$ and $\Delta{s}^{k}$ are the normal flux across and the length of the $k^{th}$ face. Further, $A_{m}$ is the area of the $m^{th}$ cell.\\ In 1-D, both the ZBS-FDS and TVS-FDS schemes performed reasonably well for most of the important test cases, except for the blast wave problem (\ref{blast_prob}), where TVS-FDS {\em blew up} quickly. Otherwise, it is a bit difficult to judge which of the two is more accurate, as the ZBS-FDS scheme produces slightly better results for Sod's shock tube problem, the Lax problem and both strong shock problems, whereas TVS-FDS scores over ZBS-FDS for the sonic point problem, the slowly moving shock problem and the Mach 3 problem.
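The discretized finite volume form above amounts to one explicit update per cell; a minimal sketch with forward-Euler time stepping (the function signature is our illustration, for a single scalar conserved quantity):

```python
def fv_update(u_bar, face_fluxes, face_lengths, area, dt):
    """One forward-Euler step of  dU_m/dt = -(1/A_m) * sum_k F^k * ds^k
    for a single (scalar) conserved quantity in cell m."""
    residual = sum(f * ds for f, ds in zip(face_fluxes, face_lengths))
    return u_bar - dt * residual / area
```

When the face fluxes sum to zero, the cell average is unchanged, reflecting discrete conservation; for a system, the same update is applied component-wise.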
In this work, we opt for the Zha and Bilgen type splitting in formulating an FDS-based numerical scheme for the 2-D Euler system, while emphasizing that this choice is purely based on our convenience and other possibilities can be explored in the future. \subsection{Analysis of Zha and Bilgen type splitting in 2-D} The flux vector $\boldsymbol{F}$ in the 2-D case is split into a convection part and a pressure part, based on Zha and Bilgen splitting, as follows. \begin{equation} \boldsymbol{F} \ = \ \boldsymbol{F}_{c}^{\boldsymbol{ZB}} \ + \ \boldsymbol{F}_{p}^{\boldsymbol{ZB}} \end{equation} where \begin{equation} \boldsymbol{F}_{c}^{\boldsymbol{ZB}} = \begin{bmatrix} \rho u_{\bot} \\[0.4em] \rho u u_{\bot} \\[0.4em] \rho v u_{\bot} \\[0.4em] \rho E u_{\bot} \end{bmatrix} \ \textrm{and} \ \boldsymbol{F}_{p}^{\boldsymbol{ZB}} = \begin{bmatrix} 0 \\[0.4em] p n_{x} \\[0.4em] p n_{y} \\[0.4em] p u_{\bot} \end{bmatrix} \end{equation} Let $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ denote the convection flux Jacobian matrix, which is given below. \begin{equation} \boldsymbol{A}_{c}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 0 && n_x && n_y && 0 \\[0.4em] \ -u u_\bot && u_\bot + u n_x && u n_y && 0 \\[0.4em] \ -v u_\bot && v n_x && u_\bot + v n_y && 0 \\[0.4em] \ -E u_\bot && E n_x && E n_y && u_\bot \end{bmatrix} \end{equation} The eigenvalues of the matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ are real and equal, the set of eigenvalues being $u_\bot, u_\bot, u_\bot, u_\bot$.
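The splitting can be checked componentwise; a minimal sketch that assembles the convection and pressure parts and their sum, the full normal flux (all names are ours):

```python
def zha_bilgen_split(rho, u, v, p, E, nx, ny):
    """Zha-Bilgen splitting of the 2-D normal flux; u_perp = u*nx + v*ny."""
    up = u * nx + v * ny
    Fc = [rho * up, rho * u * up, rho * v * up, rho * E * up]  # convection part
    Fp = [0.0, p * nx, p * ny, p * up]                         # pressure part
    F = [fc + fp for fc, fp in zip(Fc, Fp)]                    # full flux F = Fc + Fp
    return Fc, Fp, F
```

By construction the sum reproduces the full flux $[\rho u_\bot,\ \rho u u_\bot + p n_x,\ \rho v u_\bot + p n_y,\ \rho E u_\bot + p u_\bot]^t$ stated earlier.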
Analysis of $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$ shows that it is defective, with only three linearly independent (LI) eigenvectors, {\em i.e.}, \begin{equation} \boldsymbol{R}_{c,1}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ n_x \\[0.4em] \ u_\bot \\[0.4em] \ 0 \\[0.4em] \ 0 \end{bmatrix} \ \ \textrm{,} \ \boldsymbol{R}_{c,2}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ n_y \\[0.4em] \ 0 \\[0.4em] \ u_\bot \\[0.4em] \ 0 \end{bmatrix} \ \textrm{and} \ \boldsymbol{R}_{c,3}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 0 \\[0.4em] \ 0 \\[0.4em] \ 0 \\[0.4em] \ 1 \end{bmatrix} \end{equation} On evaluating the ranks of the matrices $(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u_\bot \boldsymbol{I_4})$, $(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u_\bot \boldsymbol{I_4})^{2} ...$, we find that there will be one Jordan block of order two, as $rank(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u_\bot \boldsymbol{I_4})^2 \ = \ rank(\boldsymbol{A}_{c}^{\boldsymbol{ZB}}- u_\bot \boldsymbol{I_4})^3$. Let $R(\boldsymbol{A})$ denote the space spanned by the columns of the matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}} -u_\bot \boldsymbol{I_4}$. Then, as explained in the 1-D case, we have \begin{equation} R(\boldsymbol{A}) \ = \ x_{1}\boldsymbol{A}_1 \ + \ x_{2}\boldsymbol{A}_2 \ + \ x_{3}\boldsymbol{A}_3 \ + \ x_4 \boldsymbol{A}_4 \end{equation} where $\boldsymbol{A}_1, \boldsymbol{A}_2, \boldsymbol{A}_3$ and $\boldsymbol{A}_4$ are the column vectors of $\boldsymbol{A_{c}^{ZB}} -u_\bot \boldsymbol{I_4}$.
Now \begin{equation} R(\boldsymbol{A}) \ = \ x_{1}\begin{bmatrix} \ -u_\bot \\[0.4em] \ -u u_\bot \\[0.4em] \ -v u_\bot \\[0.4em] \ -E u_\bot \ \end{bmatrix} \ + \ x_{2}\begin{bmatrix} \ n_x \\[0.4em] \ u n_x \\[0.4em] \ v n_x \\[0.4em] \ E n_x \ \end{bmatrix} \ + \ x_{3}\begin{bmatrix} \ n_y \\[0.4em] \ u n_y \\[0.4em] \ v n_y \\[0.4em] \ E n_y \ \end{bmatrix} \ + \ x_{4}\begin{bmatrix} \ 0 \\[0.4em] \ 0 \\[0.4em] \ 0 \\[0.4em] \ 0 \ \end{bmatrix} \end{equation} or \begin{equation} R(\boldsymbol{A}) \ = \ (-u_\bot x_{1} + n_x x_2 + n_y x_3) \begin{bmatrix} \ 1 \\[0.4em] \ u \\[0.4em] \ v \\[0.4em] \ E \end{bmatrix} \end{equation} The column vector $(1,u,v,E)^{t}$, which spans the range space $R(\boldsymbol{A})$, becomes a generalized eigenvector of $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$, as the multiplying factor $(-u_\bot x_{1} + n_x x_2 + n_y x_3)$ is just a scalar coefficient. Let us take $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1$, which is equal to \begin{equation} \begin{bmatrix} \ \ 0 && n_x && n_y && 0 \\[0.4em] \ -u u_\bot && u_\bot + u n_x && u n_y && 0 \\[0.4em] \ -v u_\bot && v n_x && u_\bot + v n_y && 0 \\[0.4em] \ -E u_\bot && E n_x && E n_y && u_\bot \end{bmatrix} \begin{bmatrix} \ 1 \\[0.4em] \ u \\[0.4em] \ v \\[0.4em] \ E \end{bmatrix} \end{equation} which, on solving, is equal to \begin{equation} u_\bot \begin{bmatrix} \ 1 \\[0.3em] \ u \\[0.3em] \ v \\[0.3em] \ E \end{bmatrix} \end{equation} Thus, $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1 \ = \ u_\bot \boldsymbol{X}_1 $ holds.
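The identity $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1 = u_\bot \boldsymbol{X}_1$ can also be verified numerically; a minimal sketch with arbitrary sample values (all numbers are illustrative, chosen so that $n_x^2 + n_y^2 = 1$):

```python
def convection_jacobian(u, v, E, nx, ny):
    """Convection flux Jacobian A_c^ZB of the Zha-Bilgen splitting."""
    up = u * nx + v * ny
    return [[0.0,     nx,          ny,          0.0],
            [-u * up, up + u * nx, u * ny,      0.0],
            [-v * up, v * nx,      up + v * ny, 0.0],
            [-E * up, E * nx,      E * ny,      up]]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

u, v, E, nx, ny = 2.0, 1.0, 5.0, 0.6, 0.8   # sample state and unit normal
up = u * nx + v * ny
X1 = [1.0, u, v, E]
AX1 = matvec(convection_jacobian(u, v, E, nx, ny), X1)
```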
Now, this generalized eigenvector is expected to form a Jordan chain of order two corresponding to the matrix $\boldsymbol{A}_{c}^{\boldsymbol{ZB}}$, {\em i.e.}, \begin{align} \begin{split} \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_1 \ &= \ u_\bot \boldsymbol{X}_1 \\ \boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{X}_2 \ &= \ u_\bot \boldsymbol{X}_{2} \ + \ \boldsymbol{X}_1 \end{split} \end{align} The other generalized eigenvector $\boldsymbol{X}_2$ can be found from the second relation and, in expanded form, it is written as \begin{equation} \begin{bmatrix} \ 0 & n_x & n_y & 0 \\[0.4em] \ -u u_\bot & u_\bot + u n_x & u n_y & 0 \\[0.4em] \ -v u_\bot & v n_x & u_\bot + v n_y & 0 \\[0.4em] \ -E u_\bot & E n_x & E n_y & u_\bot \end{bmatrix} \begin{bmatrix} \ x_{1} \\[0.4em] \ x_{2} \\[0.4em] \ x_{3} \\[0.4em] \ x_4 \end{bmatrix} \ = \ u_\bot \begin{bmatrix} \ x_{1} \\[0.4em] \ x_{2} \\[0.4em] \ x_{3} \\[0.4em] \ x_4 \end{bmatrix} \ + \ \begin{bmatrix} \ 1 \\[0.4em] \ u \\[0.4em] \ v \\[0.4em] \ E \end{bmatrix} \end{equation} Here, each $x_{i} \in {\rm I\!R}$, where $i$ runs from $1$ to $4$, and on solving all four simultaneous equations, we get \begin{equation} n_x x_2 \ + \ n_y x_3 \ = \ 1 \ + \ u_\bot x_1 \end{equation} which together with $(x_1, x_2, x_3, x_4)^t$ defines a generalized eigenvector. Now, if we take \begin{equation} \boldsymbol{P} \ = \ \begin{bmatrix} \ 1 && x_1 && n_x && n_y \\[0.4em] \ u && x_2 && u_\bot && 0 \\[0.4em] \ v && x_3 && 0 && u_\bot \\[0.4em] \ E && x_4 && 0 && 0 \end{bmatrix} \end{equation} then \begin{equation} \boldsymbol{P}^{-1}\boldsymbol{A}_{c}^{\boldsymbol{ZB}}\boldsymbol{P} \ = \ \ \begin{bmatrix} \ u_\bot && 1 && 0 && 0 \\[0.4em] \ 0 && u_\bot && 0 && 0 \\[0.4em] \ 0 && 0 && u_\bot && 0 \\[0.4em] \ 0 && 0 && 0 && u_\bot \end{bmatrix} \textrm{holds.} \end{equation} \\ Let $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ denote the Jacobian matrix corresponding to the pressure flux function $\boldsymbol{F}_{p}^{\boldsymbol{ZB}}$.
After a little algebra, $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ comes out equal to \begin{equation} \ (\gamma - 1)\ \begin{bmatrix} \ 0 && 0 && 0 && 0 \\[0.4em] \ \varTheta^{2} n_x && -n_x u && -n_x v && n_x \\[0.4em] \ \varTheta^{2} n_y && -n_y u && -n_y v && n_y \\[0.4em] \ \big(\varTheta^{2} - \varPhi^{2}\big) u_\bot && \varPhi^{2}n_x - u_\bot u && \varPhi^{2} n_y - u_\bot v && u_\bot \end{bmatrix} \end{equation} where we define \begin{align} \begin{split} \varTheta^{2} \ &= \ \dfrac{u^2 + v^2}{2} \ \ \textrm{and} \ \ \\ \varPhi^{2} \ &= \ \dfrac{a^2}{\gamma(\gamma -1)} \end{split} \end{align} The eigenvalues of the flux Jacobian matrix $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ are: \begin{equation} \lambda_{p,1}^{\boldsymbol{ZB}} \ = \ -\sqrt{\frac{\gamma - 1}{\gamma}} a, \ \lambda_{p,2}^{\boldsymbol{ZB}} \ = \ 0, \ \lambda_{p,3}^{\boldsymbol{ZB}} \ = \ 0, \ \lambda_{p,4}^{\boldsymbol{ZB}} \ = \ \sqrt{\frac{\gamma - 1}{\gamma}} a \end{equation} All eigenvalues are real and, although the zero eigenvalue is repeated, $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ possesses a full set of LI eigenvectors, given by: \begin{equation} \boldsymbol{R}_{p,1}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 0 \\[0.4em] \ n_x \\[0.4em] \ n_y \\[0.4em] \ u_\bot - \dfrac{a}{\sqrt{\gamma(\gamma - 1)}} \ \end{bmatrix} \ \ , \ \boldsymbol{R}_{p,2}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ u_\parallel \\[0.4em] \ u u_\parallel + \varTheta^{2} n_y \\[0.4em] \ v u_\parallel - \varTheta^{2} n_x \\[0.4em] \ 0 \ \end{bmatrix} \end{equation} \begin{equation} \boldsymbol{R}_{p,3}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 1 \\[0.4em] \ n_x u_\bot \\[0.4em] \ n_y u_\bot \\[0.4em] \ u_\bot^2 - \varTheta^{2} \ \end{bmatrix} \ \ , \ \boldsymbol{R}_{p,4}^{\boldsymbol{ZB}} \ = \ \begin{bmatrix} \ 0 \\[0.4em] \ n_x \\[0.4em] \ n_y \\[0.4em] \ u_\bot + \dfrac{a}{\sqrt{\gamma(\gamma - 1)}} \ \end{bmatrix} \end{equation} Now, both the convection and pressure fluxes at a cell-interface are calculated by using the following.
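One can check numerically that $\boldsymbol{R}_{p,1}^{\boldsymbol{ZB}}$ is an eigenvector of $\boldsymbol{A}_{p}^{\boldsymbol{ZB}}$ with eigenvalue $-\sqrt{(\gamma-1)/\gamma}\, a$; a minimal sketch (the sample state and normal are illustrative):

```python
import math

def pressure_jacobian(u, v, a, nx, ny, g=1.4):
    """Pressure flux Jacobian A_p^ZB of the Zha-Bilgen splitting."""
    up = u * nx + v * ny
    Th2 = 0.5 * (u * u + v * v)          # Theta^2
    Ph2 = a * a / (g * (g - 1.0))        # Phi^2
    rows = [[0.0, 0.0, 0.0, 0.0],
            [Th2 * nx, -nx * u, -nx * v, nx],
            [Th2 * ny, -ny * u, -ny * v, ny],
            [(Th2 - Ph2) * up, Ph2 * nx - up * u, Ph2 * ny - up * v, up]]
    return [[(g - 1.0) * c for c in row] for row in rows]

def matvec(A, x):
    return [sum(c * xi for c, xi in zip(row, x)) for row in A]

u, v, a, nx, ny, g = 2.0, 1.0, 3.0, 0.6, 0.8, 1.4
up = u * nx + v * ny
lam1 = -math.sqrt((g - 1.0) / g) * a
R1 = [0.0, nx, ny, up - a / math.sqrt(g * (g - 1.0))]
ApR1 = matvec(pressure_jacobian(u, v, a, nx, ny, g), R1)
```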
\begin{equation} \boldsymbol{F}_{c,I}^{\boldsymbol{ZB}} \ = \ \frac{1}{2} \big[\boldsymbol{F}_{c,L}^{\boldsymbol{ZB}} + \boldsymbol{F}_{c,R}^{\boldsymbol{ZB}}\big] - \frac{1}{2}\left|\bar{u}_\bot \right| \Delta{\boldsymbol{U}} \end{equation} where \begin{equation} \Delta{\boldsymbol{U}} \ = \ \begin{bmatrix} \rho_R - \rho_L \\[0.8em] \bar{\rho} \Delta{u} \ + \ \bar{u} \Delta{\rho} \\[0.8em] \bar{\rho} \Delta{v} \ + \ \bar{v} \Delta{\rho} \\[0.8em] \frac{1}{\gamma -1} \Delta{p} \ + \ \frac{1}{2} \big(\bar{u}^2 + \bar{v}^2\big)\Delta{\rho} \ + \ \bar{\rho} (\bar{u} \Delta{u} + \bar{v}\Delta{v}) \end{bmatrix} \end{equation} and \begin{equation} \boldsymbol{F}_{p,I}^{\boldsymbol{ZB}} \ = \ \frac{1}{2} \big[\boldsymbol{F}_{p,L}^{\boldsymbol{ZB}} + \boldsymbol{F}_{p,R}^{\boldsymbol{ZB}}\big] - \frac{1}{2} \sum_{i = 1}^{4} \bar{\alpha}_{p,i}^{\boldsymbol{ZB}}\left|\bar \lambda_{p,i}^{\boldsymbol{ZB}}\right| \boldsymbol{{\bar{R}}}_{p,i}^{\boldsymbol{ZB}} \end{equation} respectively. As in the 1-D case, the average quantities are defined as \begin{align} \begin{split} \bar{\rho} \ &= \ \sqrt{\rho_L \rho_R} \ , \ \bar{u} \ = \ \frac{\sqrt{\rho_L} u_L \ + \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \\ \bar{v} \ &= \ \frac{\sqrt{\rho_L} v_L \ + \ \sqrt{\rho_R} v_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \ , \ \bar{a}^2 \ = \ \frac{\sqrt{\rho_L} a_L^2 \ + \ \sqrt{\rho_R} a_R^2 }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \\ \bar{u}_\bot \ &= \ \bar{u} n_x + \bar{v} n_y \ \ \textrm{and} \ \ \bar{u}_\parallel \ = \ -\bar{u} n_y + \bar{v} n_x \end{split} \end{align} where $u_\parallel$ denotes the velocity component parallel to the cell-interface and is given by \begin{equation} u_\parallel \ = \ -u n_y \ + \ v n_x \end{equation} Similarly, the wave strengths $\bar{\alpha}_{p,i}^{\boldsymbol{ZB}}$, where $i$ runs from $1 \ \textrm{to} \ 4$, are given as \begin{align} \begin{split} \bar{\alpha}_{p,1}^{\boldsymbol{ZB}} \ &= \ \dfrac{\bar{\rho} \Delta{u_\bot}}{2} - \sqrt{\dfrac{\gamma}{\gamma
-1}} \dfrac{\Delta{p}}{2\bar{a}} \\ \bar{\alpha}_{p,2}^{\boldsymbol{ZB}} \ &= \ \dfrac{\bar{u}_\parallel \Delta{\rho} + \bar{\rho}\Delta{u_\parallel}}{\bar{\varTheta}^{2}-\bar{u}_\bot^2} \\ \bar{\alpha}_{p,3}^{\boldsymbol{ZB}} \ &= \ \Delta{\rho} \ - \ \dfrac{\bar{u}_\parallel^{2} \Delta{\rho} + \bar{\rho}\bar{u}_\parallel\Delta{u_\parallel}}{\bar{\varTheta}^{2}-\bar{u}_\bot^2} \\ \bar{\alpha}_{p,4}^{\boldsymbol{ZB}} \ &= \ \dfrac{\bar{\rho} \Delta{u_\bot}}{2} + \sqrt{\dfrac{\gamma}{\gamma -1}} \dfrac{\Delta{p}}{2\bar{a}} \end{split} \end{align} where \begin{align} \begin{split} \bar{\varTheta}^{2} \ &= \ \dfrac{\bar{u}^2 + \bar{v}^2}{2} \\ \Delta{\rho} \ &= \ \rho_{R} - \rho_{L} \\ \Delta{u_\bot} \ &= \ u_{\bot \ R} - u_{\bot \ L} \\ \Delta{u_\parallel} \ &= \ u_{\parallel \ R} - u_{\parallel \ L} \\ \Delta{p} \ &= \ p_R - p_L \end{split} \end{align} \subsection{Numerical examples} In this subsection, the ZBS-FDS scheme is tested on various well-established benchmark test problems. Special attention is given to problems with complex interactions of strong shocks, which lead to various shock instabilities. Many well-known upwind schemes are known to produce unphysical features in such cases \cite{Quirk}. \subsubsection{Oblique shock reflection} In this test case \cite{shock_ref_1982}, an oblique shock wave is introduced at the top left corner by means of initial conditions and post-shock boundary conditions, at the left and top sides of the domain, respectively. The computational domain considered for this test case is $[0,3] \times [0,1]$. The initial conditions for this test problem are as given below. \begin{equation*} \left( \rho, u, v, p \right)_{0,y,t}=\left( 1.0, 2.9, 0, 1/1.4 \right) \end{equation*} \begin{equation*} \left( \rho, u, v, p \right)_{x,1,t}=\left( 1.69997, 2.61934, -0.50633, 1.52819 \right) \end{equation*} The incident shock angle measured from the top side of the domain is $29^\circ$ and the free stream Mach number is $M=2.9$.
Wall boundary conditions are prescribed at the bottom boundary and supersonic outflow boundary conditions are used at the right side of the computational domain. Both first order and second order results on various grids are presented in Figure \ref{shock_ref_ZBS_FDS}. \begin{figure}[!ht] \begin{center} \begin{minipage}{0.48\linewidth} \centering (a) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ShockRefleResults/sr_1st_120_40-eps-converted-to-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (a) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ShockRefleResults/sr_2nd_120_40-eps-converted-to-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (b) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ShockRefleResults/sr_1st_240_80-eps-converted-to-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (b) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ShockRefleResults/sr_2nd_240_80-eps-converted-to-crop.pdf}} \end{minipage} \caption{First order results of the ZBS-FDS scheme are presented on the left and second order results on the right for the shock reflection problem; pressure contours (0.7: 0.1: 2.9) on the grids: (a) $120\times40$ and (b) $240\times80$} \label{shock_ref_ZBS_FDS} \end{center} \end{figure} \subsubsection{Supersonic flow across a compression ramp in a wind tunnel} The computational domain of $[0, 3] \times [0, 1]$ is considered for this test problem. Other geometrical features of the problem include a $15^\circ$ ramp at the lower part of the computational domain. In this two-dimensional steady test case~\cite{Levy_1993}, supersonic flow at Mach number $M=2$ encounters the fifteen-degree ramp to form an oblique shock wave. This shock wave reflects from the upper wall and interacts with the expansion wave generated at the tip of the ramp corner.
The weakened expansion wave again reflects from the top wall and interacts further with the second shock reflected from the ramp surface. Initial conditions are prescribed at the left boundary, wall boundary conditions are used at the top and on the ramp, and supersonic outflow boundary conditions are imposed at the exit boundary. Both first order and second order results are presented in Figure \ref{wedge_ZBS-FDS}. \begin{figure}[!htbp] \begin{center} \begin{minipage}{0.48\linewidth} \centering (a) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ramp/ramp_1st_120_40-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (a) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ramp/ramp_2nd_120_40-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (b) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ramp/ramp_1st_240_80-crop.pdf}} \end{minipage} \begin{minipage}{0.48\linewidth} \centering (b) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/ramp/ramp_2nd_240_80-crop.pdf}} \end{minipage} \caption{First order results of the ZBS-FDS scheme are given on the left and second order results on the right for the ramp reflection problem; pressure contours (1.1: 0.05: 3.8) on the grids: (a) $120\times40$ and (b) $240\times80$} \label{wedge_ZBS-FDS} \end{center} \end{figure} \subsubsection{Reflection of a plane shock from a wedge} This is a two-dimensional problem in which the reflection of a plane shock wave from a wedge lies in the double-Mach reflection regime; in this regime, some Riemann solvers are known to generate kinked Mach stems \cite{Quirk}. In this test case, the kinked Mach stem occurs when a strong normal shock wave moving at Mach $5.5$ encounters the $30^\circ$ ramp to form a Mach reflection, and it represents a typical shock instability phenomenon.
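For a normal shock advancing at Mach $M_s$ into a gas at rest, the post-shock state follows from the standard moving-shock (Rankine--Hugoniot) relations, which is how the left-of-shock state of this problem is obtained below. The following sketch is illustrative only; the function and variable names are ours, not part of the original implementation.

```python
import math

def moving_shock_state(Ms, rho1, p1, gamma=1.4):
    """Post-shock density, induced velocity and pressure behind a normal
    shock moving at Mach Ms into a gas at rest with state (rho1, u1=0, p1)."""
    a1 = math.sqrt(gamma * p1 / rho1)  # upstream speed of sound
    p2 = p1 * (1.0 + 2.0 * gamma / (gamma + 1.0) * (Ms ** 2 - 1.0))
    rho2 = rho1 * (gamma + 1.0) * Ms ** 2 / ((gamma - 1.0) * Ms ** 2 + 2.0)
    u2 = 2.0 * a1 / (gamma + 1.0) * (Ms - 1.0 / Ms)  # induced flow speed
    return rho2, u2, p2

# Pre-shock (right) state of this test case: rho = 1.4, p = 1, fluid at rest
rho2, u2, p2 = moving_shock_state(5.5, 1.4, 1.0)
```

For $M_s = 5.5$ this gives $\rho_2 \approx 7.209$, $u_2 \approx 4.432$ and $p_2 = 35.125$, consistent with initializing the domain left of the shock from the moving shock relations.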
Three shocks meet to form a triple point. The computational domain considered for this problem is $[0, 2.0] \times [0, 1.5]$, with the initial shock located at $x_0=0.25$. All computational results are obtained at time $t = 0.25$. The computational domain to the right of the shock is initialized with a fluid at rest, with density $1.4$ and pressure $1$. To the left of the shock, values obtained from the moving shock relations for Mach $5.5$ are used to initialize the domain. Figure \ref{wr} shows the density contours computed with the present scheme and no kink is observed. \begin{figure}[!htbp] \begin{center} \begin{minipage}{0.49\linewidth} \centering {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/wedge_reflection/rho_WR_1sto_400_400-crop.pdf}} \end{minipage} \begin{minipage}{0.49\linewidth} \centering {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/wedge_reflection/Wr_2nd_400_400-crop.pdf}} \end{minipage} \caption{First (left) and second (right) order results for the reflection of a plane shock from a wedge, computed with the ZBS-FDS scheme on a $400\times400$ grid} \label{wr} \end{center} \end{figure} \subsubsection{Hypersonic flow past a half-cylinder} The hypersonic flow around a half-cylinder is also a well-known test case to examine the capability of numerical methods in resolving complex flow features accurately without producing shock instabilities. In this case, the shock instability is known as the `carbuncle shock', first reported by Peery and Imlay \cite{Peery_carbuncle}. This test is computed for Mach $6$ and Mach $20$ flows on grids that are coarse and fine in the circumferential direction, and the results are given in Figure \ref{half_cylinder_ZBS}. Many Riemann solvers generate carbuncle shocks \cite{Quirk,Kim_et_al} in their numerical solutions. As an example, we present results of the Roe scheme, in which the carbuncle phenomenon can be seen clearly. The present method does not exhibit any such phenomenon.
\begin{figure}[!htbp] \begin{center} \begin{minipage}{0.18\linewidth} \centering (a) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/half_cylinder/Hc_1st_M6_45_45-crop.pdf}} \end{minipage} \begin{minipage}{0.18\linewidth} \centering (b) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/half_cylinder/Hc_1st_M6_20_320-crop.pdf}} \end{minipage} \begin{minipage}{0.24\linewidth} \centering Roe {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/half_cylinder/hc_1st_45_45_m6.pdf}} \end{minipage} \begin{minipage}{0.18\linewidth} \centering (c) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/half_cylinder/Hc_1st_M20_45_45-crop.pdf}} \end{minipage} \begin{minipage}{0.18\linewidth} \centering (d) {\includegraphics[trim=0.0cm 0.0cm 0cm 0.0cm, clip, width=\textwidth]{Results/half_cylinder/Hc_1st_M20_20_320-crop.pdf}} \end{minipage} \caption{First order results of the ZBS-FDS scheme for the half-cylinder problem; density contours (2.0: 0.2: 5.0): (a) Mach $6$ on a $45\times45$ grid, (b) Mach $6$ on a $20\times320$ grid, (c) Mach $20$ on a $45\times45$ grid and (d) Mach $20$ on a $20\times320$ grid} \label{half_cylinder_ZBS} \end{center} \end{figure} \section{Summary} In this study, we attempted to develop flux difference split upwind schemes for convection-pressure splitting frameworks, based on Jordan canonical forms to avoid defective matrices. An FDS solver for Liou and Steffen type splitting is not attractive, as its pressure subsystem has no contribution from acoustic signals. The newly constructed ZBS-FDS and TVS-FDS schemes are tested on various benchmark problems and do not require any entropy fix for sonic point and strong expansion problems. The ZBS-FDS and TVS-FDS schemes perform similarly, as is evident from several 1-D test cases.
We further extend the ZBS-FDS scheme to the 2-D Euler system, specifically to those test problems for which several Riemann solvers generate shock instabilities \cite{Quirk,Mandal,Huang_Wu_Yan,Shen_Yan}. The performance of the ZBS-FDS scheme on these test cases is impressive and deserves further research. For example, the accuracy of the ZBS-FDS scheme in multiple dimensions can be further improved using a diffusion regulator such as that of \cite{DR}, and another possible direction is to pursue genuinely multi-dimensional modeling, as the eigenvalues of the convection and pressure parts of the fluxes neatly reflect uni-directional and multi-directional information propagation, respectively. \section{Acknowledgments} The authors thank Prof. Michael Junk, Fachbereich Mathematik und Statistik, Universit\"at Konstanz, Germany for very useful discussions. The authors also thank Indian Institute of Science for supporting this research.
\section{Introduction} \label{sec:intro} Deep neural networks (DNNs) have had a significant impact on our lives in the twenty-first century---from advancing scientific discovery \cite{karpatne2017theory} to improving the quality of life \cite{sharma2019iot,sundaravadivel2018smart,naylor2018prospects}. DNNs are trained using traditional learning algorithms like Backpropagation on conventional computing platforms using CPUs and GPUs. While DNNs trained using this approach have low error and can be trained in a reasonable amount of time, they consume large amounts of memory and power. This will not be sustainable in the post Moore's law era, where a significant portion of deep learning tasks will be ported to edge computing systems \cite{aimone2019neural}. Edge computing systems would form an indispensable part of the entire computation fabric and support critical applications like the Internet of Things (IoT), autonomous vehicles, embedded systems, etc. \cite{mailhiot2018energy}. Therefore, it is important to train DNNs that are tailored for such systems and have the following three desirable characteristics: \emph{low error}, \emph{low memory}, and \emph{low power}. While low power could be achieved using neuromorphic computing systems \cite{schuman2017survey}, we focus on achieving low error and low memory in this work. Our work in this paper can potentially be extended to neuromorphic systems to achieve these desirable characteristics. In order to achieve low memory, we focus on deep learning models whose learning parameters are constrained to have a set of finite discrete values, for example, binary or ternary values. By constraining the values of the learning parameters, we significantly reduce the memory required to store them.
For instance, a learning parameter constrained to have ternary values ($-1$, $0$, $+1$) can be stored using just $2$ bits, as opposed to a double precision floating point learning parameter used in traditional learning algorithms, which requires $64$ bits. To train DNNs with constrained learning parameters, we propose a novel training algorithm called the Combinatorial Neural Network Training Algorithm (CoNNTrA) in Section \ref{sec:conntra}. Our objective is to demonstrate that CoNNTrA can train deep learning models consuming significantly less memory, yet achieving errors at par with Backpropagation. In Section \ref{sec:performance-evaluation}, we use CoNNTrA to train deep learning models for three machine learning benchmark data sets (MNIST, Iris and ImageNet). We compare the performance of CoNNTrA to that of Backpropagation along four performance metrics: training error, validation error, memory usage and training time. Our results indicate that CoNNTrA models have errors at par with Backpropagation, and consume $32\times$ less memory. \section{Related Work} \label{sec:related} Deep learning models with binary learning parameters have been proposed in the literature for several use cases. Courbariaux et al. propose BinaryConnect, which can train binary neural networks for specialized hardware and test their approach on MNIST, CIFAR-10 and SVHN data sets \cite{courbariaux2015binaryconnect}. Rastegari et al. propose XNOR-Net, which can train binary convolutional neural networks (CNN), test their algorithm on the ImageNet data set, and report $32\times$ savings in memory and $58\times$ faster convolutional operations \cite{rastegari2016xnor}. Wan et al. 
propose Ternary Binary Network (TBN), having ternary inputs and binary learning parameters, for edge computing devices like portable devices and wearable devices, test their approach on ImageNet and PASCAL VOC data sets and achieve $32\times$ memory savings and $40\times$ faster convolutional operations \cite{wan2018tbn}. Andri et al. propose YodaNN, a hardware accelerator for BinaryConnect CNNs, and obtain high power efficiency \cite{andri2016yodann}. In addition to binary and ternary neural networks, several approaches have been proposed in the literature to train quantized neural networks. Hubara et al. propose a method to train quantized neural networks having low precision weights and test their approach on the MNIST, CIFAR-10, SVHN and ImageNet data sets \cite{hubara2017quantized}. Zhou et al. propose a mechanism of iterative optimizations for training quantized neural networks and test their approach on AlexNet, GoogLeNet and ResNet \cite{zhou2017iqnn}. Blott et al. describe an end-to-end deep learning framework for exploration and training of quantized neural networks that can optimize for a given platform, design target or specific precision \cite{blott2018finn}. Choi et al. propose a mechanism for parameterized clipping activation for quantized neural networks that enables training with low precision weights \cite{choi2018pact}. We have previously shown that training deep neural networks with constrained learning parameters is an NP-complete problem \cite{date2019combinatorial}. To address this problem, several evolutionary optimization-based approaches have been pursued in the literature. Shen et al. propose an evolutionary optimization-based learning mechanism that finds binary neural networks by searching through the entire search space of learning parameters \cite{shen2019searching}. Too et al.
use a binary particle swarm optimization for feature extraction and compare their approach to other evolutionary optimization-based approaches that leverage the genetic algorithm, the binary gravitational search algorithm and the competitive binary grey wolf optimizer \cite{too2019new}. Nogami et al. use a combination of a genetic algorithm and simulated annealing to optimize the quantization bin boundaries for CNNs and test their approach on the ImageNet data set using AlexNet and VGG16 \cite{nogami2019optimizing}. There are several limitations to the approaches proposed in the literature, especially with regard to low error, low memory and low power edge computing systems. While the algorithms to train binary or ternary neural networks have the potential to be deployed on edge computing systems, they are very specialized and cannot be used directly to train neural networks with an arbitrary set of finite discrete learning parameters. On the other hand, while evolutionary optimization-based methods produce accurate models for quantized neural networks, they cannot be deployed on edge computing systems because evolutionary optimization is a compute-heavy process and may require large compute clusters. Moreover, most of the approaches proposed in the literature cater to a specific deep learning model, for example, convolutional neural networks, and it is unclear if they are useful for other deep learning models such as recurrent neural networks or generative adversarial networks. In this work, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which is not restricted to any particular neural network architecture, has an efficient (polynomial) time complexity, and does not necessarily require a significant amount of compute power. CoNNTrA is not restricted to binary or ternary learning parameters specifically, but can train any configuration of learning parameters as long as they are finite and discrete.
We believe CoNNTrA would be able to train deep neural networks having low error, low memory and low power for edge computing systems in the post Moore's law era, especially when combined with neuromorphic computing systems, which are known to be resilient and energy efficient \cite{schuman2020resilience}, and have a wide range of applications such as graph algorithms \cite{hamilton2020spike,kay2020neuromorphic}, modeling epidemics \cite{hamilton2020modeling} and predicting supercomputer failures \cite{date2018efficient}. \section{The DNN Training Problem} \label{sec:snn-training-problem} We define the DNN training problem using the following notation: \begin{itemize} \item $\mathbb{R}$, $\mathbb{N}$, $\mathbb{B}$: Set of real numbers, natural numbers and binary numbers ($\mathbb{B} = \{0,1\}$) respectively. \item $\mathbb{T}$: Ternary set $\mathbb{T} = \{-1, 0, +1\}$. \item $\omega$: Set of finite discrete values that the learning parameters $W$ can take, for example, if learning parameters are required to have binary values, $\omega = \mathbb{B}$. \item $N$: Number of points in the training dataset. \item $d$: Dimension of each point in training dataset, which is the same as number of features in the training dataset. \item $k$: Number of classes for classification. \item $X$: $X$ can be a scalar, vector, matrix or tensor containing training data. \item $Y$: $Y$ contains the labels of training data encoded in a one-hot format. Since we have $N$ data points and $k$ classes, $Y \in \mathbb{B}^{N \times k}$. \item $W$: Set of all learning parameters, including all the weights and biases. \item $g(X, W)$: The DNN learning function. \item $e(P, Y)$: The error function which computes the error between predicted labels $P$ and ground truth labels $Y$. \end{itemize} Given training data $X$ and training labels $Y$, we would like to learn the parameters $W$ of the learning function $g(X, W)$ by minimizing the error $e(P, Y)$. 
In this regard, the DNN training problem with finite discrete weights is defined as follows: \begin{align} \underset{W}{\min} \quad e(P, Y) \label{eq:snn-training} \end{align} where, $P = g(X, W)$ are the labels predicted by the learning function $g$; Each learning parameter in the set $W$ can take values from the finite discrete set $\omega$. \section{DNN Training with Constrained Learning Parameters is NP-Hard} \label{sec:np-hard} We show that under the Euclidean error function, training a single layer neural network with binary weights is NP-Hard by reducing the quadratic unconstrained binary optimization (QUBO) problem, which is known to be NP-Hard \cite{wang2009analyzing,boros2007local,pardalos1992complexity}, to the DNN training problem with finite discrete weights. We first define the QUBO problem: \begin{align} \text{The QUBO Problem:} \quad \min_{z \in \mathbb{B}^d} z^T A z + z^T b + c \label{eq:qubo} \end{align} where, $A$ is a real symmetric positive definite $d \times d$ matrix, $b$ is a $d$-dimensional vector, and $c$ is a real scalar. Note that if $A$ is not symmetric, it can be made symmetric by setting $a_{ij} = \frac{a_{ij} + a_{ji}}{2} \quad \forall i \ne j$, without changing the QUBO problem. It is known that even when $A$ is positive definite, the QUBO problem is NP-Hard \cite{kochenberger2014unconstrained}. The DNN training problem with binary weights and Euclidean error function is defined as follows: \begin{align} \min_{W \in \mathbb{B}^d} e(P,Y) = \frac{1}{N} || P - Y ||^2_2 \label{eq:binary-snn-training} \end{align} where, $P = g(X, W)$ is the vector of values predicted by the learning function $g(X, W) = X^T W$, $X \in \mathbb{R}^{N \times d}$, $Y \in \mathbb{R}^N$, $W \in \mathbb{B}^d$. 
After expanding Equation \ref{eq:binary-snn-training}, the SNN Training problem becomes: \begin{align} \min_{W \in \mathbb{B}^d} \frac{1}{N} (W^T X^T X W - 2 W^T X^T Y + Y^T Y) \label{eq:binary-snn-training-expanded} \end{align} Given the optimal solution, we can compute the objective function value in polynomial time, so the problem is in NP. Also, Equation \ref{eq:binary-snn-training-expanded} is very similar to Equation \ref{eq:qubo}, in that both are quadratic minimization problems with binary variables. In order to reduce the QUBO problem to Binary SNN Training, we first decompose the real symmetric positive definite QUBO matrix $A$ into a product of a unique lower triangular matrix with real positive diagonal entries $L$ and its transpose $L^T$ using the Cholesky decomposition: \begin{align} A \xrightarrow[]{\text{CHOLESKY}} L L^T \end{align} Because $L$ is a lower triangular matrix with real positive diagonal entries, $L^{-1}$ exists. The reduction is performed as follows: \begin{itemize} \item $W = z$ \item $\frac{X^T X}{N} = A = L L^T $ \\ Therefore, $X = \sqrt{N} L^T $ and $X^T = \sqrt{N} L $ \item $\frac{-2 X^T Y}{N} = b$ \\ Therefore, $Y = -\frac{\sqrt{N}}{2} L^{-1} b$ \item Because both QUBO and Binary SNN Training are unconstrained optimization problems, the scalars $c$ in QUBO and $\frac{1}{N}Y^T Y$ in Binary SNN Training do not affect the optimal solution. In order to equate the scalars in both problems, we can introduce another scalar $c'$ in Binary SNN Training so that $\frac{Y^T Y + c'}{N} = c$ without changing the optimal solution. \end{itemize} By setting $W = z$, $X = \sqrt{N} L^T$ and $Y = -\frac{\sqrt{N}}{2} L^{-1} b$, we have reduced the QUBO problem to Binary SNN Training problem, thus showing that Binary SNN Training problem is NP-Hard. 
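To make the reduction concrete, it can be checked numerically on a small instance: construct $X$ and $Y$ from a QUBO instance $(A, b, c)$ via the Cholesky factor as above, and verify that for every binary assignment the two objectives differ only by the constant terms. The sketch below is purely illustrative (the instance is arbitrary) and is not part of the proof.

```python
import itertools
import numpy as np

# Small QUBO instance: A symmetric positive definite, b a vector, c a scalar
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 0.5])
c = 3.0
N, d = 4, 2  # any N > 0 works; it cancels in the reduction

# Reduction: A = L L^T, X = sqrt(N) L^T, Y = -(sqrt(N)/2) L^{-1} b
L = np.linalg.cholesky(A)
X = np.sqrt(N) * L.T
Y = -0.5 * np.sqrt(N) * np.linalg.solve(L, b)

for z in itertools.product([0, 1], repeat=d):
    w = np.array(z, dtype=float)
    qubo = w @ A @ w + w @ b + c
    # Training objective (1/N)||Xw - Y||^2 differs from QUBO only by constants
    train = np.sum((X @ w - Y) ** 2) / N
    assert np.isclose(qubo - c, train - (Y @ Y) / N)
```

Since $X^T X / N = A$ and $-2 X^T Y / N = b$ by construction, minimizing the training objective over $W \in \mathbb{B}^d$ minimizes the QUBO objective over $z$.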
With more complex error functions like softmax and complex neural network architectures like deep convolutional or recurrent neural networks, the SNN training problem is at least as hard as the QUBO problem, if not more. \section{Combinatorial Neural Network Training Algorithm (CoNNTrA)} \label{sec:conntra} \begin{algorithm}[t!] \caption{Discretization Subroutine for CoNNTrA} \label{algo:discretize} \SetAlgoLined \SetKwProg{Fn}{Function}{:}{} \SetKwFunction{discretize}{Discretize} \SetKwFunction{sort}{Sort} \SetKwFunction{zeros}{Zeros} \Fn{\discretize{$W_{pre}$, $\omega$}}{ \KwIn{\\ $W_{pre}$: Pretrained Weights \\ $\omega$: Set of Finite Discrete Values } \BlankLine \BlankLine \KwOut{ \\ $W$: Discretized Weights } \BlankLine \BlankLine Weights: $W =$ \zeros{$|W_{pre}|$}\; $\omega =$ \sort{$\omega$}\; \For{$i = 1$ \KwTo $|W_{pre}|$}{ \For{$j = 1$ \KwTo $|\omega|$}{ \If{ $j == |\omega|$}{ $W[i] = \omega[j]$; } \ElseIf{$W_{pre}[i] \le \frac{1}{2} (\omega[j] + \omega[j+1])$}{ $W[i] = \omega[j]$\; \textbf{break}\; } } } \BlankLine \BlankLine \Return $W$ } \end{algorithm} \begin{algorithm}[t!] 
\caption{CoNNTrA: Combinatorial Neural Network Training Algorithm} \label{algo:CoNNTrA} \SetAlgoLined \SetKwProg{Fn}{Function}{:}{} \SetKwFunction{conntra}{CoNNTrA} \SetKwFunction{randint}{RandomInteger} \SetKwFunction{discretize}{Discretize} \Fn{\conntra{$X$, $Y$, $W_{pre}$, $\omega$, $g(X,W)$, $e(P, Y)$}}{ \KwIn{\\ $X$: Training Data \\ $Y \in \mathbb{B}^{N \times k}$: Training Labels (One-Hot Format) \\ $W_{pre}$: Pretrained Weights (Reshaped into a Single 1-Dimensional Array) \\ $\omega$: Set of Finite Discrete Values \\ $g(X, W)$: Spiking Neural Network Function \\ $e(P,Y)$: Error Function } \BlankLine \BlankLine \KwOut{ \\ $W_{opt}$: Optimal Weights \\ $\epsilon_{opt}$: Optimal Error } \BlankLine \BlankLine \tcc{PHASE 1: DISCRETIZATION} Weights: $W = $ \discretize{$W_{pre}$, $\omega$} \BlankLine \BlankLine \tcc{PHASE 2: INITIALIZATION} Error: $\epsilon = e(g(X, W), Y)$\; Optimal Weights: $W_{opt} = W$\; Optimal Error: $\epsilon_{opt} = \epsilon$\; Number of training iterations: $T$\; \BlankLine \BlankLine \tcc{PHASE 3: TRAINING} \For{$t = 1$ \KwTo $T$}{ \For{$i = 1$ \KwTo $|W|$}{ $i^{'} =$ \randint{$|W|$}\; \For{$j = 1$ \KwTo $|\omega|$}{ $W[i^{'}] = \omega[j]$\; $\epsilon = e(g(X, W), Y)$\; \If{$\epsilon \le \epsilon_{opt}$}{ $\epsilon_{opt} = \epsilon$\; $W_{opt} = W$\; } } $W[i^{'}] = W_{opt}[i^{'}]$\; } } \BlankLine \BlankLine \Return $W_{opt}, \epsilon_{opt}$ } \end{algorithm} We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which is a coordinate gradient descent based algorithm for training DNNs with finite discrete weights. CoNNTrA is presented in Algorithm \ref{algo:discretize} and Algorithm \ref{algo:CoNNTrA}. Say $w_i = \omega_j$ for some $i$ and $j$. We first look at the left and right gradients of the error function. 
\begin{align} & \text{Left Gradient:} \nonumber \\ & \qquad \frac{\partial e}{\partial w_i} \Bigg|_{\text{left}} = \lim_{h \rightarrow 0} \frac{e(w_i) - e(w_i - h)}{h} \\ & \text{Right Gradient:} \nonumber \\ & \qquad \frac{\partial e}{\partial w_i} \Bigg|_{\text{right}} = \lim_{h \rightarrow 0} \frac{e(w_i + h) - e(w_i)}{h} \end{align} These gradients could be used if $w_i$ could take continuous values. Since $w_i$ cannot take continuous values, $h$ never tends to $0$ in the above equations, but is some finite number greater than $0$. So, we look at the discrete counterparts of gradients: \begin{align} &\text{Left Discrete Gradient:} \nonumber \\ &\qquad \frac{\Delta e}{\Delta w_i} \Bigg|_{\text{left}} = \frac{e(w_i = \omega_j) - e(w_i = \omega_{j-1})}{\omega_j - \omega_{j-1}} \\ &\text{Right Discrete Gradient:} \nonumber \\ &\qquad \frac{\Delta e}{\Delta w_i} \Bigg|_{\text{right}} = \frac{e(w_i = \omega_{j+1}) - e(w_i = \omega_j)}{\omega_{j+1} - \omega_j} \end{align} These discrete counterparts of gradients search in the discrete vicinity of $w_i$ to find a value that lowers the error. This is a local search, and makes up to three calls to the error function, i.e. $e(w_i = \omega_{j-1})$, $e(w_i = \omega_j)$ and $e(w_i = \omega_{j+1})$. We extend this notion of local search and do a global search, i.e. search through all possible values of $w_i$ to find the best value that minimizes the error function. This makes $\mathcal{O}(|\omega|)$ calls to the error function. In this case, we have a better chance of finding a lower value of error function at each iteration. When we do a similar procedure for all the weights, we iteratively find better weight values that decrease the error function gradually as training progresses. CoNNTrA takes as inputs the training data $X$, the training labels $Y$, pretrained weights $W$, set of finite discrete values that the weights can take $\omega$, the SNN function $g(X, W)$ and the error function $e(P, Y)$. 
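The per-weight global search described above can be sketched as a small routine; here `error` stands in for $e(g(X, W), Y)$ and all names are illustrative rather than taken from the reference implementation.

```python
def global_search(w, i, omega, error):
    """Try every value in omega for weight w[i]; keep the best.

    This mirrors CoNNTrA's inner loop: O(|omega|) error evaluations per weight.
    """
    best_val, best_err = w[i], error(w)
    for v in omega:
        w[i] = v
        err = error(w)
        if err <= best_err:
            best_val, best_err = v, err
    w[i] = best_val
    return best_err

# Toy usage: minimize a separable quadratic error over ternary weights
omega = [-1, 0, 1]
w = [0, 0]
target = [1, -1]
error = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
global_search(w, 0, omega, error)
global_search(w, 1, omega, error)
```

In this toy example, two sweeps drive the error to zero with $w = [1, -1]$.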
Initial weights are the weights obtained when the DNN was trained using Backpropagation by relaxing the finite discrete value constraint. The first step is to discretize the weights using Algorithm \ref{algo:discretize}. In Algorithm \ref{algo:discretize}, we set all the weights in the vicinity of $\omega_j$ to $\omega_j$. For example, if $\omega = \{-1, 0, +1\}$, then for some $i$, if the pretrained weight $w_i > 0.5$, it would be set to $+1$, if $-0.5 < w_i \le 0.5$, it would be set to $0$, and if $w_i \le -0.5$, it would be set to $-1$. We start by initializing the weights $W$ to an array of zeros using the function \texttt{Zeros($x$)}, which returns a zero initialized array of length $x$, in line 2 of Algorithm \ref{algo:discretize}. Next, we sort $\omega$ so that all values in $\omega$ are in increasing order in line 3. Next, in the for loop from line 4 through 14, we iterate over each weight in $W$, and in the for loop from line 5 through 13, we iterate over each value in $\omega$ to find an appropriate discretized value for each weight. The discretized weight value is assigned to the appropriate weight on either line 7 or 10. In the second phase of the algorithm, i.e. the initialization phase, we first compute the error using the discretized weights $W$ and assign it to the initial error $\epsilon$ on line 3. Next, we initialize the optimal weights $W_{opt}$ and optimal error $\epsilon_{opt}$, by setting them to $W$ and $\epsilon$ on lines 4 and 5 respectively. We then define the number of training iterations $T$. In the third phase of the algorithm, i.e. the training phase, we iterate over $T$ training iterations in lines 7--20. During each iteration, we perform a global search over $|W|$ randomly selected weights in lines 8--19. We refer to each random selection of weights as an epoch---so, there are a total of $T \times |W|$ training epochs. 
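The discretization rule of Algorithm \ref{algo:discretize} described above---map each pretrained weight to a value in $\omega$ using the midpoints between consecutive values as thresholds---can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
def discretize(w_pre, omega):
    """Snap each pretrained weight to a value in omega (midpoint thresholds)."""
    omega = sorted(omega)
    w = []
    for wp in w_pre:
        for j, oj in enumerate(omega):
            # The last value in omega catches everything above the final midpoint
            if j == len(omega) - 1 or wp <= 0.5 * (oj + omega[j + 1]):
                w.append(oj)
                break
    return w

# Ternary example from the text: thresholds fall at -0.5 and +0.5
assert discretize([0.7, 0.2, -0.6], [-1, 0, 1]) == [1, 0, -1]
```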
During each training epoch, we first randomly select a weight index $i^{'}$ using the function \texttt{RandomInteger($x$)}, which returns a uniform random integer in the interval $[1, x]$. For $w_{i^{'}}$, we perform a global search over all possible values of $w_{i^{'}}$ to find the best value that minimizes the error function in lines 10--17 of Algorithm \ref{algo:CoNNTrA}. If a better value for $w_{i^{'}}$ is found, we update the optimal error $\epsilon_{opt}$ and optimal weights $W_{opt}$ in lines 14 and 15 respectively. After a global search is performed for $w_{i^{'}}$, we set the current weights $W$ to the optimal weights $W_{opt}$ in line 18, so that in the subsequent epochs, we use the current best set of weights. Finally, after all the training epochs are completed, we return $W_{opt}$ and $\epsilon_{opt}$ in line 21 of Algorithm \ref{algo:CoNNTrA}. \subsection{Time Complexity} \label{sub:time-complexity} We analyze the running time of CoNNTrA by going over the running time of each line in Algorithm \ref{algo:discretize} and Algorithm \ref{algo:CoNNTrA}. In the first phase, initializing the weights (line 2 of Algorithm \ref{algo:discretize}) takes $\mathcal{O}(|W|)$ time, and sorting $\omega$ takes $\mathcal{O}(|\omega| \log |\omega|)$ time. Next, the two for loops in lines 4 through 14 of Algorithm \ref{algo:discretize} take up $\mathcal{O}(|W| \cdot |\omega|)$ time. So the running time of discretization phase is $\mathcal{O}(|W| \cdot |\omega|)$. We assume that time taken to do a forward pass on the SNN (i.e. computing $P = g(X, W)$) and computing the error $e(P, Y)$ takes $\tau = \mathcal{O}(e(g(X, W), Y))$ amount of time. Therefore, it takes $\mathcal{O}(\tau)$ time to compute the error on line 3 of Algorithm \ref{algo:CoNNTrA}. It takes $\mathcal{O}(|W|)$ time to initialize $W_{opt}$ on line 4 of Algorithm \ref{algo:CoNNTrA}. Lines 5 and 6 take $\mathcal{O}(1)$ time. So, the initialization phase takes $\mathcal{O}(\tau + |W|)$ time. 
In the training phase, the for loop from lines 7 through 20 in Algorithm \ref{algo:CoNNTrA} runs $T$ times. The for loop from lines 8 through 19 runs $|W|$ times and the for loop from lines 10 through 17 runs $|\omega|$ times. It takes $\mathcal{O}(\tau)$ time to compute the error $\epsilon$ on line 14. Therefore, the training phase takes $\mathcal{O}(T \cdot |W| \cdot |\omega| \cdot \tau)$ time. Since this dominates the running time of all phases, the running time for CoNNTrA is $\mathcal{O}(T \cdot |W| \cdot |\omega| \cdot \tau)$. Since $\tau$ is usually a polynomial time expression in the number of weights and the size of the training dataset, CoNNTrA is a polynomial time algorithm. \subsection{Convergence} \label{sub:convergence} During each epoch in the training phase, we update the optimal weights $W_{opt}$ and optimal error $\epsilon_{opt}$ only if the current error $\epsilon$ is lower than $\epsilon_{opt}$. Thus, $\epsilon_{opt}$ is non-increasing across epochs and bounded below by zero, which guarantees convergence. If CoNNTrA is run for a sufficient number of epochs, the optimal error converges to a local minimum. \section{Performance Evaluation} \label{sec:performance-evaluation} We compare the performance of CoNNTrA (Algorithm \ref{algo:CoNNTrA}) to traditional Backpropagation run on a GPU on four benchmark problems: MNIST using a logistic regression classifier, MNIST using a convolutional neural network (CNN), Iris using a deep neural network (DNN), and ImageNet using a convolutional neural network (CNN). The performance metrics used for this comparison are: \begin{enumerate}[topsep=0pt, partopsep=0pt, itemsep=0pt, parsep=0pt] \item Training Error: Percentage of data points classified incorrectly in the training dataset. \item Validation Error: Percentage of data points classified incorrectly in the validation dataset. \item Memory Usage (kilobytes): Amount of memory used to store the weights. \item Training Time (seconds): Total time taken to complete training.
\end{enumerate} CoNNTrA was written in Python using the Numpy library \cite{van2011numpy}. The Backpropagation algorithm was run using the TensorFlow library \cite{abadi2016tensorflow} on GPUs. All experiments were run on a machine with 32 cores of two-way multi-threaded Intel Xeon CPUs running at 2.60 GHz, three NVIDIA GPUs (GeForce GTX 1080 Titan, GeForce GTX 950 and GeForce GTX 670), 112 GB DIMM Synchronous RAM, 32 KB L1 cache, 256 KB L2 cache and 20 MB L3 cache. \subsubsection{MNIST Logistic Regression} \begin{figure}[t!] \centering \includegraphics[trim={100 0 100 0},clip,scale=0.3]{Slide1.png} \caption{Schematic diagram of MNIST logistic regression model} \label{fig:mnist-logreg-model} \end{figure} \begin{table}[t!] \centering \caption{Performance metrics for MNIST logistic regression} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 6.25 & 7.59 \\ Validation Error (\%) & 7.34 & 8.44 \\ \textbf{Memory Usage (kilobytes)} & \textbf{62.8} & \textbf{1.96} \\ Training Time (seconds) & 81.70 & 236.12 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:mnist-logreg-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-logistic-training-error.png} \caption{Training Error Comparison} \label{fig:mnist-logreg-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-logistic-validation-error.png} \caption{Validation Error Comparison} \label{fig:mnist-logreg-validation-error} \end{subfigure} \caption{Error comparison for MNIST logistic regression models} \label{fig:mnist-logreg-errors} \end{figure} We used a logistic regression model to classify the MNIST images.
The inputs to the logistic regression model were vectorized MNIST images, each of size $784 \times 1$. The outputs of the model were the labels of the input images encoded in a one-hot format. The model consisted of a weight matrix of size $784 \times 10$ and a bias vector of size $10 \times 1$. A schematic diagram of the logistic regression model is shown in Figure \ref{fig:mnist-logreg-model}. The activation function for this model was softmax and the loss was computed using the cross entropy loss function. Table \ref{tab:mnist-logreg-comparison} shows the performance metrics of Backpropagation and CoNNTrA for the MNIST task using a logistic regression classifier. The training errors for Backpropagation and CoNNTrA are $6.25\%$ and $7.59\%$ respectively, and the validation errors are $7.34\%$ and $8.44\%$ respectively. The memory usage for Backpropagation and CoNNTrA is $62.8$ and $1.96$ kilobytes respectively. While Backpropagation takes $81.70$ seconds to complete training, CoNNTrA takes $236.12$ seconds. Figure \ref{fig:mnist-logreg-errors} shows the plot of training and validation errors for CoNNTrA (red) and Backpropagation (blue). These errors were computed as the percentage of misclassified points in the training and validation datasets respectively. The X-axis in Figures \ref{fig:mnist-logreg-training-error} and \ref{fig:mnist-logreg-validation-error} shows the percentage of training completed. The Y-axis shows the classification errors as a percentage. As training progresses, both algorithms converge to the same ballpark of $6-8\%$ error, which corresponds to an accuracy of $92-94\%$. \subsubsection{MNIST CNN} \begin{table}[t!]
\centering \caption{Performance metrics for MNIST CNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 1.40 & 2.60 \\ Validation Error (\%) & 1.56 & 2.39 \\ \textbf{Memory Usage (kilobytes)} & \textbf{649.55} & \textbf{20.30} \\ Training Time (seconds) & 121.97 & 4,871.04 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:mnist-cnn-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-cnn-training-error.png} \caption{Training Error Comparison (MNIST CNN)} \label{fig:mnist-cnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-cnn-validation-error.png} \caption{Validation Error Comparison (MNIST CNN)} \label{fig:mnist-cnn-validation-error} \end{subfigure} \caption{Error comparison for MNIST CNN models} \label{fig:mnist-cnn-errors} \end{figure} We use the LeNet architecture proposed by LeCun et al. \cite{lecun1998gradient}. The training results for MNIST CNN models are shown in Table \ref{tab:mnist-cnn-comparison} and Figure \ref{fig:mnist-cnn-errors}. The training and validation errors for Backpropagation are $1.40\%$ and $1.56\%$ respectively, and those for CoNNTrA are $2.60\%$ and $2.39\%$ respectively. The memory usage for Backpropagation is $649.55$ kilobytes, while that for CoNNTrA is $20.30$ kilobytes. While Backpropagation takes $121.97$ seconds, CoNNTrA takes $4,871.04$ seconds to complete training. Figure \ref{fig:mnist-cnn-errors} shows the training and validation errors for Backpropagation (blue) and CoNNTrA (red). The final training and validation errors obtained by both models are around $1-3\%$, which is the state of the art for the LeNet CNN, and corresponds to an accuracy of $97-99\%$. 
The convergence behavior of CoNNTrA is interesting. The error decreases slowly until just over $80\%$ of training is completed, after which it drops rapidly and converges to $2.60\%$ in Figure \ref{fig:mnist-cnn-training-error} and $2.39\%$ in Figure \ref{fig:mnist-cnn-validation-error}. We attribute this behavior to the following reason. At every training epoch in CoNNTrA, we pick a weight at random and perform a global search across all possible values to find a value that yields the smallest possible error. At the point where the error started decreasing rapidly, a weight was picked which had a high impact on the classification error. When a global search was performed for this weight, it drastically improved the error and transformed the neural network function in such a way that there was abundant room to improve the error for subsequently selected weights. \subsubsection{Iris} \begin{figure}[t!] \centering \includegraphics[trim={50 0 70 0}, scale=0.29]{Slide3.png} \caption{Schematic diagram of Iris multi-layer perceptron} \label{fig:iris-mlp-model} \end{figure} \begin{table}[t!] \centering \caption{Performance metrics for Iris DNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} \textbf{Training Error (\%)} & \textbf{1.67} & \textbf{1.67} \\ \textbf{Validation Error (\%)} & \textbf{3.33} & \textbf{3.33} \\ \textbf{Memory Usage (kilobytes)} & \textbf{1.88} & \textbf{0.06} \\ Training Time (seconds) & 4.56 & 4.92 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:iris-dnn-comparison} \end{table} \begin{figure}[t!]
\centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{iris-dnn-training-error.png} \caption{Training Error Comparison (Iris DNN)} \label{fig:iris-dnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{iris-dnn-validation-error.png} \caption{Validation Error Comparison (Iris DNN)} \label{fig:iris-dnn-validation-error} \end{subfigure} \caption{Error comparison for Iris DNN models} \label{fig:iris-dnn-errors} \end{figure} We use a three-layer multi-layer perceptron model with two hidden layers for this classification task. Figure \ref{fig:iris-mlp-model} shows a schematic diagram of the multi-layer perceptron model. Each neuron in the hidden layers is indexed using a superscript and a subscript. The superscript indicates the layer index and the subscript indicates the neuron index within that layer. Table \ref{tab:iris-dnn-comparison} shows the performance metrics for the Iris DNN models. Both models achieved the same training and validation errors, i.e. $1.67\%$ and $3.33\%$ respectively. The memory usage for Backpropagation was $1.88$ kilobytes, while that for CoNNTrA was $0.06$ kilobytes. The training time for Backpropagation was $4.56$ seconds and that for CoNNTrA was $4.92$ seconds. Figure \ref{fig:iris-dnn-errors} shows the plot of training and validation errors for Backpropagation (in blue) and CoNNTrA (in red). The training errors for both algorithms follow each other closely and converge to $1.67\%$. The validation error for CoNNTrA initially varies abruptly, then begins to converge at around the $15\%$ mark, with sporadic spikes. The validation errors for both algorithms converge to $3.33\%$. \subsubsection{ImageNet} \begin{table}[t!]
\centering \caption{Performance metrics for ImageNet CNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.08\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 15.12 & 16.98 \\ \textbf{Validation Error (\%)} & \textbf{18.62} & \textbf{18.50} \\ \textbf{Memory Usage (kilobytes)} & \textbf{499,026.75} & \textbf{15,594.59} \\ Training Time (seconds) & 388,764.43 & 647,249.96 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:imagenet-cnn-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{imagenet-cnn-training-error.png} \caption{Training Error Comparison (ImageNet CNN)} \label{fig:imagenet-cnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{imagenet-cnn-validation-error.png} \caption{Validation Error Comparison (ImageNet CNN)} \label{fig:imagenet-cnn-validation-error} \end{subfigure} \caption{Error comparison for ImageNet CNN models} \label{fig:imagenet-cnn-errors} \end{figure} We use the AlexNet CNN proposed by Krizhevsky et al. \cite{krizhevsky2012imagenet}. Table \ref{tab:imagenet-cnn-comparison} shows the performance metrics. While Backpropagation takes $388,764.43$ seconds, CoNNTrA takes $647,249.96$ seconds to complete training. The training errors for Backpropagation and CoNNTrA are $15.12\%$ and $16.98\%$ respectively, and the validation errors are $18.62\%$ and $18.50\%$ respectively. All errors computed for the ImageNet CNN model are top-5 errors. The memory used by Backpropagation is $499,026.75$ kilobytes, and that used by CoNNTrA is $15,594.59$ kilobytes. Figure \ref{fig:imagenet-cnn-errors} shows the training and validation errors for training the ImageNet CNN model using Backpropagation (blue) and CoNNTrA (red).
Figure \ref{fig:imagenet-cnn-errors} shows a consistent trend: both models converge to around $15-17\%$ training error and $18-19\%$ validation error, which are in the same ballpark as the state of the art errors for the AlexNet model. \subsection{Discussion} \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{training-time-bar.png} \caption{Training Time} \label{fig:training-time-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{training-error-bar.png} \caption{Training Error} \label{fig:training-error-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{validation-error-bar.png} \caption{Validation Error} \label{fig:validation-error-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{memory-bar.png} \caption{Memory Usage} \label{fig:memory-bar} \end{subfigure} \caption{Comparison of Backpropagation and CoNNTrA} \label{fig:consolidated-results} \end{figure} Figure \ref{fig:consolidated-results} presents the performance of Backpropagation and CoNNTrA in a consolidated fashion. The X-axis shows datasets and models, and the Y-axis shows the performance metric. Blue and red bars denote Backpropagation and CoNNTrA performance metrics respectively. In Figure \ref{fig:training-time-bar}, we observe that CoNNTrA takes more time to complete training for all tasks. This is because we used a serialized implementation of CoNNTrA, as our objective was to demonstrate a proof of concept that CoNNTrA is able to train models with accuracies at par with Backpropagation. With parallel implementations of CoNNTrA, we expect the training times to decrease significantly.
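As a sketch of what such a parallel implementation could look like (hypothetical; it assumes the error evaluations for different candidate values of a weight are independent, which holds because the remaining weights are fixed during the global search):

```python
# Hypothetical parallelization of CoNNTrA's global search: with all other
# weights fixed, the error for each candidate value of the selected weight
# is independent, so the |omega| evaluations can run concurrently.
from concurrent.futures import ThreadPoolExecutor

def candidate_error(W, i, v):
    trial = list(W)
    trial[i] = v
    # stand-in for e(g(X, W), Y); a real run would do a forward pass here
    return sum(w * w for w in trial)

def parallel_global_search(W, i, omega, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        errors = list(pool.map(lambda v: candidate_error(W, i, v), omega))
    best = min(range(len(omega)), key=errors.__getitem__)
    return omega[best]

print(parallel_global_search([1, -1, 1], 0, [-1, 0, 1]))  # 0 minimizes the toy error
```

Since each epoch evaluates $|\omega|$ candidates, this scheme can cut the per-epoch wall-clock time by up to a factor of $|\omega|$, with further parallelism available inside each forward pass.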
In Figures \ref{fig:training-error-bar} and \ref{fig:validation-error-bar}, the training and validation errors for Backpropagation and CoNNTrA are within the same ballpark and are at par with the state of the art error values for the respective models. The CoNNTrA errors are slightly higher than the Backpropagation errors for all models except Iris DNN. Although the model architectures are identical for both Backpropagation and CoNNTrA, the CoNNTrA weights were ternary whereas Backpropagation used double precision floating point weights. Going from double precision to ternary weights reduces the expressive power of the model, which explains the slightly higher training and validation errors for CoNNTrA. In Figure \ref{fig:memory-bar}, the memory usage for CoNNTrA is about $32\times$ lower than that of Backpropagation for all models. It requires $2$ bits to store each ternary-valued CoNNTrA weight, and $64$ bits to store each double precision floating point Backpropagation weight. A $32\times$ reduction in memory usage is extremely significant in edge computing applications, especially embedded systems, Internet of Things and autonomous vehicles. \section{Conclusion} \label{sec:conclusion} Edge computing systems in applications like Internet of Things (IoT), autonomous vehicles and embedded systems in the post Moore's law era will require machine learning models that not only produce low error and train fast, but also consume low memory and power. While traditional learning algorithms like Backpropagation can train deep learning models in a reasonable amount of time and obtain low error, they consume significant amounts of memory and power. In this work, we propose a novel learning algorithm called the Combinatorial Neural Network Training Algorithm (CoNNTrA), which is a coordinate gradient descent-based algorithm that can train deep learning models with constrained learning parameters, for example, binary or ternary values.
The objective of this study was to demonstrate that CoNNTrA can train deep learning models with constrained learning parameters, which yield errors at par with the Backpropagation models, \emph{and} consume significantly less memory. We presented CoNNTrA in Section \ref{sec:conntra} along with its theoretical underpinnings and complexity analysis. In Section \ref{sec:performance-evaluation}, we used CoNNTrA to train deep learning models for three machine learning benchmark data sets (MNIST, Iris and ImageNet). We demonstrated that CoNNTrA can train these models with errors in the same ballpark as the Backpropagation models. More importantly, we showed that the CoNNTrA models consume $32\times$ less memory than the Backpropagation models. In our future work, we would like to implement CoNNTrA in an efficient parallelized fashion to improve the training times. We believe that such a parallel implementation of CoNNTrA would be able to train deep learning models that are not just accurate and consume orders of magnitude less memory than Backpropagation, but can also be trained efficiently. This would be invaluable for training machine learning and deep learning models in the post Moore's law era, especially for edge computing systems supporting critical applications. We would also like to study the applicability of CoNNTrA for solving other NP-complete problems like the traveling salesman problem, protein folding, and genetic imputation. \bibliographystyle{IEEEtran} \section{Introduction} \label{sec:intro} Deep neural networks (DNNs) have had a significant impact on our lives in the twenty first century---from advancing scientific discovery \cite{karpatne2017theory} to improving the quality of life \cite{sharma2019iot,sundaravadivel2018smart,naylor2018prospects}. DNNs are trained using traditional learning algorithms like Backpropagation on conventional computing platforms using CPUs and GPUs.
While DNNs trained using this approach have low error and can be trained in a reasonable amount of time, they consume large amounts of memory and power. This will not be sustainable in the post Moore's law era, where a significant portion of deep learning tasks will be ported to edge computing systems \cite{aimone2019neural}. Edge computing systems would form an indispensable part of the entire computation fabric and support critical applications like Internet of Things (IoT), autonomous vehicles, and embedded systems \cite{mailhiot2018energy}. Therefore, it is important to train DNNs that are tailored for such systems and have the following three desirable characteristics: \emph{low error}, \emph{low memory}, and \emph{low power}. While low power could be achieved using neuromorphic computing systems \cite{schuman2017survey}, we focus on achieving low error and low memory in this work. Our work in this paper can potentially be extended to neuromorphic systems to achieve these desirable characteristics. In order to achieve low memory, we focus on deep learning models where learning parameters are constrained to have a set of finite discrete values, for example, binary or ternary values. By constraining the values of learning parameters, we significantly reduce the memory required to store them. For instance, a learning parameter constrained to have ternary values ($-1$, $0$, $+1$) can be stored using just $2$ bits, as opposed to a double precision floating point learning parameter used in traditional learning algorithms, which requires $64$ bits. To train DNNs with constrained learning parameters, we propose a novel training algorithm called the Combinatorial Neural Network Training Algorithm (CoNNTrA) in Section \ref{sec:conntra}. Our objective is to demonstrate that CoNNTrA can train deep learning models consuming significantly less memory, yet achieving errors at par with Backpropagation.
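As an illustration of this storage claim, four $2$-bit ternary weights fit in a single byte, a $64/2 = 32\times$ reduction over double precision; a minimal packing sketch (the particular $2$-bit encoding below is a hypothetical choice, any injective code works):

```python
# Storing ternary weights at 2 bits each (four weights per byte) versus
# 64 bits for a double precision float: a 64/2 = 32x reduction.
CODE = {-1: 0b00, 0: 0b01, 1: 0b10}
DECODE = {v: k for k, v in CODE.items()}

def pack_ternary(weights):
    data = bytearray((len(weights) + 3) // 4)
    for i, w in enumerate(weights):
        data[i // 4] |= CODE[w] << (2 * (i % 4))
    return bytes(data)

def unpack_ternary(data, n):
    return [DECODE[(data[i // 4] >> (2 * (i % 4))) & 0b11] for i in range(n)]

weights = [-1, 0, 1, 1, 0, -1]
packed = pack_ternary(weights)
assert unpack_ternary(packed, len(weights)) == weights
print(len(packed), "bytes packed vs", 8 * len(weights), "bytes as float64")
```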
In Section \ref{sec:performance-evaluation}, we use CoNNTrA to train deep learning models for three machine learning benchmark data sets (MNIST, Iris and ImageNet). We compare the performance of CoNNTrA to that of Backpropagation along four performance metrics: training error, validation error, memory usage and training time. Our results indicate that CoNNTrA models have errors at par with Backpropagation, and consume $32\times$ less memory. \section{Related Work} \label{sec:related} Deep learning models with binary learning parameters have been proposed in the literature for several use cases. Courbariaux et al. propose BinaryConnect, which can train binary neural networks for specialized hardware and test their approach on MNIST, CIFAR-10 and SVHN data sets \cite{courbariaux2015binaryconnect}. Rastegari et al. propose XNOR-Net, which can train binary convolutional neural networks (CNN), test their algorithm on the ImageNet data set, and report $32\times$ savings in memory and $58\times$ faster convolutional operations \cite{rastegari2016xnor}. Wan et al. propose Ternary Binary Network (TBN), having ternary inputs and binary learning parameters, for edge computing devices like portable devices and wearable devices, test their approach on ImageNet and PASCAL VOC data sets and achieve $32\times$ memory savings and $40\times$ faster convolutional operations \cite{wan2018tbn}. Andri et al. propose YodaNN, a hardware accelerator for BinaryConnect CNNs and obtain high power efficiency \cite{andri2016yodann}. In addition to binary and ternary neural networks, several approaches have been proposed in the literature to train quantized neural networks. Hubara et al. propose a method to train quantized neural networks having low precision weights and test their approach on the MNIST, CIFAR-10, SVHN and ImageNet data sets \cite{hubara2017quantized}. Zhou et al. 
propose a mechanism for iterative optimizations for training quantized neural networks and test their approach on AlexNet, GoogLeNet and ResNet \cite{zhou2017iqnn}. Blott et al. describe an end-to-end deep learning framework for exploration and training of quantized neural networks that can optimize for a given platform, design target or specific precision \cite{blott2018finn}. Choi et al. propose a mechanism for parameterized clipping activation for quantized neural networks that enables training with low precision weights \cite{choi2018pact}. We have previously shown that training deep neural networks with constrained learning parameters is an NP-complete problem \cite{date2019combinatorial}. To address this problem, several evolutionary optimization-based approaches have been pursued in the literature. Shen et al. propose an evolutionary optimization-based learning mechanism that finds binary neural networks by searching through the entire search space of learning parameters \cite{shen2019searching}. Too et al. use a binary particle swarm optimization for feature extraction and compare their approach to other evolutionary optimization-based approaches that leverage genetic algorithm, binary gravitational search algorithm and competitive binary grey wolf optimizer \cite{too2019new}. Nogami et al. use a combination of genetic algorithm and simulated annealing to optimize the bin boundaries of quantization for CNN and test their approach on the ImageNet data set using AlexNet and VGG16 \cite{nogami2019optimizing}. There are several limitations of the approaches proposed in the literature, especially with regard to low error, low memory and low power edge computing systems. While the algorithms to train binary or ternary neural networks have the potential to be deployed on edge computing systems, they are very specialized and cannot be used to train neural networks with a set of finite discrete learning parameters directly.
On the other hand, while evolutionary optimization-based methods produce accurate models for quantized neural networks, they cannot be deployed on edge computing systems because evolutionary optimization is a compute-heavy process and may require large compute clusters. Moreover, most of the approaches proposed in the literature cater to a specific deep learning model, for example, convolutional neural networks, and it is unclear if they are useful for other deep learning models such as recurrent neural networks or generative adversarial networks. In this work, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which is not restricted to any particular neural network architecture, has an efficient time complexity (polynomial), and does not necessarily require significant amounts of compute power. CoNNTrA is not restricted to binary or ternary learning parameters specifically, but can train any configuration of learning parameters as long as they are finite and discrete. We believe CoNNTrA would be able to train deep neural networks having low error, low memory and low power for edge computing systems in the post Moore's law era, especially when combined with neuromorphic computing systems, which are known to be resilient and energy efficient \cite{schuman2020resilience}, and have a wide range of applications such as graph algorithms \cite{hamilton2020spike,kay2020neuromorphic}, modeling epidemics \cite{hamilton2020modeling} and predicting supercomputer failures \cite{date2018efficient}. \section{The DNN Training Problem} \label{sec:snn-training-problem} We define the DNN training problem using the following notation: \begin{itemize} \item $\mathbb{R}$, $\mathbb{N}$, $\mathbb{B}$: Set of real numbers, natural numbers and binary numbers ($\mathbb{B} = \{0,1\}$) respectively. \item $\mathbb{T}$: Ternary set $\mathbb{T} = \{-1, 0, +1\}$.
\item $\omega$: Set of finite discrete values that the learning parameters $W$ can take, for example, if learning parameters are required to have binary values, $\omega = \mathbb{B}$. \item $N$: Number of points in the training dataset. \item $d$: Dimension of each point in the training dataset, which is the same as the number of features in the training dataset. \item $k$: Number of classes for classification. \item $X$: $X$ can be a scalar, vector, matrix or tensor containing the training data. \item $Y$: $Y$ contains the labels of the training data encoded in a one-hot format. Since we have $N$ data points and $k$ classes, $Y \in \mathbb{B}^{N \times k}$. \item $W$: Set of all learning parameters, including all the weights and biases. \item $g(X, W)$: The DNN learning function. \item $e(P, Y)$: The error function which computes the error between predicted labels $P$ and ground truth labels $Y$. \end{itemize} Given training data $X$ and training labels $Y$, we would like to learn the parameters $W$ of the learning function $g(X, W)$ by minimizing the error $e(P, Y)$. In this regard, the DNN training problem with finite discrete weights is defined as follows: \begin{align} \underset{W}{\min} \quad e(P, Y) \label{eq:snn-training} \end{align} where, $P = g(X, W)$ are the labels predicted by the learning function $g$, and each learning parameter in the set $W$ can take values from the finite discrete set $\omega$. \section{DNN Training with Constrained Learning Parameters is NP-Hard} \label{sec:np-hard} We show that under the Euclidean error function, training a single layer neural network with binary weights is NP-Hard by reducing the quadratic unconstrained binary optimization (QUBO) problem, which is known to be NP-Hard \cite{wang2009analyzing,boros2007local,pardalos1992complexity}, to the DNN training problem with finite discrete weights.
We first define the QUBO problem: \begin{align} \text{The QUBO Problem:} \quad \min_{z \in \mathbb{B}^d} z^T A z + z^T b + c \label{eq:qubo} \end{align} where, $A$ is a real symmetric positive definite $d \times d$ matrix, $b$ is a $d$-dimensional vector, and $c$ is a real scalar. Note that if $A$ is not symmetric, it can be made symmetric by setting $a_{ij} = \frac{a_{ij} + a_{ji}}{2} \quad \forall i \ne j$, without changing the QUBO problem. It is known that even when $A$ is positive definite, the QUBO problem is NP-Hard \cite{kochenberger2014unconstrained}. The DNN training problem with binary weights and Euclidean error function is defined as follows: \begin{align} \min_{W \in \mathbb{B}^d} e(P,Y) = \frac{1}{N} || P - Y ||^2_2 \label{eq:binary-snn-training} \end{align} where, $P = g(X, W)$ is the vector of values predicted by the learning function $g(X, W) = X W$, $X \in \mathbb{R}^{N \times d}$, $Y \in \mathbb{R}^N$, $W \in \mathbb{B}^d$. After expanding Equation \ref{eq:binary-snn-training}, the SNN Training problem becomes: \begin{align} \min_{W \in \mathbb{B}^d} \frac{1}{N} (W^T X^T X W - 2 W^T X^T Y + Y^T Y) \label{eq:binary-snn-training-expanded} \end{align} Given a candidate solution, we can compute the objective function value in polynomial time, so the problem is in NP. Also, Equation \ref{eq:binary-snn-training-expanded} is very similar to Equation \ref{eq:qubo}, in that both are quadratic minimization problems with binary variables. In order to reduce the QUBO problem to Binary SNN Training, we first decompose the real symmetric positive definite QUBO matrix $A$ into a product of a unique lower triangular matrix with real positive diagonal entries $L$ and its transpose $L^T$ using the Cholesky decomposition: \begin{align} A \xrightarrow[]{\text{CHOLESKY}} L L^T \end{align} Because $L$ is a lower triangular matrix with real positive diagonal entries, $L^{-1}$ exists.
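The symmetrization and Cholesky steps can be checked numerically with NumPy on a toy matrix (the values are illustrative only):

```python
# Numerical check of the symmetrization and Cholesky steps: A is made
# symmetric, factored as A = L L^T, and the mapping X = sqrt(N) L^T is
# verified to satisfy X^T X / N = A (toy 2x2 matrix; values illustrative).
import numpy as np

A_raw = np.array([[4.0, 1.0],
                  [3.0, 5.0]])
A = (A_raw + A_raw.T) / 2              # a_ij <- (a_ij + a_ji) / 2
L = np.linalg.cholesky(A)              # lower triangular, positive diagonal
assert np.allclose(L @ L.T, A)
assert np.all(np.diag(L) > 0)          # so L^{-1} exists
N = 4
X = np.sqrt(N) * L.T
assert np.allclose(X.T @ X / N, A)
print("Cholesky-based reduction check passed")
```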
The reduction is performed as follows: \begin{itemize} \item $W = z$ \item $\frac{X^T X}{N} = A = L L^T $ \\ Therefore, $X = \sqrt{N} L^T $ and $X^T = \sqrt{N} L $ \item $\frac{-2 X^T Y}{N} = b$ \\ Therefore, $Y = -\frac{\sqrt{N}}{2} L^{-1} b$ \item Because both QUBO and Binary SNN Training are unconstrained optimization problems, the scalars $c$ in QUBO and $\frac{1}{N}Y^T Y$ in Binary SNN Training do not affect the optimal solution. In order to equate the scalars in both problems, we can introduce another scalar $c'$ in Binary SNN Training so that $\frac{Y^T Y + c'}{N} = c$ without changing the optimal solution. \end{itemize} By setting $W = z$, $X = \sqrt{N} L^T$ and $Y = -\frac{\sqrt{N}}{2} L^{-1} b$, we have reduced the QUBO problem to the Binary SNN Training problem, thus showing that the Binary SNN Training problem is NP-Hard. With more complex error functions like softmax and complex neural network architectures like deep convolutional or recurrent neural networks, the SNN training problem is at least as hard as the QUBO problem, if not harder. \section{Combinatorial Neural Network Training Algorithm (CoNNTrA)} \label{sec:conntra} \begin{algorithm}[t!] \caption{Discretization Subroutine for CoNNTrA} \label{algo:discretize} \SetAlgoLined \SetKwProg{Fn}{Function}{:}{} \SetKwFunction{discretize}{Discretize} \SetKwFunction{sort}{Sort} \SetKwFunction{zeros}{Zeros} \Fn{\discretize{$W_{pre}$, $\omega$}}{ \KwIn{\\ $W_{pre}$: Pretrained Weights \\ $\omega$: Set of Finite Discrete Values } \BlankLine \BlankLine \KwOut{ \\ $W$: Discretized Weights } \BlankLine \BlankLine Weights: $W =$ \zeros{$|W_{pre}|$}\; $\omega =$ \sort{$\omega$}\; \For{$i = 1$ \KwTo $|W_{pre}|$}{ \For{$j = 1$ \KwTo $|\omega|$}{ \If{ $j == |\omega|$}{ $W[i] = \omega[j]$; } \ElseIf{$W_{pre}[i] \le \frac{1}{2} (\omega[j] + \omega[j+1])$}{ $W[i] = \omega[j]$\; \textbf{break}\; } } } \BlankLine \BlankLine \Return $W$ } \end{algorithm} \begin{algorithm}[t!]
\caption{CoNNTrA: Combinatorial Neural Network Training Algorithm} \label{algo:CoNNTrA} \SetAlgoLined \SetKwProg{Fn}{Function}{:}{} \SetKwFunction{conntra}{CoNNTrA} \SetKwFunction{randint}{RandomInteger} \SetKwFunction{discretize}{Discretize} \Fn{\conntra{$X$, $Y$, $W_{pre}$, $\omega$, $g(X,W)$, $e(P, Y)$}}{ \KwIn{\\ $X$: Training Data \\ $Y \in \mathbb{B}^{N \times k}$: Training Labels (One-Hot Format) \\ $W_{pre}$: Pretrained Weights (Reshaped into a Single 1-Dimensional Array) \\ $\omega$: Set of Finite Discrete Values \\ $g(X, W)$: Spiking Neural Network Function \\ $e(P,Y)$: Error Function } \BlankLine \BlankLine \KwOut{ \\ $W_{opt}$: Optimal Weights \\ $\epsilon_{opt}$: Optimal Error } \BlankLine \BlankLine \tcc{PHASE 1: DISCRETIZATION} Weights: $W = $ \discretize{$W_{pre}$, $\omega$} \BlankLine \BlankLine \tcc{PHASE 2: INITIALIZATION} Error: $\epsilon = e(g(X, W), Y)$\; Optimal Weights: $W_{opt} = W$\; Optimal Error: $\epsilon_{opt} = \epsilon$\; Number of training iterations: $T$\; \BlankLine \BlankLine \tcc{PHASE 3: TRAINING} \For{$t = 1$ \KwTo $T$}{ \For{$i = 1$ \KwTo $|W|$}{ $i^{'} =$ \randint{$|W|$}\; \For{$j = 1$ \KwTo $|\omega|$}{ $W[i^{'}] = \omega[j]$\; $\epsilon = e(g(X, W), Y)$\; \If{$\epsilon \le \epsilon_{opt}$}{ $\epsilon_{opt} = \epsilon$\; $W_{opt} = W$\; } } $W[i^{'}] = W_{opt}[i^{'}]$\; } } \BlankLine \BlankLine \Return $W_{opt}, \epsilon_{opt}$ } \end{algorithm} We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which is a coordinate gradient descent based algorithm for training DNNs with finite discrete weights. CoNNTrA is presented in Algorithm \ref{algo:discretize} and Algorithm \ref{algo:CoNNTrA}. Say $w_i = \omega_j$ for some $i$ and $j$. We first look at the left and right gradients of the error function. 
\begin{align} & \text{Left Gradient:} \nonumber \\ & \qquad \frac{\partial e}{\partial w_i} \Bigg|_{\text{left}} = \lim_{h \rightarrow 0} \frac{e(w_i) - e(w_i - h)}{h} \\ & \text{Right Gradient:} \nonumber \\ & \qquad \frac{\partial e}{\partial w_i} \Bigg|_{\text{right}} = \lim_{h \rightarrow 0} \frac{e(w_i + h) - e(w_i)}{h} \end{align} These gradients could be used if $w_i$ could take continuous values. Since $w_i$ cannot take continuous values, $h$ never tends to $0$ in the above equations, but is some finite number greater than $0$. So, we look at the discrete counterparts of gradients: \begin{align} &\text{Left Discrete Gradient:} \nonumber \\ &\qquad \frac{\Delta e}{\Delta w_i} \Bigg|_{\text{left}} = \frac{e(w_i = \omega_j) - e(w_i = \omega_{j-1})}{\omega_j - \omega_{j-1}} \\ &\text{Right Discrete Gradient:} \nonumber \\ &\qquad \frac{\Delta e}{\Delta w_i} \Bigg|_{\text{right}} = \frac{e(w_i = \omega_{j+1}) - e(w_i = \omega_j)}{\omega_{j+1} - \omega_j} \end{align} These discrete counterparts of gradients search in the discrete vicinity of $w_i$ to find a value that lowers the error. This is a local search, and makes up to three calls to the error function, i.e. $e(w_i = \omega_{j-1})$, $e(w_i = \omega_j)$ and $e(w_i = \omega_{j+1})$. We extend this notion of local search and do a global search, i.e. search through all possible values of $w_i$ to find the best value that minimizes the error function. This makes $\mathcal{O}(|\omega|)$ calls to the error function. In this case, we have a better chance of finding a lower value of the error function at each iteration. When we repeat this procedure for all the weights, we iteratively find better weight values that decrease the error function gradually as training progresses. CoNNTrA takes as inputs the training data $X$, the training labels $Y$, pretrained weights $W$, the set of finite discrete values that the weights can take $\omega$, the SNN function $g(X, W)$ and the error function $e(P, Y)$.
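The difference between the neighbor-only local search suggested by the discrete gradients and the global search used by CoNNTrA can be sketched on a toy error function (illustrative only):

```python
# Local search via discrete gradients inspects at most the two neighbors
# of the current value (up to three error calls), while CoNNTrA's global
# search evaluates all |omega| candidates.
def global_search(error, omega):
    return min(omega, key=error)

def local_search(error, omega, j):
    lo, hi = max(j - 1, 0), min(j + 1, len(omega) - 1)
    return min(omega[lo:hi + 1], key=error)

omega = [-1, 0, 1]
error = lambda v: (v - 1) ** 2          # toy error, minimized at v = 1
print(global_search(error, omega))      # 1
print(local_search(error, omega, 0))    # 0: a local step can stop short
```

Starting from $\omega_1 = -1$, the local search only reaches the neighbor $0$, while the global search finds the true minimizer $+1$ in a single pass.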
The initial weights are those obtained by training the DNN using Backpropagation with the finite discrete value constraint relaxed. The first step is to discretize these weights using Algorithm \ref{algo:discretize}, which sets each weight in the vicinity of $\omega_j$ to $\omega_j$. For example, if $\omega = \{-1, 0, +1\}$, then for some $i$, if the pretrained weight $w_i > 0.5$, it is set to $+1$; if $-0.5 < w_i \le 0.5$, it is set to $0$; and if $w_i \le -0.5$, it is set to $-1$. We start by initializing the weights $W$ to an array of zeros using the function \texttt{Zeros($x$)}, which returns a zero-initialized array of length $x$, in line 2 of Algorithm \ref{algo:discretize}. Next, in line 3, we sort $\omega$ into increasing order. Then, in the for loop spanning lines 4 through 14, we iterate over each weight in $W$, and in the for loop spanning lines 5 through 13, we iterate over each value in $\omega$ to find an appropriate discretized value for each weight. The discretized value is assigned to the corresponding weight on either line 7 or line 10. In the second phase of the algorithm, i.e. the initialization phase, we first compute the error using the discretized weights $W$ and assign it to the initial error $\epsilon$ on line 3. Next, we initialize the optimal weights $W_{opt}$ and the optimal error $\epsilon_{opt}$ by setting them to $W$ and $\epsilon$ on lines 4 and 5, respectively. We then define the number of training iterations $T$. In the third phase, i.e. the training phase, we iterate over $T$ training iterations in lines 7--20. During each iteration, we perform a global search over $|W|$ randomly selected weights in lines 8--19. We refer to each random selection of a weight as an epoch---so there are a total of $T \times |W|$ training epochs.
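The midpoint-based discretization described above can be written compactly with NumPy; this vectorized sketch is illustrative only and replaces the explicit loops of Algorithm \ref{algo:discretize} with \texttt{searchsorted}:

```python
import numpy as np

def discretize(w_pre, omega):
    """Snap each pretrained weight to a value of the sorted discrete
    set omega, using interval midpoints as thresholds, e.g. for
    omega = {-1, 0, +1}: w > 0.5 -> +1, -0.5 < w <= 0.5 -> 0,
    w <= -0.5 -> -1."""
    omega = np.sort(np.asarray(omega, dtype=float))
    mid = (omega[:-1] + omega[1:]) / 2.0   # decision boundaries between values
    return omega[np.searchsorted(mid, w_pre, side="left")]
```

With `side="left"`, a weight exactly on a boundary is assigned to the lower interval, matching the $w_i \le 0.5 \to 0$ convention above.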
During each training epoch, we first randomly select a weight index $i^{'}$ using the function \texttt{RandomInteger($x$)}, which returns a uniform random integer in the interval $[1, x]$. For $w_{i^{'}}$, we perform a global search over all possible values of $w_{i^{'}}$ to find the best value that minimizes the error function in lines 10--17 of Algorithm \ref{algo:CoNNTrA}. If a better value for $w_{i^{'}}$ is found, we update the optimal error $\epsilon_{opt}$ and optimal weights $W_{opt}$ in lines 14 and 15 respectively. After the global search for $w_{i^{'}}$ is complete, we set the current weights $W$ to the optimal weights $W_{opt}$ in line 18, so that subsequent epochs use the current best set of weights. Finally, after all training epochs are completed, we return $W_{opt}$ and $\epsilon_{opt}$ in line 21 of Algorithm \ref{algo:CoNNTrA}. \subsection{Time Complexity} \label{sub:time-complexity} We analyze the running time of CoNNTrA by examining each line of Algorithm \ref{algo:discretize} and Algorithm \ref{algo:CoNNTrA}. In the first phase, initializing the weights (line 2 of Algorithm \ref{algo:discretize}) takes $\mathcal{O}(|W|)$ time, and sorting $\omega$ takes $\mathcal{O}(|\omega| \log |\omega|)$ time. Next, the two for loops in lines 4 through 14 of Algorithm \ref{algo:discretize} take $\mathcal{O}(|W| \cdot |\omega|)$ time. So the running time of the discretization phase is $\mathcal{O}(|W| \cdot |\omega|)$. We assume that performing a forward pass on the SNN (i.e. computing $P = g(X, W)$) and computing the error $e(P, Y)$ together take time $\tau$. Therefore, it takes $\mathcal{O}(\tau)$ time to compute the error on line 3 of Algorithm \ref{algo:CoNNTrA}. It takes $\mathcal{O}(|W|)$ time to initialize $W_{opt}$ on line 4 of Algorithm \ref{algo:CoNNTrA}. Lines 5 and 6 take $\mathcal{O}(1)$ time. So, the initialization phase takes $\mathcal{O}(\tau + |W|)$ time.
In the training phase, the for loop from lines 7 through 20 in Algorithm \ref{algo:CoNNTrA} runs $T$ times. The for loop from lines 8 through 19 runs $|W|$ times and the for loop from lines 10 through 17 runs $|\omega|$ times. It takes $\mathcal{O}(\tau)$ time to compute the error $\epsilon$ on line 14. Therefore, the training phase takes $\mathcal{O}(T \cdot |W| \cdot |\omega| \cdot \tau)$ time. Since this dominates the running time of all phases, the running time for CoNNTrA is $\mathcal{O}(T \cdot |W| \cdot |\omega| \cdot \tau)$. Since $\tau$ is usually polynomial in the number of weights and the size of the training dataset, CoNNTrA is a polynomial time algorithm. \subsection{Convergence} \label{sub:convergence} During each epoch in the training phase, we update the optimal weights $W_{opt}$ and optimal error $\epsilon_{opt}$ only if the current error $\epsilon$ is lower than $\epsilon_{opt}$. Thus, $\epsilon_{opt}$ is non-increasing over the course of training, which guarantees convergence. If CoNNTrA is run for a sufficient number of epochs, the optimal error converges to a local minimum. \section{Performance Evaluation} \label{sec:performance-evaluation} We compare the performance of CoNNTrA (Algorithm \ref{algo:CoNNTrA}) to traditional GPU-based Backpropagation on four benchmark problems: MNIST using a logistic regression classifier, MNIST using a convolutional neural network (CNN), Iris using a deep neural network (DNN), and ImageNet using a convolutional neural network (CNN). The performance metrics used for this comparison are: \begin{enumerate}[topsep=0pt, partopsep=0pt, itemsep=0pt, parsep=0pt] \item Training Error: Percentage of data points classified incorrectly in the training dataset. \item Validation Error: Percentage of data points classified incorrectly in the validation dataset. \item Memory Usage (kilobytes): Amount of memory used to store the weights. \item Training Time (seconds): Total time taken to complete training.
\end{enumerate} CoNNTrA was written in Python using the Numpy library \cite{van2011numpy}. The Backpropagation algorithm was run using the TensorFlow library \cite{abadi2016tensorflow} on GPUs. All experiments were run on a machine with 32 cores of two-way multi-threaded Intel Xeon CPUs running at 2.60 GHz, three NVIDIA GPUs (GeForce GTX 1080 Titan, GeForce GTX 950 and GeForce GTX 670), 112 GB DIMM Synchronous RAM, 32 KB L1 cache, 256 KB L2 cache and 20 MB L3 cache. \subsubsection{MNIST Logistic Regression} \begin{figure}[t!] \centering \includegraphics[trim={100 0 100 0},clip,scale=0.3]{Slide1.png} \caption{Schematic diagram of MNIST logistic regression model} \label{fig:mnist-logreg-model} \end{figure} \begin{table}[t!] \centering \caption{Performance metrics for MNIST logistic regression} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 6.25 & 7.59 \\ Validation Error (\%) & 7.34 & 8.44 \\ \textbf{Memory Usage (kilobytes)} & \textbf{62.8} & \textbf{1.96} \\ Training Time (seconds) & 81.70 & 236.12 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:mnist-logreg-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-logistic-training-error.png} \caption{Training Error Comparison} \label{fig:mnist-logreg-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-logistic-validation-error.png} \caption{Validation Error Comparison} \label{fig:mnist-logreg-validation-error} \end{subfigure} \caption{Error comparison for MNIST logistic regression models} \label{fig:mnist-logreg-errors} \end{figure} We used a logistic regression model to classify the MNIST images.
The inputs to the logistic regression model were vectorized MNIST images, each of size $784 \times 1$. The outputs of the model were the labels of the input images encoded in a one-hot format. The model consisted of a weight matrix of size $784 \times 10$ and a bias vector of size $10 \times 1$. A schematic diagram of the logistic regression model is shown in Figure \ref{fig:mnist-logreg-model}. The activation function for this model was softmax and the loss was computed using the cross entropy loss function. Table \ref{tab:mnist-logreg-comparison} shows the performance metrics of Backpropagation and CoNNTrA for the MNIST task using a logistic regression classifier. The training errors for Backpropagation and CoNNTrA are $6.25\%$ and $7.59\%$ respectively, and the validation errors are $7.34\%$ and $8.44\%$ respectively. The memory usage for Backpropagation and CoNNTrA is $62.8$ and $1.96$ kilobytes respectively. While Backpropagation takes $81.70$ seconds to complete training, CoNNTrA takes $236.12$ seconds. Figure \ref{fig:mnist-logreg-errors} shows the plot of training and validation errors for CoNNTrA (red) and Backpropagation (blue). These errors were computed as the percentage of misclassified points in the training and validation datasets respectively. The X-axis in Figures \ref{fig:mnist-logreg-training-error} and \ref{fig:mnist-logreg-validation-error} shows the percentage of training completed. The Y-axis shows the classification errors as a percentage. As training progresses, both algorithms converge to the same ballpark of $6-8\%$, which corresponds to an accuracy of $92-94\%$. \subsubsection{MNIST CNN} \begin{table}[t!]
\centering \caption{Performance metrics for MNIST CNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 1.40 & 2.60 \\ Validation Error (\%) & 1.56 & 2.39 \\ \textbf{Memory Usage (kilobytes)} & \textbf{649.55} & \textbf{20.30} \\ Training Time (seconds) & 121.97 & 4,871.04 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:mnist-cnn-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-cnn-training-error.png} \caption{Training Error Comparison (MNIST CNN)} \label{fig:mnist-cnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{mnist-cnn-validation-error.png} \caption{Validation Error Comparison (MNIST CNN)} \label{fig:mnist-cnn-validation-error} \end{subfigure} \caption{Error comparison for MNIST CNN models} \label{fig:mnist-cnn-errors} \end{figure} We use the LeNet architecture proposed by LeCun et al. \cite{lecun1998gradient}. The training results for MNIST CNN models are shown in Table \ref{tab:mnist-cnn-comparison} and Figure \ref{fig:mnist-cnn-errors}. The training and validation errors for Backpropagation are $1.40\%$ and $1.56\%$ respectively, and those for CoNNTrA are $2.60\%$ and $2.39\%$ respectively. The memory usage for Backpropagation is $649.55$ kilobytes, while that for CoNNTrA is $20.30$ kilobytes. While Backpropagation takes $121.97$ seconds, CoNNTrA takes $4,871.04$ seconds to complete training. Figure \ref{fig:mnist-cnn-errors} shows the training and validation errors for Backpropagation (blue) and CoNNTrA (red). The final training and validation errors obtained by both models are around $1-3\%$, which is the state of the art for the LeNet CNN, and corresponds to an accuracy of $97-99\%$. 
The rate of convergence for CoNNTrA shows an interesting behavior: the error decreases gradually until just over $80\%$ of training is completed, after which it decreases rapidly and converges to $2.60\%$ in Figure \ref{fig:mnist-cnn-training-error} and $2.39\%$ in Figure \ref{fig:mnist-cnn-validation-error}. We attribute this behavior to the following reason. At every training epoch in CoNNTrA, we pick a weight at random and perform a global search across all possible values to find a value that yields the smallest possible error. At the point where the error started decreasing rapidly, a weight was picked that had a high impact on the classification error. When a global search was performed for this weight, it drastically improved the error and transformed the neural network function in such a way that there was abundant room to improve the error for subsequently selected weights. \subsubsection{Iris} \begin{figure}[t!] \centering \includegraphics[trim={50 0 70 0}, scale=0.29]{Slide3.png} \caption{Schematic diagram of Iris multi-layer perceptron} \label{fig:iris-mlp-model} \end{figure} \begin{table}[t!] \centering \caption{Performance metrics for Iris DNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.07\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} \textbf{Training Error (\%)} & \textbf{1.67} & \textbf{1.67} \\ \textbf{Validation Error (\%)} & \textbf{3.33} & \textbf{3.33} \\ \textbf{Memory Usage (kilobytes)} & \textbf{1.88} & \textbf{0.06} \\ Training Time (seconds) & 4.56 & 4.92 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:iris-dnn-comparison} \end{table} \begin{figure}[t!]
\centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{iris-dnn-training-error.png} \caption{Training Error Comparison (Iris DNN)} \label{fig:iris-dnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{iris-dnn-validation-error.png} \caption{Validation Error Comparison (Iris DNN)} \label{fig:iris-dnn-validation-error} \end{subfigure} \caption{Error comparison for Iris DNN models} \label{fig:iris-dnn-errors} \end{figure} We use a three layer deep multi-layer perceptron model having two hidden layers for this classification task. Figure \ref{fig:iris-mlp-model} shows a schematic diagram of the multi-layer perceptron model. Each neuron in the hidden layers is indexed using a superscript and a subscript. The superscript indicates the layer index and the subscript indicates the neuron index within that layer. Table \ref{tab:iris-dnn-comparison} shows the performance metrics for the Iris DNN models. Both models achieved the same training and validation errors, i.e. $1.67\%$ and $3.33\%$ respectively. The memory usage for Backpropagation was $1.88$ kilobytes, while that for CoNNTrA was $0.06$ kilobytes. The training time for Backpropagation was $4.56$ seconds and that for CoNNTrA was $4.92$ seconds. Figure \ref{fig:iris-dnn-errors} shows the plot of training and validation errors for Backpropagation (in blue) and CoNNTrA (in red). Training errors for both algorithms follow each other closely and converge at $1.67\%$. Validation error for CoNNTrA is seen to vary abruptly initially until it starts to converge at around the $15\%$ mark with sporadic spikes. The validation errors for both algorithms converge to $3.33\%$. \subsubsection{ImageNet} \begin{table}[t!] 
\centering \caption{Performance metrics for ImageNet CNN} \begin{tabular}{m{0.2\textwidth} m{0.11\textwidth} m{0.08\textwidth}} \noalign{\smallskip} \hline \noalign{\smallskip} Performance Metric & Backpropagation & CoNNTrA \\ \noalign{\smallskip} \hline \noalign{\smallskip} Training Error (\%) & 15.12 & 16.98 \\ \textbf{Validation Error (\%)} & \textbf{18.62} & \textbf{18.50} \\ \textbf{Memory Usage (kilobytes)} & \textbf{499,026.75} & \textbf{15,594.59} \\ Training Time (seconds) & 388,764.43 & 647,249.96 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \label{tab:imagenet-cnn-comparison} \end{table} \begin{figure}[t!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{imagenet-cnn-training-error.png} \caption{Training Error Comparison (ImageNet CNN)} \label{fig:imagenet-cnn-training-error} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[scale=0.4]{imagenet-cnn-validation-error.png} \caption{Validation Error Comparison (ImageNet CNN)} \label{fig:imagenet-cnn-validation-error} \end{subfigure} \caption{Error comparison for ImageNet CNN models} \label{fig:imagenet-cnn-errors} \end{figure} We use the AlexNet CNN proposed by Krizhevsky et al. \cite{krizhevsky2012imagenet}. Table \ref{tab:imagenet-cnn-comparison} shows the performance metrics. While Backpropagation takes $388,764.43$ seconds, CoNNTrA takes $647,249.96$ seconds to complete training. The training errors for Backpropagation and CoNNTrA are $15.12\%$ and $16.98\%$ respectively, and the validation errors are $18.62\%$ and $18.50\%$ respectively. All errors computed for the ImageNet CNN model are top-5 errors. The memory used by Backpropagation is $499,026.75$ kilobytes, and that used by CoNNTrA is $15,594.59$ kilobytes. Figure \ref{fig:imagenet-cnn-errors} shows the training and validation errors for training the ImageNet CNN model using Backpropagation (blue) and CoNNTrA (red).
We see a consistent trend in Figure \ref{fig:imagenet-cnn-errors}: both models converge to around $15-17\%$ training error and $18-19\%$ validation error, which are in the same ballpark as the state-of-the-art errors for the AlexNet model. \subsection{Discussion} \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{training-time-bar.png} \caption{Training Time} \label{fig:training-time-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{training-error-bar.png} \caption{Training Error} \label{fig:training-error-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{validation-error-bar.png} \caption{Validation Error} \label{fig:validation-error-bar} \end{subfigure} \\ \begin{subfigure}{0.32\textwidth} \centering \includegraphics[scale=0.32]{memory-bar.png} \caption{Memory Usage} \label{fig:memory-bar} \end{subfigure} \caption{Comparison of Backpropagation and CoNNTrA} \label{fig:consolidated-results} \end{figure} Figure \ref{fig:consolidated-results} presents the performance of Backpropagation and CoNNTrA in a consolidated fashion. The X-axis shows the datasets and models, and the Y-axis shows the performance metric. Blue and red bars denote the Backpropagation and CoNNTrA performance metrics respectively. In Figure \ref{fig:training-time-bar}, we observe that CoNNTrA takes more time to complete training for all tasks. This is because we used a serial implementation of CoNNTrA, as our objective was to provide a proof of concept that CoNNTrA can train models with accuracies at par with Backpropagation. With parallel implementations of CoNNTrA, we expect the training times to decrease significantly.
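The memory savings discussed in this section come from storing each ternary weight in 2 bits instead of a 64-bit double. A minimal sketch of such a packed representation (illustrative only; the paper does not specify its storage scheme, and the function names are ours):

```python
import numpy as np

def pack_ternary(w):
    """Encode ternary weights {-1, 0, +1} as 2-bit codes {0, 1, 2},
    packed four to a byte."""
    codes = (np.asarray(w, dtype=np.int8) + 1).astype(np.uint8)  # -1/0/+1 -> 0/1/2
    pad = (-len(codes)) % 4                                      # pad to a multiple of 4
    codes = np.concatenate([codes, np.zeros(pad, dtype=np.uint8)])
    c = codes.reshape(-1, 4)
    return (c[:, 0] | (c[:, 1] << 2) | (c[:, 2] << 4) | (c[:, 3] << 6)).astype(np.uint8)

def unpack_ternary(packed, n):
    """Recover the first n ternary weights from the packed bytes."""
    b = np.asarray(packed, dtype=np.uint8)
    codes = np.stack([(b >> s) & 3 for s in (0, 2, 4, 6)], axis=1).reshape(-1)
    return codes[:n].astype(np.int8) - 1
```

Four 2-bit codes fit in one byte, so $|W|$ ternary weights occupy $\lceil |W|/4 \rceil$ bytes instead of $8|W|$ bytes as doubles, which is the origin of the $32\times$ factor.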
In Figures \ref{fig:training-error-bar} and \ref{fig:validation-error-bar}, the training and validation errors for Backpropagation and CoNNTrA are within the same ballpark and at par with the state-of-the-art error values for the respective models. The CoNNTrA errors are slightly higher than the Backpropagation errors for all models except the Iris DNN. While the model architectures were identical for Backpropagation and CoNNTrA, the CoNNTrA weights were ternary, whereas Backpropagation used double-precision floating-point weights. Going from double-precision to ternary weights reduced the expressive power of the CoNNTrA models, which is why we see slightly higher training and validation errors. In Figure \ref{fig:memory-bar}, the memory usage of CoNNTrA is about $32\times$ lower than that of Backpropagation for all models: it requires $2$ bits to store each ternary-valued CoNNTrA weight, and $64$ bits to store each double-precision floating-point Backpropagation weight. A $32\times$ reduction in memory usage is extremely significant in edge computing applications such as embedded systems, the Internet of Things, and autonomous vehicles. \section{Conclusion} \label{sec:conclusion} Edge computing systems in applications such as the Internet of Things (IoT), autonomous vehicles and embedded systems in the post-Moore's-law era will require machine learning models that not only produce low error and train fast, but also consume little memory and power. While traditional learning algorithms like Backpropagation can train deep learning models in a reasonable amount of time and obtain low error, they consume significantly more memory and power. In this work, we propose a novel learning algorithm called the Combinatorial Neural Network Training Algorithm (CoNNTrA), a coordinate gradient descent-based algorithm that can train deep learning models with constrained learning parameters, for example, binary or ternary weight values.
The objective of this study was to demonstrate that CoNNTrA can train deep learning models with constrained learning parameters, which yield errors at par with the Backpropagation models \emph{and} consume significantly less memory. We presented CoNNTrA in Section \ref{sec:conntra} along with its theoretical underpinnings and complexity analysis. In Section \ref{sec:performance-evaluation}, we used CoNNTrA to train deep learning models on three machine learning benchmark data sets (MNIST, Iris and ImageNet). We demonstrated that CoNNTrA can train these models with errors in the same ballpark as the Backpropagation models. More importantly, we showed that the CoNNTrA models consume $32\times$ less memory than the Backpropagation models. In our future work, we would like to implement CoNNTrA in an efficient parallelized fashion to improve the training times. We believe that such a parallel implementation of CoNNTrA would be able to train deep learning models that are not only accurate and consume orders of magnitude less memory than Backpropagation models, but can also be trained efficiently. This would be invaluable for training machine learning and deep learning models in the post-Moore's-law era, especially for edge computing systems supporting critical applications. We would also like to study the applicability of CoNNTrA to other NP-complete problems such as the traveling salesman problem, protein folding, and genetic imputation. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} Precise determinations of the values of matrix elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix~\cite{Kobayashi:1973fv,Cabibbo:1963yz} are important for testing the Standard Model of particle physics (SM). In this article a precise determination of the magnitude of the CKM matrix element $\left| V_{cb}\right|$ is reported, based on a measurement of the exclusive decay of $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ with $D^{*\,+} \to D^0 \pi^+$ and $D^{*\,+} \to D^+ \pi^0$ and its isospin conjugate decay mode. In addition, the unfolded differential decay rates of four kinematic quantities, described in section~\ref{sec:sltheory}, that fully characterize the semileptonic decay, are reported for the first time in this decay mode. These measurements will allow for extractions of $\left| V_{cb}\right|$ using unquenched lattice QCD calculations of the $\bar B \to D^{*}$ transition form factors beyond zero recoil when they are available in the future. This measurement complements the previous Belle untagged result in Ref.~\cite{Dungel:2010uk}, by studying the properties of the $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ decay using an orthogonal data set: the second $B$-meson in the collision is reconstructed using a fully reconstructed $B$ sample. This high purity sample allows for more precise reconstruction of the decay kinematics, at the cost of lower efficiency. Other recent measurements of $\left| V_{cb} \right|$ using the exclusive $\bar B \to D^{*} \, \ell \, \bar \nu_\ell$ decay have been performed by the Babar experiment~\cite{Aubert:2007qs,Aubert:2007rs,Aubert:2008yv}. This paper is organized as follows: section~\ref{sec:sltheory} briefly reviews the theory describing semileptonic \mbox{$\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$} decays. Section~\ref{sec:belle} provides a brief overview of the Belle detector and the data sets used in this analysis. 
The event reconstruction and selection criteria are summarized in section~\ref{sec:evtreco}, while section~\ref{sec:signalext} provides an overview of the extraction of the inclusive and differential signal yields. Section~\ref{sec:unfolding} discusses the unfolding procedure. Section~\ref{sec:syst} reviews the dominant sources of systematic uncertainty. Section~\ref{sec:vcb} describes the procedure for extracting the CKM matrix element $\left| V_{cb} \right|$. Section~\ref{sec:summary} concludes the article, with a brief summary of the key results. \begin{figure}[h] \includegraphics[width=0.5\linewidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/angles.png} \caption{ The helicity angles $\theta_\ell$, $\theta_v$, and $\chi$ that characterize the $\bar B \to D^{*} \, \ell \, \bar \nu_\ell$ decay are shown: the helicity angle $\theta_\ell$ is defined as the angle between the lepton and the direction opposite the $\bar B$-meson in the virtual $W$-boson rest frame; similarly $\theta_v$ is defined as the angle between the $D$ meson and the direction opposite the $\bar B$-meson in the $D^*$ rest frame; finally the angle $\chi$ is defined as the tilting angle between the two decay planes spanned by the $W-\ell$ and $D^*-D$ systems in the $\bar B$-meson rest frame. } \label{fig:angles} \end{figure} \section{Theory of $\bar B \to D^{*} \, \ell^- \, \bar \nu_\ell$ decays }\label{sec:sltheory} The $\bar B \to D^{*} \, \ell \, \bar \nu_\ell$ decay amplitude depends on one non-perturbative hadronic matrix element that can be expressed using Lorentz invariance and the equation of motion in terms of $\bar B \to D^{*}$ form factors. 
The four transition form factors $V$, $A_{0/1/2}$ fully describing the $\bar B \to D^{*}$ decay are defined by the hadronic current~\cite{RichmanBurchat}: \begin{widetext} \begin{align}\label{eq:ff} \langle D^{*} (p_{D^*}) | \bar c \, \gamma_\mu \, (1 - \gamma_5) \, b |\bar B(p_B) \rangle = & \frac{2 i V(q^2)}{m_B + m_{D^*}} \epsilon_{\mu\nu\alpha\beta} \, \epsilon^{* \, \nu} \, p_B^\alpha \, p_{D^*}^{\beta} - (m_B+m_{D^*}) A_1(q^2) \left( \epsilon_\mu^* - \frac{\epsilon^* \cdot q}{q^2} q_\mu \right) \nonumber \\ & + A_2(q^2) \, \frac{\epsilon^* \cdot q}{m_B + m_{D^*}} \left( ( p_B + p_{D^*})_\mu - \frac{m_B^2 - m_{D^*}^2}{q^2} q_\mu \right) \nonumber \\ & - 2 m_{D^*} A_0(q^2) \, \frac{\epsilon^* \cdot q}{q^2} q_\mu \, , \end{align} where $q^\mu = \left( p_B - p_{D^*} \right)^\mu$ is the difference between the $\bar B$-meson and $D^*$-meson four-momenta, and $m_B$ and $m_{D^*}$ denote the $B$-meson and $D^*$-meson masses, respectively. Here $\epsilon^*$ denotes the polarization vector of the $D^*$-meson.
The form factors in Eq.~\ref{eq:ff} are functions of the four-momentum transfer squared $q^2$, and the differential decay rate $\bar B \to D^{*} (\to D \pi) \, \ell \, \bar \nu_\ell$ may be expressed in the zero-lepton-mass limit in terms of three helicity amplitudes $H_0$, $H_\pm$~\cite{RichmanBurchat}: \begin{align} \label{eq:rate} \frac{\text{d} \Gamma( \bar B \to D^{*} (\to D \pi) \, \ell \, \bar \nu_\ell)}{\text{d} w \, \text{d}\cos\theta_v \, \text{d}\cos\theta_{\ell} \, \text{d}\chi} =& \frac{6m_Bm_{D^*}^2}{8(4\pi)^4}\sqrt{w^2-1}(1-2 \, w\, r+r^2)\, G_F^2 \, \left|V_{cb}\right|^2 \, \times \mathcal{B}(D^{*} \to D \pi) \nonumber\\ & \times~ \biggl( (1-\cos\theta_{\ell})^2\sin^2\theta_vH_+^2 + (1+\cos\theta_{\ell})^2\sin^2\theta_vH_-^2 \nonumber \\ &~~\qquad+ 4\sin^2\theta_{\ell}\cos^2\theta_vH_0^2 - 2\sin^2\theta_{\ell}\sin^2\theta_v\cos2\chi H_+H_-\nonumber \\ &~~\qquad- 4\sin\theta_{\ell}(1-\cos\theta_{\ell})\sin\theta_v\cos\theta_v\cos\chi H_+H_0\nonumber \\ &~~\qquad+ 4\sin\theta_{\ell}(1+\cos\theta_{\ell})\sin\theta_v\cos\theta_v\cos\chi H_-H_0 \biggr) ~, \end{align} where, for later convenience, $q^2$ is traded for the recoil variable \mbox{$w = \left( m_B^2 + m_{D^*}^2 - q^2 \right)/(2 m_B m_{D^*})$}, the product of the four-velocities of the initial- and final-state mesons, and $r = m_{D^*} / m_B$. The helicity amplitudes are related to the form factors as \begin{align} H_{\pm} &= \left(m_B + m_{D^*} \right) A_1(q^2) \mp \frac{2 m_B}{m_B + m_{D^*}} \left| p_{D^*} \right| V(q^2) \, , \\ H_0 & = \frac{1}{2 m_{D^*} \sqrt{q^2}} \bigg( \left(m_B^2 - m_{D^*}^2 - q^2 \right) \left( m_B + m_{D^*} \right) A_1(q^2) - \frac{4 m_B^2 \left| p_{D^*} \right|^2}{m_B + m_{D^*}} A_2(q^2) \bigg) \, . \end{align} \end{widetext} The light constituents of the $\bar B$- and $D^*$-mesons are only lightly perturbed if the velocities of the $b$- and $c$-quarks inside the $\bar B$- and $D^*$-mesons are similar, e.g. for $q^2 = q^2_{\rm max}$ or $w \sim 1$~\cite{WiseManohar}.
The four form factors in Eq.~\ref{eq:ff} can be expressed in terms of a single universal form factor $h_{A_1}(w)$ and three ratios $R_i(w)$, \begin{align} A_1 = \frac{w+1}{2} r' h_{A_1}(w) \, , \qquad A_0 = \frac{R_0(w)}{r'} h_{A_1}(w) \, , \nonumber \\ A_2 = \frac{R_2(w)}{r'} h_{A_1}(w)\, , \qquad V = \frac{R_1(w)}{r'} h_{A_1}(w) \, , \end{align} with $r' = 2 \sqrt{m_B m_{D^*}} / \left( m_B + m_{D^*} \right)$. Analyticity and unitarity impose strong constraints on heavy meson decay form factors~\cite{Grinstein:1992hq} and the universal form factor and ratios can be expressed in terms of five parameters \mbox{\{$h_{A_1}(1)$, $\rho_{D^{*}}^2$, $R_{0/1/2}(1)$\}}, cf. Ref.~\cite{Caprini:1997mu}: \begin{align}\label{eq:ffhqet} h_{A_1}(w) &= h_{A_1}(1) \bigg( 1 - 8 \rho_{D^{*}}^2 z + \left(53 \rho_{D^{*}}^2 - 15 \right) z^2 - \left(231 \rho_{D^{*}}^2 - 91 \right) z^3 \bigg) \, , \\ R_0(w) &= R_0(1) - 0.11(w - 1) + 0.01(w-1)^2\, , \\ R_1(w) &= R_1(1) - 0.12(w - 1) + 0.05(w-1)^2\, , \\ R_2(w) &= R_2(1) + 0.11(w - 1) - 0.06(w-1)^2\, , \end{align} with $z = \left( \sqrt{w+1} - \sqrt{2} \right) / \left( \sqrt{w+1} + \sqrt{2} \right)$. The ratio $R_0(w)$ is not important for decays involving light leptons. The current state-of-the-art unquenched calculation Ref.~\cite{Bailey:2014tva} uses up to three light-quark flavours and yields \mbox{$h_{A_1}(1) = 0.906 \pm 0.013$}. Equation~\ref{eq:rate} receives additional electroweak corrections that can be introduced by the replacement of \mbox{$h_{A_1}(1) \to h_{A_1}(1) \, \eta_{EW}$} with $\eta_{EW} = 1.0066$ from Ref.~\cite{Sirlin}. 
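The CLN expressions above are straightforward to evaluate numerically; the following Python sketch is purely illustrative (the function name is ours, and the parameter values used for checks are placeholders, not measured results):

```python
import math

def cln_form_factors(w, hA1_1, rho2, R1_1, R2_1):
    """CLN parametrization: the universal form factor h_A1(w) and the
    ratios R1(w), R2(w) as functions of the recoil variable w."""
    z = (math.sqrt(w + 1) - math.sqrt(2)) / (math.sqrt(w + 1) + math.sqrt(2))
    hA1 = hA1_1 * (1 - 8 * rho2 * z
                   + (53 * rho2 - 15) * z ** 2
                   - (231 * rho2 - 91) * z ** 3)
    R1 = R1_1 - 0.12 * (w - 1) + 0.05 * (w - 1) ** 2
    R2 = R2_1 + 0.11 * (w - 1) - 0.06 * (w - 1) ** 2
    return hA1, R1, R2
```

At zero recoil ($w = 1$, $z = 0$) the parametrization reduces to the normalizations $h_{A_1}(1)$, $R_1(1)$ and $R_2(1)$, and for positive slope $\rho_{D^*}^2$ the form factor falls with increasing $w$.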
The remaining three parameters, \mbox{$\{\rho_{D^{*}}^2$, $R_{1/2}(1)\}$}, need to be determined experimentally by analyzing the differential $\bar B \to D^{*} \, \ell \, \bar \nu_\ell$ spectrum to convert the measured branching fraction into a value of $\left| V_{cb} \right|$: \begin{align} \left| V_{cb} \right| & = \sqrt{ \frac{ \mathcal{B}(\bar B \to D^{*} \, \ell \, \bar \nu_\ell) }{ \tau \, \Gamma(\bar B \to D^{*} \, \ell \, \bar \nu_\ell)} } \, , \end{align} where $\tau$ is the $B$-meson lifetime and $\Gamma(\bar B \to D^{*} \, \ell \, \bar \nu_\ell)$ is the decay rate with the CKM factor omitted. \section{The Belle detector and data set}\label{sec:belle} The data sample used in this measurement was recorded with the Belle detector~\cite{unknown:2000cg}, which operated at the KEKB storage ring~\cite{Kurokawa:2001nw} between 1999 and 2010. This analysis uses an integrated luminosity of \mbox{711 fb${}^{-1}$} recorded at the centre-of-mass energy of \mbox{$\sqrt{s} = 10.58$ GeV}, corresponding to 772 million $e^+ e^- \to \Upsilon(4S) \to B \bar B$ events. KEKB is an asymmetric $e^+e^-$ collider in which the centre-of-mass frame of the colliding beams moves with a Lorentz boost of $\beta\gamma = 0.425$ along the beam axis in the laboratory frame. The Belle detector is a large-solid-angle magnetic spectrometer optimized to reconstruct $e^+ e^- \to \Upsilon(4S) \to B \bar B$ collisions. Its principal detector components are: the silicon vertex detector, the 50-layer central drift chamber, the array of aerogel-based Cherenkov counters, the time-of-flight scintillation counters, and the electromagnetic calorimeter built from CsI(Tl) crystals, all located inside a superconducting solenoid coil producing a 1.5 T magnetic field. The outer layer consists of an instrumented iron flux-return allowing the identification of $K_L^0$ mesons and muons.
During data taking two different inner detector configurations were used: the first configuration, corresponding to 152 million $B \bar B$ pairs, consisted of a 2.0 cm beampipe and a 3-layer silicon vertex detector. The second configuration, used to record the remaining 620 million $B \bar B$ pairs, consisted of a 1.5 cm beampipe, a 4-layer silicon vertex detector, and a small-cell inner drift chamber~\cite{svd2}. A more detailed description of the detector and its performance can be found in Refs.~\cite{unknown:2000cg,svd2}. Simulated Monte Carlo (MC) events are used to evaluate background contamination, reconstruction efficiency and acceptance, and for the unfolding procedure. The samples were generated using the \texttt{EvtGen} generator~\cite{Lange:2001uf}, with sample sizes corresponding to approximately ten times that of the Belle collision data. The interaction of particles traversing the detector is simulated using \texttt{GEANT3}~\cite{brun1984geant}. QED final-state radiation was simulated using \texttt{PHOTOS}~\cite{barberio1994photos}. The form factor parametrization in Section~\ref{sec:sltheory} is used to model the semileptonic $\bar B \to D^{*} \, \ell \, \bar \nu_\ell$ signal. The $\bar B \to D \, \ell \, \bar \nu_\ell$ decays are modelled using the form factor parametrization in Ref.~\cite{Caprini:1997mu}. Semileptonic decays into orbitally excited charmed mesons, $\bar B \to D^{**} \, \ell \, \bar \nu_\ell$, were modelled using the form factor parametrization of Ref.~\cite{ref:LLSW}. The branching fractions for $B$-meson and charm decays are taken from Ref.~\cite{pdg}. Efficiencies in the MC are corrected using data-driven control samples.
\section{Event reconstruction and selection}\label{sec:evtreco} Collision events are reconstructed using the hadronic full reconstruction algorithm of Ref.~\cite{Feindt:2011mr}: one of the $B$-mesons, called the $B_{\rm tag}$-candidate, is reconstructed in hadronic decay channels using over 1100 decay modes. The efficiency of this approach is approximately 0.3\% and 0.2\% for charged and neutral $B$-mesons, respectively. Despite the relatively low efficiency, knowledge of the charge and momenta of the decay constituents in combination with the known beam energy allows one to precisely infer the flavour and four-momentum of the second $B$-meson produced in the collision. The $B_{\rm tag}$-candidates are required to have a beam-constrained $B$-meson mass, $$M_{\rm bc} = \sqrt{ s/4 - \left| \vec p_{\rm tag} \right|^2} \, $$ larger than $5.265$ GeV~\footnote{We use natural units with $\hbar = c = 1$.}, where $\sqrt{s}$ denotes the centre-of-mass energy of the colliding $e^+e^-$ pair and $\vec p_{\rm tag}$ denotes the reconstructed three-momentum of the $B_{\rm tag}$-candidate in the centre-of-mass frame of the colliding $e^+e^-$ pair. In addition, a requirement of $ -0.15 \, \text{GeV} < \Delta E < 0.1$ GeV is imposed with $$\Delta E = E_{\rm tag} - \sqrt{s}/2 $$ and $E_{\rm tag}$ denoting the reconstructed energy of the $B_{\rm tag}$-candidate in the centre-of-mass frame of the colliding $e^+e^-$ pair. In each event a single $B_{\rm tag}$-candidate is chosen according to the highest classifier score of the hierarchical full reconstruction algorithm. All tracks and neutral clusters used to form the $B_{\rm tag}$-candidate are removed from the event to define a signal side.
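The two tag-side selection variables above can be illustrated with a minimal sketch; the tag-side energy and momentum are assumed to be given in the $e^+e^-$ centre-of-mass frame, and the numerical inputs used in the example are invented for illustration:

```python
# Illustrative sketch of the B_tag selection (not the analysis code).
import math

SQRT_S = 10.58  # centre-of-mass energy in GeV

def m_bc(p_tag_mag):
    """Beam-constrained mass M_bc = sqrt(s/4 - |p_tag|^2)."""
    return math.sqrt(SQRT_S**2 / 4.0 - p_tag_mag**2)

def delta_e(e_tag):
    """Energy difference Delta E = E_tag - sqrt(s)/2."""
    return e_tag - SQRT_S / 2.0

def passes_tag_selection(e_tag, p_tag_mag):
    """M_bc > 5.265 GeV and -0.15 GeV < Delta E < 0.1 GeV."""
    return m_bc(p_tag_mag) > 5.265 and -0.15 < delta_e(e_tag) < 0.1
```

For a correctly reconstructed tag-side $B$ at rest in the centre-of-mass frame, $M_{\rm bc}$ equals half the centre-of-mass energy, while large tag momenta push $M_{\rm bc}$ below the selection threshold.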
\subsection{Signal-side reconstruction} The signal $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ decay is reconstructed in three steps~\footnote{Isospin conjugated modes are implied throughout the manuscript.}: \begin{enumerate} \item A lepton candidate (an electron or muon) is reconstructed and identified using a particle identification (PID) likelihood ratio described in Ref.~\cite{unknown:2000cg}. A minimal lepton momentum of $0.3$ GeV for electrons and $0.6$ GeV for muons is required, and the track of the lepton candidate must be within the detector acceptance, with a polar angle relative to the beam axis of $17^{\circ} < \theta_e < 150^{\circ}$ for electrons and $ 25^{\circ} < \theta_\mu < 145^{\circ}$ for muons. In addition, impact parameter requirements on the lepton candidates in the plane perpendicular to the beam are applied. For electron candidates, bremsstrahlung and final-state radiation photons are recovered using a cone around the lepton trajectory with an opening angle of $5^{\circ}$. In the case that several photon candidates lie in this cone, the one with the smallest opening angle to the electron is used. Events with more than one well-identified lepton are vetoed. \item Charged and neutral $D$-meson candidates are reconstructed from kaon candidates, charged tracks and $\pi^0$ candidates. Kaons and pions are identified as described in Ref.~\cite{unknown:2000cg} using a PID likelihood ratio, and must also satisfy impact parameter requirements. The $\pi^0$ candidates are reconstructed from photon candidates, which consist of clusters in the calorimeter not matched to any track. The energy requirement for photon candidates depends on the polar angle: $E_\gamma > 100$ MeV for $\theta_\gamma < 33^{\circ}$, $E_\gamma > 50$ MeV for $ 33^{\circ} < \theta_\gamma < 128^{\circ}$, and $E_\gamma > 150$ MeV for $\theta_\gamma > 128^{\circ}$.
The invariant mass of the $\pi^0$ candidates must fall within a mass window of $M_{\pi^0} = [0.12,0.15)$ GeV. All combinations of particles that form $D^0$ or $D^+$ meson candidates with an invariant mass within 14 MeV of $m_{D^0} = 1865$ MeV or $m_{D^+} = 1870$ MeV, respectively, are used in a fit for a secondary vertex to select a single $D^0$ or $D^+$ candidate per event. The decay modes used are $D^+ \to K^- \pi^+ \pi^+$, $D^0 \to K^- \pi^+$, $ D^0 \to K^- \pi^+ \pi^0$, and $D^0 \to K^- \pi^- \pi^+ \pi^+$, which account for 9.4\% and 26.3\% of the total $D^+$ and $D^0$ branching fractions, respectively. In events with a $D^+$ candidate no additional track is allowed on the signal side; in events with a $D^0$ candidate exactly one additional track is required. \item Finally, candidate $D^{*}$-mesons are reconstructed: the decay $D^{*\,+} \to D^0 \pi^+$ is reconstructed by combining the four-momentum of the reconstructed $D^0$ with the remaining charged track in the event. Events with $D^{*\,+} \to D^0 \pi^+$ candidates are rejected if the reconstructed mass difference $\Delta M = M_{D^*} - M_{D}$ falls outside a window of $[135,155)$ MeV, corresponding to three times the expected $\Delta M$ resolution as estimated from MC. The decay $D^{*\,+} \to D^+ \pi^0$ is reconstructed by combining the four-momentum of the reconstructed $D^{+}$ with all possible $\pi^0$ candidates; a single candidate is chosen by selecting the one with $\Delta M = M_{D^*} - M_{D}$ closest to the expected value of $140$ MeV and within the window $[130,150)$ MeV, corresponding to three times the expected resolution of $\Delta M$. The $D^{*\,+} \to D^0 \pi^+$ and $D^{*\,+} \to D^+ \pi^0$ decays account for 98.4\% of the total $D^{*\,+}$ branching fraction.
\end{enumerate} \subsection{Calibration of the hierarchical full reconstruction algorithm}\label{subsec:calibration} The efficiency of the full hadronic reconstruction algorithm is calibrated using a procedure described in Ref.~\cite{Glattauer:2015teq} based on a study of inclusive $\bar B \to X \, \ell \, \bar \nu_\ell$ decays. In this approach full reconstruction events are selected by requiring exactly one lepton on the signal side, employing the same lepton and $B_{\rm tag}$ selection criteria as outlined above. The $\bar B \to X \, \ell \, \bar \nu_\ell$ enriched events are split into subsamples according to their hadronic $B_{\rm tag}$ final-state topology and further separated into specific ranges of the multivariate classifier used in the hierarchical selection. Each subsample is studied individually to derive a calibration factor for the hadronic tagging efficiency: this is done by comparing the number of inclusive semileptonic $B$-meson decays, $N(\bar B \to X \, \ell \, \bar \nu_\ell)$, in data with the expectation from the simulation, $N^{MC}(\bar B \to X \, \ell \, \bar \nu_\ell)$, assuming the branching fraction of Ref.~\cite{pdg}. The semileptonic yield is determined by a binned likelihood fit to the spectrum of the lepton three-momentum, and the correction factor in each subsample is given by \begin{align} C_{\rm tag} = N(\bar B \to X \, \ell \, \bar \nu_\ell) / N^{MC}(\bar B \to X \, \ell \, \bar \nu_\ell) \, . \end{align} The free parameters of the fit were the prompt semileptonic $\bar B \to X \, \ell \, \bar \nu_\ell$ decays, fake-lepton contributions and secondary true-lepton contributions; in total 1120 correction factors were determined. The largest uncertainties on the $C_{\rm tag}$ correction factors are from the assumed $\bar B \to X \, \ell \, \bar \nu_\ell$ shape and the lepton PID performance, cf. Section~\ref{sec:syst}.
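The per-subsample correction idea can be sketched in a few lines; the subsample labels and yields below are entirely hypothetical and stand in for the 1120 subsamples of the actual calibration:

```python
# Toy sketch of the tagging-efficiency calibration: per-subsample
# correction factors C_tag = N(data) / N(MC). All numbers are invented.

def tagging_corrections(n_data, n_mc):
    """Correction factor per subsample, C_tag = N(data) / N(MC)."""
    return {key: n_data[key] / n_mc[key] for key in n_data}

# Hypothetical subsamples defined by tag-side topology / classifier range:
n_data = {"mode_A": 980.0, "mode_B": 1510.0}
n_mc = {"mode_A": 1000.0, "mode_B": 1400.0}
c_tag = tagging_corrections(n_data, n_mc)
```

In the analysis these factors are applied as per-event weights to the simulation, so that the tagged MC yield matches the data in every subsample.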
\section{Reconstruction of kinematic quantities and signal extraction} \label{sec:signalext} The signal $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ decay can be reconstructed using the missing momentum in the collision, \begin{align} p_{\rm miss} & = p_\nu = p_{e^+ e^-} - p_{\rm tag} - p_{D^*} - p_\ell \, , \end{align} where the subscripts indicate the corresponding four-momenta of the colliding $e^+e^-$ pair, the tag-side $B$-meson, and the reconstructed signal-side $D^{*}$ and lepton. To separate signal $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ decays from background processes, the missing mass squared is used, calculated from the missing momentum via \begin{align} M_{\rm miss}^2 = p_{\rm miss}^2 \, . \end{align} Only correctly reconstructed signal peaks at $M_{\rm miss}^2=0$, consistent with a single missing neutrino. Figure~\ref{fig:B0Incl} shows the reconstructed $M_{\rm miss}^2$ distribution after the initial selection and the reconstruction of the $D^{*\, +}$-meson: correctly reconstructed $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ signal decays are shown in red and peak sharply around $M_{\rm miss}^2 \sim 0$. Decays of real $D^0$, $D^+$ or $D^{*\, +}$ candidates that have been incorrectly reconstructed are shown in brown and exhibit a very similar resolution in $M_{\rm miss}^2$. Fake-lepton contributions, continuum events and $\bar B \to D \, \ell \, \bar \nu_\ell$ decays are negligible; the largest selected background contribution is from $\bar B \to D^{**} \, \ell \, \bar \nu_\ell$ decays and from non-semileptonic $B$-meson decays that pass the selection criteria. Most of these are from cascade decays, where a secondary decay of a $D$-meson produced a lepton.
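The missing-mass reconstruction is plain four-vector arithmetic and can be sketched as follows; the four-momenta are simple $(E, p_x, p_y, p_z)$ tuples and the example event is constructed by hand so that exactly one massless neutrino is missing:

```python
# Illustrative four-vector sketch of M_miss^2 (not the analysis code).

def minus(p, q):
    """Component-wise difference of two four-vectors."""
    return tuple(a - b for a, b in zip(p, q))

def mass2(p):
    """Invariant mass squared E^2 - |p|^2."""
    e, px, py, pz = p
    return e**2 - px**2 - py**2 - pz**2

def m_miss2(p_ee, p_tag, p_dstar, p_lep):
    """M_miss^2 = (p_ee - p_tag - p_Dstar - p_lep)^2."""
    p_miss = minus(minus(minus(p_ee, p_tag), p_dstar), p_lep)
    return mass2(p_miss)

# Hand-built toy event in the centre-of-mass frame; by construction the
# missing four-momentum is (1, 1, 0, 0), i.e. a massless neutrino.
p_ee = (10.58, 0.0, 0.0, 0.0)
p_tag = (5.29, 0.0, 0.0, 1.0)
p_dstar = (3.0, -0.5, 0.0, -0.5)
p_lep = (1.29, -0.5, 0.0, -0.5)
mm2 = m_miss2(p_ee, p_tag, p_dstar, p_lep)
```

Any additional missing or mis-assigned particle shifts `mm2` away from zero, which is what makes the variable discriminating.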
\begin{figure}[t] \vspace{0.5ex} \includegraphics[width=0.8\linewidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_mm2_log.pdf} \put(-130.,130.){\includegraphics[width=0.26\linewidth,page=1,clip]{./figures/legend.pdf}} \vspace{-3ex} \caption{ The $M_{\rm miss}^2$ distribution of all events after the $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ reconstruction. The coloured histograms correspond to either correctly (red) or incorrectly reconstructed signal (brown) or various backgrounds. The largest background comes from semileptonic $\bar B \to D^{**} \, \ell \, \bar \nu_\ell$ decays and other $B$-meson decays. } \label{fig:B0Incl} \end{figure} \begin{figure}[h!] \includegraphics[width=0.41\textwidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/w_B0.pdf} \includegraphics[width=0.44\textwidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/cl_B0.pdf} \put(-177.,88.){\includegraphics[width=0.13\linewidth,page=1,clip]{./figures/legend.pdf}} \\ \includegraphics[width=0.43\textwidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/cv_B0.pdf} \includegraphics[width=0.41\textwidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/chi_B0.pdf} \caption{ The reconstructed kinematic variables $w$, $\cos \theta_\ell$, $\cos \theta_v$, and $\chi$ are shown, as defined in the text. } \label{fig:B0Vars} \end{figure} The kinematic variables $w$, $\cos \theta_\ell$, $\cos \theta_v$ and $\chi$ are reconstructed from the four-momenta of the signal-side $D^{*\, +}$, the charged lepton, and the tag-side $B$-meson. The hadronic recoil, $w$, is determined by reconstructing the four-momentum of the signal-side $\bar B$-meson as $p_B = p_{e^+ e^-} - p_{\rm tag}$ and combining it with the $D^{*\,+}$ four-momentum; the decay angles are calculated from all four-vectors boosted into the rest frame of the signal $\bar B$-meson. The helicity angle $\theta_\ell$ is the angle between the lepton and the direction opposite to the $\bar B$-meson in the virtual $W$-boson rest frame.
The helicity angle $\theta_v$ is the angle between the $D$ meson and the direction opposite the $\bar B$-meson in the $D^*$ rest frame. Finally, $\chi$ is the angle between the two decay planes spanned by the $W-\ell$ and $D^*-D$ systems in the $\bar B$-meson rest frame. Figure~\ref{fig:B0Vars} compares the reconstructed kinematic variables in data with the expectation from MC. The number of $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ signal events is calculated using an unbinned maximum likelihood fit to the $M_{\rm miss}^2$ distribution. Incorrectly reconstructed $D^{*\,+}$-mesons are treated as a resolution effect in the variables in question when extracting the form factors in Section~\ref{sec:vcb}. Similarly, all backgrounds are merged into a single component, fixing their relative contributions to the values in the simulation. The likelihood function has the form \begin{align}\label{eq:likelihood} \mathcal{L}(M_{\rm miss}^2; \nu^{\rm sig}, \nu^{\rm bkg}) & = \frac{e^{- \nu}}{n!} \prod_i^{n} \bigg( \nu^{\rm sig} \mathcal{S}(M_{{\rm miss} \, i}^2) + \nu^{\rm bkg} \mathcal{B}(M_{{\rm miss} \, i}^2) \bigg) \, \end{align} where $\nu^{\rm sig}$ is the fitted number of signal events, $\nu^{\rm bkg}$ is the fitted number of background events, and $\nu = \nu^{\rm sig} + \nu^{\rm bkg}$ is the mean value of the Poisson distribution for $n$ observed events in data. The terms $\mathcal{S}(M_{{\rm miss} \, i}^2)$ and $\mathcal{B}(M_{{\rm miss} \, i}^2)$ denote the signal and background probability distribution functions (PDFs) respectively, evaluated for an event $i$ with a value of missing mass squared of $M_{{\rm miss} \, i}^2$. The likelihood Eq.~\ref{eq:likelihood} is maximized numerically, either for all events or in bins of the kinematic observables. The number of signal events is not constrained to be positive in the fit. 
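The structure of the extended maximum-likelihood fit in Eq.~\ref{eq:likelihood} can be illustrated with a toy (not the analysis code): here a narrow Gaussian stands in for the kernel-estimated signal PDF, a flat distribution for the background PDF, and a coarse grid scan replaces the numerical maximization:

```python
# Toy extended maximum-likelihood fit: yields multiply fixed signal and
# background PDFs. PDFs and data are illustrative stand-ins only.
import math
import random

random.seed(1)

def gauss_pdf(x, mu=0.0, sigma=0.1):
    """Narrow Gaussian stand-in for the signal M_miss^2 shape."""
    return math.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * math.sqrt(2 * math.pi))

def flat_pdf(x, lo=-1.0, hi=3.0):
    """Flat stand-in for the background shape."""
    return 1.0 / (hi - lo) if lo <= x <= hi else 0.0

def nll(nu_sig, nu_bkg, data):
    """Negative log of the extended likelihood of Eq. (likelihood)."""
    out = nu_sig + nu_bkg  # Poisson term (up to constants)
    for x in data:
        out -= math.log(nu_sig * gauss_pdf(x) + nu_bkg * flat_pdf(x))
    return out

# Toy data: 200 signal-like and 100 background-like events.
data = ([random.gauss(0.0, 0.1) for _ in range(200)]
        + [random.uniform(-1.0, 3.0) for _ in range(100)])

# Coarse grid scan in place of a proper minimizer, for illustration only.
best = min(((nll(s, b, data), s, b)
            for s in range(150, 251, 5) for b in range(50, 151, 5)),
           key=lambda t: t[0])
nu_sig_fit, nu_bkg_fit = best[1], best[2]
```

A characteristic property of the extended likelihood with both yields floating is that the fitted yields sum to (approximately, given the grid granularity) the observed number of events.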
The signal and background PDFs are constructed from signal and background MC events using Gaussian kernel estimators~\cite{Cranmer:2000du}, and the fit is tested with pseudo-experiments and independent subsets of MC events to ensure that the procedure is statistically unbiased. \subsection{Total branching fraction fit result} The number of signal events obtained from the fit is $\nu^{\rm sig} = 2374 \pm 53$. We also provide separate results for electron and muon final states, which are in good agreement with the expectation from MC as summarised in Table~\ref{tab:fit_summary}. The number of signal decays can be converted into the $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ branching fraction using the total number of $B \bar B$ events produced at Belle of $N_{B \bar B} = \left(772 \pm 11\right) \times 10^6 $, the product of the reconstruction and tagging efficiency $\left( \epsilon_{\rm reco} \epsilon_{\rm tag} \right)$, and the $B^0/B^+$ production ratio $f_{+0}$ defined as \begin{align} f_{+0} & = \frac{ \mathcal{B}( \Upsilon(4S) \to B^+ B^-) }{ \mathcal{B}( \Upsilon(4S) \to B^0 \bar B^0)} = 1.058 \pm 0.024 \, , \end{align} from Ref.~\cite{pdg}. The product of the reconstruction and tagging efficiency is determined from MC after application of the calibration procedure described in Section~\ref{subsec:calibration}: \begin{align} \left( \epsilon_{\rm reco} \epsilon_{\rm tag} \right) = 3.19 \times 10^{-5} \, . \end{align} The measured $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ branching fraction is then given by \begin{align} \mathcal{B}(\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell) & = \frac{ \nu^{\rm sig} \left( \epsilon_{\rm reco} \epsilon_{\rm tag} \right)^{-1} }{ 4 N_{B \bar B} \left( 1 + f_{+0} \right)^{-1} } \, , \end{align} where the factor of $4$ accounts for the two $B$-mesons produced in each $\Upsilon(4S)$ decay and for averaging the branching fraction over both light leptons.
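As a simple arithmetic cross-check, the branching-fraction formula can be evaluated with the central values quoted above; the small difference with respect to the published number comes from rounding of the inputs:

```python
# Arithmetic sketch of the branching-fraction formula, using the central
# values quoted in the text (uncertainties omitted).
N_BB = 772e6       # number of B Bbar pairs produced at Belle
f_plus0 = 1.058    # Upsilon(4S) -> B+B- over B0 B0bar production ratio
eff = 3.19e-5      # product of reconstruction and tagging efficiency
nu_sig = 2374.0    # fitted signal yield

bf = (nu_sig / eff) / (4.0 * N_BB / (1.0 + f_plus0))
# bf is about 4.96e-2, consistent with the quoted (4.95 +- 0.11 +- 0.22)e-2.
```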
We measure \begin{align}\label{res:b0bf} \mathcal{B}(\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell) & = \left(4.95 \pm 0.11 \pm 0.22 \right)\times 10^{-2} \, , \end{align} where the first error is statistical and the second is systematic. A full breakdown of the systematic uncertainties is discussed in Section~\ref{sec:syst}. This branching fraction can be compared with the current world average \begin{align} \mathcal{B}_{\rm wa}(\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell) & = \left(4.88 \pm 0.01 \pm 0.10 \right)\times 10^{-2} \, , \end{align} from Ref.~\cite{hfag}, and we find good agreement. For the separate branching fractions to $\ell = e$ and $\ell = \mu$ we find \begin{align} \mathcal{B}(\bar B^0 \to D^{*\,+ } \, e^- \, \bar \nu_e) & = \left(5.04 \pm 0.15 \pm 0.23 \right)\times 10^{-2} \, , \end{align} and \begin{align} \mathcal{B}(\bar B^0 \to D^{*\,+} \, \mu^- \, \bar \nu_\mu) & = \left(4.84 \pm 0.15 \pm 0.22 \right)\times 10^{-2} \, , \end{align} which are in good agreement with each other and with the average Eq.~\ref{res:b0bf}. The ratio of the two branching fractions is measured to be \begin{align} R_{e\mu} = \frac{ \mathcal{B}(\bar B^0 \to D^{*\,+} \, e^- \, \bar \nu_e) }{ \mathcal{B}(\bar B^0 \to D^{*\,+ } \, \mu^- \, \bar \nu_\mu)} & = 1.04 \pm 0.05 \pm 0.01 \, .
\end{align} \begin{table}[t] \begin{tabular}{c|ccc} \hline\hline $\ell$ & $\nu^{\rm sig}$ & $\nu^{\rm sig}_{\rm MC}$ & $\epsilon_{\rm reco} \epsilon_{\rm tag}$ \\ \hline $e+\mu$ & $2374 \pm 53$ & $2310.1$ & $3.19 \times 10^{-5}$ \\ $e$ & $1306 \pm 40$ & $1248.8$ & $3.45 \times 10^{-5}$ \\ $\mu$ & $1066 \pm 34$ & $1061.3$ & $2.93 \times 10^{-5}$ \\ \hline\hline \end{tabular} \caption{The measured ($\nu^{\rm sig}$) and expected ($\nu^{\rm sig}_{\rm MC}$) \mbox{$\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$} signal yields are listed for the combined fit and for the electron and muon subsamples, as well as the product of the reconstruction and tagging efficiencies.} \label{tab:fit_summary} \end{table} \subsection{Differential fit and statistical correlations} Each bin of the measured distributions of the hadronic recoil and angular variables is independently fitted for signal yields, and hence no assumption is made on the background distribution across these variables. The distributions are fitted in ten bins each using an equidistant binning (but extending the last bin in $w$ to account for the kinematic endpoint of the spectrum). This choice is a compromise between providing differential information and reducing migration between the reconstructed and true underlying values of the kinematic quantities. A summary of the bin boundaries can be found in Table~\ref{tab:binning}. Figure~\ref{fig:B0Fitdiff} shows the $M_{\rm miss}^2$ distribution for three out of the forty differential bins, for $w \in [1,1.05)$, $\cos \theta_\ell \in [0.8,1.0)$ and $\chi \in [0,\pi/5)$. The purity in each bin is very high, and the unbinned PDFs have been integrated over the bins to allow for an easier comparison. The finite detector resolution and the mis-reconstruction of signal-side particles result in migration between bins. The inversion or unfolding of such effects for comparison to theory is discussed in Section~\ref{sec:unfolding}.
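The bin layouts just described, with the last $w$ bin extended to the kinematic endpoint, can be sketched as follows; the function bin_index is merely an illustrative assignment helper:

```python
# Illustrative bin definitions matching Table (binning): ten bins per
# variable, with the last w edge at the kinematic endpoint 1.504.
import bisect
import math

w_edges = [1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.35, 1.40, 1.45, 1.504]
cos_edges = [-1.0 + 0.2 * i for i in range(11)]
chi_edges = [i * 2.0 * math.pi / 10.0 for i in range(11)]

def bin_index(x, edges):
    """0-based bin index for x in [edges[0], edges[-1]); -1 if out of range."""
    if x < edges[0] or x >= edges[-1]:
        return -1
    return bisect.bisect_right(edges, x) - 1
```

Half-open bins make the assignment unambiguous at interior edges, while out-of-range values (e.g. unphysical reconstructed $w$) are flagged rather than silently clipped.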
The measured yields of the four kinematic variables are statistically correlated with each other as they are formed from the same reconstructed events. In order to simultaneously use information from $\{ w, \cos \theta_\ell, \cos \theta_v, \chi \}$ in the fit to determine $\left|V_{cb} \right|$, these correlations must be determined. This is achieved by using a bootstrapping procedure~\cite{bootstrap}: in each bootstrap replica every data event is assigned a weight drawn from a Poisson distribution with mean $\nu = 1$, and the yield extraction is repeated using these weighted events. A large number of replicas is used to calculate the statistical correlation between the various bins. \begin{table*} \begin{tabular}{c|c} \hline\hline Variable & Bins \\ \hline $w$ & $[1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.35, 1.40, 1.45, 1.504]$ \\ $\cos \theta_\ell$ & $[-1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]$ \\ $\cos \theta_v$ &$[-1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]$ \\ $\chi$ &$[0, \pi/5, 2 \pi/5, 3 \pi/5, 4 \pi/5, \pi, 6 \pi/5, 7 \pi/5, 8 \pi/5, 9\pi/5,2 \pi ]$ \\ \hline\hline \end{tabular} \caption{ The binning of the $w$, $\cos \theta_\ell$, $\cos \theta_v$, and $\chi$ distributions is shown \label{tab:binning} } \end{table*} \begin{figure}[h!] \subfigure[ \, $w \in [1, 1.05)$ ]{\includegraphics[width=0.4\linewidth]{./figures/mm2_w_bin} \put(-90.,80.){\includegraphics[width=0.14\linewidth,page=1,clip]{./figures/plot_B0_w.pdf}}} \subfigure[ \, $\cos \theta_\ell \in [0.8, 1.0)$ ]{\includegraphics[width=0.4\linewidth]{./figures/mm2_cl_bin}} \\ \subfigure[ \, $\chi \in [0, \pi/5)$ ]{\includegraphics[width=0.4\linewidth]{./figures/mm2_chi_bin}} \caption{ The $M_{\rm miss}^2$ distributions after the likelihood fit for three representative bins in $w$, $\cos \theta_\ell$, and $\chi$ are shown.
The PDFs were integrated over the corresponding bin boundaries for comparison between the data points and the signal and background contributions.} \label{fig:B0Fitdiff} \end{figure} \section{Unfolding of differential yields}\label{sec:unfolding} Finite detector resolution and mis-reconstructed $D$ or $D^{*\, +}$-mesons result in migrations between the kinematic bins of $\{w, \cos \theta_\ell, \cos \theta_v, \chi \}$. Such migrations can be expressed in a detector response matrix of conditional probabilities, $\mathcal{P}( \text{reco bin} \, i \, | \, \text{true bin} \, j )$, \begin{align} \mathcal{M}_{ij} & = \mathcal{P}( \text{reco bin} \, i \, | \, \text{true bin} \, j ) \, , \end{align} defined for each kinematic observable. The vector of extracted yields ${\bm \nu_{\rm sig} }$ for a given kinematic observable $x$ can then be related to the vector of differential branching fractions ${\bf \Delta \mathcal{B} / \Delta x }$ as \begin{align}\label{eq:diffBF} {\bf \Delta \mathcal{B} / \Delta x } & = \left( {\bf \epsilon_{\rm reco} \epsilon_{\rm tag} } \right)^{-1} \times \mathcal{M}^{-1} \times {\bm \nu_{\rm sig}} \, \times \frac{1}{4 N_{B \bar B} \left( 1 + f_{+0} \right)^{-1} } \, . \end{align} Here the efficiency of reconstructing an event with a given true value of the kinematic variable $x$ inside a bin $j$ is parametrized as a diagonal matrix $ {\bf \epsilon_{\rm reco} \epsilon_{\rm tag} }$: \begin{align} \left( {\bf \epsilon_{\rm reco} \epsilon_{\rm tag} } \right)_{jj} & = \mathcal{A}( \text{true bin} \, j ) \, , \end{align} which is often called the acceptance $\mathcal{A}( \text{true bin} \, j )$. Inverting the detector response in Eq.~\ref{eq:diffBF} is a non-trivial task: a direct numerical inversion of $\mathcal{M}$ leads to a large enhancement of statistical fluctuations. For the extraction of $\left| V_{cb} \right|$, the underlying theory is folded with the detector response and the acceptance. 
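The folding of Eq.~\ref{eq:diffBF} can be illustrated with a small toy (three bins instead of ten; all numbers invented). With exact, noise-free reconstructed yields, the direct inversion recovers the truth; with statistical fluctuations present, the same inversion strongly amplifies the noise, which is why a direct numerical inversion is avoided in the analysis:

```python
# Toy folding/inversion illustration; M and the acceptance are invented.

M = [[0.8, 0.1, 0.0],   # M[i][j] = P(reco bin i | true bin j)
     [0.2, 0.8, 0.2],
     [0.0, 0.1, 0.8]]
acceptance = [0.5, 0.5, 0.5]

def fold(truth):
    """Reco expectation: nu_i = sum_j M[i][j] * acceptance[j] * truth[j]."""
    return [sum(M[i][j] * acceptance[j] * truth[j] for j in range(3))
            for i in range(3)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

def unfold(nu_reco):
    """Naive direct inversion of the folding (illustration only)."""
    B = [[M[i][j] * acceptance[j] for j in range(3)] for i in range(3)]
    return solve(B, nu_reco)

truth = [100.0, 200.0, 150.0]
recovered = unfold(fold(truth))
```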
To preserve the measured spectra, the migration matrix is inverted using the SVD unfolding algorithm~\cite{Hocker:1995kb}. Additional uncertainties are included in the error budget by varying the world-average values of the measured form factors by $3 \sigma$ to estimate the model error. Table~\ref{tab:rates} lists the unfolded information converted into differential rates $\Delta \Gamma / \Delta x = \Delta \mathcal{B} / \Delta x \times \tau^{-1}$ using the $B^0$-lifetime of $\tau = 1.520 \, \text{ps}$. The full correlation matrix is provided in Appendix~\ref{app:correlation}. \begin{table}[h!]\footnotesize \begin{tabular}{c|cc} \hline\hline Variable & Bin & $\Delta \Gamma / \Delta x$ \, [$10^{-15}\, $\text{GeV}] \\ \hline $w$ &1 & $1.32 \pm 0.11$ \\ &2 & $2.08 \pm 0.15$ \\ &3 & $2.39 \pm 0.15$ \\ &4 & $2.57 \pm 0.16$ \\ &5 & $2.63 \pm 0.16$ \\ &6 & $2.46 \pm 0.15$ \\ &7 & $2.25 \pm 0.14$ \\ &8 & $2.08 \pm 0.14$ \\ &9 & $1.99 \pm 0.13$ \\ &10 & $1.83 \pm 0.14$ \\ \hline $\cos \theta_v$ &1 & $2.80 \pm 0.20$ \\ &2 & $2.30 \pm 0.14$ \\ &3 & $1.95 \pm 0.13$ \\ &4 & $1.70 \pm 0.12$ \\ &5 & $1.58 \pm 0.12$ \\ &6 & $1.65 \pm 0.11$ \\ &7 & $1.77 \pm 0.12$ \\ &8 & $2.00 \pm 0.14$ \\ &9 & $2.50 \pm 0.17$ \\ &10 & $3.19 \pm 0.25$ \\ \hline \hline \end{tabular} \hspace{5ex} \begin{tabular}{c|cc} \hline\hline Variable & Bin & $\Delta \Gamma / \Delta x$ \, [$10^{-15}\, $\text{GeV}] \\ \hline $\cos \theta_\ell$ &1 & $0.73 \pm 0.07$ \\ &2 & $1.18 \pm 0.10$ \\ &3 & $1.64 \pm 0.11$ \\ &4 & $2.04 \pm 0.14$ \\ &5 & $2.34 \pm 0.15$ \\ &6 & $2.50 \pm 0.16$ \\ &7 & $2.54 \pm 0.16$ \\ &8 & $2.68 \pm 0.16$ \\ &9 & $2.83 \pm 0.21$ \\ &10 & $2.82 \pm 0.25$ \\ \hline $\chi$ &1 & $1.86 \pm 0.16$ \\ &2 & $2.31 \pm 0.16$ \\ &3 & $2.59 \pm 0.16$ \\ &4 & $2.37 \pm 0.16$ \\ &5 & $1.95 \pm 0.13$ \\ &6 & $1.87 \pm 0.15$ \\ &7 & $2.11 \pm 0.15$ \\ &8 & $2.33 \pm 0.16$ \\ &9 & $2.15 \pm 0.15$ \\ &10 & $1.89 \pm 0.16$ \\ \hline\hline \end{tabular} \caption{ The unfolded differential rates in
units of $10^{-15} \, \text{GeV}$ are shown. } \label{tab:rates} \end{table} \section{Systematic uncertainties} \label{sec:syst} There are several systematic uncertainties that affect the measured yields and branching fractions: Table~\ref{tab:syst_summary} summarizes the most important sources for the $\mathcal{B}(\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell)$ branching fraction while the full set of systematics discussed in this section is also derived for the detector response and acceptance corrections, and propagated accordingly into the determination of $|V_{cb}|$ and the form factors. The largest systematic uncertainty on the branching fraction stems from the uncertainty on the tagging calibration, which is evaluated by shifting the central values of the correction factors, $C_{\rm tag}$, according to their corresponding statistical and correlated systematic uncertainties. The systematic uncertainties on the correction factors are due to the modelling of the $\bar B \to X \, \ell \, \bar \nu_\ell$ reference decay and the lepton PID efficiency errors and fake rates. Several replicas of the MC with these new correction factors are produced. The resulting differential spectra are almost unaffected by the change in tagging correction, thus only the impact on the overall acceptance is evaluated. The systematic error is estimated using a 68\% spread of the change in acceptance from many replicas and found to be of the order of $3.6$\%. The uncertainty on the tracking efficiency is 0.35\% per track and assumed to be fully correlated between all signal-side tracks. Possible differences on the tracking efficiency between simulated and measured events on the tagging side are absorbed in the tagging calibration factor. The uncertainty on the $\pi^0$ reconstruction efficiency is 2\%. 
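The replica-spread estimate used above, i.e. taking the central 68\% interval of a quantity recomputed on many replicas, can be sketched in a few lines; the replica values below are generated as a toy around the nominal efficiency:

```python
# Toy sketch of the 68%-spread error estimate from replicas.
import random

random.seed(7)

def spread_68(values):
    """Half-width of the central 68% interval (16th to 84th percentile)."""
    v = sorted(values)
    n = len(v)
    lo = v[int(0.16 * n)]
    hi = v[int(0.84 * n)]
    return 0.5 * (hi - lo)

# Toy replicas of the acceptance, scattered around the nominal value.
replicas = [random.gauss(3.19e-5, 0.11e-5) for _ in range(2000)]
sigma = spread_68(replicas)
```

For Gaussian-distributed replica values this estimator reproduces the standard deviation, while remaining robust against non-Gaussian tails.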
External parameters, such as the number of $B$-meson pairs ($N_{B \bar B}$) produced at Belle, $f_{+0}$, and decay branching fractions, are varied within their uncertainties and the shifts are propagated to the final results. The constructed PDF shapes for signal and background components exhibit statistical uncertainties from the finite size of the MC samples. The resulting uncertainties are evaluated by bootstrapping the MC: replicas are produced by reweighting each MC event with a weight drawn from a Poisson distribution of mean $\nu = 1$. For each MC replica the PDF shapes are rebuilt and the signal extraction on data is repeated. The resulting 68\% spread in the extracted yields is used as an estimator for the systematic uncertainty. The uncertainties from electron, muon, and kaon PID efficiency corrections are also evaluated by producing replicas of the data: each replica is reweighted by a weight corresponding to the statistical and systematic error of the corresponding PID ratio, taking into account that the systematic errors are correlated over all events. This is done separately for each source, and the 68\% spread on the final result is used as the uncertainty. For the construction of the systematic covariance matrix all uncertainties from a given single source are assumed to be fully correlated across all bins, with the exception of the statistical uncertainty on the PDF shapes. \begin{table}[h!] \vspace{2ex} \begin{tabular}{l|c} \hline\hline Error Source & $\Delta \mathcal{B}$ [\%] \\ \hline Tagging Calibration & $3.6$ \\ Tracking Efficiency & $1.6$\\ $N_{B \bar B}$ & $1.4$ \\ $f_{+0}$ & $1.1$ \\ PDF shapes & $0.9$ \\ $\pi^0$ Efficiency & $0.5$ \\ $\mathcal{B}(D \to K \pi (\pi)(\pi))$ & $0.4$ \\ $\mathcal{B}(D^* \to D \, \pi)$ & $0.2$ \\ $\mathcal{B}(\bar B \to D^{**} \, \ell \, \bar\nu_\ell)$ & $0.2$ \\ $e$ PID & $0.2$ \\ $\mu$ PID & $0.1$ \\ $\pi_{\rm slow}$ Eff.
& $0.1$ \\ $\mathcal{B}(\bar B \to D \, \ell \, \bar\nu_\ell)$ & $<0.1$ \\ $\bar B \to D^{(*,**)} \, \ell \, \bar \nu_\ell$ FFs & $<0.1$ \\ Lepton Fake Rates & $< 0.1$ \\ $K$ PID & $< 0.1$ \\ \hline Total & 4.5 \\ \hline\hline \end{tabular} \caption{ Summary of the relative systematic errors, ordered by importance, in the total branching fraction measurement. } \label{tab:syst_summary} \end{table} \begin{figure} \includegraphics[width=0.48\textwidth,page=1,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \includegraphics[width=0.48\textwidth,page=2,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \\ \includegraphics[width=0.48\textwidth,page=3,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \includegraphics[width=0.48\textwidth,page=4,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \\ \caption{ The fit result (solid red histograms) and the corresponding $\Delta \chi^2 = 1$ errors (dashed histograms) are shown. Details of the fit can be found in the text. } \label{fig:FitRes} \end{figure} \section{Precise determination of $\left| V_{cb} \right|$}\label{sec:vcb} The differential yields and their correlations are used to extract the form factor parameters defined in Section~\ref{sec:sltheory} and $\left| V_{cb} \right|$. This is done by constructing a $\chi^2$ function of the form \begin{align}\label{eq:chi2} \chi^2 & = \left( {\bm \nu_{\rm sig}} - {\bm \nu_{\rm sig}^{\rm pred}} \right)^{\rm T} C^{-1} \, \left( {\bm \nu_{\rm sig}} - {\bm \nu_{\rm sig}^{\rm pred}} \right) + \chi_{\rm NP}^2 \, , \end{align} with ${\bm \nu_{\rm sig} }$ the vector of measured yields, and \mbox{$ {\bm \nu_{\rm sig}^{\rm pred}} = \left( {\bm \epsilon_{\rm reco} \epsilon_{\rm tag} } \right) \times \mathcal{M} \times {\bm \Delta \Gamma / \Delta x } \, \tau $} the predicted number of signal events.
The differential decay rate ${\bm \Delta \Gamma / \Delta x }$ is a function of the four parameters of interest, $\{ \left| V_{cb} \right|, \rho_{D^*}^2, R_1(1), R_2(1) \}$. The covariance matrix $C$ contains all uncertainties associated with the signal extraction, while additional nuisance-parameter terms $\chi_{\rm NP}^2$ are added to account for the uncertainties from multiplicative factors degenerate with $\left| V_{cb} \right|$. The normalization of the universal form factor, $h_{A_1}(1)$, is constrained to the lattice prediction of Ref.~\cite{Bailey:2014tva} (cf. Section~\ref{sec:sltheory}) using a constraint term of the form \begin{align} \chi^2_{\rm la} & = \bigg(h_{A_1}(1) - h_{A_1}^{\rm la}(1) \bigg)^2 / \left( \sigma_{h_{A_1}(1)}^{\rm la} \right)^2 \, , \end{align} where $h_{A_1}^{\rm la}(1) = 0.906$ and $\sigma_{h_{A_1}(1)}^{\rm la} = 0.013$. Similar constraint terms propagate the uncertainty of the full reconstruction algorithm calibration, the error on the number of $B\bar B$-meson pairs, and the uncertainty on $f_{+0}$. \begin{figure}[t] \includegraphics[width=0.48\textwidth,page=5,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \includegraphics[width=0.48\textwidth,page=6,trim={0cm 0cm 0.5cm 0cm},clip]{./figures/B0_fitres.pdf} \\ \caption{ The best-fit values for $\left| V_{cb} \right|$ versus $\rho_{D^*}^2$ and $R_1(1)$ versus $R_2(1)$ with the corresponding $\Delta \chi^2 = 1$, $\Delta \chi^2 = 2$, and $\Delta \chi^2 = 4$ contours are shown in red, while the black contours show the current world average from Ref.~\cite{hfag}. } \label{fig:FitRes2} \end{figure} Equation~\ref{eq:chi2} is numerically minimized to find the best-fit values for $\left| V_{cb} \right|$ and the form factor parameters, and their uncertainties are determined by scanning the $\Delta \chi^2 = 1$ contours. Figure~\ref{fig:FitRes} shows the fitted yields for all four variables as well as their respective best-fit values and uncertainties.
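The lattice constraint term $\chi^2_{\rm la}$ is a simple Gaussian pull; as a minimal sketch:

```python
# Sketch of the Gaussian nuisance-parameter constraint: the lattice input
# h_A1(1) = 0.906 +- 0.013 enters the chi^2 as a pull term.

H_LAT, SIG_LAT = 0.906, 0.013

def chi2_lattice(h_a1_1):
    """Constraint term ((h_A1(1) - h_lat) / sigma_lat)^2."""
    return ((h_a1_1 - H_LAT) / SIG_LAT)**2
```

By construction, a one-standard-deviation excursion of the nuisance parameter away from the lattice value costs exactly one unit of $\chi^2$, so the external uncertainty is propagated automatically by the minimization.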
The fit has a $\chi^2 = 40.1$ with $40-4$ degrees of freedom, corresponding to a fit probability of $30\%$. We measure \begin{align} \left| V_{cb} \right| & = \left( 37.4 \pm 1.3 \right) \times 10^{-3} \, , \end{align} where the values of the form factors and of $\left| V_{cb} \right|$ are in good agreement with the current world average~\cite{hfag}. All numerical values are summarized in Table~\ref{tab:fitsummary}, and Figure~\ref{fig:FitRes2} shows the extracted values of $\left| V_{cb} \right| : \rho_{D^*}^2$ and $R_1(1): R_2(1)$. The correlation between $\{ \left| V_{cb} \right|, \rho_{D^*}^2, R_1(1), R_2(1) \}$ is determined to be \begin{align} C = & \left( \begin{matrix} 1 & 0.41 & -0.20 & -0.14 \\ 0.41 & 1 & 0.19 & -0.86 \\ -0.20 & 0.19 & 1 & -0.46 \\ -0.14 & -0.86 & -0.46 & 1 \\ \end{matrix} \right) \, . \end{align} The results of the $\left| V_{cb} \right|$ and form factor fit to the unfolded differential branching fractions are provided in Appendix~\ref{app:VcbFit}. \begin{table}[t] \vspace{2ex} \begin{tabular}{l|cc} \hline\hline Parameter & This result & World Average \\ \hline $\left| V_{cb} \right| \times 10^3$ & $37.4 \pm 1.3$ & $39.2 \pm 0.7$ \\ $\rho_{D^*}^2$& $1.03 \pm 0.13$ & $1.21 \pm 0.03$ \\ $R_1(1)$& $1.38 \pm 0.07$ & $1.40 \pm 0.03$ \\ $R_2(1)$& $0.87 \pm 0.10$ & $0.85 \pm 0.02$ \\ \hline\hline \end{tabular} \caption{ The best-fit values are compared with the world average from Ref.~\cite{hfag}. } \label{tab:fitsummary} \end{table} \section{Summary and Conclusions}\label{sec:summary} In this paper the precise determination of $\left| V_{cb} \right|$ using semileptonic $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell $ decays with a fully reconstructed dataset is reported. The total and differential signal yields are extracted in four kinematic observables: the recoil parameter $w$ and the three decay angles that fully characterize the $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell $ decay.
The statistical correlations of the four variables are determined and the yields are unfolded as binned differential decay widths. From the total yield the $\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell$ branching fraction is determined to be \begin{align} \mathcal{B}(\bar B^0 \to D^{*\,+} \, \ell^- \, \bar \nu_\ell) & = \left(4.95 \pm 0.11 \pm 0.22 \right)\times 10^{-2} \, , \end{align} which is in good agreement with the current world average of Ref.~\cite{pdg}. The value of $\left| V_{cb} \right|$ is determined by simultaneously fitting all four kinematic variables: \begin{align} \left| V_{cb} \right| & = \left( 37.4 \pm 1.3 \right) \times 10^{-3} \, , \end{align} which is in good agreement with the current world average~\cite{hfag}. The unfolded differential decay rates are reported for the first time, which can be directly compared to theoretical expectations. Finally, using the full correlation matrix of the extracted form factor parameters, a prediction for the ratio of semileptonic decays with $\tau$ and light lepton final states can be computed, \begin{align} R(D^*) = \frac{ \mathcal{B}( \bar B \to D^{*} \, \tau \, \bar \nu_\tau) }{\mathcal{B}( \bar B \to D^{*} \, \ell \, \bar \nu_\ell)} \, , \end{align} with $\ell = e$ or $\mu$. This is of interest as many recent measurements report a significant enhancement of this ratio over the SM expectation. Using the fitted values of $\rho_{D^*}^2$, $R_1(1)$ and $R_2(1)$ and the associated uncertainties we obtain \begin{align} R(D^*)_{\rm SM} & = 0.242 \pm 0.005 \, , \end{align} based on a value of $R_0(1) = 1.14 \pm 0.11$ from Ref.~\cite{Fajfer:2012vx} for the form factor ratio unconstrained by light lepton measurements.
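The significance of the tension between this prediction and the experimental world average $R(D^*)_{\rm wa} = 0.310 \pm 0.015 \pm 0.008$ quoted below follows from simple Gaussian combination of the uncertainties in quadrature, as the short cross-check shows:

```python
import math

# R(D*) values from the text: SM prediction (this work) and world average.
r_sm, err_sm = 0.242, 0.005
r_wa, err_stat, err_sys = 0.310, 0.015, 0.008

# Combine the three uncertainties in quadrature and express the
# difference in units of the combined standard deviation.
sigma = math.sqrt(err_sm**2 + err_stat**2 + err_sys**2)
z = (r_wa - r_sm) / sigma  # about 3.8 standard deviations
```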
This ratio is slightly lower than the prediction from Ref.~\cite{Fajfer:2012vx} of $ R(D^*)_{\rm SM} = 0.252 \pm 0.003$ and in tension with the current world average~\cite{hfag} \begin{align} R(D^*)_{\rm wa} & = 0.310 \pm 0.015 \pm 0.008 \, , \end{align} where the first error is statistical and the second from systematic uncertainties. The tension between the predicted and the observed values is approximately $3.8$ standard deviations. \vspace{0.3cm} \section*{Acknowledgments} We thank Stefan Schacht and Andrew Kobach for pointing out an inconsistency in the $\left| V_{cb} \right|$ fit results. We thank the KEKB group for the excellent operation of the accelerator; the KEK cryogenics group for the efficient operation of the solenoid; and the KEK computer group, the National Institute of Informatics, and the PNNL/EMSL computing group for valuable computing and SINET5 network support. We acknowledge support from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan, the Japan Society for the Promotion of Science (JSPS), and the Tau-Lepton Physics Research Center of Nagoya University; the Australian Research Council; Austrian Science Fund under Grant No.~P 26794-N20; the National Natural Science Foundation of China under Contracts No.~10575109, No.~10775142, No.~10875115, No.~11175187, No.~11475187, No.~11521505 and No.~11575017; the Chinese Academy of Science Center for Excellence in Particle Physics; the Ministry of Education, Youth and Sports of the Czech Republic under Contract No.~LG14034; the Carl Zeiss Foundation, the Deutsche Forschungsgemeinschaft, the Excellence Cluster Universe, and the VolkswagenStiftung; the Department of Science and Technology of India; the Istituto Nazionale di Fisica Nucleare of Italy; the WCU program of the Ministry of Education, National Research Foundation (NRF) of Korea Grants No.~2011-0029457, No.~2012-0008143, No.~2014R1A2A2A01005286, No.~2014R1A2A2A01002734, No.~2015R1A2A2A01003280,
No.~2015H1A2A1033649, No.~2016R1D1A1B01010135, No.~2016K1A3A7A09005603, No.~2016K1A3A7A09005604, No.~2016R1D1A1B02012900, No.~2016K1A3A7A09005606, No.~NRF-2013K1A3A7A06056592; the Brain Korea 21-Plus program and Radiation Science Research Institute; the Polish Ministry of Science and Higher Education and the National Science Center; the Ministry of Education and Science of the Russian Federation and the Russian Foundation for Basic Research; the Slovenian Research Agency; Ikerbasque, Basque Foundation for Science and the Euskal Herriko Unibertsitatea (UPV/EHU) under program UFI 11/55 (Spain); the Swiss National Science Foundation; the Ministry of Education and the Ministry of Science and Technology of Taiwan; and the U.S.\ Department of Energy and the National Science Foundation. \clearpage
\section{Introduction} Since their first introduction, Online Social Networks (OSN) have been deeply investigated for possible implications of the online public debate on political processes~\cite{Adamic2005}. In the last decade, the centrality of OSN for political communications and debates has steadily increased: OSN represent one of the most used tools for citizens to form an opinion~\cite{eurobarometer2019}. It is not surprising, then, that political parties use them extensively to carry out a sort of never-ending propaganda. Although in the literature there are different opinions on the impact that a particular grouping of users in OSN can have on their offline behavior~\cite{Dubois2018,Valensise2021a,Gallotti2021}, it is undeniable that the online social environment is strongly polarized. The origin of such polarization has been deeply discussed in the sociological literature~\cite{Urman2019,Yarchi2020} and seems to be extremely dependent on a country's party system~\cite{Barbera2015}. In particular, the concepts of \emph{selective exposure}, \emph{confirmation bias}, \emph{echo chambers} and \emph{filter bubbles} have had great relevance in the literature. Selective exposure leads people to prefer information that confirms their preexisting beliefs~\cite{Lazer1094,gangware2019weapons}, while confirmation bias makes information consistent with one's preexisting beliefs more persuasive~\cite{DelVicario2016}. Such phenomena imply the formation of groups of users characterised by following the same information sources, e.g., in terms of news outlets and personal opinions. These groups are thus enclosed in so-called \emph{echo chambers}: `a bounded, enclosed media space that has the potential to both magnify the messages delivered within it and insulate them from rebuttal'~\cite{jamieson08echo,Garrett2009,DelVicario2016}.
Echo chambers, by being impervious to information coming from outside that may contradict the pre-existing views of the chamber members, are believed to strongly contribute to the polarization of the online debate~\cite{Zollo2017}. Polarization is also fomented by \emph{filter bubbles}. This paradigm was first introduced by the activist Eli Pariser in 2011~\cite{Pariser11hide}: personalised results provided by search engines and shown in social media feeds trap users in a bubble of information they like, keeping them away from data and viewpoints considered less valuable, but that could challenge their beliefs. Echo chambers and filter bubbles are similar concepts, with the difference that the latter do not involve an active participation of the user in deciding what information to be exposed to (the bubble being the result of customization algorithms), while the former are still bubbles, but ones in which users may enclose themselves voluntarily. \paragraph{Discourse and discursive communities} Whether circulating within an echo chamber or suggested by recommendation algorithms, the type of information users come across online is fundamental in reinforcing (or not) the division into `closed' groups. Nevertheless, the study of the interactions between users is also of great interest for detecting polarization phenomena. The term \emph{discourse community} was coined in 1982 and indicates `groups that have goals or purposes, and use communication to achieve these goals'~\cite{borg03discourse}. A discourse community is itself immaterial, and this tends to project it onto the forum on which it operates~\cite{porter92audience}. Thus, with the advent of OSN, discourse communities were projected onto the platforms themselves~\cite{kehus10definition}: `A discourse community can be viewed as a social network, built from participants who share some set of communicative purposes'.
According to Berkenkotter~\cite{berkenkotter93}, `just as the digital world is constantly evolving, discourse communities continually define and redefine themselves through communications among members'. In the discourse community definition, we implicitly know the identities of the individuals forming the community. In the case of Twitter, however, this is only partially true, since we have trustworthy information only about a small minority of accounts. For this reason, we prefer to use the term \emph{discursive communities}, as introduced in Ref.~\cite{Radicioni2021a} to identify groups of users that are connected by non-trivial patterns of discourse, but for which we have limited information about the identity of the group itself. Nevertheless, since we can \emph{infer} the discourse community of the discursive community by looking at a set of non-trivial data characterising the group, such as the most frequent keywords used therein, the difference is more formal than substantial. Therefore, in the following we will use the two terms interchangeably. To detect discursive communities in OSN, the first contributions applied mixed approaches to the political debate on Twitter~\cite{Conover2011, Conover2011a, Conover2012}. These works considered the political debate on Twitter about the US presidential election campaign, i.e., a `perfectly polarized' one in which two opposite fronts face each other. The authors manually annotated the most frequent keywords characterizing Republicans' and Democrats' narratives and used them to infer the political orientation of the accounts using them. The orientation of accounts not using hashtags was later inferred using a label propagation algorithm~\cite{Raghavan2007b}. Remarkably, a clear partition into two distinct groups of users, supporters of the two political parties, was observed in the \textit{retweet network} only (the network of users sharing content created by others).
Finally, using a label propagation algorithm on the retweet network, the authors were able to successfully assign to all accounts the proper political orientation, which, in the present context, translates into the correct discursive community.\\ Every country has a different way in which opinions are polarised. This is due to the various party systems and electoral laws and, in principle, there could be more than just two fronts~\cite{Barbera2015}. A methodology for detecting discourse communities less susceptible to human error should therefore arise from the data directly, rather than being based on \emph{a priori} manual annotation. The approach first proposed in Ref.~\cite{Becatti2019} has the desired property: the idea is to infer the various discursive communities starting from accounts whose identity is certified by the social network itself. On Twitter, these are the so-called \textit{verified} accounts. This class of accounts tends to produce new content rather than retweet content created by others~\cite{Becatti2019}. Since we trust information regarding their identities, the issue is the identification of the discursive communities anchored to them, something that can be done using their interactions with `standard' users (based on the results by Conover et al.~\cite{Conover2011, Conover2011a, Conover2012}, in terms of retweets). Let the reader consider a pair of verified Twitter users. If they share a large number of retweeters, it is reasonable to think that they attract `similar' users. In this sense, that pair of accounts is perceived to belong to the same discursive community, sharing similar views and ideas. Nevertheless, it is hard to state \emph{a priori} how many common retweeters two verified users should have in order to be considered `similar'; for this reason, a maximum entropy null-model is used as benchmark~\cite{Cimini2018}. (More technical details on this construction can be found in Section~\ref{sec:disc}.)
The labels for verified users are then propagated, following the same approach as in Refs.~\cite{Conover2011, Conover2011a, Conover2012}. In the present paper, we follow this approach, which performs well on manually annotated datasets~\cite{Saracco2022}.\\ The recent literature has extensively analysed online debate and discourse communities, focusing variously on coordinated activities in discursive communities~\cite{Becatti2019, Caldarelli2020,Bruno2021}, on the semantic network associated with the various discursive communities~\cite{Radicioni2021a,Patuelli2021, Radicioni2021b}, or on their exposure to disinformation campaigns~\cite{Caldarelli2021, Mattei2021}. In the present paper, we tackle the analysis of the network structure of discursive communities: we collect and study 8 thematic Twitter datasets, on topics ranging from sports, to Covid-19, to political elections and immigration policies. Our main result is that \textit{almost all the discourse communities therein feature a bow-tie structure.}\\ \paragraph{Bow-ties} Bow-tie structures were initially introduced by Broder et al. in order to study the structure of the World Wide Web (WWW)~\cite{Broder2000}. The authors represented the WWW as a directed network in which webpages are the nodes and the hyperlinks connecting them are the edges. Broder et al. noticed that the network displays a huge Weakly Connected Component (WCC), i.e., the maximal subgraph in which all nodes can be reached by any other one in the same subgraph, disregarding the direction of the links. This WCC includes more than 75\% of all nodes. The WCC breaks into three main pieces: a Strongly Connected Component (SCC), in which each node can be reached by any other one in the same block, following the direction of the links; a group of nodes that can reach the SCC, without being reached by it (called IN); a group of nodes that can be reached by the SCC, but that cannot reach it (the OUT block).
The observation is that the SCC is the most populated sector, followed by the IN and the OUT sectors. Most of the websites can be found in the SCC, linking to each other; the IN sector was instead mostly composed of search engines, while the OUT one includes authorities, such as Wikipedia. Yang et al.~\cite{Yang2011} refined the partition of the structure in~\cite{Broder2000}, introducing INTENDRILS, OUTTENDRILS, TUBES and OTHERS. The entire situation is pictorially represented in Fig.~\ref{fig:structure}. \begin{figure}[ht!] \centering \includegraphics[scale=0.8]{structure.png} \caption{\textbf{The seven sectors of Yang's bow-tie structure.}} \label{fig:structure} \end{figure} \paragraph{Results in a nutshell} In the case of our 8 thematic datasets, we find that a bow-tie structure is present in those discourse communities debating 1) politics, as in the case, e.g., of election campaigns, and 2) society, e.g., the proper response to the pandemic or the appropriate management of migration fluxes. Instead, bow-ties are hardly present when the debate is about less socially relevant topics such as sports (this confirms what was observed in Ref.~\cite{Barbera2015}). There are two relevant points in observing the presence of bow-tie structures in discursive communities: how big the bow-tie is with respect to the entire discursive community (a feature that is called \textit{uninformative}, \textit{weak} or \textit{strong} bow-tie in the main text) and how random the presence of this structure is (i.e., its statistical significance). Regarding the first point, when the bow-tie is informative, even in the worst case, it represents more than 80\% of all nodes in the discursive community, i.e., much more than what Broder et al. observed for the WWW.
Regarding the second point, in order to be sure that the observed bow-ties are not due to a random organization of links only, we compared the observed quantities with a maximum entropy null-model for directed networks, conserving the in- and out-degree sequences~\cite{Mastrandrea2014}. The results show that the dimensions of most of the bow-tie sectors are statistically significant, i.e., they carry a signal that cannot be due to the degree sequence only. In this sense, the presence of a bow-tie structure is an extremely non-trivial feature of the system.\\ We can add more detail to the analysis of this structure. When the bow-tie is informative, we observe two cases: the OUT-dominant and the INTEND-dominant ones, depending on which sector is the largest (respectively, OUT or INTENDRILS). The OUT sector has access to all information produced in the discursive community and, in particular, to that produced by the most active block, the SCC. Thus, in principle, accounts in an OUT-dominant bow-tie should be better informed regarding the content shared in the discursive community. Instead, in the INTEND-dominant bow-ties, the most crowded sector is the INTENDRILS one, i.e., the retweeters of IN that are not retweeted by anyone else and that cannot access all the content created by the SCC.\\ In principle, it would be desirable to have an OUT-dominant bow-tie: when the OUT sector is the most populated, there are many accounts that are exposed to information from all other sectors. This should give the accounts a multi-faceted, pluralistic knowledge. However, we carry out an analysis of the quality of content produced in the various sectors of the bow-ties, and our outcome returns a different picture. Indeed, the most active block is the SCC, which, in discursive communities affected by m/disinformation, is responsible for the greatest flux of content from unreliable sources.
In this sense, since the greatest block is directly hit by questionable content produced by the SCC, the OUT-dominant bow-ties are exposed to m/disinformation campaigns. This creates local infodemics. According to WHO, ``\emph{infodemics are an excessive amount of information about a problem, which makes it difficult to identify a solution}"\footnote{Coronavirus disease 2019 (COVID-19) Situation Report – 45:~\url{https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200305-sitrep-45-covid-19.pdf?sfvrsn=ed2ba78b_4}}.\\ Summarising, our contribution is twofold: \begin{itemize} \item almost all the discursive communities in 8 datasets of Twitter debates, on different topics in different countries, display a bow-tie structure which is statistically significant; \item relating the presence of the bow-tie to the concept of infodemics and the production of controversial content, we show that in most cases the majority of users is exposed to untrustworthy information. \end{itemize} We would like to remark that the results in this manuscript do not represent the only contribution that connects the diffusion of m/disinformation to the network structure (see, for instance, the work in~\cite{Artime2020,Guarino2021, Valensise2021, Castioni2021}, just to consider some of the most recent contributions). However, this is the first time that the diffusion of m/disinformation is related to the presence of the bow-tie structure in discursive communities. \section{Datasets} In order to make our analysis as general as possible, we consider several Twitter datasets across different countries and about different topics. The data were collected using the Twitter Search API. In detail: \begin{itemize} \item \textbf{Covid-19 datasets}: we explore Twitter posts containing keywords related to the Covid-19 pandemic\footnote{In particular, the keywords for tweets collection were ``coronavirus", ``ncov", ``covid", ``SARS-CoV2", ``\#coronavirus", ``\#coronaviruses", ``\#WuhanCoronavirus", ``\#CoronavirusOutbreak", ``\#coronaviruschina", ``\#coronaviruswuhan", ``\#ChinaCoronaVirus", ``\#nCoV", ``\#ChinaWuHan", ``\#nCoV2020", ``\#nCov2019", ``\#covid2019", ``\#covid-19", ``\#SARS\_CoV\_2", ``\#SARSCoV2", ``\#COVID19". The subset of Italian messages has also been investigated in Ref.~\cite{Caldarelli2021}.}, in different languages and therefore diffused in different countries. In particular, we consider the \textbf{Italian}, \textbf{German} and \textbf{French} debates about the pandemic, in the period between February and April 2020. The Italian dataset consists of 4,471,916 tweets published between February 17 and April 23.
The German dataset contains 1,552,582 tweets posted between February 10 and April 23, while the French one has 3,060,197 posts published between March 23 and April 7. The different time frames for data collection have been chosen according to the intensity of the Twitter traffic. \item \textbf{Dutch elections dataset}: we collect Twitter posts about the national elections in the Netherlands in 2021. The keywords used for downloading data were ``tweedekamer", ``verkiezingen", ``kabinet", ``coalitie", ``stem", ``stembus", ``verkiezingen2021"\footnote{Respectively, ``House of Representatives", ``elections", ``cabinet", ``coalition", ``vote", ``ballot box", ``elections2021".} and only messages in Dutch were selected. The dataset contains 1,002,696 tweets posted between February 2 and March 31, 2021. \item \textbf{Italian debate on migrants}: we select Twitter posts shared in Italy with keywords regarding the discussion about the migration flows from Northern Africa to the Italian coasts. The dataset consists of 1,082,029 posts, published between January 23, 2019 and February 22, 2019. The dataset is described in more detail in Ref.~\cite{Caldarelli2020}. \item\textbf{Italian debate on the Astrazeneca vaccine}: we examine 583,327 Twitter posts published in Italian, regarding the discussion about the safety of the Astrazeneca vaccine against Covid-19: the keywords used for the download were ``astrazeneca", ``aifa", ``ema", ``trombosi"\footnote{Respectively, ``astrazeneca", "Italian Medicines Agency", ``European Medicines Agency" and ``thrombosis".}. The dataset contains posts shared between March 15, 2021 and May 15, 2021. \item\textbf{Italian and Turkish EURO2020}: we analyze 298,538 Italian tweets and 522,363 Turkish ones about the European Football Championship EURO2020; the keyword used for the download was simply ``\#euro2020". The tweets were published between, respectively, June 11-13 and June 11-23, 2021.
\end{itemize} So as not to burden the presentation, in the following we will present the results for the Italian Covid-19, Italian EURO2020 and Turkish EURO2020 datasets. We will show results for the other datasets wherever they differ substantially from the Italian ones. However, all graphics and results about the other datasets can be found in the Supplementary Material. \section{Results} \subsection{Discursive communities}\label{sec:disc} Our analysis focuses on the structure of the network of retweets, for each dataset. Retweeting a post is one of the possible ways in which people can interact on Twitter and consists of sharing the content of a tweet written by another user. It usually means endorsing the post content and also has the effect of raising its visibility~\cite{Conover2011, Conover2011a, Conover2012}. We start by distinguishing between \emph{verified} and \emph{non-verified} accounts. The former denote Twitter users whose identity has been verified by the social platform. This procedure is usually adopted to certify the accounts of renowned people, organizations, and figures of public interest in general, such as politicians, journalists, political parties, newspapers and TV-channels. We place the verified accounts on one layer of a bipartite network\footnote{In a bipartite network, nodes belong to two different sets, called layers. An edge can exist only between vertices placed on different layers.} and the non-verified ones on the other, again considering links as retweets between them\footnote{In the present construction, we disregard the information about the direction of the retweet, since we are interested in the interaction between the two classes of users. Nevertheless, as mentioned above, verified users tend to create new content (i.e., tweet) rather than share content created by others (retweet).}.
The main idea is to anchor the definition of discursive communities to verified users, since they usually introduce new content and posts: as observed in many other studies~\cite{Becatti2019,Caldarelli2020, Caldarelli2021,Radicioni2021a, Radicioni2021b, Mattei2021, Gonzalez2021}, verified users are, on average, much more retweeted than common users. Such a procedure performs well, since the various discursive communities turn out to be coherent in terms of verified users belonging to the same political front; in a further analysis we compare this procedure with annotated datasets, to better quantify its performance~\cite{Saracco2022}. Following the methodology introduced in Becatti et al.~\cite{Becatti2019}, we count the common neighbors of each pair of verified users or, in simpler words, the number of non-verified users that have interacted (by retweeting or being retweeted) with the same pair of verified ones. The aim is to project the bipartite network onto the layer of the verified accounts, establishing an edge between two of them if the number of their common neighbors is significantly higher than what is expected from a proper null-model. When this happens, we can assert that the two verified users refer to the same audience and, therefore, they probably share similar content and opinions. The statistical significance of the number of common neighbors can be established only by comparing it with the predictions of an accurate benchmark, which, in this case, is represented by the Bipartite Configuration Model (\emph{BiCM},~\cite{Saracco2015a}), an entropy-based model suited for bipartite networks. A complete description of the model and the projecting procedure can be found in Section~\ref{sec:methods}. The result of the above procedure is a monopartite network of verified users.
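A toy version of this projection step can be sketched as follows. The account names and retweeter sets are invented, and a hypergeometric test is used as a simplified stand-in for the full BiCM benchmark of Ref.~\cite{Saracco2015a}; it plays the same role of asking whether an overlap is larger than expected when each verified user's degree is kept fixed.

```python
from itertools import combinations
from scipy.stats import hypergeom

# Toy bipartite data: verified user -> set of non-verified retweeters.
# (Invented account names, for illustration only.)
neighbors = {
    "news_a":  {1, 2, 3, 4, 5},
    "news_b":  {1, 2, 3, 4, 5},
    "party_x": {6, 7, 8},
}
n_users = 8  # size of the non-verified layer

edges = []
for u, v in combinations(neighbors, 2):
    shared = len(neighbors[u] & neighbors[v])
    # P(common neighbors >= shared) if retweeters were assigned at random,
    # keeping each verified user's degree fixed (hypergeometric null).
    p = hypergeom.sf(shared - 1, n_users, len(neighbors[u]), len(neighbors[v]))
    if p < 0.05:  # keep only statistically significant overlaps
        edges.append((u, v))
```

Here only the pair of news accounts survives the projection: they share all five retweeters, an overlap far larger than expected at random, while the party account shares none.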
We further obtain a partition into communities by implementing the Louvain algorithm~\cite{Blondel2008} for the optimization of the modularity, with a slight modification. The standard definition of the modularity~\cite{Newman2004} implements the Chung-Lu null-model~\cite{Chung2002}. The Chung-Lu null-model can be considered a sparse-matrix approximation of the entropy-based null-model defined in Ref.~\cite{squartini2011analytical} and, indeed, returns wrong results in the presence of strong hubs~\cite{Cimini2018}. We thus replaced the Chung-Lu null-model in the modularity with the unipartite configuration model (\emph{UCM}) defined in Ref.~\cite{squartini2011analytical}; more details can be found in Section~\ref{sec:methods}. For all the datasets, looking at the members of each discursive community, we can \emph{a posteriori} associate the latter with a political wing, using the available information for verified users. We thus obtain clusters of users (even if we cannot characterise them on the basis of other topological quantities~\cite{zlatic2009rich}) which represent the main wings of the political scenario of each of the examined countries. In addition, in almost all the datasets, we also identify a Media cluster, with official accounts of newspapers, TV-channels, radio and other media.\\ In the Appendix, the interested reader can find a complete description of all the discursive communities for the Italian Covid-19 dataset. For the other datasets, a brief description of their discursive communities is in the Supplementary Material. \subsubsection{Political orientation of non-verified users} The next step in our procedure consists of extending the discursive communities to non-verified accounts. In more detail, following the approach in Ref.~\cite{Caldarelli2020}, we use the membership of verified users as (fixed) seeds for the label propagation algorithm proposed by Raghavan et al.~\cite{Raghavan2007b} on the retweet network.
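This seeded propagation, whose details on the retweet network are given below, can be sketched minimally on a toy network. Account names and adjacency are invented; the sketch is synchronous and deterministic, whereas the actual analysis uses the asynchronous algorithm of Ref.~\cite{Raghavan2007b} with random tie-breaking:

```python
from collections import Counter

# Toy interaction lists: each non-verified account and the accounts
# it exchanges retweets with (invented names, illustration only).
adj = {
    "a": ["v1", "b"], "b": ["v1", "a"],
    "c": ["v2", "d"], "d": ["v2", "c"],
}
labels = {"v1": "PD", "v2": "DX"}  # verified seeds, kept fixed
seeds = set(labels)

for _ in range(10):  # a few synchronous sweeps suffice on this toy graph
    for node, nbrs in adj.items():
        if node in seeds:  # seed labels are never updated
            continue
        votes = Counter(labels[n] for n in nbrs if n in labels)
        if votes:
            # adopt the most common label among already-labelled neighbours
            labels[node] = votes.most_common(1)[0][0]
```

Each non-verified account ends up with the label of the verified seed dominating its neighbourhood, which is the mechanism by which the discursive communities are extended beyond the verified layer.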
This network is a monopartite and directed one in which nodes represent users and a link between them indicates that one user has retweeted the other at least once: the single edge starts from the retweeted user and is directed towards the one who retweets. Let us recall that, in case the algorithm cannot find a dominant label for a specific vertex (i.e., in case of a tie), it randomly removes some of the edges attached to that vertex and repeats the procedure: for this reason, we run the label propagation 500 times and assign to each node the most frequent label (actually, the noise in the assignment of the labels is extremely limited). Fig.~\ref{fig:disc} shows the percentages of nodes placed in the various discursive communities for the Italian Covid-19 dataset (a detailed description of the various communities can be found in the caption of the figure). Considering also the other datasets, in almost all the cases, the label propagation procedure could assign a label to approximately 90\% of the nodes. As expected, in the Covid-19 datasets, the Media community is always the most numerous one: updates on the spread of the pandemic, written by the official accounts of various media, received a great amount of retweets. \begin{figure} \centering \includegraphics[scale=0.35]{discursive_covid_ita.png} \caption{\textbf{Percentages of nodes in each discursive community, Italian Covid-19 dataset.} Due to the presence of politicians and political parties from a specific political area, the various discursive communities are named after their political alignment. ``PD" stands for the Italian Democratic Party (\emph{Partito Democratico}); \emph{Italia Viva} (``IV") is the political party of the former Prime Minister and former PD secretary Matteo Renzi, while M5S is the ``Movimento 5 Stelle", a political movement born on the web and the most represented party in the Italian parliament at the time of the data collection.
``FI" stands for \emph{Forza Italia}, the political party of the former Prime Minister Silvio Berlusconi, while the ``DX" (\emph{Destra}) community includes right-wing parties such as Lega and Fratelli d'Italia. The most crowded discursive community is the Media one, which contains most of the online news outlets and newspapers. The accounts for which it was not possible to assign a discursive community are in grey.} \label{fig:disc} \end{figure}\\ As highlighted in other works~\cite{Becatti2019,Caldarelli2020,Caldarelli2021, Radicioni2021a,Radicioni2021b,Bruno2021,Mattei2021}, the presence of well-defined discursive communities is the signal that users on Online Social Networks (OSNs) are strongly polarized, i.e., they tend to split into groups, each one sharing the same opinions and political orientation. \subsection{The bow-tie structure} The original concept of the bow-tie by Broder et al.~\cite{Broder2000} sees the WWW divided into three main sectors: a Strongly Connected Component (SCC), in which each node can be reached by any other one in the same block, following the direction of the links; a group of nodes that can reach SCC, without being reached by it (called IN); and a group of nodes that can be reached by SCC, but that cannot reach it (the OUT block). The description by Broder et al. was subsequently refined by Yang et al.
\cite{Yang2011}, who split the network into seven distinct parts\footnote{In the following, we will call the various parts \emph{blocks} or \emph{sectors} interchangeably.}: \begin{itemize} \item the greatest Strongly Connected Component (\textbf{SCC}); \item the \textbf{IN} block; \item the \textbf{OUT} block; \item the \textbf{TUBES} sector, including nodes reachable from IN and accessing OUT, but not being part of SCC; \item the \textbf{INTENDRILS} group, collecting all those nodes pointed to by IN that cannot reach the OUT block; \item the \textbf{OUTTENDRILS} sector, containing all those nodes pointing to OUT that cannot be reached from IN; \item the \textbf{OTHERS} group, including all those nodes that cannot be placed in one of the previous six sectors. \end{itemize} Fig.~\ref{fig:structure} shows a schematic representation of the bow-tie structure defined in Ref.~\cite{Yang2011}. The seven groups of nodes are mutually disjoint. We remark that every directed network can be divided into blocks using the bow-tie decomposition. Nevertheless, as a rule of thumb, the bow-tie representation is {\it informative} about the network structure if the number of nodes in blocks other than OTHERS is greater than, or of the same order as, the number in OTHERS: the greater the impact of the non-OTHERS blocks, the more informative the bow-tie structure is. \subsubsection{The bow-tie structure of the discursive communities} In the present manuscript, we investigate the presence of a bow-tie structure in the discursive communities of the retweet network, i.e., in the network composed of Twitter accounts (the nodes) and retweets (the links connecting the original author to the retweeter). Results show that, when considering political online debates, a bow-tie structure is informative in almost every discursive community of our datasets, while for non-political debates (as in the case of Euro2020), the bow-tie structure is less informative.
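The seven-sector decomposition can be computed directly from the reachability relations listed above; a minimal, pure-Python sketch (the toy graph used to test it is, of course, hypothetical):

```python
from collections import deque

def _reachable(adj, sources):
    """BFS: the set of all nodes reachable from `sources`."""
    seen, queue = set(), deque(sources)
    while queue:
        for w in adj.get(queue.popleft(), ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def bowtie_decomposition(adj):
    """Seven bow-tie sectors of Yang et al. for a directed graph given as
    a dict node -> list of successors (every node appears as a key)."""
    radj = {v: [] for v in adj}                 # reversed graph
    for v, ws in adj.items():
        for w in ws:
            radj[w].append(v)
    # largest strongly connected component: nodes reaching and reached by v
    scc = max(({v} | (_reachable(adj, [v]) & _reachable(radj, [v]))
               for v in adj), key=len)
    out_ = _reachable(adj, scc) - scc           # reachable from SCC
    in_ = _reachable(radj, scc) - scc           # reaching SCC
    rest = set(adj) - scc - out_ - in_
    from_in = _reachable(adj, in_)              # reachable from IN
    to_out = _reachable(radj, out_)             # reaching OUT
    tubes = rest & from_in & to_out
    intendrils = (rest & from_in) - to_out
    outtendrils = (rest & to_out) - from_in
    others = rest - from_in - to_out
    return {"SCC": scc, "IN": in_, "OUT": out_, "TUBES": tubes,
            "INTENDRILS": intendrils, "OUTTENDRILS": outtendrils,
            "OTHERS": others}
```

For the networks analysed here, \texttt{adj} would be built from the retweet edge list of each discursive community; the quadratic SCC search is for illustration only, a linear-time algorithm (e.g., Tarjan's) being preferable at scale.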
Euro2020 itself records the extreme case in which more than one half of the nodes are in the OTHERS sector: we say that such a bow-tie structure is {\it uninformative} -- see, for example, the case of the Turkish debate during Euro2020 in Fig.~\ref{fig:tur_euro2020}. We remark that the presence of informative bow-ties in many of the discursive communities investigated here is not a trivial result: indeed, there are no evident reasons for expecting such a distribution of the nodes \emph{a priori}. When a bow-tie structure is informative, we observe two recurrent situations in the investigated datasets and, accordingly, we classify the bow-tie into two different categories: \begin{itemize} \item when the OTHERS block is smaller than SCC, we speak of \textbf{strong} bow-tie structures; \item when the OTHERS block is greater than SCC, we speak of \textbf{weak} bow-tie structures. \end{itemize} Furthermore, when the bow-tie is informative, be it weak or strong, we can categorize it in two further ways, which we call, respectively, \textbf{OUT-dominant} and \textbf{INTEND-dominant}. In OUT-dominant bow-ties, most of the nodes of the bow-tie are placed in the OUT sector. As a rule of thumb, OUT-dominant bow-ties are more frequent when the bow-tie is strong, but there are counter-examples. An INTEND-dominant bow-tie is instead one in which most of the nodes are located in the INTENDRILS sector, i.e., one in which most users retweet accounts from the IN zone and have little to no interaction with the users in the other sectors. INTEND-dominant bow-ties are in general more frequent among weak bow-ties. We highlight that it is not surprising that the most crowded blocks in the bow-ties are OUT and INTENDRILS: it was already observed in Ref.~\cite{Gonzalez-Bailon2013} that most users tend to mostly retweet content created by others and limit their production of new messages.
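For concreteness, the classification just introduced can be condensed into a small helper (a sketch: we read ``of the same order'' simply as ``not smaller'', and take the largest non-OTHERS block as the dominant one):

```python
def classify_bowtie(sizes):
    """Classify a bow-tie from its sector sizes (dict: sector -> n. of nodes)."""
    others = sizes.get("OTHERS", 0)
    non_others = sum(n for k, n in sizes.items() if k != "OTHERS")
    if non_others < others:
        return "not informative"
    strength = "strong" if others < sizes.get("SCC", 0) else "weak"
    dominant = max((k for k in sizes if k != "OTHERS"), key=lambda k: sizes[k])
    if dominant in ("OUT", "INTENDRILS"):
        tag = "OUT-dominant" if dominant == "OUT" else "INTEND-dominant"
        return f"{strength}, {tag}"
    return strength
```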
The difference between OUT-dominant and INTEND-dominant bow-ties lies in the {\it access to information}: OUT-dominant bow-ties are those in which the majority of users can access almost all messages exchanged over the discursive community, while in the INTEND-dominant ones the majority of users limit their retweets to the content produced by accounts in the IN block. Otherwise stated, the main difference between INTEND- and OUT-dominant bow-tie structures is that the former displays a more `hierarchical' structure, i.e., few accounts (those in the IN sector) introduce new content and many others just share it (the INTENDRILS sector). Instead, in OUT-dominant bow-ties, the greatest part of the users (i.e., the OUT block) not only shares posts by accounts in the IN block, but also retweets content from users in SCC, OUTTENDRILS and TUBES. We argue that this behaviour, while more `democratic', is, at the same time, riskier. In fact, we will see in Subsection~\ref{ssec:user_loc} that users with high visibility who introduce new content on Twitter can be found mostly in the IN sector: typically, they are verified accounts. As observed in other studies, see, e.g., Ref.~\cite{Caldarelli2021}, verified users tend to limit the spreading of low-quality content. We may argue, then, that users interacting mostly with verified users are safer from mis-/disinformation campaigns. In the following, we will see that the reputability of the shared information confirms our hypothesis, and we will come back to the matter.\\ Fig.~\ref{covid_ita} displays the bow-tie structure of each discursive community for the Italian Covid-19 dataset (analogous plots for the other datasets can be found in the Supplementary Material). A single node represents one bow-tie sector and its dimension is proportional to the number of accounts in it. First, according to the definitions given above, the bow-tie structure is informative in all the discursive communities. In the cases of DX and IV, the bow-tie is particularly informative: its blocks include, respectively, 96.5\% and 98.3\% of the entire discursive community. Second, different discursive communities display bow-ties with different strengths. For instance, the DX and IV discursive communities display strong bow-ties, while M5S, Media, PD and FI have weak ones, since their SCCs are relatively small (and smaller than OTHERS). Third, the graph shows that the DX, IV, MEDIA and FI communities display OUT-dominant bow-ties, in which the OUT sector is the biggest one; considering all the investigated datasets, OUT-dominant bow-ties represent the most frequent configuration, accounting for 21 out of 31 communities. Instead, 6 out of 31 discursive communities display INTEND-dominant bow-ties (such as PD and M5S in Fig.~\ref{covid_ita}). We remark that, in all our datasets, all the right-wing discursive communities display bow-ties with an OUT-dominant structure; in most of the cases, these bow-ties are also strong.
The colours of the nodes in Fig.~\ref{covid_ita} will be explained in the next subsection. \begin{figure} \centering \includegraphics[scale=0.3]{graphs_covid_ita.png} \caption{\textbf{The bow-tie structure of the discursive communities in the Italian Covid-19 dataset.} The dimension of the sectors is proportional to the number of nodes: the DX and IV discursive communities have strong bow-ties (the OTHERS block is smaller than SCC), while the others are weak (the OTHERS block is greater than SCC, while still being smaller than the weakly connected component of the bow-tie). The DX, IV, FI and MEDIA groups display an OUT-dominant bow-tie structure, with most of the nodes located in the OUT sector. The M5S and PD communities have an INTEND-dominant bow-tie structure, the INTENDRILS sector being the dominant one.\\ The colour of the blocks quantifies the distance between the observed dimensions and those predicted by the Direct Configuration Model (DCM). The observed dimension of the OTHERS sector is significantly smaller (at a significance level of $\alpha=0.01$) for all the communities, except PD. Remarkably, for INTEND-dominant bow-ties, other sectors too, such as SCC and INTENDRILS, are usually bigger than expected under the model.} \label{covid_ita} \end{figure} \subsubsection{Statistical significance of the bow-tie structure}\label{ssec:statistical_significance} It may be argued that the bow-tie structures featured by the discursive communities in our datasets are just an accident, due to the different roles of the various users in the debate. In fact, those accounts that have high out-degrees and low in-degrees are naturally in the IN sector; those that, vice versa, have high in-degrees and low out-degrees are in the OUT sector, and so on.
To test whether the presence of bow-ties is merely attributable to the behavioural characteristics of the accounts, we compare the dimensions of the different sectors, as observed in the real network, with those in a randomised system in which the in- and out-degree sequences are fixed. If the partition into the various bow-tie sectors were just a matter of the degree sequence, none of the dimensions of the various blocks should be statistically significant. Otherwise, we should observe a significant mismatch with respect to the expectation of the null-model. In order to have an unbiased benchmark, we build an entropy-based null-model that preserves the in- and out-degree sequences, while being maximally random in all other respects (see Ref.~\cite{Cimini2018} for a review on the subject). Summarising, starting from a real network, we consider the set of all possible graph realizations (the graph \emph{ensemble}) having the same number of nodes as the real system. Then, we assign to each representative of the ensemble a different probability of realization by maximising the entropy of the ensemble, while constraining the average value of some topological properties of the real network (in our case, the in- and out-degree sequences). In this way, even if a single realization of the ensemble does not display the network properties that we would like to preserve, the ensemble as a whole, on average, does.
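For the directed graphs considered here, this maximisation yields independent link probabilities of the same functional form as the bipartite model derived in Section~\ref{sec:methods}; as a sketch, with $x_i$ and $y_j$ denoting the exponentiated Lagrange multipliers controlling, respectively, the out-degree of node $i$ and the in-degree of node $j$, \begin{equation*} p_{ij}=\frac{x_i\,y_j}{1+x_i\,y_j},\qquad \langle k_i^{\text{out}}\rangle=\sum_{j\neq i}p_{ij}=k_i^{\text{out},*},\qquad \langle k_j^{\text{in}}\rangle=\sum_{i\neq j}p_{ij}=k_j^{\text{in},*}, \end{equation*} where the last two conditions, imposed for every node, fix the numerical values of the multipliers.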
In recent years, this procedure has been adopted to analyse financial and economic systems~\cite{squartini2011analytical,Squartini2013a,Squartini2013b, Picciolo2013, Mastrandrea2014, Gualdi2016a, Saracco2015a, Saracco2016, Saracco2016a, DiGangi2018, Squartini2018, Bardoscia2021, Straka2017, Gabrielli2019, Adam2019, Bruno2018, Cimini2021, DiVece2021,Lin2020,Vallarano2021}, biological networks~\cite{Straka2018, PayratoBorras2019, Bruno2020, Caruso2021} and online social networks~\cite{Becatti2018, Becatti2019, Caldarelli2020, Caldarelli2021, Radicioni2021a, Radicioni2021b, Mattei2021, Patuelli2021, Bruno2021}, and it has been shown to be effective in extracting the relevant structure from a real network~\cite{Parisi2020,Neal2021}. Here, we implement the Direct Configuration Model (\emph{DCM}), first introduced in Ref.~\cite{Mastrandrea2014} and implemented in the Python module \href{https://nemtropy.readthedocs.io/en/master/}{\texttt{NEMtropy}}~\cite{Vallarano2021}. More details on the exact derivation of the DCM can be found in Subsection~\ref{ssec:DCM}.\\ Going back to Fig.~\ref{covid_ita}, the colour of the circles indicates the agreement between the actual size of the bow-tie sectors and the size predicted by the DCM: we are interested in detecting both too ``big" and too ``small" blocks. In particular, the darker the colour of the sectors in Fig.~\ref{covid_ita}, the larger the $-\logten(\text{p-value})$ (i.e., the lower the p-value) and the greater the disagreement between the real system and the randomization. For each sector, the two-tailed p-value has been calculated from a sample of 1000 graphs generated by the DCM. The p-value tells us that a disagreement exists, but not in which direction.
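The two ingredients of this test, i.e., the empirical two-tailed p-value and the FDR validation used in Table~\ref{tab:pvalues}, can be sketched as follows (plain Python; \texttt{null\_samples} would be the 1000 block sizes measured on the DCM realizations):

```python
def two_tailed_pvalue(observed, null_samples):
    """Empirical two-tailed p-value of an observed sector size against the
    same quantity measured on a sample of null-model realizations."""
    n = len(null_samples)
    p_high = sum(1 for s in null_samples if s >= observed) / n
    p_low = sum(1 for s in null_samples if s <= observed) / n
    return min(1.0, 2.0 * min(p_high, p_low))

def fdr_validated(pvalues, alpha=0.01):
    """Benjamini-Hochberg FDR: flag the p-values that remain significant
    after correcting for multiple hypothesis testing."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    n_rejected = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            n_rejected = rank
    validated = [False] * m
    for rank, i in enumerate(order, start=1):
        validated[i] = rank <= n_rejected
    return validated
```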
For instance, looking at the DX bow-tie in Fig.~\ref{covid_ita}, the dimensions of both OTHERS and SCC have very small p-values, and thus disagree with the randomization; however, the OTHERS block is smaller than predicted by the DCM, while SCC is larger. \begin{table} \begin{tabular}{c||c|c|c|c|c|c|c} & \textbf{SCC} & \textbf{IN} & \textbf{OUT} & \textbf{TUBES} & \textbf{INTE.} & \textbf{OUTTE.} & \textbf{OTHERS} \\ \hline \hline \textcolor{orange}{\textbf{DX}}\textcolor{red}{$\bullet$} & $10^{-35}$* & 0.7 & 0.4 & $10^{-18}$* & $10^{-39}$* & $10^{-74}$* & 0*\\ \hline \textcolor{teal}{\textbf{M5S}}\textcolor{blue}{$\bullet$} & $10^{-15}$* & $10^{-18}$* & $10^{-48}$* & $10^{-9}$* & $10^{-58}$* & 0.5 & 0.0006*\\ \hline \textcolor{orange}{\textbf{IV}}\textcolor{red}{$\bullet$} & $10^{-90}$* & $10^{-8}$* & $10^{-81}$* & $10^{-12}$* & $10^{-72}$* & 0.0004* & 0*\\ \hline \textcolor{teal}{\textbf{PD}}\textcolor{blue}{$\bullet$} & 0.04 & 0.7 & 0.9 & 0.08 & 0.1 & 0.8 & 0.1\\ \hline \textcolor{teal}{\textbf{FI}}\textcolor{red}{$\bullet$} & 0.03 & 0.01 & 0.1 & 0.4 & 0.5 & 0.005 & $10^{-10}$*\\ \hline \textcolor{teal}{\textbf{MEDIA}}\textcolor{red}{$\bullet$} & $10^{-57}$* & $10^{-33}$* & 0.9 & $10^{-9}$* & $10^{-19}$* & 0.0002* & 0*\\ \hline \end{tabular} \caption{\textbf{P-values related to the various bow-tie sectors in the Covid-19 Italian dataset.} Strong bow-ties are marked in \textcolor{orange}{orange}, weak ones in \textcolor{teal}{teal}; \textcolor{red}{$\bullet$} indicates OUT-dominant bow-ties, \textcolor{blue}{$\bullet$} INTEND-dominant ones. If we set the statistical significance level to $\alpha=0.01$, we then have to correct for multiple hypothesis testing per block; here, we used the False Discovery Rate method (FDR,~\cite{Benjamini1995}). In the table, validated p-values are marked by an asterisk `$*$'.
The OTHERS block is statistically significant (in particular, it is smaller than in the randomization) for all discursive communities but the PD one. It is remarkable that the dimension of SCC is significant in all strong bow-ties, while that of OUT is significant only for the IV bow-tie.}\label{tab:pvalues} \end{table} Table~\ref{tab:pvalues} reports the exact p-values of the different blocks for the various bow-ties of Fig.~\ref{covid_ita}. The significance of the blocks for each bow-tie can be assessed by using the False Discovery Rate (FDR) correction~\cite{Benjamini1995}, setting the statistical significance level to $\alpha=0.01$. In the present case the correction is limited, due to the small number of blocks in the bow-tie. It is interesting to observe that, in both strong and weak bow-ties, the OTHERS block is statistically significant in all the discursive communities but PD. In particular, the dimension of the OTHERS block is much smaller than predicted by the null-model, so the presence of the bow-tie is not due to the degree sequence only. SCC is statistically significant (and bigger than expected) for all bow-ties but FI and PD. The IN block is often statistically significant and smaller than expected. We may notice that in the strong bow-tie of the IV discursive community the dimensions of all sectors are statistically significant, while none are in the PD bow-tie, which is the smallest discursive community. It is worth noting that the dimension of the discursive community also plays a role: due to the limited possible variability, smaller bow-ties feature more agreement with the model. \subsection{Verified users' distribution}\label{ssec:user_loc} Usually, verified accounts on Twitter belong to public figures and organizations, such as journalists, politicians, actors, political parties, media, and VIPs in general.
Previous studies show that verified users tend to introduce new content and have high visibility on the platform~\cite{Gonzalez-Bailon2013, Becatti2019, Caldarelli2020, Caldarelli2021, Radicioni2021a}. Thus, we expect to find them in the IN block. The results in Fig.~\ref{fig:histo_verified1} confirm this intuition: in the case of OUT-dominant bow-ties (leftmost panel), 33.2\% of verified users, on average, are in the IN sector. High percentages of verified users are also in the SCC block (23.5\%). In the case of INTEND-dominant bow-ties (central panel), the percentage of verified users in the IN group increases to 42.8\%; the second block by percentage of verified users is INTENDRILS (20.1\%). In those communities where the bow-tie structure is not informative (right panel, Fig.~\ref{fig:histo_verified1}), a high percentage (42.9\%) of verified users, on average, is in the OTHERS sector. In a few cases of non-informative bow-ties, verified users are mostly in the OUTTENDRILS sector: their messages hardly reach a big audience and are simply retweeted by a group of strong retweeters (the OUT sector), without catching the interest of the accounts in the SCC. Let us remark that, in the case of non-informative bow-ties, the dimensions of the OUT and SCC blocks are nevertheless limited. \begin{figure} \centering \includegraphics[scale=0.3]{distribution_verified.png} \caption{\textbf{Distribution of the percentage of verified users in each sector of the discursive communities with, respectively, OUT-dominant, INTEND-dominant and non-informative bow-ties.} Each bar-chart displays the average percentage of verified users in a specific sector, calculated respectively over all the OUT-dominant, INTEND-dominant and non-informative bow-ties. In the cases of OUT-dominant and INTEND-dominant bow-ties, the highest percentages of verified accounts can be found in the IN group.
Moreover, in OUT-dominant bow-ties, we can find a relevant percentage of verified accounts also in the SCC. Naturally, for those communities with no bow-tie structure, the verified accounts are mostly placed in the OTHERS sector and, to a lesser extent, in the OUTTENDRILS one.} \label{fig:histo_verified1} \end{figure} Fig.~\ref{fig:ver_covid_ita} reports the same bar-chart, about the presence of verified users, for the bow-ties of the Covid-19 Italian dataset. It is possible to observe that in OUT-dominant bow-ties - i.e., DX, IV, FI and MEDIA - verified users are mainly in the IN and SCC sectors. Also, in INTEND-dominant bow-ties, the INTENDRILS sector contains quite a number of verified users. Other user characterizations of the bow-tie blocks can be found in the appendices. \begin{figure}[ht!] \centering \includegraphics[scale=0.33]{verified_covid_ita.png} \caption{\textbf{Percentage of verified accounts in the bow-tie sectors for each discursive community of the Covid-19 dataset.} The bar-charts confirm that verified accounts are mainly located in the IN sector and, to a lesser extent, in the SCC one. Only for the PD group, which has an INTEND-dominant bow-tie structure, verified accounts are mostly placed in the INTENDRILS block. } \label{fig:ver_covid_ita} \end{figure} \subsubsection{Conservative groups} \begin{figure} \centering \includegraphics[scale=0.7]{conservatives_covid_ita.png} \caption{\textbf{Percentage of nodes and edges in SCC for the communities in the Italian Covid-19 dataset.}\\In the Italian Covid-19 dataset, the conservative, right-oriented discursive community (DX) has a more numerous and denser SCC, as displayed in the top two charts. The bottom chart shows that, also in terms of the number of links per node in SCC, DX again ranks first among the discursive communities.
These results hold for all the conservative groups in all the datasets under investigation.} \label{fig:conservatives} \end{figure} The bar-charts in Fig.~\ref{fig:conservatives} show the percentage of nodes, the percentage of edges and the number of edges per node in the Strongly Connected Component, for each discursive community of the Italian Covid-19 dataset. Not only is DX the community with the greatest number of nodes and links in SCC, but the link density of its SCC is also much greater than that of any other discursive community. Thus, the number of links in the SCC of DX is disproportionate to the number of nodes, resulting in a greater average degree per node. We found very similar behaviours for the right-oriented communities of the other datasets.\\ In fact, in all our datasets, the discursive communities of conservative groups (i.e., DX in the Italian dataset, AfD in the German one, Conservatives in the Dutch one) are those with the highest percentage of nodes and, especially, of edges within SCC. This peculiar feature signals the presence of a common (self-)organization of accounts in line with conservative ideas on Twitter. \begin{figure} \centering \includegraphics[scale=0.7]{newsguard_dx.png} \caption{\textbf{Bow-tie structure of the DX group and percentages of retweets containing URLs of untrustworthy webpages.} The DX community in the Italian Covid-19 dataset presents the highest number of retweets containing a link to untrustworthy webpages. Most of them originate from SCC: 8.4\% of the retweets in SCC and 7.3\% of the retweets between SCC and OUT contain unreliable URLs.
In the diagram, we also draw the link between IN and OUT (the dashed line), which, considering the definition of each sector, is not forbidden a priori.} \label{fig:newsguard} \end{figure} NewsGuard\footnote{\url{https://www.newsguardtech.com/it/}} is an independent software toolkit that monitors the quality and transparency of several news websites worldwide. Through the tags that NewsGuard has assigned to the news sites whose links appear in the retweets of our communities, we are able to quantify the amount of retweets containing untrustworthy URLs. The recurrent pattern is that, almost exclusively, the conservative discursive communities display retweets with such URLs. For the Italian Covid-19 dataset, the DX group has 26,318 retweets with links to untrustworthy news webpages, many more than any other community: 1,356 retweets for M5S, 78 for IV, 20 for MEDIA, 9 for FI and 0 for the PD group. A very similar situation has been found for the other datasets, see the Supplementary Material. Another interesting aspect is that most of the retweets containing unreliable URLs originate in the strongly connected component. Fig.~\ref{fig:newsguard} shows in red the percentage of retweets containing URLs of untrustworthy news pages within and between the sectors of the bow-tie structure for the DX group. The highest percentages can be found within SCC and between SCC and OUT. Again, this is a recurrent situation also for the conservative communities of the other datasets under investigation. \subsubsection{The case of EURO2020} Here, we devote a specific section to commenting on the case of the European football championship (EURO2020) dataset\footnote{We do this for academic reasons, and not because Italy won the Euro2020 championship.}. This dataset features less divisive, less debated, and less discussed topics.
The topics of all the other datasets either have a strong political nature or are debated with sharply different positions. We then analyze whether the fact that topics are less discussed/debated has anything to do with the presence - or absence - of a bow-tie structure in the EURO2020 dataset. We identified 5 discursive communities for the Italian dataset and 2 discursive communities for the Turkish one. Of these 7, 4 do not have an informative bow-tie structure (in fact, most of the nodes are in OTHERS), and the other 3 have a weak one (OTHERS is smaller than the weakly connected component of the bow-tie, but still greater than the strongly connected one). \begin{figure} \centering \includegraphics[scale=0.3]{graphs_tur_euro2020.png} \caption{\textbf{The bow-tie structure of the discursive communities for the Turkish EURO2020 dataset.}\\ The SPORTS group contains the official accounts of football players and clubs, and sports newspapers, while AK refers to the Justice and Development Party (Turkish: Adalet ve Kalkınma Partisi, AKP), a conservative political party in Turkey, including President Erdogan and his ministers. The SPORTS discursive community does not display an informative bow-tie structure, while the AK one has an extremely weak (INTEND-dominant) bow-tie. The dimension of the sectors is proportional to the number of nodes therein and the color quantifies the distance between the observed and the predicted dimension. Looking at the color of the vertices, it is possible to see that the observed dimensions are not statistically significant.} \label{fig:tur_euro2020} \end{figure}\\ Fig.~\ref{fig:tur_euro2020} reports the bow-tie structures of the two discursive communities in the Turkish dataset. The SPORTS group contains the official accounts of football players and clubs, and of sports newspapers.
AK refers to the Justice and Development Party (Turkish: Adalet ve Kalkınma Partisi, AKP), which is a conservative political party in Turkey including President Erdogan and his ministers. While SPORTS does not display any informative bow-tie, AK has a weak one. Following our interpretation, the latter displays a more hierarchical conversation on Twitter, in which the SCC is not numerous. Moreover, the dimensions of the sectors are mostly not statistically significant. \\ For the Italian case (Fig.~\ref{fig:ita_euro2020}), the main discursive community is formed by football players, sports newspapers and journalists. There is also a MEDIA community, containing accounts of Italian media, and three other small political communities (DX, IV, M5S). MEDIA, DX and IV do not display an informative bow-tie structure (respectively, 74\%, 81.2\% and 63.6\% of the nodes are in OTHERS), while FOOTBALLERS and M5S show a weak bow-tie (respectively, 15.9\% and 23.9\% of nodes in OTHERS). \begin{figure} \centering \includegraphics[scale=0.3]{graphs_ita_euro2020.png} \caption{\textbf{The bow-tie structure of the discursive communities for the Italian EURO2020 dataset.}\\ The dimension of the sectors is proportional to the number of nodes therein and the color quantifies the distance between the observed and the predicted dimension. The main discursive community is formed by football players, sports newspapers and journalists. Then, we identified a MEDIA community, containing accounts of Italian media, and three small political communities (DX, IV, M5S). MEDIA, DX and IV do not display an informative bow-tie structure (respectively, 74\%, 81.2\% and 63.6\% of the nodes in OTHERS), while FOOTBALLERS and M5S show a weak bow-tie (respectively, 81.1\% and 75.7\% of nodes in INTENDRILS).} \label{fig:ita_euro2020} \end{figure}\\ The Euro2020 dataset is the only one, among ours, in which no discursive community has a strong bow-tie structure.
\section{Methods}\label{sec:methods} \subsection{The Bipartite Configuration Model}\label{ssec:BiCM} In order to create the various discursive communities, we needed an appropriate null-model as a benchmark for identifying those verified users that share the same audience. In this sense, it is necessary to compare the observed quantities with accurate predictions in order to assess their significance: indeed, two verified users may appear to share a common audience simply due to their extreme activity.\\ We represent the interactions between verified accounts - the ones whose identity is certified by the Twitter platform - and unverified ones (i.e., all the others) via a bipartite, undirected, binary network in which a link connects a verified user to an unverified one if there is at least a retweet between one and the other, or vice versa. Since the information about the number of different accounts interacting - via tweet or retweet - with a user is encoded, in this representation, in the degree sequences of the nodes of both layers, we need a benchmark discounting it. The natural choice is an entropy-based null-model, since it provides, by definition, an unbiased framework~\cite{Cimini2018}: the null-model is maximally random, but for the constraints imposed on the system. The bipartite null-model discounting the degree sequence is the Bipartite Configuration Model (BiCM,~\cite{Saracco2015a}). In the present section we briefly review the steps of its definition.\\ Let us consider a bipartite network in which the two layers $\top$ and $\bot$ have dimensions, respectively, $N_\top$ and $N_\bot$; in the following, Latin indices will be used to identify nodes on the $\top$ layer, while Greek ones will be used for the $\bot$ layer. Then, the bipartite network can be represented by its biadjacency matrix, i.e.
a $N_\top\times N_\bot$ matrix $\mathbf{M}$ whose generic entry $m_{i\alpha}$ is $1$ if the node $i\in\top$ is connected to the node $\alpha\in\bot$ and $0$ otherwise. Let us start from a real bipartite network $G_\text{Bi}^*$ (in the following, all quantities denoted by a $*$ will indicate those measured on the real network). First, let us define an ensemble of graphs, i.e., the set of all the possible bipartite graphs having the same number of nodes as $G_\text{Bi}^*$, but with all different topologies, from the fully connected to the empty one. Then, we can define the Shannon entropy over the ensemble, by assigning a different probability to each of its elements: \begin{equation*} S=-\sum_{G_\text{Bi}\in\mathcal {G}_\text{Bi}}P(G_\text{Bi})\ln{P(G_\text{Bi})}, \end{equation*} where $P(G_\text{Bi})$ is the probability of the generic element $G_\text{Bi}$ of the graph ensemble. Let us now maximise the entropy, while constraining the network degrees: in particular, we want the ensemble averages of the degrees to match the values observed on the real network, in order to have a null-model tailored to the real system. In terms of the biadjacency matrix, the degree sequences of the $\top$ and $\bot$ layers respectively read $k_i=\sum_\alpha m_{i\alpha}$ and $h_\alpha=\sum_i m_{i\alpha}$.
Using the method of the Lagrangian multipliers, the constrained maximisation can be expressed as the maximisation of $S'$, defined as \begin{eqnarray*} S'&=&S\nonumber\\ &&+\sum_i\eta_i\left[k_i^*-\sum_{G_\text{Bi}\in\mathcal {G}_\text{Bi}}P(G_\text{Bi})k_i(G_\text{Bi})\right]+\sum_\alpha\theta_\alpha\left[h_\alpha^*-\sum_{G_\text{Bi}\in\mathcal {G}_\text{Bi}}P(G_\text{Bi})h_\alpha(G_\text{Bi})\right]\nonumber\\ &&+\zeta\left[\sum_{G_\text{Bi}\in\mathcal {G}_\text{Bi}}P(G_\text{Bi})-1\right] \end{eqnarray*} where $S$ is the Shannon entropy defined above, $\eta_i$ and $\theta_\alpha$ are the Lagrangian multipliers relative to the degree sequences on $\top$ and $\bot$, respectively, and $\zeta$ is the one relative to the probability normalisation. Maximising $S'$ leads to a probability per graph $G_\text{Bi}\in\mathcal{G}_\text{Bi}$ that can be factorised in terms of the probabilities per link $p_{i\alpha}$~\cite{park2004statistical}, i.e. \begin{equation} \label{factorized} P(G_\text{Bi})=\prod_{i,\alpha}p_{i\alpha}^{m_{i\alpha}(G_\text{Bi})}\,(1-p_{i\alpha})^{1-m_{i\alpha}(G_\text{Bi})}, \end{equation} where $p_{i\alpha}=\dfrac{e^{-\eta_i-\theta_\alpha}}{1+e^{-\eta_i-\theta_\alpha}}$. Nevertheless, at this level the above equation is just formal, since we do not know the numerical values of $\eta_i$ and $\theta_\alpha$. To this aim, we can maximise the likelihood of the real network~\cite{Garlaschelli2008,squartini2011analytical}; it can be shown that the likelihood maximisation is equivalent to imposing \begin{equation*} \langle k_i\rangle_\text{BiCM}=k_i^*,\,\forall\:i\in\top;\qquad\langle h_\alpha\rangle_\text{BiCM}=h_\alpha^*,\,\forall\:\alpha\in\bot. \end{equation*} \subsection{Validated projection of bipartite networks} We want to infer similarities among nodes belonging to the same layer.
We can use the number of common neighbours as a measure of similarity: for each pair of verified users, the number of unverified users that have interacted, via tweet or retweet, with both. Let us assume, without loss of generality, that we want to project the information contained in the bipartite network onto the $\top$ layer and call $V_{ij}$ the number of common neighbours of the nodes $i,j\in\top$\footnote{Following Ref.~\cite{Saracco2016}, we use the letter $V$ to indicate common neighbours, since this pattern appears in the bipartite network as a ``V'' between the layers.}. In terms of the biadjacency matrix, $V_{ij}$ can be expressed as \begin{equation*} V_{ij}=\sum_\alpha V_{ij}^\alpha=\sum_\alpha m_{i\alpha}m_{j\alpha}, \end{equation*} where we have defined $V_{ij}^\alpha= m_{i\alpha}m_{j\alpha}$; $V_{ij}^\alpha=1$ if both $i$ and $j$ are connected to node $\alpha\in\bot$ and 0 otherwise. Let us now compare the observed $V_{ij}$ for each possible pair of nodes in $\top$ with the prediction of the BiCM.
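The two ingredients of this comparison are easy to compute: $V=\mathbf{M}\mathbf{M}^T$ gives all the $V_{ij}$ at once, and the likelihood equations can be solved numerically by a fixed-point iteration on $x_i=e^{-\eta_i}$ and $y_\alpha=e^{-\theta_\alpha}$. The following is only a minimal sketch, independent of the \texttt{NEMtropy} implementation used in the paper; the example matrix and the tolerances are illustrative.

```python
import numpy as np

def bicm_probabilities(M, n_iter=5000, tol=1e-12):
    """Fit the BiCM link probabilities p_{ia} for a binary biadjacency
    matrix M by fixed-point iteration on x_i = exp(-eta_i) and
    y_a = exp(-theta_a), so that <k_i> = k_i* and <h_a> = h_a*."""
    k = M.sum(axis=1).astype(float)      # observed top-layer degrees
    h = M.sum(axis=0).astype(float)      # observed bottom-layer degrees
    x = k / (k.sum() + 1.0)              # initial guesses
    y = h / (h.sum() + 1.0)
    for _ in range(n_iter):
        x_new = k / (y[None, :] / (1.0 + np.outer(x, y))).sum(axis=1)
        y_new = h / (x_new[:, None] / (1.0 + np.outer(x_new, y))).sum(axis=0)
        done = max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol
        x, y = x_new, y_new
        if done:
            break
    xy = np.outer(x, y)
    return xy / (1.0 + xy)

# Illustrative biadjacency matrix (rows: verified users, columns:
# unverified users). V_ij is obtained for all pairs at once as M M^T;
# the diagonal holds the degrees k_i and is not a motif count.
M = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 1]])
V = M @ M.T
np.fill_diagonal(V, 0)
P = bicm_probabilities(M)   # by construction, P reproduces the degrees on average
```

By construction, the row and column sums of the fitted probability matrix reproduce the observed degree sequences, which is exactly the likelihood condition stated above.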
Since link probabilities are independent, the presence of each V-motif $V_{ij}^\alpha$ can be regarded as the outcome of a Bernoulli trial: \begin{equation*} \begin{split} f_{\text{Ber}}(V_{ij}^\alpha=1)=&p_{i\alpha}p_{j\alpha},\\ f_{\text{Ber}}(V_{ij}^\alpha=0)=&1-p_{i\alpha}p_{j\alpha}. \end{split} \end{equation*} In general, the probability of observing $V_{ij}=n$ can be expressed as a sum of contributions, running over the $n$-tuples of nodes of the $\bot$ layer (in this case, the unverified users). Indicating with $A_n$ a generic set of $n$ nodes of the $\bot$ layer, this probability amounts to \begin{equation}\label{eq:PB} f_{PB}(V_{ij}=n)=\sum_{A_n}\left[\prod_{\alpha\in A_n}p_{i\alpha}p_{j\alpha}\prod_{\alpha'\notin A_n}(1-p_{i\alpha'}p_{j\alpha'})\right], \end{equation} where the second product runs over the complement set of $A_n$. Eq.~(\ref{eq:PB}) represents the generalisation of the usual binomial distribution to the case in which the single Bernoulli trials have different probabilities, known as the Poisson binomial distribution~\cite{Hong2013}. We can then assess the statistical significance of the observed co-occurrences by calculating their p-values according to the distribution in Eq.~\eqref{eq:PB}, i.e.\ the probability of observing a number of co-occurrences greater than, or equal to, the observed one: \begin{equation} \text{p-value}\big(V^*_{ij}\big)=\sum_{V_{ij}\ge V^*_{ij}}f_{PB}\big(V_{ij}\big). \end{equation} Repeating this calculation for every pair of nodes, we obtain $\binom{N_\top}{2}$ p-values. In order to assess the statistical significance of this family of hypotheses, it is necessary to adopt a multiple-hypothesis-testing correction; in the present paper, we use the False Discovery Rate (FDR,~\cite{benjamini1995controlling}) procedure, since it controls the expected fraction of false positives among the rejected hypotheses.
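Both steps can be sketched compactly: the Poisson binomial pmf is built exactly by iterated convolution (the number of trials equals $N_\bot$), and the selection of significant pairs follows the Benjamini-Hochberg procedure. Function names and the numbers used in the test case are illustrative.

```python
import numpy as np

def poisson_binomial_pvalue(q, v_obs):
    """P(V >= v_obs) for V a sum of independent Bernoulli trials with
    success probabilities q (Poisson binomial distribution). The pmf
    is built by iterated convolution, which is exact."""
    pmf = np.array([1.0])
    for qa in q:
        pmf = np.convolve(pmf, [1.0 - qa, qa])
    return pmf[v_obs:].sum()

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of the hypotheses rejected while controlling the
    false discovery rate at level alpha (Benjamini-Hochberg)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])   # largest rank passing the test
        reject[order[:k + 1]] = True
    return reject
```

For a pair $(i,j)$, one would call `poisson_binomial_pvalue` with $q_\alpha = p_{i\alpha}p_{j\alpha}$ and the observed count $V^*_{ij}$, collect all $\binom{N_\top}{2}$ p-values, and keep the links selected by `benjamini_hochberg`.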
\subsection{Directed Configuration Model}\label{ssec:DCM} From the entire retweet network, in which the various accounts are represented as nodes of a directed network and an arrow points from the author of a post to its retweeter, we extracted the subgraphs corresponding to the various discursive communities. Then, in order to compare the observed dimensions of the bow-tie sectors of these subgraphs and assess their statistical significance, we adopted the \emph{Directed Configuration Model} (DCM), the entropy-based model suited for directed monopartite networks~\cite{Mastrandrea2014}. For directed networks, the adjacency matrix is (in general) not symmetric, and each node $i$ is characterized by two degrees: the out-degree $k^{\text{out}}_i=\sum_ja_{ij}$ and the in-degree $k^{\text{in}}_i=\sum_ja_{ji}$, where $a_{ij}$ is the generic entry of the (directed) adjacency matrix $\mathbf{A}$. The DCM is therefore defined as the ensemble of directed networks with given out-degree and in-degree sequences. Using the same machinery as in subsection~\ref{ssec:BiCM}, it is possible to derive a probability per graph: if $G_D$ is the generic representative of the ensemble of directed graphs $\mathcal{G}_D$, the probability $P(G_D)$ reads: \begin{equation*} P(G_D)=\prod_{i,j\neq i}q_{ij}^{a_{ij}(G_D)}(1-q_{ij})^{1-a_{ij}(G_D)}. \end{equation*} Thus, again the probability per graph factorises in terms of probabilities per link $q_{ij}$, which can be expressed in terms of Lagrangian multipliers \begin{equation*} q_{ij}=\dfrac{e^{-\gamma_i-\delta_j}}{1+e^{-\gamma_i-\delta_j}}, \end{equation*} where $\gamma_i$ and $\delta_j$ are the Lagrangian multipliers associated, respectively, to the out-degree of node $i$ and to the in-degree of node $j$.
In order to obtain the numerical values of $\gamma_i$ and $\delta_j$, we can maximise the likelihood as in subsection~\ref{ssec:BiCM}, which is equivalent to imposing \begin{equation*} \langle k_i^{\text{out}}\rangle_\text{DCM}=k_i^{*^{\text{out}}},\qquad\langle k_i^{\text{in}}\rangle_\text{DCM}=k_i^{*^{\text{in}}},\,\forall i. \end{equation*} Since the bow-tie decomposition is highly nonlinear, in order to assess the statistical significance of the dimensions of the various blocks, we generated a sample of 1000 graphs for each discursive community, using the probabilities provided by the DCM. Then, we obtained a distribution for the dimensions of the bow-tie sectors by computing the decomposition of each graph in the ensemble. At this point, we calculated a two-tailed p-value, at significance level $\alpha=0.01$, comparing the observed dimensions with those reproduced by the ensemble. \subsection{Modularity and community detection} In the present analysis, we inferred the discursive communities from the communities of the validated network of verified users. In particular, we used the modularity-based Louvain algorithm~\cite{Blondel2008}.\\ The modularity~\cite{Newman2010} compares the number of edges within the detected communities with its expectation under a given null-model. Modularity can be written as \begin{equation} Q=\frac{1}{2m}\sum_{ij}\big(a_{ij}-p_{ij}\big)\,\delta(C_i,C_j) \end{equation} where $m$ is the total number of links of the network, $a_{ij}$ are the entries of the adjacency matrix, $p_{ij}$ is the probability of a link between nodes $i$ and $j$ according to the chosen null-model, $C_i$ and $C_j$ are, respectively, the communities of nodes $i$ and $j$, and the Kronecker delta $\delta(C_i,C_j)$ selects the pairs of nodes belonging to the same community (it equals 1 if $C_i=C_j$ and 0 otherwise).
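For a given partition, $Q$ can be evaluated directly from this formula. A minimal sketch with the familiar configuration-model null term $p_{ij}=k_ik_j/2m$; this only evaluates $Q$, it does not perform the Louvain optimisation, and the example graph (two triangles joined by one edge) is illustrative.

```python
import numpy as np

def modularity(A, communities):
    """Modularity Q of a partition of an undirected network, with the
    configuration-model expectation p_ij = k_i k_j / (2m)."""
    k = A.sum(axis=1).astype(float)          # degrees
    two_m = k.sum()                          # 2m = sum of all degrees
    comm = np.asarray(communities)
    same = comm[:, None] == comm[None, :]    # Kronecker delta of the labels
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Example: two triangles joined by a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1
Q = modularity(A, [0, 0, 0, 1, 1, 1])        # the two triangles as communities
```

For this graph one finds $Q = 5/14 \approx 0.357$ for the two-triangle partition, while putting all nodes into one community gives $Q = 0$, as it should.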
In the original definition in Ref.~\cite{Girvan2002}, the null-model chosen is the Chung-Lu one~\cite{Chung2002}, which conserves the degree sequence, but is known to be inconsistent for dense networks with strong hubs~\cite{Cimini2018}. In the present paper we use instead the entropy-based Undirected Configuration Model (UCM) defined in~\cite{Garlaschelli2008,squartini2011analytical}: it can be shown that, for sparse networks, the UCM is well approximated by the Chung-Lu null-model. \subsection{\texttt{NEMtropy}} In the present paper, we implemented the BiCM, the DCM and the Louvain algorithm with UCM null-models via the Python module \href{https://pypi.org/project/NEMtropy/}{\texttt{NEMtropy}}, described in Ref.~\cite{Vallarano2021}.\\ \section{Discussion} Bow-tie structures were initially introduced for the description of the World Wide Web (WWW)~\cite{Broder2000}: websites are represented by nodes in a directed network in which the edges represent the hyperlinks. Broder et al.~\cite{Broder2000} showed that the greatest number of websites belongs to a Weakly Connected Component (WCC) with a peculiar structure, see Fig.~\ref{fig:structure}. In particular, there are three main sectors that are crucial for the interpretation of the system: SCC, IN, and OUT. \noindent SCC is the main Strongly Connected Component of the WCC, i.e., the largest subgraph in which each node is reachable from any other node of the same group. In the WWW, the SCC block includes the greatest part of the websites. \noindent The IN block contains all nodes that can access the nodes in SCC without being part of it. In the WWW, these are the search engines: they link the greatest number of websites and direct the users to the websites closest to their requests. \noindent The OUT block contains all nodes that are reachable from nodes in SCC without being part of it. In the WWW, these are the authorities, i.e., websites such as Wikipedia, considered as a reference by many websites.
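Computationally, this decomposition reduces to reachability computations around the largest strongly connected component. A sketch, assuming \texttt{scipy} is available; following the coarse decomposition used in this discussion, tendrils, tubes and disconnected nodes are lumped into a single OTHERS block.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def reachable_from(A, seeds):
    """Boolean mask of nodes reachable from the seed set (BFS on the
    directed adjacency matrix A; the seeds themselves are included)."""
    seen = np.zeros(A.shape[0], dtype=bool)
    seen[seeds] = True
    frontier = list(seeds)
    while frontier:
        nxt = np.where(A[frontier].any(axis=0) & ~seen)[0]
        seen[nxt] = True
        frontier = list(nxt)
    return seen

def bowtie_sectors(A):
    """Classify nodes as in Broder et al.: SCC = largest strongly
    connected component, IN = nodes reaching the SCC, OUT = nodes
    reached from the SCC; everything else (tendrils, tubes and
    disconnected nodes) is lumped into OTHERS."""
    _, labels = connected_components(A, directed=True, connection='strong')
    scc = labels == np.argmax(np.bincount(labels))
    core = np.where(scc)[0]
    out = reachable_from(A, core) & ~scc      # reached from the SCC
    inn = reachable_from(A.T, core) & ~scc    # reaching the SCC
    return scc, inn, out, ~(scc | inn | out)
```

The block sizes returned here are exactly the quantities whose statistical significance is assessed against the DCM ensemble.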
\noindent Nodes in the bow-tie structure of the WWW represent nearly 75\% of the websites in the whole WWW (at least at the time of publication of Ref.~\cite{Broder2000}).\\ In the present manuscript, we analysed eight thematic Twitter datasets in different languages, related to various debates in Europe. We extracted the discursive communities from the datasets and investigated their network structure. Discursive communities are groups of users that interact among themselves by sharing the content created by others. It was shown that discursive communities tend to mirror the political orientation of users~\cite{Conover2011,Conover2011a,Conover2012,Becatti2019, Caldarelli2020,Caldarelli2021, Radicioni2021a, Radicioni2021b, Bruno2021,Mattei2021}. \paragraph{Discursive communities and bow-ties} A first result of the analysis carried out in this work is that, in almost all the discursive communities extracted from the eight datasets, the WCC of the bow-tie includes the great majority of the accounts. In particular, a bow-tie structure is present in those discursive communities debating politics, as in the case of election campaigns (the Dutch elections dataset), or societal issues, e.g., `how to handle a pandemic?' (the Italian, German and French datasets about Covid-19) or `how to manage migration fluxes?' (the Italian online debate on migrants). Instead, a bow-tie structure is absent when the topics of the discussion concern sport, as in the case of the Turkish and Italian Euro2020 datasets. In more detail, we say that a bow-tie is informative if the corresponding WCC includes more than half of the nodes of the entire discursive community; otherwise it is not informative. In the present datasets, we found that bow-ties are informative in all the discursive communities debating politics.
In the case of the Euro2020 dataset, bow-ties are not informative, or, if present, they are extremely weak. When the bow-tie is informative, we found essentially two cases: 1) the most crowded block is the OUT one; 2) the most crowded block is the INTENDRILS one. The former is typical of the discursive communities of right-wing parties in all European political/societal debates of our datasets, while the latter is more common in less active political discursive communities in many political/societal datasets. \paragraph{Which users in which bow-tie sectors and the exposure to m/disinformation} A closer inspection of the nodes in the various blocks and of the quality of the shared content permits a better characterisation of the users in the bow-tie. The first observation is that the greatest part of the verified users, i.e., those accounts for which the identity of the owner has been certified by Twitter, lies in the IN sector of each bow-tie. This finding is not surprising: as already observed in previous studies, verified users create content and are less active in sharing messages written by others~\cite{Becatti2019, Caldarelli2020, Radicioni2020, Caldarelli2021, Radicioni2021a, Gonzalez2021}. Verified users are mostly politicians and official accounts of political parties, as well as journalists and official accounts of their newscasts and newspapers. In this sense, a discursive community displaying an INTENDRILS-dominant bow-tie structure (where INTENDRILS is the most crowded block) may appear, at first sight, as a less democratic group: the content is created by a few accounts and shared by a group of followers that limit their interactions to sharing the messages coming from the IN block. Instead, in an OUT-dominant bow-tie, the greatest block is the OUT one, which can access the content created by all the other blocks in the bow-tie (with the only exception of INTENDRILS), thus having the possibility to intercept every voice in the discursive community.
Actually, the issue lies in the quality of the content created in the various blocks, see Fig.~\ref{fig:newsguard}. Leveraging our ongoing collaboration with the NewsGuard organization\footnote{\url{https://www.newsguardtech.com/it/}}, we annotated the URLs that appear in tweets in our datasets, based on the reliability and transparency ratings of the news sites to which those URLs belong (ratings given by NewsGuard). It turns out that the least reliable URLs, in a strong bow-tie, are the ones shared in the SCC. The fact that verified accounts are not responsible for the vast majority of m/disinformation sharing was already observed in Ref.~\cite{Caldarelli2021} and, in the present context, it reflects the fact that accounts in IN contribute minimally to the spreading of low-quality/untrustworthy content. Otherwise stated, when the source of information is not identifiable, the average quality of the content is lowered. A largely populated OUT block implies that the greatest part of the accounts has access to a great variety of content, but its quality is lower than in the case of weak bow-ties. It is worth considering also a peculiarity of right-wing discursive communities: for all of them, the bow-tie is strong (i.e., the dimension of the OTHERS block is smaller than that of the SCC) and it is neatly OUT-dominant. While the OUT-dominant configuration is already structurally prone to the diffusion of m/disinformation, this propensity is even more emphasized by the extreme activity of the SCC: for instance, in the Covid-19 Italian dataset the link density in the right-wing bow-tie is at least 3 times greater than in any other OUT-dominant strong bow-tie.\\ \emph{Infodemic} is a recently introduced neologism that became particularly popular during the Covid-19 pandemic. According to the WHO, ``\emph{infodemics are an excessive amount of information about a problem, which makes it difficult to identify a solution.
Infodemics can spread misinformation, disinformation and rumors during a health emergency. Infodemics can hamper an effective public health response and create confusion and distrust among people}\footnote{Coronavirus disease 2019 (COVID-19) Situation Report – 45:~\url{https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200305-sitrep-45-covid-19.pdf?sfvrsn=ed2ba78b_4}}''. The effects of the recent Covid-19 infodemic, even if debated~\cite{Valensise2021,Gallotti2021a}, may put at risk the countermeasures against the spread of an epidemic and are worrisome for policy makers\footnote{See, for instance, the Joint Communication titled “Tackling COVID-19 disinformation - Getting the facts right” (June 10th, 2020), available at the following link: \url{https://ec.europa.eu/info/sites/default/files/communication-tackling-covid-19-disinformation-getting-facts-right_en.pdf}.}. In the present work, we relate the infodemic phenomenon to the specific structure of the discursive communities. As highlighted above, for the investigated datasets: \begin{itemize} \item OUT-dominant bow-ties are the most affected by low-quality content; \item this effect is particularly amplified in right-wing discursive communities, due to the extraordinary dimension and density of their SCC sectors (which strongly contribute to the creation and diffusion of low-quality content). \end{itemize} \paragraph{Statistical significance of the analysis} Here we remark on an aspect of our analysis of the utmost importance. In the analysis of a complex network, it is necessary to consider what is being measured and what its baseline is. A typical example is the modularity, i.e.\ one of the most used target functions for community detection. The problem resides in stating how many links inside a group of nodes are enough to form a community.
In this case, we build a null-model, i.e., a model that reproduces part of the properties of the original system while being random in all other respects, to have a proper benchmark for our observations. We then compare the number of edges inside a group of nodes with the one expected under the null-model. Without the null-model, we could not know whether the number of links binding a group of nodes is due to the degree sequence alone, or whether it is instead the genuine signal of the presence of a community. In the present study, we used an entropy-based null-model as a benchmark for our analysis~\cite{Squartinia,Cimini2018}. An entropy-based null-model provides a benchmark tailored to the system under analysis. It fixes (on average) some topological quantities to the values observed in the real network and leaves all the rest completely random. Being based on the maximisation of the (Shannon) entropy, it guarantees that all possible configurations are considered uniformly (it is `ergodic', in Statistical Physics jargon), thus it does not introduce any bias into the analysis. To strengthen the analysis, we studied whether the bow-tie structures are simply due to the degree sequences of the nodes in the various discursive communities. In fact, the size of the IN and OUTTENDRILS blocks could simply be due to the presence of many nodes with zero in-degree (an analogous consideration holds for the OUT and INTENDRILS blocks, considering the out-degree instead). Thus, strong, weak and not informative bow-ties could be due to the degree sequence only, and carry no information on their own. We thus used the Directed Configuration Model defined in Ref.~\cite{Squartini2013a} and implemented in the Python module \href{https://pypi.org/project/NEMtropy/}{\texttt{NEMtropy}}~\cite{Vallarano2021}.
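The validation just described can be sketched as follows. For brevity, the test statistic here is the number of reciprocated links rather than a bow-tie sector size, and all parameter values are illustrative; the logic (sample graphs from the DCM probabilities, build the distribution of the statistic, compute an empirical two-tailed p-value) is the same.

```python
import numpy as np

def sample_dcm(Q, rng):
    """Draw one directed graph from the DCM ensemble: each off-diagonal
    entry a_ij is an independent Bernoulli variable with probability q_ij."""
    A = (rng.random(Q.shape) < Q).astype(int)
    np.fill_diagonal(A, 0)
    return A

def empirical_two_tailed_pvalue(stat_obs, Q, statistic, n_samples=1000, seed=1):
    """Estimate a two-tailed p-value of an observed statistic against
    the DCM ensemble from n_samples sampled graphs."""
    rng = np.random.default_rng(seed)
    stats = np.array([statistic(sample_dcm(Q, rng)) for _ in range(n_samples)])
    p_hi = np.mean(stats >= stat_obs)
    p_lo = np.mean(stats <= stat_obs)
    return min(1.0, 2.0 * min(p_hi, p_lo))

# Example: a fully reciprocated graph is incompatible with a sparse DCM.
n = 15
Q = np.full((n, n), 0.3)                       # illustrative link probabilities
reciprocated = lambda A: int((A * A.T).sum())  # number of reciprocated links
rng = np.random.default_rng(0)
A_obs = sample_dcm(Q, rng)
A_obs = ((A_obs + A_obs.T) > 0).astype(int)    # symmetrise: all links reciprocated
np.fill_diagonal(A_obs, 0)
p = empirical_two_tailed_pvalue(reciprocated(A_obs), Q, reciprocated, n_samples=500)
```

In the paper the same machinery is applied with the fitted DCM probabilities of each discursive community and with the bow-tie sector sizes as statistics, at significance level $\alpha=0.01$.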
Our results show that the dimensions of the blocks in the bow-tie are very often statistically significant: the p-values of the observed dimensions of the various blocks against the null-model distribution are extremely small, i.e., they are not compatible with the degree sequence; otherwise stated, the dimensions of the various blocks cannot be explained by the degree sequences only. \paragraph{Limitations} Even if we have obtained strong results (see the null-model validation check on the dimensions of the bow-tie sectors), we nevertheless have to remark a few aspects of our analysis that may limit its generalisation. First, the analysis is related to eight thematic datasets in different languages, all referring to European debates, some of them of political nature. While the total amount of messages analysed is quite impressive, we are aware that, even if the spectrum of the arguments covered is broad, our findings may be valid for our datasets only. In the near future, we are going to expand the countries covered by our analyses and the list of topics under analysis. Second, OUT-dominant bow-ties display a high level of m/disinformation, but this is not a causal relation: the presence of an OUT-dominant bow-tie does not imply the presence of an infodemic. In fact, if the reputability of the sources shared by the SCC were high, we would not have observed any infodemic signal. Nevertheless, it is true that OUT-dominant bow-ties help the diffusion of m/disinformation, when present, since accounts in OUT are exposed to the content created by nearly every block in the discursive community. Finally, it can be argued that the observed bow-ties appear just in discursive communities, whose extraction is a recent method, still limited in its application~\cite{Becatti2019,Caldarelli2020,Caldarelli2021,Radicioni2021a,Radicioni2021b,Mattei2021,Bruno2021}.
Some of the authors of the present manuscript are preparing a paper comparing the results obtained with different methods for extracting discursive communities~\cite{Saracco2022}: the methodology adopted in the present paper is sound, among the most effective ones, and shows very good performance when compared to manually annotated data. \section*{Acknowledgements} FS acknowledges Pietro Galgani and Lizanne Dirkx for support in both the download and the analysis of the Dutch election dataset; Giulia Andrighetto, Stefano Guarino, Enrico Mastrostefano, Elena Pavan, Eugenia Polizzi and Tiziano Squartini for useful discussions. All authors acknowledge support from the IMT PAI project Toffee. \bibliographystyle{ieeetr}
\section{Introduction} Acoustic, elastic, and electromagnetic waves are of quite different nature, but in many geometries of practical relevance, they can all be described by the linear wave equation of second order. If we are interested in the distribution of the wave intensity in a domain $\Omega\subset \mathbb{R}^n$ after a transitional time, we have to solve the time-harmonic problem, which is the Helmholtz equation \nocite{Helmholtz-1860} \begin{equation} \label{eq:Waveguide} -\nabla\cdot (a \nabla u) = \omega^2 u + f\qquad\text{ in }\Omega\,. \end{equation} In this equation, $f: \Omega\to \mathbb{R}$ is a source and the positive coefficient field $a: \Omega\to \mathbb{R}$ describes the properties of the medium; we assume that $a$ equals an $\varepsilon$-periodic function $a_+(x)$ to the right of a bounded region and an $\varepsilon$-periodic function $a_-(x)$ to the left. The periodicity $\varepsilon>0$ and the frequency $\omega>0$ are given; we consider them as fixed parameters throughout this work. The aim is to find the solution $u: \Omega \to \mathbb{C}$ of equation \eqref{eq:Waveguide}. We are interested in the analysis of wave-guides. Restricting to the two-dimensional case for simplicity, we consider the infinite strip $\Omega := \mathbb{R}\times (0,H)$ with height $H>0$. It remains to choose boundary conditions. The main difficulty comes from the radiation conditions that have to be imposed for $x_1\to \pm\infty$. In contrast, the analysis is essentially independent of the boundary condition on the lateral boundary $(\mathbb{R}\times \{ 0 \}) \cup (\mathbb{R}\times \{ H \})$. To make a choice, we work with periodicity conditions as in \cite{Lamacz-Schweizer-Bloch}: values and derivatives coincide at $(x_1,0)\in \mathbb{R}\times \{ 0 \}$ and $(x_1,H)\in \mathbb{R}\times \{ H \}$. {\bf Radiation condition.} In the unbounded direction, for $x_1\to \pm\infty$, we have to impose a radiation condition.
It is not an easy task to formulate the radiation condition in a wave-guide. For a periodic semi-infinite wave-guide, a condition was formulated and analyzed in \cite{Hoang2011}; similarly, the periodic wave-guide was analyzed in \cite{FlissJoly2016, JolyLiFliss2006}, and a wave-guide with different coefficients in the two infinite directions was analyzed in \cite{Lamacz-Schweizer-Bloch}. The latter publication provides a uniqueness result for the suggested radiation condition. It is this radiation condition on which we base our numerical scheme. Let us formulate the following open problem in order to illustrate the complexity of the wave-guide problem \eqref{eq:Waveguide}. Let the coefficient $a$ be real and bounded from below by a positive number $a_0$. Furthermore, assume that $a$ is $\varepsilon$-periodic on $x_1<0$ and on $x_1>0$. Let the source $f:\Omega\to \mathbb{R}$ of class $L^2(\Omega)$ have bounded support. Open question: Does \eqref{eq:Waveguide} possess a radiating solution $u$? In the above question we avoided the precise formulation of the radiation condition; this is adequate since the different forms of radiation conditions in a periodic wave-guide are essentially equivalent. The form of the radiation condition that was suggested in \cite{Lamacz-Schweizer-Bloch} can be written as \begin{equation} \label{eq:outgoingright-intro} -\hspace{-1.15em}\int_{RY_\varepsilon} \left|\Pi^+_{<0}(\left\{ u\right\}^+_{R,R}) \right|^2\to 0\quad \text{ as }\quad R\to \infty\,. \end{equation} The condition uses the periodicity cell $Y_\varepsilon = (0,\varepsilon)^2$ and the function $\left\{ u\right\}^+_{R,L}$, which is the restriction of $u$ (periodically extended in the vertical direction) to the domain $\varepsilon (R ,R+L)\times \varepsilon (0,R)$. For large $R$, we hence consider the solution $u$ on the far right.
Note that we use here boxes of width $\varepsilon L$ at position $\varepsilon R$, while only $L = R$ was considered in \cite{Lamacz-Schweizer-Bloch}. The symbol $\Pi^+_{<0}$ denotes the projection of the argument (which is a function on a square in $\mathbb{R}^2$) onto the space spanned by left-going Bloch waves, i.e.\ Bloch waves for which the first component of the Poynting vector is negative. The superscript ``$+$'' indicates that the Bloch waves are calculated for the periodic coefficient $a$ of the right half-cylinder. The symbol $-\hspace{-1.15em}\int$ denotes the mean value integral, $-\hspace{-1.15em}\int_A f := |A|^{-1} \int_A f$. A condition analogous to \eqref{eq:outgoingright-intro}, with a projection onto right-going waves, must be imposed on the left. \medskip The aim of this contribution is to introduce and to analyze a numerical scheme that can be used to solve the wave-guide problem. \subsection{An approach based on \eqref{eq:outgoingright-intro}} The numerical problem must be formulated in a bounded domain. Furthermore, as in many other related approaches, we must additionally introduce an absorption coefficient $\delta\ge 0$. Our analytical results only cover the case $\delta>0$, but practical experience shows that the numerical scheme works well also for $\delta=0$. The truncation is performed with two positive integer parameters $R,L\in \mathbb{N}$. We use the inner domain $\Omega_R := (-R \varepsilon, R\varepsilon) \times (0,H)_\sharp$ with $H=\varepsilon K$, $K\in \mathbb{N}$, and the extended domain $\Omega_{R+L} := (-(R+L) \varepsilon, (R+L) \varepsilon) \times (0,H)_\sharp$; the symbol $\sharp$ indicates that we demand periodicity conditions in the vertical direction. In the following we suppress the dependencies on $\delta>0$ and $L>0$, and denote the unknown function on the truncated domain as $u = u_{R}: \Omega_{R+L} \to \mathbb{C}$.
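As an aside on the discrete realisation of such projections: suppose the relevant Bloch waves, sampled on the grid of a radiation box, are collected as the columns of a matrix $B$ (their computation requires cell eigenvalue problems, which we do not address here). The projection of a sampled field onto their span is then a least-squares fit. This is only a hedged sketch of one possible discrete counterpart of $\Pi$, not the construction analysed in this paper.

```python
import numpy as np

def project_onto_modes(u, B):
    """L2-orthogonal projection of the sampled field u onto the span
    of the columns of B (discretised modes on the radiation box);
    works for real and complex data alike."""
    coeff, *_ = np.linalg.lstsq(B, u, rcond=None)
    return B @ coeff
```

In this discrete picture, imposing the radiation condition amounts to requiring that the projection of the restricted field onto the span of the left-going modes (nearly) vanishes.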
We impose that $u$ satisfies the Helmholtz equation on the inner domain with the absorption parameter $\delta$: \begin{equation} \label{eq:Waveguide-R-L} -\nabla\cdot (a \nabla u) = \omega^2 (1+{\rm i}\delta) u + f \qquad\text{ in }\Omega_R\,. \end{equation} In order to formulate the radiation condition, we use the positive parameter $L>0$ and consider the radiation boxes $W_{R,L}^+ := \varepsilon (R, R+L) \times (0,H)_\sharp$ and $W_{R,L}^- := \varepsilon (-(R+L), -R) \times (0,H)_\sharp$ as sketched in Figure \ref {fig:geometry}. \begin{figure}[ht] \centering \begin{tikzpicture}[scale = 0.63] \draw[-] (-8,0)--(8,0); \draw[-] (-8,4)--(8,4); \draw[-] (-8,0)--(-8,4); \draw[-] (-5,0)--(-5,4); \draw[-] (8,0)--(8,4); \draw[-] (5,0)--(5,4); \draw[dashed] (0,0)--(0,4); \node[] at (8,-.55) {$\varepsilon (R+L)$}; \node[] at (5,-.55) {$\varepsilon R$}; \node[] at (-8,-.55) {$-\varepsilon (R+L)$}; \node[] at (-5,-.55) {$-\varepsilon R$}; \node[] at (0, -.55) {$0$}; \draw[<->] (9.3,0)--(9.3,4); \node[] at (10.7, 2) {$H = \varepsilon K$}; \node[] at (6.5,2) {$W_{R,L}^+$}; \node[] at (-6.5,2) {$W_{R,L}^-$}; \node[] at (4,0.7) {$\Omega_{R}$}; \draw[->] (11,-0.5)--(12.5,-0.5); \node[] at (12.6, -.8) {$x_1$}; \draw[->] (11.2,-0.7)--(11.2,0.9); \node[] at (11.3, 1.1) {$x_2$}; \end{tikzpicture} \caption{Geometry of the truncated domain} \label{fig:geometry} \end{figure} The restrictions of a function $u:\Omega_{R+L}\to \mathbb{C}$ to these two rectangles are denoted as $\left\{ u\right\}^+_{R,L}$ and $\left\{ u\right\}^-_{R,L}$. More precisely, we additionally shift the lower left corner to the origin and set, for $x_1\in [0, \varepsilon L)$ and $x_2\in [0, H)$, \begin{equation*} \left\{ u\right\}^+_{R,L}(x_1, x_2) := u(\varepsilon R + x_1, x_2)\,,\quad \left\{ u\right\}^-_{R,L}(x_1, x_2) := u(-\varepsilon (R+L) + x_1, x_2)\,. 
\end{equation*} We emphasize that the radiation boxes have width $\varepsilon L$ and are positioned at $\pm \varepsilon R$, while we restricted ourselves to $L = R$ in \cite {Lamacz-Schweizer-Bloch} to simplify notations. The idea is now to impose \eqref {eq:outgoingright-intro} and its counterpart on the left hand side in a strong form at a finite distance; in this first attempt we demand \begin{equation} \label{eq:outgoing-R-L} \Pi^+_{<0}(\left\{ u\right\}^+_{R,L}) = 0\qquad\text{and}\qquad \Pi^-_{>0}(\left\{ u\right\}^-_{R,L}) = 0\,. \end{equation} The projections are defined below in Definition \ref {def:projection}. Condition \eqref {eq:outgoing-R-L} expresses that, in the right radiation box $W_{R,L}^+$, the solution does not contain left-going waves, and, in the left radiation box $W_{R,L}^-$, the solution does not contain right-going waves. \paragraph{Coupling conditions across interfaces.} We finally have to demand conditions along the two interior interfaces $\Gamma_R^+ := \overline{\Omega_R} \cap \overline{W_{R,L}^+} = \{\varepsilon R\} \times (0,H)_\sharp$ and $\Gamma_R^- := \overline{\Omega_R} \cap \overline{W_{R,L}^-} = \{-\varepsilon R\} \times (0,H)_\sharp$. We impose on $u$ the weak continuity condition \begin{equation} u \in H^1(\Omega_{R+L})\,.\label{eq:weak-cont} \end{equation} A second condition is needed to replace the continuity of the flux, which may be expressed as \begin{equation} \label{eq:flux-R-L} \left[ e_1\cdot a\nabla u \right]_{\Gamma_R^\pm} = 0\,, \end{equation} where the bracket $[ . ]_{\Gamma_R^\pm}$ denotes the jump of a function across the interface $\Gamma_R^\pm$. Let us assume that the problem parameters $a$, $\omega$, $\delta$, and the truncation parameters $R$ and $L$ are fixed. Our first attempt to define a truncated problem is the following. \begin{quote} (P$_0$): Given $f$, find $u$ that satisfies \eqref {eq:Waveguide-R-L}--\eqref {eq:flux-R-L}. 
\end{quote} {\em Warning:} Problem (P$_0$) is not a useful truncated problem since it does not contain a partial differential equation in the boxes $W_{R,L}^\pm$. The main result of this work is the formulation of a more useful problem (P). Problem (P) will be defined with a function space $V$, which strengthens \eqref{eq:outgoing-R-L}, and with a sesquilinear form $\beta$, which encodes a weaker version of \eqref{eq:flux-R-L}, see \eqref {eq:beta-problem}. Theorem \ref {thm:existence} provides the solvability of this problem and a stability estimate. The numerical results presented in Section \ref {sec.numerics} are obtained using problem (P). \subsection{Literature} An outgoing wave condition for homogeneous media was suggested by Sommerfeld in 1912; today it is the accepted radiation condition for the full-space problem. If, for numerical purposes, the domain is truncated, the radiation condition must be replaced by a condition at a finite distance. One idea is to use a boundary condition that exploits a representation of the solution outside the truncated domain (integral representation or Dirichlet-to-Neumann map). Another idea is to introduce an absorbing layer that surrounds the truncated domain (perfectly matched layer technique). \paragraph{Radiation conditions in periodic media.} The two sketched ideas cannot easily be adapted to treat periodic media: Integral representations are not available, and a non-reflecting boundary condition is not exact, since a non-homogeneous medium always reflects waves in part. The derivation of perfectly matched layers typically requires an explicit representation of propagating modes, which is not available in periodic media. Outgoing wave conditions in periodic wave-guides have been introduced and analyzed e.g.\,in \cite {FlissJoly2016, Hoang2011}. Loosely speaking, a radiating solution is a function that consists, at large distances from the origin and up to small errors, of outgoing Bloch waves.
The two contributions \cite {FlissJoly2016, Hoang2011} treat the (globally) periodic wave-guide problem and the periodic half-wave-guide problem, respectively, and they contain existence results that are based on a limiting absorption principle. A slightly different radiation condition for the locally periodic wave-guide was suggested in \cite {Lamacz-Schweizer-Bloch}. Regarding further results on limiting absorption principles we mention \cite {JolyLiFliss2006, Radosz2015}. Other conditions are the ``modal radiation condition'', formulated in Definition 2.4 of \cite{Bonnet-Ben-etal-SIAP2009}, and the ``pole condition'' of \cite {Hohage-Schmidt-Z-2003}. We mention \cite {Nazarov2014} and the references therein for other approaches to radiation conditions. Regarding the general treatment of waves in periodic media (e.g.\,in photonic crystals) we refer to \cite {PhotonicCrystals-book, Kuchment-2001}. Regarding the tool of Bloch expansions and Bloch measures, we refer to \cite{Allaire-Conca-1998}. \paragraph{Numerical treatment of radiation conditions.} The numerical treatment of exact boundary conditions in an inhomogeneous material was considered in \cite{Fliss-Joly-2009}, extending the approach of \cite {JolyLiFliss2006} to a material that is inhomogeneous in two directions. Their approach uses Dirichlet-to-Neumann maps that are defined by half-infinite wave-guide problems. The authors provide an explanation of why none of the classical approaches to implement outgoing wave conditions at finite distance (local radiation condition, perfectly matched layers, standard Dirichlet-to-Neumann maps) can easily be adapted to periodic media. We note that, just as in the contribution at hand, the analysis of \cite{Fliss-Joly-2009} is restricted to the case with positive losses. The method was developed further into a numerical scheme in \cite{Joly-Fliss-2012}.
The ideas were applied in \cite{Fliss-2013} to the study of line defects in a periodic photonic crystal; these defects have also been analyzed in \cite{HoangRadosz-2014} with the result that a line defect cannot support finite energy modes (bound states). For extensions of the numerical scheme to Robin type boundary conditions see \cite {FlissKlindworthSchmidt2015}. \paragraph{Enriched finite elements.} Our numerical method is a Galerkin method in which we use two different types of ansatz functions: Standard piecewise linear hat functions in the interior of the domain and Bloch waves in the radiation boxes. The approach is reminiscent of enriched finite element methods, see e.g.\,\cite{KOH2011, SBH2006}. \paragraph{Negative refraction.} Photonic crystals can exhibit astonishing behavior; one of the most striking effects is negative refraction. When a planar wave hits the interface between free space and a photonic crystal, one part of the wave is reflected, and another part generates waves inside the photonic crystal. It has been observed that the waves in the crystal can travel in a direction that corresponds to negative refraction. There are two explanations for this effect. The first one is based on a study of a homogenized equivalent medium that replaces the photonic crystal. This replacement provides a good approximation if the periodicity is small compared to the wave-length. We refer to Figure \ref {F:comp_hom_om_sm} for the numerical results of our method in this case. Indeed, the results show a good agreement between the solution with the periodic medium on the right and the solution with the homogenized medium. If the homogenized medium happens to have a negative index of refraction, then negative refraction is visible in the homogenized problem and also in the periodic problem (not the case in Figure \ref {F:comp_hom_om_sm}).
For the underlying idea we mention \cite{Pendry2000}; for mathematical justifications we refer to \cite{BFe2, BouchitteSchweizer-Max, Lamacz-Schweizer-Max, Lamacz-Schweizer-Neg}. In \cite {EfrosPokrovsky-SolidState-2004, Pokrovsky2003333} the negative refraction effect is explained in the spirit of negative index materials. The second explanation of the negative refraction effect was observed and outlined in \cite {PhysRevB.65.201104}. The main point of \cite {PhysRevB.65.201104} is that negative refraction can occur between two materials with positive index (where no negative refraction occurs in the homogenization limit). The analysis is based purely on the study of the band structure of the left and right medium. Figure \ref {F:comp_hom_om_lg} illustrates this effect: The incoming wave from the left travels north-east. For the homogenized material in the right half (results of the bottom figure), the transmitted wave also travels north-east. In contrast, for wave-length and periodicity of comparable size (top figure), the transmitted wave travels south-east. Both \cite {Lamacz-Schweizer-Bloch} and the work at hand support the interpretation of \cite {PhysRevB.65.201104}: Negative refraction is possible in positive index materials. We emphasize that we use here the same photonic crystal that was also used in \cite {EfrosPokrovsky-SolidState-2004} and \cite {PhysRevB.65.201104}. This periodic medium does not have a negative effective index in the sense of homogenization. \section{Bloch expansion formalism and problem (P)}\label{S:Bloch} We first fix the notation of the Bloch expansion formalism. The formalism allows us, on the one hand, to define the projections that have already been used in condition \eqref {eq:outgoing-R-L}. On the other hand, we will be able to formulate the modified problem (P), which we suggest as a useful truncated problem. We assume in the following that the medium is $\varepsilon$-periodic on the right and on the left.
More precisely, for two $Y_\varepsilon$-periodic functions $a_+$ and $a_-$, we assume $a(x) = a_+(x)$ for $x_1 \ge \varepsilon R/2$ and $a(x) = a_-(x)$ for $x_1 \le - \varepsilon R/2$. We work in two space dimensions, but the methods are not restricted to this case. \subsection {Bloch formalism} For $\varepsilon>0$ let $Y_\varepsilon = \varepsilon (0,1)^2$ be the periodicity cell and let $H = \varepsilon K$ with $K\in \mathbb{N}$ be the height of the domain $\Omega = \mathbb{R}\times (0,H)_\sharp$. We use the finite index set $Q_{K} := \{0,\frac{1}{K}, \frac{2}{K}, \dots, \frac{K-1}{K}\}$ and employ a Pre-Bloch expansion in the vertical direction: Any function $u\in L^2_\mathrm{loc}(\mathbb{R}\times (0,H);\mathbb{C})$ can be expanded in periodic functions with phase-shifts: There is a unique family of $\varepsilon$-periodic functions $\Phi_{j_2}(x_1, \cdot)$ such that, in the sense of $L^2_\mathrm{loc}( \mathbb{R}\times (0,H); \mathbb{C})$, \begin{equation} \label{eq:discrete-u-1} u(x_1, x_2) = \sum_{j_2\in Q_{K}} \Phi_{j_2}(x_1, x_2)\, e^{2\pi {\rm i} j_2 x_2/\varepsilon}\,. \end{equation} The analogous result holds when we expand a function $u\in L^2((0,\varepsilon L)\times (0, \varepsilon K); \mathbb{C})$ in both directions $x_1$ and $x_2$. In this case one obtains for $j= (j_1, j_2)\in Q_L\times Q_K$ functions $\Phi_j = \Phi_j(x_1, x_2)$ that are $\varepsilon$-periodic in both directions. We regard the $\varepsilon$-periodic functions $\Phi_{j}$ for $j = (j_1, j_2)$ as maps $Y_\varepsilon \to \mathbb{C}$ and expand them in terms of eigenfunctions of the operator \begin{equation}\label{eq:L_j-operator} \mathcal{L}_j^\pm := -\left(\nabla + 2\pi {\rm i} j/\varepsilon\right) \cdot \left(a_\pm(x) \left(\nabla+ 2\pi {\rm i} j/\varepsilon\right)\right)\,, \end{equation} which is defined on $H^1_\sharp(Y_\varepsilon; \mathbb{C})$. 
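For illustration (this explicit case is not used in the sequel), consider the homogeneous medium $a_\pm \equiv 1$. The eigenfunctions of $\mathcal{L}_j^\pm$ are then the $\varepsilon$-periodic plane waves
\begin{equation*}
\Psi_{j,n}(x) = e^{2\pi {\rm i}\, n\cdot x/\varepsilon}\,,\qquad \mathcal{L}_j^\pm \Psi_{j,n} = \left(\frac{2\pi}{\varepsilon}\right)^2 |n+j|^2\, \Psi_{j,n}\,,\qquad n\in \mathbb{Z}^2\,,
\end{equation*}
and the product $\Psi_{j,n}(x)\, e^{2\pi {\rm i}\, j\cdot x/\varepsilon} = e^{2\pi {\rm i}\, (n+j)\cdot x/\varepsilon}$ is a plane wave that solves the Helmholtz equation with $\omega^2 = (2\pi/\varepsilon)^2 |n+j|^2$ and $f = 0$.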
The definition of the operator $\mathcal{L}_j^\pm$ is motivated by the following fact: If $\Psi_{j}^\pm$ is an eigenfunction of $\mathcal{L}_j^\pm$ with eigenvalue $\mu_j^\pm$, then $\Psi_{j}^\pm e^{2\pi {\rm i} j\cdot x / \varepsilon}$ is a solution to the Helmholtz equation \eqref {eq:Waveguide} with $a = a_\pm$ and $\omega^2 = \mu_j^\pm$ and $f=0$. \begin{definition}[Bloch eigenfunctions] \label{def:Bloch-eigenfunctions} For $\varepsilon>0$ and $j\in [0,1]^2$ we denote by $\left(\Psi^\pm_{j,m} \right)_{m\in\mathbb{N}_0}$ an orthogonal family of eigenfunctions to the symmetric operator $\mathcal{L}_j^\pm$, ordered to have $\mu_{m+1}^\pm(j)\geq \mu_m^\pm(j)$ for all $m\in\mathbb{N}_0$. We normalize with $-\hspace{-0.85em}\int_{Y_\varepsilon} | \Psi^\pm_{j,m} |^2 = 1$. \end{definition} The subsequent lemma is a classical result on Bloch expansions, see e.g.\,\cite{PBL-1978}. \begin{lemma}[Bloch expansion]\label{lem:Bloch-expansion} For $L, K \in\mathbb{N}$ and $\varepsilon>0$ we consider the rectangle $W = (0,\varepsilon L)\times (0,\varepsilon K)$ and $u\in L^2(W; \mathbb{C})$. For both eigenfunction families $(\Psi^+_{j, m})_{j,m}$ and $(\Psi^-_{j, m})_{j,m}$ the function $u$ possesses a unique expansion with coefficients $\alpha_{j,m}^\pm \in\mathbb{C}$ and convergence in $L^2(W;\mathbb{C})$: \begin{equation}\label{eq:bloch-exp-1} u(x) = \sum_{j\in Q_{L}\times Q_{K}} \sum_{m=0}^\infty \alpha^\pm_{j, m} \Psi^\pm_{j, m}(x)\, e^{2\pi {\rm i} j\cdot x / \varepsilon}\,. \end{equation} \end{lemma} We use the index-set $I_{L,K} := \{(j,m) | j\in Q_{L}\times Q_{K},\ m\in\mathbb{N}_0\}$ and multi-indices $\lambda = (j,m)\in I_{L,K}$, and define for $x\in (0, \varepsilon L)\times (0, \varepsilon K)$ \begin{align} \label{eq:abbreviationUlambda} U^\pm_\lambda(x) := \Psi^\pm_\lambda (x)\, e^{2\pi {\rm i} j\cdot x/\varepsilon}\,. 
\end{align} With this notation, \eqref {eq:bloch-exp-1} simplifies to \begin{equation} \label{eq:discrete-v-eps-Bloch} u(x) = \sum_{\lambda \in I_{L,K}} \alpha^\pm_\lambda U^\pm_\lambda (x)\,. \end{equation} We note that the Bloch basis functions inherit orthogonality properties from the eigenfunctions $\Psi^\pm_{j,m}$. Given $L, K\in \mathbb{N}$, we calculate on $W = (0,\varepsilon L)\times (0, \varepsilon K)$ for $\lambda = (j,m)$ and $\tilde\lambda = (\tilde j,\tilde m)$, $\lambda, \tilde{\lambda} \in I_{L,K}$: \begin{align*} \int_W \bar U^+_\lambda(x) U^+_{\tilde\lambda}(x) \, dx = \int_W \bar \Psi^+_\lambda (x) \Psi^+_{\tilde\lambda} (x) e^{2\pi {\rm i} (\tilde j - j) \cdot x/\varepsilon}\, dx = (\varepsilon^2 L K)\ \delta_{\lambda, \tilde\lambda}\,. \end{align*} Indeed, for $j\neq \tilde j$, the expression vanishes by Lemma A.1 of \cite {Lamacz-Schweizer-Bloch}. For $j= \tilde j$ and $m\neq \tilde m$, the expression vanishes by orthogonality of the eigenfunctions $\Psi^+_{j,m}$ for fixed $j$. In the remaining case $\lambda = \tilde\lambda$, the statement is a consequence of the normalization of $\Psi^+_\lambda$. The same applies to $(U^-_\lambda)_\lambda$. Due to the $L^2(W)$-orthogonality, we have the Plancherel formula \begin{align} \label{eq:L2normexpansion} \|u\|^2_{L^2(W)} = \varepsilon^2 LK \sum_{\lambda \in I_{L,K}}|\alpha^\pm_\lambda|^2\,. \end{align} \subsubsection*{Poynting numbers, index sets and projections} We study a fixed index $\lambda \in I := \{ \lambda = (j,m) | j\in [0,1]^2, m\in \mathbb{N}_0\}$. To the index $\lambda$ we associate two Bloch waves, $U_\lambda^+$ and $U_\lambda^-$ for the right domain and the left domain, respectively. The Bloch waves $U_\lambda^\pm$ can transport energy to the left or to the right; we now introduce the Poynting numbers $P^\pm_\lambda$, which indicate the direction of energy transport.
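As a consistency check of the sign convention, the following sketch evaluates the Poynting number of \eqref {eq:Poynting-def} below by numerical quadrature in the constant-coefficient case $a_\pm \equiv 1$, where the Bloch waves are plane waves $U(x) = e^{{\rm i} k\cdot x}$ and the exact value is $P = k_1$; the cell size and the wave vectors are assumed values, not taken from the experiments of this paper.

```python
import numpy as np

# Sketch (not from the paper's code): Poynting number in the constant-
# coefficient case a = 1, where U(x) = exp(i k.x) and
#   P = Im  avg_{Y_eps} conj(U) * dU/dx1 = k1 .
# The cell size eps and the wave vectors below are assumed values.
eps = 0.25
n = 32                                   # quadrature points per direction
x = (np.arange(n) + 0.5) * eps / n       # midpoint rule on the cell Y_eps
X1, X2 = np.meshgrid(x, x, indexing="ij")

def poynting(k1, k2):
    """Approximate P = Im avg conj(U) e1.(grad U) for the plane wave U."""
    U = np.exp(1j * (k1 * X1 + k2 * X2))
    dU1 = 1j * k1 * U                    # exact x1-derivative of U
    return float(np.mean(np.conj(U) * dU1).imag)

k = 2 * np.pi / eps                      # admissible wave numbers: multiples of k
assert abs(poynting(2 * k, k) - 2 * k) < 1e-9      # right-going: P = k1 > 0
assert abs(poynting(-3 * k, 0.0) + 3 * k) < 1e-9   # left-going:  P = k1 < 0
```

The sign of the computed number indicates the direction of energy transport, in accordance with the text.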
The sign of $P^+_\lambda$ coincides with the sign of the first component of the group velocity, see Theorem 3 in \cite{FlissJoly2016} and the explanation in Section 3.1 of \cite{Lamacz-Schweizer-Bloch}. We set \begin{equation} \label{eq:Poynting-def} P^\pm_\lambda := \Im\, \Xint-_{Y_\varepsilon} \bar U^\pm_\lambda (x)\, e_1\cdot \left[a_\pm(x)\nabla U^\pm_\lambda(x)\right]\, dx\,. \end{equation} \begin{definition}[Projections]\label{def:projection} Let $u\in L^2(W; \mathbb{C})$ be a function on the rectangle $W = (0,\varepsilon L)\times (0,\varepsilon K)$ with the discrete Bloch expansion \eqref {eq:discrete-v-eps-Bloch}. We define the projections $\Pi^\pm_{>0}$ onto right-going Bloch waves by \begin{equation}\label{eq:projection} \left( \Pi^\pm_{>0} u \right) (x) := \sum_{\stackrel{\lambda\in I_{L,K}}{P_\lambda^\pm>0}} \alpha^\pm_\lambda U^\pm_\lambda(x)\,. \end{equation} Projections $\Pi^\pm_{<0}$ onto left-going Bloch waves are defined accordingly. \end{definition} \paragraph{Sesquilinear forms $b^\pm$.} We consider two functions $u\in L^2(W;\mathbb{C})$ and $v\in H^1(W;\mathbb{C})$ on the rectangle $W = (0,\varepsilon L)\times (0,\varepsilon K)$. Two energy-flux sesquilinear forms are defined by \begin{equation} \label{eq:b-alt} b^\pm(u,v) :=\Xint-_{W}\bar u(x)\, e_1\cdot\left[a_\pm(x)\nabla v(x)\right]\,dx\,. \end{equation} The connection to the Poynting number $P^\pm_\lambda$ of \eqref {eq:Poynting-def} is expressed by \begin{align} \label{eq:relationbS-2} P^\pm_\lambda = \Im\, b^\pm\left(U^\pm_\lambda, U^\pm_\lambda\right)\,. \end{align} The following lemma has been shown in \cite {Lamacz-Schweizer-Bloch} for $L = K$; the proof in the general case requires only notational changes. \begin{lemma}[Orthogonality property of $b^\pm$] \label{lem:orthogonalwavenumber} Given $L, K \in \mathbb{N}$, let $\lambda, \tilde\lambda\in I_{L,K}$ be two indices with $\lambda=(j,m)$, $\tilde\lambda=(\tilde j,\tilde m)$ and $j\neq \tilde j$.
Then the basis functions $U^\pm_\lambda$ and $U^\pm_{\tilde\lambda}$ of \eqref {eq:abbreviationUlambda} satisfy \begin{align} \label{eq:orthogonalwavenumber} b^\pm(U_\lambda^\pm, U_{\tilde\lambda}^\pm) = 0 \,. \end{align} \end{lemma} \subsection{Problem (P)} We can now formulate the truncated problem (P). We propose this problem on a bounded domain as a replacement for the Helmholtz equation with a radiation condition. Our aim is to modify problem (P$_0$), which was defined by equations \eqref {eq:Waveguide-R-L}--\eqref {eq:flux-R-L}. In problem (P) we keep equation \eqref {eq:Waveguide-R-L}. Equation \eqref {eq:outgoing-R-L} is strengthened, see \eqref {eq:outgoing-R-L-eta} below. The continuity condition \eqref {eq:weak-cont} is kept and the flux condition \eqref {eq:flux-R-L} is weakened, see \eqref{eq:flux-R-L-eta}. The approximate problem is designed by choosing two index sets $I^+ \subset I_{L,K}$ and $I^- \subset I_{L,K}$. We recall that every element $\lambda \in I^\pm$ is of the form $\lambda = (j,m)$ with $j_1\in Q_L \subset [0,1]$, $j_2\in Q_K \subset [0,1]$, $m\in \mathbb{N}_0$. The index sets $I^\pm$ are chosen with the property \begin{equation}\label{eq:Ipm-poynt} \lambda\in I^+ \Rightarrow P^+_\lambda > 0, \quad \lambda\in I^- \Rightarrow P^-_\lambda < 0, \end{equation} i.e.\,Bloch waves of the right radiation box travel to the right and Bloch waves of the left radiation box travel to the left. Moreover, in the numerical computations, the sets $I^\pm$ are finite. For given sets $I^\pm$ we define the space \begin{equation} \label{eq:X_eta} X^\pm := \mathrm{span} \{ U^\pm_\lambda \,|\, \lambda \in I^\pm \}\,. \end{equation} In order to approximate the Helmholtz equation on the unbounded domain we first choose $R,L,\delta > 0$, and the two index sets $I^\pm$. Given these parameters, the aim is to find $u: \Omega_{R+L} \to \mathbb{C}$ that satisfies the following four conditions: (i) The Helmholtz equation \eqref {eq:Waveguide-R-L} on $\Omega_R$.
(ii) The radiation condition \eqref {eq:outgoing-R-L} in the strengthened form \begin{equation} \label{eq:outgoing-R-L-eta} \left\{ u\right\}^\pm_{R,L} \in X^\pm \,. \end{equation} We recall that, since $I^\pm$ consist only of indices of outgoing waves, the projections onto incoming waves in \eqref {eq:outgoing-R-L} vanish automatically. (iii) The weak continuity condition \eqref {eq:weak-cont}. (iv) A continuity condition that replaces \eqref {eq:flux-R-L}; we obtain this condition as the natural interface condition in a variational formulation of the problem. Problem (P) will be made precise in Definition \ref {def:problem-P} below. Essentially, the aim is to find $u$ that satisfies (i)--(iv). \subsubsection*{The variational formulation} In order to impose conditions (ii) and (iii), we seek $u$ in the infinite-dimensional function space \begin{equation} \label{eq:V_eta} V := \left\{ u\in H^1(\Omega_{R+L}) \left| \phantom{\int}\!\!\!\! u \text{ vertically periodic, } \left\{ u\right\}^+_{R,L} \in X^+, \left\{ u \right\}^-_{R,L} \in X^- \right. \right\}\,. \end{equation} Note that $V$ depends on the choice of the index sets $I^\pm$. We now formulate (i) (the Helmholtz equation \eqref {eq:Waveguide-R-L}) in a weak form, and, at the same time, encode the flux condition (iv). In order to make integration by parts possible, we introduce the special cut-off function $\vartheta : \mathbb{R}\to [0,1]$, defined by \begin{align*} \vartheta(\xi) := \begin{cases} 0 &\text{for } |\xi| \ge \varepsilon(R+L),\\ 1 &\text{for } |\xi| \le \varepsilon R,\\ (\varepsilon (R+L) - |\xi|)/(\varepsilon L)\quad &\text{else,} \end{cases} \end{align*} compare Figure \ref {fig:cut-off}. We regard $\vartheta$ also as a function on two-dimensional domains such as $\Omega_{R+L}$ by setting $\vartheta(x_1, x_2) = \vartheta(x_1)$.
\begin{figure}[ht] \centering \begin{tikzpicture}[scale = 0.7] \draw[->] (-9,0)--(9,0); \draw[thick] (-9,0)--(-8,0)--(-5,2)--(5,2)--(8,0)--(9,0); \draw[-] (-8,-0.14)--(-8,0.14); \draw[-] (-5,-0.14)--(-5,0.14); \draw[-] (5,-0.14)--(5,0.14); \draw[-] (8,-0.14)--(8,0.14); \draw[->,dashed] (0,0)--(0,3); \node[] at (8,-.55) {$\varepsilon (R+L)$}; \node[] at (5,-.55) {$\varepsilon R$}; \node[] at (-8,-.55) {$-\varepsilon (R+L)$}; \node[] at (-5,-.55) {$-\varepsilon R$}; \node[] at (0, -.55) {$0$}; \node[] at (1, 2.9) {$\vartheta(x_1)$}; \node[] at (9.5, 0.2) {$x_1$}; \end{tikzpicture} \caption{The cut-off function $\vartheta$} \label{fig:cut-off} \end{figure} In order to motivate the central definition of this article, we take the complex conjugate of the Helmholtz equation \eqref {eq:Waveguide-R-L} and multiply by the product $v\, \vartheta$, where $v\in V$ is arbitrary. We obtain \begin{align*} &\int_{\Omega_{R+L}} a \nabla \bar u\cdot \nabla (v\, \vartheta) - \int_{\Omega_{R+L}} (1 - {\rm i}\delta \mathrm{\bf 1}_{\Omega_R})\, \omega^2 \bar u\, v\, \vartheta = \int_{\Omega_{R+L}} \bar f\, v\,\vartheta\,. \end{align*} The gradient of $\vartheta$ can be expressed explicitly as $\nabla\vartheta = -(\varepsilon L)^{-1} e_1$ on $W_{R,L}^+$, $\nabla\vartheta = (\varepsilon L)^{-1} e_1$ on $W_{R,L}^-$, and $\nabla\vartheta = 0$ on $\Omega_R$. We use the above relation to define an approximate problem.
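The piecewise linear cut-off and its gradient values can be checked directly; the following sketch uses assumed values for $\varepsilon$, $R$ and $L$ and is not part of the implementation described later.

```python
import numpy as np

# Sketch with assumed parameters eps, R, L: the cut-off function theta and a
# finite-difference check of the stated gradient values
#   theta' = -1/(eps L) on W+,  +1/(eps L) on W-,  0 on Omega_R.
eps, R, L = 0.5, 4.0, 2.0

def theta(x1):
    return float(np.clip((eps * (R + L) - abs(x1)) / (eps * L), 0.0, 1.0))

def dtheta(x1, h=1e-7):
    return (theta(x1 + h) - theta(x1 - h)) / (2 * h)

assert theta(0.0) == 1.0 and theta(eps * R) == 1.0               # theta = 1 on Omega_R
assert theta(eps * (R + L)) == 0.0                               # theta = 0 outside
assert abs(dtheta(eps * (R + L / 2)) + 1.0 / (eps * L)) < 1e-6   # slope on W+
assert abs(dtheta(-eps * (R + L / 2)) - 1.0 / (eps * L)) < 1e-6  # slope on W-
assert dtheta(0.5 * eps * R) == 0.0                              # flat on Omega_R
```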
\begin{definition}[Problem (P)]\label{def:problem-P} Given $R,L,\delta>0$, the index sets $I^\pm$, and $f\in L^2(\Omega)$ with support in $\Omega_R$, a function $u\in V$ is called a solution to problem (P) if \begin{equation} \label{eq:P} \begin{split} \beta(u,v) := &\int_{\Omega_{R+L}} a \nabla \bar u\cdot \nabla v\, \vartheta - \int_{\Omega_{R+L}} (1 - {\rm i}\delta \mathrm{\bf 1}_{\Omega_R})\, \omega^2 \bar u\, v\, \vartheta\\ &\quad - \frac1{\varepsilon L} \int_{W_{R,L}^+} a \nabla \bar u\cdot e_1\, v + \frac1{\varepsilon L} \int_{W_{R,L}^-} a \nabla \bar u\cdot e_1\, v = \int_{\Omega_R} \bar f\, v \end{split} \end{equation} holds for every $v\in V$. \end{definition} \begin{remark}\label{rem:equiv} Problem (P) is formally equivalent to the Helmholtz equation \eqref {eq:Waveguide-R-L}. More precisely, the following holds: Let $u$ be a solution of (P). Then $u$ solves the Helmholtz equation \eqref{eq:Waveguide-R-L} on $\Omega_{R}$. Let $u\in H^1(\Omega_{R+L})$ be a solution of the Helmholtz equation \eqref{eq:Waveguide-R-L} on $\Omega_{R+L}$ with $\delta$ replaced by $\delta \mathrm{\bf 1}_{\Omega_R}$ and with the source $f$ supported in $\Omega_R$. Then $u$ satisfies \eqref {eq:P} (but not necessarily $u\in V$). \end{remark} \begin{proof} Regarding the first statement, we consider an arbitrary test-function $v \in C_c^\infty(\Omega_R)$. Then $v\, \vartheta = v$ and $\vartheta\, \nabla v = \nabla v$, and the integrals over $W_{R,L}^\pm$ vanish. Therefore \eqref {eq:P} is nothing but the weak formulation of \eqref {eq:Waveguide-R-L}. To verify the second statement, it suffices to take the complex conjugate of \eqref {eq:Waveguide-R-L}, to multiply by $v\, \vartheta$ and to integrate. No boundary terms appear in the integration by parts since $\vartheta$ vanishes for $x_1 = \pm \varepsilon (R+L)$.
\end{proof} \paragraph{The coupling condition in a special case.} Let us investigate solutions to (P) in the case $\delta=0$, assuming that $X^\pm$ is spanned by Bloch waves $U_\lambda^\pm$ that have exactly the eigenvalue $\omega^2$. We can integrate by parts in \eqref {eq:P} and obtain \begin{align*} &- \int_{\Omega_{R+L}} \nabla\cdot (a \nabla \bar u) v\, \vartheta - \int_{\Omega_{R+L}} \omega^2 \bar u\, v\, \vartheta\\ &\quad\qquad - \int_{\Gamma_R^+} [e_1\cdot a\nabla \bar u]_{\Gamma_R^+}\, v + \int_{\Gamma_R^-} [e_1\cdot a\nabla \bar u]_{\Gamma_R^-}\, v = \int_{\Omega_R} \bar f\, v\,. \end{align*} The function $u$ solves the Helmholtz equation in $\Omega_R$ by Remark \ref {rem:equiv}. On the other hand, as a linear combination of solutions, $u$ solves the Helmholtz equation also in $W_{R,L}^\pm$. This implies that the first two integrals cancel with the right hand side. Since $v$ was arbitrary in $V$, we have \begin{equation} \label{eq:flux-R-L-eta} \int_{\Gamma_R^\pm} [e_1\cdot a\nabla u]_{\Gamma_R^\pm}\, U_\lambda^\pm = 0\quad \text{ for every }\ U_\lambda^\pm\in X^\pm\,. \end{equation} In this sense, problem (P) implements a weak flux condition that replaces \eqref {eq:flux-R-L}. \section{Existence result} Problem (P) of Definition \ref {def:problem-P} reads: Find $u\in V$ that satisfies \begin{equation} \label{eq:beta-problem} \beta(u,v) = \int_{\Omega_R} \bar f\, v \qquad \forall v\in V\,. \end{equation} We will derive a coercivity result for the form $\beta$ and obtain, as a corollary, an existence result for problem (P). The coercivity result will be based on the following assumptions. \begin{assumption}\label{ass:assumptions} We introduce the following assumptions on the index sets $I^\pm$ and the corresponding spaces $X^\pm$. \begin{enumerate} \item[(A1)] Positive speed: There exists a positive number $c_0 > 0$ such that, for every $\lambda$ with $U_\lambda^\pm \in X^\pm$, there holds \begin{equation} \label{eq:pos-speed} \pm P_\lambda^\pm \ge c_0\,. 
\end{equation} \item[(A2)] For every pair of indices $\lambda = (j,m), \tilde\lambda = (\tilde j,\tilde m) \in I^\pm$ the wave numbers are different: $j\neq \tilde j$. \item[(A3)] Regularity: For some constant $C_0>0$, every $u\in X^\pm$ satisfies \begin{equation} \label{eq:inverse} \| u \|_{H^1(W)}^2 \le C_0 \| u \|_{L^2(W)}^2 \,. \end{equation} \end{enumerate} \end{assumption} \begin{remark} On Assumption \ref {ass:assumptions}. (i) Assumption (A2) is only used to exploit $b^\pm(U_\lambda^\pm, U_{\tilde\lambda}^\pm) = 0$ for $\tilde\lambda \neq \lambda$. The assumption is not essential for the numerical scheme. (ii) If Assumption (A2) is satisfied, then the sets $I^+$ and $I^-$ are necessarily finite and the spaces $X^\pm$ are finite-dimensional. (iii) When $I^+$ and $I^-$ are finite sets, (A3) is automatically satisfied since all functions $U_\lambda^\pm$ possess $H^1$-regularity. Under the same assumption, (A1) is satisfied if and only if $X^\pm$ contains no wave $U_\lambda^\pm$ that travels in the vertical direction. \end{remark} \begin{theorem}[Existence result for problem (P)] \label{thm:existence} Let $R, L, \delta$ be positive parameters and let $f\in L^2(\Omega)$ be a function with support in $\Omega_R$. Let the coefficient function $a\in L^\infty(\Omega; \mathbb{R})$ be bounded from below by $a_0>0$ and identical to $Y_\varepsilon$-periodic functions $a_\pm$ for $\pm x_1 > \varepsilon R/2$. Let the index sets $I^\pm$ satisfy properties (A1)--(A3) of Assumption \ref {ass:assumptions} with constants $c_0, C_0 > 0$. Then problem (P) of Definition \ref {def:problem-P} has a unique solution $u$. For a constant $C = C(R, L, a_0, \delta, c_0, C_0)$ we have the stability estimate \begin{equation} \label{eq:sol-est} \| u \|_{H^1(\Omega_{R+L})} \le C \| f \|_{L^2(\Omega_{R})}\,. \end{equation} \end{theorem} We derive the above theorem with a constant $C$ that satisfies $C\sim \delta^{-1}$ for small $\delta$.
The numerical experiments show a much better behavior of the solution $u$: The scheme has good convergence properties even for $\delta = 0$. \begin{proof}[Proof of Theorem \ref {thm:existence}] The aim in the following is to derive, for two numbers $\sigma, \gamma > 0$, a coercivity estimate of the form \begin{equation} \label{eq:coercivity} \Im\, \beta(u,u) + \sigma\delta\ \Re\, \beta(u,u) \ge \gamma \| u\|_{H^1(\Omega_{R+L})}^2\,. \end{equation} We obtain this result as relation \eqref {eq:coercivity-H1-RL} in Proposition \ref {prop:H1RL-lower}. The Lax-Milgram lemma implies the existence statement of Theorem \ref {thm:existence}. We note that the Lax-Milgram lemma in complex Hilbert spaces is applicable for sesquilinear forms that satisfy a coercivity estimate of the form \eqref {eq:coercivity}. We refer to \cite {Alt-FA} for a proof of the Lax-Milgram lemma that works with the coercivity assumption $| \beta(u,u) | \ge \gamma \| u \|_{H^1}^2$, which is implied by \eqref {eq:coercivity}. Let us recall here the main point of the proof, which is the derivation of estimate \eqref {eq:sol-est} for solutions of \eqref {eq:beta-problem}: Using $v = u\in V$ as a test function in \eqref {eq:beta-problem} and exploiting \eqref {eq:coercivity} yields \begin{align*} \gamma \| u\|_{H^1(\Omega_{R+L})}^2 &\le \Im\, \beta(u,u) + \sigma\delta\ \Re\, \beta(u,u) \le (1 + \sigma\delta) | \beta(u,u) |\\ &\le (1 + \sigma\delta) \left| \int_{\Omega_R} \bar f\, u \right| \le (1 + \sigma\delta) \| f\|_{L^2(\Omega_R)} \| u\|_{L^2(\Omega_R)} \,. \end{align*} This provides \eqref {eq:sol-est} with the constant $C = (1 + \sigma\delta) \gamma^{-1}$. \end{proof} \subsection{Coercivity in $L^2$} The main feature of the sesquilinear form $\beta$ is the positivity of the imaginary part. Moreover, the imaginary part controls certain norms of the argument. \begin{lemma}[$L^2$-coercivity]\label{lem:L2R-lower} Let the index sets $I^\pm$ satisfy the outgoing wave property \eqref{eq:Ipm-poynt}.
Then the sesquilinear form $\beta$ of Definition \ref {def:problem-P} satisfies the following $L^2$-coercivity estimate: \begin{equation} \label{eq:coercivity-L2-R} \Im\, \beta(u,u) \ge \delta\omega^2 \| u\|_{L^2(\Omega_{R})}^2 \qquad \forall u\in V\,. \end{equation} Let additionally the positive speed property \eqref {eq:pos-speed} be satisfied in $X^\pm$ with the constant $c_0>0$. Then we have the stronger estimate \begin{equation} \label{eq:coercivity-L2-W} \Im\, \beta(u,u) \ge \delta\omega^2 \| u\|_{L^2(\Omega_{R})}^2 + \frac{c_0}{\varepsilon L}\, \left( \| u\|_{L^2(W^+_{R,L})}^2 + \| u\|_{L^2(W^-_{R,L})}^2\right)\,. \end{equation} \end{lemma} \begin{proof} Let $u\in V$ be arbitrary. By definition of the space $V$, the function $u$ can be expanded in Bloch waves in the two rectangles $W_{R,L}^\pm$. We write the shifted functions as \begin{equation} \label{eq:expand-box} \left\{ u\right\}^+_{R,L} = \sum_{\lambda\in I^+} \alpha_\lambda^+ U_\lambda^+\,,\qquad \left\{ u\right\}^-_{R,L} = \sum_{\lambda\in I^-} \alpha_\lambda^- U_\lambda^-\,. \end{equation} With $\beta$ from \eqref {eq:P} we now calculate $\beta(u,u)$. The integrals over $W_{R,L}^\pm$ can be expressed with the sesquilinear forms $b^\pm$ from \eqref{eq:b-alt}.
The orthogonality property of Lemma \ref {lem:orthogonalwavenumber} allows us to calculate \begin{align*} \beta(u,u) &= \int_{\Omega_{R+L}} a |\nabla u|^2\, \vartheta - \int_{\Omega_{R+L}} (1- {\rm i}\delta \mathrm{\bf 1}_{\Omega_R})\, \omega^2 | u |^2 \vartheta\\ &\quad - \varepsilon K\, \overline{b^+}\left(\left\{ u\right\}^+_{R,L} , \left\{ u\right\}^+_{R,L}\right) + \varepsilon K\, \overline{b^-}\left(\left\{ u\right\}^-_{R,L} , \left\{ u\right\}^-_{R,L}\right)\\ &= \int_{\Omega_{R+L}} a |\nabla u|^2\, \vartheta - \int_{\Omega_{R+L}} (1- {\rm i}\delta \mathrm{\bf 1}_{\Omega_R})\, \omega^2 | u |^2 \vartheta\\ &\quad - \varepsilon K\, \sum_{\lambda\in I^+} |\alpha_\lambda^+|^2 \ \overline{b^+}(U_\lambda^+, U_\lambda^+) + \varepsilon K\, \sum_{\lambda\in I^-} |\alpha_\lambda^-|^2 \ \overline{b^-}(U_\lambda^-, U_\lambda^-)\,. \end{align*} Taking the imaginary part and using the definition of $P^\pm$ from \eqref {eq:relationbS-2}, we obtain \begin{equation}\label {eq:Im-beta-u-u} \Im\, \beta(u,u) = \int_{\Omega_R} \delta\, \omega^2 | u |^2 + \varepsilon K \sum_{\lambda\in I^{+}} |\alpha_\lambda^+|^2 P_\lambda^+ - \varepsilon K \sum_{\lambda\in I^{-}} |\alpha_\lambda^-|^2 P_\lambda^-\,. \end{equation} Non-negativity of $P_\lambda^+$ for $\lambda\in I^{+}$ and non-positivity of $P_\lambda^-$ for $\lambda\in I^{-}$ imply the lower bound \eqref {eq:coercivity-L2-R}. \smallskip {\em Estimate \eqref {eq:coercivity-L2-W}.} In the case that the positive speed property is satisfied, the box integrals yield a strictly positive contribution.
Inserting \eqref {eq:pos-speed} into \eqref {eq:Im-beta-u-u} we find \begin{equation} \begin{split} \Im\, \beta(u,u) &\ge \int_{\Omega_R} \delta\, \omega^2 | u |^2 +c_0 \varepsilon K \left( \sum_{\lambda\in I^{+}} |\alpha_\lambda^+|^2 + \sum_{\lambda\in I^{-}} |\alpha_\lambda^-|^2 \right)\\ &\ge \int_{\Omega_R} \delta\, \omega^2 | u |^2 +\frac{c_0}{\varepsilon L} \left( \int_{W_{R,L}^+} |u|^2 + \int_{W_{R,L}^-} |u|^2 \right)\,, \end{split}\label{eq:L2-intermediate-834} \end{equation} where we used the Plancherel formula \eqref {eq:L2normexpansion} in the last line. This yields \eqref {eq:coercivity-L2-W}. \end{proof} \begin{remark} (i) The $L^2(\Omega_R)$ coercivity of Lemma \ref {lem:L2R-lower} is not sufficient for an existence result since the sesquilinear form $\beta$ is defined on $H^1(\Omega_{R+L})$. (ii) The lower bound in \eqref {eq:coercivity-L2-R} depends on $\delta$. This fact is discouraging when one seeks to perform a limiting absorption principle, which needs estimates that are uniform in $\delta$. By contrast, considering only the norm of the solution in the radiating boxes $W_{R,L}^\pm$, the bound in \eqref {eq:coercivity-L2-W} is independent of $\delta$. This $\delta$-independent bound seems to be the reason for the numerically observed stability of problem (P): The scheme works well even for $\delta = 0$. \end{remark} \subsection{Coercivity in $H^1$} We turn now to the coercivity estimate that corresponds to the chosen function space. The two assumptions in \eqref {eq:ass-b-sigma} and \eqref {eq:ass-b-Re-Im} essentially demand the smallness of $\sigma$ in comparison to $1$ and to $c_0/\delta$. \begin{proposition}[$H^1(\Omega_{R+L})$-coercivity] \label{prop:H1RL-lower} Let the index sets $I^{\pm}$ satisfy properties (A1)--(A3) of Assumption \ref {ass:assumptions} with constants $c_0, C_0 > 0$. 
Let $\sigma>0$ be small enough to satisfy the two properties \begin{align} \label{eq:ass-b-sigma} \sigma &\le \min \left\{ \frac12 , \frac{c_0}{4 \varepsilon L \delta \omega^2} \right\}\,,\\ 2 \sigma\delta\ \left| \Re\, b^\pm(U_\lambda^\pm, U_\lambda^\pm) \right| &\le \Im\, b^\pm(U_\lambda^\pm, U_\lambda^\pm) \qquad\forall \lambda \text{ with } U_\lambda^\pm \in X^\pm\,. \label{eq:ass-b-Re-Im} \end{align} Then there exists $\gamma = \gamma(c_0, \delta, \sigma, \omega, a_0) >0$ such that, for every $u\in V$, \begin{equation} \label{eq:coercivity-H1-RL} \Im\, \beta(u,u) + \sigma\delta\ \Re\, \beta(u,u) \ge \gamma \| u\|_{H^1(\Omega_{R+L})}^2\,. \end{equation} \end{proposition} \begin{proof} Relation \eqref {eq:coercivity-L2-W} of Lemma \ref {lem:L2R-lower} together with \eqref {eq:Im-beta-u-u} provides the following lower bound for the imaginary part of $\beta(u,u)$: \begin{equation} \label{eq:L2-result-2354} \begin{split} \Im\, \beta(u,u) \ge\, & \delta\omega^2 \| u\|_{L^2(\Omega_{R})}^2 + \frac{c_0}{2\varepsilon L}\, \left( \| u\|_{L^2(W^+_{R,L})}^2 + \| u\|_{L^2(W^-_{R,L})}^2\right)\\ &+ \frac{\varepsilon K}{2} \sum_{\lambda\in I^{+}} |\alpha_\lambda^+|^2 P_\lambda^+ - \frac{\varepsilon K}{2} \sum_{\lambda\in I^{-}} |\alpha_\lambda^-|^2 P_\lambda^-\,. \end{split} \end{equation} We now evaluate the real part of $\beta(u,u)$ from its definition in \eqref {eq:P}. After a multiplication with the factor $\sigma\delta$ we find, with the orthogonality \eqref {eq:orthogonalwavenumber}, \begin{equation} \label{eq:L2-result-real} \begin{split} &\sigma\delta\ \Re\, \beta(u,u) = \, \sigma\delta\, \int_{\Omega_{R+L}} a |\nabla u|^2\, \vartheta - \sigma\delta\, \int_{\Omega_{R+L}} \omega^2 |u|^2 \vartheta\\ &\quad - \sigma\delta\, \varepsilon K\, \sum_{\lambda\in I^{+}} |\alpha_\lambda^+|^2 \, \Re\, b^+(U_\lambda^+, U_\lambda^+) + \sigma\delta\, \varepsilon K\, \sum_{\lambda\in I^{-}} |\alpha_\lambda^-|^2 \, \Re\, b^-(U_\lambda^-, U_\lambda^-)\,. 
\end{split} \end{equation} Due to assumption \eqref {eq:ass-b-Re-Im} on $\sigma$, the two sums in \eqref {eq:L2-result-real} are smaller in absolute value than the two sums in \eqref {eq:L2-result-2354}. Due to assumption \eqref {eq:ass-b-sigma} on $\sigma$, the second integral in \eqref {eq:L2-result-real} is bounded in absolute value by half of the first two contributions on the right-hand side of \eqref {eq:L2-result-2354}. We therefore obtain \begin{equation*} \begin{split} \Im\, \beta(u,u) + \sigma\delta\ \Re\, \beta(u,u) \ge &\frac{\delta\omega^2}{2} \| u\|_{L^2(\Omega_{R})}^2 + \frac{c_0}{4\varepsilon L}\, \left( \| u\|_{L^2(W^+_{R,L})}^2 + \| u\|_{L^2(W^-_{R,L})}^2\right)\\ & + \sigma\delta a_0 \int_{\Omega_{R+L}} |\nabla u|^2\, \vartheta \,, \end{split} \end{equation*} where $a \ge a_0$ was used. The inverse estimate \eqref {eq:inverse} implies that the right-hand side controls the squared $H^1(\Omega_{R+L})$-norm. We thus arrive at \eqref {eq:coercivity-H1-RL}. \end{proof} \begin{remark} An inspection of the assumptions on $\sigma$ shows that the coercivity constant $\gamma$ has the properties $\gamma\sim \delta$ for small $\delta>0$ and $\gamma\sim \sigma \sim c_0$ for small $c_0 >0$. \end{remark} \section{Numerical method and examples} \label{sec.numerics} \subsection{Numerical method} \subsubsection*{Finite element discretization of problem (P)} Our aim is to approximate problem (P) with a finite element method (FEM), using an enriched, problem-adapted set of basis functions. Once a finite dimensional subspace $V_h$ of $V$ is defined through its basis functions, we obtain a discretization of problem (P) in a natural way. Our construction uses piecewise linear hat functions on a triangular mesh. More precisely, the space $V_h$ is spanned by standard (piecewise linear) hat functions in $\Omega_R$ and by (approximations of) Bloch waves in the radiation boxes. 
We use Bloch waves $U_\lambda^+$ with a positive Poynting number in the radiation box $W_{R,L}^+$ and $U_\lambda^-$ with a negative Poynting number in the box $W_{R,L}^-$. The Bloch waves themselves are computed with piecewise linear hat functions. \paragraph{Choice of a regular grid.} We use a (uniform) triangulation with right-angled triangles on $\overline{\Omega_{R+L}}$. The fineness parameters $h_1>0$ and $h_2>0$ denote the lengths of the triangle legs in the $x_1$ and $x_2$ directions, respectively. The grid points $x^{(k)}\in \overline{\Omega_{R+L}}$, $k=1,\dots, N_h$ are enumerated so that $$ x^{(k)} \in \begin{cases} (-\varepsilon R,\varepsilon R)\times [0,\varepsilon K) &\text{ for } k=1,\dots,N_0, \\ \overline{W^+_{R,L}} &\text{ for } k=N_0+1,\dots,N_0+N_W,\\ \overline{W^-_{R,L}} &\text{ for } k=N_0+N_W+1,\dots,N_0+2N_W=N_h.\\ \end{cases} $$ \paragraph{Hat functions.} To the grid we assign the standard piecewise linear hat functions $\phi_k,k=1,\dots,N_h$ with $\phi_k(x^{(l)})=\delta_{k,l}$ for $k,l=1,\dots,N_h$. To impose vertical periodicity in $\Omega_R$, each hat function $\phi_k$ with $x^{(k)}\in [-\varepsilon (R+L),\varepsilon (R+L)]\times \{0\}$ (i.e.\,a lower boundary point) consists of the hat function half corresponding to $x^{(k)}$ and the hat function half corresponding to the artificial grid point $(x_1^{(k)},x_2^{(k)}+\varepsilon K)$. These hat functions (and, hence, any linear combination thereof) are periodic in the vertical direction. \paragraph{Bloch waves.} Next, we define the Bloch wave basis functions. For each selected wave vector $j$ we first solve the eigenvalue problems $\mathcal{L}^\pm_j \Psi_j^\pm=\mu^\pm(j)\Psi_j^\pm$ on the cube $Y_\varepsilon$ with periodic boundary conditions. The FEM-solutions to these problems are denoted $\Psi_{j,m}^{\pm,h}$, $m\in \mathbb{N}$. 
For each $m$ in an appropriately chosen subset of $\mathbb{N}$ we extend the solution by periodicity onto the radiation box $\overline{W^\pm_{R,L}}$ and use \eqref{eq:abbreviationUlambda} to define $U_{\lambda}^{\pm,h}$ (recall that $\lambda=(j,m)$). Each Bloch wave $U_{\lambda}^{\pm,h}$ is extended by zero to all grid points outside $\overline{W^\pm_{R,L}}$. Our selected set of indices $\lambda$ is denoted by $I^{\pm,h}$ and is specified below. The resulting Bloch waves can be written, for each $\lambda_i\in I^{\pm,h}$, as \begin{equation}\label{E:Bloch-hat} U_{\lambda_i}^{+,h}(x)=\sum_{k=N_0+1}^{N_0+N_W}\kappa_k^{+,i}\phi_k(x), \quad U_{\lambda_i}^{-,h}(x)=\sum_{k=N_0+N_W+1}^{N_h}\kappa_k^{-,i}\phi_k(x)\,, \end{equation} with coefficients $\kappa_k^{\pm,i}\in \mathbb{C}$ for all $i,k$. We emphasize that the functions $U_{\lambda_i}^{\pm,h}$ are continuous on $\overline{\Omega_{R+L}}$ for all $i$. On the other hand, they are concentrated in the radiation boxes in the sense that $U_{\lambda_i}^{+,h}(x^{(k)})=0$ for $x^{(k)}_1<\varepsilon R$, $U_{\lambda_i}^{-,h}(x^{(k)})=0$ for $x^{(k)}_1>-\varepsilon R$. The sets $I^{\pm,h}$ are discrete analogs of $I^\pm$ satisfying (A1)--(A3) in Assumption \ref{ass:assumptions}. However, in contrast to $I^\pm$, for the numerics we choose the $j$-domain to be $\mathbb{B}:=\left(-\tfrac{1}{2},\tfrac{1}{2}\right]^2,$ such that $\frac{2\pi}{\varepsilon}\mathbb{B}$ is the standard Brillouin zone corresponding to the periodicity cell $Y_\varepsilon$. This symmetric choice has the advantage that the band structure plots clearly show the conical shape in the case of homogeneous media. 
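For illustration only, the periodic extension with the Bloch phase can be sketched as follows. This is our own sketch (the function name is hypothetical), assuming the standard Bloch form $U_{(j,m)}(x)=e^{2\pi\mathrm{i}\,j\cdot x/\varepsilon}\,\Psi_{j,m}(x)$ behind \eqref{eq:abbreviationUlambda} and values of $\Psi_{j,m}$ on a uniform grid of one periodicity cell:

```python
import numpy as np

def extend_bloch_wave(psi_cell, j, eps, n_cells_x1, n_cells_x2):
    """Extend a cell-periodic eigenfunction psi_cell (values on a uniform
    grid of one periodicity cell Y_eps, shape (M1, M2)) to a box of
    n_cells_x1 x n_cells_x2 cells, multiplying by the Bloch phase
    exp(2*pi*i * j . x / eps)."""
    M1, M2 = psi_cell.shape
    h1, h2 = eps / M1, eps / M2
    # grid coordinates of the extended box
    x1 = h1 * np.arange(M1 * n_cells_x1)
    x2 = h2 * np.arange(M2 * n_cells_x2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    # tile the cell-periodic part and multiply by the phase factor
    psi_ext = np.tile(psi_cell, (n_cells_x1, n_cells_x2))
    phase = np.exp(2j * np.pi * (j[0] * X1 + j[1] * X2) / eps)
    return psi_ext * phase
```

On the grid, the Bloch property $U(x+\varepsilon e_k)=e^{2\pi\mathrm{i}\,j_k}\,U(x)$ then holds exactly.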
The set $Q_{K}$ from Section \ref{S:Bloch} (defined to ensure the vertical $\varepsilon K$-periodicity of the solution in $W^\pm_{R,L}$) needs to be modified to $$ Q'_K:=\begin{cases} \{-\frac{K-2}{2K},-\frac{K-4}{2K},\dots, \frac{1}{2}\} , & \text{for } K\in 2\mathbb{N},\\ \{-\frac{K-1}{2K},-\frac{K-3}{2K},\dots, \frac{K-1}{2K}\} , & \text{for } K\in 2\mathbb{N}+1. \end{cases} $$ The Poynting numbers $P_\lambda^\pm$ are computed via a numerical quadrature of \eqref{eq:Poynting-def} in the piecewise linear finite element space. \paragraph{Approximation of the space $V$.} Assuming for simplicity that the number of Bloch basis functions is the same in both radiation boxes, $|I^{+,h}|= |I^{-,h}|=: N_{\text{Bl}}\in \mathbb{N}$, we can now define the finite dimensional space as \begin{equation}\label{E:Vh} V_h:= \text{span}\{\psi_1, \dots,\psi_{N_0+2N_{\text{Bl}}}\}, \end{equation} where $$\psi_k:=\begin{cases} \phi_k \ &\text{for} \ k=1, \dots N_0,\\ U_{\lambda_i}^{+,h} \ \text{with} \ \lambda_i\in I^{+,h} \ &\text{for} \ k=N_0+i, i= 1,\dots, N_{\text{Bl}},\\ U_{\lambda_i}^{-,h} \ \text{with} \ \lambda_i\in I^{-,h} \ &\text{for} \ k=N_0+N_{\text{Bl}}+i, i= 1,\dots, N_{\text{Bl}}. \end{cases}$$ \paragraph{Discretization of problem (P).} The finite dimensional subspace $V_h \subset V$ defines a discrete problem (P). 
The complex conjugate of \eqref{eq:P} can be written with matrices and coordinate vectors as $${\bf A}\vec{U}-{\bf B}\vec{U}-\omega^2{\bf M}^{(\delta)}\vec{U}=\vec{F}\,.$$ Here $\vec{U} \in \mathbb{C}^{N_0+2N_{\text{Bl}}}$ is the unknown coordinate vector and the matrix entries are, for each $k,l=1,\dots,N_0+2N_{\text{Bl}}$, $$ \begin{array}{rlrl} {\bf A}_{k,l}&=\int_{\Omega_{R+L}} a\vartheta \nabla \overline{\psi_k}\cdot \nabla \psi_l, &{\bf B}_{k,l} &=\frac{1}{\varepsilon L}\left(\int_{W^+_{R,L}}a\overline{\psi_k}e_1 \cdot \nabla \psi_l-\int_{W^-_{R,L}}a\overline{\psi_k}e_1 \cdot \nabla \psi_l\right),\\ {\bf M}^{(\delta)}_{k,l}&=(1+{\rm i} \delta{\bf 1}_{\Omega_R})\int_{\Omega_{R+L}}\overline{\psi_k}\psi_l, &F_k &= \int_{\Omega_R} f\overline{\psi_k}. \end{array} $$ Due to the representation \eqref{E:Bloch-hat}, all integrals involving the Bloch waves $U^{\pm,h}_{\lambda_i}$ can be evaluated using solely integrals of hat functions $\phi_k$. \subsubsection*{Numerical implementation caveats} \paragraph{Choice of the Bloch indices $I^{\pm,h}$.} A suitable choice of the index sets $I^{\pm,h}$ is crucial for an efficient and accurate numerical scheme. A direct analog of $I^\pm$ satisfying $I^\pm\subset I_{L,K}$ and assumptions (A1)--(A2) can be built by the following procedure. First, for each $j\in Q'_L\times Q'_K$ one solves the Bloch eigenvalue problems $\mathcal{L}^\pm_j \Psi_j^\pm=\mu^\pm(j)\Psi_j^\pm$ in the FEM-approximation. For each $j$ one keeps only the eigenvalue $\mu_m^\pm(j)$ closest to $\omega^2$, which selects a natural number $m$ for every vector $j$. Subsequently, one filters out eigenvalues $\mu^+_m(j)$ with a non-positive Poynting number of the Bloch wave and eigenvalues $\mu^-_m(j)$ with a non-negative Poynting number of the Bloch wave. The remaining pairs $(j,m)$ define the sets $I^{\pm,h}$. 
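In pseudocode form, this basic selection can be sketched as follows. This is our own sketch; `bloch_eigs` is a hypothetical stand-in for the FEM eigenvalue solver and is assumed to return, for each $j$, triples of band index, eigenvalue, and Poynting number of the corresponding Bloch wave:

```python
def QK_prime(K):
    """Admissible vertical wave numbers j_2 ensuring eps*K-periodicity:
    K points with spacing 1/K, as defined in the text (even/odd K)."""
    start = -(K - 2) / (2 * K) if K % 2 == 0 else -(K - 1) / (2 * K)
    return [start + i / K for i in range(K)]

def select_indices(bloch_eigs, j_list, omega, sign):
    """Basic construction of I^{+,h} (sign=+1) or I^{-,h} (sign=-1):
    for each j keep only the eigenvalue mu_m(j) closest to omega^2,
    then keep the pair (j, m) only if the Poynting number of the
    Bloch wave has the appropriate sign."""
    selected = []
    for j in j_list:
        eigs = bloch_eigs(j)  # list of triples (m, mu_m(j), Poynting number)
        m, mu, P = min(eigs, key=lambda e: abs(e[1] - omega**2))
        if sign * P > 0:
            selected.append((j, m))
    return selected
```

In the actual scheme, `j_list` runs over $Q'_L\times Q'_K$ and the Poynting numbers are obtained by the quadrature of \eqref{eq:Poynting-def} mentioned above.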
Numerical tests have shown that this approach works, but more accurate results are obtained when the horizontal $L$-periodicity requirement (i.e.\,$j_1\in Q_L'$) is dropped. We take the liberty of choosing points $(j,m)\in I^{\pm,h}$ so that the frequency level $\omega^2$ is better realized: $|\omega^2-\mu^\pm_m(j)|$ is minimized. In practice, we first solve the eigenvalue problems $\mathcal{L}^\pm_j \Psi_j^\pm=\mu^\pm(j)\Psi_j^\pm$ in the FEM-approximation for all $j$ (with $j_2\in Q'_K$) on a selected $j_1$-mesh of $(-1/2, 1/2]$. For each $j$ we save only the eigenvalue $\mu_m^\pm(j)$ closest to $\omega^2$. Subsequently, for each $j_2 \in Q'_K$ we search for intersections of the line $\left(-\tfrac{1}{2},\tfrac{1}{2}\right]\times\{j_2\}$ with the level set of the band structure at $\omega^2$. Such intersections can occur for several eigenvalue families $\mu^\pm_m$, $m\in \mathbb{N}$. For each such family $\mu^\pm_m$ the intersections are found by interpolation, producing an approximation of the $j_1$-coordinates at which $\mu^\pm_m(j_1,j_2)=\omega^2$. The resulting pair $(m,(j_1,j_2))$ is then included in the set $I^{\pm,h}$ if the Poynting number of the Bloch wave $U_{(j,m)}^{\pm,h}$ has the appropriate sign. Including these intersection points in $I^{\pm,h}$ leads to the same accuracy of the calculations with a much smaller number $N_\text{Bl}$. \paragraph{Orthogonalization of the Bloch waves $U_{\lambda_i}^{\pm,h}$ with $\lambda_i \in I^{\pm,h}$.} The index set $I^{\pm,h}$ selected by the above procedure typically contains $j$-points lying no further than $\tfrac{\pi}{\varepsilon K}$ apart. This separation is small for $K$ large and the corresponding Bloch waves $U^{+,h}_{\lambda_i},\lambda_i\in I^{+,h}$ are similar if they belong to the same band index $m$. In order to keep the condition number of ${\bf A}$ small, we $L^2$-orthonormalize the set $U^{+,h}_{\lambda_i},\lambda_i\in I^{+,h}$ via the modified Gram-Schmidt procedure. The superscript ``$-$" is treated analogously. 
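The level-set interpolation used in the refined choice of $I^{\pm,h}$ above can be sketched as follows (our own illustration; piecewise linear interpolation of the sampled band between neighboring mesh points is assumed, which suffices on a fine $j_1$-mesh):

```python
import numpy as np

def find_crossings(j1_mesh, mu_values, omega):
    """Locate the j_1-coordinates at which a sampled band
    j1 -> mu_m(j1, j2) crosses the level omega^2, by linear
    interpolation between neighboring mesh points."""
    target = omega**2
    f = np.asarray(mu_values) - target
    crossings = []
    for k in range(len(j1_mesh) - 1):
        if f[k] == 0.0:
            crossings.append(j1_mesh[k])      # mesh point exactly on the level set
        elif f[k] * f[k + 1] < 0:             # sign change: one crossing inside
            t = f[k] / (f[k] - f[k + 1])
            crossings.append(j1_mesh[k] + t * (j1_mesh[k + 1] - j1_mesh[k]))
    return crossings
```

For a linear band the interpolated crossing is exact; in general the error is of second order in the $j_1$-mesh width.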
\paragraph{Scattering problem with an incoming wave.} When studying scattering problems with an incoming field $u^{(\text{in})}$ supported on $x_1<0$, we use the following method to transform $u^{(\text{in})}$ into the inhomogeneity $f$ (without changing $f$ on $x_1\geq 0$). We set $$u_\theta:= u-u^{(\text{in})}\theta \quad \text{with} \ \theta(x_1,x_2)=\theta(x_1)=\begin{cases}1, & x_1<-\varepsilon R\\ 1-\tanh(d(x_1+\tfrac{\varepsilon R}{2})), & x_1\in [-\varepsilon R,0)\\ 0, & x_1\geq 0,\end{cases}$$ where $d$ is sufficiently large to ensure that $\theta$ is close to zero at $x_1=0$. This leads to the transformed problem \begin{equation}\label{E:Helmh_incom} -\nabla \cdot (a\nabla u_\theta)-\omega^2(1+{\rm i} \delta)u_\theta=\tilde{f}, \quad \tilde{f}:=f+2a\nabla u^{(\text{in})}\cdot \nabla \theta+u^{(\text{in})}a\Delta \theta\,, \end{equation} which we treat exactly as described above. \subsection{Numerical results I: Comparison with homogenization}\label{S:comp_hom} In our first numerical example we consider the interface between a homogeneous and a periodic material and study a single incoming plane wave $u^{(\text{in})} = e^{{\rm i} j^{(\text{in})}\cdot x}$, $j^{(\text{in})}\in \mathbb{R}^2$. We use the method developed above to calculate an approximate solution $u=u_\theta+u^{(\text{in})}$, where $u_\theta$ solves \eqref{E:Helmh_incom}. We compare this solution $u$ of the original problem with the solution $u_\text{hom}$ of the interface problem with a homogenized material on the right. We choose $\varepsilon=1$, $R=15$, $L=6$ and $H=14$ and a discretization given by $h_1=0.05$ and $h_2\approx 0.0526$. We use $N_\text{Bl}=4$ Bloch waves in each radiation box. The absorption constant is set to $\delta =10^{-4}$. The heterogeneous material is described by the constant $1$ (``air") on $x_1<0$ and a periodic array of discs on $x_1\geq 0$. We choose the same structure as in \cite{PhysRevB.65.201104}, i.e. 
\begin{equation}\label{E:a-Luo} a(x)=\begin{cases} 1, \ &x_1<0\\ a_+(x), \ &x_1\geq 0 \end{cases} \end{equation} with \begin{equation} a_+(x) := \begin{cases} \frac{1}{12}, &\text{dist}(x,\{(\tfrac{1}{2},0),(0,\tfrac{1}{2}),(\tfrac{1}{2},1),(1,\tfrac{1}{2})\})<\tfrac{1}{\sqrt{2}}0.35\\ 1 &\text{otherwise} \end{cases} \label{eq:a+formula} \end{equation} for $x\in Y_\varepsilon$ and $a_+(x)=a_+(x+\varepsilon e_j)$, $j=1,2$ for all $x\in \mathbb{R}^2$. The incoming wave $u^{(\text{in})}(x)=e^{{\rm i} j^{(\text{in})}\cdot x}$ has to satisfy $\omega^2=|j^{(\text{in})}|^2$ since $a\equiv 1$ on $x_1<0$. For $\varepsilon\to 0$ (or, equivalently, for incoming waves of large period) the problem on $x_1\geq 0$ can be approximated by the homogenized equation \cite{Conca-Vanninathan,DLS-2014} $$ \begin{aligned} &-a_*\Delta u_\text{hom}=\omega^2 u_\text{hom}\,,\\ &a_*=\frac{1}{2}\left(\frac{\varepsilon}{2\pi}\right)^2\partial_{j_1}^2\mu^+_0(0) =-\hspace{-1.15em}\int_{Y_\varepsilon}a_+(x)\left[1-\frac{{\rm i}}{2}\left(\partial_{x_1}\partial_{k_2}\psi_0^+(x)+\partial_{x_2}\partial_{k_1}\psi_0^+(x)\right) \right]\, dx\,. \end{aligned} $$ Note that the homogenization coefficient $a_*$ is a scalar due to spatial symmetries of $a_+$. The resulting numerical value for the above discretization is \begin{equation}\label{E:astar} a_* \approx 0.1699\,. \end{equation} In our first example we choose the frequency $\omega$ and the incoming wave $u^{(\text{in})}(x)=e^{{\rm i} j^{(\text{in})}\cdot x}$ with \begin{equation}\label{E:incom-sm-om} \omega =0.2\pi\approx 0.628, \quad j^{(\text{in})}\approx (0.440, 0.449)\,. \end{equation} Since the frequency is quite small (and, hence, the wavelength is large), we can expect that the homogenized setting provides a good approximation. The band structure and the level set at $\omega$ are plotted in Figure \ref{F:comp_hom_bd_str}, where also the points $j$ with $(j,m)\in I^{+,h}$ are marked by black dots (all very close to each other). 
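For reference, the coefficient $a_+$ of \eqref{eq:a+formula} is straightforward to evaluate pointwise; a minimal sketch (our own, with a hypothetical function name), here with $\varepsilon=1$ and using the $1$-periodic extension:

```python
import math

# disc centers at the edge midpoints of the unit cell, and the disc radius
DISC_CENTERS = [(0.5, 0.0), (0.0, 0.5), (0.5, 1.0), (1.0, 0.5)]
RADIUS = 0.35 / math.sqrt(2.0)

def a_plus(x1, x2):
    """Piecewise constant coefficient a_+: value 1/12 inside the discs
    centered at the edge midpoints of the unit cell, 1 otherwise,
    extended 1-periodically in both directions."""
    x1, x2 = x1 % 1.0, x2 % 1.0  # reduce to the unit periodicity cell
    d = min(math.hypot(x1 - c1, x2 - c2) for (c1, c2) in DISC_CENTERS)
    return 1.0 / 12.0 if d < RADIUS else 1.0
```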
At the selected frequency $\omega=0.2 \pi$ the band structure is only a small perturbation of a cone, which means that the band structure is similar to that of a homogeneous medium. Indeed, homogenization theory provides a good qualitative prediction, as shown in Figure \ref{F:comp_hom_om_sm}. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{FIGURES/band_str_om_p63-eps-converted-to.pdf} \includegraphics[scale=0.6]{FIGURES/band_str_om_1p85-eps-converted-to.pdf} \caption{\label{F:comp_hom_bd_str} \small Band structure for $a_+$ given in \eqref {eq:a+formula}. The three surfaces are the graphs of $\sqrt{\mu_m(j)}$ for $m=0,1,2$ (identical in (a) and (b)). The red line shows points on the graph that have height $\omega$. The black lines show points on the graph that satisfy $j_2 = j_2^{(\text{in})}$. The black dots are the $(j,\sqrt{\mu})$-coordinates of the ``transmission'' Bloch waves $U^{+,h}_{\lambda_i}$, $i=1,\dots,N_{\text{Bl}}$ with $\lambda_i\in I^{+,h}$ selected by the algorithm. (a) Situation for parameters $\omega$ and $j^{(\text{in})}$ of \eqref {E:incom-sm-om}. (b) Situation for the parameters of \eqref {E:incom-negref}. } \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{FIGURES/compare_homogeniz_a_om_p63-eps-converted-to.pdf} \includegraphics[scale=0.6]{FIGURES/compare_homogeniz_astar_om_p63-eps-converted-to.pdf} \caption{\label{F:comp_hom_om_sm} \small The color-coding shows $\text{Re}(u)$ on $\Omega_{R+L}$, where $u$ is the solution to an incoming wave given by \eqref{E:incom-sm-om}, the setting is that of Section \ref{S:comp_hom}. (a) A heterogeneous medium on the right as in \eqref{E:a-Luo}; (b) A homogeneous medium on the right with the homogenized coefficient $a_*\approx 0.1699$.} \end{center} \end{figure} In our second example we choose a frequency that is relatively large: \begin{equation}\label{E:incom-negref} \omega =1.85, \ j^{(\text{in})}\approx (1.269, 1.346). 
\end{equation} The band structure at $\omega$ is far from a conical shape. Indeed, the prediction of the homogenized model is not in agreement with the solution for the heterogeneous material, see Figure \ref{F:comp_hom_om_lg}. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{FIGURES/compare_homogeniz_a_om_1p85-eps-converted-to.pdf} \includegraphics[scale=0.6]{FIGURES/compare_homogeniz_astar_om_1p85-eps-converted-to.pdf} \caption{\label{F:comp_hom_om_lg} \small $\text{Re}(u)$ for the incoming wave given by \eqref{E:incom-negref} and the setting of Section \ref{S:comp_hom}. (a) The interface from \eqref{E:a-Luo}; (b) On the right half, $a_+$ is replaced by the homogenized coefficient $a_*\approx 0.1699$. One clearly sees negative refraction in (a), while homogenization predicts a positive refraction in (b).} \end{center} \end{figure} The parameters in \eqref{E:incom-negref} are chosen to produce negative refraction. Negative refraction can be deduced from the negative second component of the group velocity of the ``transmission" Bloch waves $U^+_\lambda$, $\lambda \in I^+$ with frequency $\omega$. In this situation, the incoming wave propagates upwards, while the transmitted wave propagates downwards. The group velocities (multiplied by 2 for better visibility) of the ``transmission'' Bloch waves $U^{+,h}_{\lambda_i}$, $i=1,\dots,N_{\text{Bl}}=4$ with $\lambda_i\in I^{+,h}$ selected by the algorithm are plotted in Figure \ref{F:k-vg_sel} (b). For completeness we show in Figure \ref{F:k-vg_sel} (a) the group velocities of the ``reflected Bloch waves'' $U^{-,h}_{\lambda_i}$. In both (a) and (b) all four group velocity arrows lie very close to each other such that only one arrow is visible. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{FIGURES/k_vg_refl_om_1p85-eps-converted-to.pdf} \includegraphics[scale=0.6]{FIGURES/k_vg_transm_om_1p85-eps-converted-to.pdf} \caption{\label{F:k-vg_sel} \small The Brillouin zone $\mathbb{B}$. 
The closed red curves mark the level set of the band structure at the level $\omega^2$ given in \eqref{E:incom-negref} and with the setting of Section \ref{S:comp_hom}. Hence, the Bloch waves at the red points solve the Helmholtz equation with $\omega^2$. The horizontal lines mark points that correspond to vertically periodic waves, $j_2\in Q'_K$. The arrows show the group velocities for those waves that are relevant in the numerical result. The red arrow pointing north-east represents the incoming wave, the dashed horizontal line marks its $j_2$ component. This component is preserved across the interface. (a) The situation in the left medium with the Bloch waves $U^{-,h}_{\lambda_i}$, $\lambda_i\in I^{-,h}$. (b) The situation in the right medium with the Bloch waves $U^{+,h}_{\lambda_i}, \lambda_i\in I^{+,h}$. In (b) the group velocities are multiplied by $2$ for better visibility. } \end{center} \end{figure} \paragraph{Refraction at interfaces between homogeneous media: Snell's law and Fresnel formulas.} For a quantitative test of the numerical method we compute the analytical solution for an interface separating two homogeneous media. Here Snell's law and the Fresnel formulas are available and provide a reference solution. We choose the same setting as in Figure \ref{F:comp_hom_om_lg} (b). For the interface with $a(x)=1$ for $x_1<0$ and $a(x)=a_*>0$ for $x_1\geq 0$ Snell's law reads $\sqrt{a_*} =\tfrac{\sin \theta_+}{\sin \theta_-}$, where $\tfrac{1}{\sqrt{a_*}}$ is the refractive index of the material on $x_1\geq 0$ and $\theta_+,\theta_-$ are the angles between $j^{(\text{out})},j^{(\text{in})}$ and the horizontal axis, respectively. Here $j^{(\text{out})}$ is the wavevector of the transmitted wave. For a straight vertical interface we have $j^{(\text{out})}_2=j^{(\text{in})}_2$; we therefore find $\tfrac{\sin \theta_+}{\sin \theta_-}=\tfrac{|j^{(\text{in})}|}{|j^{(\text{out})}|}$. 
In summary, Snell's law for this setting is \begin{equation}\label{E:Snell} \sqrt{a_*}=\frac{|j^{(\text{in})}|}{|j^{(\text{out})}|}\,. \end{equation} Fresnel's formulas can be derived from the continuity of $u$ and $\partial_{x_1}u$ across the interface. Writing $$ u(x)=\begin{cases} e^{{\rm i} j^{(\text{in})}\cdot x} + R e^{{\rm i} (-j_1^{(\text{in})}x_1+j_2^{(\text{in})}x_2)}, \quad & x_1<0,\\ Te^{{\rm i} j^{(\text{out})}\cdot x}, \quad & x_1\geq 0\,,\\ \end{cases} $$ we obtain \begin{equation}\label{E:Fresnel} R=\frac{j_1^{(\text{in})}-a_*j_1^{(\text{out})}}{j_1^{(\text{in})}+a_*j_1^{(\text{out})}}\,, \quad\text{and}\quad T = 1+R\,. \end{equation} Given $j^{(\text{in})}$ and $a_*$, \eqref{E:Snell} and the equation $j^{(\text{out})}_2=j^{(\text{in})}_2$ determine $j^{(\text{out})}$; hence $R$ and $T$ can be evaluated from \eqref{E:Fresnel}. We compare these values with the numerical ones. In the numerical results, we interpret the wavevector of the dominant Bloch wave in $I^{+,h}$ as the vector $j^{(\text{out})}$. We denote the corresponding index in $I^{+,h}$ by $\lambda_\text{out}$. Similarly, we denote by $\lambda_\text{refl}$ the index of the dominant Bloch wave in $I^{-,h}$. The coefficients $R$ and $T$ are approximated by the coefficient of the basis functions $U^{-,h}_{\lambda_\text{refl}}$ and $U^{+,h}_{\lambda_\text{out}}$, respectively (after renormalizing $U^{-,h}_{\lambda_\text{refl}}$ and $U^{+,h}_{\lambda_\text{out}}$ such that $\|U^{-,h}_{\lambda_\text{refl}}\|_{L^2([0,1]^2)}=\|U^{+,h}_{\lambda_\text{out}}\|_{L^2([0,1]^2)}=1$). We denote these coefficients $\alpha_\text{refl}$ and $\alpha_\text{out}$, respectively. We use the incoming field as in Figure \ref{F:comp_hom_om_lg} (b), i.e. that given in \eqref{E:incom-negref}. Discretizing with $h_1=0.05$ and $h_2\approx 0.0526$, we get $a_*\approx 0.1699$ ($\sqrt{a_*}\approx 0.4122$) and $\frac{|j^{(\text{in})}|}{|j^{(\text{out})}|} \approx 0.414383$. 
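The reference values follow directly from \eqref{E:Snell}, the relation $j^{(\text{out})}_2=j^{(\text{in})}_2$, and \eqref{E:Fresnel}; a minimal sketch of their evaluation (our own illustration):

```python
import math

def fresnel_reference(j_in, a_star):
    """Compute the transmitted wavevector j_out from Snell's law
    |j_out| = |j_in| / sqrt(a_star) together with j_out_2 = j_in_2,
    and the Fresnel coefficients R and T = 1 + R."""
    j_out_abs = math.hypot(*j_in) / math.sqrt(a_star)
    j_out_2 = j_in[1]  # preserved across a straight vertical interface
    j_out_1 = math.sqrt(j_out_abs**2 - j_out_2**2)
    R = (j_in[0] - a_star * j_out_1) / (j_in[0] + a_star * j_out_1)
    return (j_out_1, j_out_2), R, 1.0 + R
```

With $j^{(\text{in})}\approx(1.269, 1.346)$ and $a_*\approx 0.1699$ this yields $R\approx 0.271$ and $T\approx 1.271$.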
This approximation improves when the FEM-discretization is refined. In Figure \ref{F:RT_conv} we plot the errors $|R-|\alpha_\text{refl}||$ and $|T-|\alpha_\text{out}||$ for the absorption parameter values $\delta=10^{-p}, p=2,3,4,5,6$. For even smaller $\delta$ the errors do not decrease, due to the dominance of the discretization error. By refining the discretization, the error for $\delta<10^{-6}$ can be made smaller. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.5]{FIGURES/R_T_conv_in_delta-eps-converted-to.pdf} \caption{\label{F:RT_conv} \small Convergence of the error in the reflection and transmission coefficients $R$ and $T$ with respect to $\delta$.} \end{center} \end{figure} \subsection{Numerical results II: Scattering and negative refraction with a localized source} In these tests we consider the same interface as in Section \ref{S:comp_hom}. Instead of an incoming field we investigate a spatially localized source; we choose \begin{equation}\label{E:source} f(x)=2e^{-3|x-x_*|^2}, \ x_*=(-3.5,0). \end{equation} This source generates waves in all directions, hence a relatively large number $N_\text{Bl}$ of Bloch basis functions is needed. We consider a vertically wide domain in order for the periodic boundary conditions to have a smaller effect near the source location. The domain is given by $H=100$, $\varepsilon=1$, $\varepsilon R=45$, $\varepsilon L =15$, and we set $N_\text{Bl} = 180$. We choose again $\omega=1.85$. Figure \ref{F:source} shows the solution modulus for $h_1 = h_2 =0.0625$. In the crystal, close to the interface, the field is clearly focused in a strip near the central line. We attribute this effect to the negative refraction at the selected frequency. In Figure \ref{F:source-ks} we plot once more the group velocities of the Bloch waves $U^{-,h}_{\lambda_i}$, $\lambda_i \in I^{-,h}$ in (a) and of $U^{+,h}_{\lambda_i}, \lambda_i \in I^{+,h}$ in (b). 
The size of the dots at the foot of each arrow is proportional to the relative modulus of the coefficient of the Bloch wave $U^{-,h}_{\lambda_i}$ in (a) and $U^{+,h}_{\lambda_i}$ in (b). The strength of each Bloch wave in the solution $u$ is thus visualized. Note that the large vertical size $H$ leads to a large set $Q'_K$, which explains the large number of gray horizontal lines in Figure \ref{F:source-ks}. \begin{figure}[ht!] \begin{center} \def\includegraphics[scale=0.75]{FIGURES/source_at_m3p5_one_interf_w_100_Nbl_180-eps-converted-to.pdf}{\includegraphics[scale=0.75]{FIGURES/source_at_m3p5_one_interf_w_100_Nbl_180-eps-converted-to.pdf}} \bottominset{\includegraphics[scale=0.4]{FIGURES/source_at_m3p5_one_interf_w_100_Nbl_180_zoom_noaxis-eps-converted-to.pdf}}{\includegraphics[scale=0.75]{FIGURES/source_at_m3p5_one_interf_w_100_Nbl_180-eps-converted-to.pdf}}{40pt}{-60pt} \caption{\label{F:source} \small $|u|$ for the scattering problem with the Gaussian source \eqref{E:source}. The interface is as in \eqref{E:a-Luo}, $\omega=1.85$ and $N_\text{Bl}=180$, $\varepsilon R=45$, $\varepsilon L =15$, $\varepsilon=1$, $H=100$, $h_1=h_2=0.0625$. The inset zooms in on the vicinity of the center (near the source location).} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.6]{FIGURES/k_vg_refl_source_om_1p85_width100_Nbl_180-eps-converted-to.pdf} \includegraphics[scale=0.6]{FIGURES/k_vg_transm_source_om_1p85_width100_Nbl_180-eps-converted-to.pdf} \caption{\label{F:source-ks} \small The Brillouin zone for the scattering problem of Figure \ref{F:source}, the symbols are as in Figure \ref{F:k-vg_sel}, (a) showing the left medium and (b) the right medium. In contrast to the case of a single incoming wave, the numerical solution now uses many different Bloch waves, indicated by the arrows. Since all waves in $I^{\pm,h}$ are outgoing, all arrows point to the left in the left medium and to the right in the right medium. 
The size of the dots at the foot of each arrow is proportional to the modulus of the coefficient of each Bloch wave.} \end{center} \end{figure} In order to confirm the lensing effect of a crystal for frequencies with negative refraction, we use the same source as above but truncate the crystal after 10 horizontal periods, see Figure \ref{F:source_fin_crys}. With the frequency $\omega =0.2\pi\approx 0.628$, where the refraction is positive, no focusing occurs; by contrast, at $\omega=1.85$, a focused image is seen on the right side of the crystal. This confirms the findings of \cite{PhysRevB.65.201104}. We use here $\varepsilon=1$, $\varepsilon R=27$, $\varepsilon L =9$, $H=60$, $h_1\approx 0.0769$, $h_2\approx 0.0714$, and $N_\text{Bl}=80.$ \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.51]{FIGURES/source_at_m3p5_fin_cryst_thick_10_w70_Nbl_80_om_p63-eps-converted-to.pdf}\hspace{-.6cm} \includegraphics[scale=0.51]{FIGURES/source_at_m3p5_fin_cryst_thick_10_w70_Nbl_80_om_1p85-eps-converted-to.pdf} \caption{\label{F:source_fin_crys} \small $|u|$ for the scattering problem with the Gaussian source \eqref{E:source} and a crystal of thickness $10 \varepsilon$, $\varepsilon =1$. The material in $x_1\in [0,10\varepsilon]$ is given by $a_+$ in \eqref {eq:a+formula} and by $1$ in $x_1\notin [0,10\varepsilon]$. We use $N_\text{Bl}=80,$ $H=60$, $\varepsilon R=27$, $\varepsilon L =9$, and $h_1\approx 0.0769$, $h_2\approx 0.0714$. Left: Frequency $\omega=0.2\pi\approx 0.628$ leading to a positive refraction. Right: Frequency $\omega=1.85$ with a negative refraction and a resulting lensing effect.} \end{center} \end{figure} \subsection*{Acknowledgements} Support of the first author by the DFG grant DO1467/3-1 and of the second author by the DFG grant Schw 639/6-1 is gratefully acknowledged. \bibliographystyle{abbrv}
\section{Introduction} In this paper, we study the distribution of the $\ell$-part of the relative class groups $\mathrm{Cl}(L/K)$ of a quadratic extension of global fields $L/K$ where $K$ contains the $\ell^n$th roots of unity for an odd prime $\ell$. Malle observed in \cite{Malle} that the usual Cohen-Lenstra heuristics for the $\ell$-part of the class group of quadratic extensions of $\mathbb Q$ do not match numerical data in this setting (already when $\ell=3, n=1, K = \mathbb Q(\mu_3)$). We give a modified prediction which is compatible with both numerical data and the function field model. In fact, we have found it fruitful to study the class group together with two extra invariants, $\omega_{L/K} \in ( \wedge^2 \mathrm{Cl}(L/K) ) [\ell^n] $ and $\psi_{L/K} : \mathrm{Cl}(L/K)^\vee [\ell^n] \to \mathrm{Cl}(L/K) [\ell^n]$, which we define using class field theory and Galois cohomology in Definitions \ref{psi-nt-defi} and \ref{omega-nt-defi}. They reveal, in two different ways, a bilinear structure on the class group closely related to the Weil pairing on abelian varieties and the Cassels-Tate pairing on Tate-Shafarevich groups. We define a set $\mathcal C_{\ell,n}$ of triples of a {finite abelian $\ell$-}group $G$, an element of $(\wedge^2 G)[\ell^n]$, and a homomorphism $G^\vee [\ell^n] \to G[\ell^n]$; we construct a measure $Q^t\mu$ on $\mathcal C_{\ell,n}$ which has several natural characterizations - as the unique measure with certain moments, as the limit of two different random matrix models as the matrix size goes to infinity, and by an explicit formula. We conjecture that, for a fixed field $K$ containing the $\ell^n$th roots of unity, distribution of the $\ell$-part of the relative class group $ \mathrm{Cl}(L/K)$ together with these two invariants converges to $Q^t\mu$ as the discriminant of the extension $L/K$ goes to $\infty$, where $t$ is half the degree of $K$ over $\mathbb Q$. We prove a weaker form of this conjecture in the function field case. 
(The strength of this statement is exactly analogous to the strength of the form of Cohen-Lenstra over function fields proved by Ellenberg, Venkatesh, and Westerland \cite{EVW}, and the work in their paper is the key input in our proof.) In addition, we perform some numerical experiments in the case $K = \mathbb Q(\mu_3)$. In this case the modifications needed to the Cohen-Lenstra predictions for the distribution of the class group were well-understood, so we focus on checking that our new invariants have the expected distribution, which they do. \subsection{A main result} We think the invariants $\psi$ and $\omega$ we define are interesting, and, for reasons discussed in the next subsection, they are helpful in our proofs. But their definitions will have to wait until after the introduction (see Definitions \ref{psi-nt-defi}, \ref{omega-nt-defi}, and \ref{relativepsiomega}). For now, we state a corollary of our main result in the function field setting making no mention of the $\psi$ and $\omega$ invariants. \medskip Fix an odd prime $\ell$. For each finite field $\mathbb F_q$ and positive integer $g$, let $\mathcal{H}_{g, \mathbb F_q} $ denote the set of smooth, projective hyperelliptic curves over $\mathbb{F}_q$ of genus $g$. To a hyperelliptic curve $C$, we may associate the Picard group ${\rm Pic}^0(C)$ and its $\ell$-power part ${\rm Pic}^0(C)_{\ell}$. The next theorem describes the probability that this $\ell$-power part takes a given value, in the limit where $g$ and $q$ both go to infinity, when $q$ is congruent to $1$ mod $\ell$. 
This formula uses the Pochhammer symbol $$(a;q)_k = \prod_{j=0}^{k-1} (1- a q^j) .$$ \begin{thm}\label{intro-simplified} Let $G$ be a finite abelian $\ell$-group. For any pair of sequences $g_i$ and $q_i$ such that $g_i \to \infty, q_i \to \infty$, where the $q_i$ are all odd prime powers congruent to $1$ modulo $\ell^n$ but not modulo $\ell^{n+1}$, the limit \[ \lim_{i \to \infty} \frac{ \left| \left \{ C \in \mathcal{H}_{g_i, \mathbb F_{q_i}} \mid {\rm Pic}^0(C)_{\ell} \cong G \right\} \right| } { | \mathcal{H}_{g_i, \mathbb F_{q_i}} | } \] exists and is equal to $$\frac{\prod_{i=1}^{\infty}(1+\ell^{-i})^{-1}}{|{\rm Aut}(G)|}\cdot |(\wedge^2G)[\ell^n]|\cdot A_n \left( G[\ell^n] \right)$$ where $$ A_n \left( \oplus_{i=1}^{n} (\mathbb{Z}/\ell^i\mathbb{Z})^{m_i} \right) = (\ell^{-1};\ell^{-1})_{m_n}\cdot \prod_{i=1}^{n-1} (\ell^{-1};\ell^{-2})_{\lceil m_i/2\rceil}= \prod_{i=1}^n \prod_{ \substack{ 0 \leq j < m_i \\ 2\mid j\textrm{ or }i=n }} (1 -\ell^{-j-1}). $$ \end{thm} Accounting for $\psi$ and $\omega$, the formula simplifies considerably: the probability of a group $G$ together with invariants $\omega$ and $\psi$ equals the moment associated to $(G, \omega, \psi)$ (which happens to equal $\frac{1}{ |\operatorname{Sym}^2 G[\ell^n]|}$ ) divided by the number of automorphisms of $G$ fixing $\omega$ and $\psi$, times a constant independent of $G$, if $\psi$ is invertible, and $0$ otherwise. This shape of formula, where the measure equals the moment divided by the number of automorphisms times a constant, is completely analogous to what occurs for the classical Cohen-Lenstra heuristics without roots of unity, and thus serves to partially explain the more complicated Theorem \ref{intro-simplified}. We make an analogous conjecture in the number field case: \begin{conj}\label{ntmain} Let $\ell$ be an odd prime and $n$ a natural number. Let $K$ be a number field which contains the $\ell^n$th roots of unity but not the $\ell^{n+1}$th roots of unity. 
Let $t = \frac{1}{2} [ K : \mathbb{Q} ].$ Let $G$ be a finite abelian $\ell$-group and fix $\omega_G \in( \wedge^2 G)[\ell^n]$ and $\psi_G \in {\rm Hom} ( G^\vee [\ell^n], G[\ell^n] )$. Let $S_{K,X}$ denote the set of quadratic extensions $L/K$ for which $\left| \mathrm{Norm}_{K/\mathbb{Q}}( \mathrm{Disc}(L/K) ) \right| \leq X.$ Then \[ \lim_{X \to \infty} \frac{1} { | S_{K,X} | } \left| \left\{ L/K \in S_{K,X} \mid ( \mathrm{Cl}(L/K)_{\ell} , \omega_{L/K}, \psi_{L/K} ) \cong (G,\omega_G, \psi_G) \right\} \right| =Q^t \mu (G,\omega_G, \psi_G).\] \end{conj} The invariants $\psi_{L/K}$ and $\omega_{L/K}$ and the measure $Q^t \mu$ needed to interpret this conjecture will be defined in Sections \ref{s-beg} and \ref{number-field-invariants}. \subsection{Prior work} We now explain some of the prior work on this problem, which will also clarify why we use these new invariants. Cohen and Lenstra \cite{CL} made predictions for the distribution of the $\ell$-part of the class group of a random quadratic number field. Cohen and Martinet \cite{CM} generalized these predictions to the $\ell$-part of the class groups of random quadratic extensions of a fixed number field, or even more generally to $\Gamma$-extensions of a fixed number field for a group $\Gamma$. Malle \cite{Malle} found numerical evidence which suggested that these generalized heuristics fail when the base number field contains the $\ell$th roots of unity. In \cite{Malle10}, he proposed a modified conjecture when the base field contains the $\ell$th roots of unity, but not the $\ell^2$th roots of unity. He left open what the distribution should be for higher powers of $\ell$. One fruitful approach to studying class groups of number fields is first to study the analogous problem over function fields. The rich structure of function fields sometimes suggests hidden structures not directly apparent from the number field perspective. 
Friedman and Washington suggested in \cite{FW} that as $K$ varies through imaginary quadratic function fields over $\mathbb F_q$, $\mathrm{Cl}(K)_\ell $ should behave statistically as $ \mathrm{coker}\left(1 - F \right)$, where $F$ is a random matrix in $\mathrm{GSp}^{(q)}_{2g} (\mathbb Z_\ell),$ the coset of $\mathrm{Sp}_{2g} (\mathbb Z_\ell)$ inside $\mathrm{GSp}_{2g} (\mathbb Z_\ell) $ consisting of symplectic similitudes of similitude factor $q.$ The key motivation for this is that $\mathrm{Cl}(K)_\ell$ is isomorphic to the cokernel of $1-F$ for some element $F$ in $\mathrm{GSp}^{(q)}_{2g}(\mathbb{Z}_\ell)$. More specifically, $F$ is the matrix by which Frobenius acts on the $\ell$-adic Tate module of the Jacobian of the curve underlying $K$. Friedman and Washington guessed that Frobenius should behave like a random element of that group. In fact, Yu proved in \cite{Yu} that Frobenius does behave like a random element of that group in the limit as $q \to \infty$, so Friedman and Washington's suggestion is valid in that regime. The key step in Yu's method is showing that the monodromy of the covering of the moduli space of hyperelliptic curves of genus $g$ defined by the Tate modules of their Jacobians is exactly ${\rm Sp}_{2g} (\mathbb Z_\ell)$, which, through Deligne's equidistribution theorem, shows that as $q \to \infty$, Frobenius is suitably equidistributed in the appropriate coset of this group. However, Friedman and Washington did not calculate the distribution of ${\rm Coker} (1-F)$ for $F$ in $\mathrm{GSp}^{(q)}_{2g}(\mathbb{Z}_\ell)$. Instead, they calculated the distribution of the cokernel of a random matrix in $M_{2g}(\mathbb Z_\ell)$. They conjectured that these two distributions agree in the limit as $g$ goes to $\infty$. Achter showed that this is false when $q \not \equiv 1 \mod \ell$ \cite{Achter}. 
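As a minimal illustration of this random matrix model, consider the genus $1$ case (an illustrative computation of our own; the notation $a_E$ for the Frobenius trace is ours and is not used in the sequel):

```latex
% Genus 1: GSp_2 = GL_2 with similitude character the determinant, so
% GSp^{(q)}_2(Z_ell) is the set of 2x2 matrices over Z_ell of determinant q.
Let $F = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc = q$.
Whenever $\det(1-F) \neq 0$,
\[
  \left| \mathrm{coker}\left(1 - F\right) \right|
  = \left| \det(1 - F) \right|_\ell^{-1}
  = \left| 1 - (a+d) + q \right|_\ell^{-1} .
\]
If $F$ is the matrix of Frobenius on the $\ell$-adic Tate module of an
elliptic curve $E/\mathbb{F}_q$ with trace $a_E = a + d$, this is the order
of the $\ell$-part of $\# E(\mathbb{F}_q) = q + 1 - a_E$, i.e.\ of
$\mathrm{Pic}^0(E)(\mathbb{F}_q)_\ell$.
```

The first equality follows from the Smith normal form of $1-F$ over $\mathbb{Z}_\ell$.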
This raised the question of what the true limit of ${\rm Coker} (1-F)$ is as $g$ goes to $\infty$ (or if this limit even exists), and how it depends on $q$. For the analogy to number fields, one should observe that the roots of unity of $K$ are exactly $\mathbb F_q^\times$, and therefore $K$ contains the $\ell^n$th roots of unity if and only if $\ell^n$ divides $|\mathbb F_q^\times| = q-1$. So the analogue of a number field that contains the $\ell^n$th roots of unity but not the $\ell^{n+1}$th is the case when $q \equiv 1 \mod \ell^n$ but $q \not \equiv 1 \mod \ell^{n+1}.$ If the distribution of $\mathrm{coker}\left(1 - F \right)$ converges, as $g$ goes to $\infty$, to the same value for all $q$ in this congruence class, then we could conjecture that class groups of such number fields have the same distribution. If Friedman and Washington's conjecture were correct, this limit would match the limit as $g$ goes to $\infty$ of the distribution of cokernels of random matrices, which is the classical Cohen-Lenstra distribution. Thus, the fact that this conjecture fails is compatible with the numerical evidence that the Cohen-Lenstra conjecture fails in the presence of $\ell$-power roots of unity. Garton was able to make progress towards calculating the limit. In the case $\ell \mid\mid q-1,$ Garton gave in \cite{Garton} a formula for the pointwise $g \to \infty$ limit of these random matrix measures. In fact, the formula he derives is identical to Malle's conjectured limiting distribution for the class group of quadratic extensions of number fields containing $\ell$th roots of unity, justifying Malle's function-field motivation for his conjecture \cite{Malle10}. He also found the $g\to\infty$ limit in the case when $\ell^2 \mid \mid q-1$. When $\ell^n \mid\mid q-1$ for $n>2$, he was not able to show that the limit existed, but was able to show that any limit of a subsequence has the expected moments. 
We prove the convergence when $\ell^n \mid \mid q-1$ for any $n$, and we give an explicit formula for the distribution. Our approach is indirect: we calculate the distribution of the group $\mathrm{coker}\left(1 - F \right)$ together with the two extra invariants $\psi,\omega$ and then sum over all possibilities for $\psi$ and $\omega.$ The definitions of the $\psi$ and $\omega$ invariants in this setting are purely group-theoretic, and may seem better motivated than the arithmetic definition. Because the $\psi$ and $\omega$ invariants are helpful for this proof, and dramatically simplify the formula for the measure, we believe they are of importance beyond it, and so we study them together with the class group throughout this work. Ellenberg, Venkatesh, and Westerland proved in \cite{EVW} a form of Cohen-Lenstra for the $\ell$-parts of class groups of quadratic function fields over $\mathbb F_q$ where $q \not \equiv 1 \mod \ell$. The desired statement is that the distribution of the class group converges to the expected distribution for fixed $q$, as the degree of the discriminants grows to $\infty$. Ellenberg, Venkatesh, and Westerland obtain this convergence when $q$ goes to $\infty$ arbitrarily slowly with the degree of the discriminant. This is much more difficult than the $q \to\infty$ case and requires all the tools from the $q \to \infty$ case, in particular the monodromy computations of \cite{Yu}, plus sophisticated \'{e}tale cohomology and topological arguments. However, they did not calculate the distribution where $q \equiv 1 \mod \ell$, instead only calculating the moments. In \cite{LT}, two of us made progress on the $q \equiv 1 \mod \ell$ case by defining the $\omega$ invariant in the function field context. In this paper, using both the $\omega$ and $\psi$ invariants and other new tricks, we prove a result exactly analogous to Ellenberg, Venkatesh, and Westerland's. 
This relies heavily on the upper bounds for cohomology groups of certain spaces proven in \cite{EVW}. Because class field theory describes the class group as the Galois group of the maximal unramified abelian extension, some prior work has tried to generalize the Cohen-Lenstra heuristics to Galois groups of non-abelian unramified extensions. Venkatesh and Ellenberg defined \cite{VE}, and Wood and Wood generalized \cite{WW}, a ``lifting invariant'' associated to such unramified extensions, and conjectured values for the corresponding moments. We expect that the lifting invariant specializes to our $\omega$ invariant in the abelian case. However, neither work gave a distribution for the group together with this invariant, while we do give a precise distribution. \subsection{Plan of the paper} In Section \ref{s-beg}, we define the set $\mathcal C_{\ell,n}$ of groups with extra invariants $\omega, \psi$ that we consider for the rest of the paper. In fact, we define a category of such groups. The measures we study in this paper will be on the set of isomorphism classes of this category. We state Theorem \ref{measure-exists-unique}, which describes a measure $\mu$ on this set which is uniquely characterized by its moments. The proof of this theorem is contained in Section \ref{linearrandommodels}. In Section \ref{s-ffdef}, we associate an element of $\mathcal C_{\ell,n}$ to a curve over a finite field, or more abstractly, to a symplectic similitude $F$ of a free $\mathbb Z_\ell$-module of rank $2g$ with symplectic structure. In these cases the group $G$ is the $\ell$-part of the Picard group and the cokernel of $1-F$, respectively, and $\omega$ and $\psi$ are defined in a relatively straightforward manner. We define measures $\mu_g^q$ by averaging these elements over all hyperelliptic curves of genus $g$. 
We state a conjecture that these measures converge to $\mu$ as $g$ goes to $\infty$, as well as the slightly weaker convergence result Theorem \ref{ffmain}, which we prove in Section \ref{functionfieldproof}. In Section \ref{number-field-invariants}, we begin to study the number field case. We define extra invariants $\omega$ and $\psi$ on the class group of a number field, or on the relative class group of a number field extension. We conjecture that, averaging over quadratic extensions $L/K$ of a fixed number field $K$ containing the $\ell^n$th roots of unity but not the $\ell^{n+1}$th, the distribution of the relative class group, with these extra invariants, converges to the measure $Q^t \mu$, obtained from $\mu$ by iteratively applying a quotienting operation $Q$ (Conjecture \ref{ntmain}). The next three sections are devoted to fleshing out this conjecture. In Section \ref{s-alternate-psi}, we give alternate, equivalent definitions of the invariant $\psi$; one definition uses torsors, and the other uses Hilbert symbols. In Section \ref{s-internal-consistency}, we check that the triples $(G, \omega, \psi)$ obtained from number fields always lie in the support of the measure $Q^t \mu$. In Section \ref{s-NT-FF-compatible}, we check that the definitions of $\omega$ and $\psi$ in the number field case, when transferred to the function field case, match our definitions in Section \ref{s-ffdef} for curves. In Section \ref{linearrandommodels}, we give the main analytic arguments of the paper. The most important result is that the measure $\mu$ matches the measure arising from the random matrix model studied by Friedman and Washington. To prove this, we first introduce a different random matrix model, which is linear in the sense that it involves cokernels of random elements of an affine subspace of $M_{2g}(\mathbb Z_\ell)$. We then show that $\mu$ matches the measure arising from this random matrix model. 
To complete the proof, we show that our two random matrix models have the same moments, and that the measure $\mu$ is uniquely determined by its moments, so that $\mu$ also matches the measure from the Friedman-Washington matrix model. Without the extra invariants, the fact that the moments determine the measure follows from work of Wood \cite[Theorem 3.1]{Wood}, but we use a different strategy to account for the invariants. Finally, in this section, we use our control of the measure $\mu$ to describe the measure $Q^t \mu$ as well. In Section \ref{functionfieldproof}, we prove the function field equidistribution result Theorem \ref{ffmain}. To do this, we define spaces whose $\mathbb F_q$-point counts are related to the moments of the distribution of $ {\rm Pic}^0(C)_\ell, \omega_C,$ and $\psi_C$. Counting the connected components of these spaces is a purely group-theoretic exercise once the monodromy result of \cite{Yu} is used. To count the points of these spaces in the $g \to \infty, q \to \infty$ limit, it suffices to bound their Betti numbers. We show these spaces are covered by certain spaces defined in \cite{EVW}, allowing us to bound their Betti numbers by those of the spaces in \cite{EVW}, which were already bounded there. In Section \ref{data}, we give numerical evidence for Conjecture \ref{ntmain}. \subsection{Linearization} We want to highlight the key idea behind the construction of the linear random model, as we think it might clarify broader work in arithmetic statistics. We imagine that the distribution of $\mathrm{coker}\left(1 - F \right)$, for $F \in \mathrm{GSp}^{(q)}_{2g} (\mathbb Z_\ell),$ is closely related to the distribution of $\mathrm{coker} (\log F)$, where $\log F$ lies in the logarithm of $ \mathrm{GSp}^{(q)}_{2g} (\mathbb Z_\ell)$, i.e. 
in a coset of the Lie algebra $\mathfrak{sp}_{2g} (\mathbb Z_\ell)$ inside the larger Lie algebra $\mathfrak{gsp}_{2g} (\mathbb Z_\ell)$ (with the coset taken as additive groups). If $F$ were congruent to $1$ modulo $\ell$, this heuristic could be made rigorous using the $\ell$-adic convergence of the logarithm power series: for such $F$, the series $\log F = \sum_{k \geq 1} (-1)^{k+1} (F-1)^k / k$ converges and equals $(F-1)u$ for a unit $u$ commuting with $F-1$ (here we use that $\ell$ is odd), so $\mathrm{coker}(1-F) \cong \mathrm{coker}(\log F)$. Because $F$ is almost certainly not congruent to $1$ modulo $\ell$, there seems to be no hope of relating the non-linear random matrix model to its linearization directly. Nonetheless, we show by an indirect argument that these distributions are the same. Thus, we hope that further comparison results could be proven between random matrix models involving random elements of $\ell$-adic groups, which are often closely related to function field distributions, and random matrix models involving random elements of their Lie algebras, which can be much easier to work with. The first example of this phenomenon was the comparison result discovered by Friedman and Washington \cite{FW} between the cokernels of $1-F$ for random $F$ in $\mathrm{GL}_n(\mathbb Z_\ell)$ and the cokernel of random $n\times n$ matrices over $\mathbb Z_\ell$. An example where this could be applied is the work of Poonen-Rains \cite{PR} and the subsequent work of Bhargava-Kane-Lenstra-Poonen-Rains \cite{BKLPR} modelling Selmer groups of elliptic curves. The second of these works gives a model for Selmer groups (and therefore also the ranks and Tate-Shafarevich groups) of elliptic curves as cokernels of random alternating matrices. In the function field setting, Selmer groups are known to be cokernels of random orthogonal matrices (see \cite[Theorem 4.4]{Landesmann} and \cite{FLR}). The fact that the alternating matrices are the Lie algebra of the orthogonal groups might explain the effectiveness of their heuristic from the function field perspective. \subsection{Acknowledgments} We would like to thank Melanie Wood for helpful conversations regarding this paper. 
While working on this paper, W.S. served as a Clay Research Fellow. \tableofcontents \section{Bilinearly Enhanced Groups}\label{s-beg} \subsection{Elements of $\wedge^2$ and bilinear pairings} \label{wedge2bilinear} We fix an odd prime $\ell$ and a positive integer $n$. For a finite, abelian group $G$ of odd order, we define $G^\vee = {\rm Hom}(G, \mathbb Q/\mathbb Z)$ and $\wedge^2G$ to be the subgroup of $G\otimes G$ spanned by $x\wedge y = x\otimes y-y\otimes x$ for all $x,y \in G.$ For an element $\omega\in\wedge^2G$ and an integer $r\geq 0$, we define a bilinear form $\omega_r$ on $G^\vee [\ell^r]$, valued in $\frac{1}{\ell^r} \mathbb{Z} / \mathbb{Z}$, as follows: By identifying $G^\vee[\ell^r]\cong {\rm Hom}(G,\mathbb{Z}/\ell^r\mathbb{Z})$ we obtain a natural map \begin{align*} \omega_r: G^\vee[\ell^r]\otimes G^\vee[\ell^r] &\rightarrow {\rm Hom}(G\otimes G, \mathbb{Z}/\ell^r\mathbb{Z}\otimes \mathbb{Z}/\ell^r\mathbb{Z}) \\ &\cong {\rm Hom}(G\otimes G,\mathbb{Z}/\ell^r\mathbb{Z}) \\ &\cong(G\otimes G)^\vee[\ell^r] \\ &\xrightarrow{\text{evaluate at } \omega} \frac{1}{\ell^r} \mathbb{Z} / \mathbb{Z}. \end{align*} \subsection{The Category of $\ell^n$-BEGs} \begin{definition}\label{beg} Consider a triple $(G,\omega,\psi)$: \begin{itemize} \item $G$ is a finite abelian $\ell$-group \item $\omega\in(\wedge^2G)[\ell^n]$ \item $\psi:G^\vee[\ell^n]\rightarrow G[\ell^n]$ is a homomorphism \end{itemize} For $\gamma \in G^\vee[\ell^n]$ and $\delta \in G^\vee$ define $$\langle \gamma,\delta \rangle := \delta( \psi(\gamma) ).$$ We say the triple $(G,\omega,\psi)$ is an $\ell^n$-Bilinearly Enhanced Group ($\ell^n$-BEG) if $\psi,\omega$ satisfy the following compatibility condition: for all $r \geq 0$ and all $\alpha,\beta\in G^\vee[\ell^{n+r}],$ \begin{equation}\label{psiomegacomp} \langle \ell^r\alpha,\beta\rangle = \langle \ell^r\beta,\alpha \rangle + 2 \cdot \omega_{G,n+r}(\alpha,\beta). 
\end{equation} \end{definition} \begin{definition}\label{begcategory} We denote by $\mathcal{C}_{\ell,n}$ the following category: \begin{itemize} \item The objects of $\mathcal{C}_{\ell,n}$ consist of all $\ell^n$-BEGs. \item A morphism between two objects $(G,\omega_G,\psi_G)$ and $(H,\omega_H,\psi_H)$ consists of a group homomorphism $f:G\rightarrow H$ such that $f_*\omega_G=\omega_H$ and $f_*\psi_G=\psi_H$. \end{itemize} \end{definition} Note that $\mathcal{C}_{\ell,n}$ is not an abelian category. Moreover, a morphism in $\mathcal{C}_{\ell,n}$ is an epimorphism if and only if the map on abelian groups is surjective, whereas there are more monomorphisms than one might initially expect. \subsection{Random Measures on $\mathcal{C}_{\ell,n}$} We shall be interested in studying measures on $\mathcal{C}_{\ell,n}$. Given that it is a category, it is natural to study measures by considering their \emph{moments}, i.e. by integrating the test functions $\#{\rm Surj}(*,G^{\bullet})$ against them, for various $\ell^n$-BEGs $G^\bullet$. This is very convenient since, in the number field and function field settings we are trying to model, the moments are what we have direct access to. In Section \ref{linearrandommodels} we define a `universal' measure $\mu$ on $\mathcal{C}_{\ell,n}$ with various natural properties. We justify this measure by showing that it arises as the limiting measure in two random matrix models. Perhaps most importantly, we prove that it is determined by its moments. 
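To make Definition \ref{beg} concrete, consider the smallest nontrivial case (an illustrative example of our own, not used later):

```latex
Take $n = 1$ and $G = \mathbb{Z}/\ell$. Then $\wedge^2 G = 0$, so $\omega = 0$
and all the pairings $\omega_{G,r}$ vanish. For $r = 0$, the compatibility
condition \eqref{psiomegacomp} becomes the symmetry requirement
\[
  \beta(\psi(\alpha)) = \alpha(\psi(\beta))
  \qquad \text{for all } \alpha, \beta \in G^\vee[\ell],
\]
which holds automatically: under any identification $G^\vee[\ell] \cong
G[\ell]$, the map $\psi$ is multiplication by a scalar, and the resulting
pairing is symmetric. For $r \geq 1$, both sides of \eqref{psiomegacomp}
vanish because $\ell^r \alpha = 0$. Hence every homomorphism
$\psi : G^\vee[\ell] \to G[\ell]$ makes $(\mathbb{Z}/\ell, 0, \psi)$ an
object of $\mathcal{C}_{\ell,1}$.
```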
\begin{thm}\label{measure-exists-unique} There is a unique probability measure $\mu$ on $\mathcal{C}_{\ell,n}$ satisfying $$\mathbb{E}_{\mu}\#{\rm Surj}(*,(G,\omega_G,\psi_G)) = \frac{1}{| {\rm Sym}^2G[\ell^n] |}.$$ Moreover, the support of $\mu$ consists precisely of those $(G,\omega_G,\psi_G)$ such that $\psi_G$ is an isomorphism, in which case $$\mu(G,\omega_G,\psi_G) = \frac{c_\ell}{| {\rm Aut}(G,\omega_G,\psi_G) | \cdot | {\rm Sym}^2G[\ell^n] |}$$ where $c_\ell=\prod_{i=0}^{\infty} (1-\ell^{-(2i+1)}).$ \end{thm} Moreover, we prove in Lemma \ref{momimpmeas} that $\mu$ is determined by its moments in a strong sense, meaning that if another measure $\nu$ has moments which are close to the moments of $\mu$, then $\nu$ itself is close to $\mu$. \subsection{Generalized Random Measures on $\mathcal{C}_{\ell,n}$} It will be necessary for us to define a slight generalization of the universal measure $\mu$. To motivate this, consider the classical Cohen-Lenstra setting. In the case of imaginary quadratic fields, the $\ell$-part of the class group is well modelled by the cokernel of a large square matrix. However, if one is interested in the case of real quadratic fields, then this amounts to quotienting out a random abelian group by `one additional random element' (which might be 0), as was done in \cite{CL} and \cite{CM}. We thus define an operator $Q$ to formalize the idea of quotienting out by a random element: \begin{definition}\label{Qtmu} Let $\nu$ be a measure on $\mathcal C_{\ell,n} $. Define the measure $Q\nu$ as follows: $$Q\nu(G,\omega_G,\psi_G):= \int_{\mathcal C_{\ell,n }} \frac{\#\{f:\mathbb{Z}_\ell\rightarrow H\mid (H,\omega_H,\psi_H)/{\rm Im} f\sim (G,\omega_G,\psi_G)\}}{|H|} d\nu(H,\omega_H,\psi_H).$$ \end{definition} This gives us a one-parameter family of generalizations $Q^t\mu$ of $\mu$. 
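A short computation (a sketch of our own, assuming that the BEG structure on a quotient $(H,\omega_H,\psi_H)/{\rm Im} f$ is the pushforward structure) shows how the operator $Q$ acts on moments:

```latex
Fix an $\ell^n$-BEG $G^\bullet = (G, \omega_G, \psi_G)$, and let $H^\bullet =
(H, \omega_H, \psi_H)$ be any $\ell^n$-BEG. A surjection $\pi : H^\bullet \to
G^\bullet$ factors through $H^\bullet/{\rm Im} f$ for exactly those
$f : \mathbb{Z}_\ell \to H$ with $f(1) \in \ker \pi$, i.e.\ for $|H|/|G|$ of
the $|H|$ homomorphisms $f$. Therefore
\[
  \mathbb{E}_{Q\nu}\,\#{\rm Surj}(*, G^\bullet)
  = \int \frac{1}{|H|} \sum_{f} \#{\rm Surj}\bigl(H^\bullet/{\rm Im} f,
      G^\bullet\bigr) \, d\nu
  = \frac{1}{|G|}\, \mathbb{E}_{\nu}\,\#{\rm Surj}(*, G^\bullet),
\]
and iterating $t$ times,
\[
  \mathbb{E}_{Q^t\mu}\,\#{\rm Surj}(*, G^\bullet)
  = \frac{1}{|G|^t \cdot |{\rm Sym}^2 G[\ell^n]|} .
\]
```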
We prove that these measures are also determined by their moments in Lemma \ref{Qmomimpmeas}, and compute them as well as their supports and moments explicitly. \section{Definitions, conjectures, and statements of results in the function field setting }\label{s-ffdef} Let $C/\mathbb{F}_q$ be a curve. Our goal in this section is to introduce additional invariants with which to adorn the group ${\rm Pic}^0(C)(\mathbb{F}_q)_\ell$; these additional invariants are non-trivial only when the function field $\mathbb{F}_q(C)$ contains the $\ell^n$th roots of unity, or equivalently when $\ell^n \mid q-1.$ To this effect, we pass to the Jacobian of the curve, which we think of simply as a principally polarized abelian variety. The group ${\rm Pic}^0(C)(\mathbb{F}_q)_\ell$ can be recovered purely from the data of the Tate module of the Jacobian and the action of Frobenius thereon. The Weil pairing furnishes the Tate module with a symplectic pairing, and Frobenius acts as a symplectic similitude with respect to this pairing. This is the setting in which we will define our two additional invariants. We work in this setting both to achieve the greatest possible generality, and because it will be paramount for defining our random models. Afterwards, we will specialize to the case of abelian varieties, and then even further to Jacobians of curves. \subsection{The $\omega$ and $\psi$ invariants attached to a symplectic similitude} \label{omegapsisymplecticsimilitude} Suppose $\omega: T \times T \rightarrow \mathbb{Z}_\ell$ is a perfect symplectic pairing for $T$ a free $\mathbb Z_\ell$-module of rank $2g$. We will suggestively refer to $\omega$ as the \emph{Weil pairing}. Let $V = T_{\mathbb{Q}_\ell} =T \otimes \mathbb Q_\ell.$ Let $F \in \mathrm{GSp}^{(q)}(T,\omega),$ i.e. 
$$\omega(Fx,Fy) = q \cdot \omega(x,y) \text{ for all } x,y \in T.$$ Let $\omega_T := \sum_{i = 1}^g \omega(e_i,f_i)^{-1} \cdot (e_i \wedge f_i) \in \wedge^2 T,$ where $\{e_i, f_i\}$ is a symplectic basis of $T,$ i.e. a basis $\{e_i, f_i: i = 1,\ldots,g \}$ for which \begin{itemize} \item every $e_i$ or $f_i$ is orthogonal to every $e_j$ or $f_j$ if $i \neq j$ \item $\omega(e_i,f_i)$ is non-zero in $\mathbb{Z}_\ell / \ell.$ \end{itemize} It is easy to check that $\omega_T$ does not depend on the choice of symplectic basis. By the recipe from \S \ref{wedge2bilinear}, $\omega_{T}$ defines the sequence of alternating bilinear pairings \begin{align*} \omega_m: {\rm Hom}(T/\ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell ) \times {\rm Hom}(T / \ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell ) &\rightarrow \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \\ \left( \frac{1}{\ell^m} \omega(\bullet, s), \frac{1}{\ell^m} \omega(\bullet, t) \right) &\mapsto \frac{1}{\ell^m} \omega(s,t). \end{align*} Define $H := \frac{T}{(1-F)T}.$ The element $\omega_T$ pushes forward to $\omega^o_H \in \wedge^2 H.$ As explained in $\S \ref{wedge2bilinear},$ these induce alternating bilinear pairings $\omega^o_{H,m}$ on ${\rm Hom} \left(H, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right).$ More concretely, $${\rm Hom} \left(H, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right) = {\rm Ker} \left( 1 - F^\vee | {\rm Hom} \left(T / \ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right) \right) \subset {\rm Hom} \left( T / \ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right),$$ where $F^\vee: {\rm Hom} \left(T / \ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right) \to {\rm Hom} \left(T / \ell^m, \frac{1}{\ell^m} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right)$ is the transpose of $F$, and \begin{equation*} \omega^o_{H,m} = \omega_m |_{H^\vee [\ell^m] \times H^\vee[\ell^m]}. 
\end{equation*} Note that, because $F$ acts on $\omega^o_H$ by the identity and by multiplication by $q$, $\omega^o_H$ is $\ell^n$-torsion. \bigskip Now suppose $\ell^n || q-1.$ The snake lemma for the diagram $$\begin{CD} 0 @>>> T @>>> V @>>> V/T @>>> 0\\ @. @V{1-F}VV @V{1-F}VV @V{1-F}VV @. \\ 0 @>>> T @>>> V @>>> V/T @>>> 0 \end{CD}$$ defines an isomorphism $\mathrm{snake}: {\rm Ker}(1-F | \; V/T) \cong {\rm Coker}(1-F| \; T) = H.$ For all $m > 0,$ the Weil pairing identification of ${\rm Hom}( T/\ell^m, \mathbb{Z}_\ell / \ell^m)$ with $T / \ell^m$ identifies $F^\vee$ with $qF^{-1}.$ So there are isomorphisms \begin{align*} {\rm Hom}(H, \mathbb{Z}_\ell / \ell^n) &\rightarrow {\rm Ker}( 1 - F^\vee | \; {\rm Hom}(T, \mathbb{Z}_\ell / \ell^n) ) \\ &= {\rm Ker}(1 - F^\vee | \; {\rm Hom}(T / \ell^n, \mathbb{Z}_\ell / \ell^n) ) \\ &\xrightarrow{\text{Weil pairing}} {\rm Ker}( 1 - qF^{-1} |\; T / \ell^n ) \\ &= {\rm Ker}( 1 - F^{-1} |\; T / \ell^n ) \hspace{2.0cm} \text{because } q \equiv 1 \mod \ell^n \\ &=^{ \cdot 1/\ell^n} {\rm Ker}\left(1-F | \; (V/T) [\ell^n] \right) \\ &= {\rm Ker}\left(1-F | \; V/T \right) [\ell^n] \\ &=^{\mathrm{snake}} H[\ell^n]. \end{align*} \begin{definition}\label{def-psi-G} Define the invariant $\psi_H : {\rm Hom}(H, \frac{1}{\ell^n} \mathbb{Z}_\ell / \mathbb{Z}_\ell ) \rightarrow H[\ell^n]$ as multiplication by $\ell^n$ composed with all of the above maps; it is an isomorphism since all of the constituent maps are isomorphisms. \end{definition} We define a corresponding pairing \begin{align*} {\rm Hom} \left(H, \frac{1}{\ell^n} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right) \times {\rm Hom} \left(H, \mathbb{Q}_\ell / \mathbb{Z}_\ell \right) &\rightarrow \frac{1}{\ell^n}\mathbb{Z}_\ell / \mathbb{Z}_\ell \\ (\alpha, \beta) &\mapsto \langle \alpha, \beta \rangle_H := \beta( \psi_H(\alpha) ). 
\end{align*} \bigskip Every element of ${\rm Hom}(H , \mathbb{Z}_\ell / \ell^n ) = {\rm Ker} \left(1 - F^\vee | \; {\rm Hom}(T / \ell^n, \mathbb{Z}_\ell / \ell^n) \right)$ can be expressed as $\omega(\bullet, s) \mod \ell^n$ for some $s \in T$ uniquely determined mod $\ell^n T.$ Because $F \in \mathrm{GSp}^{(q)}(T,\omega),$ lying in ${\rm Ker}(1 - F^\vee)$ amounts to $(1 - qF^{-1})s \in \ell^n T$ or equivalently, $(F-1)s \in \ell^n T$ because $q \equiv 1 \mod \ell^n.$ Unravelling the definition of $\psi,$ we find \begin{equation} \label{unravelpsi} \psi_H \left( \frac{1}{\ell^n} \omega(\bullet, s) \right) = \frac{1}{\ell^n} (F-1) s \mod (F-1)T. \end{equation} \begin{lemma}[Compatibility between $\psi$ and $\omega$] \label{compatibilitypsiomega} Let $r$ be any non-negative integer. Let $\alpha, \beta \in {\rm Hom} \left(H, \frac{1}{\ell^{n+r}} \mathbb{Z}_\ell / \mathbb{Z}_\ell \right).$ Then $\omega^o_H$ and $\psi_H$ satisfy the following compatibility relation: $$\langle \ell^r \alpha, \beta \rangle_H - \langle \ell^r \beta, \alpha \rangle_H = \frac{q-1}{\ell^n} \omega^o_{H,n+r}(\alpha, \beta).$$ \end{lemma} \begin{proof} We can represent $\alpha$ and $\beta$ as \begin{align*} \alpha &= \frac{1}{\ell^{n+r}} \omega(\bullet, s) \mod \mathbb{Z}_\ell, \\ \beta &= \frac{1}{\ell^{n+r}} \omega(\bullet, t) \mod \mathbb{Z}_\ell, \end{align*} where $s,t \in T$ are unique mod $\ell^{n+r} T$ and satisfy $(F-1)s, (F-1)t \in \ell^{n+r} T.$ By definition of the pairings $\omega^o_{H,m},$ \begin{equation} \label{omegapartcompatibility} \omega^o_{H,n+r}(\alpha,\beta) = \frac{1}{\ell^{n+r}} \omega(s,t) \mod \mathbb{Z}_\ell. 
\end{equation} By the calculation from \eqref{unravelpsi}, \begin{align} \label{psipartcompatibilitypart1} \langle \ell^r \alpha, \beta \rangle_H &= \beta \left( \psi_H(\ell^r \alpha) \right) \nonumber \\ &= \beta \left( \frac{1}{\ell^n} (F-1)s \right) \nonumber \\ &= \frac{1}{\ell^{n+r}} \omega \left( \frac{1}{\ell^n} (F-1)s,t \right) \mod \mathbb{Z}_\ell \end{align} and likewise \begin{align} \label{psipartcompatibilitypart2} \langle \ell^r \beta, \alpha \rangle_H &= \alpha \left( \psi_H(\ell^r \beta) \right) \nonumber \\ &= \alpha \left( \frac{1}{\ell^n} (F-1)t \right) \nonumber \\ &= \frac{1}{\ell^{n+r}} \omega \left( \frac{1}{\ell^n} (F-1)t,s \right) \mod \mathbb{Z}_\ell. \end{align} Combining \eqref{omegapartcompatibility}, \eqref{psipartcompatibilitypart1}, and \eqref{psipartcompatibilitypart2}: \begin{align*} &\langle \ell^r \alpha, \beta \rangle_H - \langle \ell^r \beta, \alpha \rangle_H - \frac{q-1}{\ell^n} \omega^o_{H,n+r}(\alpha, \beta) \mod \mathbb{Z}_\ell \\ &= \frac{1}{\ell^{2n+r}} \left( \; \omega( (F-1)s,t) - \omega( (F-1)t,s ) - (q-1) \omega(s,t) \; \right) \\ &= \frac{1}{\ell^{2n+r}} \left( \; \omega( (F-1)s,t) - \omega( (F-1)t,s ) - \omega(Fs,Ft) + \omega(s,t) \; \right) \\ &= \frac{1}{\ell^{2n+r}} \omega( (F-1)s, (1-F)t ) \\ &= 0 \mod \mathbb{Z}_\ell , \end{align*} where the last line follows because $(F-1)s, (1-F)t \in \ell^{n+r} T.$ \end{proof} Finally, we introduce a scaling factor so as to make the compatibility relation in the above lemma match up with that in equation \eqref{psiomegacomp}. Namely, we define $\omega_H=\frac{q-1}{2\ell^n} \omega^o_H$. \subsection{The $\omega$ and $\psi$ invariants for principally polarized abelian varieties and curves over a finite field}\label{abomegapsi} Let $A / \mathbb{F}_q$ be a principally polarized abelian variety. 
Let $\zeta$ be a generator for $\mu_{\ell^n} \subset \mathbb{F}_q^\times.$ Let $T = T_{\ell}(A)$ denote the $\ell$-adic Tate module of $A$ and let $V = T_{\mathbb{Q}_\ell}.$ Let $\mu = \varprojlim \mu_{\ell^m},$ where $\mu_{\ell^m}$ are the $\ell^m$th roots of unity. This is a free $\mathbb{Z}_\ell$-module of rank 1. We will identify $\mu$ with $\mathbb{Z}_\ell$ by choosing a basis vector for $\mu.$ Via the principal polarization, the Weil pairing defines a natural alternating non-degenerate symplectic pairing $\omega: T \times T \rightarrow \mathbb{Z}_\ell.$ The Frobenius endomorphism $F: A \rightarrow A$ induces $F: T_\ell(A) \rightarrow T_\ell(A).$ It acts as a symplectic similitude in $\mathrm{GSp}^{(q)}(T,\omega)$: $$\omega(Fx,Fy) = q \cdot \omega(x,y) \text{ for all } x,y \in T.$$ Let $H = T / (1-F)T.$ Note that $V / T$ is naturally isomorphic to $A[\ell^\infty].$ By the snake lemma for the diagram $$\begin{CD} 0 @>>> T @>>> V @>>> V/T @>>> 0\\ @. @V{1-F}VV @V{1-F}VV @V{1-F}VV @. \\ 0 @>>> T @>>> V @>>> V/T @>>> 0 \end{CD},$$ the group $H$ is isomorphic to ${\rm Ker}(1-F| \; A[\ell^\infty]) = A(\mathbb{F}_q)_{\ell}.$ By the construction described in \S \ref{omegapsisymplecticsimilitude}, these data induce the triple $(H, \omega_H, \psi_H).$\footnote{The invariants $\omega_H$ and $\psi_H$ only depend on $b \mod \ell^n \mu,$ which is equivalent to a choice of generator $\zeta \in \mu_{\ell^n}.$} The resulting triple is the \emph{bilinearly enhanced group associated to $A$}. We denote the invariants $\omega_H,\psi_H$ by $\omega_A, \psi_A.$ \begin{remark} It is not hard to check that the inverse $\psi_A^{-1}$ of $\psi_A$ is the Tate-Lichtenbaum pairing \cite[XI.9]{Silverman}, but we will not use this fact in our arguments. We use $\psi$ because $\psi$, unlike its inverse, descends to quotients of the group of rational points of the abelian variety. 
\end{remark} \bigskip For every smooth projective curve $C / \mathbb{F}_q,$ the Jacobian $\mathrm{Pic}^0(C)$ is an abelian variety with a canonical principal polarization. By the above discussion, we can attach the bilinearly enhanced group $(\mathrm{Pic}^0(C)(\mathbb{F}_q)_\ell, \omega_{\mathrm{Pic}^0(C)}, \psi_{\mathrm{Pic}^0(C)})$ to the curve $C.$ To ease notation, we denote this triple by $(G_C, \omega_C, \psi_C).$ For the principal polarization implicitly used to define these invariants, we will always use the canonical one. \subsection{Equidistribution Conjecture} We fix $q$ and $\ell^n$ as before. For each positive integer $g$, we may consider the set $\mathcal{H}_{g, \mathbb{F}_q}$ of smooth, projective hyperelliptic curves over $\mathbb{F}_q$ of genus $g$. To a hyperelliptic curve $C,$ we associate the $\ell^n$-BEG $(G_C,\omega_C,\psi_C)$. We thus obtain a corresponding counting measure $\mu^q_g$ on $\mathcal{C}_{\ell,n}$. We conjecture that for $q$ fixed and $g \to \infty,$ the measures $\mu^q_g$ converge to the measure $\mu$ referred to in Theorem \ref{measure-exists-unique} and formally defined in \S \ref{universalmeasuredefinition}; this refines the analogue in this setting of the conjecture of \cite{Malle10} and also generalizes some conjectures from \cite{LT}. \begin{conj}\label{eqconjff} As $g\rightarrow\infty$, the measures $\mu^q_g$ converge to $\mu$ in the weak-* topology. \end{conj} \subsection{Statements of Results} While we cannot prove Conjecture \ref{eqconjff}, we may make partial progress towards it in the style of \cite{EVW}, by building on their work. Informally, we prove that the moments of $\mu_g^q$ approach those of $\mu$ for large $g$; moreover, the error shrinks as $q$ grows. More precisely, we prove \begin{thm}\label{ffmain} Fix an element $G^\bullet=(G,\omega_G,\psi_G)\in\mathcal{C}_{\ell,n}$, and suppose $q$ is sufficiently large with respect to $|G|$.
Let $\mathbb{E}_G^+,\mathbb{E}_G^-$ be the limsup and liminf, respectively, of $\mathbb{E}_{\mu^q_g}\#{\rm Surj}(*,G^\bullet)$ as $g\rightarrow\infty$. Then $$\left|\mathbb{E}_G^{\pm}-\mathbb{E}_\mu\#{\rm Surj}(*,G^\bullet)\right|=O_G(q^{-1/2}).$$ Moreover, if $g,q$ both tend to infinity then $\mu^q_g$ converges to $\mu$ in the weak-* topology. \end{thm} \subsection{Generalizations of Conjecture \ref{eqconjff}} \label{relativeclassgrouppuncturedcurves} In this section we motivate a generalization of Conjecture \ref{eqconjff}. Specifically, we work in the more general setting where the base curve is not necessarily $\mathbb{P}^1$ and is not necessarily proper. This will be useful later on when we motivate our conjecture in the number field setting. To that end, let $C$ be a smooth, projective curve over $\mathbb{F}_q$, and let $S\subset C$ be a reduced effective divisor over $\mathbb{F}_q$. We consider double covers $D \xrightarrow{\pi} C$ where $D$ is a smooth projective curve and $\pi$ is unramified over $S$. We set $T = \pi^{-1}(S)$, which is also reduced. We are interested in studying the $\ell$-part of the Picard group of $D-T$. However, since this is split by the action of the non-trivial automorphism of $\pi$, it is better to consider the relative class group \[{\rm Pic}(D-T/C-S):=\frac{{\rm Pic}(D-T)}{{\rm Pic}(C-S)}.\] Let ${\rm Div}_S, {\rm Div}_T$ denote the divisors on $C$ (resp. $D$) supported on $S$ (resp. $T$). \begin{lemma} The natural restriction map induces a right exact sequence $${\rm Div}_T / \pi^\ast {\rm Div}_S\rightarrow{\rm Pic}(D/C) \rightarrow {\rm Pic}(D - T / C - S)\rightarrow 0.$$ \end{lemma} \begin{proof} There is a map of right exact sequences $$\begin{CD} {\rm Div}_S @>>> {\rm Pic}(C) @>>> {\rm Pic}(C - S) @>>> 0 \\ @V{\pi^\ast}VV @V{\pi^\ast}VV @V{\pi^\ast}VV \\ {\rm Div}_T @>>> {\rm Pic}(D) @>>> {\rm Pic}(D - T) @>>> 0 \\ \end{CD}$$ with exact rows. The result follows from the snake lemma.
\end{proof} Since $\omega_D,\psi_D$ naturally push forward along quotient maps, we obtain elements $({\rm Pic}(D/C)_\ell, \omega_{D/C}, \psi_{D/C})$ of $\mathcal{C}_{\ell,n}$. If $S$ consists of $s$ closed points and $T$ consists of $t$ closed points, then ${\rm Div}_T / \pi^\ast {\rm Div}_S$ is a free abelian group on $u = t-s$ generators. As such, it seems reasonable to model the image of ${\rm Div}_T / \pi^\ast {\rm Div}_S$ as a random $u$-generated subgroup of ${\rm Pic}(D/C).$ We define $\mu^g_{C,S}$ to be the counting measure corresponding to the elements $({\rm Pic}(D/C)_\ell, \omega_{D/C}, \psi_{D/C})$ obtained as $\pi$ varies over genus $g$ double covers of $C$ which are unramified over $S$. The operator $Q^u$ defined in Definition \ref{Qtmu} exactly models the operation of quotienting out by $u$ random elements, which motivates the following conjecture: \begin{conj}\label{eqconjffaffine} Let $C,S,\mu^g_{C,S}$ be as above. As $g\rightarrow\infty$, the measures $\mu^g_{C,S}$ converge to $Q^u\mu$ in the weak-* topology. \end{conj} \section{The invariants beyond the function field setting}\label{number-field-invariants} Let $K$ be a number field containing the $\ell^n$th roots of unity for some odd prime $\ell$ and positive integer $n$, but not the $\ell^{n+1}$st roots of unity, and \emph{fix a generator $\zeta$ of $\mu_{\ell^n}(K)$}. We will define invariants $\psi_K$ and $\omega_K$ on $\mathrm{Cl}(K) [\ell^\infty]$ that mimic those defined in the function field setting. To motivate these definitions, we can compare them to the function field case. If we replace every occurrence of $\textrm{Spec\,} \mathcal{O}_K$ in these definitions with the projective curve $C$ whose function field $\mathbb F_q(C)$ is $K$, we obtain invariants $\psi_K$ and $\omega_K$ on the Picard group of $C$. In Section \ref{s-NT-FF-compatible}, we will see that $\psi_K= \psi_C $ and $\omega_K=\omega_C$, so these definitions agree with our earlier ones.
In Section \ref{s-internal-consistency} we will check that $\psi_K$ and $\omega_K$ satisfy the compatibility condition \eqref{psiomegacomp} making $ ( \mathrm{Cl}(K) [\ell^\infty], \omega_K, \psi_K )$ a bilinearly enhanced group. \bigskip \subsection{Definition of $\psi_K$} \begin{definition}\label{psi-nt-defi} Working in the fppf site of $\textrm{Spec\,}\mathcal{O}_K$, recall the Kummer sequence $1\rightarrow\mu_{\ell^n}\rightarrow \mathbb{G}_m\rightarrow \mathbb{G}_m\rightarrow 1$. From the Kummer sequence and the fact that $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{G}_m)\cong \mathrm{Cl}(K)$ we get the exact sequence $$1\rightarrow \mathcal{O}_K^\times\otimes\mathbb{Z}/{\ell^n}\mathbb{Z}\xrightarrow{\delta} H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n})\rightarrow \mathrm{Cl}(K)[{\ell^n}]\rightarrow 1.$$ Now, for any scheme $X$ we have $H^1(X,\mathbb{Z}/{\ell^n}\mathbb{Z}) \cong{\rm Hom}(\pi_{1,et}(X)^{\textrm{ab}},\mathbb{Z}/{\ell^n}\mathbb{Z})$, which yields $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z})\cong \mathrm{Cl}(K)^\vee[{\ell^n}]$ (from class field theory). We thus get a map $$\mathrm{Cl}(K)^\vee[{\ell^n}] \cong H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z}) \rightarrow H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n}) \rightarrow \mathrm{Cl}(K)[{\ell^n}]$$ where the middle map is induced by cup product with $\zeta$. We define this map to be $\psi_K$. \end{definition} Note that, for $K$ the function field of a curve $C$ over a field $k$, we define $\mathrm{Cl}(K)$ as the group ${\rm Pic}(C)$ of line bundles on $C / k$, not its subgroup ${\rm Pic}^0(C)$ of degree zero line bundles. On the other hand, our definition of $\psi_C$ is in terms of ${\rm Pic}^0(C)$. In Section \ref{s-NT-FF-compatible}, we will see that $\psi_C$ and $\psi_K$ are equal up to the inclusion ${\rm Pic}^0(C) \to {\rm Pic}(C)$. \subsection{Definition of $\omega_K$} The definition of $\omega_K$ is more involved: we do not construct it directly.
We construct $\omega_K$ indirectly via pairings $\omega_{m,K}$ defined in \S \ref{omega-pairings} and discuss motivation for the definition of $\omega_K$ afterwards in \S \ref{motivation-omega-K}. We first state the group-theoretic Lemma \ref{wedge-from-pairing} that gives a criterion for $\omega$ to be determined uniquely by a system of pairings $\omega_m$. We then construct our pairings $\omega_{m,K}$ in \S \ref{omega-pairings}, verifying the hypotheses of Lemma \ref{wedge-from-pairing}; this defines an element of $(\wedge^2\mathrm{Cl}(K)) [\ell^n]$ in the number field case. In \S \ref{s-NT-FF-compatible}, we verify that the analogous definition in the function field case is compatible with our earlier definition by the Weil pairing. \subsubsection{Lemma relating $\wedge^2 G$ and systems of alternating bilinear pairings on $G^\vee[\ell^m]$} For $a, b \in G^\vee [\ell^m]$, we view $a$ and $b$ as functions from $G$ to $\ell^{-m} \mathbb Z / \mathbb Z$, which gives a map $a \otimes b: G \otimes G \to ( \ell^{-m} \mathbb Z / \mathbb Z ) \otimes (\ell^{-m} \mathbb Z / \mathbb Z) = \ell^{-2m} \mathbb Z / \ell^{-m } \mathbb Z$. By embedding $\wedge^2 G$ into $G \otimes G$ via $x\wedge y \mapsto x\otimes y - y\otimes x$, we have a map $\wedge^2 G \to \ell^{-2m} \mathbb Z / \ell^{-m } \mathbb Z$, which we also call $a \otimes b$. \begin{lemma}\label{wedge-from-pairing} Let $G$ be a finite abelian $\ell$-group. Suppose we are given, for each $m$, a symplectic bilinear form $\omega_m : G^\vee [\ell^m] \times G^\vee [\ell^m] \to \mathbb Q_\ell/ \mathbb Z_\ell $.
Suppose that for all $a \in G^\vee [\ell^m] , b \in G^\vee [\ell^{m+1} ] $ we have \begin{equation}\label{omega-m-compatibility} \omega_m ( a, \ell b ) = \omega_{m+1} ( a, b ). \end{equation} Then there exists a unique $\omega \in \wedge^2 G$ such that for all natural numbers $m$ and for all $a, b \in G^\vee [\ell^m] $, we have \begin{equation}\label{omega-omega-m} \ell^m (a \otimes b ) (\omega) = \omega_m (a,b).\end{equation} \end{lemma} \begin{proof} We can express $G \cong \bigoplus_{i=1}^r \mathbb Z/\ell^{e_i}$ as a direct sum of cyclic groups $ \mathbb Z/\ell^{e_i}$ with generators $x_i$. Any element of $G^\vee [\ell^m]$ is a linear combination of the forms $f_{i,m}$ for $i$ from $1$ to $r$, where $f_{i,m}$ sends $x_i$ to $\ell^{ - \min(e_i,m)}$ and $x_j$ to $0$ for $j \neq i$. Thus, we have $\omega_m ( a,b) =\ell^m (a\otimes b) (\omega)$ for all $a,b$ if and only if we have \begin{equation}\label{wwm-basis} \omega_m( f_{i,m}, f_{j,m}) = \ell^m (f_{i,m} \otimes f_{j,m} ) (\omega) \end{equation} for all $1 \leq i< j \leq r$. For any $\omega \in \wedge^2 G$, we can write $\omega = \sum_{i,j} c_{i,j} (x_i \wedge x_j )$ for some $c_{i,j}$.
We have \[ (f_{i,m} \otimes f_{j,m} ) (x_i \wedge x_j) = f_{i,m} (x_i) f_{j,m}(x_j) = \ell^{- \min(m,e_i) - \min(m,e_j) } \] and thus \begin{equation} \label{basis-evaluation-formula} (f_{i,m} \otimes f_{j,m} ) (\omega ) = \ell^{ - \min(m,e_i) - \min(m,e_j) } c_{i,j} .\end{equation} If $\omega$ satisfies \eqref{wwm-basis} for all $m, i, j$, then taking $m = \min(e_i,e_j)$, which gives $\ell^{ - \min(m,e_i) - \min(m,e_j) } =\ell^{-2\min(e_i,e_j) } $, we have \[ c_{i,j} \equiv \ell^{ \min (e_i,e_j)} \omega_{\min(e_i,e_j)} (f_{i,\min(e_i,e_j)}, f_{j,\min(e_i,e_j)} ) \mod \ell^{\min(e_i,e_j)} .\] Since $(x_i \wedge x_j)$ is $\ell^{\min(e_i,e_j)}$-torsion, this implies that \[\omega = \sum_{i,j} c_{i,j} (x_i \wedge x_j ) = \sum_{i,j} \ell^{\min (e_i,e_j)} \omega_{\min(e_i,e_j)} (f_{i,\min(e_i,e_j)}, f_{j,\min(e_i,e_j)} )( x_i \wedge x_j) .\] If we prove the converse, that this value of $\omega$ satisfies \eqref{wwm-basis} for all $m,i,j$, then we will have established the existence and uniqueness of a solution. To do this, applying \eqref{basis-evaluation-formula}, it suffices to check that \[ \omega_m ( f_{i,m} , f_{j,m}) = \ell^{ m - \min(m,e_i) - \min(m,e_j)} \omega_{\min(e_i,e_j)} (f_{i,\min(e_i,e_j)}, f_{j,\min(e_i,e_j)} ).\] For $m= \min(e_i,e_j)$ this is trivial and so we prove it by descending and ascending induction on $m$. For the descending induction, we observe that as long as $m \leq \min(e_i,e_j) $, $ f_{i, m-1} = \ell f_{i,m}$ and $ f_{j, m-1} = \ell f_{j,m}$. By \eqref{omega-m-compatibility}, \begin{align*} \omega_{m-1}(f_{i,m-1} , f_{j,m-1} ) &= \omega_{m-1}( f_{i,m-1}, \ell f_{j,m} ) \\ &= \omega_m ( f_{i,m-1}, f_{j,m} ) \\ &= \omega_m ( \ell f_{i,m}, f_{j,m} ) \\ &= \ell \omega_m( f_{i,m}, f_{j,m}), \end{align*} which handles the induction step because the exponent $m - \min(m,e_i) - \min(m,e_j)$ increases by $1$ when $m$ decreases by $1$. For the ascending induction, assume without loss of generality that $e_i \leq e_j$.
For $m \geq e_i$, we have $f_{i,m+1} = f_{i,m}$, while $ f_{j,m} = \ell f_{j,m+1} $ if $m<e_j$ and $ f_{j,m} = f_{j,m+1}$ if $m \geq e_j$. Thus by \eqref{omega-m-compatibility}, \begin{align*} \omega_{m+1} (f_{i,m+1}, f_{j,m+1}) &= \omega_{m+1} ( f_{i,m} ,f_{j,m+1}) \\ &= \omega_m ( f_{i,m} , \ell f_{j,m+1} ) \\ &= \begin{cases} \omega_m (f_{i,m}, f_{j,m} ) & m< e_j \\ \ell \omega_m (f_{i,m}, f_{j,m}) & m \geq e_j \end{cases}, \end{align*} which handles the induction step because the exponent $m - \min(m,e_i)- \min(m,e_j)$ is constant if $e_i \leq m < e_j$ and increases by $1$ if $e_i,e_j \leq m$. \end{proof} \begin{cor} In Lemma \ref{wedge-from-pairing}, if the pairings $\omega_m$ all take $\ell^n$-torsion values for some $n$, then $\omega$ is $\ell^n$-torsion. \end{cor} \begin{proof} If $\omega_m$ are the bilinear forms defined by $\omega$, then $\ell^n \omega_m$ are the bilinear forms defined by $\ell^n \omega$. Thus if $\ell^n \omega_m =0$, by the uniqueness of $\omega$, it follows that $\ell^n \omega=0$. \end{proof} \subsubsection{Construction of the pairings $\omega_{m,K}$} \label{omega-pairings} Recall that $K$ is a global field containing the $\ell^n$th roots of unity, with fixed generator $\zeta.$ Let $m$ be a natural number.
Artin-Verdier duality defines (among other things) a pairing \[ (\cdot\,,\cdot)_{\textrm{AV} } : H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^m}) \times H^2 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m) \to \ell^{-m}\mathbb Z/ \mathbb Z. \] Let $\zeta_m \in H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^m})$ be given by the $\mu_{\ell^m}$-torsor consisting of the $\ell^m$th roots of our fixed generator $\zeta$ of $\mu_{\ell^n}.$ Define a bilinear form $\omega_{m,K}: H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m) \times H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m) \to \frac{1}{\ell^m} \mathbb{Z} / \mathbb{Z}$ by \[ \omega_{m,K}(a, b) = -\frac{1}{2}( \zeta_m, a \cup b)_{\textrm{AV}}.\] By the class field theory isomorphism $H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m) \cong \mathrm{Cl}(K)^\vee [\ell^m] $ we can equivalently view this as a bilinear form on $\mathrm{Cl}(K)^\vee [\ell^m] $. \begin{lemma} Let $K$ be a global field. \begin{enumerate} \item $\omega_{m,K}$ is a symplectic bilinear form. \item $\omega_{m,K} (a,b)$ has order dividing $\ell^n$. \item $\omega_{m,K}$ and $\omega_{m+1,K}$ satisfy the compatibility \eqref{omega-m-compatibility}. \end{enumerate}\end{lemma} \begin{proof} (1) is clear because the cup product in degree 1 is symplectic bilinear. (2) is clear because $\zeta_m$ is $\ell^n$-torsion. (3) takes more work.
First note that the class field theory isomorphism sends the inclusion map \[ \mathrm{Cl}(K)^\vee [\ell^m] \to \mathrm{Cl}(K)^\vee [\ell^{m+1} ] \] to the multiplication by $\ell$ map \[ H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m) \to H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^{m+1})\] and the multiplication by $\ell$ map \[ \mathrm{Cl}(K)^\vee [\ell^{m+1}] \to \mathrm{Cl}(K)^\vee [\ell^{m} ] \] to the reduction mod $\ell^m$ map \[ H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^{m+1} ) \to H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^{m}).\] We will denote the multiplication-by-$\ell$ map on cohomology classes as $a \mapsto \ell a$ and the reduction mod $\ell^m$ map as $b \mapsto \overline{b}$. Using this notation, and our definition of $\omega_{m,K}$, Equation \eqref{omega-m-compatibility} can be stated as \[ ( \zeta_m, (\ell a) \cup b)_{\textrm{AV}} =( \zeta_{m+1}, a\cup \overline{b})_{\textrm{AV}} .\] To verify this, let us check the formula \[ (\ell a) \cup b = \ell (a \cup \overline{b}).\] The cup product map is induced on cohomology by the multiplication $\mathbb Z/\ell^m \times \mathbb Z/\ell^m \rightarrow \mathbb Z/\ell^m$. The maps $a \mapsto \ell a$ and $b \mapsto \overline{b}$ are induced in cohomology by the maps $\mathbb Z / \ell^m \to \mathbb Z/\ell^{m+1}$ and $\mathbb Z/\ell^{m+1} \to \mathbb Z/\ell^m$, respectively. So any composition of these is induced on cohomology by a composition of maps of groups. To check the identity on cohomology, it therefore suffices to check it on the level of groups, where it becomes the identity $(\ell a)\, b = \ell\, (a \overline{b})$ in $\mathbb Z/\ell^{m+1}$ for $a \in \mathbb Z/\ell^m$ and $b \in \mathbb Z/\ell^{m+1}$, which is obvious.
Using this, we obtain \begin{align*} ( \zeta_m, (\ell a) \cup b)_{\textrm{AV}} &= (\zeta_m, \ell (a \cup \overline{b} ))_{\textrm{AV} } \\ &= \ell ( \overline{\zeta}_m, a\cup \overline{b})_{\textrm{AV}} \\ &= \ell ( \zeta_{m+1}, (a\cup \overline{b}))_{\textrm{AV}}, \end{align*} where the first identity follows from what we have described, the second from the same argument applied to the cup product in the definition of Artin-Verdier duality, and the third from the fact that $\overline{\zeta}_m = \zeta_{m+1}$, which is clear from the definition of $\zeta_m$ and $\zeta_{m+1}$. Here $\overline{\zeta}_{m}$ is defined using the map $H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^{m+1}} ) \to H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^{m}})$, analogously to the $\mathbb Z/\ell^m$ case. \end{proof} \begin{definition}\label{omega-nt-defi} For $K$ a global field, let $\omega_K \in (\wedge^2 \mathrm{Cl}(K) ) [\ell^n] $ be the unique $\omega$ such that \[\ell^m ( a \otimes b )(\omega_K) =\omega_{m,K} (a,b)\] for all $m.$ Here, $\omega_{m,K}$ are the alternating pairings defined at the beginning of Section \ref{omega-pairings}. \end{definition} \subsection{Motivation for the definition of $\omega_K$}\label{motivation-omega-K} We offer some motivation for the definition of $\omega_K.$ First note that, as a general matter, it is more common in mathematics to define bilinear forms first and then to define elements of $\wedge^2$ or $\operatorname{Sym}^2$ in terms of them. Thus it is reasonable to first attempt to understand $\wedge^2 \mathrm{Cl}(K) [\ell^n]$ dually, in terms of bilinear forms. Because we are working with abelian groups and not vector spaces, it is not obvious which bilinear forms correspond to elements of $\wedge^2 \mathrm{Cl}(K) [\ell^n]$, but Lemma \ref{wedge-from-pairing} gives the answer.
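To make Lemma \ref{wedge-from-pairing} concrete, here is the simplest nontrivial case, worked out directly from the definitions above (purely illustrative):

```latex
\begin{remark}
Take $G = (\mathbb Z/\ell)^2$ with generators $x_1, x_2$, so that $\wedge^2 G \cong \mathbb Z/\ell$ is generated by $x_1 \wedge x_2$ and $G^\vee[\ell^m] = G^\vee$ for every $m \geq 1$. Write $\omega = c\,(x_1 \wedge x_2)$. Then \eqref{omega-omega-m} with $m=1$ reads
\[ \omega_1(f_{1,1}, f_{2,1}) = \ell\, (f_{1,1} \otimes f_{2,1})(\omega) = \ell \cdot \ell^{-2} c = \ell^{-1} c, \]
so $c \equiv \ell\, \omega_1(f_{1,1}, f_{2,1}) \bmod \ell$ and $\omega_1$ alone determines $\omega$. For $m \geq 1$ the compatibility \eqref{omega-m-compatibility} gives $\omega_{m+1}(a,b) = \omega_m(a, \ell b) = 0$, since every $b \in G^\vee[\ell^{m+1}] = G^\vee$ is $\ell$-torsion; this is consistent with $\omega$ itself being $\ell$-torsion.
\end{remark}
```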
Lemma \ref{wedge-from-pairing} tells us to look for symplectic forms on the group of maps from the class group to $\mathbb Z/\ell^m$, which we recognize by class field theory as $H^1 ( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m)$. A typical source of symplectic pairings on $H^1$ is the cup product, and it is especially natural to use the cup product here because of the relationship between the cup product and the Weil pairing, which we discuss later in Lemma \ref{omega-comparison}. To obtain such a pairing, we need a linear form on $H^2( \textrm{Spec\,} \mathcal{O}_K, \mathbb Z/\ell^m)$, which by Artin-Verdier duality is equivalent to an element in $H^1(\textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^m})$. Finding the correct definition is then a matter of finding the correct torsor. Because our definition in the function field setting works equally well for any curve, it should correspond to a torsor that can be defined for any curve over $\mathbb F_q$. These would be the torsors that are pulled back from $\textrm{Spec\,} \mathbb F_q$, or, equivalently, split over $\overline{\mathbb F}_q (X)$. The analogue of the special field extension $\overline{\mathbb F}_q (X)$ in the number field setting is probably the cyclotomic field, since $\overline{\mathbb F}_q (X)$ is generated over $\mathbb{F}_q(X)$ by the roots of unity. To make a $\mu_{\ell^m}$-torsor that splits over a cyclotomic field, the simplest choice is to take the $\ell^m$th roots of a root of unity, and since we want our torsor to be $\ell^n$-torsion, and we have an $\ell^n$th root of unity available, that is a natural choice. The scalar constant of $-\frac{1}{2}$ is to make the compatibility relation between $\psi_K$ and $\omega_K$ match up with our definitions, as well as with the function field case.
Of course, if we scaled both definitions of $\omega$ as well as our compatibility relation \eqref{psiomegacomp} by the same element of $\mathbb{Z}_\ell^\times$, all our results would remain essentially unchanged. \subsection{Bilinear invariants for relative class groups} The simplest case of the Cohen-Lenstra heuristics describes the class groups of quadratic extensions of $\mathbb Q$. Because these almost never contain $\ell^n$th roots of unity for $\ell>2$, we focus instead on varying quadratic extensions $L$ of a fixed global field $K$, where $K$ contains the $\ell^n$th roots of unity. However, when doing this, $\mathrm{Cl}(L)_{\ell}$ will always contain $\mathrm{Cl}(K)_{\ell}$ as a subgroup. Because this subgroup does not vary, its distribution is uninteresting, so we quotient out by it. This motivates our use of the relative class group. Similarly, we have relative versions of $\psi$ and $\omega$. \begin{definition} \label{relativepsiomega} For an extension $L/K$, the \emph{relative class group} $\mathrm{Cl}(L/K)$ is the quotient $\mathrm{Cl}(L)/ \mathrm{Cl}(K)$, where we view ideal classes on $K$ as ideal classes on $L$ by tensoring with $\mathcal O_L$. We define $\psi_{L/K}$ and $\omega_{L/K}$ to be the pushforwards of $\psi_L$ and $\omega_L$ from $\mathrm{Cl}(L)$ to $\mathrm{Cl}(L/K)$. \end{definition} As long as the degree $[L:K]$ is prime to $\ell$, the natural map $\mathrm{Cl}(K)_\ell \to \mathrm{Cl}(L)_\ell$ is split by the norm map $\mathrm{Cl}(L)_\ell \to \mathrm{Cl}(K)_\ell$ (the composite is multiplication by $[L:K]$, which is invertible on $\ell$-groups), and so $\mathrm{Cl}(L/K)_\ell$ is a direct summand of $\mathrm{Cl}(L)_\ell$.
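Since $\ell$ is odd, this splitting can be made explicit in the quadratic case; the following standard observation is recorded for orientation.

```latex
\begin{remark}
Let $L/K$ be quadratic with nontrivial automorphism $\sigma$. Because $2$ is invertible on an $\ell$-group for $\ell$ odd, $\mathrm{Cl}(L)_\ell = \mathrm{Cl}(L)_\ell^+ \oplus \mathrm{Cl}(L)_\ell^-$, the eigenspaces for $\sigma$. The image of $\mathrm{Cl}(K)_\ell$ is exactly the plus part: it is visibly contained in it, and conversely, for $c \in \mathrm{Cl}(L)_\ell^+$ the class $c^2 = c \cdot \sigma(c)$ is the extension to $L$ of the norm of $c$, while squaring is bijective on a finite $\ell$-group. Hence $\mathrm{Cl}(L/K)_\ell \cong \mathrm{Cl}(L)_\ell^-$, and under this identification $\psi_{L/K}$ and $\omega_{L/K}$ are the images of $\psi_L$ and $\omega_L$ on the minus part.
\end{remark}
```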
\subsection{Conjecture for the distribution of triples $(\mathrm{Cl}(L/K)_\ell, \psi_{L/K},\omega_{L/K})$ for quadratic extensions of number fields containing $\ell^n$th roots of unity} When we transfer from the function field setting to the number field setting, we conjecture that the distribution of triples $(\mathrm{Cl}(L/K)_\ell, \psi_{L/K},\omega_{L/K})$ is governed by a modification of the measure $\mu$ characterized by Theorem \ref{measure-exists-unique}. A modification of $\mu$ is necessary because the places of $L$ lying over $\infty$ must be thought of as analogous to punctures in the curve appearing in the function field case. In Conjecture \ref{eqconjffaffine}, we conjectured that for punctured curves the distribution of the Picard group is controlled by the measures $Q^{u} \mu$ defined in Definition \ref{Qtmu} by quotienting out by $u$ random elements. In the number field case, our conjecture is analogous: \begin{conj}[Conjecture \ref{ntmain}]\label{ntmain-2} Let $\ell$ be an odd prime and $n$ a natural number. Let $K$ be a number field which contains the $\ell^n$th roots of unity but not the $\ell^{n+1}$th roots of unity. Let $t$ be half the degree of $K$. Then as $L$ varies over quadratic extensions of $K$, the BEG $( \mathrm{Cl}(L/K)_{\ell} , \psi_{L/K}, \omega_{L/K})$ is equidistributed according to the measure $Q^t \mu$. More formally, let $S_{K, X}$ be the set of quadratic extensions $L/K$ of discriminant less than $X$. We conjecture that the counting measures of $( \mathrm{Cl}(L/K)_{\ell} , \psi_{L/K}, \omega_{L/K})$ averaged over $S_{K,X}$ converge to $Q^t \mu$ in the weak-* topology as $X \to\infty$.\end{conj} In Section \ref{s-internal-consistency}, we will check, in Proposition \ref{class-group-support}, that $(\mathrm{Cl}(L/K)_{\ell}, \psi_{L/K} , \omega_{L/K} ) $ is always contained in the support of $Q^t \mu$, which is a basic sanity check on Conjecture \ref{ntmain-2}.
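For orientation, the smallest case of Conjecture \ref{ntmain-2} is worth spelling out explicitly (this is only an illustrative specialization of the conjecture above):

```latex
\begin{remark}
Take $\ell = 3$ and $n = 1$. The smallest admissible base field is $K = \mathbb Q(\mu_3) = \mathbb Q(\sqrt{-3})$, which contains $\mu_3$ but not $\mu_9$, so $[K:\mathbb Q] = 2$ and $t = 1$. Conjecture \ref{ntmain-2} then predicts that, as $L$ ranges over quadratic extensions of $\mathbb Q(\sqrt{-3})$ ordered by discriminant, the triples $(\mathrm{Cl}(L/K)_3, \psi_{L/K}, \omega_{L/K})$ equidistribute according to $Q^1 \mu$, that is, according to $\mu$ after quotienting by a single random element as in Definition \ref{Qtmu}.
\end{remark}
```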
The motivation for quotienting by exactly $\frac{ [K:\mathbb Q]}{2}$ random elements comes primarily from the function field case, where we conjectured that the distribution of ${\rm Pic}(D/C)_\ell$, where $D$ is a double cover of a punctured curve $C$, was $Q^u \mu$, where $u$ was the number of punctures of $D$ minus the number of punctures of $C$ (equivalently, the number of punctures of $C$ that are split in $D$). For an extension $L/K$ of number fields, because $K$ contains $\mathbb Q(\mu_{\ell^n})$, it is totally complex, and so has $\frac{[K:\mathbb Q]}{2}$ infinite places. Because these are all complex places, they all split in $L/K$, and hence the analogue of $u$ is $\frac{[K:\mathbb Q]}{2}$. Alternatively, if one thinks of the elements we quotient by as coming from the unit group, the same logic shows that the rank of the unit group of $L$ modulo the unit group of $K$ is $\frac{[K:\mathbb Q]}{2}$. \section{Alternate definitions of $\psi_K$}\label{s-alternate-psi} We present some equivalent definitions of the invariant $\psi_K$. Throughout this section, we fix a generator $\zeta \in \mu_{\ell^n}(\mathcal O_K).$ \subsection{Definition by Restricting Torsors} We can express the composition $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z}) \xrightarrow{\cup \zeta} H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n}) \rightarrow \mathrm{Cl}(K)[{\ell^n}]$ in a different way, using torsors. It is not possible to directly express $H^1(\textrm{Spec\,} \mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z}) \cong{\rm Hom}(\mathrm{Cl}(K),\mathbb{Z}/{\ell^n}\mathbb{Z})$ this way, as that map is not defined using torsors but rather using class field theory.
\begin{prop}\label{torsor-definition} \begin{enumerate} \item Given a $\mathbb Z/{\ell^n}$-torsor $Y$ over $\textrm{Spec\,} \mathcal{O}_K$, viewed as a scheme, the $\mathbb G_m$-torsor associated to it by the map $$H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z}) \xrightarrow{\cup \zeta} H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n}) \rightarrow H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb G_m)$$ is the inverse of the torsor of invertible functions on $Y$ on which the canonical $\mathbb Z/{\ell^n}$-action on $Y$ multiplies the functions by the fixed generator of $\mu_{\ell^n}$ in $\mathbb G_m$. \item Given a $\mathbb Z/{\ell^n}$-torsor over $\textrm{Spec\,} \mathcal{O}_K$, viewed as an \'{e}tale algebra $R$ over $\mathcal O_K$ with an automorphism of order ${\ell^n}$, the locally free module associated to it by $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/{\ell^n}\mathbb{Z})\xrightarrow{\cup \zeta} H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n}) \rightarrow H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb G_m)= \mathrm{Cl}(K)$ is the dual of the submodule of $R$ on which the automorphism of order ${\ell^n}$ acts by the fixed generator of $\mu_{\ell^n}$. \end{enumerate} \end{prop} \begin{proof}\begin{enumerate} \item In general, given groups $H$ and $G$, a map $i: H \to G$, and a left $H$-torsor $Y$ representing an element in $H^1(X,H)$, the induced image in $H^1(X,G)$ is given by the left $G$-torsor of maps $ f: Y \to G$ such that for $h \in H, y\in Y$, $ f(hy) = f(y) i(h)^{-1}$, with the action of $G$ given by left multiplication. This can be checked immediately with the cocycle definition of functoriality of $H^1$. In the case $H = \mathbb Z/{\ell^n}$, $G = \mathbb G_m$, with $i$ sending a generator of $H$ to a generator of $\mu_{\ell^n}$, this is exactly the stated construction.
\item This follows from the previous part and the observation that a $\mathbb G_m$-torsor is associated to the unique invertible sheaf whose invertible sections over each open set form the given $\mathbb G_m$-torsor. The space of invertible functions on $Y$ such that the canonical $\mathbb Z/{\ell^n}$-action on $Y$ multiplies the functions by the fixed generator of $\mu_{\ell^n}$ in $\mathbb G_m$ is simply the set of invertible elements of the module of elements of $R$ on which the automorphism of order ${\ell^n}$ acts by the fixed generator of $\mu_{\ell^n}$, and dualizing this module corresponds to inverting the torsor. \end{enumerate} \end{proof} \subsection{ Definition by Hilbert Symbols} \label{psihilbertsymbol} In this section we construct an alternative definition of $\psi_K$ using Hilbert symbols. This will be convenient for explicit calculations, and is what we used in our numerical experiments. To understand the map $H^1(K, \mathbb Z/{\ell^n}) \xrightarrow{\cup \zeta} H^1(K, \mu_{\ell^n})$ more explicitly, we use the following commutative diagram: $$\begin{CD} H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n}) @>>> H^1(K, \mathbb Z/{\ell^n}) \\ @V{\cup \zeta}VV @V{\cup \zeta}VV \\ H^1( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^n}) @>>> H^1(K, \mu_{\ell^n}). \end{CD}$$ Because $\mu_{\ell^n}= \mathbb Z/{\ell^n}$ over $K$, the rightmost map is an isomorphism. Since $\textrm{Spec\,} \mathcal{O}_K$ is a normal scheme of dimension 1, the top map is injective. It follows that the map $H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n})\xrightarrow{\cup \zeta} H^1(\textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^n})$ is also injective. Consider the pairing $\gamma:\mathbb{A}_K^{\times}\times \mathbb{A}_K^{\times} \rightarrow \mu_{\ell^n}$ defined by $\gamma(a,b)=\sum_v \langle a,b\rangle_{{\ell^n},v},$ where $\langle ,\rangle_{{\ell^n},v}$ denotes the ${\ell^n}$-Hilbert symbol pairing at $v$.
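For the explicit calculations mentioned above, it may be useful to recall the standard tame formula for the local symbol away from $\ell$ (stated here up to the usual sign and ordering conventions, which vary in the literature):

```latex
\begin{remark}
Let $v$ be a finite place of $K$ with $v \nmid \ell$, and let $q_v$ be the cardinality of its residue field, so that $\ell^n \mid q_v - 1$ because $\mu_{\ell^n} \subset K$. Then for $a, b \in K_v^\times$,
\[ \langle a, b \rangle_{\ell^n, v} = \overline{\left( (-1)^{v(a)v(b)} \, \frac{a^{v(b)}}{b^{v(a)}} \right)}^{\,(q_v-1)/\ell^n}, \]
where the bar denotes reduction to the residue field and we identify the $\ell^n$th roots of unity of the residue field with $\mu_{\ell^n}(K_v)$. In particular, $\langle a, b \rangle_{\ell^n, v} = 1$ whenever $a$ and $b$ are both $v$-adic units, so the sum defining $\gamma$ has only finitely many nontrivial terms.
\end{remark}
```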
By Hilbert reciprocity, $K^{\times}$ is isotropic for $\gamma$, so $\gamma$ descends to a pairing on $K^{\times}\backslash \mathbb{A}_K^{\times} \times K^{\times}/(K^{\times})^{\ell^n},$ which we also denote by $\gamma.$ Class field theory gives an isomorphism $H^1 ( K, \mathbb Z/{\ell^n}) = {\rm Hom}( K^{\times} \backslash \mathbb A_K^\times , \mathbb Z/{\ell^n})$. Furthermore $H^1 ( K, \mu_{\ell^n}) = K^\times / (K^\times)^{\ell^n}$ by Kummer theory. \begin{lemma}\label{class-kummer-isomorphism} The isomorphism $K^\times / (K^\times)^{\ell^n} \cong {\rm Hom}( K^{\times} \backslash \mathbb A_K^\times , \mathbb Z/{\ell^n})$ obtained by composing the Kummer theory isomorphism $K^\times / (K^\times)^{\ell^n} \cong H^1(K,\mu_{\ell^n}),$ the isomorphism $H^1(K, \mathbb Z/{\ell^n}) \cong H^1(K,\mu_{\ell^n})$ induced by a fixed choice of generator $\zeta \in H^0(K,\mu_{\ell^n}),$ and the class field theory isomorphism is induced by the pairing $\gamma$: \begin{align*} K^\times / (K^\times)^{\ell^n} &\rightarrow {\rm Hom}( K^{\times} \backslash \mathbb A_K^\times , \mathbb Z/{\ell^n}) \\ b &\mapsto \iota \circ \gamma(\cdot, b), \end{align*} where $\iota: \mu_{\ell^n} \xrightarrow{\sim} \mathbb{Z} / \ell^n \mathbb{Z}$ is the isomorphism induced by the choice of $\zeta.$ \end{lemma} \begin{proof} Let $b$ be an element of $K^\times$. Its associated class in $H^1(K,\mu_{\ell^n})= H^1(K,\mathbb Z/{\ell^n})$ corresponds to the abelian extension $K(\sqrt[{\ell^n}]{b})$, of degree dividing ${\ell^n}$. We must check that this Galois character, viewed as a character of the idele class group, is given by $ a \mapsto \sum_v \langle a,b\rangle_{{\ell^n},v}$. Because the ideles are contained in a product of local fields, it is sufficient to check that the character of the Galois group of $K_v$ defined by $K_v(\sqrt[{\ell^n}]{b})$ is given by $a \mapsto \langle a,b \rangle_{{\ell^n},v}$. This is one definition of the Hilbert symbol.
\end{proof} \begin{lemma}\label{adelic-dual-description} Under the identification of $H^1(K,\mathbb{Z} / \ell^n \mathbb{Z})$ with ${\rm Hom}( K^{\times} \backslash \mathbb A_K^\times , \mathbb Z/{\ell^n})$ via the class field theory isomorphism, the image of $H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n})$ inside $H^1(K, \mathbb Z/{\ell^n})$ is the subset of $ {\rm Hom}( K^{\times} \backslash \mathbb A_K^\times , \mathbb Z/{\ell^n})$ consisting of homomorphisms trivial on $\mathcal{O}_{K_v} ^\times$ for all finite places $v$ of $K$. \end{lemma} \begin{proof} $H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n}) = {\rm Hom}( \left( \pi_1^{et} (\textrm{Spec\,} \mathcal{O}_K)\right)^{\mathrm{ab}} , \mathbb Z/{\ell^n})$ is the subset of ${\rm Hom} (\left( \textrm{Gal}(K)\right)^{\mathrm{ab}}, \mathbb Z/{\ell^n})$ consisting of homomorphisms trivial on the kernel of the natural map $\left( \textrm{Gal}(K)\right)^{\mathrm{ab}} \to \left( \pi_1^{et} (\textrm{Spec\,} \mathcal{O}_K)\right)^{\mathrm{ab}}$. This is the natural map from the Galois group of the maximal abelian extension to that of the maximal abelian unramified extension, which in the language of class field theory is precisely the profinite completion of the map $K^{\times} \backslash \mathbb A_K^\times \to K^{\times} \backslash \mathbb A_K^\times/\prod_v \mathcal{O}_{K_v}^\times$. Hence the elements trivial on the kernel of this map are precisely the elements trivial on $ \mathcal{O}_{K_v}^\times$ for all finite $v$. (The profinite completion may be ignored because we are working with finite order characters of these groups.) \end{proof} \begin{lemma}\label{kummer-torsor-description} The image of $H^1(\textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^n})$ inside $H^1(K, \mu_{\ell^n})$ is the subset of $K^\times / (K^\times)^{\ell^n}$ consisting of elements whose valuation at each finite place is a multiple of ${\ell^n}$. \end{lemma} \begin{proof} A $\mu_{\ell^n}$-torsor over $K$ necessarily extends to a $\mu_{\ell^n}$-torsor over an open subset of $\textrm{Spec\,} \mathcal{O}_K$.
To check that it extends over all of $\textrm{Spec\,} \mathcal O_K$, it is necessary and sufficient, by Beauville-Laszlo, to check that it extends to each complete local ring. To do this, we compare the Kummer sequences for $K, K_v,$ and $\textrm{Spec\,} \mathcal{O}_{K_v}$. \[ \begin{tikzcd} K^\times \arrow[r,"{\ell^n}"] \arrow[d] & K^\times \arrow[r] \arrow[d] & H^1(K,\mu_{\ell^n}) \arrow[r]\arrow[d] & 0 \\ K_v^\times \arrow[r,"{\ell^n}"] & K_v^\times \arrow[r] & H^1(K_v,\mu_{\ell^n}) \arrow[r] & 0 \\ \mathcal{O}_{K_v}^\times \arrow[r,"{\ell^n}"]\arrow[u] & \mathcal{O}_{K_v}^\times \arrow[r]\arrow[u] & H^1 (\textrm{Spec\,} \mathcal{O}_{K_v},\mu_{\ell^n}) \arrow[r]\arrow[u] & 0 \\ \end{tikzcd} \] Hence an element of $H^1(K, \mu_{\ell^n})$, when restricted to $H^1(K_v,\mu_{\ell^n})$, lies in the image of $H^1(\textrm{Spec\,} \mathcal{O}_{K_v},\mu_{\ell^n})$ if and only if the corresponding element of $K^\times / (K^\times)^{\ell^n}$, when restricted to $K_v^\times / (K_v^\times)^{\ell^n}$, lies inside the image of $ \mathcal{O}_{K_v}^\times / ( \mathcal{O}_{K_v}^\times)^{\ell^n}$. The image of $ \mathcal{O}_{K_v}^\times / ( \mathcal{O}_{K_v}^\times)^{\ell^n}$ in $K_v^\times / (K_v^\times)^{\ell^n}$ is $ \mathcal{O}_{K_v}^\times (K_v^\times)^{\ell^n}/(K_v^\times)^{\ell^n}$, which consists precisely of the classes of elements whose $v$-adic valuation is a multiple of ${\ell^n}$. \end{proof} \begin{lemma}\label{kummer-map-description} The natural map $H^1(\textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^n}) \to H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb G_m)[{\ell^n}] = Cl(K)[{\ell^n}]$ coming from the Kummer exact sequence sends an element $f\in K^\times/(K^\times)^{\ell^n}$ whose valuation is a multiple of ${\ell^n}$ at all finite places to $\prod_{\mathfrak p \in \textrm{Spec\,} \mathcal{O}_K} \mathfrak p^{ v_{\mathfrak p}(f)/{\ell^n}}$.
\end{lemma} Note that this is well-defined as an ideal class: multiplying $f$ by the ${\ell^n}$th power of any element multiplies $\prod_{\mathfrak p \in \textrm{Spec\,} \mathcal{O}_K} \mathfrak p^{ v_{\mathfrak p}(f)/{\ell^n}}$ by that element's principal ideal, and raising $\prod_{\mathfrak p \in \textrm{Spec\,} \mathcal{O}_K} \mathfrak p^{ v_{\mathfrak p}(f)/{\ell^n}}$ to the ${\ell^n}$th power produces the principal ideal generated by $f$. \begin{proof} By Proposition \ref{torsor-definition}, given a $\mu_{\ell^n}$-torsor, the associated $\mathbb G_m$-torsor can be defined as the inverse of the torsor of $\mathbb G_m$-valued functions on the torsor that transform by multiplication by $\mu_{\ell^n}$ under the action of $\mu_{\ell^n}$. In particular any such meromorphic function gives us a map from the torsor to $\mathbb G_m$, and hence lets us write it as a fractional ideal. For the torsor of ${\ell^n}$th roots of $f \in K^\times$, the function $\sqrt[{\ell^n}]{f}$ is such a function. At any prime $\mathfrak p$, the order to which $\mathfrak p$ appears in this fractional ideal is the highest power of $\mathfrak p$ that divides elements (locally) in the image of this function. Because all elements in the image are multiples of $\sqrt[{\ell^n}]{f}$ by local units, the highest power of $\mathfrak p$ that divides them is $v_{\mathfrak p}(f) /{\ell^n}$.\end{proof} Putting it all together, we get the following description of the map $\psi_K : \mathrm{Cl}(K)^\vee [\ell^n] \to \mathrm{Cl} (K)[\ell^n]$, previously defined cohomologically. \begin{prop} \label{hilberysymboldescription} There is a natural identification \[ \mathrm{Cl}(K)^\vee [{\ell^n}] = {\rm Hom} \left( K^\times \backslash \mathbb A_K^\times / \prod_v \mathcal{O}_{K_v}^\times ,\mathbb Z/{\ell^n} \right).
\] Any such homomorphism on the adeles can be written as $a \mapsto \sum_{v}\langle a,b\rangle_{{\ell^n},v}$ for some $b \in K^\times$ such that: \begin{enumerate} \item The valuation of $b$ at each finite place is a multiple of ${\ell^n}$, \item For every place $v$ the image $b_v$ of $b$ in $K_v^\times$ pairs trivially with all of $\mathcal{O}_{K_v}^\times$. \end{enumerate} The element $b$ is unique up to multiplication by elements of $(K^\times)^{\ell^n}$. Moreover, the ideal class $\prod_{\mathfrak p \in \textrm{Spec\,} \mathcal{O}_K} \mathfrak p ^{v_{\mathfrak p}(b)/{\ell^n}}$ is the image of the original element of $\mathrm{Cl}(K)^\vee [{\ell^n}] $ under the map $\psi_K: \mathrm{Cl}(K)^\vee [{\ell^n}] \to \mathrm{Cl}(K)[{\ell^n}]$ defined in Definition \ref{psi-nt-defi}. \end{prop} \begin{proof} This follows by combining all the lemmas in this subsection. The description of $ \mathrm{Cl}(K)^\vee [{\ell^n}] $ is Lemma \ref{adelic-dual-description}. The description of elements in terms of Hilbert symbols is Lemma \ref{class-kummer-isomorphism}. The fact that the valuation of $b$ at each finite place is a multiple of ${\ell^n}$ follows from the commutativity of the diagram combined with Lemma \ref{kummer-torsor-description}. The description of $\psi_K$ follows from the commutativity of the diagram and Lemma \ref{kummer-map-description}.
\end{proof} \section{Compatibility between $\psi$ and $\omega$ in the number field case}\label{s-internal-consistency} The goal of this section is to show that $(\mathrm{Cl} (L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ is always contained in the support of the measure $Q^t \mu,$ defined in Definition \ref{Qtmu} and conjectured in Conjecture \ref{ntmain-2} to govern the distribution of the triples $(\mathrm{Cl} (L/K)_{\ell}, \omega_{L/K}, \psi_{L/K}).$ Because $Q^t \mu$ is a measure on the set $\mathcal C_{\ell,n}$ of isomorphism classes of BEGs, we first, by a series of lemmas, verify the compatibility condition \eqref{psiomegacomp}, which shows that $(\mathrm{Cl} (L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ is a BEG. We next show that $\operatorname{ker} \psi_{L/K}$ has rank $t= \frac{ [K:\mathbb Q]}{2}$, which is sufficient, by the measure calculation in the next section, to imply that it lies in the support. \subsection{Equivalence between Artin-Verdier and Class Field Theory} We begin by establishing the compatibility between two separate pairings on $H^1$. As $\omega_{L/K} $ is defined using Artin-Verdier duality, this will ultimately allow us to relate $\omega_{L/K}$ and $\psi_{L/K}$. As $\omega$ is defined as a series of pairings indexed by a natural number $m$, we will consider throughout a natural number $m$, not necessarily equal to $n$, and the $\ell^m$th roots of unity $\mu_{\ell^m}$. \begin{lemma}\label{lem-Brauer-invariant} Let $\mathfrak p$ be a prime of $\mathcal O_K$, $K_{\mathfrak p}$ the corresponding local field, $\pi$ a uniformizer, and $\kappa(\pi) \in H^1 ( K_{\mathfrak p} , \mu_{\ell^m})$ the image of $\pi$ under the connecting map from the Kummer sequence. Let $\alpha \in H^1 ( \mathcal O_{K_{\mathfrak p} } , \mathbb Z/\ell^m)$ be a torsor which, viewed as a map from the fundamental group of $\mathcal O_{K_{\mathfrak p}}$ to $\mathbb Z/\ell^m$, sends $\operatorname{Frob}_{\mathfrak p}$ to $k \in \mathbb Z/\ell^m$.
Regard $\kappa(\pi) \cup \alpha \in H^2 (K_{\mathfrak p}, \mu_{\ell^m})$ as an element of the Brauer group. Then we have the formula for the invariant of the Brauer class \[ \operatorname{inv} ( \kappa(\pi) \cup \alpha ) = -\frac{k}{\ell^m} .\] \end{lemma} \begin{proof} It suffices to check this in the case $k=1$, as the torsor with $k=1$ generates the group of torsors. This we now do by an explicit calculation with Brauer groups. Let $n=\ell^m$. We let $\phi$ be the 2-cocycle $\kappa(\pi)\cup\alpha$. Letting $\pi_0^n=\pi$ and computing explicitly, we see that $$\phi(\sigma,\tau) = \left(\frac{\sigma(\pi_0)}{\pi_0}\right)^{n_\tau}$$ where $\tau$ acts on the residue field by the $n_\tau$th power of Frobenius. Let $K_{n}$ be the unramified extension of $K$ of degree $n,$ and $L=K_n(\pi_0)$. To an element $g\in G=\textrm{Gal}(L/K)$ we assign $(\mu_g,n_g)$ such that $g\mid_{K_n}=F^{n_g}$ and $\frac{g\pi_0}{\pi_0} = \mu_g$. This identifies $G$ with the semi-direct product of $\mu_n$ and $\mathbb{Z}/n\mathbb{Z}$. Now we may rewrite our cocycle $\phi$ as $\phi(g,h):=\mu_g^{n_h}$. By the standard dictionary between $H^2(K,\mu_n)$ and $n$-torsion in the Brauer group $\mathrm{Br}(K)$ \cite[IV, Cor 3.15]{MilneClassFieldTheory}, this gives rise to the central simple algebra $A_\phi$ given by $$A_\phi:=\displaystyle\bigoplus_{g\in G} Le_g$$ with multiplication defined by \begin{align*} e_g x &= g(x)e_g \quad \textrm{for } x \in L, \\ e_ge_h &= \phi(g,h)e_{gh}. \end{align*} We'll exhibit a simple subalgebra $B$ of $A_\phi$ satisfying: \begin{itemize} \item $Z_{A_\phi}(B)$ is isomorphic to a matrix algebra over $K,$ say $M_d(K).$ \item $\mathrm{inv}_K(B) = -\frac{1}{n}.$ \end{itemize} By the Centralizer Theorem, $A_\phi \cong B\otimes_K Z_{A_\phi}(B).$ So by the above two items, it follows that \begin{align*} \mathrm{inv}(A_\phi) &= \mathrm{inv}_K(B) + \mathrm{inv}_K(Z_{A_\phi}(B)) \\ &= -\frac{1}{n} + \mathrm{inv}_K( M_d(K) ) \\ &= - \frac{1}{n}.
\end{align*} Because $\mu_n \subset K_n,$ we can diagonalize the ($K$-linear) action of $F$ on $K_n$: $K_n = \bigoplus_{\mu \in \mu_n} K_{\mu},$ where $K_{\mu} = \{\alpha \in K_n: F(\alpha) = \mu \cdot \alpha\}.$ Let $C :=\displaystyle\sum_\mu K_\mu e_{\mu,0}$. Note that $K_n \cong C$ via the isomorphism $\sum_{\mu \in \mu_n} k_{\mu} \mapsto \sum_{\mu \in \mu_n} k_{\mu} e_{\mu,0}$; via this isomorphism, we see that Frobenius on $C$ equals $k e_{\mu,0} \mapsto F(k) e_{F(\mu),0}$ for $k \in K_n, \mu \in \mu_n.$ Define $B := C[\pi_0^{-1}e_{1,1}].$ \newline We claim that conjugation by $\pi_0^{-1} e_{1,1}$ induces Frobenius on $C.$ Indeed, for $k \in K_n$, we have \begin{align}\label{comm} (\pi_0^{-1}e_{1,1})k e_{\mu,0}(\pi_0^{-1} e_{1,1})^{-1} &= \pi_0^{-1}e_{1,1}k e_{\mu,0}\pi_0e_{1,-1} \nonumber \\ &=\pi_0^{-1}F(k)e_{1,1}\mu\pi_0 e_{\mu,0}e_{1,-1} \nonumber \\ &=F(k)F(\mu) e_{1,1}e_{\mu,0}e_{1,-1} \nonumber \\ &=F(k)F(\mu) e_{F(\mu),1}e_{1,-1} \nonumber \\ &=F(k)F(\mu) F(\mu)^{-1} e_{F(\mu),0} \nonumber \\ &= F(k)e_{F(\mu),0}. \end{align} It follows from the above that $C$ is its own centralizer in $B$. Indeed, let $c\in C$ be a generator over $K$. Then for a general element $\sum_ {k=0}^{n-1} c_k (\pi_0^{-1} e_{1,1})^k\in B$ commuting with $c$, we see that $$c\sum c_k (\pi_0^{-1} e_{1,1})^k = \sum c_kF^k(c) (\pi_0^{-1} e_{1,1})^k$$ from which it follows that $c_k=0$ for $k\neq 0$. Also, considering the centralizer of $\pi_0^{-1}e_{1,1},$ it follows from \eqref{comm} that the center of $B$ equals $K$. We next verify that $B$ is simple. Let $J$ be a non-zero 2-sided ideal. Then $J$ is a vector space over $C$ under left multiplication. Conjugation by $C^\times$ breaks $B$ up into the $1$-dimensional eigenspaces over $C$ with distinct characters, namely $C(\pi_0^{-1} e_{1,1})^k$ has the character $c\mapsto \frac{c}{F^k(c)}$. Because $J$ is invariant under $C^\times$ conjugation, it must contain at least one of these eigenspaces.
Each of these eigenspaces contains a unit, and therefore $J=B$. From the above computation and since the valuation of $\pi_0^{-1}e_{1,1}$ is $-\frac{1}{n}$, it follows that $\mathrm{inv}_K(B)=-\frac{1}{n}$ (see \cite[IV.4]{MilneClassFieldTheory}). \medskip It remains to show that $Z_{A_\phi}(B)$ is isomorphic to a matrix algebra over $K.$ To do this, let $R=K_n[e_{\mu,0}, \mu \in \mu_n].$ The action of Frobenius on $R$ is given by Frobenius on $K_n$ and $F(e_{\mu,0}):=e_{F(\mu),0}$. Note that the fixed algebra $R^F$ is contained in $Z_{A_\phi}(B)$ by equation \eqref{comm}. Now we have an isomorphism $\phi:R\rightarrow K_n^n$ sending $y=\sum_\mu a_\mu e_{\mu,0}$ to $\phi(y) = (\sum_\mu a_\mu \mu^k)_{k = 0,\ldots,n-1}$, which is $F$-equivariant for the component-wise action of $F$ on $K_n^n$. Thus we see that $R^F\cong K^n$. It follows that $Z_{A_\phi}(B)$ is a central simple algebra of dimension $n^2$ containing $n$ mutually orthogonal idempotents, and thus $Z_{A_\phi}(B)\cong M_n(K)$, completing the proof.
\end{proof} \begin{lem}\label{lem-giant-diagram} For any $\alpha \in H^1 (\mathcal O_K, \mathbb Z/\ell^m)$, the following diagram commutes: \[ \begin{tikzcd} H^0(K_{\mathfrak p}, \mathbb G_m) \arrow[r] \arrow[d, "\kappa"]& H^{1}_c(U, \mathbb G_m) \arrow[r] \arrow[d, "\kappa"]& H^1_c( \mathcal O_K, \mathbb G_m) \arrow[r] \arrow[d, "\kappa"]& H^1 ( \mathcal O_K, \mathbb G_m)\arrow[d, "\kappa"] \\ H^1(K_{\mathfrak p}, \mu_{\ell^m} ) \arrow[r]\arrow[d,"\cup \alpha"] & H^{2}_c(U, \mu_{\ell^m} ) \arrow[r] \arrow[d,"\cup \alpha"]& H^2_c( \mathcal O_K, \mu_{\ell^m} ) \arrow[r] \arrow[d,"\cup \alpha"]& H^2 ( \mathcal O_K, \mu_{\ell^m} )\arrow[d,"\cup \alpha"] \\ H^2(K_{\mathfrak p}, \mu_{\ell^m} ) \arrow[r] & H^{3}_c(U, \mu_{\ell^m} ) \arrow[r] & H^3_c( \mathcal O_K, \mu_{\ell^m} ) \arrow[r] & H^3 ( \mathcal O_K, \mu_{\ell^m} ) \\ \end{tikzcd}\] where the maps $\kappa$ arise from the Kummer sequence, the horizontal arrows in the left and right columns arise from the exact sequence of compactly supported cohomology \cite[III, Proposition 0.4(a)]{MilneArithmeticDuality}, and the horizontal arrows in the middle column arise from \cite[III, Proposition 0.4(c)]{MilneArithmeticDuality}. \end{lem} \begin{proof} Let $U = \operatorname{Spec} \mathcal O_K - \{ \mathfrak p\}$. Milne defines the compactly supported cohomology groups of a sheaf $\mathcal F$ on $U$ as the shifted mapping cone of the natural map of complexes \[ \Gamma ( U, I^* ( \mathcal F)) \to \Gamma (K_\mathfrak p, I^*(\mathcal F) ) \times \prod_{v |\infty} \Gamma ( K_v, I^* (\mathcal F) )\] where $I^*( \mathcal F)$ is an injective resolution of $\mathcal F$ on the flat site of $U$. Note that the restriction of $I^*(\mathcal F)$ to $\textrm{Spec\,} K_v$ is an injective resolution of the restriction of $\mathcal F$ to $K_{v}$.
The compactly supported cohomology groups of $\mathcal F$ on $\mathcal O_K$ are defined similarly, as the shifted mapping cone of \[ \Gamma ( U, I^* ( \mathcal F)) \to \prod_{v |\infty} \Gamma ( K_v, I^* (\mathcal F) ).\] This induces natural maps \[H^i ( K_{\mathfrak p}, \mathcal F) \to H^{i+1}_c (U, \mathcal F) \textrm{ and } H^{i+1}_c ( \mathcal O_K, \mathcal F) \to H^{i+1} (\mathcal O_K, \mathcal F)\] arising directly from the construction of $H^*_c$ as a mapping cone. These calculate the left and right horizontal arrows of the diagram. The middle arrow is constructed by first mapping $H^{i}_c (U, \mathcal F) $ to the mapping cone \[ \Gamma ( U, I^i ( \mathcal F)) \to \Gamma_{\mathfrak p} (K_\mathfrak p, I^{i+1} (\mathcal F) ) \times \prod_{v |\infty} \Gamma ( K_v, I^i (\mathcal F) )\] and then identifying this mapping cone as $H^i_c( \mathcal O_K, \mathcal F)$. The first vertical arrow, the Kummer sequence, involves choosing a triple of injective resolutions of $\mu_{\ell^m}, \mathbb G_m,$ and $\mathbb G_m$ that themselves form a short exact sequence \cite[III, Proposition 0.4(b)]{MilneArithmeticDuality}. The commutativity of the squares can be checked straightforwardly on cochains, because the maps used to define the connecting homomorphism (inverse image along a map of sheaves, the differential, and inverse image along another map of sheaves) commute with the various pullbacks of sections to different spaces used to define the horizontal maps. For the second vertical arrow, the cup product, after choosing an injective resolution of $\mu_{\ell^m}$, we choose a complex of flat sheaves, isomorphic to $\mathbb Z/\ell^m$, where our chosen class $\alpha$ appears as a cocycle. This can be done by choosing a finite \'{e}tale covering where $\alpha$ splits and taking the Cech complex of that covering, or more simply by choosing an extension $\mathbb Z/\ell^m \to \mathcal F \to \mathbb Z/\ell^m$ representing $\alpha$ and using the complex $\mathcal F \to \mathbb Z/\ell^m$.
We then choose a further injective resolution of $\mu_{\ell^m}$ that resolves the tensor product of these two complexes. The cup product is then expressed as multiplication of cochains. Because multiplication of cochains commutes with pullback, it commutes with the horizontal maps. Hence all the squares are commutative. \end{proof} \begin{prop}\label{prop-cft-av} The following two pairings $H^1(\mathcal O_K, \mathbb G_m) \times H^1(\mathcal O_K, \mathbb Z/\ell^m) \to \mathbb Z / \ell^m$ are equal: \begin{enumerate} \item Identify $H^1(\mathcal O_K, \mathbb Z/\ell^m)$ with $\mathrm{Hom}(\pi_1( \textrm{Spec\,} \mathcal{O}_K)^{\mathrm{ab}}, \mathbb{Z} / \ell^m)$ and then with $\mathrm{Hom}(\mathrm{Cl}(K), \mathbb{Z} / \ell^m)$ (the latter identification by class field theory). Identify $H^1(\mathcal{O}_K, \mathbb{G}_m)$ with $\mathrm{Cl}(K).$ Then pair $\mathrm{Hom}(\mathrm{Cl}(K), \mathbb{Z} / \ell^m)$ with $\mathrm{Cl}(K)$ by evaluation. \item Map $H^1(\mathcal O_K, \mathbb G_m)$ to $H^2(\mathcal O_K, \mu_{\ell^m})$ by the connecting homomorphism from the Kummer sequence. Then take the cup product of $H^2(\mathcal O_K, \mu_{\ell^m})$ with $H^1(\mathcal{O}_K, \mathbb{Z} / \ell^m)$, which lands in $H^3(\mathcal{O}_K, \mu_{\ell^m}).$ Then apply the Artin-Verdier duality trace map $H^3(\mathcal{O}_K, \mu_{\ell^m}) \rightarrow \mathbb{Z} / \ell^m.$ \end{enumerate} \end{prop} \begin{proof} It suffices to check that, for all sufficiently large primes $\mathfrak p$ and all torsors $\alpha \in H^1(\mathcal O_K, \mathbb Z/\ell^m)$, the images of the class $[\mathfrak{p}]$ of the ideal sheaf $\mathfrak p$ under the pairings with $\alpha$ defined by (1) and (2) are equal; this suffices because all elements of the class group arise from infinitely many primes. By Artin reciprocity, $[\mathfrak{p}]$ corresponds to $\mathrm{Frob}_{\mathfrak{p}}$ under class field theory.
So the pairing of $\alpha$ with $[\mathfrak{p}]$ defined by (1) is simply the action of ${\rm Frob}_{\mathfrak p}$ on the $\overline{K}$-points of the torsor $\alpha.$ In particular, it depends only on the restriction of $\alpha$ to $K_{\mathfrak p}.$ We will now calculate the pairing of $\alpha$ with $[\mathfrak{p}]$ defined by (2). By definition, this is the image of $ (\kappa[\mathfrak p] \cup \alpha) \in H^3(\mathcal O_K, \mu_{\ell^m})$ under the identification $H^3(\mathcal O_K, \mu_{\ell^m}) \cong \mathbb Z/\ell^m$ from Artin-Verdier duality. Let $\pi$ be a uniformizer of $K_{\mathfrak p}$. Let us first check that $[\mathfrak p] \in H^1 (\mathcal O_K, \mathbb G_m)$ is the image of $\pi$ under the three arrows in the top row of the commutative diagram of Lemma \ref{lem-giant-diagram}. We can view $H^{1}_c(U, \mathbb G_m)$ as consisting of line bundles on $U$ with trivializations on the punctured formal neighborhoods of $\mathfrak p$ and the infinite places. Under this identification, the image of $\pi$ in $H^1_c(U, \mathbb G_m)$ is the trivial line bundle on $U$ with the identity trivialization at all infinite places and with the trivialization at the place $\mathfrak p$ twisted by $\pi$. Given a line bundle $L$ on $U$ with a trivialization on the punctured formal neighborhood of $\mathfrak p$, there is a unique way of extending it to a line bundle on $\textrm{Spec\,} \mathcal O_K$ with a trivialization on the formal neighborhood of $\mathfrak p$. This is the line bundle whose sections are sections of $L$ on $U$ whose image under the trivialization does not have a pole at $\mathfrak p$. The map $H^1_c(U, \mathbb G_m) \to H^1_c(\mathcal O_K , \mathbb G_m)$ sends a line bundle with trivialization to the extended line bundle.
Applying this to our chosen line bundle, the sections of $\mathcal O_U$ whose image under the trivialization (division by $\pi$) does not have a pole at $\mathfrak p$ are exactly the sections in the ideal $\mathfrak p$, so the extended line bundle is the ideal sheaf $\mathfrak p$. Finally, the natural map $H^1_c(\mathcal O_K , \mathbb G_m) \to H^1(\mathcal O_K , \mathbb G_m)$ forgets the trivialization at the infinite places. Thus, $\pi$ is sent to the ideal class $[\mathfrak p]$. The groups on the bottom row of the diagram are all isomorphic to $\mathbb Z/\ell^m$ in a standard way. For instance this follows from \cite[Proposition 2.6 on page 169]{MilneArithmeticDuality}, which shows that $H^2( K_\mathfrak p, \mathbb G_m) = H^3_c(U, \mathbb G_m) = H^3_c(\mathcal O_K, \mathbb G_m) = H^3(\mathcal O_K, \mathbb G_m)= \mathbb Q/\mathbb Z$ and $H^1( K_\mathfrak p, \mathbb G_m) = H^2_c(U, \mathbb G_m) = H^2_c(\mathcal O_K, \mathbb G_m) = H^2(\mathcal O_K, \mathbb G_m)= 0$. The standard isomorphism on the bottom-left is the invariant map of the Brauer group. The standard map on the bottom-right is used to define the Artin-Verdier pairing. Hence we have \begin{align*} ( \alpha, \kappa [\mathfrak{p}] )_{\mathrm{AV}} &= \operatorname{inv} ( \kappa( \pi ) \cup \alpha) \\ &= \alpha ( \operatorname{Frob}_{\mathfrak p}) \end{align*} by Lemma \ref{lem-Brauer-invariant}. This matches the pairing of $\alpha$ with $[\mathfrak p]$ under the pairing (1), as desired.
\end{proof} \subsection{Checking compatibility between $\psi$ and $\omega$} In this section we proceed to use Proposition \ref{prop-cft-av} to establish the compatibility relation \eqref{psiomegacomp} between $\psi$ and $\omega.$ To do this, we find it convenient to study the connecting homomorphisms associated to certain explicit group scheme extensions between $\mathbb G_m$ and $\mu_{\ell^N}$ for various $N.$ \begin{definition} Let $\langle \alpha, \beta \rangle: \mathrm{Cl}(K)^\vee [\ell^n] \times \mathrm{Cl}(K)^\vee \to \mathbb Z/\ell^n$ be the pairing defined by applying the class field theory isomorphism $\mathrm{Cl}(K)^\vee [\ell^n] \cong H^1( \mathcal O_K , \mathbb Z/\ell^n)$, the map $\mathbb Z/\ell^n \to \mu_{\ell^n}$ defined by our fixed generator of $\mu_{\ell^n},$ and the Kummer map to map $\alpha$ to $\mathrm{Cl}(K)$ and then contracting with $\beta$. \end{definition} Throughout this subsection we fix $m=n+r$. \begin{definition} Let $G$ be the group scheme that sits in the middle of the exact sequence $0 \to \mu_{\ell^{n+r} } \to G\to \mathbb Z/\ell^{n+r} \rightarrow 0$ obtained by pulling back the Kummer exact sequence $0 \to \mu_{\ell^{n+r} } \to \mathbb G_m \to \mathbb G_m \rightarrow 0$ along the series of maps $\mathbb Z/\ell^{n+r} \to \mathbb Z/\ell^n\to \mu_{\ell^n} \to \mathbb G_m,$ where the second map is defined via our fixed choice of generator $\zeta$ for $\mu_{\ell^n}$.\end{definition} \begin{lemma} The group scheme $G$ consists of pairs $(a,x)$ with $a \in \mathbb Z/\ell^{n+r}$, $x \in \mu_{\ell^{2n+r}}$ such that $x^{\ell^{n+r}} = \zeta^a$, for $\zeta$ our chosen generator of $\mu_{\ell^n}$. \end{lemma} \begin{proof} By definition, $G$ is the fiber product of $\mathbb G_m$ and $\mathbb Z/\ell^{n+r}$ over $\mathbb G_m$ under the $\ell^{n+r}$ power and $a \mapsto \zeta^a$ maps respectively, so it consists of pairs $x \in \mathbb G_m, a\in \mathbb Z/\ell^{n+r}$ with $x^{\ell^{n+r}} = \zeta^a$, which because $\zeta^{\ell^n}=1$ forces
$x^{\ell^{2n+r}}=1$ so $x \in \mu_{\ell^{2n+r}}$. \end{proof} \begin{definition} Let $B: H^i ( \mathcal O_K, \mathbb Z/\ell^{n+r}) \to H^{i+1} (\mathcal O_K, \mu_{\ell^{n+r}})$ be the connecting homomorphism associated to this exact sequence $0 \to \mu_{\ell^{n+r} } \to G \to \mathbb Z/\ell^{n+r} \rightarrow 0$. \end{definition} \begin{lemma}\label{langle-rangle-B} For $\alpha, \beta \in \mathrm{Cl}(K)^\vee [\ell^{n+r}]$, viewed as elements of $H^1 (\mathcal O_K, \mathbb Z/\ell^{n+r}),$ we have \[\langle \ell^r \alpha, \beta \rangle = (B\alpha,\beta)_{AV} =\operatorname{tr} ( B\alpha \cup \beta) \] where $\operatorname{tr} :H^3 (\mathcal O_K, \mu_{\ell^{n+r}} ) \to \mathbb Q/\mathbb Z$ is the Artin-Verdier trace. \end{lemma} \begin{proof} By definition, $\langle \ell^r \alpha, \beta \rangle$ is obtained by taking $\ell^r\alpha$, viewing it as an element of $H^1( \mathcal O_K , \mathbb Z/\ell^n)$, mapping to $H^1(\mathcal O_K, \mu_{\ell^n})$ (via our fixed choice of generator $\zeta$ for $\mu_{\ell^n}$) and then to $H^1(\mathcal O_K, \mathbb G_m)$, and then contracting with $\beta.$ By Proposition \ref{prop-cft-av}, this is equivalent to taking $\ell^r \alpha$, mapping to $H^1(\mathcal O_K, \mathbb G_m)$ along that series of maps, applying the connecting homomorphism from the Kummer sequence to map into $H^2(\mathcal O_K, \mu_{\ell^m})$, and then taking the Artin-Verdier pairing of the result with $\beta$. Viewing $\ell^r \alpha$ as an element of $H^1( \mathcal O_K , \mathbb Z/\ell^n)$ is equivalent to viewing $\alpha$ as an element of $H^1( \mathcal O_K , \mathbb Z/\ell^{n+r} )$ and mapping to $H^1( \mathcal O_K , \mathbb Z/\ell^n)$ by reduction mod $\ell^n$. So all told this is equivalent to sending $\alpha$ from $H^1( \mathcal O_K , \mathbb Z/\ell^{n+r})$ to $H^1(\mathcal O_K, \mathbb G_m)$ by the composed map $\mathbb Z/\ell^{n+r} \to \mathbb G_m$, which is $a\mapsto \zeta^a$, applying the Kummer exact sequence connecting map, and then Artin-Verdier duality.
Because $G$ is the pullback of the Kummer exact sequence along that series of maps, $B$ is the composition of that series of maps with the connecting map of the Kummer exact sequence. Finally, the relation between Artin-Verdier duality and the Artin-Verdier trace is simply the definition of the Artin-Verdier duality map. \end{proof} \begin{definition} Let $G'$ be the group scheme consisting of pairs $a \in \mathbb Z/\ell^{2n+r}, x \in \mu_{\ell^{2n+r}}$ such that $x^{\ell^{n+r}} = \zeta^{2a}$, modulo the subscheme of pairs $(\ell ^{n+r} t, \zeta^{t})$. \end{definition} Then $G'$ has a map to $\mathbb Z/\ell^{n+r}$ given by taking $a$ modulo $\ell^{n+r}$, whose kernel is isomorphic to $\mu_{\ell^{n+r}}$ under the map $(a,x) \mapsto x \zeta^{- a / \ell^{n+r}}$. \begin{definition} Let $B': H^i ( \mathcal O_K, \mathbb Z/\ell^{n+r}) \to H^{i+1} (\mathcal O_K, \mu_{\ell^{n+r}})$ be the connecting homomorphism associated with the exact sequence $0\to \mu_{\ell^{n+r}} \to G' \to \mathbb Z/\ell^{n+r} \to 0 $. \end{definition} \begin{definition} Let $f: G \times G \to G'$ be the bilinear map of group schemes that sends $(a_1, x_1) \times (a_2, x_2)$ to $(\tilde{a}_1\tilde{a}_2, x_1^{\tilde{a}_2} x_2^{\tilde{a}_1})$, where $\tilde{a}_1$ and $\tilde{a}_2$ are lifts of $a_1$ and $a_2$ respectively from $\mathbb Z/\ell^{n+r}$ to $\mathbb Z/\ell^{2n+r}$. \end{definition} \begin{lemma} The map $f$ is well-defined. \end{lemma} \begin{proof} Adding $\ell^{n+r}$, say to $a_1$, has the effect of adding $\ell^{n+r} \tilde{a}_2$ to the first coordinate and multiplying the second coordinate by $x_2^{ \ell^{n+r} } = \zeta^{a_2}$. \end{proof} \begin{lemma} \begin{enumerate} \item The map $f$ is compatible with the projections onto $\mathbb Z/\ell^{n+r}$. \item The maps $\mu_{\ell^{n+r} } \times G \to \mu_{\ell^{n+r}} \subset G'$ and $ G\times \mu_{\ell^{n+r} } \to \mu_{\ell^{n+r}} \subset G'$ induced by $f$ are the same as those obtained by projecting from $G$ to $\mathbb Z/\ell ^{n+r}$ and taking the obvious multiplication.
\end{enumerate} \end{lemma} \begin{proof} Both can be checked immediately.\end{proof} \begin{lemma}\label{B-B'} For $\alpha, \beta \in H^1 (\mathcal O_K, \mathbb Z/\ell^{n+r}),$ we have \[ B \alpha \cup \beta - \alpha \cup B\beta = B'(\alpha \cup \beta).\]\end{lemma} \begin{proof} To prove this, we use the facts that there is a natural map from Cech cohomology of a sheaf to ordinary cohomology, compatible with cup products and connecting homomorphisms, and that it is an isomorphism in degree $1$. Thus we can represent $\alpha$ and $\beta$ by Cech cocycles. We can calculate the connecting homomorphism explicitly by, first, lifting those cocycles arbitrarily from $\mathbb Z/\ell^{n+r}$ to $G$ (possibly after refinement), second, applying the Cech differential, and third, recognizing the result as a cocycle for $\mu_{\ell^{n+r}}$. The bilinear map $G \times G \to G'$ induces a cup product where we cup $G$-cochains with $G$-cochains to obtain $G'$-cochains, and in particular for $\tilde{\alpha}$ a lift of $\alpha$ to $G$ and $\tilde{\beta}$ a lift of $\beta$ to $G$, $\tilde{\alpha} \cup \tilde{\beta}$ is a lift of $\alpha \cup \beta$ to $G'$.
Then we have \[ B' (\alpha \cup \beta) = d_{G'} (\tilde{\alpha} \cup \tilde{\beta}) = d_G \tilde{\alpha} \cup \tilde{\beta} - \tilde{\alpha} \cup d_G \tilde{\beta} \] and $d_G \tilde{\alpha}$ is a cochain for $G$ such that $\pi ( d\tilde{\alpha} )= d \pi(\tilde{\alpha})= d\alpha=0$, hence is a cocycle representing a class in $H^2( \mathcal O_K, \mu_{\ell^{n+r}})$, and in fact represents $B\alpha$, so its cup product with $\tilde{\beta}$ is the same as the cup product with $\pi(\tilde{\beta})= \beta$, so \[ d_G \tilde{\alpha} \cup \tilde{\beta} = B \alpha \cup \beta\] and similarly the other term is $\alpha \cup B\beta$.\end{proof} \begin{lemma}\label{langle-rangle-B'} For $\alpha, \beta \in \mathrm{Cl}(K)^\vee [\ell^{n+r}]$, viewed as elements of $H^1 (\mathcal O_K, \mathbb Z/\ell^{n+r}),$ we have \[ \langle \ell^r \alpha,\beta\rangle - \langle \ell^r \beta, \alpha \rangle ={\rm tr} ( B' ( \alpha \cup \beta)).\] \end{lemma} \begin{proof} This follows on combining Lemmas \ref{langle-rangle-B} and \ref{B-B'}, upon remembering that $\alpha \cup B \beta = B\beta\cup \alpha$ because $B \beta$ is in degree $2$, which is even. \end{proof} \begin{definition} Let $G^*$ be the Cartier dual of $G'$. Let $B^*: H^i ( \mathcal O_K, \mathbb Z/\ell^{n+r}) \to H^{i+1} (\mathcal O_K, \mu_{\ell^{n+r}})$ be the connecting homomorphism associated to $G^*$. \end{definition} \begin{lemma}\label{B'-B*} We have \[ {\rm tr}( B' ( \alpha \cup \beta))=- {\rm tr}( \alpha \cup \beta \cup B^* (1)) \] \end{lemma} \begin{proof} Because the trace map factors through $H^3(\mathcal O_K, \mathbb G_m)$, it suffices to show that \[ B' ( \alpha \cup \beta)+ \alpha \cup \beta \cup B^* (1) =0 \in H^3(\mathcal O_K, \mathbb G_m).\] As in the proof of Lemma \ref{B-B'}, we may assume that $\alpha$ and $\beta$ are Cech cocycles, and perform the calculations in cohomology.
We can lift $\alpha \cup \beta \in C^2( \mathcal O_K, \mathbb Z/\ell^{n+r}) $ to a cochain $\tilde{\alpha} \cup \tilde{\beta} \in C^2(\mathcal O_K, G')$ and lift $1\in H^0 (\mathcal O_K, \mathbb Z/\ell^{n+r})$ to a cochain $\tilde{1} \in C^0(\mathcal O_K, G^*)$. By definition, $B' (\alpha \cup \beta)$ is computed by applying the differential to obtain a cochain in $C^3 (\mathcal O_K, G')$, pulling back to $C^3(\mathcal O_K, \mu_{\ell^{n+r}})$, and then mapping to $C^3 (\mathcal O_K, \mathbb G_m)$. This last step is equivalent to cupping with $1$ under the Cartier duality pairing $\mu_{\ell^{ n+r}} \times \mathbb Z/\ell^{n+r} \to \mathbb G_m$. So pulling back to $\mu_{\ell^{n+r}}$ and then mapping to $\mathbb G_m$ is equivalent to cupping with $\tilde{1}$ under the Cartier duality pairing $G' \times G^* \to \mathbb G_m$, which extends it. So \[ B' (\alpha \cup \beta) = d( \tilde{\alpha} \cup \tilde{\beta} ) \cup \tilde{1} .\] Similarly, we have \[(\alpha \cup \beta) \cup B^*(1) = (\tilde{\alpha} \cup \tilde{\beta} ) \cup d \tilde{1} ,\] because $ d \tilde{1}$ is the image of $B^*(1)$ under the map $C^1 ( \mathcal O_K, \mu_{\ell^{n+r}}) \to C^1 (\mathcal O_K, G^*)$ and $\alpha \cup \beta$ is the image of $\tilde{\alpha} \cup \tilde{\beta}$ under the map $C^2( \mathcal O_K, G' ) \to C^2 (\mathcal O_K, \mathbb Z/\ell^{n+r} )$. Thus, \[ B' ( \alpha \cup \beta)+ \alpha \cup \beta \cup B^* (1) = d ( \tilde{\alpha} \cup \tilde{\beta} \cup \tilde{1}) \] which is a coboundary and thus vanishes in cohomology, as desired.
\end{proof} \begin{lemma}\label{langle-rangle-B*} For $\alpha, \beta \in Cl(K)^\vee [\ell^{n+r}]$, viewed as elements of $H^1 (\mathcal O_K, \mathbb Z/\ell^{n+r}),$ we have \[ \langle \ell^r \alpha,\beta\rangle - \langle \ell^r \beta, \alpha \rangle =( B^* 1, \alpha\cup \beta)_{AV} .\]\end{lemma} \begin{proof} By Lemmas \ref{langle-rangle-B'} and \ref{B'-B*}, we have \begin{align*} \langle \ell^r \alpha,\beta\rangle - \langle \ell^r \beta, \alpha \rangle &= {\rm tr} ( B' (\alpha \cup \beta) ) \\ &= {\rm tr} ( B'(\alpha \cup \beta) \cup 1 ) \\ &= {\rm tr}( \alpha \cup \beta \cup B^* 1) \\ &= {\rm tr} ( B^*1 \cup \alpha \cup \beta ) \\ &= (B^*1, \alpha\cup \beta)_{AV}. \end{align*} \end{proof} \begin{lemma}\label{B^*-alpha} We have $B^* 1 = - \alpha_{n+r}$ where $\alpha_{n+r} \in H^1( \mathcal O_K, \mu_{\ell^{n+r}})$ is identified with the torsor of $\ell^{n+r}$th roots of our fixed generator $\zeta$ of $\mu_{\ell^n}$ (defined in \S\ref{number-field-invariants}). \end{lemma} \begin{proof} By definition, $B^*1$ is the torsor defined as the inverse image of $1\in \mathbb Z/\ell^{n+r}$ under the exact sequence $0 \to \mu_{\ell^{n+r}} \to G^* \to \mathbb Z/\ell^{n+r} \to 0$. To calculate this, observe by dualizing the definition of $G'$ that $G^*$ is the group consisting of pairs $(a^* \in \mu_{\ell^{2n+r}}, x^* \in \mathbb Z/\ell^{2n+r})$ such that $\left(a^*\right)^{\ell^{n+r}} \zeta^{ x^*}=1$, modulo the subgroup generated by $(\zeta^2, \ell^{n+r} )$. To calculate the torsor, we look at the set of elements sent to $1 \in \mathbb Z/\ell^{n+r}$ by the projection onto $x^*$ mod $\ell^{n+r}$, which is isomorphic under the map $(a^*, x^*) \to a^* \zeta^{- 2 \frac{x^* -1}{\ell^{n+r}}} $ to the set of $\ell^{n+r}$th roots of $\zeta^{-1}$. On the other hand, $\alpha_{n+r}$ is defined as the torsor of $\ell^{n+r}$th roots of $\zeta$. Since they are inverse torsors, their cohomology classes are negatives of each other. 
\end{proof} \begin{lemma}\label{psi-omega-comp-verify} Let $m=n+r$ and let $\alpha$ and $\beta$ be elements of $Cl(K)^\vee [\ell^m].$ There is an equality \[ \langle \ell^r \alpha,\beta\rangle - \langle \ell^r \beta, \alpha \rangle =2 \cdot \omega_{n+r,K} (\alpha,\beta) \] \end{lemma} \begin{proof} This follows from Lemmas \ref{langle-rangle-B*} and \ref{B^*-alpha} as well as Definition \ref{omega-nt-defi}. \end{proof} \subsection{Non-degeneracy of $\psi_{L/K}$} Finally, we check the non-degeneracy condition for $(\mathrm{Cl}(L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ to be contained in the support of $Q^t \mu$. \begin{lemma}\label{number-field-rank-bound} The $\ell$-rank of the kernel of $\psi_{L/K}$ is at most $t=\frac{[K:\mathbb Q]}{2} $.\end{lemma} \begin{proof} We first focus on $\psi_K$ for a single field $K$. By the definition of $\psi_K$, the kernel of $\psi_K$ consists of those elements of $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/\ell^n\mathbb{Z})$ which map to those elements of $H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n})$ which have trivial image in the class group. We have the commutative diagram \[ \begin{tikzcd} H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n}) \arrow[r]\arrow[d] & H^1(K, \mathbb Z/{\ell^n}) \arrow[d] \\ H^1( \textrm{Spec\,} \mathcal{O}_K, \mu_{\ell^n}) \arrow[r] & H^1(K, \mu_{\ell^n}) \\ \end{tikzcd}\] in which the top arrow is injective because $\textrm{Spec\,} \mathcal{O}_K$ is normal, and the right arrow is an isomorphism, so the map $H^1(\textrm{Spec\,} \mathcal{O}_K, \mathbb Z/{\ell^n}) \to H^1(K, \mu_{\ell^n})$ is injective, hence the map $H^1(\textrm{Spec\,}\mathcal{O}_K,\mathbb{Z}/\ell^n\mathbb{Z}) \to H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n})$ is injective. 
Thus we may identify ${\rm Ker} \psi_K$ with a subgroup of the kernel of the natural map $H^1(\textrm{Spec\,}\mathcal{O}_K,\mu_{\ell^n})\to H^1( \textrm{Spec\,} \mathcal{O}_K, \mathbb G_m)$, which by the Kummer exact sequence is $\mathcal{O}_K^{\times}\otimes\mathbb{Z}/\ell^n\mathbb{Z}$. Since $K$ contains the $\ell^n$th roots of unity, it is totally complex and has unit rank $\frac{ [K:\mathbb Q]}{2}-1$. Thus by Dirichlet's unit theorem, $\mathcal{O}_K^{\times}\otimes\mathbb{Z}/\ell^n\mathbb{Z} \cong (\mathbb{Z}/\ell^n\mathbb{Z})^{\frac{[K:\mathbb{Q}]}{2}}$, where the additional $+1$ beyond the unit rank comes from torsion. Thus, the $\ell$-rank of its subgroup ${\rm Ker}\psi_K$ is at most $\frac{[K:\mathbb{Q}]}{2}$. Now, we return to our setting. Note that since $\ell$ is odd, the order 2 automorphism of $L/K$ gives a canonical splitting $$\textrm{Cl}(L)[\ell^n]\cong \textrm{Cl}(K)[\ell^n]\oplus\textrm{Cl}(L/K)[\ell^n]$$ which is respected by the map $\psi_L$, and such that $\psi_L$ restricts to $\psi_K$. Following the above, and letting $\mathcal{O}_{L/K}^\times\subset\mathcal{O}_L^\times$ denote the kernel of the norm map to $K$, we may identify ${\rm Ker}\psi_{L/K}$ with a subgroup of $\mathcal{O}_{L/K}^\times \otimes \mathbb{Z}/\ell^n\mathbb{Z}$, and conclude that it has $\ell$-rank at most $\frac{[L:\mathbb{Q}]-[K:\mathbb{Q}]}{2}$ which is equal to $t_{L/K}$ since both $L$ and $K$ are totally complex. \end{proof} \begin{prop}\label{class-group-support} The triple $(\mathrm{Cl}(L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ is contained in the support of $Q^t \mu$ where $t = \frac{ [K:\mathbb{Q}]}{2}$. \end{prop} \begin{proof} By Lemma \ref{psi-omega-comp-verify}, $\omega_L$ and $\psi_L$ satisfy \eqref{psiomegacomp}. Thus their projections $\omega_{L/K}$ and $\psi_{L/K}$ satisfy \eqref{psiomegacomp}. Thus by Definition \ref{beg}, $(\mathrm{Cl} (L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ is a BEG. 
Because the product in Theorem \ref{quotient-measure-formula} is manifestly nonzero, it follows from that theorem that the support of $Q^t \mu$ consists of all BEGs where the kernel of $\psi_G$ has rank at most $t$. By Lemma \ref{number-field-rank-bound}, the kernel of $\psi_{L/K}$ has rank at most $t$ and thus $(\mathrm{Cl} (L/K)_{\ell}, \omega_{L/K}, \psi_{L/K})$ is indeed contained in the support. \end{proof} \section{Compatibility of the general definitions of $\psi$ and $\omega$ with Frobenius} \label{s-NT-FF-compatible} Let $C$ be a smooth projective geometrically irreducible curve over a finite field $k$ containing the $\ell^n$th roots of unity. Let $A = \mathrm{Pic}^0(C).$ Because $A$ is a principally polarized abelian variety, the construction in \S3.1 gives $A(k)_\ell$ the structure of a bilinearly enhanced group, which we denote $( \mathrm{Pic}^0(C)(k)_\ell, \omega_C, \psi_C)$. This is the same notation we used in the special case where $C$ is hyperelliptic in \S3.2. The exact sequence $0 \to {\rm Pic}^0(C)(k) \to {\rm Pic}(C) (k) \to \mathbb Z \to 0$ (which is exact by Lang's theorem) induces an isomorphism $\mathrm{zer}: {\rm Pic}^0(C)(k) [\ell^n] \to {\rm Pic}(C) (k)[\ell^n] $ and a surjection $\mathrm{zer}^\vee: {\rm Pic}(C) (k)^\vee [\ell^n] \to {\rm Pic}^0(C)(k)^\vee [\ell^n]$ with kernel $\mathbb Z/\ell^n$. The aim of this section is to prove the identity \[ \mathrm{zer} \circ \psi_C \circ \mathrm{zer}^\vee = \psi_K \] where $K$ is the function field of $C$. \begin{lemma}\label{psi-psi-Weil} Let $B$ be an abelian variety. Let $\tau: \pi_1(B_{\overline{k}} ) \to \mu_{\ell^n}$ be a homomorphism. 
This induces a class $[\tau]$ in $H^1 ( B_{\overline{k}} , \mu_{\ell^n})$ and hence\footnote{$B^\vee[\ell^n] = \mathrm{Pic}^0 (B)[\ell^n] = \mathrm{Pic}(B) [\ell^n] =H^1 ( B_{\overline{k}} , \mu_{\ell^n})$ because the component group of $\mathrm{Pic}(B)$ is torsion-free.} an $\ell^n$-torsion class $[\tau]$ in $H^1(B_{\overline{k}} , \mathbb G_m) [\ell^n] = B^\vee [\ell^n]$. Because $\tau$ is an $\ell^n$-torsion character, we can also view it as a map $ \tau': B[\ell^n] \cong \pi_1(B_{\overline{k}} ) \otimes \mathbb Z/\ell^n \to \mu_{\ell^n}. $ Pairing against $[\tau] \in B^\vee[\ell^n]$ in the Weil pairing $B[\ell^n] \times B^\vee [\ell^n] \to \mu_{\ell^n}$ recovers the homomorphism $\tau': B[\ell^n] \to \mu_{\ell^n}$. \end{lemma} \begin{proof} This follows from the definition of the Weil pairing in \cite[\S20, p.183]{Mu}. To see the equivalence between Mumford's definition and ours, recall how one obtains an $\ell^n$-torsion line bundle from $\tau$. First, one considers the multiplication by $\ell^n$ \'{e}tale cover $m:B\rightarrow B$. Descending the trivial line bundle $\mathcal{O}_B$ amounts to giving isomorphisms $r_g:\mathcal{O}_B\rightarrow \mathcal{O}_B$ for each $g\in B[\ell^n]$, and so one simply takes $r_g=\tau'(g)$. In Mumford's notation, his $\chi$ is our $\tau'$. \end{proof} \begin{lemma}\label{psi-psi-Lang} Let $C$ be a curve over a finite field $k$ and let $A$ be its Jacobian, a principally polarized abelian variety. Viewing $G \subset \mathrm{Cl}(K)_\ell$ as a subset of $\pi_1(C)^{\mathrm{ab}}_\ell$ via class field theory, the natural homomorphism $\pi_1(A_{\overline{k}})_\ell = \pi_1(C_{\overline{k}})^{\mathrm{ab}}_\ell \to \pi_1(C)^{\mathrm{ab}}_\ell= \pi_1(A)^{\mathrm{ab}}_\ell$ factors through $G.$ The map to $G$ is exactly the map $\phi: A[\ell^r] \to G$ of \S3.1 composed with the natural map $\pi_1(A_{\overline{k}})_\ell = T_\ell(A) \to A[\ell^r]$. 
\end{lemma} \begin{proof} Lang gave a beautiful construction of class field theory over function fields using the map $1-F$ and the Jacobian. He did not explain this in detail in his paper, because class field theory over function fields was already known. We explain how class field theory gives this identity. Fix a divisor $D$ of degree 1 on $C$, giving an identification $\mathrm{Cl}(K)_\ell = G \times \mathbb Z_\ell$ and an Abel-Jacobi map $C \to A$. Consider the abelian cover $C'$ of $C$ defined by the fiber product over $A$ of $C$ with the map $(1-F) :A \to A$, which has Galois group $A(k)$. Consider further the base change $C^*$ of $C'$ to $\overline{k}$, which has Galois group $G \times \widehat{\mathbb Z} = \widehat{ \mathrm{Cl}(K)}$. We claim the natural map from $\pi_1^{\mathrm{ab}}(C)$ to the Galois group of this cover is the same as the identification of this Galois group with the profinite completion of $\mathrm{Cl}(K)$ under class field theory. To do this, it suffices to check that every Frobenius element is sent to the same element of $\widehat{\mathrm{Cl}(K)}$ under these two definitions, because the Frobenius elements are dense in the Galois group. Under class field theory, the Frobenius element at a closed point $v$ is sent to the class of the line bundle $\mathcal O(-v)$, or, writing $D$ for our degree $1$ divisor, $( \mathcal O(-v + \deg v \cdot D), \deg v)$ in $G \times \widehat{ \mathbb{Z} }$. On the other hand, we can check how $\operatorname{Frob}_v$ acts on the fiber of $C^*$ over $C$ at $v$. First, we calculate the action on $C'$. Let $x$ be a geometric point of $C$ lying over $v.$ The points in $C'$ lying over $x$ can be expressed as pairs $(x,y)$ with $y \in A(\overline{k})$ such that $(1 - F)(y) = \mathrm{AJ}(x)$ where $\mathrm{AJ}$ is the Abel-Jacobi map. The action of $\operatorname{Frob}_v$ on the fiber of $C'$ at $x$ is given by $F^{\deg v}$. 
We have \begin{align*} F^{\deg v} (y) - y &= \sum_{i=0}^{\deg v-1} F^{i+1}(y) - F^i(y) \\ &= \sum_{i=0}^{\deg v-1} F^{i} ( F(y) -y ) \\ &= - \sum_{i=0}^{\deg v-1} F^{i} (\mathrm{AJ}(x)). \end{align*} Now $\mathrm{AJ}(x)$ is the class of the line bundle $\mathcal O(x - D)$. So $F^i(\mathrm{AJ}(x))$ is the class of $\mathcal O( F^i(x) - D)$. Thus $\sum_{i=0}^{\deg v-1} F^{i} (\mathrm{AJ}(x))$ is the class of $\mathcal O \left( \left(\sum_{i=0}^{\deg v-1} F^i(x) \right) - \deg v \cdot D \right) $. Since $\mathcal O (v) =\mathcal O \left(\sum_{i=0}^{\deg v-1} F^i(x) \right)$, we conclude that $ F^{\deg v}(y) - y $ is the class of $\mathcal O ( -v + \deg v \cdot D )$. In other words, $F^{\deg v}$ acts on this fiber by translation by $\mathcal O ( -v + \deg v \cdot D ) \in G$. Finally, $F^{\deg v}$ acts on $\overline{k}$ by $F^{\deg v}$, which corresponds to the element $\deg v \in \widehat{\mathbb Z}$. So indeed these two homomorphisms are the same on each Frobenius element, and thus equal. Now, to understand the map $\pi_1 (A _{\overline{k}})_\ell = \pi_1(C_{\overline{k}})^{\mathrm{ab}}_\ell \to \pi_1(C)^{\mathrm{ab}}_\ell $, it suffices to see how elements of $\pi_1 (A _{\overline{k}}) $ act on $C^*$. These elements fix $\overline{\mathbb F_q}$, so their action on $C^*$ depends only on their action on $C'$ and factors through $G$. The isomorphism $\pi_1 (A _{\overline{k}})= \pi_1(C_{\overline{k}})^{\mathrm{ab}} $ is defined via the embedding of $C$ into $A$, so their action on $C'$ is equal to their action on the covering $(1-F)$ of $A$ by $A$, which has Galois group $A(k)$. We identify $\pi_1 (A _{\overline{k}})_\ell $ with the inverse limit of $A[\ell^n]$ for natural numbers $n$ via its actions on the coverings of $A$ by $A$ defined by the multiplication by $\ell^n$ map. Choose $m$ such that the $(1-F)$ covering factors through the multiplication by $m$ map. Let $n$ be the $\ell$-adic valuation of $m$ and let $m'=m/\ell^n$. 
There exists a homomorphism $M: A \to A$ such that $(1-F) M = m = M(1-F) $. The image of an $\ell^n$-torsion point $x$ in $A(k)$ is then given by $M( (m')^{-1} x)$. Letting $y$ be any inverse image in $A[\ell^{\infty}]$ of $x$ under $1-F$, we have $m y = M ((1-F) y )= M( x)$ so that $\ell^n y = M ( (m')^{-1} x)$. Thus, $x$ is obtained from $M ( (m')^{-1} x)$ by the snake lemma map of the diagram of \S3.1: we take $M ( (m')^{-1} x)$ in $A[\ell^{\infty}]$, pull back to $A[\ell^{\infty}]$ under $\times \ell^n$ to obtain $y$, pushforward under $1-F$ to obtain $x$, and then recognize it as an element of $A[\ell^r]$. Because $\phi$ is by definition the inverse of this snake lemma map, this shows that the composition of $\phi$ with the projection from $\pi_1(A _{\overline{k}})_\ell$ matches the action of $\pi_1(A _{\overline{k}})$ on $C'$ and thus, by our earlier discussion, agrees with class field theory. \end{proof} \begin{thm} \[ \mathrm{zer} \circ \psi_C \circ \mathrm{zer}^\vee = \psi_K .\] \end{thm} \begin{proof} Fix $\alpha \in H^1 ( C, \mathbb Z/\ell^n),$ which is naturally identified with $\mathrm{Cl}(K)^\vee [\ell^n]$ by class field theory. We wish to show $\psi_K (\alpha) = \psi_C ( \mathrm{zer}^\vee(\alpha))$ as elements of $\mathrm{Cl}(K)[\ell^n]$. Because the natural map $\mathrm{Cl}(K)[\ell^n] = {\rm Pic}(C) [\ell^n] \to {\rm Pic}(C_{\overline{k}} ) [\ell^n]$ is injective, it suffices to check that the pullbacks of $\psi_K (\alpha)$ and $\psi_C ( \mathrm{zer}^\vee(\alpha))$ to $C_{\overline{k}}$ are equal. Let $\overline{\alpha}$ be the pullback of $\alpha$ to $C_{\overline{k}}$. Recall, from Definition \ref{psi-nt-defi}, that $\psi_K(\alpha)$ is defined via the composition $H^1 ( C, \mathbb Z/\ell^n) \to H^1(C, \mu_{\ell^n} ) \to H^1(C, \mathbb G_m)$. Let $\overline{\psi}_K$ be defined by the analogous composition $H^1 ( C_{\overline{k}} , \mathbb Z/\ell^n) \to H^1(C_{\overline{k}} , \mu_{\ell^n} ) \to H^1(C_{\overline{k}} , \mathbb G_m)$. 
Because these maps are compatible with the pullback to $C_{\overline{k}}$, the pullback of $\psi_K(\alpha)$ to $C_{\overline{k}}$ is $\overline{\psi}_K(\overline{\alpha})$. We identify $H^1(C_{\overline{k}}, \mathbb Z/\ell^n)$ with the set of homomorphisms from $T_\ell(A)$ to $\mathbb Z/\ell^n$. Viewing $\alpha$ as a homomorphism ${\rm Pic}(C) (k) \to \mathbb Z/\ell^n$, the pullback $\overline{\alpha}$ of $\alpha$ is obtained by composing $\alpha$ with the projection $T_\ell(A) \to {\rm Pic}(C)(k)_\ell$. The latter composition factors through $T_\ell(A) / \ell^n,$ which is naturally identified with $A[\ell^n].$ By Lemma \ref{psi-psi-Weil}, the Weil pairing with $\overline{\psi}_K(\overline{\alpha})$ equals this homomorphism $A [\ell^n] \to \mathbb Z/\ell^n$, composed with the map $\mathbb Z/\ell^n \to \mu_{\ell^n}$ defined by $\zeta$. Because the Weil pairing is a perfect pairing, it suffices to check that the Weil pairing with $\overline{\psi}_K(\overline{\alpha})$ equals the Weil pairing with the pullback of $\psi_C( \mathrm{zer}^\vee (\alpha))$, which we now compute, following the construction of \S\ref{abomegapsi}. We have the map $\phi: T_\ell(A) \to \mathrm{coker}(1-F | \; T_\ell(A) ) \cong {\rm Pic}^0(C) (k)_\ell$ and the Cartier dual map $\phi^\vee: {\rm Pic}^0(C) (k)_\ell^\vee \to A[\ell^\infty].$ The $\ell^n$-torsion element $\phi^\vee ( \mathrm{zer}^\vee (\alpha))$ of $A[\ell^\infty]$ is Cartier dual to a map $T_\ell(A) \to \mu_{\ell^n}$. This dual map is obtained by first applying the projection $\phi: A[\ell^m] \to {\rm Pic}^0(C) (k)$, then $\mathrm{zer}: {\rm Pic}^0(C)(k) \to {\rm Pic}(C) (k)$, and then applying $\alpha$. (Because $\alpha$ is an element of the dual $\mathrm{Cl}(K)^\vee [\ell^n]$, i.e. the space of linear forms $\mathrm {Cl}(K) \to \mathbb Z/\ell^n$, this map is referred to as $\alpha$ and not $\alpha^\vee$.) 
Since the duality between $T_\ell(A)$ and $A[\ell^{\infty}]$ comes from the Weil pairing, $\phi^\vee ( \mathrm{zer}^\vee (\alpha))$ is the element of $A[\ell^m]$ whose Weil pairing with an element of $A[\ell^m]$ is this composition $\alpha \circ \mathrm{zer} \circ \phi$. On the other hand, we just saw that the Weil pairing of $\overline{\psi}_K(\overline{\alpha})$ with an element of $A[\ell^n]$ is obtained by lifting to an element of $T_\ell(A)$, projecting to ${\rm Pic}^0(C)(k)_\ell$, including into ${\rm Pic}(C)(k)_\ell$ and then applying $\alpha$. We chose $m$ so that the map $T_\ell(A) \to {\rm Pic}^0(C)(k)_\ell$ factors through $A[\ell^m]$ and then defined $\phi$ to be this factorization. Thus, we see that given any element of $A[\ell^m]$, taking the Weil pairing with $\phi^\vee ( \mathrm{zer}^\vee (\alpha))$ is equivalent to multiplying by $\ell^{m-n}$ to obtain an element of $A[\ell^n]$ and taking the Weil pairing with $\overline{\psi}_K(\overline{\alpha})$. In other words, embedding $A[\ell^n]$ in $A[\ell^m]$ the usual way, we have $ \phi^\vee ( \mathrm{zer}^\vee (\alpha))= \overline{\psi}_K(\overline{\alpha}) $; that is, $\phi^\vee ( \mathrm{zer}^\vee (\alpha))$ is the image of $\overline{\psi}_K ( \overline{\alpha})$ under the inclusion map $A[\ell^n] \to A[\ell^m]$. By the definition of $\psi$ (Definition \ref{def-psi-G}), it suffices to show that for $\alpha_0 \in T_\ell(A) \otimes \mathbb Q_\ell $ such that $ \alpha_0 \mod T_\ell(A) = \phi^\vee ( \mathrm{zer}^\vee (\alpha))$, we have $\phi ( (1-F) \alpha_0 ) = \overline{\psi}_K(\overline{\alpha})$. But $\phi$ was defined by the snake lemma as the inverse of the map obtained by taking a lift along the map $T_\ell(A) \otimes \mathbb Q_\ell \to A[\ell^{\infty} ]$ and then applying $1-F$, so this identity follows. \end{proof} \begin{lemma}\label{P-1-case} Let $c \in H^2 (\mathbb P^1_{k}, \mathbb Z/\ell^m )$ be a class. 
The Artin-Verdier pairing of $c$ with $\zeta_m$ is equal to $\frac{1-q}{\ell^n}$ times the trace, taken on $H^2(\mathbb P^1_{\overline{k} }, \mathbb Z/\ell^m)$, of the pullback of $c$ to $H^2(\mathbb P^1_{\overline{k} }, \mathbb Z/\ell^m)$. \end{lemma} \begin{proof} There is a natural map $H^2 ( \mathbb P^1_k , \mathbb Z/\ell^{\min(n,m)}) \to H^2 (\mathbb P^1_k, \mathbb Z/\ell^m)$ given by the inclusion. Let us check that this map is an isomorphism. To do this, observe that $ H^2 (\mathbb P^1_k, \mathbb Z/\ell^m)$ is, by a spectral sequence, the ${\rm Frob}_q$-invariants in $H^2(\mathbb P^1_{\overline{k}}, \mathbb Z/\ell^m)$, and similarly with $H^2 ( \mathbb P^1_k , \mathbb Z/\ell^{\min(n,m)})$. The natural map $H^2 ( \mathbb P^1_{\overline{k}} , \mathbb Z/\ell^{\min(n,m)}) \to H^2 (\mathbb P^1_{\overline{k}}, \mathbb Z/\ell^m)$ is not necessarily an isomorphism, but it is an isomorphism on ${\rm Frob}_q$-invariants, because $\operatorname{Frob}_q$ acts by multiplication by $q$ on both groups so the $\operatorname{Frob}_q$-invariants are exactly the $(q-1)$-torsion elements. Furthermore, we have an isomorphism $H^2 ( \mathbb P^1_k , \mathbb Z/\ell^{\min(n,m)}) \cong H^2 ( \mathbb P^1_k , \mu_{\ell^{\min(n,m)}})$ using our chosen generator for $\mu_{\ell^n}$. Thus, we can assume $c$ lies in the image of $H^2 ( \mathbb P^1_k , \mu_{\ell^{\min(n,m)}})$. Furthermore, it suffices to take $c$ a generator of this group. By Kummer theory, the Kummer class of a degree $1$ line bundle on $\mathbb P^1$ is such a generator. The trace map on $H^2 (\mathbb P^1_{\overline{k}}, \mathbb Z/\ell^m)$, composed with the map from $H^2 ( \mathbb P^1_{\overline{k} } , \mu_{\ell^{\min(n,m)}})$, is equal to the trace map on $H^2 (\mathbb P^1_{\overline{k}}, \mathbb Z/\ell^{\min(n,m)})$, since the trace map is compatible with inclusions of cyclic groups. By definition \cite[XVII, (1.1.3.2) and (1.1.3.3)]{sga4-3}, the trace map of the Kummer class of a degree $1$ line bundle is $1$. 
Multiplying the trace, which is $1$, by $\frac{1-q}{\ell^n}$, the right-hand side of the desired equality is $\frac{1-q}{\ell^n}$. The Artin-Verdier pairing between the projection of this Kummer class along $H^2 ( \mathbb P^1_{k } , \mu_{\ell^{\min(n,m)}}) \to H^2 (\mathbb P^1_k, \mathbb Z/\ell^{\min(n,m)}) \to H^2(\mathbb P^1_k, \mathbb Z/\ell^m)$ and $\zeta_m \in H^1( \mathbb P^1_k, \mu_{\ell^m})$ is the Artin-Verdier pairing of this Kummer class and the image of $\zeta_m$ along the Cartier dual maps $H^1( \mathbb P^1_k, \mu_{\ell^m}) \to H^1 (\mathbb P^1_k , \mu_{\ell^{\min(n,m)}} ) \to H^1 (\mathbb P^1_k, \mathbb Z/\ell^{\min(n,m)})$. The image of $\zeta_m$ in $H^1 (\mathbb P^1_k , \mu_{\ell^{\min(n,m)}} ) $ is the torsor of $\ell^{\min(n,m)}$th roots of our fixed generator $\zeta\in \mu_{\ell^n}$. The action of ${\rm Frob}_q$ on this torsor is by multiplication by $\zeta^{ \frac{q-1}{ \ell^{\min (n,m)} }}$. The map $\mu_{\ell^{\min(n,m)}}\to \mathbb Z/\ell^{\min (n, m)}$ sends $\zeta^{ \ell^{n - \min(n,m)} x }$ to $x$, so the action of ${\rm Frob}_q$ on the image of $\zeta_m$ in $H^1 (\mathbb P^1_k, \mathbb Z/\ell^{\min (n, m)})$ is by adding $\frac{q-1}{ \ell^{\min(n,m)} \cdot \ell^{n - \min(n,m)}} = \frac{q-1}{\ell^n} \in \mathbb Z/\ell^{\min(n,m)}$. It follows from Proposition \ref{prop-cft-av} that the Artin-Verdier pairing of the Kummer class of a degree $1$ line bundle with this torsor is equal to the action on this torsor of the Galois group element corresponding to that line bundle under class field theory. Every degree $1$ line bundle is the inverse of the ideal sheaf at a point, and thus is sent by class field theory to a Galois element acting on $\overline{k}$ as ${\rm Frob}_q^{-1} $, so acting on this torsor as $\frac{q^{-1} -1}{\ell^n} \equiv \frac{1-q}{\ell^n} \mod \ell^{\min(n,m)}$, and so the Artin-Verdier pairing is $\frac{1-q}{\ell^n}$, as desired. 
\end{proof} \begin{lemma}\label{omega-comparison} For each finite field $\mathbb F_q$, prime $\ell$, and natural number $n$ such that $q \equiv 1 \mod \ell^n$ but $q\not\equiv 1\mod \ell^{n+1}$, and curve $C$ over $\mathbb F_q$, we have \[\omega_K = \omega_C\] where $K =\mathbb F_q(C)$ is the function field of $C.$ \end{lemma} \begin{proof} By the uniqueness statement in Lemma \ref{wedge-from-pairing}, it suffices to check that the bilinear form $\omega_{m,K}$ defined in \S \ref{omega-pairings} is equal to $ \omega_{m,C}= \ell^m (a \otimes b) (\omega_{C} ) $. By definition, the bilinear form $\omega_{m,K}$ takes classes $a,b$ to $ - \frac{1}{2} (\zeta_m, a \cup b)_{AV}$. On the other hand, the bilinear form $\omega_{m,C}$ can be obtained by pulling back to $A[\ell^m]$, equivalently, $H^1 (C_{\overline{k}}, \mathbb Z/\ell^m)$, and then taking $\frac{ 1- q}{ 2 \ell^n}$ times the Weil pairing. The Weil pairing is equivalent to the cup product $H^1 (C_{\overline{k}}, \mathbb Z/\ell^m) \times H^1 (C_{\overline{k}}, \mathbb Z/\ell^m) \to H^2 (C_{\overline{k}}, \mathbb Z/\ell^m)$ followed by the trace map on $H^2 (C_{\overline{k}}, \mathbb Z/\ell^m)$. See \cite[Chapter 5, Prop 3.4]{SGA45} and \cite[V.2, Rmk 2.4(f)]{MilneEtaleCohomology}. Because cup product is compatible with pullback from $C_k$ to $C_{\overline{k}}$, it suffices to check that the two linear forms on $H^2(C_k, \mathbb Z/\ell^m)$, the first one being the Artin-Verdier pairing with $\zeta_m$ times $\frac{-1}{2}$, and the second being the pullback to $C_{\overline{k}}$ followed by the trace times $\frac{q-1 }{2\ell^n}$, are equal. \bigskip We first handle the case $C = \mathbb P^1$, which is exactly Lemma \ref{P-1-case}. We will now use the case of $\mathbb P^1$ to handle the general case. 
To do this, we fix a map $ f:C \to \mathbb P^1$ and check separately that both our linear forms are compatible with the projection to $\mathbb P^1$, in the sense that evaluating the form on a given class in $H^2(C_k, \mathbb Z/\ell^m)$, and pushing the class forward to $H^2(\mathbb P^1_k ,\mathbb Z/\ell^m)$ and then evaluating the linear form there, give the same result. For the trace map in \'{e}tale cohomology, this compatibility with pushforward is \cite[XVIII, Lemma 1.1.5]{sga4-3}. This is stated over an algebraically closed field, so we must in addition use the fact that pushforward along $C \to \mathbb P^1$ commutes with pullback to an algebraically closed field. For Artin-Verdier duality, recall that the pairing of $\beta\in H^2(C_k, \mathbb Z/\ell^m)$ with $\zeta_m$ proceeds in 2 steps. We first consider $\zeta_m \cup\beta\in H^3(C_k,\mu_{\ell^m})$ and then apply the isomorphisms $H^3(C_k,\mathbb G_m)\cong \mathbb{Q}/\mathbb{Z}$ and $H^3(C_k,\mathbb G_m)[\ell^m]\cong H^3(C_k,\mu_{\ell^m})$. By the push-pull formula, it follows that $$f_*\beta\cup \zeta_m = f_*(\beta\cup \zeta_m)$$ where we abuse notation slightly by using $\zeta_m$ to denote cohomology classes on $C$ and on $\mathbb P^1$. It is therefore sufficient to check that the isomorphism $H^3(C_k,\mathbb G_m)\cong \mathbb{Q}/\mathbb{Z}$ commutes with pushforward. But this follows from its definition via the Brauer group in \cite[II.2.1]{MilneArithmeticDuality}, together with the following commutative diagram: \[ \xymatrix{ 0\ar[r]&\mathbb G_{m,C}\ar[r]\ar[d]& g_*\mathbb G_{m,\eta}\ar[r]\ar[d]& \mathrm{Div}_C\ar[r]\ar[d]& 0\\ 0\ar[r]&\mathbb G_{m,\mathbb P^1}\ar[r]& g_*\mathbb G_{m,\eta}\ar[r]& \mathrm{Div}_{\mathbb P^1}\ar[r]& 0\\ } \] where the vertical maps are the natural norm and pushforward maps respectively. Because the linear forms are equal on $\mathbb P^1$, and preserved by projection to $\mathbb P^1$, they are equal in general. 
\end{proof} \section{Random Matrix Theory} \label{linearrandommodels} For the entirety of this section, we fix a prime number $\ell$ and a positive integer $n$. \subsection{The Linear Model: Motivation} \label{linearrandommodelmotivation} If we draw our intuition from the function field setting, it is natural to want to model the distribution of ${\rm Coker}(F - 1)$ where $F$ is sampled randomly from ${\rm GSp}_{2g}^{(q)}(\mathbb{Z}_\ell)$. However, the non-linearity of this model makes it hard to work with. Instead, we linearize in the following way: If $F \in {\rm GSp}_{2g}^{(q)}(\mathbb{Z}_\ell)$ is close to 1, then $${\rm Coker}(\log(F)) = {\rm Coker}(\log(1 + (F-1))) = {\rm Coker}(F-1).$$ It is thus plausible that ${\rm Coker}(F-1)$ and ${\rm Coker}(\log(F))$ are distributed in the same way. The Lie algebra of ${\rm Sp}_{2g}(\mathbb{Z}_\ell)$ consists of skew-symplectic matrices\footnote{That is, matrices $M$ such that $\langle Mv, w\rangle= -\langle v, Mw \rangle$ for $\langle \cdot, \cdot \rangle$ the symplectic pairing.}. Therefore, the logarithm of a general element in ${\rm GSp}_{2g}^{(q)}(\mathbb{Z}_\ell)$ should take the shape $\frac{1}{2} \log(q) + M,$ where $M$ is skew-symplectic. Assuming $\ell$ is odd, the assumption $\ell^n || q - 1$ implies that $\frac{1}{2} \log q$ has valuation $n.$ Therefore, the cokernel of $M + \frac{1}{2} \log q$ has the same distribution as the cokernel of $M + \ell^n.$ This motivates our linear random model: we consider the cokernel of $M + \ell^n,$ where $M$ is randomly sampled from the Lie algebra of ${\rm Sp}_{2g}(\mathbb{Z}_\ell)$ with respect to its additive Haar measure. \subsection{The Linear Model} \label{linearrandommodelcalculations} Let $\ell$ be an odd prime and consider the following model. Let $(\mathbb{Z}_{\ell}^{2g},\omega)$, with $\omega = \sum_{i=1}^g e_i \wedge f_i$, be the standard symplectic space, and let $M$ be a skew-symplectic endomorphism. 
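The linear model is easy to experiment with numerically. The following sketch is our own illustration and is not part of the formal development: the function names, the finite working precision $\ell^K$, and the uniform sampling of entries mod $\ell^K$ as a stand-in for the additive Haar measure are all choices made here for concreteness. It samples a skew-symplectic $M$ as $M = -JS$ with $S$ symmetric (since $J^2 = -1$, the matrix $M$ is skew-symplectic exactly when $JM$ is symmetric) and reads off the type of the $\ell$-part of ${\rm Coker}(M + \ell^n)$ from a diagonalization of the corresponding integer matrix.

```python
import random

def diagonalize(A):
    """Diagonalize an integer matrix by integer row/column operations.
    No divisibility chain is needed: the cokernel is already determined
    by the multiset of diagonal entries.  Returns the diagonal."""
    A = [row[:] for row in A]
    size = len(A)
    for t in range(size):
        while True:
            piv = None  # entry of smallest nonzero absolute value
            for i in range(t, size):
                for j in range(t, size):
                    if A[i][j] and (piv is None or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                        piv = (i, j)
            if piv is None:
                break  # the remaining block is zero
            A[t], A[piv[0]] = A[piv[0]], A[t]
            for row in A:
                row[t], row[piv[1]] = row[piv[1]], row[t]
            clean = True
            for i in range(size):  # clear column t by row operations
                if i != t and A[i][t]:
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    clean = clean and A[i][t] == 0
            for j in range(size):  # clear row t by column operations
                if j != t and A[t][j]:
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    clean = clean and A[t][j] == 0
            if clean:
                break
    return [A[i][i] for i in range(size)]

def ell_type(diag, ell, cap):
    """Sorted ell-valuations of the diagonal entries, capped at the
    working precision; the positive ones give the type of the
    ell-part of the cokernel."""
    out = []
    for d in diag:
        d, v = abs(d), 0
        while d and d % ell == 0 and v < cap:
            d, v = d // ell, v + 1
        out.append(cap if d == 0 else v)
    return sorted(out)

def sample_coker(g, ell, n, K, rng):
    """Sample a skew-symplectic M (to precision ell^K) and return the
    type of coker(M + ell^n).  We take M = -J S with S symmetric, for
    J = [[0, I], [-I, 0]] in the basis e_1..e_g, f_1..f_g."""
    N = 2 * g
    S = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            S[i][j] = S[j][i] = rng.randrange(ell ** K)
    M = [[-S[g + i][j] for j in range(N)] for i in range(g)] \
      + [[S[i][j] for j in range(N)] for i in range(g)]
    A = [[M[i][j] + (ell ** n if i == j else 0) for j in range(N)]
         for i in range(N)]
    return [v for v in ell_type(diagonalize(A), ell, K) if v > 0]
```

For instance, `sample_coker(3, 3, 1, 12, random.Random(1))` draws one sample of the group type for $\ell = 3$, $n = 1$, $g = 3$; the reported valuations are correct provided they all lie below the precision cap $K$.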
Let $G=G_M$ be the cokernel of $M+\ell^n.$ We further wish to endow $G_M$ with additional data so as to obtain an element of $\mathcal{C}_{\ell,n}$: The first structure is the pushforward $\overline{\omega}=\omega_M\in\wedge^2 G$ of the standard symplectic form, where $\overline{\cdot}$ denotes reduction mod $(M + \ell^n)(\mathbb{Z}_\ell^{2g})$. Note that $\omega_M$ is $\ell^n$-torsion, since \begin{align*} 0 &=\sum \overline{Me_i} \wedge \overline{f_i} + \overline{e_i} \wedge \overline{Mf_i} \\ &= -2\ell^n \sum \overline{e_i} \wedge \overline{f_i} \\ &= -2\ell^n \omega_M. \end{align*} The second structure is the isomorphism $\psi_M:G^{\vee}[\ell^n]\rightarrow G[\ell^n]$ stemming from the snake lemma, as in \S \ref{omegapsisymplecticsimilitude}. Explicitly, if we dualize we get an identification of $G^{\vee}$ with the kernel of $\ell^n-M$ on $(\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell})^{2g}$, and we define $\psi_M$ by sending $\alpha$ to $(M+\ell^n)\alpha'$, where $\alpha'$ is any lift of $\alpha$ to $\mathbb{Q}_{\ell}^{2g}$. This provides us with a bilinear form on $G^{\vee}[\ell^n]\times G^{\vee}$ given by $\langle \alpha,\beta\rangle_G:= \beta(\psi_M(\alpha))$ and, as in Lemma \ref{compatibilitypsiomega}, the triple $(G_M,\omega_M,\psi_M)$ satisfies the compatibility relation \eqref{psiomegacomp}. \begin{definition}\label{linearmeasuredef} We define the linear measure $\mu_g$ on $\mathcal{C}_{\ell,n}$ as the pushforward of the Haar measure on skew-symplectic matrices $M$ under the map $M\mapsto (G_M,\omega_M,\psi_M)$. \end{definition} We shall show that as $g\rightarrow\infty$ the $\mu_g$ converge to a natural probability measure $\mu$, and we shall begin by computing its moments. \subsection{Moments of $\mu_g$} First, we need a couple of preliminary lemmas: \begin{lemma}\label{allsur} Fix a finite abelian group $G$. 
Fix $\omega_G \in \wedge^2 G$ satisfying $\ell^n \omega_G = 0.$ If $g$ is large enough then there exists a surjection $f :\mathbb{Z}_\ell^{2g}\rightarrow G$ such that $f \omega = \omega_G$, and for any such surjection $f$ there exists a skew-symplectic $M$ with $f\circ(M+\ell^n)=0$. \end{lemma} \begin{proof} By Witt's extension theorem the set of all $f$ satisfying $f\omega=\omega_G$ forms a single orbit under ${\rm Sp}_{2g}(\mathbb{Z}_\ell)$. For the second claim, it is therefore enough to find $M$ satisfying the conditions of the lemma relative to a single $f$ for which $f\omega=\omega_G$. For every $k>0,$ any $2\times 2$ anti-diagonal matrix whose off-diagonal entries are $- x,2\ell^n+x$ with valuations $k,n$ respectively defines a surjection $f:\mathbb{Z}_\ell^2\rightarrow H_k= \mathbb{Z}/\ell^n \oplus \mathbb{Z}/\ell^k$ for which $f\omega = \omega_k$ generates $\wedge^2H_k[\ell^n]$. An arbitrary $\omega_G$ can be obtained by pushing forward $\oplus \omega_k$ under a surjective map $\oplus_k H_k \rightarrow G$. This verifies the first claim, and we can now define $M$ for the second claim. Let $M$ be the direct sum of the transformations defined by \begin{align*} (M + \ell^n)e_1 &= (2\ell^n + x) e_1 \\ (M + \ell^n)e_2 &= -x \cdot e_2. \end{align*} Then $M$ has trace 0 and so is skew-symplectic with integral entries and ${\rm Im}(M + \ell^n) = {\rm Ker} f.$ The claim follows. \end{proof} \begin{lemma}\label{eqdist} Fix $G$ and $\omega_G\in\wedge^2G$. As $g \to \infty,$ the proportion of surjections $f:\mathbb{Z}_\ell^{2g}\rightarrow G$ for which $f\omega = \omega_G$ approaches $\frac{1}{|\wedge^2G|}.$ \end{lemma} \begin{proof} First, note that since almost all homomorphisms are surjections as $g\to\infty$, we can and do use a random homomorphism $f$ instead of a random surjection. Let $\nu_g$ be the pushforward measure on $\wedge^2G$ thus obtained for a given $g$. Note that $\nu_g$ is the $g\,$th convolution power of $\nu_1$. 
Since $\nu_g$ has full support for large enough $g$ by Lemma \ref{allsur}, and since $0$ lies in the support of $\nu_1$, the convolution powers $\nu_g$ equidistribute towards the uniform measure on $\wedge^2G$, and the claim follows. \end{proof} We now compute the asymptotics of the $\mu_g$ and their moments. \begin{thm}\label{totmommeas} Fix $(G,\omega_G,\psi_G)\in\mathcal{C}_{\ell,n}$. As $g\rightarrow\infty$, we have $$\mathbb{E}_{\mu_g}\mid{\rm Surj}(*,(G,\omega_G,\psi_G))\mid \rightarrow \frac{1}{\left| {\rm Sym}^2G[\ell^n] \right|}.$$ Moreover, if $\psi_G$ is an isomorphism, then $$\mu_g((G,\omega_G,\psi_G)) \rightarrow \frac{c_\ell}{\left| {\rm Aut}(G,\omega_G,\psi_G) \right| \cdot \left| {\rm Sym}^2G[\ell^n] \right|}$$ where $c_\ell=\prod_{i=0}^{\infty} (1-\ell^{-(2i+1)}).$ And if $\psi_G$ is not an isomorphism, then $$\mu_g((G,\omega_G,\psi_G)) \rightarrow 0.$$ \end{thm} \begin{proof} Fix a surjection $f$ with $f\omega = \omega_G$. By Lemma \ref{eqdist}, there are $(1-o_g(1))\cdot\frac{|G|^{2g}}{\#\wedge^2G}$ of these. By Lemma \ref{allsur}, for large enough $g$ there is at least one skew-symplectic $M$ satisfying $f \circ (M + \ell^n) = 0.$ The set of all such $M$ forms a coset of the group of those skew-symplectic matrices $N$ with $f\circ N=0$. Identifying $L=\mathbb{Z}_\ell^{2g}$ with $L^\ast$ via the symplectic pairing, $M$ can be viewed as a self-dual map from $L^\ast$ to $L.$ Thus we are looking for the measure of symmetric matrices with image in the kernel of $f$. Write $$G=\oplus_{i=1}^r \left( \mathbb{Z}/\ell^{m_i}\right) \cdot b_i.$$ Pick a basis $e_i$ for $L$ so that $f(e_i)=b_i$ with $b_i=0$ for $i>r$ by convention. Let $b_i^\vee$ denote the dual basis for $G^\vee.$ Let $A$ denote the matrix of $M+\ell^n$ with respect to the bases $e_i$ and $e_i^*$, so that $Ae_i^*=\sum_j A_{i,j}e_j$. Then the condition that $f\circ (M+\ell^n)=0$ is equivalent to $A_{i,j}$ being divisible by $\ell^{m_j}$.
Moreover, if we let $r_i:=\max(0,m_i-n)$ then the elements $\ell^{r_i}b_i$ form a basis for $G[\ell^n]$ and the map $\psi_G$ is given by $$\psi_G(\ell^{r_i}b_i^\vee)=\sum_j \frac{A_{i,j}}{\ell^{m_i-r_i+r_j}}\ell^{r_j}b_j.$$ Now, changing $M$ amounts to adding a symmetric matrix $A'$ to $A$. Clearly, $A'$ must satisfy $\ell^{\max(m_i,m_j)} \mid A'_{i,j}$. The Haar measure of all such $A'$ is easily computed to be $\frac{\mid\wedge^2G\mid}{|G|^{2g}}$. To finish the proof of the first part of the theorem, we must show that any $\psi_G$ that is compatible with $\omega_G$ can occur as above with an appropriate choice of $A'$. Now for any two such $\psi^1_G,\psi^2_G$ consider the difference $\psi^3_G=\psi^1_G-\psi^2_G$. We may write $$\psi^3_G(\ell^{r_i}b_i^\vee)=\sum_j \frac{a_{i,j}}{\ell^{m_i-r_i+r_j}}\ell^{r_j}b_j.$$ The element $a_{i,j}$ is well defined modulo $\ell^{\min(m_j,n)+m_i+r_j-r_i} = \ell^{m_j+\min(m_i,n)}$. Applying the compatibility relation \eqref{psiomegacomp} with $b_i^\vee,b_j^\vee$ and $r=\max(r_i,r_j)$ implies that $a_{i,j}$ and $a_{j,i}$ are equal modulo $\ell^{\min(m_i+m_j,n+m_i,n+m_j)}$. Since $$\min(m_j+\min(m_i,n), m_i+\min(m_j,n)) = \min(m_i+m_j,n+m_i,n+m_j)$$ it follows that we may pick a single integer which represents $a_{i,j}$ and $a_{j,i}$ simultaneously. Setting $A'_{i,j}$ to be this integer completes the proof. \medskip Now, for the second part of the theorem, note that we are now looking for matrices $M$ such that the image of $M+\ell^n$ is equal to the kernel of $f$.
This is equivalent to the two conditions \begin{itemize} \item $\ell^{m_j} \mid A_{i,j},$ the condition from earlier which guarantees that ${\rm Im}(M + \ell^n) \subset {\rm Ker} f$; \item $\ell^{\sum m_i}\mid\mid\det A,$ which combined with the above bullet point implies that ${\rm Im}(M + \ell^n) = {\rm Ker} f.$ \end{itemize} After taking out a factor of $\ell^{m_j}$ from the $j \,$th column in $A$, we see that the resulting matrix is block upper-triangular, consisting of an $r\times r$ matrix $C$ and a $(2g-r)\times (2g-r)$ matrix $D$, and we are looking for the probability that both of these are invertible. Since $n\geq 1$, the reduction of $D$ mod $\ell$ is just a symmetric matrix, which is invertible with probability tending to $c_\ell$ (this is the $t=0$ case of Lemma \ref{symmetricprob}). We claim that $C$ is invertible if and only if $\psi_G$ is invertible, which would complete the proof. Note that $C$ is block upper triangular, as is $\psi_G$, so to check invertibility we only have to restrict to the blocks where $m_i$ is fixed. The claim is now immediate, as the matrices for $C$ and $\psi_G$ are both the appropriate submatrices of $\left(\frac{A_{i,j}}{\ell^{m_i}}\right)_{i,j}$. \end{proof} We shall also need a uniform upper bound for the intermediate measures $\mu_g$, so as to apply Fatou's Lemma when we study the limiting measure. \begin{lemma}\label{upperbound} There exists an absolute constant $c$ such that $$\sum_{\omega_G\in \wedge^2G[\ell^n]}\#{\rm Surj}\left((\mathbb{Z}_\ell^{2g},\omega),(G,\omega_G)\right)\leq c|G|^{2g}\cdot \frac{|\wedge^2G[\ell^n]|}{|\wedge^2 G|}.$$ \end{lemma} \begin{proof} First, we claim that for such a surjection to exist, the group $G'=\ell^nG$ must have $\ell$-rank at most $g$. To see this, note that since $\omega_G$ is $\ell^n$-torsion it pushes forward to 0 under the natural surjection from $G$ to $G'$. (This implication can be checked using a basis.)
Since $\mathbb{Z}_\ell^{2g}$ then surjects onto $G'$ and maps $\omega$ to $0$, dualizing we see that $G'^{\vee}[\ell]$ embeds into $\mathbb{F}_\ell^{2g}$ as an isotropic subspace and thus has rank at most $g$. The claim follows. We may thus assume that $\ell^nG$ has rank at most $g$ and is thus a quotient of $\mathbb{Z}_\ell^g$. Now, we prove the lemma by using Fourier analysis. We in fact bound the total number of homomorphisms $f:\mathbb{Z}_\ell^{2g}\rightarrow G$ for which $f_*\omega$ lies in $\wedge^2G[\ell^n]$. In this proof, let $H^\vee$ denote ${\rm Hom}(H,\mathbb{S}^1),$ where $\mathbb{S}^1 := \{z \in \mathbb{C}: |z| = 1 \}.$ We compute \begin{align*} \sum_{f:\mathbb{Z}_\ell^{2g}\rightarrow G} \delta_{f_*\omega\in\wedge^2G[\ell^n]}&=\sum_{f:\mathbb{Z}_\ell^{2g}\rightarrow G} \mathbb{E}_{\chi\in\ell^n(\wedge^2G)^\vee} \left( \chi(f_*\omega)\right)\\ &=\mathbb{E}_{\chi\in\ell^n(\wedge^2G)^\vee}\sum_{f:\mathbb{Z}_\ell^{2g}\rightarrow G}\chi(f_*\omega)\\ &=\mathbb{E}_{\chi\in\ell^n(\wedge^2G)^\vee}\left(\sum_{f:\mathbb{Z}_\ell^{2}\rightarrow G}\chi(f_*\omega)\right)^g. \end{align*} Now each $\chi\in \left(\wedge^2 G \right)^\vee$ is naturally an alternating bilinear form on $G$ valued in $\mathbb{S}^1,$ so we write ${\rm Ker}\chi$ for the subgroup of elements of $G$ that pair trivially with every element of $G$.
Writing $\omega=e_1\wedge e_2$ for a basis $e_1,e_2$ of $\mathbb{Z}_\ell^{2}$, so that $f_*\omega=f(e_1)\wedge f(e_2)$, we have $\sum_{f:\mathbb{Z}_\ell^{2}\rightarrow G}\chi(f_*\omega) = \sum_{a,b\in G}\chi(a\wedge b) = |{\rm Ker}\chi|\cdot |G|$, since for fixed $a$ the inner sum over $b$ vanishes unless $a\in{\rm Ker}\chi$, in which case it equals $|G|$. The above sum therefore becomes $$ \frac{|G|^{2g}\cdot \left|\wedge^2G[\ell^n]\right|}{|\wedge^2G|}\sum_{\chi\in\ell^n(\wedge^2G)^\vee} \frac{1}{[G:{\rm Ker}\chi]^g}.$$ Now, $\chi\in\ell^n(\wedge^2G)^\vee$ is equivalent to ${\rm Ker}\chi \supset G[\ell^n].$ Let $G' = G / G[\ell^n].$ There is a surjection \begin{align*} &\{ (f,\omega): f: G' \twoheadrightarrow G'/N \text{ with } {\rm Ker} f = N, \ \omega \text{ non-degenerate, alternating on } G'/N \} \\ &\twoheadrightarrow \{ \text{alternating bilinear forms on } G' \text{ with kernel } N \}\\ \end{align*} defined by $$(f,\omega) \mapsto \left[ (g,g') \mapsto \omega(f(g),f(g')) \right].$$ Every fiber of the above mapping consists of a single $\mathrm{Aut}(G'/N)$ orbit. The number of such orbits is at most $$\frac{\# \mathrm{Surj}(G',G'/N) \cdot |\wedge^2 G'/N|}{\#\mathrm{Aut}(G'/N)},$$ because $\mathrm{Aut}(G'/N)$ acts freely on the left side. Thus, \begin{align*} \sum_{\chi\in\ell^n(\wedge^2G)^\vee} \frac{1}{[G:{\rm Ker}\chi]^g} &\leq \sum_{N \subset G'} \frac{\#\mathrm{Surj}(G',G'/N) \cdot | \wedge^2 G'/N | }{\#{\rm Aut} \left( G'/N \right)\cdot |G'/N|^g}\\ &\leq \sum_H \frac{|\wedge^2\!\!H|}{\#{\rm Aut} H}\\ \end{align*} where $H$ varies over all finite abelian $\ell$-groups and the last inequality follows since $$\#{\rm Surj}\left({G/G[\ell^n],H}\right)\leq \#{\rm Surj}\left(\mathbb{Z}_\ell^g,H\right) \leq |H|^g.$$ It thus remains to show that $ \sum_H \frac{|\wedge^2\!H|}{\#{\rm Aut} H}$ is finite. This is an easy calculation with partitions, which we carry out in Lemma \ref{part}. \end{proof} \begin{lemma}\label{part} $ \sum_H \frac{|\wedge^2\!H|}{\#{\rm Aut} H}$ is finite, where the sum is over all finite abelian $\ell$-groups $H$.
\end{lemma} \begin{proof} We may parametrize $H$ with sequences $(a_i)_{i\in\mathbb{N}}$ of non-negative integers only finitely many of which are non-zero, identifying a sequence with $\oplus_i (\mathbb{Z}/\ell^i)^{a_i}$. We denote by $n_H$ the maximum integer such that $a_{n_H}>0$. Let $c=\prod_{i>0}(1-\ell^{-i})$. Then $$|\wedge^2\!H| = \ell^{\sum_{i<j}ia_ia_j+\sum_i ia_i(a_i-1)/2}=\ell^{\sum_{i\leq j} ia_ia_j- \sum_i ia_i(a_i+1)/2}$$ and $$\#{\rm Aut} H\geq \ell^{\sum_{i\leq j} ia_ia_j} \prod_i\prod_{1\leq k\leq a_i}(1-\ell^{-k})\geq \ell^{\sum_{i\leq j} ia_ia_j} c^{n_H}$$ and so \begin{align*} \sum_H \frac{|\wedge^2\!H|}{\#{\rm Aut} H}&\leq \sum_H c^{-n_H}\ell^{-\sum_i ia_i(a_i+1)/2}\\ &\leq \sum_{n_H,a}c^{-n_H-1}\ell^{-n_Ha(a+1)/2}\\ &\leq 2c^{-1}\sum_{n_H}(c\ell)^{-n_H}\\ &\leq\frac{2c^{-1}}{1-c^{-1}\ell^{-1}}.\\ \end{align*} \end{proof} \subsection{The universal measure $\mu$} \label{universalmeasuredefinition} \begin{thm}\label{univmeas} As $g\rightarrow\infty$ the measures $\mu_g$ weak-* converge to a probability measure $\mu$, with the moments from Theorem \ref{totmommeas}. \end{thm} \begin{proof} First, note that by Theorem \ref{totmommeas} the limit $\mu(G):=\lim_{g\rightarrow\infty}\mu_g(G)$ exists and equals $\frac{c_\ell\cdot \mid\wedge^2G[\ell^n]\mid h_G}{{\rm Aut}(G)}$, where $h_G$ is the fraction of pairs $\omega_G,\psi_G$ such that $\psi_G$ is invertible. By writing $G=\oplus_{j=1}^n (\mathbb{Z}/\ell^j)^{r_j(G)}$ it is easy to see that $$h_G = \prod_{j=1}^{\lfloor n/2\rfloor} (\ell^{-1};\ell^{-1})_{r_j(G[\ell^n])} \prod_{j=\lfloor n/2\rfloor +1}^{n} (\ell^{-1};\ell^{-2})_{\lceil r_j(G[\ell^n])/2\rceil}.$$ In particular, $\mu(G) \asymp \frac{\mid \wedge^2G[\ell^n]\mid }{{\rm Aut}(G)}$.
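For instance, for the trivial group all the products above are empty, so $\mu(\{1\})=c_\ell$: the mass that $\mu$ assigns to the trivial group is exactly the limiting probability that a large random symmetric matrix over $\mathbb{F}_\ell$ is invertible.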
Now, for any group $G$, it follows from Theorem \ref{totmommeas} together with the calculations in Lemma \ref{upperbound} that $$\mathbb{E}_{\mu_g}\#{\rm Surj}\left(*,G\right)\leq c\cdot |\wedge^2G[\ell^n]|,$$ so that in particular it follows that \begin{equation}\label{eq-star} \mu_g(G)\ll \mu(G).\end{equation} By \eqref{eq-star} and dominated convergence, it follows that $\mu$ is a probability measure. It remains to show that $\mu$ has the predicted moments. By dominated convergence again, it follows from \eqref{eq-star} that for any group $H$, we have $\lim_g\mathbb{E}_{\mu_g}\#{\rm Surj}\left(*,H\right) = \mathbb{E}_{\mu}\#{\rm Surj}\left(*,H\right)$. This implies \begin{align*} \limsup_g\sum_{\omega_H,\psi_H}\mathbb{E}_{\mu_g}\#{\rm Surj}\left(*,(H,\omega_H,\psi_H)\right)&= \limsup_g\mathbb{E}_{\mu_g}\#{\rm Surj}\left(*,H\right)\\ &=\mathbb{E}_{\mu} \#{\rm Surj}\left(*,H\right)\\ &=\sum_{\omega_H,\psi_H}\mathbb{E}_{\mu}\#{\rm Surj}\left(*,(H,\omega_H,\psi_H)\right).\\ \end{align*} On the other hand, by Fatou's Lemma, for each pair $(\omega_H,\psi_H)$ we have $$\mathbb{E}_{\mu}\#{\rm Surj}\left(*,(H,\omega_H,\psi_H)\right)\leq \liminf_g\mathbb{E}_{\mu_g}\#{\rm Surj}\left(*,(H,\omega_H,\psi_H)\right).$$ Since $\liminf \leq \limsup$, summing over the pairs $(\omega_H,\psi_H)$ forces equality termwise, and so the claim follows. \end{proof} \begin{lemma}\label{momimpmeas} The measure $\mu$ is determined by its moments. \end{lemma} \begin{proof} Suppose $\mu'$ is another measure with the same moments. Note that we immediately get the inequality $$\mu'(G,\omega_G,\psi_G)\leq \frac{1}{\left| {\rm Aut}(G,\omega_G,\psi_G) \right| \cdot |{\rm Sym}^2G[\ell^n]|}.$$ Let $V$ be the vector space of functions on triples $(G,\omega_G,\psi_G)$ where $\psi_G$ is invertible. Write $U$ for the column vector whose components are $\mu'(G,\omega_G,\psi_G)$, and let $M$ be the matrix whose components are $$\#{\rm Surj}\left((G,\omega_G,\psi_G),(H,\omega_H,\psi_H)\right).$$ Then $MU=T$ by assumption, where $T$ has components $|{\rm Sym}^2G[\ell^n]|^{-1}$.
Now let $D$ be the diagonal matrix with entries $\#{\rm Aut}(G,\omega_G,\psi_G)\cdot|{\rm Sym}^2G[\ell^n]|$. Then $MD^{-1} \cdot DU = T$. Now $DU$ and $T$ are both in $L^\infty(V)$, and the rows of $MD^{-1}-I$ have sum $c_{\ell}^{-1}-1<1$, so $I-MD^{-1}$ has operator norm less than 1 and $MD^{-1}$ is therefore invertible as an operator on $L^\infty(V)$. Thus, $DU= (MD^{-1})^{-1} T $ uniquely determines $DU$, and thus $U$. \end{proof} \subsection{The non-linear model} We define a non-linear model along the lines of \cite{LT}, but taking the pairings $\psi_G$ into account. We then use our work on the linear model above to prove that the non-linear model converges to the same measure $\mu$, resolving in particular \cite[Conjecture 3.1]{LT}. So, let $\nu^n_g$ be defined as follows. Take $q\in\mathbb{Z}_{\ell}$ to be any element such that $\ell^n|| q-1$. Now we define the measure $\nu^n_g$ to be the pushforward of the Haar measure on $F\in {\rm GSp}_{2g}^{(q)}$ to our category $\mathcal{C}_n$ under the map $F\mapsto {\rm Coker}(1-F)$. We begin by showing that $\nu^n_g$ has the ``right'' moments. First, we recall the following: \begin{lemma}\cite[Theorem 3.1]{LT}\label{nonlinearbound} Fix $H^\cdot=(H,\omega_H)$ with $\omega_H \in(\wedge^2 H)[\ell^n]$. If we forget the $\psi_G$ factor, the $\nu^n_g$-expected number of surjections from $G^\cdot=(G,\omega_G)$ onto any lift of $H^\cdot$ is equal to $0$ for $g\leq g(H)$ and $1$ for $g>g(H)$. \end{lemma} We now check that once the $\psi_H$ are taken into account, we still obtain the right moments for large enough $g$. \begin{lemma}\label{nonlinearmoments} Fix $H^\cdot=(H,\omega_H,\psi_H)\in\mathcal{C}_n$. The $\nu^n_g$-expected number of surjections $G^\cdot=(G,\omega_G,\psi_G)\rightarrow H^\cdot$ is equal to $1/|{\rm Sym}^2 H[\ell^n]|$ for $g\gg_H 1$. \end{lemma} \begin{proof} We follow closely the proof of \cite[Theorem 3.1]{LT}.
Following that proof, for $g>g(H)$ we fix a surjection $f:\mathbb{Z}_\ell^{2g}\rightarrow H$ such that $f_*\omega = \omega_H$. Let $V:={\rm Ker} f$. Then the set of all surjections onto $H$ pushing $\omega$ to $\omega_H$ forms a single orbit under pre-composition by ${\rm Sp}_{2g}(\mathbb{Z}_\ell)$. Let $S_f$ denote the set of elements $F\in{\rm GSp}^{(q)}_{2g}(\mathbb{Z}_\ell)$ such that ${\rm Im}(1-F)\subset {\rm Ker} f$, or equivalently $f=f\circ F$. Note that $S_f$ is a left torsor for ${\rm stab}(f)\subset {\rm Sp}_{2g}(\mathbb{Z}_\ell)$. We fix $F_0\in S_f$. It will also be useful to recall that $s\in {\rm stab}(f)$ if and only if $(1-s)V^*=0$. Now we consider what happens to the $\psi$-pairing under an element $sF_0$. Dualizing, we have $$f^\vee: H^\vee\hookrightarrow (\mathbb{Q}_\ell/\mathbb{Z}_\ell)^{2g}$$ and $$\psi_{sF_0}(h^\vee)=f\circ(1-sF_0)(\overline{f^\vee(h^\vee)})$$ where $\overline{\cdot}$ denotes any lift to $\mathbb{Q}_\ell^{2g}$. Since $(1-sF_0) = (1-F_0) + (1-s)F_0$, we see that $$(\psi_{sF_0}-\psi_{F_0})(h^\vee) = f\left((1-s)F_0(\overline{f^{\vee}(h^\vee)})\right).$$ Next, note that $$f^{\vee}(H^\vee[\ell^n])={\rm Ker}(q-F_0)\mid_{(\mathbb{Q}_\ell/\mathbb{Z}_\ell)^{2g}[\ell^n] }= V^*[\ell^n],$$ which is therefore killed by $(1-s)$. Letting $v=F_0(\overline{f^{\vee}(h^\vee)})$, it follows that $f((1-s_1s_2)(v)) = f((1-s_1)v) +f\circ s_1( (1-s_2)v) = f((1-s_1)v)+f((1-s_2)v)$, and thus we see that the map \begin{align*} R:{\rm stab}(f) &\rightarrow {\rm Hom}(H^\vee[\ell^n],H[\ell^n]) \\ s &\mapsto \psi_{sF_0}-\psi_{F_0} \end{align*} is a group homomorphism. The claim will thus follow if we show that for large enough $g$, the image of $R$ is of size $|{\rm Sym}^2 H[\ell^n]|$. Note that the map $R$ can be thought of as the composition of $R_0:{\rm stab}(f)\rightarrow {\rm Hom}(V^*,\mathbb{Z}_{\ell}^{2g}/V)^\dagger$ given by $R_0(s)(v^*)=(1-s)v^*$ with the restriction to $V^*[\ell^n]$. Here $^{\dagger}$ denotes the self-dual maps.
It is thus sufficient to prove that for large enough $g$, $R_0$ is surjective. Now, fix an element $\rho\in {\rm Hom}(V^*,\mathbb{Z}_{\ell}^{2g}/V)^{\dagger}$. We claim that there exists a skew-symplectic element $M$ which is a multiple of $\ell$, such that $MV^*=0$ and moreover $M$ induces the map $\rho$. In fact, this follows from the proof of Theorem \ref{totmommeas}. Now we set $s=e^M$. Note that since $M\mathbb{Z}_\ell^{2g}\subset V$, the terms of $1-e^M$ of degree at least two in $M$ map into $V$, so that $s$ induces the same map as $-M$ modulo $V$; since $\rho$ was arbitrary, this completes the proof. \end{proof} \begin{thm} The measures $\nu^n_g$ converge weak-* to $\mu$ as $g\rightarrow\infty$. \end{thm} \begin{proof} Let $\nu$ be in the weak-* closure of the $\nu^n_g$. By Lemma \ref{nonlinearbound} it follows that $\nu^n_g(G)\ll \mu(G)$, and so the moments of $\nu$ are the limits of the moments of the $\nu^n_g$, which are the same as the moments of $\mu$ by Lemma \ref{nonlinearmoments}. The claim follows by Lemma \ref{momimpmeas}. \end{proof} \subsection{Quotienting out by an additional $t$ elements}\label{quotienting} Recall that in Definition \ref{Qtmu} we defined the operation $Q$ that takes a measure on $\mathcal C_{\ell,n}$ to another measure on $\mathcal C_{\ell,n}$ obtained by quotienting each group by a random element, and defined the measure $Q^t \mu$ by iteratively applying $Q$ to $\mu$. In this section, we study $Q^t \mu$. We have the following general lemma: \begin{lemma} \label{Qmom} If $\mu$ is a measure with finite moments, then so is $Q\mu$ and $$\mathbb{E}_{Q^t\mu}\#{\rm Surj}(*,(G,\omega_G,\psi_G)) = |G|^{-t} \mathbb{E}_{\mu}\#{\rm Surj}(*,(G,\omega_G,\psi_G)).$$ \end{lemma} \begin{proof} It suffices to handle the case where $t=1$. Note that a $Q\mu$-random group is the quotient of a $\mu$-random group $H$ by the image of a random map $f:\mathbb{Z}_\ell\rightarrow H$.
So a surjection from a $Q\mu$-random group to $(G,\omega_G,\psi_G)$ is the same as a surjection $\phi$ from a $\mu$-random group $(H,\omega_H,\psi_H)$ together with a map $f:\mathbb{Z}_{\ell}\rightarrow{\rm Ker}\phi$. Noting that such a kernel has index $|G|$ in $H$, we get \begin{align*} &\mathbb{E}_{Q\mu}\#{\rm Surj}(*,(G,\omega_G,\psi_G))\\ &= \sum_{(H,\omega_H,\psi_H)} \mu(H,\omega_H,\psi_H)\frac{\#\{\phi:(H,\omega_H,\psi_H)\twoheadrightarrow(G,\omega_G,\psi_G),f:\mathbb{Z}_{\ell}\rightarrow{\rm Ker}\phi\}}{|H|}\\ &= \sum_{(H,\omega_H,\psi_H)} \mu(H,\omega_H,\psi_H)\frac{\#{\rm Surj}\left((H,\omega_H,\psi_H),(G,\omega_G,\psi_G)\right)}{|G|}\\ &=|G|^{-1} \mathbb{E}_{\mu}\#{\rm Surj}(*,(G,\omega_G,\psi_G)) \end{align*} as desired. \end{proof} Since the moments of $Q^t\mu$ are smaller than the moments of $\mu$, the same proof as for Lemma \ref{momimpmeas} applies in this setting and gives: \begin{lemma}\label{Qmomimpmeas} The measure $Q^t\mu$ is determined by its moments. \end{lemma} We now proceed to compute $Q^t\mu$. Note that while $\mu$ is supported only where $\psi_G$ is an isomorphism, $Q^t\mu$ could potentially be supported where ${\rm Coker}\psi_G$ has $\ell$-rank at most $t$. \begin{thm}\label{quotient-measure-formula} If $(G,\omega_G,\psi_G)$ is such that the image $\psi_G(G^\vee[\ell])$ in $G[\ell]$ has codimension $s\leq t$, then $$Q^t\mu(G,\omega_G,\psi_G) = \frac{1}{|{\rm Aut}(G,\omega_G,\psi_G)|\cdot |{\rm Sym}^2G[\ell^n]|\cdot|G|^{t}}\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}.$$ \end{thm} \begin{proof} Recall that we are looking for the measure of pairs of maps $M:\mathbb{Z}_\ell^{2g}\rightarrow \mathbb{Z}_{\ell}^{2g}, B:\mathbb{Z}_{\ell}^t\rightarrow \mathbb{Z}_{\ell}^{2g}$ such that $M$ is skew-symplectic, the cokernel of $(M+\ell^n)\oplus B$ is isomorphic to $G$, and the pushforward of $\psi$ under $M+\ell^n$ is $\psi_G$. Fix a surjection $f$ with $f\omega = \omega_G$.
By Lemma \ref{eqdist}, there are $(1-o_g(1))\cdot\frac{|G|^{2g}}{\#\wedge^2G}$ of these. By Lemma \ref{allsur}, for large enough $g$ there is at least one $M$ satisfying $f \circ (M + \ell^n) = 0.$ The set of all such $M$ forms a coset of the group of those matrices $N$ with $f\circ N=0$. Identifying $L=\mathbb{Z}_\ell^{2g}$ with $L^\ast$ via the symplectic pairing, $M$ can be viewed as a self-dual map from $L^\ast$ to $L,$ and $\ell^n \cdot 1$ becomes $\ell^n$ times the identification map; to ease notation, we continue to refer to this map as $\ell^n$ below. Write $$G=\oplus_{i=1}^r \left(\mathbb{Z}/\ell^{m_i}\right)\cdot b_i.$$ Pick a basis $e_i$ for $L$ so that $f(e_i)=b_i$ with $b_i=0$ for $i>r$ by convention. Let $b_i^\vee$ denote the dual basis for $G^\vee.$ Let $A$ denote the matrix of $M+\ell^n$ with respect to the bases $e_i$ and $e_i^*$, so that $Ae_i^*=\sum_j A_{i,j}e_j$. Then the condition that $f\circ (M+\ell^n)=0$ is the condition that $A_{i,j}$ is divisible by $\ell^{m_j}$. Moreover, if we let $r_i:=\max(0,m_i-n)$ then the elements $\ell^{r_i}b_i$ form a basis for $G[\ell^n]$ and the map $\psi$ is given by $$\psi(\ell^{r_i}b_i^\vee)=\sum_j \frac{A_{i,j}}{\ell^{m_i-r_i+r_j}}\ell^{r_j}b_j.$$ Now, changing $M$ amounts to adding a symmetric matrix $A'$ to $A$. Clearly, $A'$ must satisfy $\ell^{\max(m_i,m_j)} \mid A'_{i,j}$. The Haar measure of all such $A'$ is easily computed to be $\frac{\mid\wedge^2G\mid}{|G|^{2g}}$. Moreover, exactly as in the proof of Theorem \ref{totmommeas} we can evidently make any $\psi_G$ which is compatible with $\omega_G$ occur by picking an appropriate $A'$. All such $\psi_G$ occur with equal measure as they are distinct cosets of allowable matrices $A'$. Let $C$ be the $2g\times(2g+t)$ matrix of $(M+\ell^n)\oplus B$ in the bases $e_i,e_i^\vee$ and an arbitrary basis for the domain $\mathbb{Z}_\ell^t$ of $B$.
We need the Haar measure of all such $C$ which are surjective onto the kernel of $f$ and which induce the pairing $\psi_G$. The set of those $C$ which map into the kernel of $f$ and induce $\psi_G$ has Haar measure $\frac{|\wedge^2G|}{|{\rm Sym}^2G[\ell^n]|\cdot|G|^{2g+t}}$ by the above. We restrict to such $C$ from now on. Let $C'$ be the matrix $C$ with the $i$'th row divided by $\ell^{m_i}$ for $1\leq i\leq r$, and let $C_0$ be the reduction of $C'$ modulo $\ell$. Note that $C_0$ is of the shape $$\begin{pmatrix}\psi_* & D & B_1 \\ 0 & X=X^t & B_2\end{pmatrix}$$ with $\psi_*$ being determined by $\psi_G$, and $D,B_1,X,B_2$ being Haar-random with the only condition that $X$ is symmetric. For $C$ to map surjectively onto ${\rm Ker} f$ is equivalent to $C_0$ being surjective, or equivalently for the rank of $C_0$ to be $2g$. This in particular requires $\begin{pmatrix} X & B_2 \end{pmatrix}$ to be surjective. The lemma below follows an unpublished note of Robert Rhoades where he computes the number of symmetric matrices having a fixed rank over a field. \begin{lemma}\label{symmetricprob} Let $E=\begin{pmatrix} X & B_2 \end{pmatrix}$ be a random $m\times (m+t)$ matrix over $\mathbb{F}_{\ell}$ with $X=X^t$. Then as $m\rightarrow\infty$, the probability that $E$ is surjective tends to $\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}$. \end{lemma} \begin{proof} Define $(x)_j:= \prod_{i=1}^j (1-x^i)$, and ${a\choose b}_x:=\frac{(x)_a}{(x)_b(x)_{a-b}}$. Let $c=\prod_{i=1}^{\infty} (1+\ell^{-i})^{-1}.$ Let $I(n,j)$ be the number of symmetric $n\times n$ matrices which have corank $j$. We claim that $I(n,j)=I(n-j,0)\ell^{j(n-j)}\cdot {n\choose j}_{\ell^{-1}}$. To see this, note that an $n\times n$ self-dual map $\phi:L\rightarrow L^\ast$ of rank $n-j$ has kernel a $j$-dimensional subspace $W$, and induces a self-dual map on the quotient space $L/W$. The number of $j$-dimensional subspaces is $\ell^{j(n-j)}\cdot {n\choose j}_{\ell^{-1}}$ and so the claim follows.
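For instance, for $2\times 2$ matrices the claim gives $$I(2,1)=I(1,0)\cdot\ell\cdot{2\choose 1}_{\ell^{-1}}=(\ell-1)\cdot\ell\cdot(1+\ell^{-1})=\ell^2-1,$$ in agreement with the direct count: among the $\ell^3$ symmetric matrices $\begin{pmatrix} a & b \\ b & d \end{pmatrix}$ over $\mathbb{F}_\ell$, exactly $\ell^2$ are singular, one of which is the zero matrix.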
Letting $n\rightarrow\infty$ we see that the probability of a large symmetric matrix having corank $j$ is $\frac{c\ell^{-\frac{j^2+j}{2}}}{(\ell^{-1})_j}$. Now, if $X$ has corank $j\leq t$ the probability that $B_2$ surjects from $\mathbb{F}_\ell^t $ onto the cokernel of $X$ is the probability that a random $j\times t$ matrix is surjective, which is $\prod_{i=t-j+1}^{t} (1-\ell^{-i}) = \frac{(\ell^{-1})_t}{(\ell^{-1})_{t-j}}$. Thus, the probability that $E$ is surjective tends to $$c\sum_{j=0}^t \frac{\ell^{-\frac{j^2+j}{2}}(\ell^{-1})_t}{(\ell^{-1})_j(\ell^{-1})_{t-j}} = c\sum_{j=0}^t \ell^{-\frac{j^2+j}{2}}{t\choose j}_{\ell^{-1}}=c\prod_{i=1}^{t} (1+\ell^{-i})=\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}$$ by the $q$-binomial theorem. This completes the proof. \end{proof} Assuming that $\begin{pmatrix} 0 & X & B_2 \end{pmatrix}$ is surjective, let $K$ denote its kernel. Then write $F$ for the $r\times t$ matrix representing the restriction of $\begin{pmatrix} D & B_1 \end{pmatrix}$ to $K$. Then $C'$ is surjective if and only if the matrix $\begin{pmatrix} \psi_* & F \end{pmatrix}$ is surjective. \begin{lemma} The corank of $\psi_*$ is equal to the codimension of $\psi_G(G^\vee[\ell])$ in $G[\ell]$. \end{lemma} \begin{proof} Notice that the matrix $\psi_*$ in the bases $b_i^{\vee}, b_i$ represents the reduction of the map $\psi_G^{\vee}:G^\vee/\ell^n\rightarrow G/\ell^n$. Thus its corank is equal to the dimension of $G/(\ell G+{\rm Im}\,\psi_G^{\vee})$. Dualizing back gives the result. \end{proof} Given the lemma, the only remaining condition is for $F$ to be surjective onto the cokernel of $\psi_*$. The probability that a random map from $\mathbb{F}_{\ell}^t$ to $\mathbb{F}_{\ell}^s$ is surjective is $\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}$, which completes the proof. \end{proof} \subsection{Proof of the stability theorem} \begin{thm}[Stability of $\mu_{n,u}$]\label{CLStability} Fix $n \in \mathbb{Z}_{> 0}$ and $u \in \mathbb{Z}_{\geq 0}.$ Fix $S \subset | \mathcal{C}_n |$ a finite subset.
Fix $\epsilon > 0.$ The measure $\mu_{n,u}$ enjoys the following stability property: there is some $\delta = \delta(S,\epsilon) > 0$ and a finite subset $T = T(S,\epsilon)\subset|\mathcal{C}_n|$ satisfying: \begin{quote} If $\mu$ is any probability measure on $|\mathcal{C}_n|$ satisfying $$\left| \mathbb{E}_{\mu} \left(\# \mathrm{Surj}(\bullet, \Gamma) \right) - \mathbb{E}_{\mu_{n,u}} \left(\# \mathrm{Surj}(\bullet, \Gamma) \right) \right| < \delta \text{ for all } \Gamma \in T,$$ then $$\left| \mu(\Gamma) - \mu_{n,u}(\Gamma) \right| < \epsilon \text{ for all } \Gamma \in S.$$ \end{quote} \end{thm} \begin{proof} The statement is equivalent to the following: \begin{quote}\label{limmeaus} If $\nu_i$ is a sequence of measures such that for all $A^{\cdot}\in |\mathcal{C}_n|$ we have $$\mathbb{E}_{\nu_i} \left(\# \mathrm{Surj}(\bullet, A^\cdot) \right) \to \mathbb{E}_{\mu_{n,u}} \left(\# \mathrm{Surj}(\bullet, A^\cdot) \right)$$ then $\nu_i$ weak-* converges to $\mu_{n,u}$ (i.e. converges pointwise on elements of $\mathcal{C}_n$). \end{quote} We set $\nu_i$ to be a sequence as in the statement above. We set $$f_{A^\cdot}(X^\cdot)=\#{\rm Surj}(X^\cdot,A^\cdot).$$ For a positive integer $e$, we set $\mathcal{C}_{n,e}$ to be the subset of $\mathcal{C}_n$ consisting of triples whose underlying abelian group is $\ell^e$-torsion. Note that there is a natural pushforward functor $\Phi_e:\mathcal{C}_n\rightarrow\mathcal{C}_{n,e}$ induced by the map $A\mapsto A\otimes\mathbb{Z}/\ell^e\mathbb{Z}$. Note that for $X^\cdot\in\mathcal{C}_n$ and $A^\cdot\in\mathcal{C}_{n,e}$ we have ${\rm Surj}(X^\cdot,A^\cdot)\cong{\rm Surj}(\Phi_e(X^\cdot),A^\cdot)$. In particular, for any measure $\nu$ on $\mathcal{C}_n$, $$\int_{\mathcal{C}_n} f_{A^\cdot}(\Phi_e(X^\cdot)) d\nu(X^\cdot) =\int_{\mathcal{C}_{n,e}} f_{A^\cdot}(Y^\cdot) d\Phi_e(\nu)(Y^\cdot).$$ We set $\nu^e_i:=\Phi_e(\nu_i)$ for all $e>0$.
First, we shall prove the following proposition: \begin{prop}\label{tailbound} For all $e>0$, $\epsilon>0$, and $A^\cdot\in \mathcal{C}_{n,e}$ there exists an integer $c$ such that $$\int_{|Y|>c} f_{A^\cdot}(Y^\cdot) d\nu^e_i(Y^\cdot) <\epsilon$$ for all $i$. \end{prop} Let $Ab_e$ be the category of finite abelian groups of exponent dividing $\ell^e$. There is a natural forgetful map $F:\mathcal{C}_{n,e}\rightarrow Ab_e$ which satisfies $$\#{\rm Surj}(A^\cdot,B^\cdot)\leq \#{\rm Surj}(A,B).$$ It is therefore sufficient to prove the following statement: \begin{prop}\label{tailboundab} For all $e>0$, $A\in Ab_e$, and $\epsilon>0$ there exists an integer $c$ such that $$\int_{|X|>c} \#{\rm Surj}(X,A) dF(\nu^e_i)(X) <\epsilon$$ for all $i$. \end{prop} \begin{proof} Consider $A':=A\oplus \mathbb{Z}/\ell\mathbb{Z}$. For $c>\ell^{eM}$, any $X$ with $|X|>c$ satisfies that the rank of $X[\ell]$ is larger than $M$. We claim that $$\#{\rm Surj}(X,A')\geq \#{\rm Surj}(X,A)\left(\ell^{M-{\rm rk} A[\ell]}-1\right).$$ To see this, note that for any surjection $f:X\rightarrow A$ we may pick a subgroup $Y\subset X$ which surjects onto $A$ with at most ${\rm rk} A[\ell]$ generators. Now the number of liftings of $f$ to a surjection onto $A'$ is at least $\#{\rm Surj}(X/Y,\mathbb{Z}/\ell\mathbb{Z})$, which is at least of size $\ell^{M-{\rm rk} A[\ell]} -1$, and this can be made arbitrarily large. Thus for any $\epsilon'$ we may find a sufficiently large $c$ such that we have \begin{align*} \int_{|X|>c} \#{\rm Surj}(X,A) dF(\nu^e_i)(X)&\leq \epsilon'\int_{|X|>c} \#{\rm Surj}(X,A') dF(\nu^e_i)(X)\\ &\leq\epsilon'\int_{Ab_e} \#{\rm Surj}(X,A') dF(\nu^e_i)(X)\\ &\leq\epsilon'\int_{\mathcal{C}_n}\sum_{A'^\cdot\in F^{-1}(A')}f_{A'^\cdot}(X)d\nu_i(X)\\ \end{align*} By assumption $\int_{\mathcal{C}_n}\sum_{A'^\cdot\in F^{-1}(A')}f_{A'^\cdot}(X)d\nu_i(X)$ is absolutely bounded (in a manner depending only on $A$ and the sequence $\nu_i$), so by taking $\epsilon'$ sufficiently small we obtain our desired result.
\end{proof} Let $\nu$ be a measure in the weak-* closure of $\nu_i$. We will show that $\nu= \mu_{n,u}$. By passing to a subsequence, we may assume that $\nu_i $ converge weak-* to $\nu$. \begin{lemma} For any $e>0$, $ \Phi_e (\nu)$ is in the weak-* closure of $\nu_i^e $. \end{lemma} \begin{proof} It suffices to prove, for all $G \in \mathcal {C}_{n,e}$, that \[ \lim_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G)) = \nu ( \Phi_e^{-1}(G)).\] For a natural number $r\geq n$, let $\mathcal C_{n,>r}$ be the set of elements of $\mathcal C_{n}$ whose underlying finite abelian group is not $\ell^r$-torsion. We will first prove that $\limsup_{i \rightarrow \infty} \nu_i ( \mathcal C_{n,>r} ) \leq \frac{1}{ \ell^r (\ell-1) }$. To do this, note that the number of elements of $\mathcal C_n$ whose underlying abelian group is isomorphic to $\mathbb Z/\ell^{r+1}$ is at most $\ell^n$, because there are at most $\ell^n$ choices of $\psi$ and $\omega$ must vanish since $\wedge^2 \mathbb Z/\ell^{r+1} = 0$. Furthermore each element of $\mathcal C_{n,>r}$, because it is not $\ell^r$-torsion, has at least $\ell^r (\ell-1)$ surjective maps to $\mathbb Z/\ell^{r+1}$. So \begin{align*} \ell^r (\ell -1) \nu_i ( \mathcal C_{n,>r} ) &\leq \int_{ \mathcal C_{n,>r} } \#{\rm Surj} (X, \mathbb Z/ \ell^{r+1} )\, d\nu_i(X) \\ &\leq \int_{ \mathcal C_{n} } \#{\rm Surj} (X, \mathbb Z/ \ell^{r+1})\, d\nu_i(X) \\ &= \sum_{ \substack {Y \in \mathcal C_n \\ F(Y) \cong \mathbb Z/\ell^{r+1} }}\int_{\mathcal C_n} \#{\rm Surj}(X,Y)\, d\nu_i(X) \end{align*} which is a sum of at most $\ell^{n}$ terms, each of which converges as $i$ goes to $\infty$ to \[ \int_{\mathcal C_n} \#{\rm Surj}(X,Y)\, d \mu_{n,u}(X) \leq \frac{1}{ \left|{\rm Sym}^2 (\mathbb Z/\ell^{r+1}) [\ell^n]\right| }= \frac{1}{\ell^n}.\] Hence the sum converges as $i$ goes to $\infty $ to at most $1$, giving the statement.
Using this, \begin{align*} \limsup_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G)) &\leq \limsup_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G) \cap \mathcal C_{n,>r} ) + \limsup_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G) \cap \mathcal C_{n,r} ) \\ &\leq \limsup_{i \rightarrow \infty} \nu_i ( \mathcal C_{n,>r} ) + \limsup_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G) \cap \mathcal C_{n,r} ) \\ &\leq \frac{1}{ \ell^r (\ell-1)} + \limsup_{i \rightarrow \infty} \nu_i ( \Phi_e^{-1} ( G) \cap \mathcal C_{n,r} ) \\ &=\frac{1}{ \ell^r (\ell-1)} + \nu ( \Phi_e^{-1} ( G) \cap \mathcal C_{n,r} ) \leq \frac{1}{ \ell^r (\ell-1) }+ \nu ( \Phi_e^{-1} (G) ), \end{align*} where the key step holds because $\Phi_e^{-1} ( G) \cap \mathcal C_{n,r} $ is finite, since groups in it have rank at most the rank of $G$ and are $\ell^r$-torsion. Letting $r\rightarrow\infty$ gives $\limsup_i \nu_i(\Phi_e^{-1}(G))\leq \nu(\Phi_e^{-1}(G))$; conversely, pointwise convergence on the finite sets $\Phi_e^{-1}(G)\cap\mathcal C_{n,r}$ gives $\liminf_i \nu_i(\Phi_e^{-1}(G))\geq \nu(\Phi_e^{-1}(G)\cap\mathcal C_{n,r})$ for every $r$, and the claim follows. \end{proof} Then using the proposition and the lemma, for all $A^\cdot\in\mathcal{C}_{n,e},\epsilon>0$ we can find $c$ such that we have \begin{align*} \int_{\mathcal{C}_n} f_{A^\cdot}(X^\cdot) d\nu(X^\cdot)&=\int_{\mathcal{C}_n} f_{A^\cdot}(\Phi_e(X^\cdot)) d\nu(X^\cdot)\\ &=\int_{\mathcal{C}_{n,e}} f_{A^\cdot}(Y^\cdot) d\Phi_e(\nu)(Y^\cdot)\\ &\geq\int_{|Y|<c} f_{A^\cdot}(Y^\cdot) d\Phi_e(\nu)(Y^\cdot)\\ &= \lim_{i\rightarrow\infty}\int_{|Y|<c} f_{A^\cdot}(Y^\cdot) d\nu^e_i(Y^\cdot)\\ &\geq \lim_{i\rightarrow\infty}\int_{\mathcal{C}_{n,e}} f_{A^\cdot}(Y^\cdot) d\nu^e_i(Y^\cdot) - \epsilon\\ &= \lim_{i\rightarrow\infty}\int_{\mathcal{C}_{n}} f_{A^\cdot}(\Phi_e(X^\cdot) ) d\nu_i(X^\cdot) - \epsilon\\ &= \int_{\mathcal{C}_n} f_{A^\cdot}(X^\cdot) d\mu_{n,u}(X^\cdot) - \epsilon\\ \end{align*} Taking $\epsilon$ to 0 we see $\int_{\mathcal{C}_n} f_{A^\cdot}(X^\cdot) d\nu(X^\cdot)\geq\int_{\mathcal{C}_n} f_{A^\cdot}(X^\cdot) d\mu_{n,u}(X^\cdot)$ and thus they are equal by Fatou's Lemma. The Theorem then follows by Lemma \ref{Qmomimpmeas}.
\end{proof} \subsection{Relating $\mu$ to Malle's conjecture for class groups of number fields with $\ell$-power roots of unity} We define a map $\psi:G^\vee[\ell^n]\rightarrow G[\ell^n]$ to be \emph{allowable} if $\langle \psi(\alpha),\beta\rangle = \langle \psi(\beta), \alpha\rangle $ whenever $\ell^r\alpha=\ell^s\beta=0$ with $r+s\leq n$. Let $A_{s,n}(G)$ be the number of maps $\psi:G^\vee[\ell^n]\rightarrow G[\ell^n]$ such that the corank of $\psi(G^\vee[\ell])$ in $G[\ell]$ is $s$. \begin{lemma} The universal measure assigns to a group $G$ the value $$Q^t\mu(G) = \frac{\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}\ell^{R_n(G)}}{|{\rm Aut}(G)||{\rm Sym}^2G[\ell^n]|\cdot|G|^{t}}\sum_{s=0}^t A_{s,n}(G)\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}$$ where $R_n(G)$ is defined by \[ R_n\left( \prod_{i=1}^r (\mathbb Z/\ell^{e_i} \mathbb Z) \right) = \sum_{\substack{ 1\leq i < j \leq r\\ \max(e_i,e_j) \leq n }} \min( e_i, e_j, n-\max(e_i,e_j)).\] \end{lemma} \begin{proof} By the orbit-stabilizer theorem, $$Q^t\mu(G)=\sum_{(\omega_G,\psi_G)} \frac{|{\rm Aut}(G)|}{|{\rm Aut}(G,\omega_G,\psi_G)|}Q^t\mu(G,\omega_G,\psi_G)$$ where the sum is over all pairs of $\omega_G,\psi_G$ yielding an $\ell^n$-BEG. Applying Theorem \ref{quotient-measure-formula} yields $$Q^t\mu(G)=\frac{\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}}{|{\rm Aut}(G)||{\rm Sym}^2G[\ell^n]|\cdot|G|^{t}}\sum_{s=0}^{\min(r,t)}B_{s,n}(G)\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}$$ where $B_{s,n}(G)$ counts the number of pairs of $\psi_G,\omega_G$ which yield an $\ell^n$-BEG and such that the corank of $\psi(G^\vee[\ell])$ in $G[\ell]$ is $s$. Now, every allowable $\psi_G$ has at least one compatible $\omega_G$, and since the condition on $\omega_G$ given $\psi_G$ is an additive coset condition, the number of compatible $\omega_G$ is the same for every allowable $\psi_G$. It therefore remains to prove that the number of $\omega_G$ compatible with $\psi_G=0$ is $\ell^{R_n(G)}$. 
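For concreteness, the exponent $R_n(G)$ just defined can be evaluated mechanically. The following minimal Python transcription of the displayed formula is ours, purely for illustration; the assertions record a few hand-checked values.

```python
def R_n(exponents, n):
    """R_n(G) for G = prod_i Z/l^{e_i}Z: the sum of min(e_i, e_j, n - max(e_i, e_j))
    over pairs i < j with max(e_i, e_j) <= n, as in the displayed formula."""
    es = list(exponents)
    return sum(
        min(es[i], es[j], n - max(es[i], es[j]))
        for i in range(len(es))
        for j in range(i + 1, len(es))
        if max(es[i], es[j]) <= n
    )

# Hand-checked examples (ours):
assert R_n([5], 3) == 0      # cyclic group: the sum over pairs is empty
assert R_n([1, 1], 1) == 0   # (Z/l)^2, n = 1: min(1, 1, 0) = 0
assert R_n([1, 1], 2) == 1   # (Z/l)^2, n = 2: min(1, 1, 1) = 1
assert R_n([1, 2], 3) == 1   # Z/l x Z/l^2, n = 3: min(1, 2, 1) = 1
```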
To see this, write $G=\oplus\mathbb{Z}/\ell^{e_i}\mathbb{Z}\cdot b_i$ and write $\omega_G=\sum_{i,j}c_{i,j} b_i\wedge b_j$, where $c_{i,j}\in\mathbb{Z}/\ell^{\min(e_i,e_j)}\mathbb{Z}$. Wlog $e_i\leq e_j$. \begin{itemize} \item \emph{Case 1: $e_i\geq n$.} Then $$0=\omega_{G,e_i}(\ell^{e_j-e_i}b_j^\vee,b_i^\vee)=c_{i,j}\mod{\ell^{e_i}}$$ which implies $c_{i,j}=0$. \item \emph{Case 2: $e_j\geq n>e_i$.} Then $$0=\omega_{G,n}(\ell^{e_j-n}b_j^\vee,b_i^\vee) = c_{i,j}\ell^{n-e_i}\mod\ell^n$$ which implies $c_{i,j}=0$. \item \emph{Case 3: $e_j<n$}. Then $$0=\omega_{G,n}(b_j^\vee,b_i^\vee) = c_{i,j}\ell^{2n-e_i-e_j}\mod\ell^n.$$ This implies $\ell^{e_i+e_j-n}\mid c_{i,j}$. Since $c_{i,j}$ is only defined modulo $\ell^{e_i}$, this gives $\ell^{\min(e_i,n-e_j)}$ possibilities for $c_{i,j}$. \end{itemize} Multiplying over all pairs $(i,j)$ gives the result. \end{proof} For the case of $n=1$, Malle\cite{Malle10} conjectured that $G$ should occur with probability $$\frac{\prod_{i=t+1}^{\infty}(1+\ell^{-i})^{-1}}{|{\rm Aut}(G)||G|^t}\cdot \frac{\ell^{{r\choose 2}}(\ell^{-1})_{r+t}}{(\ell^{-1})_{t}}.$$ \begin{lemma} The above two quantities agree. In other words, our conjecture agrees with Malle's. \end{lemma} \begin{proof} Since for $n=1$ all $\psi$ are allowable, it is sufficient to show that $$\frac{\ell^{{r\choose 2}}(\ell^{-1})_{r+t}}{(\ell^{-1})_{t}} =\frac{1}{\ell^{{r+1\choose 2}}} \sum_{s=0}^{\min(r,t)}A_{s,n}(G)\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}$$ or slightly more elegantly $$\frac{\ell^{r^2}(\ell^{-1})_{r+t}}{(\ell^{-1})_{t}} = \sum_{s=0}^{\min(r,t)}A_{s,1}(G)\frac{(\ell^{-1})_t}{(\ell^{-1})_{t-s}}$$ Note that since $A_{s,1}(G)$ depends only on $G[\ell]$, it is sufficient to handle the case where $G=\mathbb{F}_{\ell}^r$, which we henceforth assume. 
The number $A_{s,1}(G)$ of maps $\mathbb{F}_{\ell}^r\rightarrow \mathbb{F}_{\ell}^r$ with image a subspace of dimension $r-s$ is the number of such subspaces, which is $\ell^{s(r-s)}{r\choose s}_{\ell^{-1}}$, multiplied by the number of surjections onto a fixed such subspace, which is $\frac{(\ell^{-1})_r}{(\ell^{-1})_s}$. Using this and dividing through, we see that the above identity reduces to $${r+t\choose r}_{\ell^{-1}}=\sum_{s=0}^{\min(r,t)}{t\choose s}_{\ell^{-1}}{r\choose s}_{\ell^{-1}}$$ which is a $q$-Vandermonde identity. \end{proof} Garton \cite{Garton} gave a very nice formula in the case of $t=0,n=1$ for $\mu(G)$. We can also recover and generalize Garton's result as follows: \begin{prop}\label{Gartonbign} Let $G$ be a finite abelian $\ell$-group satisfying $G[\ell^n] = \oplus_{i=1}^n (\mathbb{Z}/\ell^i\mathbb{Z})^{m_i}$. Then $$\mu(G)=\frac{\prod_{i=1}^{\infty}(1+\ell^{-i})^{-1}}{|{\rm Aut}(G)|}\cdot |\wedge^2G[\ell^n]|\cdot (\ell^{-1};\ell^{-1})_{m_n}\cdot \prod_{j=1}^{n-1} (\ell^{-1};\ell^{-2})_{\lceil m_j/2\rceil}.$$ \end{prop} It would be interesting to generalize this to the case of arbitrary $t$ in a simple closed form. \begin{proof} We must count the number $A_{0,n}(G)$ of allowable $\psi_G$ which are invertible. Note that each such map can be decomposed as $\psi_G=\psi^++\psi^-$ where $\psi^+$ is symmetric, and $\psi^-$ is antisymmetric. The number of symmetric $\psi^+$ is $|{\rm Sym}^2G[\ell^n]|$, and this cancels out in the compatibility condition for $\omega_G$, so all such $\psi^+$ can occur. To understand the allowability condition on $\psi^-$, we proceed as above, writing $G=\oplus\mathbb{Z}/\ell^{e_i}\mathbb{Z}\cdot b_i$. Set $r_j=\max(0,e_j-n)$. Then a basis for $G[\ell^n]$ is given by the $\ell^{r_i} b_i$ and a basis for $G^\vee[\ell^n]$ by the $\ell^{r_i}b_i^\vee$. Let $c_{i,j}\in\mathbb{Z}/\ell^{\min(e_i,e_j)}\mathbb{Z}$ be defined such that $\psi^-(\ell^{r_i}b_i^\vee)=\sum_j c_{i,j}\ell^{e_j-\min(e_i,e_j)} b_j$, so that $c_{i,j}=- c_{j,i}$ since $\psi^-$ is anti-symmetric. 
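As a numerical sanity check on the $q$-Vandermonde identity invoked in the Malle-comparison proof above, the sketch below verifies the identity in its standard weighted form, $\binom{m+n}{k}_q=\sum_j q^{j(m-k+j)}\binom{m}{k-j}_q\binom{n}{j}_q$, in exact rational arithmetic with $q=\ell^{-1}$. The code is ours, purely illustrative, and not part of the paper's computations.

```python
from fractions import Fraction

def qbinom(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q via the q-Pascal recursion
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return qbinom(n - 1, k - 1, q) + q**k * qbinom(n - 1, k, q)

def vandermonde_rhs(m, n, k, q):
    """Right-hand side of the standard weighted q-Vandermonde identity."""
    return sum(q**(j * (m - k + j)) * qbinom(m, k - j, q) * qbinom(n, j, q)
               for j in range(k + 1))

q = Fraction(1, 3)  # q = ell^{-1} with ell = 3
for m in range(5):
    for n in range(5):
        for k in range(m + n + 1):
            assert qbinom(m + n, k, q) == vandermonde_rhs(m, n, k, q)
```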
Assume wlog $e_j\geq e_i$. The allowability condition on $\psi^-$ gives the following restrictions. \begin{itemize} \item \emph{Case 1: $e_j\geq n$.} Then the allowability condition is empty, giving $\ell^{\min(e_i,e_j,n)}$ possible values for $c_{i,j}$. \item \emph{Case 2: $n>e_i+e_j.$} Then again, the allowability condition is empty, giving $\ell^{\min(e_i,e_j)}$ possible values for $c_{i,j}$. \item \emph{Case 3: $e_i+e_j\geq n>e_j$}. Then the allowability condition gives that $$0=\langle \ell^{e_i+e_j-n}b_j^\vee,b_i^\vee\rangle = c_{i,j}\ell^{e_i+e_j-n+(n-e_i)}=c_{i,j}\ell^{e_j}\mod\ell^n.$$ This implies $\ell^{n-e_j}\mid c_{i,j}$. Since $c_{i,j}$ is only defined modulo $\ell^{e_i}$, this gives $\ell^{e_i+e_j-n}$ possibilities for $c_{i,j}$. \end{itemize} We thus see that \begin{align*} &\#\{\textrm{allowable }\psi^-\}\cdot\ell^{R_n(G)} \\ &=\prod_{e_j\geq n}\ell^{\min(e_i,e_j,n)}\prod_{n>e_i+e_j}\ell^{\min(e_i,e_j,n)}\prod_{e_i+e_j>n>\max(e_i,e_j)}\ell^{(e_i+e_j-n)+(n-\max(e_i,e_j))}\\ &=\prod_{i<j}\ell^{\min(e_i,e_j,n)}\\ &=|\wedge^2G[\ell^n]| \end{align*} It remains to restrict to those $\psi$ which are invertible, which is equivalent to $\psi\mod\ell$ being invertible. Note that in the natural basis, $\psi\mod\ell$ is a block diagonal matrix whose blocks are of size $m_1,m_2,\dots,m_n$, so it is necessary and sufficient for the blocks to all be invertible. Moreover, by the computation above, for all but the last $m_n\times m_n$ block the blocks are all symmetric (since we are working modulo $\ell$), whereas for the $m_n\times m_n$ block there is no restriction. Since the probability that a random $r\times r$ symmetric matrix over $\mathbb{F}_\ell$ is invertible is $ (\ell^{-1};\ell^{-2})_{\lceil r/2\rceil}$, the claim follows. \end{proof} \section{Proof of Theorem \ref{ffmain}} \label{functionfieldproof} The goal of this section is to prove Theorem \ref{ffmain}. 
To do so, we express the moments for our intermediate measures $\mu^q_g$ in terms of point counts for certain moduli spaces. These spaces are very similar to the ones studied in \cite{EVW}, and we use their results on cohomological stability to obtain our main theorem. Fix an element $(G,\omega_*,\psi_* )\in\mathcal{C}_{\ell,n}$. We will estimate, for a random hyperelliptic curve of genus $g$ over $\mathbb F_q$, where $\mathbb F_q$ is a finite field containing the $\ell^n$th roots of unity, the average number of maps from the class group to $G$ such that the $\omega$ and $\psi$-invariants induced by the map are $\omega_*$ and $\psi_*$ respectively. The error term will be independent of $g$ for $g$ sufficiently large, but depend on $G$, and go to zero with $q$. To that end, let $N$ be a natural number with $\lfloor \frac{N-1}{2} \rfloor = g$. Let $Y_N$ be the Hurwitz space parameterizing degree $2$ covers of $\mathbb P^1$ over $\mathbb F_q$ ramified at $N$ points in $\mathbb A^1$ and also ramified at $\infty$ if $N$ is odd. Then $Y_N $ parameterizes a family of smooth, complete, hyperelliptic curves of genus $g$. Thus we have a map $a: \pi_1(Y_N) \to \mathrm{GSp}_{2g}^{\ell^n} (\mathbb Z_\ell)$, where $ \mathrm{GSp}_{2g}^{\ell^n} (\mathbb Z_\ell)$ is the subgroup of $\mathrm{GSp}_{2g}( \mathbb Z_\ell)$ whose similitude character is $1$ mod $\ell^n$. (Note that, in defining this map, we have fixed a symplectic form on $\mathbb Z_\ell^{2g}$.) Furthermore, there is a structure map $d: \pi_1(Y_N) \to \widehat{\mathbb Z}$. Let $\omega_*^o = \frac{ 2 \ell^n}{q-1} \omega_*$. Fix a surjection $\phi: \mathbb Z_\ell^{2g} \to G$ such that the pushforward of the fixed symplectic form on $\mathbb Z_\ell^{2g}$ is $\omega_*^o$. Let $H \subseteq \mathrm{GSp}_{2g}^{\ell^n} (\mathbb Z_\ell)$ be the subgroup of matrices which fix $\phi$. 
For $F \in H$, let $\psi_\phi(F) \in {\rm Hom}( G^\vee[\ell^n], G[\ell^n]) $ be the map induced by the image of $F$ in $ \mathrm{GSp}_{2g}^{\ell^n} (\mathbb Z_\ell)$, as defined in subsection \ref{abomegapsi}. \begin{lemma} The function $\psi_\phi: H \to \operatorname{Hom}( G^\vee[\ell^n], G[\ell^n])$ is a homomorphism. \end{lemma} \begin{proof}Fix $F_1,F_2\in H$. Let $\alpha$ be an element of $G^\vee [\ell^n]$. Let $\alpha_0$ be an element of $V$ that maps to $\phi^\vee (\alpha) \in A[\ell^n] \subset V/T$. By \eqref{unravelpsi}, $\psi_\phi (F_1) ( \alpha)$ is $\phi ( (1- F_1) \alpha_0) $, and similarly for $F_2$ and $F_1 F_2$. We have \[ (1- F_1F_2) \alpha_0 = (1-F_1) \alpha_0 + (1-F_2) \alpha_0 - (1-F_1) (1-F_2) \alpha_0 \] and since $F_2$ fixes $\phi$, \[ F_2 \phi^\vee(\alpha) =\phi^\vee(\alpha) \] so \[ (1-F_2) \phi^\vee (\alpha) =0 \] so \[ (1-F_2) \alpha_0 \in T\] and thus \[ (1-F_1)(1-F_2) \alpha_0 \in (1-F_1) T \] so \[\phi ( (1-F_1) (1-F_2) \alpha_0 )=0 .\] \end{proof} Let $Y_N^{ G, \omega_*^o, \psi_*}$ be the finite \'{e}tale covering space of $Y_N$ corresponding to the subgroup $H^*$ of $\pi_1(Y_N)$ consisting of elements $\sigma$ with $a(\sigma) \in H$ and $\psi_\phi (a(\sigma) ) = d (\sigma)\cdot \psi_*$. \begin{lemma} The image of $H \cap Sp_{2g} (\mathbb Z_\ell)$ under $\psi_\phi$ is the set of homomorphisms whose induced pairing on $G^{\vee}[\ell^n]$ is symmetric, which has cardinality $|{\rm Sym}^2(G) [ \ell^n]| $. \end{lemma} \begin{proof} This is exactly what is proven at the end of the proof of Lemma \ref{nonlinearmoments}, where this map is referred to as $R$. \end{proof} \begin{lemma}\label{ffieldcompare} Let $(G,\omega_*,\psi_*)\in\mathcal{C}_{\ell,n}$. Then $\left| Y_N^{ G, \omega_*^o, \psi_*}( \mathbb F_q) \right| $ is equal to $|{\rm Sym}^2(G) [ \ell^n]| $ times the sum over points in $Y_N(\mathbb F_q)$ of the number of surjections from the class group of the corresponding function field to $G$ with $\omega_C^o= \omega_*^o$ and $\psi_C = \psi_*$. 
\end{lemma} \begin{proof} For an element $y \in Y_N(\mathbb F_q)$, with Frobenius element $F \in \pi_1(Y_N)$, the number of points in $Y_N^{ G, \omega_*^o, \psi_*}( \mathbb F_q)$ lying over $y$ is equal to \[ | \{ g\in \pi_1(Y_N)/ H^* | g^{-1} F g \in H^* \} | \] \[ = | \{ g\in \pi_1(Y_N)/ H^* | a(g)^{-1} a(F) a(g) \in H, \psi_\phi ( a(g)^{-1} a(F) a(g) ) = \psi_* \} | \] because $d (g^{-1} F g) = d(F) =1 $. Now $a(g)^{-1} a(F) a(g) \in H$ if and only if $\phi \circ a(g)^{-1} a(F) a(g) = \phi$, which occurs if and only if $(\phi \circ a(g)^{-1}) \circ a(F) = (\phi \circ a(g)^{-1})$. Similarly, we have $ \psi_\phi ( a(g)^{-1} a(F) a(g) ) = \psi_{\phi \circ a(g)^{-1}} ( a(F))$. So this count is equal to \[ \sum_{ \substack{ \phi' : \mathbb Z_\ell^{2g} \to G \\ \phi' \circ a(F) = \phi' \\ \psi_{\phi'}(a(F)) = \psi_*\\ \omega_{\phi'} = \omega_*^o }}| \{ g \in \pi_1(Y_N)/ H^* | \phi \circ a(g)^{-1} = \phi' \} | \] with the condition on $\omega_{\phi'}$ following from the existence of a $g$ with $\phi \circ a(g)^{-1} = \phi' $. Because $Sp_{2g}$ acts transitively on the surjections to $G$ with a given value of $\omega^o$, and the image of $\pi_1$ under $a$ contains $Sp_{2g}$, the cardinality $| \{ g \in \pi_1(Y_N)/ H^* | \phi \circ a(g)^{-1} = \phi' \} |$ is independent of the choice of $\phi'$. So we may assume $\phi' = \phi$, in which case the first condition is equivalent to $a(g) \in H$, and so the number of possibilities is equal to the cardinality of the image of the homomorphism \[ \pi:\sigma \mapsto \psi_\phi( a(\sigma)) - d(\sigma)\cdot\psi_*. \] By the previous lemma, ${\rm Im}\,\pi$ contains all the elements whose pairing is symmetric. On the other hand, $a(\sigma)$ necessarily satisfies the compatibility condition \ref{psiomegacomp} with respect to $\frac{ q^{ d(\sigma)}-1 }{2 \ell^n } \omega_*^o $. 
Because $\frac{ q^{ d(\sigma)}-1 }{2 \ell^n }$ is congruent modulo $\ell^n$ to $d(\sigma) \frac{q-1}{ 2\ell^n } $, and $ \ell^n \omega_*^o =0$, we have \[ \frac{ q^{ d(\sigma)}-1 }{2 \ell^n } \omega_*^o = d(\sigma) \frac{q-1}{2\ell^n} \omega_*^o.\] It follows that $a(\sigma)$ satisfies the compatibility condition \ref{psiomegacomp} with respect to $d(\sigma) \frac{q-1}{2\ell^n} \omega_*^o= d(\sigma) \omega_* $. Because $d(\sigma)\cdot \psi_*$ also satisfies the compatibility condition \ref{psiomegacomp} with respect to $d(\sigma)\cdot\omega_*$, it follows that $\psi_\phi( a(\sigma)) - d(\sigma)\cdot\psi_*$ defines a symmetric pairing on $G^\vee [\ell^n]$ for any $\sigma$, and so the image of $\pi$ consists exactly of those elements whose pairing is symmetric. \end{proof} \begin{lemma}\label{ffieldcover} Let $G'$ be any group with a surjection $\pi: G' \to G$ such that $\pi(G'[\ell^n]) = 0$ inside $G[\ell^n]$. Then there is a finite \'{e}tale covering from some component of $\mathbf{H}_{G', n, \overline{\mathbb F}_q}$ to $Y_N^{G , \omega_*^o, \psi_*, \overline{\mathbb F}_q}$, where $\mathbf{H}_{G',n}$ is the Hurwitz space defined in \cite[\S7.1]{EVW}. \end{lemma} \begin{proof} Both spaces are finite \'{e}tale coverings of $Y_N$. It is sufficient to find a component of $\mathbf{H}_{G',n}$ such that its geometric fundamental group, viewed as a subgroup of $\pi_1(Y_N)$, is contained in the geometric fundamental group of $Y_N^{G, \omega_*,\psi_*}$. Because the $d$ homomorphism is trivial on the geometric fundamental group, the geometric fundamental group of $Y_N^{G, \omega_*,\psi_*}$ is simply the subgroup of $\pi_1^{\mathrm{geom}}(Y_N)$ consisting of $\sigma$ with $a(\sigma) \in H$ and $\psi_\phi(a(\sigma))=0$. We can choose a surjection $\phi': \mathbb Z_\ell^{2g} \to G'$ such that $ \pi \circ \phi'= \phi$. Then some component of $\mathbf{H}_{G',n}$ has fundamental group consisting of those $\sigma$ in $\pi_1^{\mathrm{geom}}(Y_N)$ such that $a(\sigma)$ fixes $\phi'$. 
Clearly this implies that $a(\sigma)$ fixes $\phi$ and thus lies in $H$, so it remains to check that $\psi_\phi(a(\sigma)) $ vanishes for these $\sigma$. This is because $\psi_\phi$ is compatible with surjections of groups, so $\psi_\phi(a(\sigma))$ is given by the composite \[ G^\vee [\ell^n] \to G^{' \vee}[\ell^n] \to G' [\ell^n] \to G[\ell^n] \] with the middle arrow $\psi_{\phi'}(a(\sigma))$, but by our assumption on $G'$ the map $ G' [\ell^n] \to G[\ell^n] $ vanishes, so indeed $\psi_{\phi}(a(\sigma))$ vanishes for all such $\sigma$.\end{proof} \begin{lemma}\label{ffieldcount} The number of connected components of $Y_N^{ G, \omega_*^o, \psi_*}$ defined over $\mathbb F_q$ is one. \end{lemma} \begin{proof} This follows from the fact that $\pi_1^{\mathrm{geom}}(Y_N)$ acts transitively on $\pi_1(Y_N)/ H^*$. \end{proof} \begin{thm} For $q$ sufficiently large with respect to $|G|$, the number of pairs consisting of a degree $2$ cover of $\mathbb P^1_{\mathbb F_q}$, ramified at a divisor of degree $N$ in $\mathbb A^1_{\mathbb F_q}$ (plus $\infty$ if $N$ is odd), and a quotient $G$ of the $\ell$-class group with $\omega_C^o = \omega_*^o$ and $\psi_C=\psi_* $, is $\frac{q^N}{ |{\rm Sym}^2(G[\ell^n])| } +{ O (q^{N-1/2})}$. \end{thm} \begin{proof} By Lemma \ref{ffieldcompare}, this is the same as $\frac{1}{ |{\rm Sym}^2(G[\ell^n])|}$ times the number of $\mathbb F_q$-points of the space $Y_N^{ G, \omega_*^o, \psi_*}$. Because $Y_N^{ G, \omega_*^o, \psi_*}$ is a finite \'{e}tale cover of $Y_N$, it has dimension $N$. 
Let $\tilde{\ell}$ be a prime other than the characteristic of $\mathbb F_q$. By the Lefschetz fixed point formula, we have \[ \left| Y_N^{ G, \omega_*^o, \psi_*}(\mathbb F_q)\right| = \sum_{i=0}^{2N} (-1)^i \operatorname{tr} \left( {\rm Frob}_q | \; H^i_c \left( Y_N^{ G, \omega_*^o, \psi_*, \overline{\mathbb F_q}} , \mathbb Q_{\tilde{\ell}} \right) \right) .\] Because $Y_N^{ G, \omega_*^o, \psi_*}$ is geometrically irreducible, $H^{2N}_c( Y_N^{ G, \omega_*^o, \psi_*, \overline{\mathbb F_q}} , \mathbb Q_{\tilde{\ell}})$ is one-dimensional, with Frobenius acting by multiplication by $q^N$. Thus \begin{align*} \left| \left| Y_N^{ G, \omega_*^o, \psi_*}(\mathbb F_q)\right| - q^N \right| &\leq \sum_{i=0}^{2N-1} \left| \operatorname{tr} \left({\rm Frob}_q | \; H^i_c \left( Y_N^{ G, \omega_*^o, \psi_*, \overline{\mathbb F_q}} , \mathbb Q_{\tilde{\ell}} \right) \right) \right| \\ &\leq \sum_{i=0}^{2N-1} q^{i/2} \dim H^i_c \left( Y_N^{ G, \omega_*^o, \psi_*, \overline{\mathbb F_q}} , \mathbb Q_{\tilde{\ell}} \right) \\ &\leq \sum_{i=0}^{2N-1} q^{i/2} \dim H^i_c \left( Y_N^{ G'} , \mathbb Q_{\tilde{\ell}} \right) \end{align*} where $Y_N^{G'}$ is a component of $\mathbf{H}_{G',n}$ as in Lemma \ref{ffieldcover}, by Deligne's Riemann hypothesis and because $Y_N^{G'}$ is a finite \'{e}tale cover of $Y_N^{ G, \omega_*^o, \psi_*, \overline{\mathbb F_q}}$. Now by Poincar\'{e} duality, \begin{align*} \sum_{i=0}^{2N-1} q^{i/2} \dim H^i_c \left( Y_N^{ G'} , \mathbb Q_{\tilde{\ell}} \right) &= \sum_{i=1}^{2N} q^{N- i/2} \dim H^i \left( Y_N^{ G'} , \mathbb Q_{\tilde{\ell}} \right) \\ &\leq \sum_{i=1}^{2N} q^{N- i/2} C(G' \rtimes \mathbb Z/2, (0,1))^{i+1} \\ &\leq q^N \frac{ C(G' \rtimes \mathbb Z/2, (0,1))^{2}}{ \sqrt{q} } \frac{1}{1- \frac{ C(G' \rtimes \mathbb Z/2, (0,1))}{\sqrt{q}}} \\ &= O(q^{N-1/2}) \end{align*} by \cite[Proposition 7.8]{EVW}, as long as $\tilde{\ell}$ is sufficiently large (which we may freely assume). 
\end{proof} \begin{proof}[Proof of Theorem \ref{ffmain}] Lemma \ref{ffieldcompare} says that $\frac{\#Y_N^{ G, \omega_*^o, \psi_*}(\mathbb F_q)}{q^N}$ is exactly equal to the moment $\mathbb{E}_{\mu^q_g}\#{\rm Surj}(*,(G,\omega_*,\psi_*))$. The first part of Theorem \ref{ffmain} is then exactly the statement of \ref{ffieldcount}. For the second part, note that if $g,q\rightarrow\infty$ then the moments of $\mu^q_g$ converge to the moments of $\mu$. The statement then follows from Theorem \ref{CLStability}. \end{proof} \section{Data} \label{data} Below, we present computational data for the case of $K = \mathbb{Q}(\mu_3)$, $\ell = 3$, so that $t = 1$. Let $\zeta$ be a fixed primitive third root of unity in $K$. We tabulated fields of the form $K(\sqrt{z})$ for squarefree $z = m \cdot 1 + n \cdot \alpha$ with $0 \leq m,n \leq 2000$, where $\alpha = - \zeta^2$. This amounted to 3105738 fields. We remark that it is sufficient to look only in this ``first sextant'' range of $z$ for the following reasons: First, since $\zeta$ and $\zeta^2$ are both squares, it is sufficient to restrict to those $z$'s with argument between $-\pi/3$ and $\pi/3$. Second, the field $K(\sqrt{z})$ is abstractly isomorphic to $K(\sqrt{\bar{z}})$, and this isomorphism acts on $\mu_3$ by inversion. So, if the 3-part of the class group of $K(\sqrt{z})$ and its $\psi$-invariant equal $G$ and $\psi_G$ respectively, then the 3-part of the class group of $K(\sqrt{\bar{z}})$ and its $\psi$-invariant equal $G$ and $-\psi_G$ respectively. Thus, including also those $z$'s with argument between $-\pi/3$ and $0$ would have the effect of making the totals for $(G,\psi_G)$ and $(G,-\psi_G)$ identical for reasons of symmetry. Since $t = 1$, $\psi_G$ determines $\omega_G$; $\omega_G$ is the ``antisymmetric part'' of $\psi_G$. For this reason, we only record data pertaining to $\psi_G$ in the table below. 
In the below table, \begin{itemize} \item The column ``observed proportion'' records the proportion of fields \emph{with fixed class group isomorphism type} observed to have the value given in the column ``$\psi_G$.'' \item The column ``conjectured proportion'' records $\frac{Q^t \mu(G,\psi)}{Q^t \mu(G)}.$ \item The homomorphism $\psi:{\rm Hom}( G, \mu_3) \rightarrow G[3]$ is recorded as follows: Suppose \begin{align*} G &=\oplus_{i=1}^n \frac{\mathbb{Z}}{3^{n_i}\mathbb{Z}}\cdot e_i, \\ {\rm Hom}(G,\mu_3) &=\oplus_{i=1}^n\frac{\mathbb{Z}}{3\mathbb{Z}}\cdot f_i \end{align*} so that $f_i(e_j)=\zeta^{\delta_{i,j}}$. Then we write down the matrix of $\psi$ with respect to the bases $f_i,3^{n_i-1}e_i$. \item In the ``$\psi_G$'' column, we list a complete set of representatives for the isomorphism classes of $\psi_G,$ per the convention outlined in the previous bullet point, for $G \cong \mathbb{Z} / 3, \mathbb{Z}/9, \mathbb{Z}/27, \mathbb{Z}/81, \mathbb{Z}/243,$ and $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3.$ There were 1258 instances of other groups that occurred, but due to the large number of isomorphism classes of $\psi$ that occur in these cases we chose not to include them in the table below. 
\end{itemize} \begin{tabu}{| c | c | c |[2pt] c | c |} \hline class group $G$ & $\psi_G$ & total tabulated & observed proportion & conjectured proportion\\ \hline trivial & () & 2698000 & 1.0 & 1.0\\ \tabucline[2pt]{-} $\mathbb{Z} / 3$ & (0) & 89565 & 0.2516 & 0.25 \\ \hline $\mathbb{Z} / 3$ & (1) & 132764 & 0.3730 & 0.375 \\ \hline $\mathbb{Z} / 3$ & (2) & 133622 & 0.3754 & 0.375 \\ \tabucline[2pt]{-} $\mathbb{Z} / 9$ & (0) & 9186 & 0.2468 & 0.25 \\ \hline $\mathbb{Z} / 9$ & (1) & 13866 & 0.3726 & 0.375 \\ \hline $\mathbb{Z} / 9$ & (2) & 14161 & 0.3805 & 0.375 \\ \tabucline[2pt]{-} $\mathbb{Z} / 27$ & (0) & 819 & 0.2495 & 0.25 \\ \hline $\mathbb{Z} / 27$ & (1) & 1240 & 0.3778 & 0.375\\ \hline $\mathbb{Z} / 27$ & (2) & 1223 & 0.3726 & 0.375\\ \tabucline[2pt]{-} $\mathbb{Z} / 81$ & (0) & 31 & 0.2095 & 0.25 \\ \hline $\mathbb{Z} / 81$ & (1) & 58 & 0.3919 & 0.375 \\ \hline $\mathbb{Z} / 81$ & (2) & 59 & 0.3986 & 0.375 \\ \tabucline[2pt]{-} $\mathbb{Z} / 243$ & (0) & 2 & 0.6667 & 0.25 \\ \hline $\mathbb{Z} / 243$ & (1) & 0 & 0.0 & 0.375 \\ \hline $\mathbb{Z} / 243$ & (2) & 1 & 0.3333 & 0.375 \\ \tabucline[2pt]{-} $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right)$ & 0 & 0.0 & 0.0 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)$ & 317 & 0.0321 & 0.0385\\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 2 & 0 \\ 0 & 0 \end{array} \right)$ & 315 & 0.0319 & 0.0385\\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right)$ & 2008 & 0.2538 & 0.2308 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)$ & 1602 & 0.1621 & 0.1731 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 0 & 2 \\ 1 & 0 \end{array} \right)$ & 317 & 0.0321 & 0.0289 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( 
\begin{array}{cc} 1 & 2 \\ 1 & 0 \end{array} \right)$ & 1188 & 0.1202 & 0.1154 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 2 & 2 \\ 1 & 0 \end{array} \right)$ & 1171 & 0.1185 & 0.1154 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ & $\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$ & 727 & 0.0736 & 0.0865 \\ \hline $\mathbb{Z} / 3 \oplus \mathbb{Z} / 3$ &$\left( \begin{array}{cc} 2 & 1 \\ 0 & 1 \end{array} \right)$ & 1737 & 0.1758 & 0.1731\\ \hline \end{tabu} \subsection{How computations were done} \label{computationmethod} The above data was all tabulated in SAGE. We used the Hilbert symbol method, described in \S \ref{psihilbertsymbol}, for computing the homomorphism $\psi_{L/K}$ for quadratic extensions $L$ of $K = \mathbb{Q}(\mu_3)$. By the results of that section, $\psi$ is the composition: \begin{equation} \label{psiascomposition} \mathrm{Hom}(\mathrm{Cl}(L),\mu_3) \xrightarrow{\iota^{-1}} L^\times \cap O / (L^\times)^3 \xrightarrow{\text{inclusion}} L^\times \cap V / (L^\times)^3 \xrightarrow{\pi} \mathrm{Cl}(L)[3], \end{equation} where \begin{itemize} \item $L^\times \cap O$ denotes the subset of $L^\times$ pairing trivially with $O_{L_v}^\times$ under the 3-Hilbert symbol at $v$ for all finite places $v$. \item $L^\times \cap V$ denotes those elements of $L^\times$ having valuation divisible by $3$ for all finite places $v.$ \end{itemize} The isomorphism $\iota^{-1}$ is the inverse of the Hilbert symbol map described in \S \ref{psihilbertsymbol}: \begin{align} \label{sumoflocalhilbertsymbols} \iota: L^\times \cap O / (L^\times)^3 &\rightarrow \mathrm{Hom}(\mathrm{Cl}(L),\mu_3) \nonumber \\ b &\mapsto \left( f_b: x \in L^\times \backslash \mathbb{A}_{L,\mathrm{fin}}^\times / \prod O_{L_v}^\times \mapsto \sum_{v \text{ finite}} \langle x,b \rangle_{3,v} \right), \end{align} where $\langle \rangle_{3,v}$ denotes the 3-Hilbert symbol at the place $v.$ The surjection $\pi$ arises from the bottom row of the commutative 
diagram $$\begin{CD} 0 @>>> O_L^\times / (O_L^\times)^3 @>>> H^1(O_L, \mu_3) @>>> H^1(O_L,\mathbb{G}_m)[3] @>>> 0 \\ @. @V{=}VV @V{\sim}VV @VVV @.\\ 0 @>>> O_L^\times / (O_L^\times)^3 @>{j}>> L^\times \cap V / (L^\times)^3 @>{\pi}>> H^1(O_L, \mathbb{G}_m)[3] = \mathrm{Cl}(L)[3] @>>> 0 \end{CD}$$ where the top row is the Kummer exact sequence, the middle isomorphism is explicated in \S \ref{psihilbertsymbol}, $\pi(b) := \prod_v v^{\mathrm{val}_v(b)/3},$ and $j$ is the obvious inclusion map. To compute the composition from \eqref{psiascomposition}, we work backwards as follows: \begin{itemize} \item[(a)] Represent every element of $\mathrm{Cl}(L)[3]$ using a fractional ideal. \item[(b)] For every representative ideal $I$ from (a), find $\alpha_I$ for which $I^3 = (\alpha_I).$ Also, compute representatives for the global unit group $O_L^\times / (O_L^\times)^3.$ By the bottom row of the above Kummer sequence commutative diagram, every element of $L^\times \cap V / (L^\times)^3$ may be represented uniquely as $u \cdot \alpha_I$ as $I$ ranges through representative ideals and $u$ ranges over representatives for $O_L^\times / (O_L^\times)^3.$ \item[(c)] $L^\times \cap O / (L^\times)^3$ consists of those $b \in L^\times / (L^\times)^3$ for which $L(b^{1/3}) / L$ is everywhere unramified. \footnote{Recall that $O_{L_v}^\times \subset L_v^\times$ corresponds to the inertia subgroup of $G_{L_v}$ by local class field theory. Pairing trivially with $O_{L_v}^\times$ under the local $3$-Hilbert symbol for all places $v$ is thus equivalent to $L_v( (u \cdot \alpha)^{1/3} ) / L_v$ being unramified for all $v,$ i.e. $L((u\cdot \alpha)^{1/3}) / L$ being everywhere unramified.} Identify such $b$ among the representatives $u \cdot \alpha_I$ found in (b). \item[(d)] For elements $b := u \cdot \alpha_I$ determined in (c), compute the homomorphism $f_b$ from \eqref{sumoflocalhilbertsymbols}. 
\item[(e)] Do linear algebra to represent each ``standard basis element'' $f$ (depending on a choice of 3rd root of unity in $K$) as $f = f_b$ for $b = \prod_{i = 1}^k (u_i \cdot \alpha_{I_i})^{n_i}$ and integers $n_i.$ The desired homomorphism $\psi$ is uniquely determined by \begin{align*} \psi: \mathrm{Hom}(\mathrm{Cl}(L),\mu_3) &\rightarrow \mathrm{Cl}(L)[3] \\ f &\mapsto \text{ ideal class of } \prod_{i = 1}^k I_i^{n_i}. \end{align*} \end{itemize} \bigskip All infrastructure for manipulations with linear algebra, ideals, class groups, and unit groups was readily available through SAGE. However, we were unable to find pre-existing code in SAGE to compute local Hilbert symbols (for $\ell \neq 2$) needed in step (d), so we coded this ourselves. For (d), we used global methods: \begin{itemize} \item For $b \in L^\times \cap O / (L^\times)^3$ and $x \in L_v,$ the Hilbert symbol $\langle x,b \rangle_{3,v}$ equals $\langle x',b \rangle_{3,v}$ for any $x' \in L$ with $\mathrm{val}_v(x') = \mathrm{val}_v(x).$ For appropriately chosen $x',$ we used Hilbert reciprocity to express $\langle x',b \rangle_{3,v}$ as a corresponding product of local symbols ``at favorable places'' which we managed to calculate directly. \end{itemize} \bigskip We refer the interested reader to our annotated code, available at \texttt{https://sites.google.com/site/michaellipnowski/}, for further details.
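The linear algebra in step (e) amounts to solving a linear system over $\mathbb{F}_3$ (more generally over $\mathbb{F}_\ell$). The actual computations were done in SAGE; the self-contained Python sketch below is ours and uses a hypothetical toy system, merely to illustrate the kind of mod-$\ell$ elimination involved.

```python
def solve_modp(A, b, p):
    """Solve A x = b over F_p (p prime) by Gaussian elimination.
    Returns one solution as a list, or None if the system is inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [[A[i][j] % p for j in range(cols)] + [b[i] % p] for i in range(rows)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)  # inverse via Fermat's little theorem
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(M[i][j] - f * M[r][j]) % p for j in range(cols + 1)]
        pivots.append(c)
        r += 1
    if any(M[i][cols] for i in range(r, rows)):
        return None  # inconsistent system
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

# Toy example over F_3 (hypothetical data, standing in for the exponent
# vectors of the f_{b_i} and a target "standard basis element"):
A = [[1, 2], [0, 1]]
b = [2, 1]
x = solve_modp(A, b, 3)
assert x == [0, 1]
assert all(sum(A[i][j] * x[j] for j in range(2)) % 3 == b[i] % 3 for i in range(2))
```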
\section{Introduction} \noi A large literature is devoted to the effect of applying a laser field to various objects (electron beams, two-level atoms, etc). In early papers [1-3] the laser wave was modelled as purely electric, the effect of a magnetic contribution being introduced more recently~[4-5]. Experimental devices are concerned with a wide variety of atoms, but theoretical developments so far have mainly considered an atom as a two-level system; however this simplification (which seems to be adequate in most realistic cases) does not fully take into account the few-body structure of atoms. So, at least in principle, a more accurate treatment is desirable; as a very first step, it is natural to consider a two-body problem involving two charges in interaction in the presence of a plane electromagnetic wave. In this spirit hydrogen-like bound states are the most simple targets offering the two-body structure, but for simplicity we focus here on spinless particles. A relativistic two-particle system can be covariantly described by a pair of coupled Klein-Gordon equations referred to as mass-shell constraints; these constraints determine the evolution of a wave function which depends on a pair of four-dimensional arguments~[6-16]. This approach is suited for situations where particle creation can be neglected whereas other relativistic effects must be taken into account. In this article we consider a pair of charged particles undergoing some mutual interaction whereas they are coupled with an external electromagnetic field. The following assumptions are made: \noi The external field is not affected by the motion of the system. \noi This external field does not create pairs. \noi We neglect the radiation emitted by the charges in their motion. \noi Thus the validity of this picture is implicitly limited by some conditions involving the strength and the analytic shape of the field, the Compton wavelength of the particles, etc. 
The conditions under which laser fields {\em may} create pairs have been widely discussed in the literature (see Refs.~[4,17] for fermionic pairs), but here we are interested in the cases where pairs {\em cannot} be created. In the absence of external field the contact of the constraint formalism with QED is well established, through the quasi-potential approach~[18-21] or by reducing the Bethe-Salpeter (BS) equation~[22,23], and realistic mass-shell constraints can be exhibited; see Crater {\em et al.}~\cite{crater}, and see Sazdjian and Jallouli~\cite{jallou97} for the case where the mutual interaction also is of electromagnetic nature. \noi In the presence of external field, it seems natural to perform in the same manner a reduction of the BS equation and so derive the coupled wave equations. This task was carried out by Bijtebier and Broekaert \cite{bijbroek} in the case of {\em static} external potentials, but the tractability of the method for time-dependent cases is problematic. Moreover it is sometimes interesting to consider also phenomenological models of (mutual) interaction, not provided with a given BS kernel. \noi So we prefer a direct coupling to the field, constructed with the help of these reasonable prescriptions: correct limits when one of the interactions vanishes, and invariance under the spacetime symmetries that survive application of the field. \medskip In the present paper the explicit form of the mass-shell constraints is supposed to be known for the isolated system. Naturally the mutual interaction between the charges may be simply electromagnetic, but in several models this electromagnetic term is neglected in front of other forces of phenomenological origin. 
When an external field is applied, we face the old problem that relativistic interactions cannot be just linearly composed; this was understood early on by Sokolov in his attempts to construct $n$-body systems, in the context of a very different approach~[27,28]. In our framework, the trouble is that the coupled wave equations must remain compatible with each other, and this requirement is essentially nonlinear. As a result we generally do not know how to write down in closed form a new pair of compatible wave equations that also takes into account the coupling to the external field. Nevertheless this problem becomes solvable when the external potential corresponding to the external field has a particular symmetry that we have proposed to call {\em strong} translation invariance. The solution is furnished by an {\sl ansatz} elaborated in such a way that the wave equations have the correct limits when either the mutual interaction or the external potential is turned off; these equations also have the correct nonrelativistic limit. Moreover in this context, under reasonably general assumptions, the principle of isometric invariance~\cite{IJTP2009} is satisfied, namely: the coupling to the external field {\em must respect the surviving isometries}; in other words the spacetime isometries that would survive (as symmetries of the system) application of the external field to mutually independent particles should remain symmetries of the system when these particles also undergo the mutual interaction. This property was not considered when the ansatz was proposed for the first time; having it finally satisfied is a bonus. Our method was initiated by J. Bijtebier~\cite{bij89} in the simplest case of a stationary external interaction. Introducing the notion of {\em strong} translation invariance, we carried out a systematic generalization~\cite{drozNCim} which can be applied to a variety of more complicated situations.
Focusing on the case where the external interaction is due to electromagnetic plane waves, we started many years ago a discussion of the tractability of the ansatz~[32,33] and pointed out the interest of considering a monochromatic superposition of two waves; but the subject was not developed further. In the present paper our purpose is to construct (for the first time explicitly) the mass-shell constraints and to exhibit the first integrals of the motion, allowing for a reduction of the number of degrees of freedom. By the same token several complicated calculations and involved arguments given in our aforementioned articles will be replaced by a more comprehensive exposition. \medskip Section 2 is devoted to a survey of the general formalism we employ, in both the one-body and two-body cases. \noi In Section 3 we return to one-body motion, focusing on the case where the external field is made of plane electromagnetic waves. Indeed the one-body motion must be considered first, because strong translation invariance, which helps us to formulate the {\em two}-body problem in closed form, is basically defined as a property of the {\em one}-body motion in the field. We present and discuss two cases of external fields: \noi i) the single monochromatic plane wave, \noi ii) a monochromatic superposition of two such waves, which can be seen as made of two counter-propagating waves. \noi Here we show how strong translation invariance arises and we discuss its usefulness with respect to our purpose. \noi Then, in Section 4, the details of formulating the two-body wave equations in the presence of the monochromatic superposition are displayed. Section 5 is devoted to concluding remarks. \medskip {\sl Notation, terminology} $ \ $ Signature $+---$, units $c=\hbar =1$. {\em Isometry} refers to any transformation of the Poincar\'e group acting on spacetime.
Let us call {\em surviving symmetries} of a system those symmetries that are not destroyed by application of the external field. Particle labels $a,b$ run from 1 to 2. \section{Basic formulas} This section is not limited to the case of electromagnetic plane waves; here we intend to consider a more general case where the external interaction enjoys the appropriate symmetry. \noi In fact strong translation invariance, which (when it arises) helps us to formulate the two-body problem, is primarily a property of the one-body motion in the field. Therefore in the next subsection we first consider a {\em single} test particle subject to an external field. \subsection{The one-body wave equation, symmetries } Let $x, p$ be the canonical variables satisfying the standard commutation relations $ [x^\alp , p_\mu ] = i \del ^\alp _\mu $. The one-body wave function is a scalar on spacetime, say $\psi(x)$. The Klein-Gordon equation $ 2K \psi = m^2 \psi $ is written using the {\em half-squared mass operator} \beq K = \half p^2 + G , \label{1K-G} \eeq where the interaction term $G$ will be called the {\em external interaction potential}. It is a {\em scalar}, not to be confused with the electromagnetic {\em vector} potential $A_\mu$. This scheme may be obtained by quantization of a classical relativistic (and manifestly covariant) formalism where the equations of motion stem from a {\em scalar} generator $ K_{\rm cl} = \half p_{\rm cl} \cdot p _{\rm cl} + G_{\rm cl} $, the {\em eight} canonical variables $x _{\rm cl} ^\alp , { p _{\rm cl}} _\alp $ being conjugate in terms of their Poisson brackets; the generator is an obvious constant of the motion and is interpreted as half of the squared mass. Although this approach is not very popular among physicists, it is very simple and natural from a geometrical point of view, as it rests on the symplectic structure of the eight-dimensional phase space.
In this formulation the evolution parameter is affine, that is, proportional to the proper time; the role of the half-squared mass with respect to this parameter is analogous to the role of the energy with respect to the absolute time in Newtonian mechanics. \bigskip \noi {\sl Strong translation invariance}. \noi Only some fields give rise to strong translation invariance. The simplest example corresponds to an interaction potential of the form $G= G(\vec { x }, \vec { p})$, with obvious notation in the lab frame, supposed to be unique~\cite{bij89}. \medskip \noi {\em Definition $\ $ Any phase-space function is {\em strongly} translation invariant along direction $w$ when it commutes with both $w \cdot x $ and $w \cdot p $}. \noi When it commutes with {\em only} $w \cdot x $ it is said to be {\em simply} translation invariant. \medskip \noi We are interested in having the external potential strongly translation invariant. In the space of four-vectors, the directions leaving $G$ strongly translation invariant, when they exist, span a linear space $E_L$ called the {\sl longitudinal space}. For our purpose it is essential that $E_L$ admit an orthonormal basis~\footnote{But some situations of physical interest are not included in this case: for instance when the external field is a {\em single} monochromatic plane wave, it turns out that the wave vector is a direction of strong translation invariance; but it is a null vector, so in this case $E_L$ admits no orthonormal frame.}; in this situation, referred to as {\em normal}, the space of four-vectors is an orthogonal direct sum $ E = E_L \oplus E_T $ where $E_T $ is the {\sl transverse space}. \noi Any four-vector $\xi$ can be written as $$\xi = \xi_\lll + \xi_\ttt, \qquad \quad \xi_\lll = \tau \xi $$ with $\tau $ the projector onto $E_L$. The canonical variables are split accordingly, say $x = x_\lll + x_\ttt , \qquad p =p_\lll + p_\ttt $. Phase-space functions are called longitudinal (resp.
transverse) when they depend only on the longitudinal (resp. transverse) canonical variables; equivalently, they commute with all the transverse (resp. longitudinal) canonical variables. \noi The case where $E_L$ fails to admit an orthonormal basis will be referred to as {\em degenerate}. \beprop { The interaction potential is a transverse quantity}. \enprop \noi Proof $\ $ Let $w_\aaa$ be an orthonormal basis of $E_L$ (the number of values taken by the indices $\aaa , \BBB$ is just the dimension of $E_L$), so $w_\aaa \cdot w_\aaa = \vareps _ \aaa $, with $\vareps _\aaa = \pm 1$ depending on the signature of $E_L$. The projector onto $E_L$ is $$ \tau ^\alp _\beta = \sum _\aaa \vareps _\aaa w_\aaa ^\alp w _{\aaa \beta} $$ $$ x_\lll ^\alp = \tau ^\alp _\beta x^\beta = \sum _\aaa \vareps _\aaa w_\aaa ^\alp (w_\aaa \cdot x) $$ \beq [G , x_\lll ^\alp ] = \sum _\aaa \vareps _\aaa w_\aaa ^\alp [G , w_\aaa \cdot x ] = 0 \eeq Similarly $ p_\lll ^\alp = \tau ^\alp _\beta p^\beta $ and one finds \beq [G , p_\lll ^\alp ] = 0 \eeq [] \subsection{Two-body wave equations} \medskip The wave function is $\Psi (q_1 , q_2)$ where the points $q_1 , q_2$ in spacetime are canonically conjugate to the momenta $\disp p_a ^\alp =-i {\dron \over \dron q_a ^\alp}$. Notice that the arguments of $\Psi$ are denoted as $q_1 , q_2$ rather than $x_1 , x_2$ because their classical ({\em i.e.} non-quantum) analogs may fail to coincide with the positions in the whole phase space of classical relativistic dynamics~[34-36]. The coupled wave equations involve the half-squared-mass operators $H_a$ as follows \beq 2 H_a \ \Psi = m_a ^2 \ \Psi , \qquad \ a=1, 2 \label{weq2} \eeq where the commutator $ [H_1 , H_2 ] $ must vanish. First integrals (also called constants of the motion) are characterized by commuting with {\em both} $H_1$ and $ H_2 $. \noi The wave equations (\ref{weq2}) can obviously be replaced by their sum and difference.
Moreover it is convenient to define $$ P = p_1 + p_2 , \qquad Q = \half ( q_1 + q_2) , \qquad y = \half (p_1 -p_2 ) , \qquad z= q_1 - q_2 $$ and $ j_a = q_a \wedge p_a $. The Lie algebra of the Poincar\'e group is spanned by its generators $P^\alp$ and $J_{\mu \nu}$, where $J =Q \wedge P + z \wedge y = j_{1 } + j_{2 }$. \noi In the absence of {\em mutual} interaction we would simply have the following equations \beq 2 K_a \ \Psi = m_a ^2 \ \Psi \eeq \beq K_a = \half p_a ^2 + G_a \label{defKa} \eeq where $G_a (q_a , p_a )$ is the {\em external} interaction potential acting on particle $a$; in an external electromagnetic field, $K_a$ and $G_a$ are obtained by replacing $x, p$ and the charge $e$ respectively by $q_a , p_a$ and $e_a$ in $K$ and in $G$, see \cite{IJTP2009}. \medskip \noi {\sl Definition}$\qquad \ $ The couple of potentials $G_1 , G_2$ is {\em strongly invariant along direction $w$} when each potential separately is strongly invariant along $w$ in the one-body sector, say $$ [ G_1 , \ w \cdot q_1 ] = [ G_1 , \ w \cdot p_1 ] = [ G_2 , \ w \cdot q_2 ] = [ G_2 , \ w \cdot p_2 ] =0 $$ Again the {\sl longitudinal space} $E_\lll$ is defined as the span of the directions of strong translation invariance, and the question arises whether $E_\lll$ admits an orthonormal basis. In the {\em normal case} (that is, when it does admit such a basis) Proposition~1 entails that $G_1$ and $G_2$ are {\em transverse}. \medskip If the {\em external} field were zero, we would have (the superscript $\zer$ refers to the absence of external field) \beq 2 H _a^\zer \ \Psi = m_a ^2 \ \Psi \eeq where we could have $ \disp H _a^\zer = \half p_a ^2 + V^\zer _a $, but in this work we assume a {\em unipotential model}, say $V^\zer _1 = V^\zer _2 = V^\zer $, hence \beq H _a^\zer = \half p_a ^2 + V^\zer \label{Vzerunipot} \eeq and we suppose that the mutual interaction is explicitly known when the external field vanishes; in other words $V^\zer $ is given.
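As a sanity check (ours, not part of the original text), the identity $J = Q \wedge P + z \wedge y = j_1 + j_2$ stated above can be verified componentwise with a short symbolic computation; the sketch below uses sympy, with our own component conventions, the wedge being a plain antisymmetrized product.

```python
import sympy as sp

# components of the individual positions and momenta (particle labels 1, 2)
q1 = sp.symbols('q1_0:4'); q2 = sp.symbols('q2_0:4')
p1 = sp.symbols('p1_0:4'); p2 = sp.symbols('p2_0:4')

# collective variables defined in the text
P = [p1[i] + p2[i] for i in range(4)]
Q = [sp.Rational(1, 2) * (q1[i] + q2[i]) for i in range(4)]
y = [sp.Rational(1, 2) * (p1[i] - p2[i]) for i in range(4)]
z = [q1[i] - q2[i] for i in range(4)]

def wedge(a, b):
    """components (a ^ b)^{mu nu} = a^mu b^nu - a^nu b^mu"""
    return [[a[m] * b[n] - a[n] * b[m] for n in range(4)] for m in range(4)]

def add(M, N):
    return [[M[m][n] + N[m][n] for n in range(4)] for m in range(4)]

J   = add(wedge(Q, P), wedge(z, y))        # Q ^ P + z ^ y
j12 = add(wedge(q1, p1), wedge(q2, p2))    # j_1 + j_2

print(all(sp.expand(J[m][n] - j12[m][n]) == 0
          for m in range(4) for n in range(4)))   # True
```

The cross terms $q_1 \wedge p_2$ and $q_2 \wedge p_1$ cancel between the two wedges, which is the whole content of the identity.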
For simplicity let us assume that the mutual interaction takes on the form \beq V^\zer = f (Z, P^2 , y \cdot P ) \label{defVzer} \eeq where \beq Z = {z \cdot z} \ P^2 - (z \cdot P )^2 \label{defZ} \eeq is the main ingredient; note that these three dynamical variables in $V^\zer$ are mutually commuting. \noi Let us stress that the simplification which results from assuming (\ref{Vzerunipot})(\ref{defVzer}) still leaves enough generality to encompass a wide class of mutual interactions; as an example see the model of electromagnetic interaction derived from Feynman's diagrams by Jallouli and Sazdjian~\cite{jallou97}. \bigskip \noi When all interactions are present, eqs.(\ref{weq2}) hold with \beq H _a = K_a + V \label{unipotext} \eeq where $V$ is a suitable modification of $V^\zer$, to be constructed in order to satisfy the vanishing of $ [ H_1 , H_2 ]$ and to reproduce the correct limits when either of the interactions is absent. \noi In general the compatibility condition $$ [K_1 - K_2 , V ] = 0 $$ cannot be solved for $V$ in closed form. But when the external potentials $G_a$ enjoy some special symmetry, this equation can be transformed into a tractable problem, owing to a change of representation, as follows. \noi Let $ {\cal O}$ be any operator; the {\sl external-field representation} is formally defined by \beq \Psi ' = {\rm e} ^{i B} \ \Psi , \qquad \ {\cal O}' = {\rm e} ^{i B} {\cal O} {\rm e} ^{-i B} \label{transbij} \eeq where $B$ suitably depends on the external potentials. Of course we are left with the task of solving for $V'$ \beq [K'_1 - K'_2 , V'] =0 \eeq which will be possible provided $B$ is chosen such that $K'_1 - K'_2$ is computable and takes on a simple form. Moreover we have to write down explicitly the transform of system (\ref{weq2}), which requires that both $K'_1$ and $K'_2$ be computable by (\ref{transbij}).
The concept of {\em strong} translation invariance, naturally extended to the two-body sector, offers a possibility to carry out this program, choosing $B$ such that \beq K'_1 - K'_2 = y_\lll \cdot P_\lll \label{difKprim} \eeq \bigskip \noi Before going further a few remarks are in order. \noi In many cases of interest, the presence of external potentials does not destroy all the isometries of spacetime. {\em Example} $\ $ We shall see later on in Section~4 that, in an electromagnetic field satisfying the equations (\ref{defAmu})(\ref{defW}) below, the survivors of the Poincar\'e algebra are, in an adapted frame, \beq P_1 = p_{ (1) 1 } + p_{ (2) 1} , \qquad \qquad P_2 = p_{ (1) 2 } + p_{ (2) 2} \label{alg2} \eeq (in these formulas and whenever necessary in the sequel, we put {\em parentheses around the particle indices}). \noi In the above example $P_{\lll 1} = P_1$ is the only nonvanishing component of $P_\lll$. In contrast $P_2$ is just another conserved quantity. \medskip \noi {\sl Notation} $\ $ In order to avoid confusing the square of a vector with its second contravariant component, we make the following convention: \noi use covariant components for the momenta $P_\alp , y_\beta$, and contravariant components for the coordinates $Q^\mu , z^\nu$. So $P^2$ stands for $P \cdot P$, but $z^2$ is the second component of $z$, whereas the square of $z$ is explicitly written as $z \cdot z$. This convention also holds for longitudinal and transverse parts, for example $P_\lll^2 = P_\lll \cdot P_\lll$, etc., but we write the square of $z_\ttt$ as $z_\ttt \cdot z_\ttt$, etc. \medskip In the remaining part of this section we are concerned with strong invariance in the {\em normal case} where (by definition) longitudinal and transverse parts are well defined, which corresponds to the existence of an orthonormal basis in $E_L$.
Under this assumption it was proved that {\em the longitudinal piece of the linear momentum, say $P_{\lll \alp}$, is conserved} (see Section 3.2 of \cite{drozNCim}). \bigskip Formula (\ref{difKprim}) is ensured by taking $B = T L$, where $T$ and $L$ are respectively transverse and longitudinal operators suitably chosen, namely \beq T = y_\ttt \cdot P_\ttt + G_1 - G_2 \label{defT} \eeq \beq L = {P_\lll \cdot z_\lll \over P_\lll ^2 } \label{defL} \eeq Note that only $T$ depends on the external field. But the denominator in $L$ requires some caution in order to make sense; so we are led to cut off from the space of states the sector corresponding to the vanishing eigenvalues of $P_\lll ^2$, as follows. The wave function can be considered as a function of $Q$ and $z$, which can be written as a Fourier expansion with respect to $ Q$, \beq \Psi = {1 \over (2 \pi )^ 2 } \int {\rm e} ^ {i K \cdot Q} \ \Upsilon (K , z ) d^4 K \label{fouPsi} \eeq Introducing an arbitrarily small but positive constant $\epsilon$, we shall restrict the support of $\Upsilon $ to the values of the vector $K_\alp$ satisfying $$K_\lll ^2 \geq \epsilon$$ Using once and for all the Fourier expansion (\ref{fouPsi}), it is easy to check that the operators $z, y, Q, P$ and $1/P_L ^2$ respect the cut-off; in other words, any of these operators maps into itself the space of wave functions whose Fourier transform satisfies the support condition above. \noi Naturally $\disp \ P_\alp = -i {\dron \over \dron Q^\alp} \ $ and $ \ \disp y_\alp = -i {\dron \over \dron z^\alp } \ $, thus for instance we have that $$ { 1 \over P_L ^2} \Psi = {1 \over (2 \pi )^ 2 } \int {\rm e} ^ {i K \cdot Q} \ { 1 \over K_L ^2} \ \Upsilon (K , z ) d^4 K $$ $\ P_L$ is a constant of the motion and we shall eventually focus on its eigenstates; in that case the support condition will be trivially simplified.
\bigskip Note that $T$ is manifestly transverse in (\ref{defT}); however we can write equivalently \beq T = K_1 - K_2 - y_\lll \cdot P_\lll \label{vardefT} \eeq \noi It is obvious in (\ref{defL}) that $P_\lll$ commutes with $L$, thus also with $B$, hence $P'_\lll = P_\lll$. \noi Similarly transformation (\ref{transbij}) brings no change in $L$, nor in $T$, say $L' = L , \ T' =T$. \noi The explicit form of $K'_1$ and $K'_2$ was derived from (\ref{defKa}) in \cite{drozNCim}. In order to get rid of a clumsy notation ($L^2 \not= L^\alp L_\alp$) we replace here the four-vector $L^\alp$ proposed in \cite{drozNCim} by its definition, say $L^\alp = P_\lll ^\alp / P_\lll \cdot P_\lll$, hence for the sum \beq K'_1 + K'_2 = K_1 + K_2 - 2T \, {y_\lll \cdot P_\lll \over P_\lll ^2} + {T^2 \over P_\lll ^2} \label{somKprim} \eeq and (\ref{difKprim}) for the difference; in fact the external-field representation was tailored precisely so that (\ref{difKprim}) holds true. \noi Finally, after defining $$ \mu = \half (m_1 ^2 + m_2 ^2 ) , \qquad \ \nu = \half (m_1 ^2 - m_2 ^2 ) $$ the coupled wave equations $H' _a \Psi ' = \half m_a^2 \Psi ' $ obtained from (\ref{weq2}) take on the form \beq (K'_1 + K'_2 + 2V') \Psi ' = \mu \Psi ' \label{somprim} \eeq \beq y_\lll \cdot P_\lll \Psi ' = \nu \Psi ' \label{difprim} \eeq (note that the latter equation does not depend on the mutual interaction).
\medskip The explicit form of $V'$ is constructed from that of $V^\zer $ as follows: according to (\ref{difKprim}) we have to ensure that \beq [ y_\lll \cdot P_\lll , V' ] = 0 \label{Vprimsolve} \eeq A solution is easy to find, introducing the no-field ``limit'' of $B$, obtained by cancelling $G_1$ and $G_2$ in $T$, say $$ B^\zer = y_\ttt \doot P_\ttt \, L $$ Indeed we observe that $$ {\rm e}^{iB^\zer} \, y \doot P \, {\rm e}^{-iB^\zer} = {\rm e} ^{iB} \, y \doot P \, {\rm e}^{-iB} = y_\lll \cdot P_\lll $$ and we know that $Z$ commutes with $y \cdot P$; therefore, defining $\Zhat$ as the no-external-field limit of $Z'$, namely \beq \Zhat = {\rm e}^{iB^\zer} Z {\rm e}^{-iB^\zer} \label{formalZhat} \eeq it turns out that $\Zhat $ commutes with $y_\lll \cdot P_\lll $. \noi Moreover it is obvious that $P^2$ commutes with $y_\lll \cdot P_\lll $, thus finally any function of $\Zhat , P^2 , y_\lll \cdot P_\lll$ is expected to solve (\ref{Vprimsolve}). \medskip \noi The {\sl ansatz} consists in constructing $V'$ from $V^\zer$ as follows \beq V ' = f ( \Zhat , P^2 , y_\lll \cdot P_\lll ) \label{ansatz} \eeq where $f$ is the function given in (\ref{defVzer}). It is not difficult to check that this formula yields the correct limits when either $f$ or both $G_1$ and $G_2$ vanish. \noi Fortunately $\Zhat$ can be computed explicitly; formula (\ref{formalZhat}) yields \beq \Zhat = Z + 2 (P_\lll ^2 \, z \cdot P - P^2 \, z_\lll \cdot P_ \lll ) L + P_\ttt ^2 P_\lll ^2 L^2 \label{defZhat} \eeq In \cite{IJTP2000} this formula was cast into the equivalent but more compact form \beq \Zhat = z_\ttt \cdot z_\ttt \ P^2 - ( z_\ttt \cdot P)^2 + P^2 (z_\lll \cdot z_\lll - { ( z_\lll \cdot P_\lll ) ^2 \over P_\lll ^2}) \label{7IJTP2000} \eeq We emphasize that in Ref. \cite{IJTP2000} all numbered formulas from (1) through (7) are general and by no means limited to the case of a constant electromagnetic field~\footnote{In contrast eq.
(8) of that reference holds only if $P_\lll$ is timelike, which is not the case eventually considered in the present paper.}. Indeed, inserting (\ref{defL}) into the middle term of (\ref{defZhat}) we obtain $$2 (P_\lll ^2 \ z \cdot P -P^2 \ z_\lll \cdot P_ \lll ) L = 2 (z \cdot P) (z_\lll \cdot P) - 2{P^2 \over P_\lll ^2} (z_\lll \cdot P) ^2$$ $$2 (P_\lll ^2 \ z \cdot P -P^2 \ z_\lll \cdot P_ \lll ) L = 2 (z \cdot P) (z_\lll \cdot P) - 2 (z_\lll \cdot P) ^2 (1+ P_\ttt ^2 / P_\lll ^2 ) , $$ and splitting $z \cdot P $ yields $$2 (P_\lll ^2 \ z \cdot P -P^2 \ z_\lll \cdot P_ \lll ) L = 2 (z_\lll \cdot P) ^2 + 2 (z_ \ttt \cdot P) (z_\lll \cdot P) - 2 (z_\lll \cdot P) ^2 (1+ P_\ttt ^2 / P_\lll ^2 ) $$ Now add $Z$, taking (\ref{defZ}) into account; the terms $(z_ \ttt \cdot P) (z_\lll \cdot P)$ cancel and we are left with $$ Z + 2 (P_\lll ^2 \ z \cdot P -P^2 \ z_\lll \cdot P_ \lll ) L = z \cdot z \ P^2 - (z_ \ttt \cdot P)^2 - (z_\lll \cdot P) ^2 (1 + 2 P_\ttt ^2 / P_\lll ^2 )$$ Owing to (\ref{defZhat}), $\Zhat$ is given by adding $ P_\ttt ^2 P_\lll ^2 L^2 $ to this quantity. But (\ref{defL}) implies that $ P_\ttt ^2 P_\lll ^2 L^2 = (z_\lll \cdot P) ^2 P_\ttt^2 / P_\lll ^2$, thus $$ \Zhat = z \cdot z \ P^2 - (z_\ttt \cdot P )^2 - (z_ \lll \cdot P)^2 (1 + P_\ttt^2 / P_\lll ^2 ) $$ $$ \Zhat = z \cdot z \ P^2 - (z_\ttt \cdot P )^2 - (z_ \lll \cdot P )^2 {P^2 \over P_\lll ^2 } $$ and after splitting $ z \cdot z \ P^2 $ we are left with (\ref{7IJTP2000}), that is, formula (7) of \cite{IJTP2000}. [] \noi In order to develop (\ref{somprim}) we need to compute the r.h.s. of (\ref{somKprim}). \noi Eq. (\ref{defKa}) implies $ K_1 +K_2 = P^2 / 4 + y^2 + G_1 + G_2 $, which we insert into (\ref{somKprim}), hence \beq K'_1 +K'_2 = P^2 / 4 + y^2 + G_1 + G_2 - 2T \, {y_\lll \cdot P_\lll \over P_\lll ^2} + {T^2 \over P_\lll ^2} \label{varsomKprim} \eeq In the following sections we shall specify the external field.
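The equivalence of (\ref{defZhat}) and (\ref{7IJTP2000}) just derived can also be confirmed with exact rational arithmetic. The following short Python check (ours, purely illustrative) uses a sample normal case where $E_L$ is spanned by $E_0$ and $E_3$, and arbitrary rational vectors $z, P$; since the two expressions are polynomial/rational in the components, exact agreement on generic rational data is a strong consistency test.

```python
from fractions import Fraction as F

g = [1, -1, -1, -1]                        # Minkowski metric, signature +---

def dot(a, b):
    return sum(g[i] * a[i] * b[i] for i in range(4))

# normal case: E_L spanned by the orthonormal pair (E_0, E_3), eps = (+1, -1)
basis = [([F(1), F(0), F(0), F(0)],  1),
         ([F(0), F(0), F(0), F(1)], -1)]

def long(xi):                              # xi_L = sum_a eps_a (w_a.xi) w_a
    out = [F(0)] * 4
    for wv, eps in basis:
        c = eps * dot(wv, xi)
        out = [out[i] + c * wv[i] for i in range(4)]
    return out

def trans(xi):
    xl = long(xi)
    return [xi[i] - xl[i] for i in range(4)]

# arbitrary rational phase-space data (P_L^2 = 120, nonzero as required)
z = [F(3), F(-2), F(5), F(7)]
P = [F(11), F(2), F(-3), F(1)]
zL, zT, PL, PT = long(z), trans(z), long(P), trans(P)

Z = dot(z, z) * dot(P, P) - dot(z, P) ** 2                    # eq. (defZ)
L = dot(PL, zL) / dot(PL, PL)                                 # eq. (defL)

Zhat_a = Z + 2 * (dot(PL, PL) * dot(z, P) - dot(P, P) * dot(zL, PL)) * L \
         + dot(PT, PT) * dot(PL, PL) * L ** 2                 # eq. (defZhat)

Zhat_b = dot(zT, zT) * dot(P, P) - dot(zT, P) ** 2 \
         + dot(P, P) * (dot(zL, zL) - dot(zL, PL) ** 2 / dot(PL, PL))  # eq. (7)

print(Zhat_a == Zhat_b)                    # True
```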
\section{One-body motion in electromagnetic waves} \noi Consider first the motion of a single charged pointlike and spinless body (treated as a test particle) subject to {\em any} electromagnetic field $F =\dron \wedge A$. Let the angular momentum be denoted by $j = x \wedge p$. In the Klein-Gordon equation we have \beq K = \half (p- {e } A ) ^ 2 , \qquad \ G = - \half (e A \cdot p + e p \cdot A - e^2 A^2 ) \label{defG} \eeq Note that \beq [ A^\mu , p_\nu ] = i \, \dron _\nu A^\mu \ \label{Amupnu} \eeq \bigskip \subsection{Single electromagnetic plane wave} The behavior of a relativistic charged particle in an electromagnetic plane wave was studied long ago, even including spin~[38-40]. Although the structure of a single electromagnetic plane wave is well known, here we insist on its manifestly covariant formulation, as follows. \noi The wave vector is a four-vector $k$, null and oriented toward the future; the electromagnetic field is the tensor \beq F = a (k\wedge u) \sin ( k \cdot x + \alp ), \label{def1F} \eeq where $u$ is a constant spacelike unit vector ($u \cdot u = -1$) and $a$ and $\alp$ are scalar constants. Although $k$ is given, the factor $u$ in the bi-vector $k\wedge u$ is not unique, since we can add to $a u^\alp$ an arbitrary null vector proportional to $k$. A vector-potential for this field is \beq A^\mu = a u^\mu \cos (k\cdot x + \alp ) , \qquad \qquad k \cdot k =0 \label{planwav} \eeq The Lorenz gauge condition requires $u \cdot k =0$. \noi The linear space of four-vectors can be written as $$ E = \cale _{03} \oplus \cale _{12} , $$ where $k \in \cale _{03} $ and $u \in \cale _{12} $. Note however that this direct-sum decomposition {\em is not intrinsically defined}. \noi We can use an orthonormal basis $(E_\alp )$ defined such that $E_0$ points toward the future, $k = \ome (E_0 + E_3 )$ and $E_2 =u$. \noi So we have \beq k^ \mu = ( \ome , 0, 0, \ome ) \ , \qquad \qquad \ome > 0 \eeq or equivalently $ \ k_ \mu = ( \ome , 0, 0, - \ome ) $.
Note that $k \cdot x = \ome (x_0 + x_3 ) = \ome (x^0 - x ^3 ) $. \noi Moreover $ u^\mu = (0, 0, 1, 0 ) $. \bigskip \noi {\sl First integrals} \noi The {\sl constants of the motion} are induced by the symmetries of $K$ in (\ref{defG}). \noi Since $k \cdot x $ does not depend on $x^1 , x^2$, it follows that $A^\mu$ is invariant under translation and rotation in $\cale _{12}$; in other words $A^\mu$ commutes with $p_1 , p_2 $ and $ j_{12} $. Further observe that $u$, thus also $A$, lies in $\cale _{12}$, so $A \cdot p + p \cdot A$ depends only on $x^0 , x^3 , p_1, p_2$, which entails $[G, p_1 ] = [G, p_2 ] = 0$, therefore $[K, p_1 ] = [K, p_2 ] = 0$; so both $G$ and $K$ are (at least simply) invariant under translation~\footnote{In contrast $G $ fails to be invariant under {\em rotation} in $\cale _{12}$, as can be seen by a direct computation.} in $\cale _{12}$. \noi Hence $p_1$ and $p_2$ are {\em first integrals}. \noi Moreover $[k \cdot x , k\cdot p] = 0$, thus $[A, k\cdot p] $ vanishes as well, implying that $ \ A \cdot p , \quad p \cdot A$ and $ A \cdot A \ $ all separately commute with $k \cdot p $. Hence $ [G, k \cdot p ]= [K, k \cdot p ] = 0 $, so finally $ \ k \cdot p $ {\em is another constant of the motion}. \noi In any adapted frame, $\ k \cdot p = \ome (p_0 + p_3 ) \ $, so conservation of $k \cdot p $ amounts to having $p_0 + p_3 = {\rm const.}$ (note that in contrast $(p_0 - p_3 )$ {\em is not} a constant of the motion). To summarize: the system enjoys {\em translation} invariance along $E_1 , E_2 $ and $k$; more precisely, $K$ as well as $G$ is invariant under these translations. We could state this equivalently: \beprop $\ K$ and $ G \ $ are {\em at least simply} invariant under translation along direction $w$ iff $\ w \cdot k =0$. \enprop \noi This condition characterizes the 3-plane tangent to the light cone along $k$, say $\Pi _3$.
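These first integrals can be checked at the classical level, replacing commutators by Poisson brackets. The sympy sketch below (ours; the component conventions are our assumptions, not taken from the paper) verifies that $\{G, p_1\} = \{G, p_2\} = \{G, k \cdot p\} = 0$ for the potential (\ref{planwav}), and that $\{G, v \cdot x\} = e \, A \cdot v$ for an arbitrary direction $v$, which prepares the discussion of strong invariance in the next paragraph.

```python
import sympy as sp

g = [1, -1, -1, -1]                        # metric, signature +---
x = sp.symbols('x0:4')                     # contravariant coordinates x^mu
p = sp.symbols('p0:4')                     # covariant momenta p_mu
e, a, alpha, omega = sp.symbols('e a alpha omega', real=True)

def dot(u, v):                             # scalar product of contravariant vectors
    return sum(g[i] * u[i] * v[i] for i in range(4))

def contract(u, q):                        # u^mu q_mu (contravariant . covariant)
    return sum(u[i] * q[i] for i in range(4))

def pb(f, h):                              # Poisson bracket, pairs (x^mu, p_mu)
    return sum(sp.diff(f, x[i]) * sp.diff(h, p[i])
             - sp.diff(f, p[i]) * sp.diff(h, x[i]) for i in range(4))

k = (omega, 0, 0, omega)                   # null wave vector k^mu
u = (0, 0, 1, 0)                           # polarization u^mu: u.u = -1, u.k = 0

A = [a * u[i] * sp.cos(dot(k, x) + alpha) for i in range(4)]    # eq. (planwav)
G = -e * contract(A, p) + sp.Rational(1, 2) * e**2 * dot(A, A)  # classical (defG)

print(pb(G, p[1]), pb(G, p[2]),                     # 0 0 : p_1, p_2 conserved
      sp.simplify(pb(G, contract(k, p))))           # 0   : k.p conserved

# strong invariance additionally needs {G, w.x} = 0; one finds {G, v.x} = e A.v,
# which vanishes iff v is orthogonal to u
v = sp.symbols('v0:4')
print(sp.simplify(pb(G, dot(v, x)) - e * dot(A, v)))            # 0
```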
\bigskip But we are interested in the possibility of having $\ G \ $ {\em strongly} translation invariant; the question is whether $G$ admits directions of {\em strong} translation invariance. Translation invariance along $w$ will be {\em strong} iff additionally $[G, w \cdot x ] $ vanishes. One finds that (irrespective of the analytical shape of $A$) $ [w \cdot x , G ] $ vanishes iff $w \cdot u = 0 $. Finally \noi $G$ {\em is {\em strongly} translation invariant along $w$ provided} $$ w \cdot k = w \cdot u = 0 $$ In other words $w$ must belong to the two-dimensional vector space $\Pi _2$ spanned by $k$ and $u$. Unfortunately it is trivial to check that in $\Pi _2$ any direction orthogonal to $u$ is necessarily collinear with $k$. Thus this vector space {\em fails to admit} any orthonormal frame; it does not provide a unique and straightforward definition of longitudinal and transverse directions. This drawback of the single plane wave was pointed out in a previous work, in which we already advocated the interest of considering instead a superposition of two plane waves~\cite{drozFBS}. \ \subsection{Superposition of two plane waves} Now let us consider the case where the electromagnetic field is a superposition of two plane waves travelling along {\em the same} straight line, with respective wave vectors $\ k, \ l \ $, non-collinear, both null and future oriented, so that their scalar product is strictly positive, say $$\ k \cdot l = 2 \ome ^2$$ With an obvious notation the field is $F = F_\onerom + F_\tworom $, namely \beq F = a ( k \wedge u ) \sin ( k \cdot x + \alp ) + b ( l \wedge u ) \sin ( l \cdot x + \beta ) \label{defF} \eeq where $\ k, \ l \ $ are constant null vectors, $a, b, \alp , \beta$ are constant scalars, and $u$ is a constant spacelike unit vector ($\ u\cdot u = -1 \ $) orthogonal to both $k$ and $l$.
\beprop Since $k , l$ are given, null and non-collinear, the factor $u$ in the bi-vectors $\ k \wedge u , \ l \wedge u \ $ is unique. \enprop \noi Proof $\ $ Looking for a possible $u'$ such that $\ k \wedge u' = k \wedge u$ and $ \ l \wedge u' = l \wedge u \ $ entails $u' - u = \lam k , \qquad u' - u = \mu l, \ $ for some $\lam , \mu \in \batonR $, thus $\lam k = \mu l$, which is impossible unless $\lam =0 =\mu$, therefore $u' = u$. [] \medskip The simplest vector-potential such that $F = \dron \wedge A $ can be written as \beq A^\mu = A^\mu _\onerom + A^\mu _\tworom = W u^\mu \label{defAmu} \eeq with \beq W = W_\onerom + W_\tworom = a \cos (k \cdot x + \alp ) + b \cos (l \cdot x + \beta ) \label{defW} \eeq The Lorenz gauge condition is ensured by requiring that $u \cdot k = u \cdot l =0$. \medskip \beprop Given two {\em non-collinear} null vectors $\ k, \ l \ $, both future oriented, it is always possible to find an orthonormal basis in which $k^0 = l^0$. \enprop \noi Proof $\ $ Define $$ E_0 = { (k +l) \over 2 \ome} , \qquad \ E_3 = { (k-l ) \over 2 \ome} $$ We get $$ E_0 ^ 2 = 1 , \qquad E_3 ^ 2 = - 1, \qquad E_0 \cdot E _3 =0 $$ since $ (k +l) \cdot (k -l) =0$; moreover $$ k \cdot E_0 = {k \cdot l \over 2 \ome} = \ome , \qquad \ k \cdot E_3 = - {k \cdot l \over 2 \ome} = - \ome $$ $$l \cdot E_0 = \ome , \qquad l \cdot E_3 = \ome $$ and $(k+l) ^2 = 4 \ome ^2 , \qquad \ (k-l ) ^2 = - 4 \ome ^2 $. \noi So the two-dimensional vector space spanned by $k, l$ has signature $+ -$ and admits $E_0 , E_3 $ as an orthonormal basis. \noi Its orthocomplement in the space ${E}$ of four-vectors, say $\cale _{12}$, has elliptic signature; the couple $E_0 , E_3$ can be completed by $E_1 ,E_2 $ so as to form an orthonormal basis. In contrast to the case of a single plane wave, in the present case {\em the splitting $E = \cale _{03} \oplus \cale _{12}$ is intrinsically defined}; note that $u \in \cale _{12}$.
[] \medskip \noi Any basis constructed in that way (which gives both waves the same frequency) will be called a {\sl monochromatic basis}. In such a basis we can write \beq k^\mu = ( \ome , 0, 0, \ome ) \qquad \quad l^ \mu = ( \ome , 0, 0, - \ome ) \eeq \subsubsection{One-body motion, first integrals} \noi $A$ depends only on $x^0 , x^3$, thus $ [A , p_1 ] = [A , p_2 ] = 0$. Moreover (\ref{defAmu}) implies that $A$ lies in $\cale _{12}$. It follows that $ A \cdot p + p \cdot A$ depends only on the mutually commuting arguments $x^0 , x^3 , p_1 , p_2 $. Finally $G$ and also $K$ commute with both $p_1$ and $ p_2$, say \beq [p_1 , G ] = [p_1 , K ] = [p_2 , G ] = [p_2 , K ] = 0, \label{pK} \eeq and this property of {\em simple translation invariance} in the plane $\cale _{12} $ entails that {\em $p_1$ and $p_2$ are constants of the motion}. In contrast $ A^\mu _\onerom$ commutes with $ p_0 + p_3 $ whereas $A^\mu _\tworom$ commutes with $ p_0 - p_3 $; moreover $ [ A^\mu _\onerom , p_0 - p_3 ] $ is a function of $ x^0 - x^3$ only whereas $ [ A^\mu _\tworom , p_0 + p_3 ] $ is a function of $ x^0 + x^3$ only. The most general direction of ${\cale}_{03}$ can be written as $w = {w_+} (E_0 + E_3 ) + {w_-} (E_0 - E_3 ) $ (with $ w_\pm $ constant scalars). One finds that $[A^\mu , w \cdot p ]$ is a sum of two independent functions; it cannot be identically zero, thus no direction of $\cale_{03}$ can generate a translation leaving $A^\mu$ invariant (in fact $ A^\mu _\onerom + A^\mu _\tworom $ exhibits {\em no invariance} at all in the plane $\cale_{03}$). A similar argument holds for $ [A \doot p + p \doot A , \, w \cdot p ] $ and $ [ A \doot A , \, w \cdot p ] $, and finally $G$ cannot be translation invariant along any direction of $\cale_{03}$. \subsubsection{Strong translation invariance} Let us now prove more briefly a statement announced several years ago with a more complicated justification~\cite{mpulcian}.
\beprop The interaction term $G$ corresponding to (\ref{defAmu}) is strongly translation invariant along a unique direction $w$, defined as orthogonal to $k, l$ and $u$. \enprop \noi Proof $\qquad $ In the previous subsection we saw that $G$ is (at least simply) translation invariant along any direction of the plane $\cale _{12}$, and along no direction of the plane $\cale _{03}$. \noi Thus any possible direction of {\em strong} invariance of $G$ must be searched for only within the plane $\cale _{12}$. Recall that $u$, being orthogonal to $k$ and $l$, belongs to the plane $ \cale _{12}$. \noi {\em Hereafter we shall specify further the monochromatic basis by taking $E_2 =u$, so we have $\ u ^\mu = (0, 0, 1, 0)$}. This choice determines {\em the adapted} monochromatic basis; as a result we can write the second formula of (\ref{defG}) in the form \beq G = - \half e (W p_2 + p_2 W ) - \half e^2 W^2 \label{redefG} \eeq where the only momentum involved is the component $p_2$. Let us now look for a direction $w$ of {\em strong} translation invariance. In addition to the ordinary invariance just characterized above, we must have that $[G , w \cdot x ]$ vanishes. Since the quadratic piece (with respect to $eA$) of $G$ trivially commutes with $x$, we are left with $$ 2 [G , w \cdot x ] = -e A^\alp [ p_\alp , w \cdot x ] - e [p_\alp , w \cdot x ] A^\alp $$ but $\ [ p_\alp , w \cdot x ] = -i w_\alp$, therefore $[G , w \cdot x ]$ vanishes iff $w$ is orthogonal to $A$, which means orthogonal to $u$. In the $\cale _{12}$ plane the only direction orthogonal to $u$ is that of $E_1$. [] \medskip We shall normalize $w$ by choosing $w =E_1$, say $$ w ^\mu = (0, 1, 0, 0) $$ This spacelike direction determines a $1 \oplus 3$ longitudinal/transverse decomposition of the linear space of four-vectors~\footnote{A spacelike direction $\oplus$ a three-dimensional hyperbolic space, not to be confused with the usual time $\oplus$ space decomposition.}.
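Proposition 5 can be illustrated by the classical analog of these commutators. The sympy sketch below (ours, illustrative only; component conventions are our assumptions) checks that for the superposition (\ref{defAmu})(\ref{defW}) both $\{G, w \cdot x\}$ and $\{G, w \cdot p\}$ vanish for $w = E_1$, while translation invariance along $k$ (a direction of $\cale_{03}$) is indeed lost.

```python
import sympy as sp

g = [1, -1, -1, -1]                                  # metric, signature +---
x = sp.symbols('x0:4'); p = sp.symbols('p0:4')       # pairs (x^mu, p_mu)
e, a, b, alpha, beta, omega = sp.symbols('e a b alpha beta omega', real=True)

def dot(u, v):                                       # both args contravariant
    return sum(g[i] * u[i] * v[i] for i in range(4))

def contract(u, q):                                  # u^mu q_mu
    return sum(u[i] * q[i] for i in range(4))

def pb(f, h):                                        # classical Poisson bracket
    return sum(sp.diff(f, x[i]) * sp.diff(h, p[i])
             - sp.diff(f, p[i]) * sp.diff(h, x[i]) for i in range(4))

k = (omega, 0, 0,  omega)                            # counter-propagating
l = (omega, 0, 0, -omega)                            # null wave vectors
u = (0, 0, 1, 0)                                     # u.k = u.l = 0, u.u = -1

W = a * sp.cos(dot(k, x) + alpha) + b * sp.cos(dot(l, x) + beta)  # eq. (defW)
A = [W * u[i] for i in range(4)]                                  # eq. (defAmu)
G = -e * contract(A, p) + sp.Rational(1, 2) * e**2 * dot(A, A)

w = (0, 1, 0, 0)                                     # the direction E_1
print(pb(G, dot(w, x)), pb(G, contract(w, p)))       # 0 0 : strong invariance

# but no translation invariance survives along k (a direction of E_03):
print(sp.simplify(pb(G, contract(k, p))) == 0)       # False
```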
In agreement with the convention made in §2.2, the ray spanned by $w$ will be called {\sl longitudinal}, whereas the span of $k, l, u$ will be called the {\sl transverse} 3-plane. \medskip To summarize: in the field which corresponds to (\ref{defAmu}) the wave equation is $ \disp \ (p-e A)^2 \psi = m^2 \psi \ $, neither $k \cdot x$ nor $l \cdot x$ commutes with the squared-mass operator; in contrast the quantities $ \quad p_1 , \quad \ p_2 \quad $ are constants of the motion; they can be diagonalized, say $ \ p_1 \psi = \rho _1 \psi , \qquad \ p_2 \psi = \rho_2 \psi \ $, with $ \rho_1 , \rho_2$ numerical constants. So we can write, up to a normalization factor, $$ \psi = {\rm e}^{ i (\rho_1 x^1 + \rho_2 x^2 ) } \ \gam (x^0 , x^3 ) $$ In the sequel we shall tackle the two-body problem, in order to construct a pair of compatible mass-shell constraints. \section{Two-body system in a monochromatic superposition} \bigskip Let us resume our analysis of the two-body problem initiated in subsection 2.2. As we saw there, in the absence of mutual interaction the motion of each particle would be ruled by the Hamiltonian generator $K_a = \half p_a ^2 + G_a$. Now we focus on the situation characterized by the form (\ref{defAmu}) of the electromagnetic vector-potential. We use the adapted frame described in the previous section and extend formula (\ref{redefG}) to the two-body system by replacing $x, p, e$ as indicated in subsection 2.2; we find \beq 2G _a = -e_a ( W_a p_{(a) 2} + p_{(a) 2} W_a ) - (e_a W_a )^2 \label{2Ga} \eeq where \beq W_a = a \cos (k \cdot q_a + \alp ) + b \cos (l \cdot q_a + \beta ) \label{defWa} \eeq and with the following \noi {\sl Notation} $\ $ When there is a risk that a particle label be confused with a coordinate label, the former is put between parentheses; no parentheses otherwise: for instance $p_{(1) 2}$ is the second component of the momentum of particle 1.
\medskip \noi {\em Remark $ $} Strong translation invariance of $G_a$ is a consequence of that of $G$, which stems from the shape of the electromagnetic field as described in section 3.2 and stated in Proposition~5, without any further condition. \medskip \noi Note that each $W_a$ depends only on $Q^0 , Q^3, z^0, z^3$ while $G_a$ additionally depends on $P_2$ and $y_2$, through the identities \beq p_{ (1) 2} = \half P_2 + y_2 , \qquad \quad p_{ (2) 2} = \half P_2 - y_2 \eeq \medskip \noi It stems from (\ref{pK}) that \beq [p_{(1) 1} , K_1 ] = [p_{(1) 2} , K_1 ] = 0 \eeq \beq [p_{(2) 1} , K_2 ] = [p_{(2) 2} , K_2 ] = 0 \eeq and it is trivial that \beq [p_{(1) \alp} , K_2 ] = [p_{(2) \alp} , K_1 ] = 0 \eeq whence we deduce \beq [P_1 , K_a ] = [P_2 , K_a ] = 0 \label{PK} \eeq The unique longitudinal direction is $E_1$, with contravariant components $(0, 1, 0, 0)$, and now we have for any four-vector \beq \xi_\lll = - (\xi \cdot w ) \ w , \qquad \quad \xi _ \ttt = \xi + (\xi \cdot w ) \ w \eeq but $ \xi \cdot w = \xi \cdot E_1 = \xi _1 = -\xi ^1$. \noi Here $w$ is spacelike; in any adapted frame we can write for any pair $\xi , \eta$ \beq \xi_\lll \cdot \eta_\lll= - (\xi \cdot w ) (\eta \cdot w ) = - \xi _1 \eta _1 \eeq \noi thus we have $$z_\lll \cdot P_\lll = - z_1 P_1 , \qquad \ P_\lll ^2 = - P_1 ^2 , \qquad \ z_\lll ^2 = - z_1 ^2 $$ \beq { (z_\lll \cdot P )^2 \over P_\lll ^2 } = - z_1 ^2 \label{cancellor} \eeq Owing to this last formula, the third term in the r.h.s. of (\ref{7IJTP2000}) vanishes and we are left with \beq \Zhat = z_\ttt \cdot z_\ttt \ P ^2 - ( z_\ttt \cdot P_\ttt )^2 \label{bijZhat} \eeq analogous to Bijtebier's formula (see (4.3)--(4.4) in \cite{bij89}) concerning the case where the external field was stationary; but here the external potential is strongly invariant along a {\em spacelike} direction.
On the other hand, equation (\ref{varsomKprim}) gets simplified as follows \beq K'_1 +K'_2 = P^2 / 4 + y^2 + G_1 + G_2 - 2T {y_1 \over P_1} - {T^2 \over P_1 ^2} \label{simpsomKprim} \eeq \medskip \noi Linear momentum is $ P _\alp = p_{(1) \alp} + p_{(2) \alp} $. As we mentioned in subsection 2.2 its longitudinal piece $$P_{\lll \alp} = - P_1 \ w_ \alp $$ is conserved, therefore we can diagonalize $P_1$ and fix its eigenvalue, say $\lam_1 \not= 0$ (according to the restriction made in subsection~2.2). So we get rid of one spacelike degree of freedom, namely $Q^1$. Moreover, in view of theorem~2 of \cite{IJTP2009}, we expect that $P_2$ is also conserved. Let us directly check this point. In view of (\ref{PK}) all we have to prove is that $P_2$ also commutes with $V$, or equivalently that $P' _2$ commutes with $V'$. But we first observe that \beprop $P_2 $ is not affected by transformation (\ref{transbij}), say $P'_2 = P_2$ \enprop Proof $\ $ $P_2$ is a purely transverse quantity, thus it commutes with $L$. Then a glance at (\ref{vardefT}) and (\ref{PK}) ensures that $P_2$ commutes also with $T$, so finally with $LT$, which is the generator of the transformation.[] \noi Then looking at (\ref{ansatz}) the question is whether $ P_2$ commutes with $\Zhat$, which is obvious from (\ref{7IJTP2000}), so \noi $P_2$ {\em is a constant of the motion}, as expected; we assign to it a sharp value, say $\lam_2$. \medskip We can summarize: the survivors of the Poincar\'e Lie algebra are $P_1$ and $P_2$; they define a conserved vector, which we denote \beq P_{\perp \alp} = (0, P_1 , P_2 , 0 ) \eeq and the principle of isometric invariance is satisfied. Notice that $ P_{\perp } $ is not affected by transformation (\ref{transbij}), say $ P'_{\perp \alp} = P_{\perp \alp} $. \medskip In contrast to the case without external field, $P^2$ is no longer a first integral, whereas $P_{\perp }^2$ remains conserved.
\noi At this stage it is convenient to recall that in the absence of external field the coupled wave equations are usually reduced to a spectral problem for the quantity $\disp N = H_1 + H_2 - {(H_1 - H_2 )^2 / P^2} - {P^2 / 4} $ which is intimately related to the properties of relative motion~\footnote{ The eigenvalue of $-N$ is denoted $b^2$ in the work of Todorov~[18-21]; divided by the reduced mass, it is proportional to the leading term in the expansion of the mass defect $M-(m_1 +m_2)$, insofar as an {\em isolated system} is concerned. }. \noi In the present case $N$ is no longer a first integral; it is natural to consider instead the invariant combination \beq N _{\perp} = H_1 + H_2 - { (H_1 - H_2 )^2 \over P_{\perp }^2 } - { 1 \over 4 } P_{\perp } ^2 \label{defNperp} \eeq The system (\ref{somprim})(\ref{difprim}) is to be solved in the external-field representation; we can impose that $\Psi '$ be an eigenfunction of $P _{\perp \alp}$, say $$ P _{\perp \alp} \ \Psi ' = I_\alp \ \Psi ' , \qquad I_\alp = (0, \lam_1 , \lam_2 , 0 ) , $$ this choice renders $N'_\perp$ diagonal. The cut-off introduced in Section~2.2 is simply expressed as $\lam _1^2 \geq \varepsilon$, hence $I^2 = - (\lam_1 ^2 + \lam_2 ^2 ) < 0$. \medskip \noi We have $$ P^2 \Psi ' = ( P_0 ^2 - P_ 3 ^2) \Psi ' - (\lam _1 ^2 + \lam _2 ^ 2 ) \Psi ' $$ Note that $P^2 \not= {P'}^2$. \subsection{Reducing the wave equations.} Here we aim at solving the coupled wave equations (\ref{somprim})(\ref{difprim}) by an eigenstate of $P_1 , P_2$, say \beq \Psi '= {\rm e}^ {i ( \lam_1 Q^1 + \lam_2 Q^2 ) } \ \phi (Q ^0 , Q^3 , z^\alp ) \eeq Let us consider first (\ref{difprim}) and remember that $y_1 = -i {\dron / \dron z^ 1}$.
Since $y_\lll \cdot P_\lll =- y_1 P_1$ we must have $y_1 P_1 \ \Psi ' = - \nu \Psi '$, but $ P_1 \Psi ' = \lam_ 1 \Psi'$, so $y_1$ is a constant of the motion with eigenvalue $- \nu / \lam _1$, and (\ref{difprim}) is to be solved by writing \beq \Psi '= {\rm exp} i( \lam_1 Q^1 + \lam_2 Q^2 - {\nu \over \lam _1} z^1 ) \ \chi (Q ^0 , Q^3 , z_\ttt ) \label{soludif}\eeq In order to determine $\chi$ we now develop equation (\ref{somprim}). \noi We separate the coordinates $Q^1 , Q^2 , z^1$ from $Q^0 , Q^3 , z_\ttt$. It is clear from (\ref{soludif}) that $\Psi'$ is an eigenstate not only of $P_1, P_2$, but also of $ y_1$, with respective eigenvalues $\lam _1 , \lam_2$ and $ - {\nu / \lam_1}$. In view of this remark it is convenient to introduce the following {\sl Notation}: $\ $ to any dynamical variable $\calf (Q,P,z,y ) $ we associate the substitution \beq {\underline \calf} = {\rm subs}. ( \calf \ | \ P_1 = \lam_1 , \ P_2 = \lam_2 , \ y_1 = - {\nu / \lam_1} ) \eeq so that ${\underline \calf} $ and $\calf$ yield the same result when applied to $\Psi '$. For instance \beq \soulP ^2 = P_0^2 - P^2 _3 -\lam _1^2 -\lam_2^2 , \qquad \qquad { \soulP} _\lll ^2 = - \lam_1^2 \label{soulPsq} \eeq From (\ref{2Ga}) we obtain \beq 2 \soulG _a = - e_a (W_a {\underline p}_{ (a) 2} + {\underline p}_{ (a) 2} W_a ) - (e_a W_a )^2 \label{2soulGa} \eeq where \beq {\underline p}_{ (1) 2} = \half \lam_2 + y_2 , \qquad \ {\underline p}_{ (2) 2} = \half \lam_2 - y_2 \eeq Formula (\ref{simpsomKprim}) yields \beq \soulK'_1 + \soulK'_2 = {1 \over 4} \soulP^2 - ( {\nu \over \lam_1 } )^2 + y_\ttt ^2 + \soulG_1 + \soulG_2 + {1 \over \lam_1^2 } (2 \nu \soulT - \soulT^2 ) \label{somsoulKprima} \eeq where $\soulG _a $ is given by (\ref{2soulGa}) above, while the expression for $\soulT$ results from (\ref{defT}), that is \beq \soulT = y_0 P_0 - \lam_2 y_2 -P_3 y_3 + \soulG _1 - \soulG_2 \eeq \noi But in order to write down the explicit form of (\ref{somprim}) we still have to evaluate $V' \Psi'$.
\noi In view of (\ref{defVzer})(\ref{unipotext})(\ref{ansatz}) we consider the action of $\Zhat, P^2 $ and $y \cdot P$ on the wave function. In the present case $\Zhat$ is given by (\ref{bijZhat}), hence $ \Zhat \Psi ' = {\soulZhat} \ \Psi '$ where of course we define \beq \soulZhat = z_\ttt \cdot z_\ttt \ {\underline P^2} - (z^0 P_0 + z^3 P_3 + \lam_2 z^2 )^2 \label{soulZhat} \eeq Note that, as differential operators, $\soulP ^2$ and $\soulZhat$ act only on the variables $Q^0 , Q^3$. Finally $V' \Psi' = \soulVprim \Psi' $, defining \beq \soulVprim = f(\soulZhat , {\underline P^2}, \nu ) , \label{soulVprim} \eeq in this expression $f$ encodes all information about the mutual interaction; it is given a priori, and the details of its arguments are provided by formulas (\ref{soulPsq}) and (\ref{soulZhat}). The reduced wave equation thus takes the form \beq ( \soulK'_1 + \soulK'_2 + 2 \soulVprim ) \chi = \mu \chi \label{lastreduc} \eeq wherein (\ref{somsoulKprima}) and (\ref{soulVprim}) are to be inserted. In spite of its formal aspect, (\ref{lastreduc}) cannot be considered an eigenvalue equation for $\mu$, since $m_1 , m_2$ are parameters fixed from the outset. In fact this equation is, in a trivial manner, equivalent to an eigenvalue problem for the quantity defined in (\ref{defNperp}), say $$N'_{\perp } \Psi ' = ( H'_1 + H'_2 -{\nu ^2 \over I^2 } - { I^2 \over 4 } ) \Psi ' $$ Indeed, according to (\ref{unipotext}) we recall that $H'_1 + H'_2 = K'_1 + K'_2 + 2 V'$, therefore the number $ \disp \sig = \mu - ({\nu ^2 \over I^2 } + {I^2 \over 4}) $ is the eigenvalue of $N'_{\perp}$. \section{Conclusion and outlook} From the start we have discarded the apparently simpler model involving a single plane electromagnetic wave, which is actually problematic for our purpose, since it leads to a degenerate case of strong translation invariance.
In this work we obtained a pair of compatible mass-shell constraints describing the motion of two charged spinless particles subject to a laser made of two counter-propagating plane waves. The form of these equations is given explicitly, in the external-field representation, through formulas (\ref{somprim})(\ref{difprim}) with the help of (\ref{defT})(\ref{2Ga})(\ref{simpsomKprim}) and (\ref{ansatz}), assuming that the term of mutual interaction is known in closed form in the absence of external field. \noi The monochromatic superposition of two plane waves (although it preserves fewer symmetries than a single plane wave) provides a {\em normal} case of strong translation invariance. Moreover (in contrast to the single wave) this superposition allows us to distinguish, in an intrinsic way, a preferred frame of reference (the adapted basis) which could be viewed, rather naturally, as the {\em laboratory frame}. \noi Enough translation symmetry is preserved, in any case, to furnish two constants of the motion, $P_1, P_2$, the former associated with strong translation invariance, and both of them in agreement with the principle of isometric invariance. \noi On the one hand these first integrals make it possible to factor out two degrees of freedom, namely $Q^1 , Q^2$. On the other hand (\ref{difprim}) leads to the elimination of $z^1$, so we are left with a single reduced wave equation to be solved for $\chi (Q^0 , Q^3 , z_\ttt )$. This leaves a problem with five degrees of freedom. Note that, the longitudinal direction being spacelike, the spacelike relative coordinate $z^1$ is eliminated instead of the so-called ``relative time''; a similar situation also occurs in the simple case where the external field is a constant homogeneous electric field~\cite{drozPhysRevA}. At the present stage we are at least provided with a manifestly covariant formalism which has the correct limits when either of the interactions vanishes and which satisfies the principle of isometric invariance.
Naturally it would be interesting to renew the contact with the BS equation in the spirit of \cite{bijbroek}, and to compare the result with the present approach; but now the variation of the external field {\em in time} might be a serious complication for this program. In the meantime it is encouraging that isometric invariance, which was neither explicitly required nor even invoked in the early foundations of our method~[30,31], turns out to be satisfied after all. Some attention is still required in order to clarify the physical meaning of $N_\perp$, but this issue is already transparent in the equal-mass case ($\nu = 0$), where $N_\perp$ is just the conserved piece of $N$. \noi Further work could be devoted to solving the reduced wave equation for a relativistic harmonic oscillator as a toy model, choosing $ f = {\rm const. }\ (P^2 )^{- 1/2} Z $ in formula~(\ref{defVzer}). \noi Another open problem, with a view to realistic applications, is of course the introduction of spin.
\section{Introduction}\vspace{-0cm} The use of demand-side management~(DSM) mechanisms has recently attracted significant attention in the smart grid literature. DSM schemes offer smart grid customers an opportunity to change their demands over time so as to reduce their overall electricity costs. From the utility company's perspective, such a shifting of demand can reduce the peak hour demand on the grid~\cite{hossain2012smart}. A successful implementation of DSM schemes requires customers to actively subscribe to the offers made by the utility company. However, recent empirical studies have shown that customers remain reluctant to subscribe to DSM mechanisms, despite the efforts of utility companies~\cite{FAHEY}. Therefore, it is important to develop a new generation of DSM mechanisms that can improve the penetration of the technology and thus accelerate the deployment of the smart grid. There has been an abundant body of work dealing with DSM~\cite{mohsenian2010autonomous, atzeni2013noncooperative, QZ00, fadlullah2014gtes, chen2014autonomous, chai2014demand}. The authors in~\cite{mohsenian2010autonomous} proposed a distributed, game-theoretic DSM system, in which each user chooses a daily schedule of household appliances and loads to optimize the energy consumption. The work in~\cite{atzeni2013noncooperative} proposed different game-theoretic approaches to optimize a day-ahead DSM mechanism by using storage units. The overall value of implementing DSM and demand response schemes was studied in~\cite{QZ00} via a Stackelberg game formulation. The work in~\cite{fadlullah2014gtes} investigated how energy consumption may be optimized through a two-step centralized model, in which a power supplier provided consumers with an energy price parameter and consumption summary vector. In~\cite{chen2014autonomous}, the authors adopted an instantaneous load billing scheme so as to shift consumers' peak hour demand and to charge them fairly for their energy consumption. 
In~\cite{chai2014demand}, utility companies and residential users were modeled at two levels, reducing demand variation and peak load. Thus, the works in~\cite{mohsenian2010autonomous, atzeni2013noncooperative, QZ00, fadlullah2014gtes, chen2014autonomous, chai2014demand} study the various economic and optimization aspects of DSM and are representative of the majority of existing works in this area. However, most of those existing works assume that customers will act rationally and subscribe to DSM as long as their objective function can be optimized~\cite{mohsenian2010autonomous, QZ00, fadlullah2014gtes, chai2014demand, atzeni2013noncooperative, chen2014autonomous}. In practice, as observed in numerous real-world experimental studies~\cite{fiegenbaum1988attitudes, barberis2001prospect, levy1997prospect}, the decision-making process of individuals can deviate significantly from the rational premise of conventional game theory. This has been corroborated by the relatively low adoption rate of DSM in deployed smart grids~\cite{FAHEY}. In this respect, prospect theory (PT), a Nobel-prize winning theory developed by Kahneman and Tversky, provides the necessary tools to better understand real-world decision making and its deviation from rational behavior~\cite{kahneman1979prospect}. In particular, PT studies have shown that, in real life, users often exhibit subjective behavior when faced with uncertain outcomes, such as those of lotteries. Also, it has been shown that customers may have subjective perceptions of how others behave and of how this behavior impacts their gains or losses in economic-oriented scenarios. Since the price in DSM strongly depends on the aggregate demand and thus on the participation of all customers, the customers' perceptions of each other's DSM decisions will impact their behavior. PT has been widely used in the social sciences~\cite{harrison2009expected}.
In addition, some recent works~\cite{6310922, Mandayam, okuda2009design, poortoappear} have applied PT to wireless networks. However, to the best of our knowledge, beyond our early work in~\cite{PTICASSP} that studies a two-player storage management PT game, no existing work has investigated how PT considerations impact participation in DSM. The main contribution of this paper is to propose a new framework for DSM that accounts for realistic customer participation behavior using prospect theory. In particular, the DSM problem is cast as a noncooperative game between customers. In this game, each customer can decide whether or not to participate in a DSM program, while the utility company seeks to reduce the total peak hour demand so as to maintain a desirable target load on the grid. The proposed game uses PT to explicitly incorporate the customers' \emph{subjective perceptions} of DSM decisions and their impact on potential cost savings. Here, subjective perceptions pertain to the way in which each customer evaluates its electricity payment and how that payment depends on other customers' actions. In this respect, each customer seeks to minimize a cost function that reflects its one-day electricity bill, given the other customers' participation and its impact on the price. Compared to related works on DSM~\cite{mohsenian2010autonomous, atzeni2013noncooperative, QZ00, fadlullah2014gtes, chen2014autonomous, chai2014demand, saad2012game, 6787063, weaver2009game, coogan2013energy, cui2013game}, this paper brings forward novel ideas from PT in order to explicitly account for realistic customer behavior, which can differ significantly from the classical, rational path predicted by traditional game theory. To solve the game, under both PT and classical game theory, we propose a new learning algorithm that allows the customers to interact and reach an $\epsilon$-mixed Nash equilibrium.
We prove the convergence of the algorithm and discuss the properties of the reached operating point. Extensive simulation results based on realistic data show that deviations from rational behavior can strongly impact the overall level of participation in DSM. The remainder of the paper is organized as follows: Section~\ref{sec:sysmodel} presents the studied system model. In Section~\ref{sec:game}, we formulate the problem as a PT-based game. In Section~\ref{sec:algo}, we introduce the concept of fictitious play and describe our proposed algorithm. Simulation results are presented in Section~\ref{sec:sim}, while conclusions are drawn in Section~\ref{sec:conc}.\vspace{-0cm} \section{System Model}\label{sec:sysmodel} \subsection{Demand cost}\label{sec:dp} Consider a smart grid consisting of a set $\mathcal{N}$ of customers in which each customer $i \in \mathcal{N}$ consumes a certain amount of energy per hour. All customers are offered the opportunity to participate in a demand side management scheme provided by the utility company. For customer $i$, we define an hourly \emph{energy demand scheduling vector}, in line with existing works: \begin{equation}\label{eq:x} \boldsymbol{x}_i=[x_i^1, x_i^2,\ldots, x_i^H], \end{equation} where $H=24$ hours. For a certain hour $h \in \mathcal{H}=\{1,2,\ldots, H\}$, $x_i^h$ represents the energy demand of customer $i$. The total energy demand from all customers at time $h$ is thus \begin{equation}\vspace{-0.1cm}\label{eq:actuald} d^h=\sum_{i \in \mathcal{N}} x_i^h.
\end{equation} At a given time $h$, we assume that the price per energy unit charged to a customer $i$ depends on its fraction of the total current load as follows~\cite{mohsenian2010optimal}: \begin{equation}\label{eq:price} c_i^h(\boldsymbol{x})=B \frac{x_i^h}{\sum_{i \in \mathcal{N}} x_i^h}, \end{equation} where $B$ could be designed based on notions from a locational marginal pricing (LMP) scheme~\cite{powerbookLMP} in which the price function is not necessarily time-dependent. The electricity price for each customer as given by (\ref{eq:price}) allocates the price according to the amount of energy that a customer consumes. For example, as the number of customers increases, the total demand will increase; the electricity pricing must then reflect this change in the demand. For each customer, the individual electricity price depends on its demand proportion, as captured by (\ref{eq:price}). Hence, the total cost of user $i$ over $H$ hours is given by \begin{equation} \sum_{h=1}^H c_i^h(\boldsymbol{x}) x_i^h. \end{equation} Given the price and local demand, an energy market is set up in which all customers seek to minimize their costs while maintaining their energy demands at a desired level. Here, all customers will interact so as to determine their demands under DSM. These demands specify the quantities of energy required and impact the price at a certain time. Instead of fixed reservation prices announced by a utility company, in our model, each user can strategically change its demand, and its fraction of the total demand impacts the underlying electricity price. In this respect, the price in (\ref{eq:price}) can lead to an increased cost for all customers, including those that decide not to participate in load management. Moreover, the demand delayed by DSM will cause a varying electricity price in subsequent hours. Thus, we will develop a DSM mechanism based on load shifting.
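For illustration, the proportional price in (\ref{eq:price}) and the resulting daily bill can be sketched numerically. In the snippet below, the demand values and the parameter $B$ are arbitrary toy data (not taken from any real pricing scheme):

```python
import numpy as np

B = 10.0  # pricing parameter, e.g. designed from an LMP-type scheme (assumed value)
# toy hourly demands x[i, h] for N = 3 customers over H = 4 hours
x = np.array([[2.0, 1.0, 3.0, 2.0],
              [1.0, 2.0, 2.0, 1.0],
              [3.0, 1.0, 1.0, 3.0]])

d = x.sum(axis=0)           # d^h: total demand at each hour, eq. (2)
c = B * x / d               # c_i^h = B * x_i^h / sum_j x_j^h, eq. (3)
bill = (c * x).sum(axis=1)  # total cost of each customer over H hours, eq. (4)

# at every hour the individual prices sum to B, since sum_i x_i^h / d^h = 1
assert np.allclose(c.sum(axis=0), B)
```

Note that, by construction, the per-customer prices at any given hour sum to $B$, so a customer's bill grows with its share of the hourly load.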
\vspace{-0.1cm} \subsection{Load Management}\label{sec:ls} In this subsection, we propose a load shifting mechanism to analyze the variations of the demand over time. Inherently, load shifting allows part of the peak hour load to be moved to an off-peak hour; in our formulation, such controlled demand is delayed to the following time slot. For the proposed model, we assume that, for each participating customer, the demand can be adjusted so as to meet a \emph{predefined/target energy demand} profile (vector) set by the utility company, given by \begin{equation}\label{eq:gd}\vspace{-0.1cm} \boldsymbol{G_d}=[g_d^1, g_d^2,\ldots, g_d^H]. \end{equation} On the one hand, a utility company wants to reduce peak hour demand in order to decrease the load on the grid. On the other hand, the company wants to keep the amount of load shifted within a reasonable range while avoiding the creation of a new peak hour. For example, if the power company wants to reduce $10\%$ of the load at a given hour $\hat{H}$, the target energy demand could be defined by \begin{equation}\label{eq:gd2}\vspace{-0.1cm} g_d^h=\begin{cases} g^h, &\text{if } h \neq \hat{H},\\ \beta g^h, &\text{if } h = \hat{H}, \end{cases} \end{equation} where $\beta=0.9$ and $g^h$ is taken from the \emph{historical demand} profile $[g^1, g^2,\ldots, g^H]$ referenced by the utility company. Here, we assume that a customer can determine when it starts to participate in DSM. In this respect, customer $i$ will choose its starting time (defined as the \emph{participating time} $a_i$) so as to minimize its cost by observing the difference between the predefined demand per user and its daily demand. In particular, we assume that a customer $i$ does not leave DSM after choosing to participate over the $H=24$ hours, and thus the number of participating customers at a given time $h$ is $I^h \triangleq |\mathcal{I}^h|=|\{i \in \mathcal{N} : a_i \le h\}|$.
Here, $a_i$ is the starting hour and we discuss its impact in more detail in Section~\ref{sec:gameconcept}. In this case, the reduced demand of participating customer $i$ is given by \begin{equation}\label{eq:redde}\vspace{-0.1cm} r_i^h=\begin{cases} \gamma_i(x_i^h-\frac{g_d^h-\sum_{i \in \mathcal{N}\setminus \mathcal{I}} x_i^h}{I^h})^+, &\text{if } g_d^h<l^h,\\ 0, &\text{if } g_d^h\ge l^h, \end{cases} \end{equation} where $(q)^+:= \max (0, q)$ and $l^h$ is the total demand that includes both $d^h$ and the amount shifted from the previous hour $h-1$, such that $l^h=d^h+\sum_{i \in \mathcal{N}} r_i^{h-1}$. Here, $0<\gamma_i \le 1$ is a factor controlling how much customer $i$ reduces the part of its demand that exceeds $\frac{g_d^h-\sum_{i \in \mathcal{N}\setminus \mathcal{I}} x_i^h}{I^h}$, the average demand suggested by the utility company for all participating customers. In particular, if the demand of participating customer $i$ is less than the average demand for participating customers, i.e., $x_i^h<\frac{g_d^h-\sum_{i \in \mathcal{N}\setminus \mathcal{I}} x_i^h}{I^h}$, its reduced demand is $r_i^h=\gamma_i \cdot 0=0$. Using (\ref{eq:redde}), if customer $i$ participates in DSM at time $h$, its demand will be \begin{equation}\label{eq:now}\vspace{-0.1cm} y_i^h=x_i^h-r_i^h. \end{equation} Then, the participating customer $i$ moves its shifted load to the following hour, and thus its DSM demand at time $h<t<H$ is \begin{equation}\label{eq:later1}\vspace{-0.1cm} y_i^{t}=(x_i^{t}+r_i^{t-1})-r_i^{t}, \end{equation} while the demand at the final hour $H$ is \begin{equation}\label{eq:later2} y_i^H=x_i^{H}+r_i^{H-1}. \end{equation} To analyze such a load management scheme, we next propose a new framework that builds on the analytical tools of classical game theory and prospect theory.
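The load-shifting rule in (\ref{eq:redde})-(\ref{eq:later2}) can be sketched as follows. This is a simplified illustration under stated assumptions: all participants are taken to start at hour 1, the carried-over load is treated like ordinary demand in the reduction rule, and all numbers are toy data.

```python
import numpy as np

def shift_load(x, g_d, part, gamma):
    """Sketch of the shifting rule: at each hour, participants move the part of
    their demand exceeding the suggested average to the next hour; no reduction
    is applied at the final hour, so total daily demand is conserved."""
    N, H = x.shape
    y = np.zeros_like(x)
    carry = np.zeros(N)                       # r_i^{h-1}: load shifted from hour h-1
    for h in range(H):
        demand = x[:, h] + carry              # current demand, including carry-over
        r = np.zeros(N)
        if h < H - 1 and g_d[h] < demand.sum():   # reduce only if target is exceeded
            # average demand suggested for participants (part must be nonempty)
            avg = (g_d[h] - demand[~part].sum()) / part.sum()
            r[part] = gamma[part] * np.maximum(0.0, demand[part] - avg)
        y[:, h] = demand - r
        carry = r
    return y

x = np.array([[2.0, 1.0, 3.0, 2.0],
              [1.0, 2.0, 2.0, 1.0],
              [3.0, 1.0, 1.0, 3.0]])
g_d = np.array([5.0, 5.0, 5.0, 9.0])          # hypothetical target profile
part = np.array([True, True, False])          # customers 1 and 2 participate
gamma = np.array([1.0, 0.5, 0.0])
y = shift_load(x, g_d, part, gamma)

# each customer's total daily demand is unchanged by the shifting
assert np.allclose(y.sum(axis=1), x.sum(axis=1))
```

The final assertion checks the conservation property used later in the game formulation: shifting changes when energy is consumed, not how much.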
\vspace{-0cm} \section{Game-theoretic Formulation for Demand-Side management}\label{sec:game}\vspace{-0cm} In this section, we first formulate a noncooperative game between the customers, and then study the proposed load shifting model using expected utility theory~\cite{GT00} and prospect theory~\cite{kahneman1979prospect}, also discussing their various properties.\vspace{-0.1cm} \subsection{Noncooperative Game Model}\label{sec:gameconcept}\vspace{-0cm} In order to analyze the interactions between customers, we use noncooperative game theory~\cite{GT00}, as the strategy choices of the customers are interdependent. We formulate a strategic noncooperative DSM game $\Xi=(\mathcal{N},\{\mathcal{A}_i\}_{i\in\mathcal{N}},\{u_i\}_{i\in \mathcal{N}})$, where $\mathcal{N}$ is the set of players, the action $a_i \in \mathcal{A}_i := \{1, 2, \ldots, H\}$ of customer $i$ is the time (hour) at which customer $i$ would like to begin participation in DSM, and the cost function $u_i$ of customer $i$ captures its electricity payment to the company, using the price in (\ref{eq:price}). Here, we note that, although a customer can participate in load management, its total daily demand remains the same (i.e., $\sum_{h=1}^{H} x_i^h=\sum_{h=1}^{H} y_i^h$). Thus, the utility function (cost) for a player $i \in \mathcal{N}$ that chooses an action $a_i$ is given by \begin{equation}\label{eq:utility} \begin{split} u_i(a_i, \boldsymbol{a}_{-i})=\sum_{h=1}^H c_i^h\biggl(\sum_{i \in \mathcal{N}} y_i^h(a_i,\boldsymbol{a}_{-i}) \biggr)\times y_i^h(a_i,\boldsymbol{a}_{-i}), \end{split} \end{equation} where $\boldsymbol{a}_{-i}=[a_1, a_2, \dots, a_{i-1}, a_{i+1}, \dots, a_N]^T$ is the vector of action choices of all players other than $i$, and $y_i^h(a_i,\boldsymbol{a}_{-i})$ is the DSM demand of user $i$, compared to the initial demand $x_i^h$ in (\ref{eq:x}). For example, each customer can determine a starting hour $a_i$ and shift its load from that hour onward.
The goal of each customer $i$ is to choose a strategy $a_i \in \mathcal{A}_i$ so as to minimize its cost as given in (\ref{eq:utility}). To characterize a desirable outcome of the studied game $\Xi$, one must derive a common solution concept that captures the coupling between the customers' individual optimization problems. We define $\boldsymbol{a}=(a_i, \boldsymbol{a}_{-i})$ as the vector of all players' strategies. For each such vector $\boldsymbol{a}$, we will have a different electricity price $c_i^h(\cdot)$ in (\ref{eq:price}). Prior to finding a solution for the proposed DSM game, we will introduce expected utility theory and prospect theory. \vspace{-0.1cm} \subsection{Expected Utility Theory (EUT)}\vspace{-0cm} In a smart grid, as the customers may, over a long time period, change their DSM preferences, we are interested in studying the frequency with which they choose a certain time to begin DSM participation. Therefore, we mainly study the proposed game under \emph{mixed strategies}~\cite{GT00} so as to capture the customers' long-term, probabilistic choices of a DSM start time. Let $\boldsymbol{p}= [\boldsymbol{p}_1,\ldots,\boldsymbol{p}_N]$ be the vector of all mixed strategies, where, for every customer $i\in \mathcal{N}$, we have $\boldsymbol{p}_i=[p_i(1),\ldots, p_i(H)]^T$ and $p_i(a_i)$ is the probability corresponding to the choice of a pure strategy $a_i \in \mathcal{A}_i$. Under the conventional EUT model, the expected cost of customer $i$ is captured via the expected value over its mixed strategy. Computing each user's utility requires the vector of all players' strategies. In particular, we assume that the smart grid communication infrastructure will make this information available to users who participate in DSM.
Thus, the EUT cost of a player $i$ will be given by \vspace{-0cm} \begin{equation} \label{eq:multiplayerET}\vspace{-0cm} \begin{split} &U_i^{\text{EUT}}( \boldsymbol{p})=\sum_{\boldsymbol{a} \in \mathcal{A}}\bigg(\prod_{l=1}^N p_l(a_l)\bigg) u_i(a_i, \boldsymbol{a}_{-i}), \end{split} \end{equation} where $\boldsymbol{a}$ is the vector of all players' strategies and $\mathcal{A}=\mathcal{A}_1 \times \mathcal{A}_2 \times \dots \times \mathcal{A}_N$. In particular, the mixed strategy $\boldsymbol{p}(h)=[p_i(h), \boldsymbol{p}_{-i}(h)]^T$ represents the empirical frequencies with which the customers choose a certain participation time $h$. These frequencies capture how often customer $i$ participates in DSM. Here, $\boldsymbol{p}_{-i}(h)$ is the vector of mixed strategies of all players other than $i$, corresponding to $\boldsymbol{a}_{-i}$ in (\ref{eq:utility}). \vspace{-0.1cm} \subsection{Prospect Theory}\label{sec:pt}\vspace{-0cm} As previously mentioned, a player can evaluate its expected utility by using (\ref{eq:multiplayerET}), in which case the customers are assumed to act rationally and objectively under EUT. However, in real life, individuals do not truly behave rationally, nor do they trust the rationality of others' behavior. Thus, in order to develop a realistic model of DSM, one must account for the fact that, in practice, customers may not assess their utilities objectively. Indeed, it has been shown that, despite the benefits of DSM, its adoption in practice has remained slow, due to unexpected customer behavior~\cite{FAHEY}. To study such realistic/practical participation models, we will develop a DSM game model in which a customer may have subjective views on how the opposing players will choose their strategies. This, in turn, impacts the way in which this customer evaluates its utility in (\ref{eq:utility}), which depends on other customers' strategies.
Indeed, it has been observed that in real-life decision-making, people tend to subjectively weight uncertain outcomes~\cite{prelec1998probability}. Uncertainty, here, is defined as the fact that the amount of utility that a customer will obtain, depends on the decision making behavior of other customers, which is probabilistic. For example, a customer cannot be sure of the time at which other customers will begin their participation in DSM. Thus, the customer will not be sure how much economic gain its participation in DSM will yield; since such a participation depends on all the players' strategies. Thus, this customer's EUT evaluation as per (\ref{eq:multiplayerET}) might be \emph{overweighted} or \emph{underweighted} due to the subjective observation of others. Moreover, this customer may need to properly assess whether to shift its demand or not, as it is unsure of other customers' strategies. In this respect, a customer might have its own subjective perception on the participation of other customers in the DSM game (and on the actual price), thus deviating from the rational assumption of conventional game theory and EUT. In order to analyze such subjective perceptions, we will use the behavioral framework of prospect theory~\cite{kahneman1979prospect}. In this studied model, we mainly focus on how a customer evaluates the strategies of its opponents and, thus, acts accordingly. Our focus is on the prospect-theoretic perspective, which primarily deals with how human decision makers deal with economic outcomes that have some form of uncertainty. One important PT concept is the so-called \emph{weighting effect}. Weighting effect naturally implies that customers will have subjective views on how their opponents will act (i.e., a weighted view on the probability vector of those players), which, in turn, maps to subjective views on their expected utilities. 
In the proposed game, we use this weighting effect to incorporate a subjective evaluation for each user's observation of the mixed strategy of its opponents. Thus, under PT, instead of objectively observing the mixed strategy vector $\boldsymbol{p_{-i}}$ chosen by the other players, each user perceives a weighted version of it, $w_i(\boldsymbol{p_{-i}})$, where $w_i(\cdot)$ is a nonlinear transformation that maps an objective probability to a subjective one. PT studies have shown that most people will often overweight low probability outcomes and underweight high probability outcomes \cite{kahneman1979prospect}. Without loss of generality, we assume that all players use a similar weighting approach, such that $w_i(\cdot)=w(\cdot),\ \forall i \in \mathcal{N}$. Although many weighting functions exist, we choose the widely used Prelec function (for a given probability $\sigma$)~\cite{prelec1998probability}, \vspace{-0cm} \begin{equation} \label{eq:weight}\vspace{-0.1cm} w(\sigma)=\exp(-(-\ln \sigma)^\alpha),\ 0<\alpha \le 1, \end{equation} where $\alpha$ is a parameter used to characterize the distortion between subjective and objective probability. Note that when $\alpha=1$, the weighting effect will coincide with the conventional EUT probability. Fig.~\ref{fig:probaweight} illustrates the probability weighting effect. In this figure, we can see that the objective probability and the subjective estimation intersect at $p=1/e$, and the curve approaches EUT as $\alpha$ increases. Also, weighted probabilities do not necessarily sum to $1$ (e.g., $w(0.4)+w(0.6)<1$).
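The Prelec function in (\ref{eq:weight}), its fixed point at $1/e$, its non-additivity, and its reduction to the objective probability at $\alpha=1$ can all be checked numerically; a minimal Python sketch:

```python
import math

def prelec(sigma, alpha):
    """Prelec probability weighting w(sigma) = exp(-(-ln sigma)^alpha)."""
    return math.exp(-((-math.log(sigma)) ** alpha))

# Fixed point at sigma = 1/e for every alpha, since -ln(1/e) = 1.
print(prelec(1 / math.e, 0.7))              # 1/e ≈ 0.3679
# Weighted probabilities need not sum to one:
print(prelec(0.4, 0.7) + prelec(0.6, 0.7))  # < 1
# alpha = 1 recovers the objective probability (EUT):
print(prelec(0.25, 1.0))                    # 0.25
```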
\begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{probaweight.eps} \vspace{-0.5cm} \caption{\label{fig:probaweight} The subjective probability under the weighting effect, as the objective probability varies.} \end{center}\vspace{-0.9cm} \end{figure} Under PT, the expected utility achieved by a player $i$, given the weighting effect, is \vspace{-0cm} \begin{equation} \label{eq:multiplayerPT}\vspace{-0cm} \begin{split} &U_i^{\text{PT}}( \boldsymbol{p}) = \sum_{\boldsymbol{a} \in \mathcal{A}}\!\! \bigg(p_i(a_i)\!\!\!\!\!\! \prod_{l \in \mathcal{N} \setminus \{i\}}\!\!\!\!\!\! w(p_l(a_l))\bigg) u_i(a_i, \boldsymbol{a}_{-i}). \end{split} \end{equation} Here, we assume that a player has a subjective evaluation only of the other players' strategy probabilities. In this respect, customer $i$'s subjective evaluation of its own probability is equal to its objective probability. Having defined the cost functions under both EUT and PT, our next goal is to find a solution for the game. Given the set of probability distributions $\mathcal{P}_i$ over its set of strategies $\mathcal{A}_i$, a suitable solution of the game would be a mixed-strategy Nash equilibrium, defined as follows: \begin{definition} A mixed strategy profile $\boldsymbol{p}^* \in \mathcal{P}=\prod_{i=1}^N \mathcal{P}_i $ is a \emph{mixed strategy Nash equilibrium} (NE) if, for each customer $i \in \{1,2,\ldots, N\}$, we have ($U_i$ represents $U_i^\text{EUT}$ and $U_i^\text{PT}$ under EUT and PT, respectively)\vspace{-0cm} \begin{equation} U_i(p_i^*,\boldsymbol{p}_{-i}^*) \le U_i(p_i,\boldsymbol{p}_{-i}^*), \ \forall p_i \in \mathcal{P}_i. \end{equation} \end{definition} In practice, to avoid slow convergence times and unnecessary overhead, we can consider approximate equilibrium solutions that allow the players to reach the neighborhood of an equilibrium.
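As a sketch of (\ref{eq:multiplayerPT}), the following illustrative Python code weights only the opponents' probabilities with the Prelec function; the two-player cost function is a hypothetical example, and setting $\alpha=1$ recovers the EUT value:

```python
import itertools
import math

import numpy as np

def prelec(sigma, alpha):
    """Prelec weighting, extended by continuity at sigma = 0."""
    return math.exp(-((-math.log(sigma)) ** alpha)) if sigma > 0 else 0.0

def pt_cost(i, p, u, alpha):
    """PT expected cost of player i: its own probability enters objectively,
    while each opponent's probability is Prelec-weighted."""
    total = 0.0
    for a in itertools.product(*(range(len(p_l)) for p_l in p)):
        prob = p[i][a[i]]
        for l in range(len(p)):
            if l != i:
                prob *= prelec(p[l][a[l]], alpha)
        total += prob * u(i, a)
    return total

# Hypothetical two-player example (cost = sum of chosen indices).
p = [np.array([0.5, 0.5]), np.array([0.3, 0.7])]
u = lambda i, a: a[0] + a[1]
print(pt_cost(0, p, u, 1.0))  # ≈ 1.2: alpha = 1 recovers the EUT value
print(pt_cost(0, p, u, 0.7))  # differs from 1.2 due to the weighting effect
```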
Hence, we mainly focus on $\epsilon$-Nash equilibria, which are defined as follows: \begin{definition}\label{def:eNE} A mixed strategy profile $\boldsymbol{p}^* \in \mathcal{P}=\prod_{i=1}^N \mathcal{P}_i $ is an \emph{$\epsilon$-Nash equilibrium ($\epsilon$-NE)} if, for every player $i$, we have\vspace{-0cm} \begin{equation}\label{eq:epsilonNE} U_i(p_i^*,\boldsymbol{p}_{-i}^*) \le U_i(p_i,\boldsymbol{p}_{-i}^*)+\epsilon, \ \forall p_i \in \mathcal{P}_i, \end{equation} where $\epsilon$ is a small positive number. \end{definition} In order to find the solution for the game $\Xi$ under both EUT and PT, we find a mixed $\epsilon$-NE in strategic form which represents a point within a close neighborhood of the exact equilibrium~\cite{GT00}. \vspace{-0cm} \section{Game Solution and Algorithm}\label{sec:algo}\vspace{-0cm} In this section, we propose a novel algorithm, under both EUT and PT, to solve the studied DSM game and find an equilibrium point. The proposed algorithm builds on classical fictitious play (FP)~\cite{brown1951iterative}. Thus, we propose the following iterative algorithm to find an $\epsilon$-Nash equilibrium of the proposed game: \begin{equation} \label{eq:algo} \boldsymbol{p}_i^{(k+1)}=\boldsymbol{p}_i^{(k)}+\frac{\lambda}{k+1}(\boldsymbol{v}_i^{(k)}-\boldsymbol{p}_i^{(k)}), \end{equation} where $0<\lambda<1$ is an inertia weight, $k$ is the iteration index, and $\boldsymbol{v}_i^{(k)}=[v_i^{(k)}(a_1), v_i^{(k)}(a_2), \ldots, v_i^{(k)}(a_H)]^T$ is an indicator vector over player $i$'s strategies such that $v_i^{(k)}(a_l)=1$ if player $i$ chooses the $l$th strategy at iteration $k$ and $v_i^{(k)}(a_l)=0$ otherwise. The chosen pure strategy, i.e., the $l$th strategy, is the one that minimizes the expected cost with respect to the updated empirical frequencies.
Thus, player $i$ can repeatedly choose its strategy and update $\boldsymbol{v}_i^{(k)}$ as follows: \begin{equation}\label{eq:FPaction}\vspace{-0.1cm} v_i^{(k)}(a_l^{(k)})\!=\! \begin{cases}1, \text{if } a_l^{(k)}\!=\!\arg\min\limits_ {a_i \in \mathcal{A}_i}{u}_{i}(a_i, \boldsymbol{p}_{-i}^{(k-1)}),\!\!\!\!\\ 0, \text{otherwise,} \end{cases} \end{equation} where the utility is the expected cost obtained by player $i$ when it chooses pure strategy $a_l$, computed with respect to the opponents' mixed strategies under EUT and with respect to their weighted mixed strategies $w(p_l(a_l))$ under PT. It is well known that our algorithm (as a simplified iterative approach of smooth fictitious play (SFP)) is guaranteed to converge to a mixed $\epsilon$-Nash equilibrium~\cite{fudenberg1995consistency}, as the players' beliefs (probabilistic choices) converge to a fixed point. In particular, a player's belief is its mixed strategy set, and $\epsilon$ represents the utility difference based on beliefs. Here, we define $\epsilon_p$ as the corresponding difference in mixed strategies (beliefs). SFP can reach an $\epsilon$-NE under EUT, in which the belief difference $\epsilon_p$ between two iterations becomes small as the number of iterations $k$ goes to infinity. However, to our knowledge, such a result has not been extended to PT. Here, we first prove that the proposed algorithm in (\ref{eq:algo}) will converge to a fixed point in beliefs, as follows: \begin{theorem}\label{th:cov1} There exists an inertia weight $\lambda$, $0<\lambda<1$, such that the iterative algorithm in (\ref{eq:algo}) converges to a fixed point in belief, as a mixed $\epsilon_p$-equilibrium ($\epsilon_p \le \frac{\sqrt{2}}{k+1}$). \end{theorem} \begin{proof} For the proposed DSM game, the proposed SFP process is guaranteed to converge to a fixed point in beliefs~\cite{fudenberg1995consistency} under EUT.
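A single update of (\ref{eq:algo}), given the best-response index from (\ref{eq:FPaction}), can be sketched as follows; since the update is a convex combination of $\boldsymbol{p}_i^{(k)}$ and $\boldsymbol{v}_i^{(k)}$, it keeps the iterate on the probability simplex (illustrative Python with hypothetical numbers):

```python
import numpy as np

def inertia_fp_step(p_i, best_idx, k, lam):
    """One update of (eq:algo): p^(k+1) = p^(k) + lam/(k+1) * (v^(k) - p^(k)),
    where v^(k) is the indicator of the best-response pure strategy."""
    v = np.zeros_like(p_i)
    v[best_idx] = 1.0
    return p_i + (lam / (k + 1)) * (v - p_i)

p = np.array([0.25, 0.25, 0.25, 0.25])
p = inertia_fp_step(p, best_idx=2, k=1, lam=0.5)
print(p)        # [0.1875 0.1875 0.4375 0.1875]
print(p.sum())  # 1.0 -- the update stays on the probability simplex
```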
Here, we mainly prove that, under PT, a player $i$'s iterative sequence $\{\boldsymbol{p}_i(k)\}$ converges to a mixed $\epsilon_p$-equilibrium in beliefs. From (\ref{eq:algo}), we have \begin{equation}\label{eq:pik1} \begin{split} \boldsymbol{p}_i^{(k+1)}=&\biggl(1-\frac{\lambda}{k+1}\biggr)\boldsymbol{p}_i^{(k)}+\frac{\lambda}{k+1}\boldsymbol{v}_i^{(k)}\\ =&\biggl(1-\frac{1}{k+1}\biggr)\boldsymbol{p}_i^{(k)}+\frac{1}{k+1}\boldsymbol{v}_i^{(k)}+\frac{1-\lambda}{k+1}\boldsymbol{p}_i^{(k)}\\ &-\frac{1-\lambda}{k+1}\boldsymbol{v}_i^{(k)}, \end{split} \end{equation} where the first two terms represent the FP iteration~\cite{brown1951iterative}, and we define \begin{equation}\label{eq:epk}\vspace{-0.1cm} \epsilon_p=\biggl|\frac{1-\lambda}{k+1}\boldsymbol{p}_i^{(k)}-\frac{1-\lambda}{k+1}\boldsymbol{v}_i^{(k)}\biggr|. \end{equation} Equations (\ref{eq:pik1}) and (\ref{eq:epk}) express the belief difference between the proposed algorithm and FP, which amounts to an $\epsilon_p$ gap at the $k$th iteration. In particular, the distance between the FP belief and the belief reached by the proposed algorithm can be bounded as follows: \begin{equation}\label{eq:epdis} \begin{split} \epsilon_p = \frac{1-\lambda}{k+1}| (\boldsymbol{p}_i^{(k)}-\boldsymbol{v}_i^{(k)})| \le \frac{1-\lambda}{k+1}\cdot \sqrt{2}, \end{split} \end{equation} where $\epsilon_p$ becomes small as $k$ increases. Indeed, the Euclidean distance in (\ref{eq:epdis}) involves $\boldsymbol{v}_i^{(k)}$ in (\ref{eq:FPaction}), exactly one of whose components is $1$, and the mixed strategy $\boldsymbol{p}_i^{(k)}$, whose components sum to $1$. Thus, the Euclidean distance between $\boldsymbol{p}_i^{(k)}$ and $\boldsymbol{v}_i^{(k)}$ is at most $\sqrt{2}$, so that $\epsilon_p \le \frac{\sqrt{2}}{k+1}$ at the $k$th iteration.
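The bound in (\ref{eq:epdis}) rests on the fact that the Euclidean distance between any probability vector and any one-hot vector is at most $\sqrt{2}$; a quick, purely illustrative numerical check with random strategies:

```python
import numpy as np

rng = np.random.default_rng(0)
max_dist = 0.0
for _ in range(10000):
    p = rng.dirichlet(np.ones(4))    # random mixed strategy on the simplex
    v = np.eye(4)[rng.integers(4)]   # random one-hot best-response vector
    max_dist = max(max_dist, np.linalg.norm(p - v))
print(max_dist)  # never exceeds sqrt(2) ≈ 1.414
```

The worst case is attained when $\boldsymbol{p}_i^{(k)}$ puts all mass on a strategy different from the best response, giving distance exactly $\sqrt{2}$.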
Within the given boundary, (\ref{eq:pik1}) can be rewritten as \begin{equation}\label{eq:pik2}\vspace{-0cm} \begin{split} \boldsymbol{p}_i^{(k+1)}\!\!=&\biggl(1-\frac{\lambda}{k+1}\biggr)\boldsymbol{p}_i^{(k)}+\frac{\lambda}{k+1}\boldsymbol{v}_i^{(k)}\\ =&\biggl(1-\frac{\lambda}{k+1}\biggr)\biggl(1-\frac{\lambda}{k}\biggr)\boldsymbol{p}_i^{(k-1)}+\frac{\lambda}{k+1}\boldsymbol{v}_i^{(k)}\\ &+\biggl(1-\frac{\lambda}{k+1}\biggr)\frac{\lambda}{k}\boldsymbol{v}_i^{(k-1)}\\ =&\cdots\\ =&\biggl(1-\frac{\lambda}{k+1}\biggr)\biggl(1-\frac{\lambda}{k}\biggr)\cdots\biggl(1-\frac{\lambda}{2}\biggr)\boldsymbol{p}_i^{(1)}\\ &+\frac{\lambda}{k+1}\boldsymbol{v}_i^{(k)}+\biggl(1-\frac{\lambda}{k+1}\biggr)\frac{\lambda}{k}\boldsymbol{v}_i^{(k-1)}+\cdots\\ &+\biggl(\prod_{j=2}^{k}\bigl(1-\frac{\lambda}{j+1}\bigr)\biggr)\frac{\lambda}{2}\boldsymbol{v}_i^{(1)}. \end{split} \end{equation} The first term in (\ref{eq:pik2}) is $\biggl(\prod_{j=1}^{k}(1-\frac{\lambda}{j+1})\biggr)\boldsymbol{p}_i^{(1)}$. Since $\prod_{j=1}^{k}(1-\frac{\lambda}{j+1})$ is decreasing in $k$ and bounded below by $0$, with $|\boldsymbol{p}_i^{(1)}|\le 1$, the first term is convergent as $k$ goes to infinity. In fact, this term converges to $0$, since the series $\sum_{j}\frac{\lambda}{j+1}$ diverges. For the remaining terms in (\ref{eq:pik2}), we can define \begin{equation} \begin{split} h_j=&\biggl(1-\frac{\lambda}{k+1}\biggr)\biggl(1-\frac{\lambda}{k}\biggr)\cdots\biggl(1-\frac{\lambda}{j+2}\biggr)\frac{\lambda}{j+1}\boldsymbol{v}_i^{(j)}\\ =&\biggl(\prod_{m=j+1}^{k}\bigl(1-\frac{\lambda}{m+1}\bigr)\biggr)\frac{\lambda}{j+1}\boldsymbol{v}_i^{(j)}. \end{split} \end{equation} Similarly, since each $|h_j|$ is decreasing as $k$ increases and bounded by $|\boldsymbol{v}_i^{(j)}|=1$, and the coefficients of $\boldsymbol{p}_i^{(1)},\boldsymbol{v}_i^{(1)},\ldots,\boldsymbol{v}_i^{(k)}$ in (\ref{eq:pik2}) sum to one, $\sum_{j=1}^{k} h_j$ is convergent as $k$ goes to infinity. Thus, we conclude that $\boldsymbol{p}_i^{(k+1)}$ will converge to a fixed point, as a mixed $\epsilon_p$-equilibrium in beliefs using (\ref{eq:algo}).
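That the leading coefficient $\prod_{j=1}^{k}(1-\frac{\lambda}{j+1})$ vanishes can also be checked numerically: it behaves like $\exp(-\lambda\sum_j \frac{1}{j+1})$, and the harmonic series diverges (illustrative Python):

```python
import numpy as np

lam = 0.5
j = np.arange(1, 100001)
coeff = np.prod(1 - lam / (j + 1))
print(coeff)  # small and still shrinking: the product tends to 0
```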
\end{proof} \begin{remark} \label{rmk:fp} When $\lambda=1$, the proposed algorithm in (\ref{eq:pik1}) is reduced to FP. Hence (\ref{eq:pik1}) and (\ref{eq:pik2}) can be derived as $\boldsymbol{p}_i^{(k+1)} =\frac{1}{k+1}\boldsymbol{p}_i^{(1)}+\frac{1}{k+1}(\boldsymbol{v}_i^{(k)}+\cdots+\boldsymbol{v}_i^{(1)})$. Thus, FP ($\lambda=1$) might cycle if $\boldsymbol{v}_i^{(k)}$ repeats after some iteration, i.e., $\boldsymbol{v}_i^{(2)}=\boldsymbol{v}_i^{(4)}=\boldsymbol{v}_i^{(6)}=\cdots$, and $\boldsymbol{v}_i^{(1)}=\boldsymbol{v}_i^{(3)}=\boldsymbol{v}_i^{(5)}=\cdots$, while the proposed algorithm ($0<\lambda<1$) in (\ref{eq:algo}) converges to a fixed point. \end{remark} \begin{theorem}\label{th:cov2} For the proposed DSM game, the proposed algorithm in (\ref{eq:algo}) is guaranteed to converge to a mixed $\epsilon$-NE under both EUT and PT, as its beliefs converge to a mixed $\epsilon_p$-equilibrium. \end{theorem} The proof of Theorem~\ref{th:cov2} is given in the appendix. \begin{table}[!t]\vspace{-0.2cm} \centering \caption{ \vspace*{-0.1em}Proposed Load Shifting Solution}\vspace*{-0.5em} \begin{tabular}{p{8cm}} \hline\vspace*{+0.05em} \textbf{Phase 1 - Proposed Dynamics:} \vspace*{.1em}\\ \hspace*{1em}Each customer $i \in \mathcal{N}$ chooses a starting strategy $\boldsymbol{p}_i^{\textrm{init}}$ as its mixed strategy of participation.\vspace*{.1em}\\ \hspace*{2em}\textbf{repeat,}\vspace*{.2em}\\ \hspace*{1em}1) Each customer $i \in \mathcal{N}$ observes others' participation $\boldsymbol{p}_{-i}$\vspace*{.1em}\\ \hspace*{1em}2) Each customer $i\! \in\! 
\mathcal{N}$ updates its better response strategy using\\ \hspace*{1em}the proposed algorithm in (\ref{eq:algo}):\\ \hspace*{3em}i) The utility operator communicates with the customer\vspace*{.1em}\\ \hspace*{3em}using the grid's two-way communication architecture\vspace*{.1em}\\ \hspace*{3em}(see \cite{EW03} or \cite{hossain2012smart} and references therein).\vspace*{.1em}\\ \hspace*{3em}ii) Customers' loads can be shifted via DSM as in Section~\ref{sec:ls}.\vspace*{.1em}\\ \hspace*{4em}\textbf{Load Shedding}\vspace*{.1em}\\ \hspace*{4em}a) The utility operator advertises all customers' participation\vspace*{.1em}\\ \hspace*{4em}times $a_i \in \mathcal{A}_i$ using their mixed strategies.\vspace*{.1em}\\ \hspace*{4em}b) Each customer publishes its participation $p_i^\text{PT}$ based on a \\ \hspace*{4em}mixed strategy $\boldsymbol{p}^\text{EUT}$, representing the subjective observation\\ \hspace*{4em}with the underlying weighting effect.\vspace*{.1em}\\ \hspace*{4em}c) After combining probabilities as in (\ref{eq:multiplayerET}), customer $i$ will\\ \hspace*{4em}receive an expected cost under EUT, sent by the operator,\\ \hspace*{4em}and observe the current vector of strategies $\boldsymbol{p}_{-i}$ so as to\\ \hspace*{4em}assess its subjective utility in (\ref{eq:multiplayerPT}) under PT.\vspace*{.1em}\\ \hspace*{4em}d) Customer $i$ chooses its strategic response in (\ref{eq:algo}). \vspace*{.1em}\\ \hspace*{2em}\textbf{until} convergence to a mixed NE strategy vector $\boldsymbol{p}^*$.\vspace*{.2em}\\ \textbf{Phase 2 - Power Company Strategy} \vspace*{.1em}\\ \hspace*{1em}1) The operator receives the participation information\vspace*{.1em}\\ \hspace*{1em}given the mixed strategy set as per $\boldsymbol{a}^*$.\vspace*{.1em}\\ \hspace*{1em}2) Actual load shifting is implemented under realistic participation.\vspace*{.1em}\\ \hline \end{tabular}\label{tab:algo}\vspace{-0.7cm} \end{table} We summarize the proposed DSM solution in Table~\ref{tab:algo}.
To find the solution of this game under both EUT and PT, we use the algorithm in (\ref{eq:algo}) to solve for the mixed $\epsilon$-NE. In the first phase of the algorithm, each participating customer sets an initial probability vector, while non-participating customers set their probabilities to $0$ throughout the whole DSM process. At the beginning, each participating customer observes others' strategies and evaluates its expected cost for every strategy. In the evaluation process, a participating customer can receive an estimate of the expected costs from the grid operator under EUT, since the grid operator objectively collects information and provides the required services/information. Then, the customers will overweight or underweight the information about others' strategies based on the individual weighting effect in (\ref{eq:weight}). Moreover, each customer will subjectively estimate its expected cost for every strategy and then frequently report its participation based on its mixed strategy. The communication between the customers and the grid operator continues until the expected costs in (\ref{eq:multiplayerET}) and (\ref{eq:multiplayerPT}) satisfy the $\epsilon$-NE condition in (\ref{eq:epsilonNE}). Once a mixed $\epsilon$-NE is reached, the customers will frequently signal their participation decisions based on these probabilities and participate in demand-side management in Phase $2$. This phase of the proposed load shifting solution is the practical market operation. Given the submitted information, the grid operator will shift/reduce local loads. The actual process of Phase $2$ is beyond the scope of this paper and will follow real-world contract negotiations that could be interesting to study in future work.
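Phase 1 of Table~\ref{tab:algo} can be sketched end-to-end as follows. The code is illustrative only: the anti-coordination cost function is a hypothetical stand-in for (\ref{eq:utility}), with two customers who each pick one of two slots and pay only when they collide, so the dynamics should separate them.

```python
import itertools
import math

import numpy as np

def prelec(sigma, alpha):
    """Prelec weighting; guards sigma = 0 by continuity."""
    return math.exp(-((-math.log(sigma)) ** alpha)) if sigma > 0 else 0.0

def perceived_cost(i, a_i, p, u, alpha):
    """Expected cost of pure strategy a_i against Prelec-weighted opponent
    mixes (alpha = 1 gives the objective, EUT evaluation)."""
    others = [l for l in range(len(p)) if l != i]
    total = 0.0
    for combo in itertools.product(*(range(len(p[l])) for l in others)):
        a = [None] * len(p)
        a[i] = a_i
        weight = 1.0
        for l, a_l in zip(others, combo):
            a[l] = a_l
            weight *= prelec(p[l][a_l], alpha)
        total += weight * u(i, tuple(a))
    return total

def run_phase1(u, N, H, alpha, lam=0.5, iters=2000):
    """Phase 1 dynamics: repeated best response with inertia, as in (eq:algo)."""
    p = [np.full(H, 1.0 / H) for _ in range(N)]
    for k in range(1, iters + 1):
        for i in range(N):
            br = min(range(H), key=lambda a: perceived_cost(i, a, p, u, alpha))
            v = np.zeros(H)
            v[br] = 1.0
            p[i] = p[i] + (lam / (k + 1)) * (v - p[i])
    return p

# Hypothetical anti-coordination cost: a customer pays only if it picks the
# same slot as the other customer.
u = lambda i, a: 1.0 if a[0] == a[1] else 0.0
p = run_phase1(u, N=2, H=2, alpha=0.7)
print(p)  # the two customers settle on different slots
```

Under this toy cost, the two mixed strategies drift toward opposite pure strategies, i.e., a pure anti-coordination equilibrium.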
\vspace{-0.1cm} \section{Simulation Results and Analysis}\label{sec:sim}\vspace{-0cm} For our simulations, we use the real-world load profile in~\cite{web}, which represents the customers' initial demands, i.e., the data between April $29$th, 2013, and May $4$th, 2013, from Miami International Airport; we use this dataset since load data for individual customer houses or groups is private and confidential. In all simulations, each customer can choose a starting time to participate in DSM from the time period between $18$:$00$ and $20$:$00$. Alternatively, the customer can decide not to participate; that is, $\mathcal{A}_i = \{18, 19, 20, 24\}, \forall i \in \mathcal{N}$. Also, we set $\beta=0.86$ and $\gamma=0.6$ for all customers. \begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{probahistP1.eps} \vspace{-0.5cm} \caption{\label{fig:histP1} The probability performance of all mixed strategic participations for Customer $1$ under both EUT and PT.} \end{center}\vspace{-0.2cm} \end{figure} \begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{probahist1900.eps} \vspace{-0.5cm} \caption{\label{fig:hist1900} The mixed strategic participations for the six customers using both EUT and PT at $19$:$00$.} \end{center}\vspace{-0.9cm} \end{figure} Fig.~\ref{fig:histP1} shows, for a smart grid with six customers, all mixed strategies of a selected Customer $1$ under both EUT and PT. In this figure, we choose $\alpha=0.7$ for all customers to represent their distortion under the weighting effect in (\ref{eq:weight}). Clearly, Customer $1$'s PT strategy is different from its EUT strategy. Compared to the EUT results (solid lines), the PT participating strategy (dashed line) at $20$:$00$ is larger, while the other PT mixed strategies are smaller. In essence, we observe that PT generally enhances (reduces) high (low) frequency EUT strategies.
For instance, under EUT, Customer $1$ will have the highest participation strategy at $20$:$00$ due to load shifting as per (\ref{eq:redde}) and the varying price in (\ref{eq:price}). In particular, its largest mixed strategy is greater than the average mixed strategy, i.e., $0.25$ in the proposed four-strategy game. Second, because of the weighting effect in (\ref{eq:weight}), in Fig.~\ref{fig:histP1}, we can see how PT behavior can differ from EUT. In particular, we observe that a PT customer wants to participate more than in the EUT case at times when it has low hourly energy prices in (\ref{eq:price}), which also corresponds to a situation with high participation under EUT. In other words, to minimize its cost under PT, each customer tends to increase its participation at the hours with lower hourly prices in (\ref{eq:price}) and to decrease its participation at the hours with higher hourly prices. This is mainly due to the relationship between shifting the demand and the resulting price as captured by (\ref{eq:price}) and (\ref{eq:utility}). Thus, under realistic behavioral considerations, customers will accentuate the traditional behavior perceived under EUT. This will consequently impact the overall DSM performance as shown in the next figures. Accordingly, under PT, we can observe that higher EUT probabilities will become more pronounced. Thus, the largest mixed strategy under EUT, i.e., at $20$:$00$, will be overweighted via the PT observation, and vice versa. This is due to the fact that each PT customer takes a risk by participating when the hourly costs are low. Fig.~\ref{fig:hist1900} shows, using the same parameters as Fig.~\ref{fig:histP1}, the probability that each customer participates in DSM at $19$:$00$ under both EUT and PT. In this figure, we can first see that the mixed strategy of each customer using PT is different from that resulting from EUT.
Under EUT, the rational mixed strategies of some customers, such as Customers $3$-$6$, are higher than the average mixed strategy, because they have low costs at $19$:$00$. In particular, given the price in (\ref{eq:price}), a customer's lowest demand between $18$:$00$ and $20$:$00$ would cause the lowest costs and the highest participation. This implies that a rational customer wants to participate in DSM when it does not require a lot of energy (or its demand is low) in practice, since the payment is low. Under PT, the customers' realistic decisions will impact the price in (\ref{eq:price}) and change their participation levels. In Fig.~\ref{fig:hist1900}, some customers' mixed strategic components using PT are greater than those of EUT, i.e., Customers $3$-$6$, while Customers $1$ and $2$ have a lower participation probability using PT. If four PT customers, based on their loads between $18$:$00$ and $24$:$00$, decide to shift more load at $19$:$00$, their PT demands will be less than under EUT. Then, compared to EUT, the PT prices of Customers $1$ and $2$ will increase because of the changing load fraction in (\ref{eq:price}). Because of these increasing prices, Customers $1$ and $2$'s costs increase and they will be less interested in participating at $19$:$00$. Indeed, there will exist some customers, such as Customers $1$ and $2$, who want to decrease their participation at $19$:$00$ under PT, as shown in Fig.~\ref{fig:histP1}. Then, the overall demand increases and Customers $3$-$6$ obtain a decreasing price in (\ref{eq:price}), which will make these four customers more likely to participate at $19$:$00$.
\begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{nonload.eps} \vspace{-0.5cm} \caption{\label{fig:nonload} The expected nonparticipating load for the six customer game under mixed strategies using both EUT and PT over $24$ hours, when all customers have the same $\alpha=0.7$.} \end{center}\vspace{-0.9cm} \end{figure} Fig.~\ref{fig:nonload} shows the expected nonparticipating load profile in the proposed DSM game as time varies. We choose the same customers as in Fig.~\ref{fig:histP1}, with the distortion parameter set to $\alpha = 0.7$ for all customers. The actual demand is the summation of the initial demands in (\ref{eq:actuald}). In Fig.~\ref{fig:nonload}, the nonparticipating load is the minimum expected load that the power company must supply, while the participating load represents the load that can be partly shifted based on each individual customer's action in (\ref{eq:redde}). Here, all customers can start the game from $18$:$00$, and the expected nonparticipating loads under EUT and PT are different. On the one hand, the difference between EUT and PT between $18$:$00$ and $20$:$00$ is due to the change in the customers' decisions, as previously shown in Fig.~\ref{fig:hist1900}. On the other hand, between $21$:$00$ and $23$:$00$, the customers' nonparticipating load using PT is always less than that using EUT. Indeed, if we translate the proposed game into a ``participate or not participate'' game, $21$:$00$ can be used to distinguish two such strategies, and customer behaviors before $21$:$00$ can impact their participation at a later time. However, the fact that the nonparticipation level for PT is less than that for EUT after $21$:$00$ directly relates to the choice of the distortion parameter $\alpha$. In particular, a small deviation from the rational strategy for $\alpha=0.7$ leads to an increased competition between the customers due to the fact that a weighted observation increases the costs in (\ref{eq:utility}).
Such a small deviation represents a case in which a customer participates in DSM but does not trust its view of the opponents' strategies or the information received from the power company. Thus, a slight deviation from the rational path causes increasing costs, and customers are more apt to shift their loads under PT to decrease the impact of being non-rational, compared to EUT. As a result, after $21$:$00$, as seen in Fig.~\ref{fig:nonload}, the PT nonparticipating load will be less than that for EUT. At $24$:$00$, the power company needs to deal with the remaining load as defined in (\ref{eq:later2}). \begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{nonloaddifal.eps} \vspace{-0.5cm} \caption{\label{fig:nonloaddifal} The expected nonparticipating load for the six customer game under mixed strategies using both EUT and PT over $24$ hours, when customers have different values of $\alpha$.} \end{center}\vspace{-0.9cm} \end{figure} Compared to Fig.~\ref{fig:nonload}, Fig.~\ref{fig:nonloaddifal} shows the expected nonparticipating load profile using different distortion parameters, as time varies. The distortion parameter $\alpha$ in (\ref{eq:weight}) allows us to measure how a customer perceives the actions of its opponents. A large $\alpha$ implies a small distortion, while a small $\alpha$ represents an excessively subjective perception. In particular, we choose $\alpha=[0.5\ 0.5\ 0.2\ 0.1\ 0.1\ 0.1]^T$. In this figure, we can see that, when some customers have a very irrational observation of their opponents, the PT nonparticipating load between $21$:$00$ and $23$:$00$ will be higher than under EUT. This implies that, in reality, if some customers deviate significantly from their rational strategies (for example, a customer forgets to assist the power company in load shifting), the power company will not be able to shift the total load predicted by the rational, objective model.
Thus, the power company can use the distortion parameter to decide on how to design its DSM scheme. \begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{alpha1900.eps} \vspace{-0.5cm} \caption{\label{fig:alpha1900} The expected nonparticipating load of all customers at $19$:$00$ as $\alpha$ varies.} \end{center}\vspace{-0.9cm} \end{figure} Fig.~\ref{fig:alpha1900} shows the expected nonparticipating load at $19$:$00$, as the weighting effect parameter $\alpha$ varies. In this figure, we can see that the expected nonparticipating load under EUT is $65.7\%$ of the total load and that the nonparticipating load under PT is less than that under EUT when $\alpha>0.56$. This is because the majority of PT customers have more interest in participating at $19$:$00$, as shown in Fig.~\ref{fig:hist1900}. Thus, the power company can shift more load in practice, compared to EUT. Also, this figure shows that there exists a distortion threshold such that, if $\alpha$ is greater (smaller) than the threshold, PT customers will have lower (higher) nonparticipating loads than in the EUT case. A large distortion parameter, or a small deviation from EUT, yields increased competition, thus raising the costs to the customers. Consequently, the customers will become risk seeking and more apt to shift their loads and decrease their costs. Thus, the increasing PT costs will force the majority to shift more loads, compared to EUT. However, a small distortion parameter, or a large deviation from EUT, will lead to highly distorting behavior from the customers, which will lead to increasingly high competition and decreasing participation, as customers become extremely risk averse and unwilling to participate in the DSM process. In a nutshell, Fig.~\ref{fig:alpha1900} shows that a small deviation from EUT may be beneficial for the power company as it increases customers' participation.
In contrast, a significant deviation from EUT will inevitably lead to highly risk averse behavior which will prevent most customers from participating; thus yielding detrimental results for the grid and preventing the operator from reaping the benefits of DSM. \begin{figure}[!t] \begin{center} \vspace{-0.2cm} \includegraphics[width=8cm]{iter.eps} \vspace{-0.5cm} \caption{\label{fig:iter} The probability performance of mixed strategy for six customers based on EUT and PT.} \end{center}\vspace{-0.2cm} \end{figure} In Fig.~\ref{fig:iter}, we show all PT mixed strategies of Customer $1$ (corresponding to Fig.~\ref{fig:histP1}), as the number of iterations increases. The proposed algorithm can be shown to converge \emph{sublinearly} by studying its rate of convergence, which is defined as $R=\lim\limits_{k \rightarrow \infty}\frac{|\boldsymbol{p}^{(k+1)}-\boldsymbol{p}^*|}{|\boldsymbol{p}^{(k)}-\boldsymbol{p}^*|}$. Here, we mainly test the convergence of the proposed algorithm in (\ref{eq:algo}). In particular, Customer $1$'s initial probability is $\boldsymbol{p}_1^{(0)}=[0.2500\ 0.3333\ 0.2083\ 0.2083]^T$ and it converges to $\boldsymbol{p}_1^{*}=[0.1110\ 0.1480\ 0.6484\ 0.0925]^T$. From Fig.~\ref{fig:histP1}, due to the low hourly costs, Customer $1$ will increase its participation at $20$:$00$, corresponding to the highest EUT frequency. In Fig.~\ref{fig:iter}, since the initial probability at $20$:$00$ is small, i.e., $p_1^{(0)}(a_3)=0.2083$, Customer $1$ will choose this as its best strategy and update $v_1(a_3)=1$ in (\ref{eq:FPaction}). Thus, we can see that the participating probability at $20$:$00$ increases as the iteration index $k$ increases. Similarly, because of the large initial probabilities of the other strategies, Customer $1$ will decrease the corresponding participation probabilities by updating $v_1(a_1)=v_1(a_2)=v_1(a_4)=0$ in (\ref{eq:FPaction}). Also, Table~\ref{tab:multi} shows all of Customer $1$'s mixed $\epsilon$-Nash equilibria under both EUT and PT.
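The sublinear rate can be illustrated on the simplest instance of (\ref{eq:algo}): holding the best response fixed at the limit point, the error contracts by a factor $1-\frac{\lambda}{k+1}$ per step, so the ratio $R$ tends to $1$ (a hypothetical one-player recursion, not our full game):

```python
import numpy as np

# Hypothetical one-player recursion: hold the best response fixed at the
# limit point p_star and track the ratio R = |p^(k+1)-p*| / |p^(k)-p*|.
lam = 0.5
p_star = np.array([0.0, 1.0])
p = np.array([0.5, 0.5])
prev = np.linalg.norm(p - p_star)
for k in range(1, 5001):
    p = p + (lam / (k + 1)) * (p_star - p)
    cur = np.linalg.norm(p - p_star)
    ratio = cur / prev
    prev = cur
print(ratio)  # approaches 1 from below: sublinear convergence
```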
Using different initial probability vectors $\boldsymbol{p}^{(0)}$, the proposed algorithm in (\ref{eq:algo}) will reach different Nash equilibria, and the differences between EUT and PT at two different Nash equilibria are not the same. \begin{table}[!t]\vspace{-0cm} \scriptsize \centering \caption{ \vspace*{-0em} Customer $1$'s All Mixed $\epsilon$-NE under both EUT and PT}\vspace*{-0.3cm} \begin{tabular}{|c|c|c|} \hline & EUT & PT \\ [0.5ex]\hline $1$ & $[0.1667\ 0.2222\ 0.4722\ 0.1389]^T$ & $[0.1110\ 0.1480\ 0.6484\ 0.0925]^T$ \\\hline $2$ & $[0.2667\ 0.1333\ 0.5333\ 0.0667]^T$ & $[0.1776\ 0.0888\ 0.6891\ 0.0444]^T$ \\\hline $3$ & $[0.0392\ 0.1569\ 0.4509\ 0.3530]^T$ & $[0.0261\ 0.1045\ 0.6343\ 0.2351]^T$ \\\hline $4$ & $[0.1159\ 0.2609\ 0.5072\ 0.1159]^T$ & $[0.0772\ 0.1738\ 0.6718\ 0.0772]^T$\\\hline $5$ & $[0.2222\ 0.0889\ 0.4666\ 0.2222]^T$ & $[0.1480\ 0.0592\ 0.6447\ 0.1480]^T$\\\hline \end{tabular} \label{tab:multi}\vspace{-0.7cm} \end{table} \vspace{-0.1cm} \section{Conclusions}\label{sec:conc}\vspace{-0cm} In this paper, we have introduced a novel approach for studying the problem of demand-side management that explicitly factors in the behavior of the customers. In particular, we have developed a game-theoretic approach, based on prospect theory, in which each player subjectively observes the other players' actions and determines its own actions so as to minimize a cost function that captures the electricity cost over $24$ hours. Then, we have proposed an algorithm and have proved that it reaches a mixed $\epsilon$-NE. Simulation results have shown that deviations from classical, objective game-theoretic DSM mechanisms can lead to unexpected results and loads on the grid, depending on the level of the customers' subjective perceptions of each others' actions. Therefore, whether or not a customer participates in DSM depends on this level of rationality of the customer.
In a nutshell, the results of this paper have provided important insights into the factors underlying the modest participation in DSM schemes observed in real-world smart grid systems. \vspace{-0.1cm}
\section{Introduction} Observations carried out over roughly the past twenty years have revealed a rapid evolution of cosmic sources, both normal, actively star-forming and AGN-dominated galaxies, over the last several billion years. This was mostly achieved through continuum rest-frame UV photometric imaging in the optical (e.g. \citealt{Lilly1995}), and through H$\alpha$ or [OII] line spectroscopy (e.g. \citealt{Gallego1995}), all, however, requiring very uncertain dust extinction corrections. \textit{Galex} has also been used for similar purposes by \cite{Martin2005.59M} and \cite{Bothwell2011}, among others. In the far-IR, the pioneering exploration by the \textit{IRAS} satellite revealed a particularly dramatic evolution of the galaxy LFs (\citealt{Saunders1990}), illustrating the importance of local studies at infrared (IR) wavelengths. This result was later confirmed up to $z\simeq1$ by \textit{ISO} (\citealt{Pozzi2004}), and by \textit{Spitzer} studies using the MIPS 24$\,\mu$m (\citealt{LeFloch2005}, \citealt{Marleau2007}, \citealt{Rodighiero2010}) and 70$\,\mu$m (\citealt{Frayer2006}, \citealt{Huynh2007}, \citealt{Magnelli2009}, \citealt{Patel2013}) channels. At longer, sub-millimetre wavelengths the balloon-borne telescope BLAST was able to estimate the galaxy LF at low \textit{z} and map its evolution (\citealt{Eales2009}), although with limited statistics and uncertain identification of the sources. Finally, surveys in the radio bands have also been exploited for luminosity function estimates, with the necessity of including large bolometric corrections (\citealt{Condon1989}; \citealt{Serjeant2002}). Interpretations of these fast evolutionary rates are actively debated in the literature, with various processes being claimed as responsible (such as gas fuel consumption in galaxies, heating of the gas so as to prevent cooling and collapse, decreasing rates of galaxy interactions with time, etc.).
Indeed, galaxy evolution codes have often found it difficult to reproduce these data, with the models tending to predict slower evolution than is observed. However, the estimates of the low-redshift luminosity functions of galaxies, and correspondingly of the total star-formation and AGN accretion rates, still contain some significant uncertainties. In particular, due to the moderate volumes sampled at low redshift, an essential pre-requisite for determining the LLFs is the imaging of large fields, for which, however, it is difficult to achieve the required homogeneous multi-wavelength coverage and complete redshift information. In the very local universe, at $z<0.02$, a sample of a few hundred sources from the Early Release Compact Source Catalogue of the \textit{Planck} all-sky survey (Planck Collaboration VII, 2011) has been used by \cite{Negrello2013} to estimate luminosity functions at 350, 500, and 850$\,\mu$m. Although the authors were very careful to account for various potentially problematic factors, namely the photometric calibration within the large \textit{Planck} beam, the removal of Galactic emission and the CO line contribution to the photometry, their estimate might not be completely immune to the effects of large inhomogeneities (like the Virgo cluster) inherent in their very local spatial sampling (see Sec. \ref{discussion} for further details). \cite{Vaccari2010} report a preliminary determination of the local sub-millimetre luminosity functions of galaxies, exploiting the much improved angular resolution and mapping speed of the SPIRE instrument \citep{Griffin2010} on the \textit{Herschel Space Observatory} \citep{Pilbratt2010}.
They used data from the Lockman Hole and Extragalactic First Look Survey (XFLS) fields of the \textit{Herschel} Multi-tiered Extragalactic Survey program (HerMES, \texttt{http://hermes.sussex.ac.uk}, \citealt{Oliver2012}) over about 15 deg$^2$ observed during the \textit{Herschel} Science Demonstration Phase (SDP), and including a few hundred sources to a flux limit of about 40 mJy in all three SPIRE bands (250, 350, 500$\,\mu$m). Their published functions were integrated over a wide redshift interval at $0<z<0.2$. Because of the limited source statistics, \cite{Vaccari2010} could not take into account any evolutionary corrections, while significant evolution is expected to be present over this large redshift bin. Still based on the HerMES database, but using a much larger total area, many more independent sky fields, and deeper fluxes, the present paper reports on a systematic effort to characterise the local and low-redshift luminosity functions of galaxies in the sub-millimetre bins. The \textit{Herschel} survey catalogue has been cross-correlated with existing optical photometry and spectroscopy in the fields, as well as with photometric data in the mid- and far-IR from \textit{Spitzer} (\citealt{Werner2004}). By fitting the source-by-source multi-wavelength photometry with spectral templates, the bolometric IR luminosities and bolometric luminosity functions can also be estimated. Importantly, the much improved statistics allows us to work in narrow redshift bins, so as to disentangle luminosity function shapes from evolution, and to obtain the most robust and complete statistical characterisation over the last few Gyrs of galaxy evolution. By combining this long-wavelength information with similar analyses in the optical-UV, we can determine the local bolometric luminosity density and comoving star-formation rate and their low-$z$ evolution. The paper is structured as follows. 
In Section 2 we describe the multi-wavelength data set that we use, as well as the selection of the samples, the source identification, and the SED fitting. In Section 3 we detail the statistical methods used in our data analysis and the various adopted luminosity function estimators, including the Bayesian parametric recipe that we develop. Our results are reported in Section 4, including the multi-wavelength luminosity functions, the local luminosity densities and the star-formation rates. Our results are then discussed in Section 5 and our main conclusions summarised in Section 6. Throughout the paper we adopt a standard cosmology with $\Omega_\mathrm{M}=0.3$, $\Omega_\Lambda=0.7$ and $H_0=70~\mathrm{km\,s^{-1}\,Mpc^{-1}}$. \section{The HerMES Wide Sample}\label{sample.sec} The Herschel Multi-tiered Extragalactic Survey, or HerMES, is a \textit{Herschel} Guarantee Time (GT) Key Programme (\citealt{Oliver2012}\footnote{\url{http://hermes.sussex.ac.uk/}}) and the largest single project on \textit{Herschel}, for a total of 900 hours of observing time. HerMES was designed to comprise a number of tiers of different depths and areas, and has performed observations with both SPIRE \citep{Griffin2010} and PACS \citep{Poglitsch2010}, surveying approximately 70 deg$^2$ over 12 fields whose sizes range from 0.01 to 20 deg$^2$. To estimate the SPIRE LLF we use HerMES L5/L6 SPIRE observations (see Tab. 1 in \citealt{Oliver2012} for more details on the observations) covering five fields: Lockman Hole (LH), Extragalactic First Look Survey (XFLS), Bootes, ELAIS-N1 (EN1) and XMM-LSS. In the following, these fields and the SPIRE sample arising from them will collectively be referred to as the HerMES Wide Fields and Sample, respectively. These fields are the widest \textit{Herschel} HerMES fields where imaging data are available with both \textit{Spitzer} IRAC and MIPS cameras, thus enabling the detailed study of the full infrared SED of a significant number of sources in the local Universe.
\begin{table} \centering \begin{tabular}{cccc} \hline \textbf{Field} & \textbf{250$\,\mu$m detections} & \textbf{Area [deg$^2$]} & \textbf{Set}\\ \hline\hline LH & 2336 (942/1394) & $~11.29$ & 34 \\ XFLS & 801 (427/374) & $~4.19$ & 40 \\ BOOTES & 1792 (1220/572) & $~9.93$ & 37 \\ EN1 & 693 (246/447) & $~3.91$ & 35 \\ XMM & 1606 (367/1239)& $~9.59$ & 36 \\ Total & 7087 (3195/3892) & $~38.9$ & \\ \hline \end{tabular} \caption[]{Number of $0.02 < z \lsim 0.5$ sources used to estimate the SPIRE LLFs. The number of sources with spectroscopic/photometric redshifts is indicated in brackets after the total number of sources. The 250$\,\mu$m sample is cut at $S_{\mathrm{250}} > 30$~mJy, according to the SPIRE 250$\,\mu$m completeness (see text for details). ``Set'' refers to Tab. 1 in \cite{Oliver2012} and identifies the specific HerMES observing mode in each field.} \label{spire-llf-numbers.tab} \end{table} \subsection{SPIRE source extraction}\label{extraction.sec} Source confusion is the most serious challenge for \textit{Herschel} SPIRE source extraction and identification. In particular, confusion is an important driver in determining the optimal survey depth. By making maximum use of the full spectrum of ancillary data it is possible to limit the confusion problem at the source detection and identification steps. For this reason the choice of HerMES survey fields has been largely driven by the availability of extensive multi-wavelength ancillary data at both optical and infrared wavelengths. In particular, \cite{Roseboom2010} (and \citealt{Roseboom2012}) developed a new method for SPIRE source extraction, hereafter referred to as XID, which improves upon more conventional techniques (e.g., \citealt{Smith2012}; \citealt{Wang2014}) based on existing source extraction algorithms optimised for \textit{Herschel} purposes.
The XID technique makes use of a combination of linear inversion and model selection techniques to produce reliable cross-identification catalogues based on \textit{Spitzer} MIPS 24$\,\mu$m source positions. The tiered nature of HerMES is well matched to the variable quality of the \textit{Spitzer} data, in particular the MIPS 24$\,\mu$m observations. This is confirmed by simulations performed using pre-\textit{Herschel} empirical models (e.g., \citealt{FernandezConde2008}; \citealt{LeBorgne2009}; \citealt{Franceschini2010}), which showed the 250 and 24$\,\mu$m source densities to be comparable at the sensitivities of the respective surveys. Since the HerMES Wide fields are homogeneously covered by the Spitzer Data Fusion\ (described in Sec~\ref{df.sec}), which provides homogeneous MIPS 24$\,\mu$m source lists, the SPIRE flux densities used in this paper are obtained with the XID technique using the Spitzer Data Fusion\ MIPS 24$\,\mu$m positional priors (in other words, the MIPS 24$\,\mu$m positions are used as a prior to guide the SPIRE flux extraction on the SPIRE maps). As reported in \cite{Roseboom2010}, using a prior for the SPIRE source identification based on MIPS 24$\,\mu$m detections could, in principle, introduce an additional incompleteness related to the relative depth of the 24$\,\mu$m catalogues used in the process and to the distribution of intrinsic SED shapes. However, \cite{Roseboom2010} show that incompleteness would affect only the fainter SPIRE sources with the highest 250$\,\mu$m / 24$\,\mu$m flux density ratios, which are very likely to be ultra-red high-redshift objects. We can therefore be confident that for relatively nearby sources the XID catalogues are complete at the relatively bright flux limits used in this work. This relatively complex association procedure is reported in the dedicated papers by \cite{Roseboom2010} and \cite{Roseboom2012}, to which we refer the reader for further details about this method.
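To make the idea behind prior-driven source extraction concrete, the following is a heavily simplified, one-dimensional sketch of linear inversion with positional priors (it is not the actual XID implementation, and the map, PSF and fluxes are invented for illustration): with the source positions fixed by the 24$\,\mu$m priors, the map is modelled as a linear superposition of PSFs and the fluxes are recovered by least squares.

```python
import math

def gaussian_psf(x, x0, sigma=3.0):
    """Toy 1-D Gaussian beam profile."""
    return math.exp(-0.5 * ((x - x0) / sigma) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Noise-free toy "map": three sources, two of them badly blended.
positions, true_fluxes = [20.0, 24.0, 40.0], [30.0, 50.0, 15.0]
npix = 64
data = [sum(f * gaussian_psf(p, x0) for f, x0 in zip(true_fluxes, positions))
        for p in range(npix)]

# Design matrix: one PSF column per prior position; solving the normal
# equations recovers the fluxes, even for the blended pair.
A = [[gaussian_psf(p, x0) for x0 in positions] for p in range(npix)]
AtA = [[sum(A[p][i] * A[p][j] for p in range(npix)) for j in range(3)]
       for i in range(3)]
Atd = [sum(A[p][i] * data[p] for p in range(npix)) for i in range(3)]
recovered = solve(AtA, Atd)
```

The key design point is that the positions are not free parameters: only as many fluxes are solved for as there are priors, which is what keeps the inversion stable in confused maps.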
\subsection{SPIRE sample selection}\label{sam.sec.} To define the sample to be used for our LLF determinations, we use SPIRE flux density estimates obtained using the XID method (\citealt{Roseboom2010} and \citealt{Roseboom2012}), applied to SCAT maps produced by \cite{Smith2012} and using MIPS 24$\,\mu$m positional priors based on the Spitzer Data Fusion\ detailed in Sec.~\ref{df.sec}. The SPIRE 250$\,\mu$m channel is the most sensitive of the three SPIRE bands, and thus we select sources on the basis of a SPIRE 250$\,\mu$m reliability criterion (discussed in \citealt{Roseboom2010}) defined as $\chi^2_{250} < 5$ and $\mathrm{SNRT}_{250} > 4$, where the first quantity is the $\chi^2$ of the source solution in the neighbourhood of a source (7 pixel radius) and the second is the signal-to-noise ratio (SNR) at a given selection wavelength $\lambda$, including confusion, referred to as the ``Total'' SNR$_{\lambda}$ or SNRT$_{\lambda}$. The SPIRE 250$\,\mu$m catalogues of L5/L6 HerMES observations are highly complete and reliable down to approximately 25/30/35$\,$mJy at 250/350/500$\,\mu$m, respectively, as shown in Fig. \ref{wide.comp} (left). In order to combine the data collected in these different fields, we have to ensure uniform completeness in both flux and redshift coverage across fields; thus, owing to some minor differences across the fields, we cut our sample at 30$\,$mJy at 250$\,\mu$m. These minor differences are visible in Fig. \ref{wide.comp} (right), where we compare the SPIRE 250$\,\mu$m number counts estimated for the 5 wide fields and for the COSMOS deep field (the COSMOS sample is from Vaccari et al., in prep.). These discrepancies are consistent with the levels of cosmic variance predicted by theoretical models for fields of this size \citep{Moster2011}, as well as with the slightly different depths of the MIPS 24$\,\mu$m observations available for these fields, which were used to guide HerMES XID source extraction.
In any case, the differences are on the whole small and have major effects only at low flux densities, well below our selected limit. The largest discrepancy is seen in XFLS, where the SPIRE 250$\,\mu$m completeness reflects the slightly brighter flux limit of the XFLS MIPS 24$\,\mu$m and IRAC catalogues, due to a shorter exposure time in comparison with the other fields. \begin{figure*} \begin{center} {\myincludegraphics{width=0.43\textwidth}{./{}/lhs-comp-plot-stilts.pdf}} {\myincludegraphics{width=0.53\textwidth}{./{}/cosmos-spire-250-counts-resolved-vs-wide-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m source counts (right) and completeness (left) based on XID catalogues from \cite{Roseboom2012} for the HerMES Wide Fields sample used to estimate the SPIRE LLFs, compared with COSMOS estimates from Vaccari et al. (in prep.). The black solid line marks the flux limit of our selection.} \label{wide.comp} \end{figure*} \subsection{The \textit{Spitzer Data Fusion}}\label{df.sec} As previously mentioned, the HerMES fields were chosen so as to have the best multi-wavelength data for sky areas of a given size. In particular, the fields used in this work are covered by \textit{Spitzer} 7-band IRAC and MIPS imaging data, which enable not only an improved identification process but also the detailed characterisation of the infrared SEDs of \textit{Herschel} sources. In this work we exploit the \textit{Spitzer Data Fusion} (\citealt{Vaccari2010} and Vaccari et al., in prep., \url{http://www.mattiavaccari.net/df}).
The Spitzer Data Fusion\ combines \textit{Spitzer} mid- and far-infrared data from the Spitzer Wide-area InfraRed Extragalactic (SWIRE, \citealt{Lonsdale2003}) survey in six fields, the Spitzer Deep-Wide Field Survey (SDWFS, PI Daniel Stern, Spitzer PID 40839), the Spitzer Extragalactic First Look Survey (XFLS, PI Tom Soifer, Spitzer PID 26), with photometric data at UV, optical and NIR wavelengths, as well as optical spectroscopy over about 70 deg$^2$ in total. It thus makes full use of public survey data from the GALEX, SDSS, INT WFS, 2MASS, UKIDSS and VISTA projects, as well as further optical imaging obtained by the SWIRE, SDWFS and XFLS teams. It also provides spectroscopic information variously available from SDSS, NASA/IPAC Extragalactic Database (NED \url{http://ned.ipac.caltech.edu}), recent literature and proprietary follow-up programmes. The Spitzer Data Fusion\ thus represents an ideal starting point to perform statistical studies of infrared galaxy populations, such as detailed SED fitting analyses to estimate photometric redshifts and masses, as well as star formation rates (SFRs); an early version of the database has already been used to that effect by \cite{RowanRobinson2013}. It has been used to validate \textit{Herschel} SDP observations within the HerMES consortium team and to produce current and future public HerMES catalogues \footnote{available at \url{http://hedam.oamp.fr/HerMES/}}. Since this paper only uses the Spitzer Data Fusion\ to derive SPIRE local luminosity function estimates, we refer the reader to Vaccari et al. (in prep.) for a complete description of the database and in the following we only summarise its basic properties as they relate to this work. The Spitzer Data Fusion\ is constructed by combining \textit{Spitzer} IRAC and MIPS source lists, as well as ancillary catalogues, following a positional association procedure. 
Source extraction of IRAC 4-band images and of MIPS 24$\,\mu$m images is carried out using SExtractor \citep{Bertin1996}, whereas MIPS 70 and 160$\,\mu$m source extraction is carried out using APEX \citep{Makovoz2005}. Catalogue selection is determined by a reliable IRAC 3.6 or IRAC 4.5$\,\mu$m detection. We then associate MIPS 24$\,\mu$m detections with IRAC detections using a 3 arcsec search radius, while the MIPS 70 and 160$\,\mu$m catalogues are matched against MIPS 24$\,\mu$m positions using search radii of 6 and 12 arcsec, respectively. UV, optical and near-infrared catalogues are then matched against IRAC positions using a 1 arcsec search radius. This multi-step approach increases the completeness and reliability of the longer-wavelength associations, while better pin-pointing MIPS sources using their IRAC positions. The HerMES wide fields used in this work are part of the Spitzer Data Fusion\ and are all covered both by \textit{Spitzer} 7-band infrared imaging and by SDSS 5-band optical imaging and optical spectroscopy \citep{Csabai2007,Abazajian2009, Carliles2010, Bolton2012}. They also benefit from a vast quantity of additional homogeneous multi-wavelength observations and additional spectroscopic redshifts available from NED, as well as from the recent literature and our own \textit{Spitzer}/\textit{Herschel} proprietary follow-up programmes. We thus associate a reliable spectroscopic redshift with each source whenever one is available, and otherwise rely on SDSS photometric redshift estimates based on a KD-tree nearest neighbour search (see \citealt{Csabai2007} for more details). In so doing we follow a commonly adopted reliability criterion for good SDSS photometry, selecting only detections with SDSS \textit{cmodelmag} $r_{\mathrm{AB}} < 22.2$, thus avoiding unreliable photometric redshifts. In Fig. \ref{Lz.wide} we report SDSS $r_{\mathrm{AB}}$ and redshift histograms of the HerMES Wide sample.
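The multi-step positional association described above can be sketched as follows (a minimal nearest-neighbour match with a per-step search radius; the coordinates are invented, and the small-angle separation formula is adequate only at arcsec scales):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcsec between two (RA, Dec) pairs in deg."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return 3600.0 * math.hypot(dra, ddec)

def nearest_within(source, catalogue, radius_arcsec):
    """Return the nearest catalogue counterpart within the radius, or None."""
    best, best_sep = None, radius_arcsec
    for cand in catalogue:
        sep = ang_sep_arcsec(source["ra"], source["dec"],
                             cand["ra"], cand["dec"])
        if sep <= best_sep:
            best, best_sep = cand, sep
    return best

# Invented positions (degrees): one IRAC source and one MIPS 24 um source
# offset by roughly 2 arcsec, i.e. inside the 3 arcsec association radius.
irac = [{"id": "I1", "ra": 150.00000, "dec": 2.00000}]
mips24 = [{"id": "M1", "ra": 150.00050, "dec": 2.00020}]
match = nearest_within(irac[0], mips24, radius_arcsec=3.0)
```

Chaining such matches step by step (24 $\mu$m to IRAC at 3 arcsec, 70/160 $\mu$m to 24 $\mu$m at 6/12 arcsec, ancillary catalogues to IRAC at 1 arcsec) reproduces the logic, if not the scale, of the Data Fusion association.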
In order to avoid redshift incompleteness effects, we limit our HerMES Wide sample to $z \lsim 0.5$, below the completeness and reliability limit of SDSS redshift estimates. Moreover, to avoid the possible redshift incompleteness that affects the very bright and nearby galaxies in SDSS data, we impose a lower redshift limit of $z = 0.02$, as suggested by e.g. \cite{MonteroPrada2009}. As discussed in \cite{Roseboom2010}, the SPIRE source extraction works very well for point-like sources but can underestimate the fluxes of extended sources; cutting the sample at $z > 0.02$ also avoids this problem, since the vast majority of extended sources are located at lower redshifts. The numbers of sources in the HerMES Wide sample are detailed in Tab. \ref{spire-llf-numbers.tab}. \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-L250-z.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-Lbol-z.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-rhist-0005.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-zhist-0005.pdf}} \end{center} \caption[]{\textbf{Top}: SPIRE 250$\,\mu$m (expressed as $\nu L_\nu$) and IR bolometric luminosity versus redshift; \textbf{Bottom}: SDSS $r_{\rm AB}$ (left) and redshift (right) histograms for the HerMES Wide Fields sample used to estimate the SPIRE LLFs. The $L-z$ plots are colour-coded according to the SED best fit class obtained by the SED fitting procedure following the list reported in Sec. \ref{sedfit.sec}.
The histograms report the relative quantities for the photometric and spectroscopic samples in blue and in green, respectively, with the total sample illustrated in red.} \label{Lz.wide} \end{figure*} \subsection{SED fitting} \label{sedfit.sec} Thanks to the Spitzer Data Fusion\ we are able to perform a multi-wavelength SED fitting analysis of our HerMES Wide Fields sample and thus estimate the IR bolometric ($8-1000\,\mu$m) and monochromatic rest-frame luminosities, together with the associated k-corrections. We perform the SED fitting analysis using \texttt{Le Phare} (\citealt{Arnouts99} and \citealt{Ilbert2006}). To perform the fit we use the SDSS $ugriz$, 2MASS $JHK_\mathrm{s}$, IRAC-3.6, IRAC-4.5, IRAC-5.8, IRAC-8.0, MIPS-24, MIPS-70, MIPS-160, SPIRE-250, SPIRE-350 and SPIRE-500 flux densities, which are available over the whole area covered by our sample. As template SEDs we use two different sets of empirical templates according to the range of wavelengths we are fitting: in the optical-MIR range (up to 7$\,\mu$m rest-frame) we use the same templates and extinction laws exploited by the COSMOS team to estimate the COSMOS photometric redshifts, as in \cite{Ilbert2009}, while to fit the IR/submm range (from 7$\,\mu$m rest-frame upwards) we use the SWIRE templates of \cite{Polletta2007} and their slightly modified version described in \cite{Gruppioni2010}, for a total of 32 and 31 SEDs, respectively; these include Elliptical, Spiral, AGN, Irregular and Starburst spectral types, as summarised in Tab. \ref{sed.list}. Two typical examples of our SED fitting results are shown in Fig. \ref{Lephare-fit}. Splitting the overall wavelength coverage into two provides us with a particularly good fit to the FIR bump and a reasonably good fit at all other wavelengths for all sources, with a mean value of the reduced $\chi^2$ of around 0.5. Fig.
\ref{Lz.wide} (upper panels) shows the $L-z$ distribution of both the $L_{250}$ and $L_{\mathrm{IR}}$ rest-frame luminosities obtained through the SED fitting procedure. Thanks to this multi-wavelength SED fitting we are also able to investigate the relation between monochromatic rest-frame luminosities at different wavelengths. As an example, we report in Fig. \ref{sed.fit.plus} a comparison between the SPIRE 250$\,\mu$m and PACS 100$\,\mu$m monochromatic rest-frame luminosities plotted against the IR bolometric luminosity. Historically, the monochromatic rest-frame luminosity at $60-100\,\mu$m has been considered a good indicator of the IR bolometric luminosity, due to a strong correlation between the two (e.g., \citealt{Patel2013} used the relation between MIPS 70$\,\mu$m and $L_{\mathrm{IR}}$). Our SED fitting results, shown in Fig. \ref{sed.fit.plus}, confirm this trend; on the other hand, the SPIRE 250$\,\mu$m luminosity does not show a strong correlation with the IR bolometric luminosity and thus cannot be used as a reliable indicator of the total IR emission of a galaxy. As also confirmed by other HerMES works that have carefully studied the SED shape of the HerMES sources (e.g. \citealt{Symeonidis2013}), we find that the FIR SEDs of our local HerMES sample peak close to the PACS 100$\,\mu$m band, and thus the monochromatic luminosity at this wavelength best traces the total IR bolometric luminosity integrated between 8 and 1000$\,\mu$m. It is also interesting to note the very different behaviour of the k-corrections estimated at SPIRE 250$\,\mu$m and PACS 100$\,\mu$m (bottom panels of Fig. \ref{sed.fit.plus}). The differences between these two are remarkable, and this is reflected in the different behaviour of the resulting luminosities.
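The qualitative contrast between the 250$\,\mu$m and 100$\,\mu$m k-corrections can be reproduced with a toy modified-blackbody SED (the temperature $T=25\,$K and emissivity index $\beta=1.8$ below are illustrative assumptions, not fitted values from our sample; the flux ratio computed is only the core of one common k-correction convention):

```python
import math

H_PLANCK, K_BOLTZ, C_LIGHT = 6.626e-34, 1.381e-23, 2.998e8  # SI units

def greybody(nu, temp=25.0, beta=1.8):
    """Modified blackbody S_nu ∝ nu^beta * B_nu(T), arbitrary normalisation."""
    x = H_PLANCK * nu / (K_BOLTZ * temp)
    return nu ** (beta + 3.0) / (math.exp(x) - 1.0)

def flux_ratio(lam_obs_um, z):
    """Rest-frame (emitted) over observed-frame flux density for a source
    observed at wavelength lam_obs_um and redshift z."""
    nu_obs = C_LIGHT / (lam_obs_um * 1e-6)
    return greybody(nu_obs * (1.0 + z)) / greybody(nu_obs)

zs = (0.1, 0.3, 0.5)
k250 = [flux_ratio(250.0, z) for z in zs]
k100 = [flux_ratio(100.0, z) for z in zs]
```

Observed at 250$\,\mu$m, on the Rayleigh-Jeans side of the SED, the rest-frame flux density climbs steeply with redshift, whereas near the SED peak sampled by the 100$\,\mu$m band the ratio behaves very differently, which is the qualitative effect seen in the two k-correction panels.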
While a detailed physical analysis of our sample is beyond the scope of this paper, we did exploit our SED fitting analysis and the IRAC colour-colour criteria of \cite{Lacy2004} and \cite{Donley2012} to search for any potential AGN contamination in our sample. On the whole, the vast majority of our sources show galaxy- or starburst-like best fit SEDs, with less than 10\,\% of the sample being best fit by AGN-like SEDs (SED classes between 17 and 25 and between 28 and 31, as reported in Tab. \ref{sed.list}). These numbers do not change significantly even if we fit a single SED template to the whole range of available photometry (from optical to SPIRE bands). Fig. \ref{AGN.colours} confirms that our objects mostly lie within the starburst-dominated region of the IRAC colour-colour plot, with only a small fraction of the sources (mainly located at $z>0.25$) sitting in the area usually occupied by AGN-like objects. On the whole we find that about 20\% of our sources sit in the AGN region identified by \cite{Lacy2004}, with less than 6\% at $z\leq0.2$ and about 30\% at $0.2<z\leq0.5$. These fractions change significantly if we apply the selection reported in \cite{Donley2012}, which is better able to discriminate bona-fide AGNs from samples that are contaminated by low- and high-redshift star-forming galaxies, such as those selected by Lacy's criterion. We find that only 3\% of our total sample is identified as AGN-dominated by Donley's criterion: less than 2\% at $z\leq0.2$ and 4\% at $0.2<z\leq0.5$.
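For reference, a sketch of the \cite{Donley2012} IRAC power-law wedge as we quote it here (the thresholds should be checked against the published paper before any scientific use, and the flux densities in the example are invented):

```python
import math

def donley_agn(f36, f45, f58, f80):
    """IRAC power-law AGN wedge of Donley et al. (2012), as quoted here;
    verify the thresholds against the published paper. Flux densities are
    in arbitrary but common units."""
    if min(f36, f45, f58, f80) <= 0:
        return False
    x = math.log10(f58 / f36)
    y = math.log10(f80 / f45)
    in_wedge = (x >= 0.08 and y >= 0.15 and
                y >= 1.21 * x - 0.27 and y <= 1.21 * x + 0.27)
    rising = f45 > f36 and f58 > f45 and f80 > f58  # monotonically red SED
    return in_wedge and rising

# Invented flux densities: a red power-law SED versus a stellar-bump galaxy.
agn_like = donley_agn(100.0, 180.0, 320.0, 580.0)
galaxy_like = donley_agn(100.0, 70.0, 50.0, 90.0)
```

The monotonic-rise condition is what makes this wedge so much stricter than the broader Lacy region, and is why the AGN fractions quoted above drop from $\sim$20\% to $\sim$3\% when switching criteria.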
\begin{table} \centering \begin{tabular}{cccc} \hline \textbf{Index} & \textbf{SED class} & \textbf{Spectral type} & \textbf{Reference}\\ \hline\hline 01 & Ell13 & Elliptical & Polletta+07 \\ 02 & Ell5& Elliptical & Polletta+07 \\ 03 & Ell2& Elliptical & Polletta+07 \\ 04 & S0& Spiral & Polletta+07 \\ 05 & Sa& Spiral & Polletta+07 \\ 06 & Sb& Spiral & Polletta+07 \\ 07 & Sc& Spiral & Polletta+07 \\ 08 & Sd& Spiral & Polletta+07 \\ 09 & Sdm& Spiral & Polletta+07 \\ 10 & Spi4& Spiral & Polletta+07 \\ 11 & N6090& Starburst & Polletta+07 \\ 12 & M82& Starburst & Polletta+07 \\ 13 & Arp220& Starburst & Polletta+07 \\ 14 & I20551& Starburst & Polletta+07 \\ 15 & I22491& Starburst & Polletta+07 \\ 16 & N6240& Starburst & Polletta+07 \\ 17 & Sey2& Obscured AGN & Polletta+07 \\ 18 & Sey18& Obscured AGN & Polletta+07 \\ 19 & I19254& Obscured AGN & Polletta+07 \\ 20 & QSO2& Unobscured AGN & Polletta+07 \\ 21 & Torus& Unobscured AGN & Polletta+07 \\ 22 & Mrk231& Obscured AGN & Polletta+07 \\ 23 & QSO1& Unobscured AGN & Polletta+07 \\ 24 & BQSO1& Unobscured AGN & Polletta+07 \\ 25 & TQSO1& Unobscured AGN & Polletta+07 \\ 26 & Sb& Spiral & Gruppioni+10 \\ 27 & Sdm& Spiral & Gruppioni+10 \\ 28 & Sey2& Obscured AGN & Gruppioni+10 \\ 29 & Sey18& Obscured AGN & Gruppioni+10 \\ 30 & Mrk231& Obscured AGN & Gruppioni+10 \\ 31 & qso\_high& Unobscured AGN & Gruppioni+10 \\ \hline \end{tabular} \caption[List of the SEDs used to perform the SED fitting analysis in the IR/submm regime]{List of the SEDs used to perform the SED fitting analysis in the IR/submm. 
The `Spectral type' column shows the grouping we implemented in order to gather together those SED classes with similar properties in terms of FIR colours.} \label{sed.list} \end{table} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/Id000037489.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/Id000013188.pdf}} \end{center} \caption[Typical \texttt{Le Phare} best fit results]{Typical \texttt{Le Phare} SED fits. The two best-fit SEDs used to fit the short- and long-wavelength photometry are shown by the red and magenta solid lines, respectively. The black solid circles are the photometric data used to perform the fit. The ID and the redshift of the source are reported on top of each panel.} \label{Lephare-fit} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-Lbol-L250.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-Lbol-L100.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-Lbol-L250-perclass.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-Lbol-L100-perclass.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-k250.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-k100.pdf}} \end{center} \caption[]{\textbf{Top}: relation between the rest-frame SPIRE 250$\,\mu$m or PACS 100$\,\mu$m luminosities and the IR bolometric luminosity, colour-coded according to redshift. \textbf{Middle}: relations between the rest-frame SPIRE 250$\,\mu$m/PACS 100$\,\mu$m luminosities and the IR bolometric luminosity, colour-coded according to the SED best fit class obtained by the SED fitting procedure following the list reported in Tab.
\ref{sed.list}; \textbf{Bottom}: SPIRE 250$\,\mu$m and PACS 100$\,\mu$m k-corrections as a function of redshift, colour-coded according to the SED best fit class.} \label{sed.fit.plus} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-AGN-Lacy04-Donley12-zs02.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-AGN-Lacy04-Donley12-02-05.pdf}} \end{center} \caption[]{IRAC colour-colour plots following \cite{Lacy2004} and \cite{Donley2012}. The left and right panels show the complete samples of sources in the redshift ranges $0.02<z\leq0.2$ and $0.2<z\leq0.5$, respectively. In both panels the overplotted solid green circles and open blue circles show the AGN-like objects selected using the \cite{Lacy2004} and \cite{Donley2012} criteria, respectively, while the solid red circles represent the rest of the sample in each redshift bin.} \label{AGN.colours} \end{figure*} \section{Statistical Methods} \label{stat.met.} Accurately estimating the luminosity function (LF) is difficult in observational cosmology, since observational selection effects such as flux detection thresholds can make any given galaxy survey incomplete and thus introduce biases into the LF estimate. Numerous statistical approaches have been developed to overcome this limit, but, even though they all have advantages, it is only by comparing different and complementary methods that we can be confident about the reliability of our results. For this reason, to estimate the local luminosity functions (LLFs) in the SPIRE bands reported in this paper we exploit different LF estimators: the $1/V_{\rm max}$ approach of \cite{Schmidt1968} and the modified version $\phi_{\rm est}$ of \cite{PageCarrera2000}; the Bayesian parametric maximum likelihood method (ML) of \cite{Kelly2008} and \cite{Patel2013}; and the semi-parametric approach of \cite{Schafer2007}. All these methods are explained in the following sections.
\subsection{$1/V_{\rm max}$ Estimator}\label{Vmax} \cite{Schmidt1968} introduced the intuitive and powerful $1/V_{\rm max}$ estimator for LF evaluation. The quantity $V_{\rm max}$ for each object represents the maximum volume of space available to that object while still being included in the sample, accounting for the survey flux limits and the redshift bin in which the LF is estimated. $V_{\rm max}$ thus depends on the distribution of the objects in space and on the way in which detectability depends on distance. Once $V_{\rm max}$ (or $V_{\rm max}(L_{i})$, since it depends on the luminosity of each object) is defined, the LF can be estimated as \begin{equation} \Phi(B_{j-1}< L \leqslant B_{j})=\sum_{B_{j-1}< L \leqslant B_{j}}\frac{1}{V_{\rm max}(L_{i})}, \end{equation} in which its value is computed in bins of luminosity, within the boundary luminosity values of a given bin $[B_{j-1}, B_{j}]$. It is usually expressed in the differential form \begin{equation} \label{diff.vmax} \phi_{1/V_{\max}}(L,z) = \frac{1}{\Delta L}\sum_{i = 1}^{N} \frac{1}{V_{\max,i}}, \end{equation} where $N$ is the number of objects within some volume-luminosity region. Errors in the LF can be evaluated using Poisson statistics: \begin{equation}\label{err.vmax} \sigma_{\phi(L)}^2=\sum_{B_{j-1}< L \leqslant B_{j}}\frac{1}{(V_{\max}(L_{i}))^2}. \end{equation} In our case there are three main selection factors that may constrain the $V_{\max}$ of each object in our sample: the limit in $r$ magnitude that guides the photometric redshift estimates in the SDSS survey, $r_{AB} < 22.2$; the MIPS 24$\,\mu$m flux limit that guides the SPIRE 250$\,\mu$m extraction, $S_{24} > 300$ $\mu$Jy; and finally the flux density limit in the SPIRE 250$\,\mu$m band, $S_{250} > 30$~mJy. Moreover, since we estimate the $1/V_{\max}$ LF in a number of redshift bins, the $V_{\max}$ value is also limited by the $z_{\min}$ and $z_{\max}$ of each $z$-bin.
Taking into account all these considerations, the $V_{\max}$ estimator used in Eq. \ref{diff.vmax} is described by: \begin{equation}\label{vmax.perbin} V_{\max}=\frac{\Omega}{4\pi}\int_{z_{\min}}^{z_{\max}} \mathrm{d}z\frac{\mathrm{d}V}{\mathrm{d}z}, \end{equation} where $z_{\min}$ and $z_{\max}$ are the redshift boundaries resulting from taking into account both the redshift bin range and the selection factors \begin{eqnarray}\label{zmax.perbin.1} z_{k,\min} &=& z_{\mathrm{bin}_k,\min} \\ z_{k,\max} &=& \mathrm{min}[z_{0,\max},.., z_{n,\max},z_{\mathrm{bin}_k,\max}] \end{eqnarray} for all $0,...,n$ selection factors and for each $k$ redshift bin. For instance, in the case of the SPIRE 250$\,\mu$m luminosity function estimate in $z$-bin $0.02<z<0.1$ the conditions just shown become \begin{eqnarray}\label{zmax.perbin.2} z_{0.02<z<0.1,\min} &=& 0.02 \nonumber \\ z_{0.02<z<0.1,\max} &=& \mathrm{min}[z_{r_{AB},\max},z_{f24,\max},z_{f250,\max}, 0.1], \nonumber \end{eqnarray} where $z_{r_{AB},\max}$, $z_{f24,\max}$ and $z_{f250,\max}$ are the redshifts at which a source in the sample reaches the SDSS $r_{AB}$ magnitude limit ($=22.2$), the 24$\,\mu$m flux limit ($=300$ $\mu$Jy) and the SPIRE 250$\,\mu$m limit ($=30$ mJy), respectively; 0.02 and 0.1 are the minimum and the maximum of the redshift bin. This method involves binning the luminosity data; it is a non-parametric technique and as such does not need to assume an analytic form. It does, however, contain the underlying assumption that galaxies have a uniform distribution in space. In principle this could be tested with the $V/V_{\max}$ distribution, but that still remains difficult when there are multiple selection factors limiting the sample. The simple $V_{\max}$ estimator has evolved, being improved and refined over the years to accommodate the many different types of survey that have steadily grown in size and complexity.
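Eqs. \ref{vmax.perbin}--\ref{zmax.perbin.2} translate almost directly into code. In the sketch below the comoving volume element $\mathrm{d}V/\mathrm{d}z$ is deliberately left as a generic callable (in practice it follows from the adopted cosmology), and the limiting redshifts of each selection factor are assumed to be precomputed for the source at hand.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule, kept explicit for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def z_max_in_bin(z_bin_max, z_limits):
    """Eq. (zmax.perbin.1): the upper integration limit for one source is the
    minimum of the bin upper edge and the redshifts z_0,max ... z_n,max at
    which the source would drop below each selection limit
    (here: the r_AB, S_24 and S_250 cuts)."""
    return min([z_bin_max] + list(z_limits))

def v_max(omega_sr, z_min, z_max, dV_dz, n=1000):
    """Eq. (vmax.perbin): V_max = (Omega / 4 pi) * integral of dV/dz.
    omega_sr is the survey solid angle in steradians; dV_dz is a callable
    returning the all-sky comoving volume element (e.g. in Mpc^3)."""
    z = np.linspace(z_min, z_max, n)
    return omega_sr / (4.0 * np.pi) * trapezoid(dV_dz(z), z)
```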
One of these approaches is the one implemented in \cite{PageCarrera2000}, the so-called $\phi_{\rm est}$ method, which we also use here to check whether our $1/V_{\max}$ estimates ignore any important incompleteness factor in our sample. \cite{PageCarrera2000} improved the method to take into account the systematic errors that the $V_{\max}$ test introduces for objects close to the flux limit of a survey. This new method defines the value of the luminosity function $\phi(L)$ as $\phi_{\rm est}$, which assumes that $\phi$ does not change significantly over the luminosity and redshift intervals $\Delta L$ and $\Delta z$, respectively, and is defined as \begin{equation} \label{phiest} \phi_{\rm est} = \frac{N}{{\int\limits_{L_{\min } }^{L_{\max } } \int\limits_{z_{\min } }^{z_{\max } (L)} {\frac{dV}{dz}dzdL} }}, \end{equation} where $N$ is the number of objects within some volume-luminosity region. In practice the two estimators produce the same results in most redshift intervals, particularly for the highest luminosity bins of any given redshift bin. However, for the lowest luminosity objects in each redshift bin, which are close to the survey limit and occupy a portion of volume-luminosity space much smaller than the rectangular $\Delta L$ $\Delta z$ region, the two methods can produce the most discrepant results. Nevertheless, in our case we do not find any substantial differences between the $1/V_{\max}$ and $\phi_{\rm est}$ solutions, as shown in the following sections. \subsection{Bayesian parametric maximum likelihood estimator}\label{bayes.MLE} The maximum likelihood estimator was first applied in studies of observational cosmology by \cite{STY1979}, in the so-called STY estimator. In maximum likelihood analysis, one is interested in finding the estimate that maximises the likelihood function of the data.
For a given statistical model, parameterised by $\theta$, the likelihood function, $p(x|\theta)$, is the probability of observing the data, denoted by $x$, as a function of the parameters $\theta$. In Bayesian analysis, one attempts to estimate the probability distribution of the model parameters, $\theta$, given the observed data $x$. Bayes' theorem states that the probability distribution of $\theta$ given $x$ is related to the likelihood function as \begin{equation} p(\theta|x) \propto p(x|\theta)p(\theta), \end{equation} where $p(x|\theta)$ is the likelihood function of the data, and the term $p(\theta)$ is the prior probability distribution of $\theta$; the result, $p(\theta|x)$, is called the posterior distribution. The prior distribution, $p(\theta)$, should convey information known prior to the analysis. In general, the prior distribution should be constructed to ensure that the posterior distribution integrates to 1, but does not have a significant effect on the posterior. In particular, the posterior distribution should not be very sensitive to the choice of prior distribution, unless the prior distribution is constructed with the purpose of placing constraints on the posterior distribution that are not conveyed by the data. The contribution of the prior to $p(\theta|x)$ should become negligible as the sample size becomes large. From a practical standpoint, the primary difference between the maximum likelihood approach and the Bayesian approach is that the former is concerned with calculating a point estimate of $\theta$, while the latter is concerned with mapping out the probability distribution of $\theta$ in the parameter space. The maximum likelihood approach uses an estimate of the sampling distribution of $\theta$ to place constraints on the true value of $\theta$. In contrast, the Bayesian approach directly calculates the probability distribution of $\theta$, given the observed data, to place constraints on the true value of $\theta$.
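As a toy numerical illustration of Bayes' theorem at work (with a simple Gaussian likelihood standing in for the LF likelihood and a flat prior; the setup and names are purely illustrative), the posterior can be mapped on a parameter grid and normalised so that it integrates to one:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule, kept explicit for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def grid_posterior(x, theta_grid, log_like, log_prior):
    """Posterior p(theta|x) on a 1-D grid via Bayes' theorem:
    posterior proportional to likelihood times prior, then normalised
    so that it integrates to one over the grid."""
    log_post = np.array([log_like(x, t) + log_prior(t) for t in theta_grid])
    post = np.exp(log_post - log_post.max())   # stabilise before exponentiating
    return post / trapezoid(post, theta_grid)
```

For a Gaussian likelihood with a flat prior the posterior peaks, as expected, at the sample mean; with a very informative prior it would instead be pulled towards the prior's centre, which is exactly the sensitivity discussed above.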
In terms of LF evaluation, the LF estimate is related to the probability density of $(L,z)$ \begin{equation} \label{eq.p} p(L,z) = \frac{1}{N} \phi(L,z)\frac{\mathrm{d}V}{\mathrm{d}z}, \end{equation} where $N$ is the total number of sources in the observable Universe and is given by the integral of $\phi$ over $L$ and $V(z)$. The quantity $p(L,z)$d$L$d$z$ is the probability of finding a source in the range $L,L+$d$L$ and $z, z+$d$z$. Eq. \ref{eq.p} separates the LF into its shape, given by $p(L,z)$, and its normalisation, given by $N$. Once we have an estimate of $p(L,z)$, we can easily convert this to an estimate of $\phi(L,z)$ using Eq. \ref{eq.p}. In general it is easier to work with the probability distribution of $L$ and $z$ instead of directly with the LF, because $p(L,z)$ is more directly related to the likelihood function. The function $\phi(L,z)$ can be described, as we have seen, by a parametric form with parameter $\theta$, so that we can derive the likelihood function for the observed data. The presence of flux limits and various other selection effects can make this difficult, since the observed data likelihood function is not simply given by Eq. \ref{eq.p}. In this case, the set of luminosities and redshifts observed by a survey gives a biased estimate of the true underlying distribution, since only those sources with $L$ above the flux limit at a given $z$ are detected. In order to derive the observed data likelihood function, it is necessary to take the survey's selection method into account. This is done by first deriving the joint likelihood function of both the observed and unobserved data, and then integrating out the unobserved data. 
The probability $p(L,z)$ (as reported in \citealt{Patel2013}) then becomes \begin{equation} p(L,z|\theta) = \frac{\phi(L,z|\theta)p(\rm selected|L,z)}{\lambda} \frac{\mathrm{d}V}{\mathrm{d}z}, \end{equation} where $p(\rm selected|L,z)$ stands for the probability connected with the selection factors of the survey and $\lambda$ is the expected number of sources, determined by \begin{equation} \lambda =\iint \phi(L,z|\theta)p(\rm selected|L,z)\mathrm{d} \mathrm{log} L\frac{\mathrm{d}V}{\mathrm{d}z}\mathrm{d}z, \label{lambda.patel} \end{equation} where the integrals are taken over all possible values of redshift and luminosity. This last equation gives the expected number of objects in a sample composed of sources of the same morphological type and collected in a single survey field. For our purposes we have to change the equation to the following: \begin{equation} \lambda = \sum_{\rm SED}\sum_{\rm fields} \iint\Phi(L,z|\theta)p(\rm selected|L,z)\mathrm{d} \mathrm{log} L\frac{\mathrm{d}V}{\mathrm{d}z}\mathrm{d}z , \label{lambda.sed.field} \end{equation} where we sum the expected number of sources over each SED type used in the SED fitting procedure and over the survey areas that compose our HerMES Wide Fields sample. Since the data points are independent, the likelihood function for all $N$ sources in the Universe would be \begin{equation} p(L,z|\theta) = \prod_{i=1}^N p(L_i,z_i| \theta). \end{equation} However, we do not know the luminosities and redshifts for all $N$ sources, nor do we know the value of $N$, since our survey only covers a fraction of the sky and is subject to various selection criteria. As a result, our survey only contains $n$ sources. For this reason the selection process must also be included in the probability model, and the total number of sources, $N$, is an additional parameter that needs to be estimated.
The likelihood then becomes: \begin{equation} p(N,\{L_i,z_i\}|\theta) = p(N| \theta)p(\{L_i,z_i\} | \theta), \end{equation} where $p(N|\theta)$ is the probability of observing $N$ objects and $p(\{L_i,z_i\} | \theta)$ is the likelihood of observing a set of $L_i$ and $z_i$, both given the model LF. It is possible to assume that the number of sources detected follows a Poisson distribution \citep{Patel2013}, where the mean number of detectable sources is given by $\lambda$. Then, since each data point is independent, the term $p(N,\{L_i,z_i\}|\theta)$ can be written as the product of the individual source likelihood functions: \begin{eqnarray} &p(N| \theta)p(\{L_i,z_i\}| \theta) = & \\ & = \frac{\lambda^N e^{-\lambda}}{N!}\displaystyle\prod_{i=1}^N \frac{\Phi(L,z|\{\theta\})p(\rm{selected}|L,z)}{\lambda}\frac{\mathrm{d}V}{\mathrm{d}z} \nonumber . & \end{eqnarray} We can then use the likelihood function for the LF to perform Bayesian inference by combining it with a prior probability distribution, $p(\theta)$, to compute the posterior probability distribution, $p(\theta | \{d_i\})$, given by Bayes' theorem: \begin{equation} p(\theta|\{d_i\}) = \frac{p(\{d_i\}|\{\theta\})p(\{\theta\})}{\displaystyle\int p(\{d_i\}|\{\theta\})p(\{\theta\})\mathrm{d}\theta}. \end{equation} The denominator of this equation represents the Bayesian evidence, which is determined by integrating the likelihood over the prior parameter space. This last step is needed to normalise the posterior distribution. Calculating the Bayesian evidence is computationally expensive, since it involves integration over $m$ dimensions for an $m$-parameter LF model. Therefore, Markov Chain Monte Carlo (MCMC) methods, used to examine the posterior probability, perform a random walk through the parameter space to obtain random samples from the posterior distribution. MCMC yields the maximum of the likelihood, but an algorithm is needed to explore the region around the maximum in practice.
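A minimal sketch of this machinery is given below; the LF model, selection function, volume element and expected counts $\lambda$ are generic placeholder callables, not our actual pipeline. The first function is the log of the Poisson likelihood above (dropping the $\theta$-independent $\ln N!$ term); the second performs a simple random-walk exploration of the resulting posterior.

```python
import numpy as np

def log_likelihood(theta, L, z, phi, p_sel, dV_dz, expected_lambda):
    """Log of the Poisson likelihood, up to the theta-independent ln N! term:
    -lambda + sum_i ln[ phi(L_i,z_i|theta) p(sel|L_i,z_i) dV/dz ].
    phi, p_sel, dV_dz and expected_lambda (Eq. lambda.patel) are
    placeholder callables supplied by the user."""
    lam = expected_lambda(theta)
    terms = np.log(phi(L, z, theta)) + np.log(p_sel(L, z)) + np.log(dV_dz(z))
    return -lam + terms.sum()

def random_walk_mcmc(log_post, theta0, step, n_iter=5000, seed=0):
    """Random-walk sampler of the posterior: propose a Gaussian jump around
    the current state and accept it with probability
    min(1, post_new / post_old); 'step' is the tunable proposal width."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # acceptance rule
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain
```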
\cite{Kelly2008} suggested using the Metropolis-Hastings algorithm (MHA; \citealt{Metropolis1953,Hastings1970}), in which a proposal distribution is used to guide the variation of the parameters. The algorithm uses a proposal distribution which depends on the current state to generate a new proposal sample. The algorithm needs to be tuned according to the results, in terms of both the number of iterations and the parameter step size. Once we obtain the posterior distribution, we have the best solution for each of the parameters describing the LF model chosen at the beginning, i.e. the mean value and the standard deviation ($\sigma$) of each of the parameters, which we can combine to find the $\sigma$ of the parametric function chosen as the shape of our LF (see Sec. \ref{results} for further details on our calculation). \subsection{A \textit{Semi-Parametric} Estimator} \label{schafer.sec} \cite{Schafer2007} introduced the \textit{semi-parametric} method in order to estimate luminosity functions given redshift and luminosity measurements from an inhomogeneously selected sample of objects (e.g. a flux-limited sample). In such a sample, like ours, only objects with fluxes within some range are observable. When this bound on fluxes is transformed into a bound in luminosity, the truncation limits take an irregular shape as a function of redshift; additionally, the k-correction can further complicate this boundary. We refer the reader to the original paper, \cite{Schafer2007}, for a complete description of the method; here we summarise only its main characteristics.
This method has various advantages in comparison with the other techniques previously described: it does not assume a strict parametric form for the LF (unlike the parametric MLE); it does not assume independence between redshifts and luminosities; it does not require the data to be split into arbitrary bins (unlike the non-parametric MLE); and it naturally incorporates a varying selection function. This is obtained by writing the luminosity function $\phi(z,L)$ as \begin{equation}\label{lf.schafer} \mathrm{log}\phi (z,L) = f(z) + g(L) + h(z,L,\theta), \end{equation} where $h(z,L,\theta)$ assumes a parametric form and is introduced to model the dependence between the redshift $z$, the luminosity $L$ and the real-valued parameter $\theta$. The functions $f$ and $g$ are estimated in a completely free-form way. Nevertheless, it is important to note that this method assumes a \textit{complete} data-set in the un-truncated region, which requires some care when applying it to samples that may suffer from incompleteness. Discussion of how this issue may influence our results is reported in the later sections. \subsection{Parametrising the Luminosity Function} \label{analy.form} Using the classical maximum likelihood technique (STY), as well as the one based on Bayesian statistics, implies the assumption of a parametric form able to describe the LF. This choice is not straightforward and over the years the selected LF models have varied. In this work we decided to use the \textit{Log Gaussian Function} introduced by \cite{Saunders1990} to fit the \textit{IRAS} IR LF and widely used for IR LF estimates (e.g. \citealt{Gruppioni2010}, \citealt{Gruppioni2013}, \citealt{Patel2013}). This function is usually called the \textit{modified Schechter function} since its formalism is very similar to the one introduced by \cite{Schechter1976}.
This parametric function is defined as \begin{equation}\label{log.gaus} \Phi(L)=\Phi^* \left(\frac{L}{L^{*}}\right)^{1-\alpha}\exp\left[-\frac{1}{2\sigma^{2}}\log^{2}\left(1+\frac{L}{L^{*}}\right)\right], \end{equation} where $\Phi^*$ is a normalisation factor defining the overall density of galaxies, usually quoted in units of $h^3$Mpc$^{-3}$, and $L^*$ is the characteristic luminosity. The parameter $\alpha$ defines the faint-end slope of the LF and is typically negative, implying relatively large numbers of galaxies with faint luminosities. We also checked whether another functional form was more suitable to describe our LFs, but we did not find any evidence of improvement or substantial differences by using e.g. a \textit{double power law} function (used by \citealt{Rush1993} or \citealt{Franceschini2001}). We therefore decided to report and discuss only the estimates obtained by using the \textit{Log Gaussian Function}, in order to be able to compare our results with other more recent results that use the same parametrisation. This approach is well suited to describing the total galaxy population, but may be inadequate if we divide the population into sub-groups according, for example, to their optical properties (see Sec. \ref{discussion} for more details), as done by other authors while studying the behaviour of the local mass functions of galaxies (e.g. \citealt{Baldry2012}). \section{Results} \label{results} We estimate the LFs at SPIRE 250$\,\mu$m as well as at SPIRE 350 and 500$\,\mu$m by using the SPIRE 250$\,\mu$m selected sample and extrapolating the luminosities from the SED fitting results. The higher sensitivity of the SPIRE 250$\,\mu$m channel with respect to the 350 and 500$\,\mu$m channels largely ensures that we do not miss sources detected only at these longer wavelengths.
Additionally we estimate the IR bolometric luminosity functions using the integrated luminosity between 8 and 1000$\,\mu$m and at 24, 70, 100 and 160$\,\mu$m; these last monochromatic estimates are also used to check our procedure against other published LFs. As a summary, in Tab. \ref{llf-values.tab} we report our $1/V_{\rm max}$ luminosity function values for each SPIRE band and the IR bolometric rest-frame luminosity per redshift bin. We exclude from the calculation the sources with $z<0.02$, as explained in Sec. \ref{sam.sec.}. The error associated with each value of $\Phi$ is estimated following Poissonian statistics, as shown in Eq. \ref{err.vmax}.\\ Since we use photometric redshifts in our sample, we quantify the redshift uncertainties that may affect our results by performing Monte Carlo simulations. We created 10 mock catalogues based on our actual sample, allowing the photometric redshift of each source to vary by assigning a randomly selected value according to the Gaussian SDSS photometric error. For each source in the mock catalogues we performed the SED fitting and recomputed both the monochromatic and total IR rest-frame luminosities and the $V_{\rm max}$-based LFs, using the randomly varied redshifts. The comparison between our real IR LF solution and the mean derived from the Monte Carlo simulations shows that the uncertainties deriving from the use of the photometric redshifts do not significantly change the error bars estimated using the Poissonian approach, and mainly affect the lower luminosity bins at the lower redshifts ($z<0.1$). Even though the differences are very small, in Tab. \ref{llf-values.tab} we report the total errors, taking into account all these uncertainties. As an extra test we also check what happens if we estimate the LFs in each field using only spectroscopic redshifts and correct the solutions for the incompleteness effect due to this selection.
The resulting LFs are effectively indistinguishable, thus confirming that the uncertainties introduced by the use of photometric redshifts are of the order of the Poissonian ones. The errors that we quote in Tab. \ref{llf-values.tab} are the total errors, taking into account both Poissonian and redshift uncertainties associated with $\Phi$. In Tab. \ref{mcmc.param} we report the values of the best parameter solutions of the parametric Bayesian ML procedure (explained in Sec. \ref{bayes.MLE}) using the log-Gaussian functional form (Eq. \ref{log.gaus}). In Fig. \ref{mcmc.hist} we report the histograms representing the probability distribution of the best fit parameters produced by the MCMC procedures. To obtain these estimates we run an MCMC procedure with $5\times10^6$ iterations. This procedure is highly time-consuming, thus we focus our attention on the most local bin, $0.02<z<0.1$, of our analysis, where we want to obtain a precise estimate of the shape of the local LF observed by \textit{Herschel} at 250$\,\mu$m, which is our selection band. Such an estimate represents a fundamental benchmark to study the evolution of the luminosity function (e.g. Vaccari et al., in prep.) as discussed later in Sec. \ref{discussion}. \begin{table} \centering \begin{tabular}{lc|c} \multicolumn{2}{c|}{\textbf{Parameter}} & \textbf{$\langle \sigma \rangle$} \\ \hline log($L^*$) [L$_\odot$] & $9.03^{+0.14}_{-0.13}$ & 0.14 \\ $\alpha$ & $0.96 ^{+0.09}_{-0.07}$ & 0.08 \\ $\sigma$ & $0.39^{+0.04}_{-0.04}$ & 0.04 \\ log($\Phi^*$) [Mpc$^{-3}$dex$^{-1}$] & $-1.99^{+0.04}_{-0.02}$ & 0.03 \\ \end{tabular} \caption[]{Best fit parameter solution and uncertainties for the local SPIRE 250$\,\mu$m LF determined using the parametric Bayesian ML procedure.
The redshift range for this solution is $0.02<z<0.1$.} \label{mcmc.param} \end{table} \begin{figure*} \begin{center} {\myincludegraphics{width=0.25\textwidth}{./{}/histoplot-L_star-log-gaussian-5e6-eps-converted-to.pdf}} {\myincludegraphics{width=0.25\textwidth}{./{}/histoplot-Alpha-log-gaussian-5e6-eps-converted-to.pdf}} {\myincludegraphics{width=0.25\textwidth}{./{}/histoplot-Sigma-log-gaussian-5e6-eps-converted-to.pdf}} {\myincludegraphics{width=0.25\textwidth}{./{}/histoplot-Phi_star-log-gaussian-5e6-eps-converted-to.pdf}} \end{center} \caption[]{Probability histograms of the best fitting parameters ($L^*$, $\alpha$, $\sigma$ and $\Phi^*$) for the SPIRE 250$\,\mu$m local luminosity function within $0.02<z<0.1$, determined using the MCMC parametric Bayesian procedure performing $5\times10^6$ iterations. The highlighted area is the $\pm1\sigma$ confidence area for each parameter, as reported in Tab. \ref{mcmc.param}.}\label{mcmc.hist} \end{figure*} A summary of the results is reported in the following figures. In Figs. \ref{local.whole} and \ref{local.perfield} we report the SPIRE 250$\,\mu$m rest-frame LF estimated by using the $1/V_{\rm max}$ and the parametric Bayesian ML, reporting both the solutions for the five fields together (see Tab. \ref{spire-llf-numbers.tab}) and for each field separately. The SPIRE LLFs in different fields do not show any field-to-field variations beyond what is expected from cosmic variance, i.e. about 15$\%$ as predicted by theoretical models (\citealt{Moster2011}).
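For illustration, the log-Gaussian model of Eq. \ref{log.gaus} used in these fits, together with a covariance-matrix form of the error propagation of Eq. \ref{log.gaus.err} ($\sigma^2_\Phi = g^{T}Cg$, with $g$ the parameter gradient of $\Phi$), can be sketched as follows. The base-10 logarithm follows the convention of \cite{Saunders1990}, and the finite-difference gradient is an implementation choice, not part of the formula.

```python
import numpy as np

def modified_schechter(L, phi_star, L_star, alpha, sigma):
    """Log-Gaussian ('modified Schechter') function of Eq. (log.gaus),
    with the log taken base 10 as in Saunders et al. (1990)."""
    x = L / L_star
    return phi_star * x ** (1.0 - alpha) * np.exp(
        -0.5 * np.log10(1.0 + x) ** 2 / sigma ** 2)

def propagated_sigma(f, params, cov, eps=1e-6):
    """Eq. (log.gaus.err) in matrix form, sigma_f^2 = g^T C g, where C is
    the parameter covariance matrix (carrying both the sigma_xj and the
    correlation coefficients r_xj_xk) and g is the gradient of f,
    estimated here by central finite differences."""
    p = np.asarray(params, dtype=float)
    g = np.empty_like(p)
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps * max(1.0, abs(p[j]))
        g[j] = (f(p + dp) - f(p - dp)) / (2.0 * dp[j])
    return float(np.sqrt(g @ cov @ g))
```

Evaluating `propagated_sigma` at each luminosity, with `f = lambda p: modified_schechter(L, *p)` and the parameter covariance from the MCMC chain, traces a $\pm1\sigma$ band of the kind shown around the best MCMC solution.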
To report the confidence area of our Bayesian ML solution we estimate the standard deviation of the best fit model using the following equation: \begin{eqnarray}\label{log.gaus.err} &&\sigma^2_{\Phi(x_{1}, x_{2},..x_{n})} =\displaystyle \sum_{j=1}^{n}\left( \frac{\partial \Phi}{\partial x_j} \sigma_{x_j}\right)^2 + \nonumber \\ &&+ 2 \sum_{j=1}^{n}\sum_{k=j+1}^{n} r_{x_jx_k}\left( \frac{\partial \Phi}{\partial x_j} \sigma_{x_j}\right) \left( \frac{\partial \Phi}{\partial x_k} \sigma_{x_k}\right). \end{eqnarray} This equation represents the general formula for the parametric standard deviation in the case of non-independent variables. The functional form of $\Phi$ is, as already stated, the log-Gaussian function described in Eq. \ref{log.gaus}, in which the parameters $L^*$, $\alpha$, $\sigma$ and $\Phi^*$ are in fact not independent of each other. Thus $\Phi(x_{1}, x_{2},..x_{n})$ reported in Eq. \ref{log.gaus.err} translates, in our specific case, into $\Phi(L^*, \alpha, \sigma, \Phi^*)$, while $\sigma_{x_j}$ expresses the error associated with the $j$-th parameter in the sum (and similarly $\sigma_{x_k}$ for the $k$-th parameter). In Fig. \ref{local.schafer} we report the SPIRE 250$\,\mu$m rest-frame LF estimated by using the semi-parametric method described in Sec. \ref{schafer.sec} and the modified $1/V_{\rm max}$ estimates from \cite{PageCarrera2000} described in Sec. \ref{Vmax}. In Fig. \ref{local.dye} we compare our SPIRE 250$\,\mu$m $1/V_{\rm max}$ LF solution to the H-ATLAS results of \cite{Dye2010}. In Figs. \ref{local.350}, \ref{local.500} and \ref{local.LIR} we report the SPIRE 350/500$\,\mu$m and IR bolometric rest-frame LFs respectively. Finally, as a check on the robustness of our SPIRE 250$\,\mu$m selected sample, we estimate the LFs also at other wavelengths, namely MIPS 24/70/160$\,\mu$m and PACS 70/100/160$\,\mu$m, and compare our results to others already published. In Fig.
\ref{local.multilambda} we report the 24/60/70/160$\,\mu$m rest-frame LFs compared with local predictions at these wavelengths given by different authors. \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-250-xid-out-zs-01-compared-shaded-ml-5000000i-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-250-xid-out-zs-01-compared-ml-5000000i-double-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m rest-frame local luminosity function estimates. The black open circles are our $1/V_{\rm max}$ estimates; the red dashed line is from the \cite{Fontanot2012} model; the beige dashed-dot-dot-dot line is from the \cite{Negrello2007} model and the black dot-dashed and dashed lines are local luminosity function predictions at 250$\,\mu$m from \cite{SerjeantHarrison2005}. The magenta shaded region is the $\pm1\sigma$ best MCMC solution using the log-Gaussian functional form reported in the text. The magenta line in the right panel is the mean of the MCMC solution plotted with the LF estimates in each field (colour-coded as reported in the legend; the colour-coded number reported in the plot below each field's name is the number of sources in each field in the considered redshift bin).} \label{local.whole} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.44\textwidth}{./{}/wide-spire-250-xid-out-zs-01-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.44\textwidth}{./{}/wide-spire-250-xid-out-01-02-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.44\textwidth}{./{}/wide-spire-250-xid-out-02-03-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.44\textwidth}{./{}/wide-spire-250-xid-out-03-04-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.44\textwidth}{./{}/wide-spire-250-xid-out-04-05-compared-double-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m rest-frame local luminosity function
estimates from field to field. The colour-coded open circles are our $1/V_{\rm max}$ results for each field (the black ones are the solution for all five fields considered together); the red dashed line is the \cite{Fontanot2012} model; the beige dashed-dot-dot-dot line is the \cite{Negrello2007} model; the black dot-dashed and dashed lines are local luminosity function predictions at 250$\,\mu$m from \cite{SerjeantHarrison2005}. The \cite{Negrello2007} and \cite{SerjeantHarrison2005} estimates are reported at the same local ($z=0$) redshift in all panels.} \label{local.perfield} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.8\textwidth}{./{}/llf-250-250-double-dye10-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m rest-frame local luminosity functions compared to the H-ATLAS estimate from \cite{Dye2010}. The black open circles are our $1/V_{\rm max}$ results; the blue open diamonds are the SDP H-ATLAS SPIRE 250$\,\mu$m rest-frame local luminosity function from \cite{Dye2010}; the red open triangles are the SDP HerMES SPIRE 250$\,\mu$m rest-frame local luminosity function of \cite{Vaccari2010}; the black open triangles are the SDP HerMES SPIRE 250$\,\mu$m rest-frame luminosity function of \cite{Eales2010b}; the red dashed line is the SPIRE 250$\,\mu$m luminosity function predicted by \cite{Fontanot2012}; the beige dashed-dot-dot-dot line is the \cite{Negrello2007} model; the black dot-dashed and dashed lines are local luminosity function predictions at 250$\,\mu$m from \cite{SerjeantHarrison2005}. The \cite{Negrello2007} and \cite{SerjeantHarrison2005} estimates are reported at the same local ($z=0$) redshift in all panels.} \label{local.dye} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/WIDEbivest-backup.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m luminosity distribution \textit{vs} redshift plane, as reconstructed using the \cite{Schafer2007} estimator.
The red points are the data; the red dashed lines mark the flux limitations adopted in the application of the \textit{semi-parametric} LF estimator by \cite{Schafer2007} and the solid black lines are iso-density contours corresponding to the \textit{semi-parametric} reconstructions of the source volume density as a function of luminosity and redshift.} \label{local.schafer.bivest} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/WIDErhoz0_08-backup.pdf}} \end{center} \caption[]{SPIRE 250$\,\mu$m rest-frame local luminosity function estimated using the semi-parametric method of \cite{Schafer2007} and the modified $1/V_{\rm max}$ approach of \cite{PageCarrera2000}. Our classic $1/V_{\rm max}$ estimate is shown in grey; in red is the estimate using the \cite{PageCarrera2000} method and in black the estimate using the \cite{Schafer2007} approach.} \label{local.schafer} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-350-xid-out-zs-01-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-350-xid-out-zs-02-compared-double-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 350$\,\mu$m rest-frame local luminosity function estimates. 
The black open circles are our $1/V_{\rm max}$ estimates; the red open triangles are the SDP HerMES SPIRE 350$\,\mu$m rest-frame local luminosity function from \cite{Vaccari2010}; the green open triangles are the Planck 857 GHz or 350$\,\mu$m local luminosity function estimate from \cite{Negrello2013}; the black dot-dashed and dashed lines are local luminosity function predictions at 350$\,\mu$m from \cite{SerjeantHarrison2005}.} \label{local.350} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-500-xid-out-zs-01-compared-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-500-xid-out-zs-02-compared-double-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 500$\,\mu$m rest-frame local luminosity function estimates. The black open circles are our $1/V_{\rm max}$ estimates; the red open triangles are the SDP HerMES SPIRE 500$\,\mu$m rest-frame local luminosity function from \cite{Vaccari2010}; the green open triangles are the Planck 545 GHz or 550$\,\mu$m local luminosity function estimate from \cite{Negrello2013} converted to our wavelength by using a spectral index of $\alpha = 2.7$; the black dot-dashed and dashed lines are local luminosity function predictions at 500$\,\mu$m from \cite{SerjeantHarrison2005}.} \label{local.500} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=1.0\textwidth}{./{}/llf-250-multilambda-double-eps-converted-to.pdf}} \end{center} \caption[]{MIPS 24/70/160$\,\mu$m and \textit{IRAS} 60$\,\mu$m LFs as derived from our SPIRE 250$\,\mu$m sample. The black open circles are our $1/V_{\rm max}$ estimates in all the panels. \textbf{Top left} The MIPS 24$\,\mu$m LF estimate.
The open red squares are the IRAS 25$\,\mu$m LF from \cite{Shupe98}; the open green triangles are the MIPS 24$\,\mu$m LF from \cite{Marleau2007}; the open pink hexagons are the MIPS 24$\,\mu$m LF of \cite{Babbedge2006}; the blue asterisks are the 25$\,\mu$m LF from the IIFSCz by \cite{WangMRR2009} converted to MIPS 24$\,\mu$m; the open light blue pentagons are the MIPS 24$\,\mu$m LF from \cite{Rodighiero2010}. \textbf{Top right} The \textit{IRAS} 60$\,\mu$m LF estimate. The open green squares are the \textit{IRAS} 60$\,\mu$m LF from \cite{Saunders1990}. \textbf{Bottom left} The MIPS 70$\,\mu$m LF estimate. The open blue squares are the MIPS 70$\,\mu$m LF of \cite{Patel2013}; the dot-dashed and dashed lines are the LF estimates from \cite{SerjeantHarrison2005}. \textbf{Bottom right} The MIPS 160$\,\mu$m LF estimate. The open blue squares are the MIPS 160$\,\mu$m LF of \cite{Patel2013}; the open black triangles are the ISO 170$\,\mu$m LF from \cite{Takeuchi2006} converted to MIPS 160$\,\mu$m; the dot-dashed and dashed lines are the LF estimates from \cite{SerjeantHarrison2005}.} \label{local.multilambda} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.8\textwidth}{./{}//llf-bol-thesis-red-double-eps-converted-to.pdf}} \end{center} \caption[]{The IR bolometric rest-frame local luminosity functions.
The black open circles are our $1/V_{\rm max}$ results; the green open circles are the $1/V_{\rm max}$ results using COSMOS data (area 1.7 deg$^2$ and flux limited $S_{250} > 10$~mJy, Vaccari et al., in prep.); the blue open squares are the SWIRE IR bolometric rest-frame luminosity function of \cite{Patel2013} using a MIPS 70 and 160$\,\mu$m selected sample in LH and XMM-LSS; the red open triangles are the IR bolometric rest-frame luminosity function estimate of \cite{Vaccari2010}; the red dashed line is the IR bolometric luminosity function predicted by \cite{Fontanot2012}; the pink open diamonds are the \textit{IRAS} IR bolometric rest-frame luminosity function of \cite{Sanders2003}; the beige dashed-dot-dot-dot line is the \cite{Negrello2007} model; the black dotted line is the \cite{Valiante2009} model. The \cite{Sanders2003}, \cite{Negrello2007} and \cite{Valiante2009} estimates are reported at the same local ($z=0$) redshift in all panels.} \label{local.LIR} \end{figure*} \subsection{The IR local luminosity density and the IR local spectral energy distribution} Once we obtain our LF solutions in each redshift bin and for each band, we can integrate them to find the luminosity density per redshift bin, which is connected to the amount of energy emitted by the galaxies at each wavelength and at each instant. To obtain this information we perform a $\chi^2$ fit to our $1/V_{\rm max}$ estimates, using the modified Schechter function described in Eq. \ref{log.gaus}. Since we are limited to a local sample, at $z>0.2$ we do not populate the low luminosity bins of our LFs, and for this reason we cannot reliably constrain the integration at higher redshifts. We thus report in Figs. \ref{local.ld} and \ref{SFR.fig} and in Tab. \ref{LLD} our luminosity density estimates for the SPIRE 250/350/500$\,\mu$m and the IR bolometric luminosity within $z<0.2$, reporting the results for three redshift bins whose mean redshifts are 0.05, 0.1 and 0.15. In Fig.
\ref{local.em} we report the conversion of our luminosity density estimates at SPIRE 250/350/500$\,\mu$m, as well as at MIPS 24/70/160$\,\mu$m wavelengths, to the energy output, and we compare our results to those reported by \cite{Driver2012}. Our plotted estimates, together with others extrapolated at 90 and 170$\,\mu$m, are reported in Tab. \ref{local.energy}. We find that, even though our sample is selected at 250$\,\mu$m, we can reproduce the energy density at all the other considered FIR bands in the very local Universe. This confirms the shape of the energy density published by \cite{Driver2012}, estimated using the GAMA I dataset combined with GALEX, SDSS and UKIRT. \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-250-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-250-lld-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-350-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-350-lld-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-500-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-500-lld-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} \end{center} \caption[]{SPIRE 250/350/500$\,\mu$m rest-frame LFs evolution within $0.02<z<0.5$, along with the luminosity density estimates. \textbf{Left}: the SPIRE 250/350/500$\,\mu$m LFs evolution within $0.02<z<0.5$. The colour-coded full points are our $1/V_{\rm max}$ solutions in each redshift bin, while the solid curves represent the best-fit solutions to the first three redshift bins reported in the legend, using a modified Schechter function whose best-fit parameters are reported in the panel.
\textbf{Right}: SPIRE 250/350/500$\,\mu$m luminosity density resulting from the fit of the LFs in the first three redshift bins reported on the left.} \label{local.ld} \end{figure*} \begin{table} \centering \begin{tabular}{ccccc} \multicolumn{5}{c}{\textbf{Local luminosity density}}\\ \hline \textbf{$\langle z \rangle$} & log($\rho_{L}$,$\sigma$)$_{250}$ & log($\rho_{L}$,$\sigma$)$_{350}$ & log($\rho_{L}$,$\sigma$)$_{500}$ & log($\rho_{L}$,$\sigma$)$_{IR}$ \\ \hline\hline 0.05 & 7.11 , 0.02 & 6.64 , 0.02 & 6.09 , 0.02 & 7.92 , 0.02 \\ 0.10 & 7.23 , 0.02 & 6.75 , 0.01 & 6.20 , 0.01 & 8.02 , 0.02 \\ 0.15 & 7.31 , 0.02 & 6.82 , 0.02 & 6.27 , 0.02 & 8.07 , 0.02 \\ \hline \end{tabular} \caption[]{Local luminosity density estimates in the SPIRE 250/350/500$\,\mu$m bands and for the IR bolometric luminosity using the local SPIRE sample within $0.02<z<0.2$. The values are reported as log(LLD) and log(errors), expressed in L$_{\odot}$ Mpc$^{-3}$.} \label{LLD} \end{table} \begin{table} \centering \begin{tabular}{cc} \multicolumn{2}{c}{\textbf{Local energy output}}\\ \hline $\lambda$ & $\rho_L(\lambda)\,\lambda$ \\ \hline $\mu$m & $10^{33}$ h W Mpc$^{-3}$ \\ \hline 24 & 3.91 $\pm$ 0.69 \\ 60 & 16.87 $\pm$ 3.47 \\ 70 & 22.18 $\pm$ 4.77 \\ 90 & 25.93 $\pm$ 5.59 \\ 100 & 27.10 $\pm$ 5.79 \\ 160 & 19.95 $\pm$ 4.27 \\ 170 & 18.54 $\pm$ 4.00 \\ 250 & 6.98 $\pm$ 1.45 \\ 350 & 2.32 $\pm$ 0.46 \\ 500 & 0.58 $\pm$ 0.14 \\ \hline \end{tabular} \caption[]{Local energy output of the Universe at different wavelengths.} \label{local.energy} \end{table} \begin{figure*} \begin{center} {\myincludegraphics{width=0.8\textwidth}{./{}/llf-local-emission-double-eps-converted-to.pdf}} \end{center} \caption[]{The multi-wavelength energy output in the local Universe. The local luminosity density at different wavelengths was computed by integrating the relevant monochromatic local luminosity functions over the $0.02<z<0.1$ bin. Plotted values of this work, solid black circles, are reported in Tab.
\ref{local.energy}.} \label{local.em} \end{figure*} \subsection{The local star formation rate} The estimate of the local luminosity function in the SPIRE bands is of fundamental importance for studying the evolution of the SPIRE LFs at higher redshift. In practice, local luminosity function estimates guide the priors on the parameters defining the LF shape that is adopted when fitting the LF at higher redshifts as well (Vaccari et al., in prep.). Additionally, thanks to the large volume sampled by shallow and wide area surveys, these estimates allow us to calculate the SFRD in the local Universe with small uncertainties. By integrating the luminosity function in different redshift bins we can estimate the SFR at those redshifts, whenever the observed bands trace the emission of young stellar populations, as in this case. In this context, we can use the IR bolometric luminosity as a tracer of SFR and thus the IR bolometric luminosity density as a tracer of the SFRD. We thus fit our $1/V_{\rm max}$ local luminosity function estimates with the modified Schechter function described in Eq. \ref{log.gaus}, obtaining the estimates of the local luminosity density (LLD) reported in Tab. \ref{LLD}. The lower and upper limits that we used in the LF integration to estimate the LLDs are $L=10^8 L_{\odot}$ and $L=10^{14} L_{\odot}$, respectively. These limits guarantee that we account for the bulk of the IR luminosity emitted by our sources. We then convert the estimate of the luminosity density into star formation rate density using the \cite{Kennicutt1998} relation (assuming a Salpeter IMF): $\psi(t) = \mathrm{SFR} = k(\lambda)\,L(\lambda)$, where $k(\mathrm{IR}) = 4.5 \times 10^{-44}\,[\mathrm{M}_\odot\,\mathrm{yr}^{-1}\,(\mathrm{erg\,s^{-1}})^{-1}]$. We used our SED fitting analysis and the IRAC colour-colour criteria of \cite{Lacy2004} and \cite{Donley2012} to quantify the possible AGN contamination in our sample, as discussed in Sec.
\ref{sedfit.sec}. We find that in our sample the fraction of objects showing AGN-like IRAC colours and AGN-like SEDs is very small, and even if we discard from our results the total luminosity contribution of these sources, our LF and thus SFR estimates do not significantly deviate from the results obtained using the total sample. Even for these AGN-like sources (mainly located above $z\sim0.25$), the vast majority of the IR luminosity is still contributed by dust emission associated with ongoing star formation. This is also confirmed by \cite{Hatz09,Hatz10} and \cite{Bothwell2011}, who show that the AGN contribution to the FIR emission of the general extragalactic population is rather small. For these reasons we conclude that the AGN contribution does not significantly affect our LF and SFRD estimates. The SFRD estimates we obtain from the IR bolometric luminosity density (estimated at $0.02<z<0.1$, $0.05< z<0.15$ and $0.1<z<0.2$) are reported in Tab. \ref{SFR.comp}, together with other SFRD estimates obtained by various authors using different SFR tracers (all the results are converted to the same IMF and cosmology). These same data are also shown in Fig. \ref{SFR.fig}. The uncertainties reported in Tab. \ref{SFR.comp} are percentage errors.
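The chain from the LF fit to the SFRD described above can be sketched numerically as follows. We assume here, purely for illustration, that Eq. \ref{log.gaus} has the modified Schechter form of \cite{Saunders1990}; the parameter values below are placeholders, not our best-fit values.

```python
import numpy as np

# Modified Schechter function (Saunders et al. 1990 form); we assume here
# that Eq. (log.gaus) is of this type.  The parameter values are
# illustrative placeholders, not the best-fit values of this paper.
def modified_schechter(L, L_star=1e10, phi_star=1e-2, alpha=1.2, sigma=0.4):
    x = L / L_star
    return phi_star * x**(1.0 - alpha) * \
        np.exp(-np.log10(1.0 + x)**2 / (2.0 * sigma**2))

# Luminosity density: integrate L * Phi(L) d(log L) between the limits
# quoted in the text, 1e8 and 1e14 L_sun.
logL = np.linspace(8.0, 14.0, 4000)
L = 10.0**logL
rho_L = np.trapz(L * modified_schechter(L), logL)   # L_sun Mpc^-3

# Kennicutt (1998) calibration (Salpeter IMF): SFR = 4.5e-44 L_IR[erg/s]
L_SUN_ERG_S = 3.826e33                              # erg/s per L_sun
sfrd = 4.5e-44 * rho_L * L_SUN_ERG_S                # M_sun yr^-1 Mpc^-3
```

With the best-fit parameters in place of the placeholders, the same integration and calibration steps would reproduce the procedure behind Tabs. \ref{LLD} and \ref{SFR.comp}.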
\begin{table*} \centering \begin{small} \begin{tabular}{lllc} \hline \textbf{Reference} & \textbf{SFR tracer} & $\langle z \rangle$ & SFRD \\ &&&($10^{-3}\;M_{\odot} \;\mathrm{yr}^{-1}$ Mpc$^{-3}$) \\ \hline \hline \cite{Gallego2002}& [OII] & $0.025$ & $9.3\pm3$ \\ \cite{Sullivan2000}& [OII] & 0.15 & $23\pm3$ \\ \cite{Hogg1998} & [OII] & 0.20 & $11 \pm 4$ \\ \cite{Gallego1995}& H$\alpha$ & $0.022$ & $12\pm5$ \\ \cite{Tresse1998}& H$\alpha$ &0.2 & $25\pm 4$\\ \cite{Sullivan2000}& H$\alpha$ & 0.15 & $14\pm3$ \\ \cite{PerezGonzalez2003} & H$\alpha$ &0.025 & $25\pm 4$\\ \cite{Ly2007}& H$\alpha$ &0.08 & $13\pm 4$\\ \cite{Hanish2006} & H$\alpha$ &0.01 & $16^{+2}_{-4}$\\ \cite{Brinchmann2004} & H$\alpha$ & 0.15& $29 \pm 5$ \\ \cite{Dale2010}& H$\alpha$ & 0.16& $10^{+6}_{-4} $ \\ \cite{Westra2010}& H$\alpha$ & 0.05& $6 \pm 2 $ \\ \cite{Westra2010}& H$\alpha$ & 0.15& $12 \pm 3 $ \\ \cite{Serjeant2002}& 1.4 GHz & $0.005$ & $21\pm5$ \\ \cite{Condon1989}& 1.4 GHz & $0.005$ & $21\pm0.5$ \\ \cite{Sullivan2000}& FUV & 0.150 & $39\pm5$ \\ \cite{Martin2005.59M}& FUV+IR & 0.02 & $21\pm 2$\\ \cite{Bothwell2011} & FUV+IR & 0.05 &$25 \pm 1.6$ \\ \cite{Vaccari2010} & IR & 0.1 & $22.3 \pm 8.2$ \\ This work & IR & 0.05 & $14.11 \pm 2.4$ \\ This work & IR & 0.10 & $18.00 \pm 2.9$ \\ This work & IR & 0.15 & $20.10 \pm 2.2$ \\ This work & FUV+IR & 0.05 & $19.07 \pm 2.4$ \\ This work & FUV+IR & 0.10 & $22.53 \pm 2.9$ \\ This work & FUV+IR & 0.15 & $25.42 \pm 2.2$ \\ \hline \end{tabular} \end{small} \caption[Star formation rate density in the local Universe: literature results and from this work]{Star formation rate density in the local Universe: literature results and results from this work. This table is an updated version of the one reported in \cite{Bothwell2011}.
The FUV unobscured SFRD values added to our IR results and quoted in this table are from \cite{Wyder05} at $z=0.05$ and \cite{Budavari05} at $z=0.1$ and $z=0.15$.} \label{SFR.comp} \end{table*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.5\textwidth}{./{}/llf-250-bol-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.4\textwidth}{./{}/llf-250-bol-sfrd-vs-z-double-refers-mixedbin-freelphi-eps-converted-to.pdf}} \end{center} \caption[]{The IR bolometric rest-frame luminosity function evolution within $0.02<z<0.5$, along with the star formation rate density in the local Universe from published results and from this work. \textbf{Left}: The infrared bolometric luminosity function within $0.02<z<0.5$, fitted in the first three redshift bins reported in the legend using a modified Schechter function; \textbf{Right}: The derived SFRD in the local Universe. Black open circles are our results from the integration of the LFs on the left, converted to SFRD using the \cite{Kennicutt1998} relation (assuming a Salpeter IMF), and black asterisks are our results plus the contribution of the UV SFRD as estimated by \cite{Wyder05} at $\langle z \rangle = 0.05$ and \cite{Budavari05} at $\langle z \rangle = 0.1, 0.15$. This sum should represent the total SFRD in the local Universe.
The red open diamonds are [OII] estimates by \cite{Gallego2002}, \cite{Sullivan2000} and \cite{Hogg1998}; the blue open triangles are H$\alpha$ estimates by \cite{Gallego1995}, \cite{Tresse1998}, \cite{Sullivan2000}, \cite{PerezGonzalez2003}, \cite{Ly2007}, \cite{Hanish2006}, \cite{Brinchmann2004}, \cite{Dale2010} and \cite{Westra2010}; the green open squares are radio 1.4 GHz estimates by \cite{Serjeant2002} and \cite{Condon1989}; the magenta crosses are FUV+IR estimates by \cite{Martin2005.59M} and \cite{Bothwell2011}; the cyan crosses are FUV estimates by \cite{Sullivan2000}; the pink open squares are IR estimates from \cite{Vaccari2010}; the black dashed line is from \cite{HB06}.} \label{SFR.fig} \end{figure*} \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-250-params-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/llf-250-bol-params-vs-z-double-mixedbin-freelphi-eps-converted-to.pdf}} \end{center} \caption[]{Evolution of L$^{*}$ and $\Phi^{*}$ as a function of $z$ (L$_* \propto (1+z)^{\alpha_L}$ and $\Phi^* \propto (1+z)^{\alpha_D}$) estimated for the LF at 250 $\mu$m (left panels) and for the IR bolometric rest-frame local luminosity function (right panels) within $0.02 < z < 0.15$.} \label{evo} \end{figure*} \section{Discussion}\label{discussion} Using some of the widest-area surveys performed by \textit{Spitzer} and \textit{Herschel}, in this paper we have studied in detail the local luminosity functions of SPIRE sources. Our LLFs at 250/350/500$\,\mu$m (SPIRE) strongly constrain the local luminosity density of the Universe throughout the FIR/submm wavelength range. Our estimates mostly confirm and improve upon the HerMES SDP results published in \cite{Vaccari2010}, thanks to our increased statistics; this is particularly visible in the 500$\,\mu$m LF solution, which shows strongly reduced uncertainties.
\cite{Dye2010} used \textit{Herschel} SDP data to compute the H-ATLAS (\citealt{Eales2010a}) SPIRE local luminosity function. This analysis was carried out very early in the \textit{Herschel} mission and relied on shallower SPIRE observations, fewer ancillary data and a smaller coverage area than our own. We thus judge that the H-ATLAS analysis is likely to have suffered from detection and cross-identification incompleteness, and that the discrepancy we find between our results and theirs is therefore likely to be due to the H-ATLAS analysis. Our results are in fact broadly in agreement with their estimates in the highest luminosity bins, where the uncertainties on the H-ATLAS SPIRE flux estimates were possibly smaller and their sample more complete, but at the lowest luminosities and redshifts they found LF values lower by up to 50\%, as shown in Fig. \ref{local.dye}. The semi-parametric luminosity function estimate of \cite{Schafer2007} (see Fig. \ref{local.schafer}) is in excellent agreement with the other classical estimators at low redshifts, $z<0.2-0.3$. At higher redshifts the agreement becomes poorer, being acceptable at high luminosities but degrading at lower luminosity values, where the semi-parametric estimate always exceeds the $1/V_{\rm max}$ values. The precise origin of this problem is not fully understood, but it clearly occurs in regions of the data space that are poorly sampled by the observations or where the data are scattered, e.g. by the effects of the K-correction. From the IR bolometric luminosity function we can estimate the SFRD of the local Universe in various redshift bins. In Fig. \ref{SFR.fig} we report our SFRD solutions and compare them to others already published in the same redshift range. We see a large scatter in the local SFRD estimates based on different SFR diagnostics. In particular, the H$\alpha$ measurements present the largest scatter between different published results.
Our new data are entirely consistent with \cite{Vaccari2010} and show good agreement also with [OII]-based estimates (except perhaps the $z=0.2$ estimate by \citealt{Hogg1998}). Instead, our SFRD based on the FIR/submm bolometric flux is systematically lower than the radio 1.4$\,$GHz estimates and those combining IR and UV data by \cite{Martin2005.59M} and \cite{Bothwell2011}. In principle the radio flux should be unaffected by dust extinction and thus a more faithful representation of the total SFR than either the IR or IR+UV values. Nevertheless, the radio flux can be more affected by AGN activity than the IR/submm one. If we add to our FIR estimate the UV-uncorrected portion of the SFRD mapped by short-wavelength UV spectral data, we find that our total UV+IR SFRD is comparable, within the errors, to the radio estimates, thus confirming that the UV+IR SFRD estimate is a good proxy for the total SFRD in the local Universe and that the contamination from AGN in the radio derivation is negligible. The analysis reported in this paper represents a fundamental local benchmark for studying the evolution of the LF and, consequently, of the derived SFR with cosmic time. Studying the evolution of the luminosity function requires very deep data, which are limited to very small areas of the sky; it is therefore difficult to constrain the local shape of the LF, for which a large statistical sample of local galaxies (like ours) is required. This can be seen in Fig. \ref{local.LIR}, where we compare our local analysis with the one based on the deep COSMOS data (area 1.7 deg$^2$ and flux limited $S_{250} > 10$~mJy) that will be reported in Vaccari et al. (in prep.). Only the large area surveyed by our sample enables us to really study the local shape of the LF, while the deep sample populates only a few luminosity bins. On the other hand, deep data become more and more important with increasing redshift, where our sample soon starts being limited to the higher luminosity bins.
Our luminosity function estimates show significant and rapid luminosity evolution already at low redshifts. In Fig. \ref{evo} we report our results on the redshift evolution of the parameters expressing the spatial density dependence ($\Phi^*$) and the luminosity dependence (L$^*$) of the LFs, estimated for the IR bolometric and the 250$\,\mu$m luminosity functions. We find positive evolution in luminosity and negative evolution in density, with L$_{IR}^* \propto (1+z)^{6.0\pm0.4}$, $\Phi_{IR}^* \propto (1+z)^{-2.1\pm0.4}$ for the IR bolometric LF and L$_{250}^* \propto (1+z)^{5.3\pm0.2}$, $\Phi_{250}^* \propto (1+z)^{-0.6\pm0.4}$ for the 250$\,\mu$m LF. The high evolution rates that we find (both positive and negative) for the luminosity and density parameters are nevertheless consistent with previous results based on earlier and more limited datasets from \textit{Spitzer} (\citealt{Patel2013}) and from \textit{IRAS} (\citealt{Hacking1987}; \citealt{Lonsdale1990}). Similar, although slightly lower, trends for positive luminosity and negative density evolution are found by \cite{Gruppioni2013}, who used a deeper sample over a much smaller area than ours. Their sample includes sources as faint as ours, but very few of them lie in the local Universe, and they suffer from sample variance owing to the small areas targeted. For this reason we are able to obtain a more accurate estimate of the LFs down to similar luminosities in the local Universe. Of particular interest for our analysis is the comparison with \cite{Negrello2013} reported in Figs. \ref{local.350} and \ref{local.500}: their estimate shows a steep LF in the lowest luminosity bins, while ours remains flat down to $L_{350}\sim10^8[L_\odot]$ and $L_{500}\sim10^7[L_\odot]$, respectively. In general, our low-z luminosity functions are computed at $z>0.02$, while the Planck sources used by \cite{Negrello2013} are located at a mean redshift of $z\sim0.01$.
This means that our analysis is based on a deeper sample, somewhat complementary to the Planck one. Our sample therefore does not suffer from contamination from either the Local Super Cluster or the Virgo Cluster (which affect Planck and thus potentially the \citealt{Negrello2013} estimates), while representing the LF of typical galaxies in the not-so-nearby Universe (unlike Planck). Moreover, it can be argued that our measurement averages over any local inhomogeneity by sampling a larger cosmic volume than Planck ($\sim10$ times larger at $z\sim0.2$ over 39 deg$^2$ than Planck at $z\sim0.01$ over 30,000 deg$^2$). Indeed, over a much smaller area but with a much deeper sample, the flatness of the slope is also confirmed by \cite{Gruppioni2013} when measuring the $0<z<0.3$ IR LF. In any case, at values of $L_{350}$ brighter than $\sim10^8 [L_\odot]$ and $L_{500}$ brighter than $\sim10^7 [L_\odot]$, where we are $\sim100\%$ complete and where the Planck sample is less affected by the presence of local structures and inhomogeneities, we find that our results are in overall agreement with \cite{Negrello2013} at both 350 and 500 $\mu$m. Similar considerations can be made when we compare our IR bolometric LF with the previous estimate obtained by \cite{Sanders2003}, which appears to be slightly steeper than ours in the lowest-luminosity bins (see Fig. \ref{local.LIR}). The mean and median redshifts of the entire IRAS sample used by \cite{Sanders2003} are in fact $z=0.0126$ and $z=0.0082$, respectively, and their LF estimate can therefore be affected by the Virgo cluster in the same way as the Planck estimates discussed above. Our ability to map the local LF with good precision has revealed a wiggle in the shapes of the functions, with local maxima at $\mathrm{log}\, L_{250}\sim 9.5$ and $\mathrm{log}\, L_{IR}\sim 10.5$, respectively.
This feature, which appears relatively stable with wavelength, is reminiscent of similar behaviour found in the local mass functions of galaxies (\citealt{Moustakas2013}, \citealt{Baldry2012}, \citealt{Ilbert2013}) and interpreted as due to the summed contributions of red and blue galaxies, having Schechter functions with different slopes and cutoff masses. Given the known relationship between stellar mass and IR luminosity, it may not come as a surprise that a similar feature appears in our IR luminosity functions. To test this possibility, we have divided our sample into red and blue sub-populations, following the recipe of \cite{Baldry2012}, and separately calculated the LFs for the two classes. The results, reported in Fig. \ref{bluered}, confirm that red galaxies have an IR LF peaking at $\mathrm{log}\, (L_{250})\sim 9.5$ and $\mathrm{log}\, (L_{IR})\sim 10.5$ and decreasing at higher and lower $L$, while blue galaxies have steep Schechter slopes and lower characteristic luminosities. These are purely observational results; further analysis would be required to better constrain this feature, but this goes beyond the scope of this paper. At any rate, our findings seem to indicate that massive early-type spirals dominate the high-IR-luminosity end of the LF, while bluer, lower-mass late-type spirals and irregulars dominate its low-luminosity end. We also performed a preliminary comparison with semi-analytical models of galaxy formation available in the literature, focusing our attention on the redshift range between $z=0.02$ and $z=0.2$. From these preliminary comparisons we notice that the \cite{Fontanot2012} predictions (using the MORGANA code by \citealt{Monaco2007}) seem to broadly reproduce the shape of the LF within the uncertainties, but underestimate it at lower luminosities when compared to our IR bolometric LF estimates.
Other predictions, e.g. by \cite{Negrello2007}, \cite{SerjeantHarrison2005} and \cite{Valiante2009} at different wavelengths, also show good agreement with our results at higher luminosities, but most of them seem to underestimate the LF compared to what we obtain at lower luminosities. A more careful and systematic analysis of existing and improved models is required to properly address this issue (e.g., \citealt{Gruppioni2015}, Franceschini et al., submitted). \begin{figure*} \begin{center} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-250-xid-out-compared-bluered-kcorrect-zs-02-double-eps-converted-to.pdf}} {\myincludegraphics{width=0.45\textwidth}{./{}/wide-spire-bol-xid-out-compared-bluered-kcorrect-zs-02-double-eps-converted-to.pdf}} \end{center} \caption[]{250$\,\mu$m and IR bolometric LFs for the blue and red populations (reported with blue and red open circles, respectively) compared with the total of the two populations (black open circles) in the redshift range $0.02 < z < 0.2$.} \label{bluered} \end{figure*} \section{Conclusions}\label{conclusions} The determination of the galaxy luminosity function is often hampered by the difficulties of covering a wide area down to faint fluxes on the one hand, and of determining counterparts and redshifts for detected sources in a complete and reliable manner on the other. In this work we have thus assembled and exploited the widest-area \textit{Spitzer} and \textit{Herschel} extragalactic surveys to select IR galaxy samples in a complete and reliable manner, and the best UV/Optical/NIR ancillary data to identify them. Thanks to \textit{Spitzer} and \textit{Herschel} observations we are now able to reliably sample the IR bolometric luminosity of local sources and thus provide important insights into dust-obscured star formation activity across cosmic time.
Even with the best data sets, however, accurately constructing the luminosity function remains a tricky pursuit, since observational selection effects due to, e.g., detection thresholds in apparent magnitude, colour, surface brightness or some combination thereof can make any given galaxy survey incomplete and thus introduce biases in the luminosity function estimates. Only by comparing results from different luminosity function estimators applied to the same samples can we assess the impact of these biases in a robust manner. Armed with the Spitzer Data Fusion, we were able to describe the $0.02 < z < 0.5$ local luminosity function of sources selected in wide fields by \textit{Herschel} SPIRE imaging. We fully exploited the multi-wavelength information collected within the Spitzer Data Fusion\ to perform an SED fitting analysis of SPIRE sources and thus estimate the monochromatic rest-frame luminosities at 250, 350 and 500$\,\mu$m as well as the IR luminosity between 8 and 1000$\,\mu$m. We then implemented a number of different statistical estimators to evaluate the local luminosity functions of flux-limited samples in these bands: the classical $1/V_{\rm max}$ estimator of \cite{Schmidt1968} and the modified $1/V_{\rm est}$ version of \cite{PageCarrera2000}; a parametric maximum likelihood technique (ML) based on a Bayesian approach as described in \cite{Kelly2008}; and finally a semi-parametric approach introduced by \cite{Schafer2007}. Our high-quality determinations of the IR luminosity functions have revealed for the first time some previously unidentified features in their shapes, which we interpret as due to the contributions of red (possibly early-type) and blue (possibly late-type) galaxy populations, with their different Schechter forms. By means of this analysis we find that the luminosity functions show significant and rapid luminosity evolution already at low redshifts, $0.02 < z < 0.2$.
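As an illustrative sketch of the first of these estimators, the $1/V_{\rm max}$ method of \cite{Schmidt1968} can be written in a few lines. The low-redshift Hubble-law volume, the cosmological parameters and all input values below are simplifying assumptions for illustration only; the full analysis uses exact comoving volumes for the adopted cosmology together with completeness corrections.

```python
import numpy as np

C_KM_S, H0 = 2.998e5, 70.0   # assumed c [km/s] and H0 [km/s/Mpc]

def survey_volume(z, area_deg2):
    """Volume out to redshift z over area_deg2, using a Hubble-law
    distance d = cz/H0 -- a fair approximation only at z <~ 0.2."""
    d = C_KM_S * z / H0                                  # Mpc
    return (4.0 / 3.0) * np.pi * d**3 * area_deg2 / 41253.0

def vmax_lf(logL, zmax, z_lo, z_hi, area_deg2, edges):
    """Schmidt (1968) 1/Vmax estimator.  zmax is the redshift at which
    each source would drop below the survey flux limit, capped at the
    upper edge z_hi of the redshift bin."""
    vmax = survey_volume(np.minimum(zmax, z_hi), area_deg2) \
         - survey_volume(z_lo, area_deg2)
    phi = np.empty(len(edges) - 1)
    for i in range(len(phi)):
        in_bin = (logL >= edges[i]) & (logL < edges[i + 1])
        phi[i] = np.sum(1.0 / vmax[in_bin]) / (edges[i + 1] - edges[i])
    return phi                                           # Mpc^-3 dex^-1
```

Each source contributes $1/V_{\rm max}$ to its luminosity bin, with $V_{\rm max}$ set source by source from the flux limit and the K-correction; this weighting is what corrects the raw counts for the survey flux limit.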
Converting our IR LD estimate into an SFRD, we can determine the SFRD of the local Universe up to redshift 0.2, where the integration of the LF solution is more reliable, given that our data set fails to populate the low luminosity bins of the LF at higher $z$. Adding to our IR SFRD estimate the unobscured contribution based on the dust-uncorrected UV emission from local galaxies, we estimate that SFRD $\simeq $ SFRD$_0+0.08 z$, where SFRD$_0\simeq (1.9\pm 0.03)\times 10^{-2} [\mathrm{M}_\odot\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}]$ is our total SFRD estimate at $z\simeq0.02$. This analysis represents a local benchmark for studying the evolution of the infrared luminosity function and star formation rate function with cosmic time. \section*{Acknowledgements} LM acknowledges support from the Science and Technology Facilities Council under grant ST/J001597/1. LM, MV and AF acknowledge support from ASI ``\textit{Herschel} Science'' Contracts I/005/07/1 and I/005/11/0. Mattia Negrello produced additional predictions based on his models. JW acknowledges the Dark Cosmology Centre funded by the Danish National Research Foundation. MV acknowledges support from the Square Kilometre Array South Africa project, the South African National Research Foundation and Department of Science and Technology (DST/CON 0134/2014), the European Commission Research Executive Agency (FP7-SPACE-2013-1 GA 607254) and the Italian Ministry for Foreign Affairs and International Cooperation (PGR GA ZA14GR02). NS is the recipient of an ARC Future Fellowship. AF acknowledges support from the ERC via an Advanced Grant under grant agreement no. 321323-NEOGAL. This work makes use of STILTS \url{http://www.starlink.ac.uk/stilts/} and TOPCAT \citep{taylor05}. SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ.
Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); Stockholm Observatory (Sweden); STFC (UK); and NASA (USA). The authors would like to thank the anonymous referee for helpful comments. \clearpage \newpage \begin{table} \centering \begin{small} \begin{tabular}{|c|c|c|c|c|} \hline $\log L$ & $\log\,(\Phi,\sigma)_{250}$ & $\log\,(\Phi,\sigma)_{350}$ & $\log\,(\Phi,\sigma)_{500}$ & $\log\,(\Phi,\sigma)_{IR}$ \\ \hline \multicolumn{5}{|c|}{\textbf{$0.02<z<0.1$ Luminosity Functions}}\\ \hline 7.16 & - & - & -2.19 , -2.07 & - \\ 7.33 & - & - & -1.90 , -2.54 & - \\ 7.49 & - & - & -2.06 , -2.72 & - \\ 7.66 & - & - & -2.06 , -2.90 & - \\ 7.83 & - & - & -2.17 , -3.11 & - \\ 8.00 & - & - & -2.12 , -3.21 & - \\ 8.16 & -2.10 , -2.06 & -2.09 , -2.89 & -2.15 , -3.34 & - \\ 8.33 & -1.96 , -2.57 & -2.15 , -3.05 & -2.30 , -3.47 & - \\ 8.49 & -2.08 , -2.72 & -2.11 , -3.17 & -2.52 , -3.58 & - \\ 8.66 & -2.06 , -2.89 & -2.14 , -3.31 & -2.75 , -3.69 & - \\ 8.83 & -2.17 , -3.08 & -2.23 , -3.43 & -3.34 , -3.99 & - \\ 9.00 & -2.15 , -3.20 & -2.48 , -3.56 & -3.44 , -4.04 & -2.10 , -2.60 \\ 9.16 & -2.11 , -3.31 & -2.66 , -3.65 & -4.34 , -4.49 & -2.11 , -2.69 \\ 9.33 & -2.26 , -3.45 & -3.05 , -3.84 & - & -1.96 , -2.69 \\ 9.49 & -2.49 , -3.57 & -3.49 , -4.07 & - & -2.17 , -3.00 \\ 9.66 & -2.69 , -3.67 & -3.86 , -4.25 & - & -2.12 , -2.99 \\ 9.83 & -3.12 , -3.88 & - & - & -2.13 , -3.24 \\ 10.00 & -3.53 , -4.08 & - & - & -2.25 , -3.35 \\ 10.16 & -4.04 , -4.34 & - & - & -2.35 , -3.47 \\ 10.33 & - & - & - & -2.50 , -3.56 \\ 10.49 & - & - & - & -2.91 , -3.77 \\ 10.66 & - & - & - & -3.06 , -3.85 \\ 10.83 & - & - & - & -3.28 , -3.96 \\ 11.00 & - & - & - & -3.94 , -4.29 \\ 11.16 & - & - & - & -4.16 , -4.40 \\ \hline
\multicolumn{5}{|c|}{\textbf{$0.1<z<0.2$ Luminosity Functions}}\\ \hline 8.33 & - & - & -2.31 , -3.23 & - \\ 8.49 & - & - & -2.31 , -3.52 & - \\ 8.66 & - & - & -2.44 , -3.76 & - \\ 8.83 & - & - & -2.70 , -4.05 & - \\ 9.00 & - & - & -3.08 , -4.27 & - \\ 9.16 & - & - & -3.57 , -4.51 & - \\ 9.33 & -2.28 , -3.14 & -2.63 , -4.01 & -4.21 , -4.82 & - \\ 9.49 & -2.28 , -3.49 & -2.91 , -4.18 & -4.76 , -5.10 & - \\ 9.66 & -2.41 , -3.76 & -3.39 , -4.42 & - & - \\ 9.83 & -2.64 , -3.95 & -4.02 , -4.73 & - & - \\ 10.00 & -2.98 , -4.22 & -4.55 , -5.00 & - & - \\ 10.16 & -3.45 , -4.45 & -5.45 , -5.44 & - & -2.36 , -3.37 \\ 10.33 & -4.05 , -4.74 & - & - & -2.41 , -3.66 \\ 10.49 & -4.61 , -5.03 & - & - & -2.48 , -3.64 \\ 10.66 & - & - & - & -2.69 , -3.92 \\ 10.83 & - & - & - & -3.01 , -4.18 \\ 11.00 & - & - & - & -3.40 , -4.41 \\ 11.16 & - & - & - & -3.64 , -4.16 \\ 11.33 & - & - & - & -4.15 , -4.80 \\ 11.49 & - & - & - & -4.76 , -5.10 \\ \hline \end{tabular} \end{small} \caption[]{SPIRE 250, 350, 500$\,\mu$m and IR bolometric rest-frame $1/V_{\rm max}$ luminosity function estimates in the redshift ranges between 0.02 and 0.5, using the HerMES Wide Fields sample. $L$ indicates $\nu\,L_\nu$ for the monochromatic LFs and $L_{\rm IR}$ indicates the integrated luminosity between 8 and 1000$\,\mu$m for the IR bolometric rest-frame LF. $L$ is expressed in units of $\rm L_\odot$, while the LLF estimates and their errors are in $\mathrm{[Mpc^{-3}\,dex^{-1}]}$.
The quantity $\sigma$ is the total error (Poissonian error + redshift uncertainties, estimated as explained in the text) associated with $\Phi$ in each band and luminosity/redshift bin.}
\label{llf-values.tab}
\end{table}

\begin{table}
\centering
\begin{small}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\log L$ & $\log\,(\Phi,\sigma)_{250}$ & $\log\,(\Phi,\sigma)_{350}$ & $\log\,(\Phi,\sigma)_{500}$ & $\log\,(\Phi,\sigma)_{IR}$ \\
\hline
\multicolumn{5}{|c|}{\textbf{$0.2<z<0.3$ Luminosity Functions}}\\
\hline
8.83 & - & - & -2.72 , -3.57 & - \\
9.00 & - & - & -2.88 , -4.14 & - \\
9.16 & - & - & -3.21 , -4.13 & - \\
9.33 & - & - & -3.69 , -4.75 & - \\
9.49 & - & -2.78 , -4.02 & -4.19 , -4.93 & - \\
9.66 & - & -3.09 , -4.11 & -5.25 , -5.54 & - \\
9.83 & - & -3.52 , -4.66 & -5.85 , -5.82 & - \\
10.00 & -2.81 , -4.05 & -4.02 , -4.88 & - & - \\
10.16 & -3.13 , -4.21 & -4.85 , -5.33 & - & - \\
10.33 & -3.56 , -4.65 & -5.55 , -5.69 & - & - \\
10.49 & -4.13 , -4.92 & - & - & - \\
10.66 & -4.89 , -5.37 & - & - & -2.81 , -3.62 \\
10.83 & -5.55 , -5.69 & - & - & -2.97 , -4.06 \\
11.00 & - & - & - & -3.208 , -4.19 \\
11.16 & - & - & - & -3.48 , -4.52 \\
11.33 & - & - & - & -3.77 , -4.77 \\
11.49 & - & - & - & -4.27 , -4.85 \\
11.66 & - & - & - & -4.85 , -5.25 \\
\hline
\multicolumn{5}{|c|}{\textbf{$0.3<z<0.4$ Luminosity Functions}}\\
\hline
9.33 & - & - & -3.10 , -3.92 & - \\
9.49 & - & - & -3.58 , -4.44 & - \\
9.66 & - & - & -4.31 , -4.75 & - \\
9.83 & - & -3.03 , -3.92 & -4.88 , -5.33 & - \\
10.00 & - & -3.43 , -4.40 & -6.09 , -6.05 & - \\
10.16 & - & -4.04 , -4.74 & -5.79 , -5.94 & - \\
10.33 & -3.02 , -3.87 & -4.72 , -5.29 & - & - \\
10.49 & -3.47 , -4.40 & -5.62 , -5.84 & - & - \\
10.66 & -4.13 , -4.74 & -5.79 , -5.94 & - & - \\
10.83 & -4.80 , -5.31 & - & - & - \\
11.00 & -5.79 , -5.92 & - & - & -3.06 , -3.96 \\
11.16 & - & - & - & -3.25 , -4.05 \\
11.33 & - & - & - & -3.42 , -4.49 \\
11.49 & - & - & - & -3.73 , -4.81 \\
11.66 & - & - & - & -4.16 , -4.10 \\
11.83 & - & - & - &
-4.63 , -5.36 \\
\hline
\multicolumn{5}{|c|}{\textbf{$0.4<z<0.5$ Luminosity Functions}}\\
\hline
9.66 & - & - & -3.75 , -4.43 & - \\
9.83 & - & -3.03 , -3.92 & -4.63 , -4.71 & - \\
10.00 & - & -3.43 , -4.40 & -4.99 , -5.38 & - \\
10.16 & - & -4.04 , -4.74 & -5.68 , -5.89 & - \\
10.33 & - & -4.35 , -4.70 & - & - \\
10.49 & -3.23 , -4.11 & -4.92 , -5.37 & - & - \\
10.66 & -3.69 , -4.41 & -5.35 , -5.76 & - & - \\
10.83 & -4.41 , -4.70 & - & - & - \\
11.00 & -4.97 , -5.38 & - & - & - \\
11.16 & -5.58 , -5.85 & - & - & - \\
11.33 & - & - & - & -3.45 , -4.42 \\
11.49 & - & - & - & -3.61 , -4.42 \\
11.66 & - & - & - & -3.87 , -4.65 \\
11.83 & - & - & - & -4.23 , -5.10 \\
12.00 & - & - & - & -4.99 , -5.60 \\
\hline
\end{tabular}
\end{small}
\end{table}
\clearpage
\newpage
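The tabulated $\Phi$ values are $1/V_{\rm max}$ estimates. As a minimal sketch of that estimator (the function and variable names below are illustrative, not from the paper, and the Poisson term is only the counting part of the error budget -- the tabulated $\sigma$ also folds in redshift uncertainties):

```python
import numpy as np

def vmax_lf(log_L, V_max, bin_edges):
    """1/V_max luminosity-function estimate, Phi in [Mpc^-3 dex^-1].

    log_L     -- log10 luminosities of the detected sources
    V_max     -- maximum comoving volume [Mpc^3] in which each source
                 would still satisfy the survey selection
    bin_edges -- edges of the log L bins [dex]
    """
    log_L, V_max = np.asarray(log_L), np.asarray(V_max)
    dlogL = np.diff(bin_edges)
    phi = np.empty(len(dlogL))
    sigma_poisson = np.empty(len(dlogL))
    for i in range(len(dlogL)):
        sel = (log_L >= bin_edges[i]) & (log_L < bin_edges[i + 1])
        w = 1.0 / V_max[sel]          # each source contributes 1/V_max
        phi[i] = w.sum() / dlogL[i]
        # Poisson part only: quadrature sum of the weights
        sigma_poisson[i] = np.sqrt((w ** 2).sum()) / dlogL[i]
    return phi, sigma_poisson
```

For example, three sources with $V_{\rm max}=10^3$~Mpc$^3$ in a 0.5~dex wide bin give $\Phi = 6\times10^{-3}$~Mpc$^{-3}$\,dex$^{-1}$.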
\section{Introduction} \label{sec:introduction} Waves in the solar chromosphere may be divided in many different ways: high versus low chromosphere, below the canopy versus above the canopy, network versus internetwork, acoustic versus magnetic versus gravity mode, linear versus shocks, quiet versus active region, standing versus propagating versus evanescent, global versus local, short-period versus three-minute versus five-minute versus long-period, longitudinal versus transverse versus torsional. And, in particular, whether they heat appreciably or negligibly. The latter issue is the most frequently quoted motivation for chromospheric wave studies and for their claims to fame, but it seems to me that identifying the nature of the observed modulations and the underlying processes and interactions should come first -- and, personally, I regard the physics as more interesting than the eventually resulting overall energy budget, a mere summation at the very end with the answer already known as balancing photon losses. \section{Overview} \label{sec:overview} In this section I briefly review the literature\footnote{Note added December 5, 2010: All ADS abstracts for almost all solar physicists cited in this paper are collected at \url{http://www.astro.uu.nl/\~rutten/solar_abstracts}.} using the above splits as guideline. \paragraph{Chromosphere.} Even this term is non-trivial. In the standard one-dimensional modeling of Vernazza, Avrett \& Loeser (\citeyearads{1973ApJ...184..605V}, \citeyearads{1976ApJS...30....1V}, \citeyearads{1981ApJS...45..635V}) and \citetads{1993ApJ...406..319F}, it denotes the plane-parallel layers between the temperature minimum and the onset of the coronal temperature rise. Within this definition, UV continua shortward of 1600\,\AA\ come from the chromosphere (VALIII Fig.~36 in \citeads{1981ApJS...45..635V}), and strong optical lines are also chromospheric -- but not always. 
For example, the \NaD\ lines reach $\tau \is 1$ above the VALIII temperature minimum, but they are photospheric in their intensity response and do not map the chromospheric temperature rise in their VALIII emergent profiles (\citeads{1992A&A...265..237B}). Even \CaIIK\ filtergrams primarily sample the upper photosphere since the intensity-encoding thermal creation (rather than the velocity-encoding last scattering) of most of the observed photons takes place well below the VALIII minimum. In addition, the existence of a ubiquitous temperature rise starting at $h=500$~km has been put into doubt, first by Ayres' CO modeling (\eg\ \citeads{1981ApJ...245.1124A}; \citeads{1981ApJ...244.1064A}; \citeads{1986ApJ...304..542A}; \citeads{1989ApJ...338.1033A}; \citeads{1990ApJ...363..705A}; \citeads{1996ApJ...460.1042A}) and subsequently by the Carlsson--Stein acoustic shock modeling (see below). Hence, I prefer to define ``chromosphere'' -- harking back to its eclipse origin = the purple color of \Halpha\ plus \mbox{H\hspace{0.2ex}$\beta$}\ off-limb emission -- as the solar regime characteristically sampled by Balmer lines as a complex mass of fibrils, and taking the latter to represent (incomplete!) mappings of magnetic canopies\footnote{The definition implies that chromospheric studies must include \Halpha\ even though its formation is singularly awkward through mixing thickness and thinness with NLTE opacity and source function sensitivities including Zanstra-like ionisation-plus-recombination photon conversions. An unpleasant but inescapable conclusion for one preferring clean lines such as \CaII\ \mbox{H\,\&\,K}. Quantitative \Halpha\ mapping requires considerable work on reliable \Halpha\ interpretation, comparable to the ongoing efforts in Stokes profile inversion.}.
\paragraph{Canopies.} The concept comes from \citetads{1980SoPh...68...49G}, \citetads{1982SoPh...79..247J}, \citetads{1982SoPh...79..267G} and is portrayed in older models and recent simulations (\eg\ \citeads{1993A&A...268..736B}; \citeads{2002ApJ...564..508R}; \citeads{2002AN....323..196B}) as smoothly upward spreading field hovering dome-like over essentially field-free internetwork. This is a simplification. If we define the chromosphere to start at canopy height (the height where the plasma beta drops through unity), its lower boundary will actually be a very warped surface, offset by dynamical flows and with large topological variations defined by the small-scale and large-scale field strength and connectivity, as partially delineated by \Halpha\ as short local and long distant fibril connections (\cf\ \citeads{2002SoPh..207..223S}). \paragraph{High versus low chromosphere.} This distinction now means well above and just above the canopy, respectively. Most of the references in this review pertain to the low chromosphere or even the upper photosphere (say $h=400-800$~km). Higher up, most of the recent wave literature employs CDS and SUMER spectrometry to discuss to what height the internetwork three-minute oscillations penetrate (\eg\ \citeads{1997ASPC..118..284S}; \citeads{1997ApJ...486L..63C}; \citeads{1997ApJ...490L.195J}; \citeads{1998SoPh..181...51D}; \citeads{1998ApJ...503L..95C}; \citeads{1999SoPh..184..253G}; \citeads{1999A&A...347..335D}; \citeads{2000ApJ...531.1150W}; \citeads{2001ApJ...554..424J}; \citeads{2001ApJ...561..420M}). Such penetration is likely to vary strongly with the actual field geometry. Oscillation analyses in tandem with moss and coronal loop diagnostics are as yet scarce (but see Ineke de Moortel's contribution in this volume). Theoretical insight comes from beautiful Oslo simulations (\citeads{2002ApJ...564..508R}; \cf\ \citeads{2002AN....323..196B}).
I wonder whether such simulations for different stellar parameters may, at long last, explain the Wilson-Bappu relation between stellar luminosity and \CaII\ \mbox{H\,\&\,K}\ peak width (\citeads{1957ApJ...125..661W}). It must describe fluxtube atmosphere properties, rather than non-magnetic atmospheric stratifications (\eg\ \citeads{1979ApJ...228..509A}; \citeads{1983A&A...128..311K}), since the emission peaks come from the magnetic component. \paragraph{Network versus internetwork.} This division is also overly simplistic. The verdict on internetwork fields isn't yet in, but they exist undoubtedly at some level of field strength and scale of spatial organization. \CaIIK\ and TRACE UV image sequences show extended zones of enhanced brightness (with respect to the darkest ``cell centers'') as aureoles around network (\cf\ Fig.~\ref{fig:canopy}). Acoustic maps show similar three-minute power aureoles when sampling power below the canopy and power shadows above it (\eg\ \citeads{1992ApJ...392..739B}; \citeads{1992ApJ...394L..65B}; \citeads{1993ApJ...415..847T}; \citeads{1998ApJ...504.1029H}; \citeads{1999ApJ...510..494L}; \citeads{1999ApJ...513L..79B}; \citeads{2000ApJ...537.1086T}; \citeads{2001ApJ...548L.237M}; \citeads{2001ApJ...554..424J}; \citeads{2001ApJ...561..420M}; \citeads{2001A&A...379.1052K}). In addition, \CaIIK\ and TRACE UV image sequences also show evidence of ``magnetic flashers'', presumably isolated fluxtubes on their way to or from the network concentrations (Brandt \etal, \citeyearads{1992ASPC...26..161B}, \citeyearads{1994ssm..work..251B}; \citeads{1998SoPh..179..253N}; \citeads{1999ApJ...517.1013L}; \citeads{2001A&A...379.1052K}). Their isolation may be helpful in trying to identify chromospheric tube modes excited by convective buffeting without amplitude loss from phase mixing over multiple fluxtubes. \paragraph{Acoustic versus magnetic versus gravity modes.} Or mixtures, of course. 
{\em Magnetic modes\/} are often invoked to convey energy from photospheric buffeting up along network fluxtubes (\eg\ \citeads{1999ApJ...519..899H}; \citeads{2000ApJ...535L..67H}) but have not been convincingly diagnosed yet, except negatively as suppression of acoustic wave power by conversion at canopies (\citeads{2001ApJ...548L.237M}; \citeads{2001ApJ...561..420M}; \citeads{2002ApJ...564..508R}). {\em Internal gravity waves\/} should be ``copiously excited'' in granular overshoot according to theory (\eg\ \citeads{1963ApJ...137..914W}; Lighthill in \citeads{1967IAUS...28.....T}; \citeads{1967SoPh....2..385S}; \citeads{1977SoPh...54..269S}; Mihalas \& Toomre \citeyearads{1981ApJ...249..349M}, \citeyearads{1982ApJ...263..386M}), but they are difficult to detect, being small-scale and propagating slantwise. The observational evidence is mostly indirect (\citeads{1968ApJ...152..557F}; \citeads{1976SoPh...47..435S}; \citeads{1978A&A....70..345C}; \citeads{1980ApJ...236L.169B}; \citeads{1981A&A....95..221D}; \citeads{1984MmSAI..55..147S}; \citeads{1987A&A...175..263S}; \citeads{1989A&A...213..423D}; \citeads{1991A&A...244..492B}; \citeads{1991A&A...252..827K}), with the clearest demonstration coming from wavenumber- and frequency-resolved \kf\ phase-difference spectra (\citeads{1993A&A...274..584K}; \citeads{1997A&A...324..704S}; \citeads{2001A&A...379.1052K}). Whether they affect upper-photosphere or low-chromosphere energy balances is not known. See also Section~\ref{sec:background} below.
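For orientation: the standard local dispersion relation for linear acoustic-gravity waves in an isothermal atmosphere (a textbook result; sound speed $c_{\rm s}$, acoustic cutoff frequency $\omega_{\rm ac}=c_{\rm s}/2H$ with $H$ the density scale height, Brunt--V\"ais\"al\"a frequency $N$) is
\[
  k_z^2 \;=\; \frac{\omega^2-\omega_{\rm ac}^2}{c_{\rm s}^2}
  \;-\; k_h^2\left(1-\frac{N^2}{\omega^2}\right).
\]
The internal-gravity branch at $\omega<N$ has $k_z^2>0$ only for sufficiently large horizontal wavenumber $k_h$: gravity waves necessarily travel slantwise, which is why they largely evade vertical-column diagnostics and show up most cleanly in \kf\ phase-difference spectra that resolve $k_h$.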
\paragraph{Linear versus shock behavior.} At least in the case of the three-minute oscillation, protests against large non-linearity (\citeads{1991mcch.conf....6D}) were silenced by the successful reproduction of so-called \CaII\ \mbox{H$_{2V}$}\ grain behavior in the celebrated simulations of Carlsson and Stein (\citeyearads{1994chdy.conf...47C}, \citeyearads{1996ASPC..109..119C}, \citeyearads{1997ApJ...481..500C}; \citeads{1997ASSL..225..261S}) of the spectral time sequences of \citetads{1993ApJ...414..345L}. The Carlsson-Stein reproduction of complex spectral \mbox{Ca\,\specchar{ii}\,\,H}\ core evolution patterns identified the three-minute oscillation beyond doubt with upward propagating acoustic shock trains, as proposed earlier by \citetads{1970SoPh...11..347A}, \citetads{1972SoPh...22..375C}, \citetads{1974SoPh...38..109L}, \citetads{1987A&A...177..283M}, \citetads{1982ApJ...258..393L}, Rutten \& Uitenbroek (\citeyearads{1991mcch.conf...48R}, \citeyearads{1991SoPh..134...15R}) and \citetads{1992A&A...253..586R}. However, the nature of the photospheric pistoning remains under debate (\citeads{1998SoPh..179..253N}; \citeads{1999ApJ...517.1013L}; \citeads{1999ApJ...523..450W}; \citeads{2000A&A...363..279S}; \citeads{2000ApJ...541..468S}; \citeads{2002A&A...390..681H}), as does the role or absence of a role in chromospheric heating (\citeads{1995ApJ...440L..29C}; \citeads{1997A&A...324..587T}; \citeads{1999ApJ...521L.141K}; \citeads{2001ApJ...557..376K}). See also Section~\ref{sec:background} below. \paragraph{Active region oscillations.} A direct active-region counterpart to the quiet-sun three-minute oscillation consists of the three-minute oscillations producing {\em umbral flashes\/} (\citeads{1969SoPh....7..351B}).
They show even more outspoken nonlinear character (\eg\ \citeads{1984ApJ...285..368T}; \citeads{1986ApJ...301.1005L}), and are most likely similar nearly-acoustic shock trains running up along the radial fields above umbrae as suggested by Lites (\citeyearads{1992sto..work..261L}, \citeyearads{1994chdy.conf....1L}). Some are observed to penetrate up into coronal loops in UV and EUV data (\eg\ Brynildsen \etal, \citeyearads{1999ApJ...511L.121B}, \citeyearads{2002SoPh..207..259B}; Maltby \etal, \citeyearads{1999SoPh..190..437M}, \citeyearads{2001A&A...373L...1M}; \citeads{1999SoPh..187..261S}) and in microwave observations (\citeads{1999SoPh..185..177G}; \citeads{2001ApJ...550.1113S}; \citeads{2002A&A...386..658N}). Similarly, {\em running penumbral waves\/} (\citeads{1972ApJ...178L..85Z}; \citeads{1992SoPh..138...93A}; \citeads{1997ApJ...478..814B}; Christopoulou \etal, \citeyearads{2000A&A...354..305C}, \citeyearads{2000A&A...363..306G}, \citeyearads{2001A&A...375..617C}, \citeyearads{2002ApJ...576..561G}) may harbor complex wave reflection and mode conversion as postulated by \citeads{2002AN....323..196B} from comparison to thinner fluxtubes, but recent high-resolution data from three solar telescopes on La Palma strongly suggest that running penumbral waves and umbral flashes are both upward-propagating acoustic shock trains, seen differently through differences in field alignment with the line of sight and with the apparent horizontal spreading primarily a mapping of the field geometry (\citeads{2003A&A...403..277R}). Oslo simulations such as those of \citetads{1997ApJ...481..500C} are needed for decisive diagnosis, including ``calibration'' of the polarimetric umbral-flash inversions of \citetads{2001ApJ...550.1102S} through less indirect forward modeling.
The existence and nature of oscillations in the sunspot magnetic field itself remain contested (\eg\ \citeads{1998ApJ...497..464L}; \citeads{1999SoPh..187..389B}; \citeads{2002AN....323..317S}; \citeads{2000SoPh..191...97K}), with \citetads{2002A&A...392.1095S} warning against instrumental crosstalk and initial results from infrared spectropolarimetry indicating that opacity variations are an alternative explanation (\cf\ \citeads{2002AN....323..254C}; \citeads{2003ApJ...588..606K}). \citetads{2002ApJ...576..561G} found indications from TRACE data that running penumbral waves even spread into sunspot moats. Chromospheric oscillations above plage have not received much attention in the literature, but Rita Ryutova's presentation of beautiful TRACE 171\,\AA\ time slices from Richard Shine shows interesting braiding with shorter braid period at larger magnetic filling factor (see her contribution in these proceedings). \paragraph{Standing versus propagation.} This issue is also debated in the three-minute oscillation literature (Deubner \etal, \citeyearads{1992A&A...266..560D}, \citeyearads{1996A&A...307..936D}; \citeads{1995A&A...302..277S}; \citeads{1996A&A...308..192H}; \citeads{1998IAUS..185..427D}), with simulations showing that at least some of the observed standing-wave behavior comes from reflections off the grain-forming shocks themselves and off canopies (\citeads{1999ASPC..184..206C}; \citeads{2002ApJ...564..508R}). Another standing versus propagation issue concerns the nature of the so-called pseudo-ridges above the cutoff frequency in \kf\ diagrams, also a global-versus-local issue. \citetads{1994ApJ...428..827K} described them as interference between directly emitted and once-bounced outgoing waves (that propagate up rather than being evanescent), using ``interference'' as a misleading mathematical term describing low-$l$ Fourier decomposition.
Physically there is no actual wave interference, but simply power addition in \kf\ diagrams at those \kf\ locations that sample one-bounce horizontal spatial wavelengths at the corresponding temporal frequency, just another expression of the single-bounce three-minute power ridge in time-distance plots (\citeads{1993Natur.362..430D}) and exhibiting the Duvall dispersion law (\citeads{1982Natur.300..242D}; Eq.~2.14 of \citeads{1984ARA&A..22..593D}). Below the cutoff frequency, the acoustic ridges describe evanescent $p$-modes, with small phase delays compared to the photosphere governed by non-adiabaticity (\citeads{2001A&A...379.1052K}). \paragraph{Short-period versus three-minute versus five-minute versus low-frequency modulation.} These terms are all often misnomers. By {\em short-period waves\/} one simply implies those that might after all heat the chromosphere, with Peter Ulmschneider and coworkers as tenacious champions. After giving up on five-minute heating when the $p$-modes were identified and on three-minute heating when the Carlsson--Stein simulation refuted a low-lying chromospheric temperature rise, his quest turned to higher-frequency components not present in the Carlsson-Stein piston (\eg\ \citeads{1995A&A...294..241S}; Theurer \etal, \citeyearads{1997A&A...324..587T}, \citeyearads{1997A&A...324..717T}; \citeads{1999ApJ...521L.141K}) with obvious stellar overtones (\citeads{1996A&A...315..212U}, \citeads{1999ApJ...522.1053C}), and more recently to adding longitudinal, transverse and torsional tube waves (Ulmschneider \etal, \citeyearads{2001A&A...374..662U}, \citeyearads{2001ApJ...559L.167U}; Musielak and Ulmschneider, \citeyearads{2002ApJ...573..418M}, \citeyearads{2002A&A...386..606M}, \citeyearads{2002A&A...386..615M}) as well as detailed theoretical prediction of the mix of acoustic and tube waves that should explain observed cool-star chromospheric photon losses with the magnetic filling factor as activity scaling parameter (Fawzy \etal,
\citeyearads{2002A&A...386..971F}, \citeyearads{2002A&A...386..983F}, \citeyearads{2002A&A...386..994F}). Heating by 10--100~mHz waves, outside and inside fluxtubes, is probable; the question is to what extent. Observationally, they are hard to see since they require fast cadence and high angular resolution and suffer response function loss by vertical wavelengths fitting in contribution functions (\eg\ \citeads{1980A&A....84...99S}; \citeads{1980A&A....91..251D}) and through large sensitivity to seeing noise (\eg\ \citeads{1983A&A...121..291E}; \citeads{1984MmSAI..55..135D}; \citeads{1994ssm..work..159L}). Indeed, attempts to detect high-frequency waves remain inconclusive (\eg\ \citeads{1976A&A....51..189D}; \citeads{1979ApJ...231..570L}; \citeads{1981A&A....97..310M}; \citeads{1990A&A...228..506D}; \citeads{2001A&A...379.1052K}). The chromospheric {\em three-minute\/} and {\em five-minute\/} oscillations are discussed above. The names imply broad-band frequency domains, not specific periodicities but just a shift in dominance from five-minute to three-minute periodicity when rising from the photosphere up to the chromosphere (in the internetwork at least). It is therefore also wrong to call them chromospheric and photospheric, respectively. The original Leighton-Simon-Noyes era diagram in Noyes (\citeyearads{1967IAUS...28..293N}), reprinted in Rutten (\citeyearads{1995ESASP.376a.151R}, \citeyearads{2001ASPC..223..117R}), clearly demonstrates the gradual change. Finally, {\em low-frequency oscillations\/} is a misnomer when the observed modulation is not oscillatory.
This may hold for the observed low-frequency Dopplershift power of chromospheric network, which may result from convective fluxtube buffeting (\cf\ Kneer and von Uexk{\"u}ll \citeyearads{1985A&A...144..443K}, \citeyearads{1986A&A...155..178K}; \citeads{1989A&A...208..290V}; \citeads{1993ApJ...414..345L}; \citeads{2000ApJ...535L..67H}), and also for low-frequency low-chromosphere internetwork oscillations if granular overshoot rather than internal gravity waves dominates the observed low-frequency power. \section{Low-frequency internetwork background modulation} \label{sec:background} I devote the remainder of this contribution to the topic on which I concentrated in my oral presentation at the meeting: the nature of the low-frequency background in low-chromosphere (``upper photosphere'' in the under-the-canopy definition) image sequences. I have long been puzzled by the mesh-like background pattern underlying \mbox{K$_{2V}$}\ grains in \CaIIK\ image sequences (\cf\ \citeads{1999ApJ...517.1013L}). The upshot is that I believe the answer to be a mixture of granular overshoot plus gravity-wave interference at slightly larger scales. I advocate this interpretation with selected results from TRACE data analyzed by J.M.~Krijger in his PhD thesis (\citeads{Krijger-thesis2002}). The diagrams in this section also illustrate various points made above\footnote{% Note added on December 5, 2010: part of this work was published in \citetads{2003A&A...407..735R} but not all diagrams and ideas given here survived the referee.}. The observation and reduction details are given in \citetads{2001A&A...379.1052K}. The displays below come from the very quiet-sun data taken on October 14, 1998, when TRACE registered image sequences in its 1700, 1600 and 1550\,\AA\ ultraviolet passbands and also in white light. The combination permits comparison of photospheric brightness patterns to the co-spatial and co-temporal ultraviolet ones.
Such comparisons used to be made with groundbased telescopes combining \CaIIK\ and continuum image registration (\eg\ Hoekzema \etal, \citeyearads{Hoekzema+Rutten+Brandt+Shine1998}, \citeyearads{Hoekzema+Rutten1998}, \citeyearads{Hoekzema+Rimmele+Rutten2002}) but TRACE furnishes better quality thanks to the absence of seeing in space\footnote{But the tide turns, now that speckle reconstruction, phase-diverse restoration and adaptive optics also do away with (most of) the seeing, permitting higher angular resolution than TRACE's 1~arcsec. Much sharper images now result \eg\ from our Dutch Open Telescope which is presently being equipped with multi-wavelength tomography capability (\url{http://dot.astro.uu.nl}).}. \paragraph{Space-time representations.} Figure~\ref{fig:imslices} defines the topic. It compares the white-light low-photosphere scene with the co-spatial and co-temporal 1700\,\AA\ high-photosphere scene in the upper panels. The lower panels compare evolutionary characteristics between the two regimes. The displays sample only very small parts of the full white-light and 1700\,\AA\ data cubes\footnote{These cutouts are small in order to obtain sufficient magnification. They represent a very limited rendering of the full data cubes. A better view of the dynamical ultraviolet scene is gained by inspecting the TRACE movies available at \url{http://www.astro.uu.nl/~rutten/trace1}.}. \begin{figure} \centerline{\includegraphics[width=\textwidth]{budapestf1.eps}} \caption[]{TRACE image and time-slice samples. First column: white-light intensity. Second column: co-temporal and co-spatial 1700\,\AA\ intensity. The images in the upper row are small cut-outs of the full 256$\times$256~arcsec$^2$ field shown in Fig.~\ref{fig:canopy}. The time of observation is halfway through the time slices in the lower row, as indicated by white markers.
The slices show the temporal intensity evolution during 30~min of observation, for a horizontal cut through an internetwork area indicated by the white markers in the upper panels. It passes through weak network at the right. In the first two columns the greyscale is linear for all four panels, but it has been clipped at half the actual maximum for the 1700\,\AA\ image to enhance internetwork. Third and fourth column: the same, but low-pass filtered (subsonic horizontal propagation only). Third column: white-light brightness on a sign-reversed greyscale. Fourth column: logarithm of the 1700\,\AA\ brightness temperature. } \label{fig:imslices} \end{figure} The difference between dark internetwork and bright network is obvious in the ultraviolet image (second column). The network grains stand out even though their brightness is cut in half by the display scaling. TRACE's resolution (1~arcsec) is insufficient to resolve the corresponding white-light ``network bright points'' (magnetic elements) residing within the underlying intergranular lanes. The white-light time slice in the lower-left panel displays primarily granular evolution, with some larger-scale five-minute $p$-mode modulation. The ultraviolet time slice shows the dynamical behavior characteristic of internetwork in the form of ubiquitous short-lived three-minute oscillation sequences. The brightest phases are called internetwork grains. The weak network grain near $X=-100$~arcsec stands out by its relative longevity. Note that the cut selection favours internetwork; stronger network grains produce much brighter vertical streaks. The two righthand columns show the same data after Fourier filtering and with modified greyscaling.
In the third column (white light) low-pass ``subsonic'' filtering, \ie\ applying a 3D Fourier ``cone'' filter to the transformed data cube which passes all signals with apparent horizontal speed below the sound speed, has removed the photospheric five-minute oscillation so that only the granular evolution patterns remain. In addition, the greyscale is sign-reversed to simulate ``reversed granulation'' with reversed contrast. The resulting mesh pattern in the reversed image illustrates the topological difference between granules and intergranular lanes. The corresponding time slice shows intergranular lanes as rather long-lived bright streaks. In the fourth column (1700\,\AA) the low-pass filtering has removed the chromospheric three-minute oscillation and therefore emphasizes the slower background evolution. The background streaks are relatively short and show larger horizontal displacements (tilts) than the intergranular streaks. At this resolution, there is no obvious correspondence between the reversed low-frequency granular evolution pattern in the third slice and the low-frequency internetwork background modulation pattern in the fourth slice. \paragraph{Time-delay scatter representations.} Figures~\ref{fig:scatter1}--\ref{fig:scatter2} are pixel-by-pixel dual-image correlation plots in an informative format initiated by \citetads{1994PhDT.......347S}. Each plot is measured from a $256\times256$~px$^2$ TRACE subfield after removal of Fourier taper edges, image conversion into brightness temperature, and $1.5\times1.5$~arcsec$^2$ boxcar smoothing to suppress noise. For each pixel in the subfield, \ie\ each solar location, the brightness temperature at one moment in the one type of image (say white light = WL) is taken as $x$ quantity plotted horizontally, its value at another (later) moment in the other type of image (say 1700\,\AA\ = UV) as $y$ quantity plotted vertically. The pair defines one point in a scatter plot.
Such pairwise comparisons are made for all pixels (solar locations) in the whole subfield or in some selective part of it, and repeated for 50 consecutive image pairs to gain high significance (millions of spatio-temporal samples). Plot saturation (total blackness) is avoided by plotting sample density contours instead of the individual pixel-by-pixel samples. \begin{figure} \centerline{\includegraphics[width=\textwidth]{budapestf2.eps}} \caption[]{ Strous-format scatter diagrams. The crowded central parts are plotted as sample density contours to avoid plot saturation. First row: time-delayed white-light brightness temperature in K WL$(t+\Delta t)$ against WL$(t)$. Second row: time-delayed 1700\,\AA\ brightness temperature UV$(t+\Delta t)$ against UV$(t)$. Third row: UV$(t+\Delta t)$ against WL$(t)$. The time delays $\Delta t$ are specified in each panel and increase from left to right. The solid curves in the first panels show the occurrence distributions of the quantities plotted along $x$ and $y$ on inverted normalized scales. The dashed curves show the first moments of the sample density along horizontal and vertical cuts through the contours. }\label{fig:scatter1} \end{figure} \begin{figure} \centerline{\includegraphics[width=\textwidth]{budapestf3.eps}} \caption[]{ Time-delay scatter diagrams as in Fig.~\ref{fig:scatter1} but for internetwork (IN) only. First row: time-delayed low-pass filtered 1700\,\AA\ brightness temperature UV$(t+\Delta t)$ against low-pass filtered white-light brightness temperature WL$(t)$. Second row: unfiltered UV against low-pass UV. Third row: low-pass UV against low-pass UV. Fourth row: high-pass UV against low-pass UV, with the mean UV internetwork value ($T_\rmb = 4418$~K) added to the high-pass UV modulation for $y$-axis compatibility. }\label{fig:scatter2} \end{figure} The WL--WL comparison in the first row of Fig.~\ref{fig:scatter1} illustrates the format. Brightness distribution curves are added in the first panel. 
They are virtually the same for the other panels and also along $x$ and $y$ in this auto-correlation sequence. For 100\% correlation all samples and both first-moment (center of gravity) curves lie along the forward diagonal. For 100\% anticorrelation they lie along the backward diagonal. In the absence of any correlation the first-moment curves become perpendicular, parallel to the axes, and the contours become circular if the distribution function is symmetric. From left to right the panels have increasing time delay between the sampling of each type of image per solar location. The initial panel for $\Delta t = 22$~s shows very high pattern correlation because the granulation has not changed much during this brief interval. The final panel for $\Delta t = 30$~min shows absence of pattern correlation in the form of non-aligned first-moment curves and a nearly round bull's-eye contour pattern. The sequence illustrates that granulation largely loses its pattern identity over ten minutes, completely in half an hour. The UV--UV autocorrelation sequence in the second row illustrates the two-component dichotomy between chromospheric network and internetwork. The bright upward distribution tail made up by network grains persists over long delays, whereas the darker internetwork (lower-left contour mountain) shows faster pattern change. The UV--WL cross-correlation sequence in the bottom row of Fig.~\ref{fig:scatter1} shows some persistent bright-bright correlation due to network, slight anticorrelation for low WL at $\Delta t = 3$~min, and subsequent lack of persistent correlation at low UV brightness. In Fig.~\ref{fig:scatter2} the ultraviolet signal is decomposed into constituents by using selective data subsets for similar scatter diagrams. All four rows are limited to internetwork (IN) areas only.
Fourier cone filtering as in Fig.~\ref{fig:imslices} is applied to show low-pass signals (apparent horizontal propagation below 6~\hbox{km$\;$s$^{-1}$}) or high-pass signals (above 7~\hbox{km$\;$s$^{-1}$}) only, selecting acoustic and non-acoustic modulation. The corresponding reduction in sampling statistics is offset by extending the pixel-by-pixel sampling to 120 consecutive image pairs per comparison (in all scatter plots the outer contour lies at 100~samples per bin with 25~bins per axis). The outer scatter clouds are now excluded to reduce the plot file size. The first row of Fig.~\ref{fig:scatter2} plots low-pass UV against low-pass WL. There is a slight but persistent anticorrelation over time delays $\Delta t = 2$--$6$~min. I attribute it to reversed granulation and expect it to become stronger in higher-resolution data that resolve granulation better than TRACE does. The slightness of the correlation shows that at the somewhat larger meso-scales imaged properly by TRACE, something else is present that does not obey point-to-point correlation or counter-correlation with the underlying photosphere at any time delay. The remaining three rows decompose the UV signal into high-frequency and low-frequency components, as contributed by the latter (low-pass UV as $x$-axis quantity). The second row plots total UV against low-pass UV in internetwork. The high initial correlation, with a roughly 2:1 slope, shows that peaks in internetwork UV brightness, \ie\ internetwork grains, occur preferentially when the slowly changing internetwork background is also bright. This strong correlation conforms to the finding of \citetads{1983ApJ...272..355C} that bright \CaII\ \mbox{H$_{2V}$}\ grains are invariably part of a larger-scale modulation pattern. The same happens in the ultraviolet continua.
Thus, the slowly evolving background pattern and the three-minute oscillation combine constructively to produce bright internetwork grains, each contributing about half of the excess brightness temperature. This makes the background a much more important grain co-localizer\footnote{The low-frequency background was not part of the Carlsson-Stein simulation. The corresponding Doppler shifts have less power than the brightness modulation and were not part of the Carlsson-Stein iron-blend piston because they were removed in spectrograph drift correction by \citetads{1993ApJ...414..345L}.} than \eg\ acoustic events (\cf\ \citeads{2002A&A...390..681H}). The next row shows a low-pass internetwork-only UV autocorrelation sequence. It illustrates the five-minute lifetime of the low-frequency pattern at a given spatial position. Horizontal propagation -- seen as tilts in the low-pass UV time slice in the final panel of Fig.~\ref{fig:imslices} when it has an $x$ component -- causes loss of scatter correlation when the travel distance exceeds the 1.5~arcsec boxcar smoothing. The final row correlates high-pass UV with low-pass UV in internetwork. The curved shape of the vertical moment curves indicates that large three-minute excursions, both to high and to low UV brightness temperature with respect to the mean after removal of all slow variations, tend partially toward correlation with large UV background amplitude. This implies that the three-minute oscillation partially has a modulatory character, gaining larger oscillation amplitude at larger background amplitude. The effect is not large. In addition, the contours have a slightly asymmetric shape, indicating non-linearity in wave behavior. When plotted as intensity, the scatter clouds stretch much further upwards due to the non-linear Planck function response in the ultraviolet. The conversion to brightness temperature circularizes the contours considerably.
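The pixel-by-pixel time-delay sampling behind Figs.~\ref{fig:scatter1}--\ref{fig:scatter2} can be sketched with synthetic data. This is a minimal illustration only: the array sizes, temperatures and AR(1) memory are hypothetical stand-ins, not the actual TRACE reduction.

```python
import numpy as np

# Sketch of time-delay scatter sampling (synthetic cube; hypothetical sizes).
# Frame-to-frame memory is imposed with an AR(1) recursion so that a
# finite-lag correlation exists, mimicking slowly evolving granulation.
rng = np.random.default_rng(1)
nt, ny, nx = 50, 40, 40
cube = 5700.0 + 100.0 * rng.standard_normal((nt, ny, nx))  # WL brightness temp (K)
for k in range(1, nt):
    cube[k] = 0.9 * cube[k - 1] + 0.1 * cube[k]

lag = 3                                    # frames of time delay
x = cube[:-lag].ravel()                    # WL(t) at every pixel
y = cube[lag:].ravel()                     # WL(t + delta t) at the same pixel
H, xe, ye = np.histogram2d(x, y, bins=25)  # sample density, ready for contouring

# first-moment ("center of gravity") curve along vertical cuts:
yc = 0.5 * (ye[:-1] + ye[1:])
col_mass = H.sum(axis=1)
moment = (H * yc).sum(axis=1) / np.maximum(col_mass, 1)
```

Plotting `H` as contours and overlaying `moment` against the $x$-bin centers reproduces the density-contour-plus-moment-curve format; increasing `lag` lets the cloud relax toward the round, uncorrelated bull's-eye shape.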
The conclusions from Figs.~\ref{fig:scatter1}--\ref{fig:scatter2} are, first, that the slowly-evolving internetwork ultraviolet background contributes about half of internetwork grain brightness excesses, in addition to the acoustic modulation modeled by Carlsson \& Stein, and, second, that at TRACE's meso-granular resolution this background seems to be something other than reversed granulation. \begin{figure} \centerline{\includegraphics[width=\textwidth]{budapestf4.eps}} \caption[]{ Two-dimensional Fourier spectra from TRACE. Lefthand graph: $\Delta \phi(1700\!-\!1600)$ intensity phase difference. Axes: horizontal wavenumber $k_h$ and temporal frequency $f$. The corresponding wavelengths and periodicities are specified along the top and righthand side. Greyscale: phase difference coding specified in the bar above the graph. The white curves along the sides are the temporal and spatial means, on linear scales in arbitrary units. The contours specify 1700\,--\,1600\,\AA\ coherence at values $C=$ 0.4, 0.6, 0.8 and 0.95, with the ticks directed to lower values. Righthand graph: similar $\Delta \phi({\rm WL}\!-\!1700)$ phase difference spectrum between white light intensity and 1700\,\AA\ intensity. The contours specify coherence levels $C=$ 0.1, 0.2 and 0.5. Figure produced by Thijs Krijger. } \label{fig:Fourier} \end{figure} \paragraph{Fourier representations.} Figure~\ref{fig:Fourier} compares the same TRACE data in terms of \kf\ Fourier phase differences, at left between 1700 and 1600\,\AA\ brightness, at right between white-light and 1700\,\AA\ brightness. The acoustic oscillations (ridges and pseudo-ridges) in a similar diagram from the May 12, 1998 TRACE data are discussed in Sect.~4.4 of \citetads{2001A&A...379.1052K}; the emphasis here lies on the low-frequency domain. Note that the October 14, 1998 TRACE data contain only very weak network (\cf\ Fig.~\ref{fig:canopy}) so that these \kf\ diagrams are dominated by internetwork behavior.
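The phase-difference measurement underlying such \kf\ diagrams can be illustrated for a single pixel with synthetic cosines; the actual diagrams average cross spectra over all spatial positions and plot phase and coherence against wavenumber and frequency. The cadence, frequency and lag below are assumed round numbers.

```python
import numpy as np

# Single-pixel sketch of the cross-spectral phase-difference measurement.
# A time lag tau between two channels appears as a phase difference
# 360 * f * tau (degrees) at the oscillation frequency f.
n, dt = 512, 1.0                       # samples and cadence (s), assumed values
t = np.arange(n) * dt
f0, tau = 25.0 / n, 2.0                # bin-exact frequency (Hz), time lag (s)
s1600 = np.cos(2.0 * np.pi * f0 * t)           # "1600 A" channel
s1700 = np.cos(2.0 * np.pi * f0 * (t - tau))   # "1700 A" channel, lagging
F1, F2 = np.fft.rfft(s1600), np.fft.rfft(s1700)
cross = F1 * np.conj(F2)               # cross spectrum
m = int(np.argmax(np.abs(cross)))      # dominant frequency bin
dphi = np.degrees(np.angle(cross[m]))  # phase difference there, in degrees
# the leading channel acquires positive phase: dphi = 360 * f0 * tau
```

Coherence is identically one for a single noise-free realization; it only becomes a meaningful statistic after averaging the cross spectra over many pixels, as done for the contours in Fig.~\ref{fig:Fourier}.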
The lefthand graph shows a prominent wedge-shaped signature of negative phase difference over a large $k_h$ range. It has high coherence between 1700 and 1600\,\AA\ modulation. The wedge location, its Lamb-line delimitation, and its negative values all suggest atmospheric gravity waves as the cause. The large coherence implies that these waves dominate the low-frequency internetwork background in the ultraviolet at meso-scale wavelengths. The righthand \kf\ diagram is very noisy but again shows a low-$f$ low-$k$ wedge of negative phase for $k_h < 2$~arcsec$^{-1}$, \ie\ at mesogranular scales. It reaches much larger $\Delta \phi$ values due to the much larger span in formation height and it has much smaller coherence, but it is qualitatively similar. At larger $k_h$ the low-$f$ phase difference flips from large negative to large positive values at about $k_h = 2.5$~arcsec$^{-1}$, attributed to wraparound in the $\Delta \phi = [-180,+180]$~deg evaluation. This makes the positive blob around $k_h=3$~arcsec$^{-1}$ (outlined by a $C=0.1$ coherence contour) a continuation of the negative dark wedge, presumably marking reversed granulation. The slight counter-correlation signature in the top row of Fig.~\ref{fig:scatter2} is therefore indeed caused at granular scales. This high-$k$ contribution is likely to become better defined at angular resolution higher than TRACE's. The mesoscale negative-phase blob in the righthand \kf\ diagram describes the other, non-correlated agent, which I attribute to internal gravity waves. \paragraph{Discussion.} In summary, it seems that the slowly-evolving background mesh pattern in the internetwork parts of chromospheric image sequences is made up of reversed granulation and internal gravity waves at slightly larger scales. Numerical simulation may take this interpretation beyond speculation.
Obviously, this should be feasible with realistic numerical simulations of the solar granulation such as those of \citetads{1998ApJ...499..914S} when equipped with sufficient atmosphere as in \citetads{2000ApJ...541..468S}. They should reproduce both granular overshoot and gravity-wave emission. The two phenomena are akin but not identical, since overshoot is a local phenomenon whereas gravity waves spread away from their source, interfere, and break at relatively large height (\cf\ \citeads{1981ApJ...249..349M}). \paragraph{Basal flux.} The demonstration in Fig.~\ref{fig:scatter2} that roughly half of the brightness excess in internetwork grains is contributed by the low-frequency variation applies likewise to \CaII\ \mbox{K$_{2V}$}\ grains and the accompanying \CaIIK\ line-core and inner-wing brightness modulation. Since the solar internetwork scene observed in \CaIIK\ is dominated by the superposition of these acoustic and low-frequency variations, the corollary is that part of the basal flux observed from non-active cool stars \citeads[][and references therein]{1995A&ARv...6..181S} is attributable to a similar combination of acoustic and low-frequency phenomena including gravity waves -- not just acoustics alone. \paragraph{Internetwork fields.} As noted in Section~\ref{sec:overview}, the debate on the pistoning of internetwork grains continues. It includes pro and contra claims on internetwork fields as grain localizer.
Advocates pro may argue that the 50\% grain co-localization by the low-frequency chromospheric background pattern seen in Fig.~\ref{fig:scatter2} results from weak fields that are swept into intergranular lanes without reaching the magnetic flux of network elements, as suggested observationally by \citetads{1999ApJ...514..448L} and theoretically by the magnetoconvection simulations of \citetads{2001ApJ...560L.197E}. This argument assumes that such small concentrations share the network habits of being bright in the chromosphere and of displaying low-frequency variations there. I rather doubt this alternative explanation because of the ubiquity and regularity of the background emission, the low counter-correlation to intergranular lanes at the spatial ``meso''-scales producing the negative-phase blob in Fig.~\ref{fig:Fourier}, and the issue of how intrinsically weak fields would influence the thermodynamics of the low chromosphere. The inverse-amplitude mapping in Fig.~\ref{fig:canopy} discussed below rather suggests the reverse to me, namely that larger low-frequency modulation amplitude implies weaker magnetism. Nevertheless, internetwork magnetism cannot be rejected as a chromospheric brightness producer. I wonder whether it may play a role in the generation of small-scale reversed granulation, in the form of kilogauss fluxtubes that are too thin to be magnetographically observable so far. One reason for such wondering is the wide extent of relatively bright areas surrounding chromospheric network \citeads[the ``intermediate'' pixel class of][]{2001A&A...379.1052K}, suggesting searches for relations between the occurrence of reversed granulation and magnetic flux density in high-resolution large-sensitivity magnetograms. \begin{figure} \centerline{\includegraphics[width=\textwidth]{budapestf5.eps}} \caption[]{ Temporally-averaged 1700\,\AA\ brightness on an inverted greyscale. The white lines mark division into smaller quadrants for processing.
The yet smaller subfield of Fig.~\ref{fig:imslices} is near the lower-left corner. The horizontal striping near the right edge results from solar rotation correction. } \label{fig:canopy} \end{figure} \paragraph{Inverse canopy mapping.} Finally, Fig.~\ref{fig:canopy} serves to illustrate a speculation that brings me back to the first item in my review above: the definition of the canopy as the lower chromosphere boundary. It shows a one-hour temporal average of the 1700\,\AA\ brightness over the full TRACE field, with inverted (1/value) greyscale. The inversion de-emphasizes network grains by making them dark. The hour-long averaging reduces the three-minute component; remaining enhancements from internetwork grain nonlinearity (non-sinusoidal spikes) also become dark. Thus, the brightest features in this display mark locations where the low-frequency background modulation reaches its deepest minima without contamination by other phenomena. Figure~\ref{fig:canopy} shows a grainy pattern. The brightest grains mark a preponderance of extreme low-frequency minima. In reversed-granulation terms, these should correspond to long-lived or repetitive bright granules surviving a full hour of temporal averaging. Their distribution over the many network cells covered by Fig.~\ref{fig:canopy} clearly shows a preference for the internetwork cell centers, with striking avoidance of the cell boundaries. My speculation is that such preferential occurrence of low-frequency extrema marks locations where the magnetic canopy reaches greater height than elsewhere. In the gravity-wave interpretation, amplitude reduction may occur where the field becomes strong enough to convert gravity waves into other modes or to reflect them. The areas containing the brightest inverse grains in Fig.~\ref{fig:canopy} would then have the highest canopy, giving the waves more room to increase their amplitude -- I suspect up to wave-breaking height.
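The inverse mapping itself is a one-line operation; a minimal synthetic sketch (hypothetical cube dimensions and temperatures, not the actual TRACE processing) makes explicit that the brightest pixels of the inverted display are exactly the deepest minima of the temporal average:

```python
import numpy as np

# Minimal sketch of the inverse mapping of the canopy figure: average an
# image sequence over the full hour, then invert the greyscale so that the
# deepest low-frequency minima become the brightest features.
rng = np.random.default_rng(0)
nt, ny, nx = 120, 64, 64   # assumed: 120 frames, 64x64 pixels
cube = 4400.0 + 60.0 * rng.standard_normal((nt, ny, nx))  # 1700 A T_b (K)
avg = cube.mean(axis=0)    # hour-long average suppresses the 3-min component
inv = 1.0 / avg            # inverted (1/value) greyscale
```

Since `1/avg` is monotonically decreasing in `avg`, the maximum of `inv` coincides with the minimum of `avg`, which is the property the display exploits.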
A reverse speculation by our meeting's co-director R.~Erdelyi is that magnetic resonant absorption enhances specific wave modes at the canopy height and that large apparent amplitude may mark such resonance (\cf\ \citeads{1997A&A...326.1241C}; \citeads{2001A&A...372L..17P}). \section{Conclusion} \label{sec:conclusion} Both the review in Section~\ref{sec:overview} and the ultraviolet background analysis in Section~\ref{sec:background} are observationally oriented. However, the major advance in chromospheric wave research in the past years is, in my view, the advent of realistic detailed simulations, in particular radiation (magneto-)\,hydrodynamic simulations casting Scandinavians and Robert F. Stein as coauthors. Of course, I allude to the \citetads{1997ApJ...481..500C} acoustic propagation simulation, the \citetads{2000ApJ...541..468S} acoustic excitation simulation, and the \citetads{2002ApJ...564..508R} shaken-fluxtube and canopy-interaction MHD simulation. An important strength of these simulations is that they are unusually close to observations, firstly in aiming to reproduce particular observed phenomena rather than prove grand presupposed concepts, secondly in the use of elaborate observation-like diagnostics to analyse simulation physics rather than just showing overall results. The issues discussed here suggest the following simulation to-do items: \leftmargini=5ex \begin{itemize} \vspace*{-1ex} \itemsep=0ex \item shaken-fluxtube \CaII\ \mbox{H\,\&\,K}\ emission; \item cool-star Wilson-Bappu relation; \item umbral flashes and running penumbral waves; \item granular overshoot and gravity wave excitation; \item gravity wave interaction with fluxtubes and canopies; \item cool-star basal flux evaluation.\\ \end{itemize} {\small \noindent {\bf Acknowledgements.} I thank the organizers for inviting me to a very good workshop and for leniency in their editorial deadline coercion. 
I am indebted to Thijs Krijger for four years of pleasant collaboration towards his PhD, for reducing and Fourier-analyzing the TRACE data, and for producing Fig.~\ref{fig:Fourier}. I am also indebted to T.D.~Tarbell for programming and scheduling TRACE, and to T.J.~Bogdan, B.~Fleck, S.S.~Hasan, O.V.~Khomenko, R.I.~Kostik, N.G.~Shchukina, G.~Severino, R.F.~Stein and Th.~Straus for inspiring discussions, some held within the collaborative framework of and funded by NATO grant PST.CLG.97501, INTAS grant 00-00084, and the EC--TMR European Solar Magnetometry Network.} {\small \bibliographystyle{aa} \bibsep=0ex \bibhang=3ex \section*{References} \bibliography{rjrfiles,adsfiles}}
\section{Author Information} \textbf{Corresponding Author}\\ *E-mail: sunil@ece.cornell.edu\\ \textbf{Notes}\\ The authors declare no competing financial interests. \acknowledgement The authors would like to acknowledge DARPA/MTO's ORCHID program for research support.
\section{Introduction} Seismology is the classic method to investigate the deep interior of the Earth, as well as its dynamic behaviour. Seismic observations confirmed the existence of Earth's core \parencite{oldham_constitution_1906}, gave the first indication of a mineralogically distinct mantle \parencite{mohorovicic_beben_1909}, as well as of layering within the mantle and core \parencite{dahm_new_1934, lehmann_p_1936, bullen_seismology_1956}. For a more detailed overview, see chapter 3 of this book \parencite{knapmeyer_planetary_2022}. Seismometers were therefore among the first instruments installed on the surface of the Moon by the Apollo astronauts in 1969 \parencite{latham_moonquakes_1971, toksoz_lunar_1972} and were part of the Viking instrument suite on Mars in 1976 \parencite{anderson_seismology_1977}. For a variety of reasons, among them the apparent failure of the Viking seismic experiment \parencite{lazarewicz_viking_1981}, the focus on human spaceflight in the 1980s, the absence of landers until 1995, and a focus on Martian geochemistry in the 2000s and 2010s, no seismic measurements were made on a planet for the four decades that followed. On 14 November 2014, the Philae lander on comet 67P/Churyumov--Gerasimenko measured, for the first time since Apollo, elastic waves on a celestial object, excited by its sampling mechanism. On 5 May 2018, an Atlas V rocket finally launched the NASA InSight mission towards Mars, where it landed on 26 November 2018 and installed a seismometer on the surface in the weeks thereafter. This mission has since repeated many of the successes of a century of seismology within a good three years \parencite{banerdt_initial_2020, giardini_seismicity_2020, lognonne_constraints_2020, knapmeyer-endrun_thickness_2021, khan_imaging_2021, stahler_seismic_2021-1, hobiger_shallow_2021}, constraining the Martian interior structure from the near surface to the core.
The successful execution of the InSight mission has renewed interest in seismic measurements as a natural part of landed missions to other planets. This article reviews possible scientific goals of seismic measurements on the major bodies of the solar system. It follows \textcite{metzger_moons_2022} in defining a ``planet'' as an object of significant geological complexity. Large moons, i.e.\ objects of a few hundred kilometers radius with tectonic processes shaping their surfaces and a complex interior thermal budget, are still planets, even if they happen to orbit another planet instead of the sun. For each object, we also summarize what little is known about its seismic sources and which missions are possible given current technical limitations, as well as the programmatic landscape of the Voyage 2050 program of the European Space Agency (ESA) \parencite{tacconi_esa_2021} and the Decadal Survey for Planetary Sciences and Astrobiology 2023--2033 by the National Academies of Sciences of the USA \parencite{decadal_2023}. The Voyage 2050 program and the Decadal Survey are officially only recommendations to the individual agencies (ESA and NASA, respectively); however, due to the participation of members of the global scientific community, both represent a true international effort and are therefore likely to shape the programmatic efforts of all space agencies. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/Seismogram_Comparison_v4.pdf} \caption{Seismograms recorded on four celestial bodies: Earth, the Moon, Mars and comet 67P.} \label{fig:seismograms} \end{figure} The article will hopefully serve as an introduction to the future of extraterrestrial seismology for seismologists, but even more it should highlight to planetary scientists in general which gaps in our understanding seismological data can fill. It is written to be readable by an interested reader with a general geological background.
\section{Mercury} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/mercury.jpg} \caption{Mercury limb as seen by MESSENGER. The strong cratering of the surface is immediately visible and suggests that the megaregolith and its strong crustal scattering will be about as problematic for any seismometer mission as it was on the Moon. The interior of Mercury with its large core might be much more interesting, though. Image: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington PIA 17280} \label{fig:my_label} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Mercury.jpg} \caption{Global seismic wavefield stack for Mercury, using the interior model of \textcite{rivoldini_interior_2009}. This stack shows surface acceleration in all 3 directions: blue for vertical, red for radial (horizontal, in the direction of the source) and red for transversal (horizontal, orthogonal to the source). The plot is a synthetic version of the global seismogram stack of \textcite{astiz_global_1996} and computed using AxiSEM and Instaseis \parencite{nissenmeyer_axisem_2014, van_driel_instaseis_2015}. Due to the large core size, the first core-reflected shear wave (ScS) arrives after only about 200 seconds and is followed by regularly spaced multiples. A seismometer on a lander could use these phases to determine the core radius. Note that these numerical simulations do not reproduce scattering, so real waveforms could be much less clear.} \label{fig:gs_mercury} \end{figure} \subsection{Potential scientific goals} Mercury is widely understood as a planet that was stripped of much of its mantle early in its history \parencite{chau_forming_2018}. The core radius is 85\% of the planet's radius, with a mantle of only 400~km thickness above it \parencite{spohn_interior_2001}.
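The $\sim$200~s ScS arrival quoted in Fig.~\ref{fig:gs_mercury} follows from simple vertical-incidence ray arithmetic. The numbers below are round illustrative values (the mantle shear velocity in particular is an assumption, not taken from the cited interior model):

```python
# Rough vertical-incidence check of the ~200 s ScS arrival for Mercury
# (round-number values assumed for illustration only):
mantle_thickness_km = 400.0  # thin Mercurian mantle above the large core
vs_km_s = 4.5                # assumed mean mantle shear-wave speed
t_scs_s = 2.0 * mantle_thickness_km / vs_km_s  # two-way vertical S time
# successive multiples (ScS2, ScS3, ...) follow at the same spacing
```

With these values the two-way time is about 180~s, consistent with the $\sim$200~s arrival seen in the stack; the regular multiple spacing is what would let a single lander constrain the core radius.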
While the density is well-estimated from geodetic measurements \parencite{rivoldini_interior_2009}, the distribution of mass between mantle and core is not. As shown for Mars \parencite{stahler_seismic_2021-1, khan_geophysical_2018}, probing the mass and radius of a planet's layers very precisely can lead to strong constraints on the composition of the planet. \subsection{Seismicity} The level of background seismicity on Mercury is unknown. Compared to Mars and Earth, recent volcanism plays a minor role in shaping the surface of the planet \parencite[see][for an extended comparison]{byrne_comparison_2020}. Painted with very broad strokes, seismicity on Earth and Mars is connected to volcanism: on Earth, the mid-ocean ridges are expressions of extension driven at least partially by volcanism. Subduction zones, on the other hand, harbour the largest earthquakes known to date, at the thrust front, while the subduction process itself produces back-arc volcanism. Only transform faults do not produce co-located volcanism on their own, even though many strike-slip faults are located at the end of convergent faults. On Mars, the strongest clusters of seismicity are found connected to recent volcanism. Dike intrusion is weakening the crust in Western Elysium Planitia, leading to a large number of marsquakes observed in Cerberus Fossae \parencite{giardini_seismicity_2020, clinton_marsquake_2021}. Given the absence of plate tectonics or recent volcanism on Mercury, it is not clear what the driver of seismicity is. Wrinkle ridges, i.e.\ buried thrust faults, are distributed over the whole planet and are interpreted as the result of global crustal contraction due to secular cooling \parencite{byrne_mercurys_2014}. While the cumulative amount of deformation from this process can be estimated easily, it is not known whether the process is still ongoing or whether it has stalled.
On Earth's Moon, seismicity is triggered by tidal deformation, although it is disputed whether the tidal stresses directly cause the quakes or whether they just temporarily weaken the normal stress on the fault so that rupture becomes possible. However, as shown by \textcite{hurford_seismicity_2020} (for a summary, see Table~\ref{tab:tidal}), the tidal energy deposited in Mercury is significantly less than that of the Moon. On the other hand, the temperature difference of more than 600~K between day and night is likely to drive transient thermally-induced seismicity along the terminator. \subsection{Mission perspectives} Mercury places tight constraints on any landed mission. Its location deep in the gravity well of the sun means that a lander mission will require at least one flyby at Venus or Earth to reduce its $\Delta V$ to an acceptable level. The Ames trajectory database \parencite{foster_mission_2010} lists no trajectories with a single Venus flyby and $\Delta V < 20$~km/s until 2040. Therefore, any mission would involve rather complex and long-duration trajectories. The ESA BepiColombo mission needs a total of nine flybys (one at Earth, two at Venus and six at Mercury itself) between 2020 and 2025 to enter orbit. After landing, a spacecraft would be subject to two separate realms of operations. Due to the 3:2 spin--orbit resonance of the planet, one solar day takes 176 Earth days. Therefore, temperatures vary extremely over time across the surface, between 100~K and 700~K in equatorial regions. Due to the low inclination of the orbit, temperatures at the poles, where the sun never rises more than 2 arcminutes above the horizon, are more stable, below 150~K. Since the low temperatures of the night can be accommodated relatively easily by electric heaters, most proposed missions target landing after sunset and limit the mission duration to a single night.
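The 176-day solar day follows directly from combining the sidereal rotation and the orbital motion; a quick check with standard values:

```python
# Length of Mercury's solar day from the 3:2 spin-orbit resonance: the
# solar day combines the sidereal rotation and the orbital motion.
rotation_period_d = 58.646  # sidereal rotation period (Earth days)
orbital_period_d = 87.969   # orbital period (Earth days)
solar_day_d = 1.0 / (1.0 / rotation_period_d - 1.0 / orbital_period_d)
# in an exact 3:2 resonance this equals two orbital periods, ~176 days
```

The result, about 176 Earth days, equals two Mercury years: a lander surviving one full night has therefore also spent a full Mercury year on the surface.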
\textcite{ernst_mercury_2021} present a mission concept to study the geology around an equatorial landing site over the course of a single Mercury night (88 Earth days) in 2045. The mission study includes an accelerometer for seismic studies. The mission is proposed as a New Frontiers class mission and is currently the only serious lander concept for the innermost planet. The decadal survey \parencite{decadal_2023} questioned whether this mission can be done within a New Frontiers budget and estimated a total cost of 2.8 billion USD, i.e.\ a flagship mission. Because of the narrower scientific scope of the mission as proposed compared to an ice giant system mission, it was ranked behind the Uranus Orbiter and the Enceladus Orbilander. \section{Venus} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Earth.jpg} \caption{Global seismic wavefield stack for Earth, using the interior model of \textcite{kennett_constraints_1995}. Venus may show similar seismic phases, given its comparable radius and density.} \label{fig:gs_earth} \end{figure} \subsection{Potential scientific goals} Venus is comparable in size and density to Earth, yet little is known about its surface tectonics compared to Mars or even Mercury, due to the thick cloud cover. The surface has undergone significant reworking and its oldest parts, the so-called tesserae, are likely younger than 600 million years. Available radar images from Magellan and the Arecibo radio telescope suggest that the surface is undergoing strong deformation to this day (SOURCE). In the next decade, the Envision \parencite{ghail_science_2020} and Veritas \parencite{smrekar_veritas_2020} missions planned by ESA and NASA, respectively, will increase the resolution of radar images significantly and help to constrain the dominant mechanisms of surface tectonics based on geomorphology.
Plate tectonics in the terrestrial sense of the word is unlikely to exist on Venus, since the high temperatures inhibit a strong lithosphere. Locations of venusquakes could however constrain whether active faults cluster, as they do on Earth and Mars, or whether deformation is widespread, as on the Moon. \subsection{Seismicity} The seismicity of Venus is unknown. The relatively low crater density suggests an average surface age of 250--750 Myr, which can be explained either by a steady state of crater formation and removal by volcanic or tectonic processes \parencite{phillips_is_1992} or by catastrophic short-term subduction or overturn of the whole lithosphere at regular intervals \parencite{strom_global_1994}. Both scenarios suggest high, though different, strain rates and therefore tectonic seismicity. However, given the high surface temperature, it is actually not clear how much of this deformation occurs in a brittle regime and is thus capable of producing venusquakes. The rate of volcanic eruption is likely higher than on Earth, with a recent study based on scaling the terrestrial rate to Venus estimating 120 discrete eruptions per (terrestrial) year \parencite{byrne_estimates_2022}, which could be observed either seismically or from infrasound (see below). \subsection{Mission perspectives} Venus' orbit is relatively easy to reach for spacecraft from Earth. Every 18 months, a launch window of 3 months opens, with flight durations between 80 and 120 days. Landing and operating on the surface is of course an entirely different story. At this time, there are no seismic instruments at a high technological readiness level that would be able to operate over extended periods under Venus conditions. The Soviet Venera landers relied on significant overdesign in terms of mass, which allowed survival for a few hours by delaying the warming of the core electronics.
The currently most promising candidate is silicon carbide (SiC) electronics, which are able to operate at temperatures of 800~K, at least theoretically. The main applications for SiC at this stage are high-voltage, low-loss systems, specifically power circuits, as well as simple amplification systems for sensors in high-temperature environments \parencite{zetterling_integrated_2015}. The complexity of the electronic systems that can be manufactured in SiC is far lower than for Si systems, and only basic integrated circuits have become available recently. A decisive problem is that the business case for high-temperature SiC electronics is weak on Earth, so the development of a Venus lander would not be able to profit from commercial innovation cycles as much as is the case for Si-based electronics. Specifically, industry development focuses on single components that cannot be moved to a cooler part of a system (such as sensors or pre-amplifiers), while a Venus lander would need to operate virtually all of its electronics at temperatures $>700$~K. Even with SiC electronics, a lander interior would have to be cooled actively. Solar irradiation is 3--5 W/m$^2$, so that a mission would have to rely on a radioisotope thermoelectric generator, which would operate at low efficiency given the high outside temperature. A European concept for a long-lived lander with a seismic package based on microelectronics was presented by \textcite{wilson_venus_2016}, with a lifetime of 100 days, banking on further progress in SiC electronics over the next decade. An alternative approach is to operate from the upper atmosphere, where temperatures are stable and moderate, around 290~K. The lifetime of such a mission would be limited primarily by the escape of helium from the carrying balloon, to the order of 120 (Earth) days, but at much lower technical complexity compared to a surface mission. A balloon mission equipped with infrasound sensors could detect air-coupled Rayleigh waves.
Terrestrial analog concepts are currently studied on two scales: A JPL-led team studies the lower range of signal amplitudes that can be observed, using earthquakes of magnitudes 3-4 in California \parencite{brissaud_first_2021}, as well as induced seismicity in Oklahoma. A French group centered on ISAE-SUPAERO uses free-floating, long-lived meteorological balloons on equatorial trajectories to observe the global signature of large earthquakes above magnitude 7, as well as volcanic explosions \parencite{podglajen_stratospheric_2022}. In both cases, signal detection was demonstrated for a variety of events. The obvious drawback of the method is that the polarization of the actual seismic wave is not accessible directly. \textcite{garcia_active_2021} demonstrated two possibilities to overcome this limit: A string of infrasound sensors on the balloon tether could determine the incidence angle of the infrasound wave, as a proxy for the distance to the event's hypocenter. A second option is the inclusion of accelerometers into the balloon, which would record the acceleration of the balloon due to the arrival of the pressure wave. Similar techniques are used in marine seismic surveys, usually termed multisensor streamers \parencite{robertsson_use_2008}. The advantage is that the sensor effectively measures the pressure gradient of the infrasound wave, which allows recording the signal at higher frequencies (since, for a propagating wave, observing the spatial gradient is equivalent to recording the time derivative of the signal). The final application will likely see a combination of multiple infrasound sensors and accelerometers, from which the full pressure gradient field is reconstructed and transferred to Earth. A remaining issue is that the coupling into the air is most effective for Rayleigh waves, preferably at frequencies above 0.1~Hz.
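The equivalence of gradient and time-derivative sensing invoked above can be made explicit for a plane wave (a standard textbook simplification; $c$ denotes the infrasound propagation speed):

```latex
% For a pressure plane wave travelling in the x-direction,
%   p(x, t) = f(t - x/c),
% the chain rule gives
\begin{equation*}
  \frac{\partial p}{\partial x}
  = -\frac{1}{c}\, f'\!\left(t - \frac{x}{c}\right)
  = -\frac{1}{c}\, \frac{\partial p}{\partial t},
\end{equation*}
% i.e. a gradient sensor records the time derivative of the pressure
% signal, scaled by 1/c. In the frequency domain this multiplies the
% spectrum by a factor proportional to frequency, favouring high frequencies.
```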
As the InSight example showed, surface waves are not regularly observed if hypocenters are not very shallow and event magnitudes are below $M_W=4$. A curious question is where Venus stands in terms of seismic scattering. The dense atmosphere reduces the meteorite impact rate and thus the impact gardening that produces the lunar regolith. The high surface temperature and availability of volatiles likely allow for healing of cracks and thus increase the mean free path length of seismic waves. In terms of seismic waves, Venus might therefore be the most ``transparent'' planet in the solar system, even compared to Earth. \section{Moon} \subsection{Potential scientific goals} The Moon hosted the first extraterrestrial seismic network between 1970 and 1979, when it was switched off due to lack of further interest in the seismic community \parencite{lognonne_planetary_2007}. The Apollo seismic network helped explore the Moon in a number of ways, from the shallow subsurface \parencite{sollberger_shallow_2016} to the crust \parencite{khan_new_2000} and the core \parencite{weber_seismic_2011}. See \textcite{khan_lunar_2013, garcia_lunar_2019} for an overview of the knowledge on the lunar interior gained from Apollo. The network further observed significant numbers of tectonic quakes and impacts \parencite{kawamura_evaluation_2017}. Since a network was used, a tentative identification of shallow tectonic quakes with surface faults was possible in a few cases, allowing an estimate of the energy budget of the Moon due to global contraction \parencite{watters_shallow_2019}. Seismometers have been part of all proposals for future network science on the Moon, specifically the Lunar Geophysical Network \parencite{weber_scientific_2021}. One scientific goal would be to explore the lunar crust in more detail. Since the Apollo age, it has been known that significant parts of it are KREEP terrains, understood to be mantle material from an overturn in an early lunar magma ocean.
These terrains can be distinguished spectrally from orbit, but should also produce a significant imprint in crustal thickness. The crustal density models of the Moon are actually those of highest resolution in the whole solar system, thanks to data from the GRAIL mission \parencite{wieczorek_crust_2013}, but they are non-unique with respect to the average thickness of the crust, as well as the inner-crustal layering. Receiver function analysis to detect layering in the crust was limited by the infamously strong scattering in lunar seismograms, but also by the low performance of the Apollo horizontal-component seismometers. A future seismic network on the Moon would consist of state-of-the-art three-component seismometers, to better detect shear waves and converted phases. Central nodes could be equipped with very high fidelity sensors \parencite{kawamura_autonomous_2022}, based on optical readout or superconducting gravimeters, to directly observe normal modes. These would have resolution on the structure around the chemical layer above the core-mantle boundary and a potential inner core. Both features were seen in the analysis of \textcite{weber_seismic_2011} and are of high importance for formation models not only of the Moon, but also of Earth, since the giant impact model derives the Moon from the Earth and a Mars-sized impactor. \subsection{Seismicity} The lunar seismicity is famously divided into two families: the shallow and the deep moonquakes. While the shallow moonquakes happen independently of one another (their distribution in time is a Poisson process), the deep moonquakes follow a tidal cycle. Their seismogram signals are so similar that several clusters can be identified, in which moonquakes of similar focal mechanism repeat. This waveform similarity has been used to identify new moonquakes in the noise by template matching \parencite{bulow_new_2005}. In general, all moonquakes show a very high corner frequency compared to their moment magnitude \parencite{oberst_unusually_1987}.
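The template-matching approach of \textcite{bulow_new_2005} can be illustrated with a minimal sliding normalized cross-correlation; the synthetic waveform and all parameters below are invented for illustration and do not represent actual Apollo data:

```python
import numpy as np

def match_template(trace, template):
    """Sliding normalized cross-correlation of a known waveform (template)
    against a longer continuous trace; returns one coefficient per lag."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        w = trace[i:i + n]
        cc[i] = np.dot(t, (w - w.mean()) / w.std()) / n
    return cc

# Synthetic example: a decaying wavelet (stand-in for a repeating
# deep-moonquake waveform) buried in noise.
rng = np.random.default_rng(1)
s = np.arange(200)
template = np.exp(-s / 40.0) * np.sin(2 * np.pi * s / 15.0)
trace = 0.05 * rng.standard_normal(2000)
trace[800:1000] += template          # hidden repeat at sample 800

cc = match_template(trace, template)
onset = int(np.argmax(cc))           # detection: lag of peak correlation
```

Events from the same source cluster correlate near 1 with a stacked template, so even signals below the visual noise floor can be detected, which is how the Apollo catalog was extended.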
The peaked response curve of the Apollo seismometers makes the estimation of the absolute event magnitude challenging \parencite{kawamura_evaluation_2017}, but most estimates agree that the lunar seismicity is about four orders of magnitude below the terrestrial one \parencite{knapmeyer_working_2006}. The seismicity of the Moon is relatively well known after nearly eight years of registrations by the Apollo stations. New mission concepts even rely on the recognition and re-use of source clusters that were identified in the Apollo data. The Apollo data is, however, of limited value concerning the far side of the Moon, where it is unknown whether it is entirely aseismic, or whether the core and the partial melt layer on top of the core-mantle boundary absorb all seismic waves from far-side sources. Specifically the polar regions and the far side might harbour additional regional tectonic activity that has been impossible to observe so far. \subsection{Mission perspectives} The good accessibility of the Moon, with launch windows almost every month, means that mission complexity is significantly reduced. The NASA CLPS program tries to leverage this by buying delivery of scientific instruments to the lunar surface from commercial companies without prescribing a mission architecture. It is therefore likely that very different designs of seismic stations and networks will be deployed over the next decade, starting with the Farside Seismic Suite (FSS), which is to land near Schr\"odinger crater in 2025 \parencite{panning_farside_2021}. Among the main challenges are the high amount of radiation (essentially equivalent to interplanetary space, with the chance of solar extremes), the high temperature difference between day and night, and the 14-day-long cold night. The latter two require significant battery capacity to ensure heating and therefore survival of a lander through the night. The Apollo 11 EASEP station is an example of instrumentation that did not survive the lunar night.
A lunar geophysical network of multiple nodes, each equipped with a seismometer \parencite{weber_scientific_2021}, has been mentioned as a New Frontiers class mission in the Decadal Surveys of 2010 and 2022. The currently planned human landings of the Artemis program would allow significant amounts of scientific payload to be brought along, specifically if the lander is a SpaceX ``Starship''. However, at the time of writing, the specifics of the Artemis program, the exact landing concept, and therefore the scientific program are not yet defined. \section{Mars} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/Mars_CF_Hope.jpg} \caption{Image of South Western Elysium Planitia on Mars, taken on March 15, 2021, by the digital exploration camera (EXI) of the UAE Hope probe. The horizontal lines in the lower part of the image are the grabens of Cerberus Fossae, the source of most marsquakes recorded by the InSight mission so far \parencite{giardini_seismicity_2020, perrin_geometry_2022, zenhausern_lowfrequency_2022}. } \label{fig:mars_hope} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Mars.jpg} \caption{Global seismic wavefield stack for Mars, using the interior model InSight\_KKS21\_GP of \textcite{stahler_seismic_2021-1}} \label{fig:gs_mars} \end{figure} \subsection{Potential scientific goals} Apart from the Earth and the Moon, Mars is the only planet on which a seismometer has been operated for an extended period of time. The InSight mission \parencite{banerdt_initial_2020} has shown that Mars is tectonically active, with a seismicity above that of the Moon, similar to quiet intraplate regions on Earth. Post-InSight seismic investigations could focus on one of three aspects: \begin{itemize} \item InSight's ability to observe long-period signals (below 50 mHz) was limited by the surface installation on loose sand.
This prohibited observation of tidal deformation at the period of the Phobos orbit, which would have added a strong constraint on the rheology of the mantle \parencite{van_hoolst_tidally_2003-1, lognonne_seis_2019}. A future long-period seismic observatory would therefore have to be installed at least on bedrock to improve coupling to the ground at long periods. This installation would require either guided landing on exposed bedrock or a rover to deploy the seismometer on suitable ground at some distance from the lander. A second noise source was the tether, the rigid connector from SEIS to the lander, which contained 80 analog channels for scientific and housekeeping data \parencite{zweifel_seismic_2021-1, hurst_resonances_2021, scholz_detection_2020}. A future mission would benefit from a much thinner tether, possibly only for power transmission, while data is transmitted wirelessly or in digitized serial form on only two wires. The long-period background noise could be further reduced by burying the whole sensor assembly. This, however, would require a careful selection of a site where burial down to significantly compacted ground is possible. \item While first layered models of the crust and mantle, including the radius of the core, were obtained by InSight using a single station only \parencite{duran_seismology_2022, khan_imaging_2021, knapmeyer-endrun_thickness_2021, stahler_seismic_2021-1}, many unknowns remain: The interior of the core has not been observed yet (e.g. by SKS waves), and even though tentative observations of Pdiff were made \parencite{horleston_far_2022}, the lowermost mantle is constrained only very sparsely by seismic data. The detection of magnitude 4 quakes in Southern Tharsis \parencite{horleston_far_2022} suggests that much more small seismicity could be present there, unobservable from InSight's location.
At the same time, \textcite{plesa_seismic_2021} showed that geodynamical models predict significant lateral variations in seismic wave speed, potentially higher than on Earth. A global network of 4-6 seismometers, combined with other geophysical sensors, could observe the full global seismicity, seismic phases over a wider distance range, and potentially also the above-mentioned three-dimensional structures in the Martian interior. \item InSight observed localized tectonic activity near the Cerberus Fossae graben, in contradiction to existing models of wide-spread compressive stress from lithospheric cooling \parencite{phillips_expected_1991, knapmeyer_working_2006}. This observation has strong implications for the general mechanisms of tectonic activity on terrestrial planets. To better understand these mechanisms, it would be worthwhile to locate marsquakes precisely in one of the active systems, for which a multi-station seismic network is necessary. Due to the low intrinsic seismic attenuation, energy above frequencies of 1 Hz is transmitted well and observable, which allows the use of lightweight, short-period instruments, such as the InSight SP sensor \parencite{stahler_cerberus_2022}. \end{itemize} \subsection{Mission perspectives} Landing on Mars has been executed successfully nine times by NASA and once each by the Soviet Union (Mars 3 on December 2, 1971, \textcite{perminov_difficult_1999}) and the Chinese Space Agency (Zhurong on May 14, 2021). All these landers fundamentally used a combination of a heat shield and a parachute for initial deceleration. The Mars Exploration Rovers Spirit and Opportunity used airbags for final approach and landing, while the large Perseverance and Curiosity rovers were lowered to the ground from a separate spacecraft, colloquially termed sky crane. The Discovery class missions InSight and Phoenix used hydrazine retrothrusters for final deceleration.
These techniques are currently considered to place a lower limit of about 200 million USD on any mission, before scientific instruments are even considered, requiring at least a Discovery class budget for future landed missions. A potentially cheaper option is to use penetrators, i.e. spacecraft that land at terminal velocity and decelerate either by penetrating up to a few meters into the ground or by means of a deformable front part. As described in \textcite{lorenz_planetary_2011}, the concept has been proposed several times over the last decades, but was never successfully executed. However, it must be noted that no penetrator mission failed during landing; Mars-96 was lost due to failed upper-stage separation after launch, while DS-2 was likely lost with its mother spacecraft Mars Polar Lander. Overall, Mars is the location best suited for simple penetrator missions, since the atmosphere can provide initial deceleration and - more importantly for a low-cost mission - attitude control, removing the need for active propulsion. A semi-hard lander concept has recently been proposed by engineers from JPL under the name SHIELD \parencite{barba_access_2021}. It would utilize a deployable drag skirt of 2 meter diameter to obtain a low ballistic coefficient and therefore a terminal velocity of $<70$~m/s without the use of a parachute. At least four such landers could be stored in a single Falcon-9 sized payload fairing, which would therefore allow the construction of a seismic network on Mars. During landing, instruments, including the seismometer, would have to survive decelerations of up to 2000~g, depending on surface character. The JAXA seismometer planned for the Lunar-A mission \parencite{mizutani_lunar_1995, shiraishi_present_2008} is specified for 5000~g and would therefore be a candidate instrument. The seismometer of the Ranger 3/4/5 missions was tested up to 3000~g \parencite{lehner_seismograph_1962}.
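The ballistic-coefficient argument behind SHIELD can be sketched numerically. The drag-skirt diameter and the $<70$~m/s target are from \textcite{barba_access_2021}; the entry mass, drag coefficient and near-surface atmospheric density below are rough assumptions for illustration only:

```python
import math

def terminal_velocity(mass, area, c_d, rho, g):
    """Terminal velocity from the drag-weight balance m*g = 0.5*rho*Cd*A*v^2."""
    return math.sqrt(2.0 * mass * g / (rho * c_d * area))

G_MARS = 3.71            # m/s^2, surface gravity of Mars
RHO_MARS = 0.020         # kg/m^3, near-surface density (assumed)
C_D = 1.5                # drag coefficient of the skirt (assumed)
MASS = 50.0              # kg, lander entry mass (assumed)
AREA = math.pi * 1.0**2  # m^2, 2 m diameter drag skirt

v_t = terminal_velocity(MASS, AREA, C_D, RHO_MARS, G_MARS)
print(f"terminal velocity ~ {v_t:.0f} m/s")   # ~63 m/s for these numbers
```

With these numbers the balance lands just under the quoted 70~m/s; the thin Martian atmosphere is what makes a parachute-free, semi-hard landing plausible at all.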
For its free-fall landing on the Moon, the Ranger seismometer was encapsulated in a balsa-wood impact-limiter sphere and submerged in Freon. Other instruments, such as the InSight SP seismometer, could be hardened in a similar way for flight and landing. An ultra-high-sensitivity mission would require a soft lander, likely in combination with a rover to reach a suitable installation site after landing. While such a mission is well within the technical possibilities of NASA, and likely also CNSA and ESA, it would require a flagship class budget for landing and operations and could therefore only be executed as part of a larger rover mission. However, NASA and ESA are currently executing the Mars Sample Return campaign, involving the Perseverance rover and likely three more launches by 2030. Given the size of the mission and its considerable strain on the budgets of both involved agencies, it is unlikely that any additional scientific payload will be added to it, to avoid mission and budget creep. The delay of the ESA ExoMars landing due to the stop in collaboration with Roscosmos will further affect all Western Mars missions. As a final word, it should be noted that the landscape of possibilities on Mars is likely to change if various private space companies, foremost SpaceX, succeed in constructing reusable high-performance launch vehicles that would increase mission cadence and lessen weight limitations. However, launch cost is typically not the constraining factor, and other factors (e.g. the availability of a high-bandwidth communication network) limit mission design on Mars. \section{Phobos and Deimos} \subsection{Potential scientific goals} For the two Martian moons, it is not even known whether they consist of consolidated material or form rubble piles \parencite{le_maistre_signature_2019, dmitrovskii_constraints_2022}.
If they are rubble piles, seismic wave propagation in the usual sense is not possible, and the strongest signal will instead be the normal modes of the moons. A more consolidated object, however, could propagate seismic waves efficiently; due to the small size, even small quakes would produce high amplitudes. The high thermal gradient and the strong tidal signal from Mars are likely to trigger a significant number of small phobos- or deimosquakes, if the moons as a whole are consolidated enough. The rigidity of the uppermost surface layer could be estimated from the deceleration of a lander; information on the mechanical properties of the soil surface will also be gained from the JAXA MMX rover, which will land and drive on the surface of Phobos. \subsection{Mission perspectives} The launch windows for Phobos and Deimos are identical to those of Mars, yet landing requires an entirely different skill set. A sample return mission to Phobos, MMX (Martian Moons eXploration), is planned by JAXA for launch in 2024 \parencite{kuramoto_martian_2022}, building on their extensive expertise in missions to asteroids. The mission will not carry a seismometer, but during the landed time windows, the spacecraft's IMU will listen for ground vibrations. Given that the lander stays on the surface for only a few hours and performs sampling operations during that time, the detection of a phobosquake would be a lucky event. The mission will also perform a geodetic experiment, mapping the shape of Phobos using imaging and LIDAR and combining it with Doppler radio tracking of the spacecraft to obtain gravity coefficients from which the interior can be inferred \parencite{matsumoto_mmx_2021}. \section{Ceres} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/ceres.jpg} \caption{Occator crater on Ceres, the proposed landing site for a NF-class Ceres sample return mission \parencite{castillo-rogez_concepts_2022}.
The bright spots are understood to be young carbonate salt deposits from subsurface brines. A seismometer could help to constrain the current tectonic activity and, from it, the deposition rate. The view is a composite of two DAWN images, avoiding over-exposure of the bright spots. Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA, PIA19889} \label{fig:ceres} \end{figure} \subsection{Potential scientific goals} Ceres shows traces of a subsurface ocean, which would contain a significant amount of the water of the asteroid belt. Surface morphology indicates that this ocean is indeed partially liquid and forms yet another ocean world. A seismic mission could explore the depth of the ice/ocean interface, the thickness of the ocean, but also the homogeneity of the ice layer above. A warm ocean would result in slushy layers at the bottom of the ice that would significantly increase attenuation. Since the planet has no moons, the tidal dissipation is insignificant. Yet, the NASA Dawn mission observed signs of recent cryovolcanism \parencite{ruesch_cryovolcanism_2016}: The Ahuna Mons topographic feature is 4 km high at a width of 16 km and only lightly cratered. Its age is estimated to be $210 \pm 30$ million years, i.e. very young in the context of a small planet. It is debated whether the dominant tectonic process on Ceres is solid-state convection inside the icy crust, leading to dome formation \parencite{bland_dome_2019}, or instead global contraction, as evidenced by ubiquitous thrust faults \parencite{ruiz_evidence_2019}. It is quite possible that these thrust faults were created in an early stage of the planet's formation and are thus fossils of the planet's ancient tectonics. The latter seems to be the case on Mars, too, where no seismicity could be attributed to thrust faults, lobate scarps or wrinkle ridges so far.
Following the example of Mars, where Cerberus Fossae, one of the youngest surface features, is also the most seismically active, Occator crater would be a prime target. The crater shows bright spots (faculae), interpreted as deposits of salts from eruptions of brines \parencite{nathues_recent_2020, schenk_impact_2020} (see fig. \ref{fig:ceres}). If these brines are deposited by an endogenic process, it would also lead to contemporary seismic activity that could be picked up even by a short-period seismometer. A broadband Ceres seismometer would investigate the global distribution of tectonic activity. Since high-resolution orbital images at a resolution of 35 m exist from the Dawn mission, attribution to individual fault systems or centers of cryovolcanism would be possible. Assuming large enough ceresquakes, the seismic data could be used to investigate the layering of the planet, including the depth of the ocean. \subsection{Seismicity} The seismicity of Ceres is unknown. Given the absence of a partner object for tidal forcing and the low interior heat flow, it is likely to be low. The presence of recently deposited brines \parencite{nathues_recent_2020} suggests that seismicity driven by interior processes might be present. Due to the location of Ceres in the asteroid belt, meteorite impacts are a likely seismic source. \subsection{Mission perspectives} Ceres can be reached relatively easily, typically with a Mars flyby, leading to a 3-4 year trajectory for orbiter missions. The Dawn mission used solar electric propulsion, leading to a 6-year trajectory, including a 400-day research trip to asteroid Vesta. In the aftermath of the Dawn mission, lander missions have been proposed; due to the low gravity, even sample return missions are feasible. An interesting concept is proposed by \textcite{castillo-rogez_concepts_2022}, in the form of an electrically propelled orbiter that is able to land and relaunch as a whole, returning samples to Earth.
The mission is supposed to collect samples of carbonate salts as well as the darker reference surface materials and return them to Earth at $\leq 20$\textdegree C to prevent alteration. The mission would benefit from investigating recent tectonic activity in Occator crater, to determine the samples' context. Unfortunately, no seismometer is currently foreseen. A slightly modified version of this mission was proposed as a candidate for a New Frontiers class mission in the Decadal Survey 2023-2033 \parencite{decadal_2023}, still without seismic measurements. A tip of the hat goes to the student participants of the Alpbach summer school, who proposed a more classical dual orbiter/lander sample return mission, likely in the cost range of a NASA flagship mission or an ESA L-class mission \parencite{gassot_calathus_2021-1}. \section{Jupiter and Saturn - the giant planets} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/Ring_Saturn_PIA06536.jpg} \caption{Cassini image of Saturn's rings taken by the ISS infrared camera. Major subdivisions are labeled by the author. Kronoseismology is done using density waves in the broad, but faint C-ring. Image credit: NASA/JPL/Space Science Institute, PIA06536} \label{fig:saturn_ring} \end{figure} \subsection{Potential scientific goals} For this topic, see the excellent overview in \textcite{gaulme_giant_2015}. Seismology in the common sense, using a landed mass-and-spring sensor, is of course impossible on a gas giant; instead, measurements focus on long-period deformation, detected by astronomical means. Like all bodies, the giant planets have normal modes whose shapes and frequencies depend on their interior's elastic parameters. Since the bulk of the interior is in a fluid state without shear modulus, and yet gravity is significant given their size, the nomenclature of helioseismology is typically used.
Compared to Earth, where the elastic moduli dominate as restoring force and gravity and self-rotation (the Coriolis force) can be treated as second-order parameters, in the Sun or the gas giants the modes need to be classified by their primary restoring force \parencite{guillot_giant_2015}. \begin{itemize} \item Pressure modes (p-modes) are the closest analog to normal modes on Earth. The restoring force is the pressure gradient, analogous to the case of the acoustic wave equation. Their sensitivity is typically constrained to the outer layers. \item Gravity modes (g-modes) have gravity, or more precisely buoyancy, as restoring force. They can therefore only form in convectively stable regions, where no density inversions exist and lateral density contrasts are low. \item Surface gravity modes (f-modes) are analogous to deep-water gravity waves in the ocean, with the weight of the vertically displaced surface as restoring force. Their sensitivity is therefore highly constrained to the outermost shell of the planet. \item Inertial modes (i-modes) can occur in rapidly rotating planets, such as Jupiter and Saturn, restored by the Coriolis force. \end{itemize} All in all, these modes create a complex overlapping picture of spectral peaks in the surface displacement, and of course cross terms and coupling exist. Over time, four approaches have been considered to observe the normal modes of Jupiter remotely: \begin{itemize} \item Variations in infrared brightness, caused by temperature perturbations from p-modes. A 1~m/s velocity field corresponds to a 10~mK temperature perturbation, visible in the mid-infrared, which is difficult to observe, given the limited sensitivity of photometric sensors in this wavelength window. Because of these difficulties, no dedicated instrument has been developed so far. \item Spectroscopy of reflected light.
This method has been improved considerably on the instrument side in response to the exoplanet detection campaigns. In 2011, the seismology-dedicated SYMPA Fourier spectro-imager detected radial modes of maximum amplitude $49^{+8}_{-10}$~cm, at a frequency of $1213 \pm 50~\mu$Hz, with a mean large frequency spacing between radial harmonics of $155.3 \pm 2.2~\mu$Hz \parencite{gaulme_detection_2011}, placing a weak constraint on the planet's interior structure. Spectroscopy needs to take into account the large rotation contribution at the limb of around $25$~km/s. \item Photometry of reflected light. Here, brightness variations in reflected light are used. Compared to the other two methods, the signal-to-noise ratio is low, but sensors in visible wavelengths are widespread. However, the complex surface pattern of the Jovian clouds means that only specific modes can be observed. Another problem arose when trying to apply this method to Neptune with the extended NASA Kepler mission ``K2'', which observed the planet for 50 days at a 1-min cadence. No oscillations of Neptune could be detected (Rowe et al., 2017), but it became apparent that oscillations of the Sun perturbed the reflected-light signal (Gaulme et al., 2016). \item Kronoseismology: The rings of Saturn are shaped by resonances with the interior modes of the planet. Density waves exist in all rings, but specifically in the innermost C-ring (see fig. \ref{fig:saturn_ring}) they are excited by certain normal modes of the planet \parencite{hedman_kronoseismology_2013}. This is observationally accessible via photometry of the exact ring pattern during stellar occultations, as done during the Cassini mission \parencite{fuller_saturn_2014}, and led to the discovery of a diffuse, but stable stratified core of the planet \parencite{mankovich_diffuse_2021}.
\end{itemize} \subsection{Mission perspectives} Using ring seismology, the interior models of Saturn cannot be refined much further, given the already high resolution on density waves in the C-ring obtained by Cassini from radio occultation experiments. Future research will therefore focus on Jupiter. Gravimetric observations by Juno \parencite{durante_peek_2021} detected gravity perturbations that are compatible with the presence of p-modes, which proved the existence and excitation of these modes, although they could not be identified further. A Doppler imaging camera for Jupiter seismology was proposed as a payload for the ESA JUICE mission \parencite{soulat_echoes_2011}, but ultimately not selected. Given that Juno, a general-purpose mission, is currently active in the Jupiter system and two flagship class orbiter missions (JUICE and Europa Clipper) are due to launch by 2025 (although both are focused on the icy moons), the need for another large Jupiter orbiter has not been foreseen by the Decadal Survey. Instead, Doppler spectroscopy from Earth will likely be the only possible method for the foreseeable future. The success of SYMPA was followed up by the JOVIAL sensor \parencite{goncalves_first_2019}, and dedicated instruments to measure Jupiter's zonal wind speeds, e.g. PMODE-I on the AEOS 3.6 m telescope atop Mount Haleakal\=a, Maui, Hawai'i \parencite{shaw_pmode_2022}, are being built. Observations of the normal modes with increased precision compared to SYMPA would, however, depend on fortuitously strong excitation, possibly by a large impact. To measure Jupiter's interior modes with high enough precision, multi-week continuous observation campaigns (in stable weather) would be necessary, which are likely only possible from Antarctica \parencite{shaw_pmode_2022}. In the far future, this could also be done from a long-lived optical Jupiter observatory at Lagrange point L1, as proposed by the Chinese space agency CNSA \parencite{hsu_jupiter_2021}.
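The large frequency spacing measured by SYMPA can be turned into a back-of-the-envelope sound travel time through Jupiter. In the asymptotic theory of p-modes, $\Delta\nu = (2\int_0^R \mathrm{d}r/c)^{-1}$; the snippet below only inverts this relation, and the uniform-interior mean sound speed is a crude illustration, not a model result:

```python
DELTA_NU = 155.3e-6      # Hz, large frequency spacing (Gaulme et al., 2011)
R_JUPITER = 6.9911e7     # m, mean radius of Jupiter

# Acoustic radius: one-way sound travel time from center to surface.
t_acoustic = 1.0 / (2.0 * DELTA_NU)          # seconds
c_mean = R_JUPITER / t_acoustic              # crude mean sound speed

print(f"acoustic travel time ~ {t_acoustic / 60:.0f} min")   # ~54 min
print(f"mean sound speed    ~ {c_mean / 1e3:.0f} km/s")      # ~22 km/s
```

A single number thus already encodes an average of the sound-speed profile; resolving the profile itself requires many mode frequencies, which is what motivates the dedicated Doppler instruments.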
\section{Io} \subsection{Potential scientific goals} While Europa, Ganymede and Callisto are all icy moons, Io is a terrestrial planet without any surface ices \parencite[or water absorption features whatsoever,][]{smith_jupiter_1979}. The ice fraction on these moons actually increases with distance from Jupiter, which is one of the problems moon formation models face \parencite{shibaike_galilean_2019}. To what extent the interior structure and composition of Io is related to the structures of the other icy moons' rocky cores, and to what extent these permit certain formation scenarios, can be understood only if the interior structures of them all are better resolved. While Io's active volcanism, discovered by Voyager 1 \parencite{smith_jupiter_1979}, is a fascinating mission goal in itself, the moon is also a perfect natural laboratory to understand tidal heating in multi-body systems. To quote the KISS report on tidal heating: ``The Io–Europa–Ganymede system is a complex and delicately built tidal engine that powers Io’s extreme volcanism and warms water oceans in Europa. Io’s gravity generates a tidal bulge within Jupiter, whose dissipation transfers some of Jupiter’s rotational energy into Io’s orbit, moving it outwards and deeper into a 2:1 eccentricity resonance with Europa. This in turn increases Io’s eccentricity, resulting in enhanced tidal heating. Ultimately, Jupiter’s rotational energy is converted into a combination of gravitational potential energy (orbits of the satellites) and heat via dissipation in both Jupiter and the satellites'' \parencite{de_kleer_tidal_2019}. This tidal heating is ultimately the cause of Europa's liquid ocean, which may be a permanent feature or periodic \parencite{hussmann_thermal-orbital_2004}.
Since tidal heating is the process creating the many ocean worlds in the Solar System \parencite{nimmo_ocean_2016} and in other exoplanetary systems, understanding it has significant consequences for habitability, as well as for the evolution of planetary systems. Prior to the observations of Voyager 1, \textcite{peale_melting_1979} predicted that the deep interior of Io should be largely molten because of the tidal heating. \textcite{schubert_internal_1981}, in explicit contradiction, proposed a thin molten layer between the crust and a solid interior, plus an iron core. The thickness of the molten layer is estimated to be at least 50 km, with a melt fraction exceeding 20\% \parencite{khurana_evidence_2011}. \textcite{van_hoolst_librations_2020} even consider a magma ocean possible. The size of the core is somewhere between 19\% and 50\% of Io's radius \parencite{anderson_primary_2001}. Seismological experiments could determine layer thicknesses and core radius, and, via shear modulus and attenuation, constrain the melt fraction in the asthenosphere. \subsection{Seismicity} As with all other large moons of the giant planets, Io's seismicity is not known. However, the planet shows obvious surface tectonic activity, which implies large strain and regular brittle failure, i.e. ioquakes. A popular approach to estimate seismicity has been to use the (relatively well known) tidal dissipation and assume that a certain part of that energy is released seismically. \textcite{hurford_seismicity_2020} estimated the ratio between tidal dissipation and seismic energy over an orbital cycle for the Earth's Moon to be 0.0017 and assumed that this ratio would be a good first-order estimate for other tidally active worlds. From this assumption, they found that Io would release an annual seismic moment on the order of $5\cdot10^{19}$ Nm/a (see table \ref{tab:tidal}, where the values are per ten orbital cycles).
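The moment-budget arithmetic behind such scaling arguments can be sketched in a few lines (a minimal sketch: only the annual moment of $5\cdot10^{19}$ Nm/a is taken from the text, while the Hanks--Kanamori moment-magnitude relation and the simplistic assumption that the budget is spent in equal-sized events are our own; the resulting rates agree with the quoted ones only to within a factor of a few, since the real moment release is distributed across magnitudes):

```python
import math

# Annual seismic moment budget for Io as quoted above (order of magnitude)
ANNUAL_MOMENT_NM = 5e19  # Nm per year

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude for a seismic moment in Nm."""
    return 2.0 / 3.0 * (math.log10(m0_nm) - 9.1)

def moment_from_magnitude(mw):
    """Inverse relation: seismic moment in Nm for a given moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.1)

# Largest single event, if it releases 70% of the annual budget in one quake
mw_largest = moment_magnitude(0.7 * ANNUAL_MOMENT_NM)

# Monthly event rate if the whole budget were spent in magnitude-6 quakes
rate_m6_per_month = ANNUAL_MOMENT_NM / moment_from_magnitude(6.0) / 12.0
```

With these assumptions, the largest event comes out near magnitude 7 and the magnitude-6 rate at a few per month, i.e. the same order as the rates discussed in the text and in the $M_{\textrm{w}}$ columns of table \ref{tab:tidal}.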
Assuming that the largest possible ioquake releases 70\% of the available moment (in analogy to shallow moonquakes), this implies that one to two magnitude-6 ioquakes would occur per month -- a rate comparable to that of the Earth. The assumptions behind this scaling analysis may of course be simplistic, but it is nevertheless highly plausible that Io is the most seismically active solid body after Earth. The existence of ridge structures oriented according to the directions of tidal stresses supports the idea that tidally driven tectonism could be active \parencite{bart_ridges_2004}. While the existence of tectonic events is currently hypothetical, the volcanic activity of Io is obvious and documented photographically \parencite{smith_jupiter_1979}. One can thus expect to record seismic signals of the kind known from volcanoes on Earth, i.e. distinct transient events as well as a more or less continuous tremor. Optical observation of volcanic centers on the surface could provide epicenter information and thus support the construction of travel time curves and the inversion for interior structure. \subsection{Mission perspectives} Despite the spectacular surface colors, landing on Io is actually not particularly dangerous, since the surface volcanism is locally confined. A larger problem is Io's location deep in Jupiter's gravity well, which would require considerable $\Delta V$~reserves and a launcher of SLS or Falcon Heavy class. The location in Jupiter's radiation belt is a major design driver for lander electronics: orbiter missions can minimize the radiation dose by performing only brief, repeated flybys, as in the IVO mission profile \parencite{adams_io_2012, mcewen_io_2014}, while a lander would need dedicated shielding of its electronics.
A more realistic option for seismology may therefore be an orbital detection of coseismic deformation using Interferometric Synthetic Aperture Radar (InSAR) during multiple flybys, a technique which is routinely applied to significant earthquakes, or the deployment of a few retroreflectors that can be queried with a laser on an orbiter (laser vibrometry, see \textcite{de_kleer_tidal_2019}, and references therein). Via the latter, normal modes could be observed to constrain the deep interior of the planet, while the former would mostly serve to understand shallower lithospheric strength. \begin{table*}[!ht] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline & $m_{\textrm{parent}}$ & Period & $R$ & $a$ & $e$ & $k_2$/$Q$ & $E_{\textrm{T}}$ & $\sum M_0$ & $M_{\textrm{w}}$ & $M_C$ & $M_{\textrm{w}}$ \\ \hline & [kg] & [days] & [km] & [km] & [\%] & & [J] & [Nm] & & [Nm] & \\ \hline Io & $1.90\cdot10^{27}$ & 1.769 & 1821.6 & $421,700$ & 0.41 & 0.015 & $1.43\cdot10^{20}$ & $2.3\cdot10^{18}$ & 6.2 & $1.7\cdot10^{18}$ & 6.1 \\ \hline Europa & $1.90\cdot10^{27}$ & 3.551 & 1560.8 & $670,900$ & 1 & 0.0054 & $8.7\cdot10^{18}$ & $1.5\cdot10^{17}$ & 5.4 & $1.0\cdot10^{17}$ & 5.3 \\ \hline Titan & $5.68\cdot10^{26}$ & 15.945 & 2575 & $1,200,000$ & 2.88 & 0.004 & $1.6\cdot10^{18}$ & $2.7\cdot10^{16}$ & 4.9 & $1.9\cdot10^{16}$ & 4.8 \\ \hline Moon & $5.97\cdot10^{24}$ & 27.3 & 1737.2 & $384,399$ & 5.5 & 0.0012 & $5.0\cdot10^{16}$ & $8.0\cdot10^{14}$ & 3.9 & $4.9\cdot10^{14}$ & 3.7 \\ \hline Enceladus & $5.68\cdot10^{26}$ & 1.37 & 252 & $237,948$ & 0.47 & 0.0036 & $6.3\cdot10^{15}$ & $1.0\cdot10^{14}$ & 3.3 & $7.5\cdot10^{13}$ & 3.2 \\ \hline Earth/Lunar & – & 1 & – & & – & – & $7.2\cdot10^{16}$
& $1.2\cdot10^{15}$ & 4 & $8.5\cdot10^{14}$ & 3.9 \\ \hline Mars/Solar & – & 1.03 & – & & – & – & $8.9\cdot10^{14}$ & $1.5\cdot10^{13}$ & 2.7 & $1.\cdot10^{13}$ & 2.6 \\ \hline Mars/Phobos & – & 0.32 & – & & – & – & $9.2\cdot10^{11}$ & $1.6\cdot10^{10}$ & 0.7 & $1.1\cdot10^{10}$ & 0.6 \\ \hline Mercury/Solar & – & 58.65 & – & & – & – & $1.2\cdot10^{15}$ & $1.2\cdot10^{15}$ & 4 & $8.4\cdot10^{14}$ & 3.9 \\ \hline \end{tabular} } \caption{Estimated tidally-induced seismic moment released over ten orbital cycles. The table is slightly modified from \textcite{hurford_seismicity_2020} and references therein. Note that this table assumes that the ratio of tidal dissipation to seismically released energy found on the Moon (0.0017) is applicable to the other worlds. Also note that ten orbital cycles are 18 terrestrial days on Io, but 6700 days on Mars.} \label{tab:tidal} \end{table*} \section{Europa} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Europa_20km.jpg} \caption{Global seismic wavefield stack for Europa, using an interior model of \textcite{vance_geophysical_2018} with a crustal thickness of 20 km.} \label{fig:gs_europa} \end{figure} \subsection{Potential scientific goals} A liquid subsurface ocean on Europa was first predicted from the internal energy budget of radioactive decay and tides \parencite{lewis_satellites_1971, cassen_is_1979}, and later supported by flyby measurements during the Galileo mission \parencite{schubert_interior_2004}. The thickness of the ice layer above the ocean is related to the composition and temperature of the water; measuring it would therefore constrain these parameters, which have strong implications for the habitability of the ocean. The ESA JUICE mission, as well as the NASA Europa Clipper, will therefore both carry radars to measure the thickness during multiple flybys \parencite{bruzzone_rime_2013, phillips_europa_2014}.
At this stage, the attenuation of electromagnetic waves by ice under Europa conditions is not well known, and the detection of a reflection off the ice bottom by either of the missions' radars is likely, but not certain \parencite{eluszkiewicz_dim_2004, aglyamov_bright_2017}. A surface-deployed seismometer would be sensitive to ice thickness via three routes: \begin{itemize} \item An ice layer above a fluid half-space forms a specific seismic phase, termed Crary phase after its first description on floating Arctic ice \parencite{press_propagation_1951, crary_seismic_1954}. The phase is an almost monochromatic, radially horizontally polarized superposition of SV reverberations. Its central frequency is $ f_{\mathrm{Cr}} = \frac{v_{\mathrm{S}}}{2d \sqrt{1 - \left(\frac{v_{\mathrm{S}}}{v_{\mathrm{P}}}\right)^2}},$ where $d$ is the ice thickness and $v_{\mathrm{S}}, v_{\mathrm{P}}$ are the S- and P-wave speeds respectively. For the range of ice thicknesses predicted for Europa (5-30 km) \parencite{vance_vital_2018}, $f_{\mathrm{Cr}}$ would be between 0.11~Hz and 0.44~Hz, i.e. well observable by a short-period seismometer. This has made it a prominent candidate for ice thickness determination for 20 years \parencite{kovach_seismic_2001}; the phase is also clearly identifiable in synthetic seismograms \parencite{stahler_seismic_2018}, but its robustness against heterogeneous ice of varying thickness is at this stage not known. \item Since the ocean cannot propagate shear waves, S-waves and specifically horizontally polarized SH waves will be reflected almost fully at the ice-ocean interface. Any seismic signal should therefore contain strong reverberations of shear waves, whose traveltime $T_{\mathrm{rev}}$ would be directly proportional to the ice thickness $d$: $T_{\mathrm{rev}}=2d/v_{\mathrm{S}}$. The observability of direct reverberating phases would be affected by seismic attenuation, which reduces shear wave amplitudes most strongly.
However, the reverberating waves would also be present in the ambient seismic noise of Europa and could be retrieved by autocorrelation \parencite{stahler_seismic_2018}. \item The ice thickness places an upper limit on the period of Rayleigh waves. If the ice is on the thinner side of previous estimates (below 10 km), this limit would be below 5 seconds, i.e. potentially observable by a short-period sensor \parencite{panning_long-period_2006, stahler_seismic_2018}. \end{itemize} All in all, the different potential observables provide a certain redundancy in determining the ice thickness using seismic methods, which is one of the reasons why a seismometer has been an element of the Europa Lander mission concept, which is currently awaiting NASA adoption and funding \parencite{hand_report_2017}. This concept described several other science goals of a seismometer, specifically to observe water or brine lenses within the ice. Seismic methods are usually good at detecting heterogeneities at depth, but as \textcite{grimm_magnetotelluric_2021} ruthlessly pointed out, the elastic impedance contrast between mushy ice and liquids is not particularly strong, and determining even a "1D" layered seismic velocity profile below a lander is by no means a trivial process. Yet, the non-uniqueness of single-station geophysical observations is even worse for electromagnetic, specifically potential-based, methods. Also, \textcite{hobiger_shallow_2021} demonstrated using InSight data that long-term observation of ambient vibrations at a single location can constrain even complicated subsurface structure, if geological context information is available. The habitability of Europa's ocean would be increased by transport of surface material into the ocean, since the Jovian radiation oxidizes surface material, creating a potential energy source for primitive life if transferred back into the ocean.
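The first two ice-thickness diagnostics in the list above reduce to one-line formulas, which can be sketched numerically (a minimal sketch: the velocities $v_{\mathrm{P}}=4$ km/s and $v_{\mathrm{S}}=2$ km/s are generic textbook values for ice assumed here, not taken from the cited studies, so the resulting frequency band shifts somewhat relative to the 0.11--0.44~Hz quoted above -- illustrating the sensitivity of the diagnostic to the velocity model):

```python
import math

V_P = 4000.0  # m/s, assumed P-wave speed in ice (generic value)
V_S = 2000.0  # m/s, assumed S-wave speed in ice (generic value)

def crary_frequency(d):
    """Central frequency of the Crary phase for ice thickness d in meters."""
    return V_S / (2.0 * d * math.sqrt(1.0 - (V_S / V_P) ** 2))

def sh_reverberation_time(d):
    """Two-way vertical S travel time through an ice layer of thickness d (m)."""
    return 2.0 * d / V_S

# Evaluate both observables over the thickness range discussed in the text
for d_km in (5.0, 10.0, 30.0):
    d = d_km * 1e3
    print(f"d = {d_km:4.0f} km: f_Crary = {crary_frequency(d):.3f} Hz, "
          f"T_rev = {sh_reverberation_time(d):.0f} s")
```

Note that the reverberation time depends only on $v_{\mathrm{S}}$, while the Crary frequency also involves $v_{\mathrm{P}}$; jointly, the observables overdetermine the ice thickness, which is the redundancy referred to above.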
Subduction in the ice has been proposed based on the geomorphology of lineae on Europa's surface \parencite{prockter_folds_2000}, but has not been observed in situ. As InSight demonstrated on Mars, locations of seismic sources can be determined well using a single seismometer and shed light on the dominant tectonic process on a planet. Subduction zones on Earth cause well-observable continuous seismicity even in between the largest events, so a seismometer deployed close to a linea should be able to pick up its signature. This would confirm imaging data from orbit in one location and would allow inference over the whole planet (and in similar locations on Enceladus). A separate question of high importance for habitability is the geological activity of the sea floor. The mid-ocean ridges on Earth provide habitable environments independent of sunlight, and sub-ocean volcanism on Europa could play a similar role. Such activity, even in the past, could be inferred from the chemistry of the dark patterns on Europa's surface, but europaquakes in the silicate crust or mantle would be a unique indication of current activity. \textcite{marusiak_seismic_2022} estimated that rocky europaquakes above magnitude 5 would be observable by a surface seismometer of the kind proposed for the Europa Lander. Such magnitudes would be quite significant and are at the upper end of what is observed on Earth's mid-ocean ridges directly (not counting the adjacent transform faults, which often produce earthquakes of magnitude 7 and higher), but by no means impossible. Finally, the sound speed in the ocean itself is affected by the ocean's chemistry. A final seismic velocity model of the planet at the end of the mission would be contingent on the ocean's salt content. \textcite{duran_seismology_2022} demonstrated a fully consistent inversion of different seismic observables with other geophysical, as well as mineralogical, data in a thermodynamically consistent model for Mars.
Such an effort could be the final result of an observation campaign on Europa as well, delivering uncertainty limits on the composition of the ocean, the thickness of the various layers and the wave speeds in the silicate interior. \subsection{Seismicity} \textcite{panning_expected_2018} estimated the seismicity rate of Europa based on the available energy from tidal dissipation to be between $10^{16}$ and $10^{18}$~Nm/a, which is above the value of $10^{15}-10^{16}$~Nm/a observed for Mars \parencite{banerdt_initial_2020}, but 5 orders of magnitude below that of the Earth. Assuming this value is correct, a few dozen europaquakes from the icy crust should be observable over a month by a Europa seismometer as defined in \textcite{hand_report_2017}. Assuming that the ratio of tidal dissipation to seismic moment is about the same on all tidally active moons in the solar system \parencite{hurford_seismicity_2020}, Europa would be the second most active moon after Io (see table \ref{tab:tidal}). \subsection{Mission perspectives} Europa is challenging to land and operate on. Landing needs to be entirely propulsive due to the lack of an atmosphere, and Europa lies deep in the gravity well of Jupiter. Since Europa orbits at the distance of highest radiation intensity within Jupiter's magnetic field, a surface mission requires extensive and heavy metal shielding, which \textit{worsens} the effect of the high-$\Delta V$ landing. Within realistic weight constraints, a surface mission is limited to a duration of a few weeks and a scientific payload of tens of kg. The Europa Lander concept \parencite{hand_report_2017} managed to fit a seismometer into these constraints. The instrument would be similar in performance to the InSight SP seismometer \parencite{pike_silicon_2016, lognonne_seis_2019}, with the goal of listening to body waves of europaquakes in the ice.
Observation of long-period surface waves, flexural ice modes or normal modes of the whole planet would likely not be possible with an instrument of this sensitivity. The observability of seismic waves from quakes in the silicate mantle would depend strongly on their magnitude, and would be made less likely by the presence of soft layers on the seafloor or at the ice bottom \parencite{marusiak_seismic_2021}. In general, operation of a warm lander on ice will pose challenges to the coupling of seismic sensors due to melting and tilting \parencite{marusiak_detection_2022}. A spacecraft in orbit at Jupiter distance can still operate on solar panels, but their size would be prohibitive for a landed mission. Power would therefore have to come from an RTG or, for a short-lived mission, from high-power-density batteries. The Europa Lander concept is the latest iteration of a three-decade-long process of missions to explore the moon's surface and interior. The ups and downs of this history are excellently described in \textcite{brown_mission_2021}. At this stage, it is a well-developed concept with a surface lifetime of 60-90 days. Yet, it would be the most expensive planetary robotic mission ever executed by NASA, which is why the Decadal Survey 2023-2033 did not recommend its execution as a Flagship mission until missions to Enceladus and Uranus have been realized.
\section{Ganymede and Callisto}\begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Ganymede.jpg} \caption{Global seismic wavefield stack for Ganymede, using the interior model of \textcite{vance_geophysical_2018}.} \label{fig:gs_ganymede} \end{figure} \subsection{Potential scientific goals} Ganymede, Callisto and Titan form the three large ice moons of the solar system, and yet each one has at least one peculiarity that makes it difficult to treat them as one group: Ganymede is the only moon with a current-day magnetic field, Callisto has a very low moment of inertia, suggesting only partial differentiation \parencite{nagel_model_2004}, and Titan is Titan (see next section). For all three bodies, the low density implies an ocean deep enough for high-pressure ices to form, inhibiting the flow of reductants from the mantle into the ocean \parencite{vance_geophysical_2018}. The presence or absence of these ice layers, however, depends strongly on the salt content of the ocean, as well as on the poorly known equation of state of salty water at high pressures and low temperatures. Measuring the surface ice thickness could constrain the salt content of the ocean and thus the presence of a high-pressure ice layer, even if no direct phases from the high-pressure ice can be observed \parencite{stahler_seismic_2018}. Seismology could therefore directly address questions of habitability \parencite{vance_vital_2018}. In the case of Ganymede, quakes from beyond the core shadow could give insight into the size of a liquid core and the existence of a solid core, as implied by the magnetic field. For Callisto, the question of the differentiation of the rocky mantle could be answered much more clearly by seismology than by moment of inertia (MoI) measurements from space. As described in chapter 3 \parencite{knapmeyer_planetary_2022}, moment of inertia estimates can come with significant errors and lead to long-lived misestimation of core sizes.
\subsection{Seismicity} Ganymede shows strong indications of surface tectonics, although their age is difficult to estimate from existing images. Knowledge about current-day tectonics will improve significantly with data from the JUICE mission in the next decade. Callisto's heavily cratered surface indicates that current-day resurfacing, and thus tectonic activity, is limited. The greater distance to Jupiter, compared to Europa, means that the tidal energy budget for seismicity is low, but both moons have significant interior heat stored from formation that could drive tectonic deformation. \subsection{Mission perspectives} Tidal deformation will be measured by the Ganymede Laser Altimeter GALA to estimate the ice layer thickness \parencite{enya_ganymede_2022} during the late stage of the JUICE mission, when the spacecraft is orbiting Ganymede. This may be seen as a long-period proxy for seismology, and likely the closest thing to it for some time. Landing on Ganymede or Callisto has been proposed since at least the Voyager age \parencite{boain_ganymede_1980}, and penetrators have regularly been proposed, specifically by European institutions, as payloads for Jupiter system flagship missions \parencite[e.g.][]{vijendran_penetrator_2010-1}, similar to the Huygens Titan probe on the Cassini mission. Just as regularly, these payloads were cancelled to reduce mission complexity. After JUICE, another mission with a lander to either Ganymede or Callisto is not likely to happen before the 2040s. In the meantime, one may watch the 1976 German dystopian movie "Operation Ganymed" starring J\"urgen Prochnow about the difficult return of astronauts from a mission to find life on Jupiter's largest moon.
\section{Titan} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Titan_33km.jpg} \caption{Global seismic wavefield stack for Titan, using an interior model from \textcite{vance_geophysical_2018} with a crustal thickness of 33 km.} \label{fig:gs_titan} \end{figure} \subsection{Potential scientific goals} Since Titan will be visited by a seismometer in 2034, the science goals of a seismic experiment have already been described well, see e.g. \textcite{barnes_science_2021} for an overview. A primary goal is the confirmation of a liquid ocean below the surface via observation of an interface with a strong seismic impedance contrast at depth. A peak near 36 Hz in electric field signals measured during the Huygens descent was interpreted as the Schumann resonance of a 55-80 km deep conductor \parencite{beghin_analytic_2012}, i.e. the ice-ocean interface. These values are plausible given thermodynamical modelling of the whole ice layer \parencite{vance_geophysical_2018}, but far from undisputed. As on other ocean worlds, shear waves will be completely trapped in the ice layer and should lead to well-observable reverberations. Compared to other ocean worlds, Titan is likely to harbour methane clathrates (i.e. methane trapped in an ice lattice) near the surface \parencite{mousis_methane_2015}, which would have a significant effect on the thermal conductivity and therefore the convection of the planet \parencite{kalousova_insulating_2020}. These clathrates have seismic velocities reduced by up to 10\% compared to pure ice \parencite{marusiak_methane_2022}, which could be detectable in Rayleigh wave dispersion curves. An open question is the level of viscoelastic attenuation in the ice layer, given the high temperature of the ice below a few hundred meters of depth. As described in chapter 5 \parencite{bagheri_tidal_2022}, ice has a very low quality factor $Q\approx10$ at tidal periods, but the scaling to seismic frequencies is not well constrained.
Due to the existence of large lakes, as well as an atmosphere, Titan might be the only other place in the Solar System in which ocean-generated microseisms can be observed \parencite{stahler_seismic_2019}. A seismometer deployed north of 60\textdegree~latitude could likely observe the waves created on Kraken Mare by a hurricane, enabling remote sensing of the atmosphere. \subsection{Seismicity} The tidal deformation of Titan's ice crust \parencite{mitri_hydrocarbon_2007} shown by Cassini gravity measurements is a plausible source of seismic activity. While Titan's orbital period is significantly longer than Europa's or Ganymede's (15.9 vs 3.6/7.2 d), its orbit's high eccentricity could allow tidal forces to drive significant tectonism (see table \ref{tab:tidal}). Whether or not the rocky core of the planet shows significant tectonic activity is unknown. As on Europa, the science value of detecting quakes from the core would be high, since seafloor tectonics would enrich the ocean with potential nutrients, but the detection limit is significantly higher than for an ice-shell quake. \subsection{Mission perspectives} Titan is one of the two places in the solar system with a seismic experiment in preparation: the Dragonfly mission will be launched in 2027 and deploy a relocatable octocopter on Titan in 2034. The mission will contain a single vertical-component seismometer, delivered by JAXA and based on the instrument planned for the Lunar-A mission \parencite{mizutani_lunar_1995, shiraishi_present_2008}, plus two or more horizontal geophones. The instrument will not be comparable to the InSight VBB seismometer, but should be able to detect local seismicity. \section{Enceladus} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/global_stacks/Enceladus.jpg} \caption{Global seismic wavefield stack for Enceladus, using an interior model from \textcite{vance_geophysical_2018}.
Note that for modelling reasons a constant ice thickness of 15 km over the whole planet is assumed, while gravimetric observations indicate that the ice is significantly thinner near the south pole \parencite{porco_cassini_2006}. } \label{fig:gs_enceladus} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/Enceladus_PIA17183.jpg} \caption{Cassini image of two "tiger stripe" fissures near Enceladus' south pole with water vapor emerging (visible against the dark surface below the terminator). The fissures have been identified as the source of the plumes by \textcite{porco_how_2014}. Image by NASA/JPL/Space Science Institute, PIA 17183.} \label{fig:enceladus_stripe} \end{figure} \subsection{Potential scientific goals} Enceladus is the only ocean world where water from the subsurface ocean is accessible in situ without drilling through kilometers of ice. A long-lived plume of water vapor and ice has been observed near five distinct fissures (called tiger stripes) near Enceladus' south pole (see fig. \ref{fig:enceladus_stripe}). The Cassini orbiter was able to probe these ejecta directly and found strong indications for a source in the subsurface ocean \parencite{teolis_enceladus_2017}. This means that a mission to Enceladus would be directly tasked with determining whether the ocean supports life today, or at least has done so in the past \parencite{choblet_enceladus_2021}. \subsection{Seismicity} Enceladus has a tidal dissipation larger than that of Earth's Moon, but at a radius of only 252~km. It can therefore be expected that the strain rate in the crust is very significant. The water plume sources near the tiger stripes could produce long-duration seismic signals similar to those of geysers on Earth.
\subsection{Mission perspectives} Sampling the plumes could be done from orbit, but the high impact velocities would limit the observability of large molecules (as has been the case for Cassini's measurements), so the science return of a lander would be significantly higher. The last decadal survey recommended an Enceladus orbiter at low priority, so the design has been updated into an "Orbilander" concept, i.e. an orbiter that is capable of landing on the surface after an extended reconnaissance and remote sensing phase \parencite{mackenzie_enceladus_2021}. The current decadal survey \parencite{decadal_2023} recommended this mission concept as a flagship mission after finalization of the Uranus orbiter mission, with a launch date after 2037 and landing during favourable communication geometries to Earth in the 2050s. \section{The Uranus and Neptune system} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/Miranda_PIA18185.jpg} \caption{Voyager 2 image of the Uranus moon Miranda. The surface shows lineae and ovoid coronae. Image by NASA/JPL, PIA 18185.} \label{fig:miranda_image} \end{figure} Uranus and Neptune are the two ice giants of the solar system and thereby represent a class of planets that is very common amongst the exoplanets discovered so far. Both of them have a number of planet-like moons that are potential targets for landed missions. However, all of them are known from a few images only, obtained during the flybys of Voyager 2 through the Uranus \parencite{stone_voyager_1987} and Neptune systems \parencite{stone_voyager_1989} in 1986 and 1989 respectively. Primary science goals would therefore be to investigate the surface geology, including geomorphology. \subsection{Potential scientific goals} Uranus has four large moons, Ariel, Umbriel, Titania and Oberon, between 1100 and 1600 km in diameter, all of which have densities between 1.4 and 1.7 g/cm$^3$, implying a rock-to-ice ratio of 2 to 3.
A fifth large moon, Miranda, has a diameter of 470 km, but a density of only 1.2 g/cm$^3$, implying an even higher amount of water ice. The surface morphology of Miranda (fig. \ref{fig:miranda_image}) implies that the outer ice layer underwent subduction-like processes in the past, at a rate similar to Europa's \parencite{hammond_global_2014, stern_stagnant_2018}, which suggests that, due to tidal heating, a significant part of this ice was in liquid form at least at one point in the past \parencite{nimmo_ocean_2016}. A Uranus orbiter mission would perform dedicated flybys of the large moons and thereby constrain their interiors from gravimetric measurements, similar to what Galileo did in the Jupiter system \parencite{sohl_interior_1997}. The question of whether the liquid ocean still exists today is one that could be answered by a seismic experiment (see the Europa section for details), but as on Europa and Titan, measuring the seismicity of the planet alone would go a long way in constraining its energy budget. Neptune has one major moon: Triton, which is comparable in size to the Jovian moon Europa and, at an average density of 2.061 g/cm$^3$, widely understood to be covered by several hundred km of frozen or liquid water \parencite{nimmo_ocean_2016}. As on the icy moons of the gas giants, the thickness of the ice and the existence and depth of a potential ocean would be primary science targets for a seismic installation. A unique feature of Triton is its retrograde orbit, which is not found for any other moon of comparable size (the next-largest moon with a retrograde orbit, Phoebe around Saturn, has only 0.08 times the diameter of Triton), likely the result of capture, possibly due to a collision with an original satellite.
Similar to Europa, Triton has a relatively young surface age of around 100 Ma \parencite{stern_tritons_2000}, as demonstrated by the small number of visible impact craters, implying geological activity over most of the age of the solar system and therefore most likely until today. As on Europa, a relatively thin ice shell opens the possibility of ice convection, potentially even subduction, and thus the possibility of large thrust faults with tritonquakes of magnitude 5 and larger. Of the other neptunian moons, two (Proteus and Nereid) are around 400 km in diameter, but both are irregular in shape and possibly formed during the capture of Triton \parencite{goldreich_neptunes_1989}, when the natural neptunian satellites were cannibalized or deorbited by the newcomer. \subsection{Seismicity} No estimates of the seismicity of these moons are available to date. \subsection{Mission perspectives} The Decadal Survey 2023-2033 \parencite{decadal_2023} recommended a flagship mission to one of the ice giants, and preferred a Cassini-like orbiter in the Uranus system over one to Neptune on grounds of feasibility. Specifically, launch opportunities arise in 2031 and 2032 for a 13-year cruise without the need for inner solar system gravity assists. A Neptune mission would require significant technical development and would specifically rely on the SLS rocket, while planning for a Uranus orbiter could start immediately. ESA's "Voyage 2050" report \parencite{tacconi_esa_2021} has expressed interest in contributing an atmospheric entry probe or moon lander to a NASA flagship mission to an ice giant, similar to the Huygens Titan lander that ESA delivered for Cassini. The next years will show whether this contribution will materialize, and whether it will be in the form of a Uranus entry probe or a lander on Miranda or another large moon. A mission to the Neptune system is therefore very unlikely to be even considered before the 2040s.
At that time, there would likely be a push for such a mission to contain at least a short-lived lander probe to Triton. Due to the distance to the sun, solar power is not feasible for sustained operation, and as for the Europa Lander, such a mission would have to rely on high-performance batteries or RTGs. \section{Interstellar objects} \subsection{Potential scientific goals} The first interstellar object was discovered less than five years ago, when 1I/2017 U1 (‘Oumuamua) was observed on 2017 October 19 by the Pan-STARRS1 telescope system and confirmed to be travelling on a hyperbolic orbit, i.e. not bound by the sun's gravity \parencite{meech_brief_2017}. The limited observations that were possible in the short observation window confirmed a rocky, red surface without any trace of the degassing that would be observed for a comet from the Solar System's Oort cloud \parencite{bannister_natural_2019}. Surprisingly, a second interstellar object was found only two years later, this time before perihelion, so that its dynamical behaviour during the approach to the sun could be observed, and the object 2I/Borisov could be confirmed to be an ice-rich comet \parencite{bodewits_carbon_2020, guzik_initial_2020}. The investigation of interstellar objects is obviously interesting: objects entering the solar system on hyperbolic trajectories are the only solid matter from outside the solar system that will be accessible to in situ characterisation in the foreseeable future. A lander mission would carry simple instruments to obtain the chemical composition of the top layers. The rigidity of the object, however, is difficult to assess from the outside, but the distinction of whether the object is a homogeneous body, an ice-rock mixture or even a "rubble ball", a very weakly consolidated object, is of high interest to constrain the source context.
The SESAME/CASSE seismic experiment on the Philae mission to comet 67P/Churyumov-Gerasimenko showed a mechanically extremely weak core below a harder surface layer \parencite{knapmeyer_structure_2018}. The latter is likely the result of previous encounters of the comet with the sun \parencite{groussin_thermal_2019}. Whether similar layering exists for true interstellar objects would be quite interesting to know. As the contact of the TAG sampler on the OSIRIS-REx mission to Bennu showed, even asteroids can have surprisingly low rigidity in their uppermost layers \parencite{berry_contact_2022}. \subsection{Seismicity} Interstellar objects of the size encountered so far are unlikely to sustain significant tectonic activity themselves. A possible mechanism for seismic sources is thermal stress induced by the approach to the sun or by cooling while leaving the solar system. Another option would be the combination of a slow impactor mission with a landed seismometer, or some kind of repeatable source on the lander, like the hammering devices used on Philae and InSight, or the mortars used on Apollo 14 and 16. \subsection{Mission perspectives} The observation of two interstellar objects within a relatively short time has triggered significant interest in developing a mission for in situ exploration of objects to be detected in the near future \parencite{hein_project_2019, seligman_feasibility_2018, castillo-rogez_approach_2019}. Proposals exist for missions that are prepared and wait in storage for the discovery of a suitable object, either on Earth or at the Sun-Earth L2 point. Even though fly-by missions are significantly easier, a lander using solar-electric propulsion could be feasible \parencite{hein_interstellar_2022}.
A landed mission would bear similarity to Philae on the ESA Rosetta mission and could perform seismic or acoustic investigations to constrain subsurface properties, either during landing \parencite{biele_landings_2015} or by listening to seismic waves excited during operation of a drill or similar instrument \parencite{knapmeyer_sesamecasse_2016, knapmeyer_structure_2018}. The intercept point for any interstellar object would likely be outside of Jupiter's orbit, and therefore solar energy is not an option for a small lander. The lifetime of a surface mission would therefore be limited by battery capacity, and seismic experiments would have to be coordinated with sampling or impactor operations. The primary mission of Philae on comet 67P, for example, sustained by the primary battery only, lasted about 68 hours. \section{Lessons} \subsection{Scattering} Terrestrial seismology builds on clearly separated arrivals of ground motion, called "phases". Only the fact that seismic waves travel mostly unperturbed through the Earth and thus arrive in short pulses made it possible to disentangle the plethora of signals reflected and converted at various layers and interfaces inside the planet. A necessary condition for such clean phases is that the length scale of heterogeneities inside the planet is larger than the wavelengths involved \parencite{aki_quantitative_2002}. The first seismograms observed on the moon showed that this condition is not fulfilled there and that instead, the lunar crust scatters all seismic phases beyond recognition \parencite{blanchette-guertin_investigation_2012}. The classical explanation for this is a high impact rate due to the lack of an atmosphere, combined with a lack of mechanisms to heal small cracks due to the absence of fluids, and very low intrinsic attenuation, again due to extremely low water content.
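The length-scale condition for clean phases can be made concrete with a toy comparison of wavelength against heterogeneity scale; the velocity and frequency values in the following sketch are illustrative assumptions, not measured values.

```python
# Toy check of the clean-phase condition: a seismic phase stays
# coherent only if the heterogeneity scale exceeds the wavelength.
# All numerical values here are illustrative assumptions.

def wavelength_m(velocity_m_per_s, frequency_hz):
    # lambda = v / f
    return velocity_m_per_s / frequency_hz

def phase_is_clean(heterogeneity_scale_m, velocity_m_per_s, frequency_hz):
    return heterogeneity_scale_m > wavelength_m(velocity_m_per_s, frequency_hz)

# An assumed 1 Hz crustal phase at 4 km/s has a 4 km wavelength:
lam = wavelength_m(4000.0, 1.0)
scattered = not phase_is_clean(100.0, 4000.0, 1.0)   # 100 m structures scatter it
clean = phase_is_clean(10_000.0, 4000.0, 1.0)        # 10 km layering leaves it clean
```

In this picture, a heavily fractured lunar-style crust with metre- to hectometre-scale cracks fails the condition for essentially all teleseismic frequencies, which is consistent with the diffuse seismograms described above.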
Preliminary analyses of Martian seismic data showed a surprisingly high amount of scattering \parencite{menina_energy_2021, karakostas_scattering_2021}, at least close to the InSight landing site. Since this scattering seems limited to the uppermost kilometers, investigations of "clean" mantle phases were possible nevertheless \parencite{duran_seismology_2022}. It is likely, however, that at least all airless rocky planets (i.e. Mercury) will be more similar to the Moon in terms of problematic scattering. Strongly cratered icy worlds like Ceres or Callisto might be similar, while the tidal heating in Europa, Titan and Enceladus may lead to ductile deformation of the crust and healing of heterogeneities. This scattering will affect single-station, three-component seismic measurements. It is possible that gradiometric measurements, e.g. measurements of rotational motion \parencite{bernauer_rotation_2021}, or distributed measurements at multiple locations \parencite{walter_distributed_2020}, could allow the detection of coherent wavefronts even in the presence of strong scattering. However, the sensitivity of such sensors is typically still too low for planetary applications \parencite{bernauer_exploring_2020}, even though the field is rapidly progressing. \subsection{Timing} The SESAME/CASSE experiment on the ESA Rosetta mission was, in 2014, the first seismic experiment to use vibrations from a mechanical hammer as a source. An issue encountered in the analysis was the limited coordination between instruments. A dedicated trigger line from the MUPUS hammer to the SESAME recording system was considered too complex in the preparation phase. Instead, the two instruments exchanged messages in a common part of the onboard computer memory to coordinate hammering and recording, and an on-ground assessment of the seven involved clocks and their individual drift rates was carried out during the evaluation \parencite{knapmeyer_sesamecasse_2016}.
This made the initial experimentation much more difficult \parencite{knapmeyer_structure_2018}. The Rosetta mission was launched in 2004 and the problem was encountered in 2014, but surprisingly, the same problem occurred again during the InSight HP$^3$ seismic experiment, launched in 2018, when the seismometer SEIS was listening for seismic waves produced by the hammer of the HP$^3$ heat flow probe \parencite{spohn_insight_2021, sollberger_reconstruction_2020}. The lack of a joint time signal between the two instruments HP$^3$ and SEIS meant that a convoluted process was necessary to infer the exact time of each hammer blow from the seismic signal itself, significantly increasing the uncertainty of the observation. Since the distance between any hammering or drilling instrument and a seismic sensor is unlikely to be more than a few meters and seismic velocities $>500$~m/s are expected, the precision of the joint timing source needs to be $<100~\mu$s, which is finer than the typical resolution of spacecraft bus clocks. \subsection{Bandwidth} InSight was able to transmit 6 seismic channels of 20 sps each over much of the mission, due to the availability of orbiters with relay capacity (Odyssey, Mars Reconnaissance Orbiter and the ESA Trace Gas Orbiter). This far exceeded the pre-mission planning, where a single 10 sps vertical channel and the 3 native channels of the seismometer were foreseen. In retrospect, one event type, the "super high frequency events", likely thermal cracking near the lander, would not have been detected with the original configuration \parencite{dahmen_super_2020}, while another type, the very high frequency events, would have been much more difficult to spot, given that their energy is mainly on the horizontal channels above 2 Hz. It was specifically foreseen to retrieve higher-bandwidth data of specific events by manual requests.
Such events were either to be detected in the low-bandwidth streams or in one specific "ESTA-SP" channel that contained the integrated signal energy in a narrow frequency band above 10 Hz. This process worked successfully overall, with the caveat that the ESTA-SP channel was too polluted by glitches and wind-related transient signals to be of much use. Any future seismic mission (save for a lunar one) will likely be dramatically more constrained in terms of bandwidth, so that only a subset of data can be transmitted. At the same time, low sampling rates risk omitting interesting signals of local events. Classic, lossless seismic compression \parencite{ahern_seed_2012} reduces the amount of data by 30-50\%, which is far from enough. Another possibility would be advanced pre-processing of data onboard. Figure \ref{fig:spec_compression}a shows a spectrogram of 26 hours of vertical-component VBB data of InSight, as described by \parencite{giardini_seismicity_2020}. Over the course of the day, the wind noise increased dramatically over noon, masking all potential marsquakes. In the evening, S0235b, the highest-SNR event of the first Martian year, can be seen. One day of such data is equivalent to $24\,\mathrm{h} \cdot 3600\,\mathrm{s/h} \cdot 20\,\mathrm{sps} \cdot 24\,\mathrm{bit} = 41.4$~MBit or 5.2 MByte of seismic data. As described in \parencite{clinton_marsquake_2021}, operationally, all significant marsquakes can be easily detected in one such spectrogram. Figure \ref{fig:spec_compression}b shows the same spectrogram as a JPEG graphic, stored with high compression. The 3 marsquakes can still be detected in this graphic, yet the file size of this graphic is 108 kByte, i.e. 2\% of the original seismic data. From this small file, interesting time windows could easily be identified for the transfer of data at full bandwidth.
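The arithmetic behind this data budget can be reproduced in a few lines; a minimal sketch, where the 24-bit sample encoding is an assumption for illustration and the 108 kByte JPEG size is the value quoted above.

```python
# Back-of-the-envelope data budget for a daily spectrogram overview.
# Assumptions: one channel, 24 hours, 20 samples per second, 24 bit
# per sample; the JPEG size is the value quoted in the text.

def raw_volume_bits(hours, samples_per_second, bits_per_sample):
    return hours * 3600 * samples_per_second * bits_per_sample

raw_bits = raw_volume_bits(24, 20, 24)     # ~41.5 MBit of raw samples
raw_mbyte = raw_bits / 8 / 1e6             # ~5.2 MByte
jpeg_bits = 108 * 1024 * 8                 # 108 kByte compressed overview
fraction = jpeg_bits / raw_bits            # ~2 % of the raw data volume
```

The same arithmetic scales directly to multi-channel configurations: six channels at 20 sps would multiply the raw volume sixfold, while the size of a compressed overview image barely changes.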
Whether spectrograms or similar wavelet-based time-frequency representations are the optimal way of producing low-bandwidth daily overview files needs to be investigated and weighed against the computational power available onboard. For certain mission concepts in the outer solar system, e.g. the Europa Lander \parencite{hand_report_2017}, the short mission duration will require the lander to execute its scientific campaign mostly autonomously, without "Earth in the loop". This may mean that seismic events need to be identified autonomously by the lander itself. Given the phenomenology of non-seismic events, even on terrestrial planets \parencite{ceylan_companion_2021, dahmen_resonances_2021-1, stahler_geophysical_2020}, let alone on icy ocean worlds, this process is non-trivial and will require significant theoretical work over the next years. Given the surprises we saw with both moonquakes and marsquakes, it is unlikely that a fully autonomous processing can find all kinds of events without prior knowledge of typical examples. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/spectrogram_orig.png} \includegraphics[width=0.73\textwidth]{figures/spectrogram_comp.jpg} \caption{Top: Spectrogram of one Sol of seismic data, plotted in the style of \parencite{giardini_seismicity_2020}, fig. 1a. Three marsquakes can be identified easily. Bottom: Same plot, subjected to lossy JPEG compression at 10\% quality. The size of this figure is 108 kByte, i.e. 2\% of the original seismic data, yet the marsquake time windows can still be identified.} \label{fig:spec_compression} \end{figure} \section{Conclusion} The InSight mission to Mars has demonstrated that even a single, two-year seismometer mission is enough for a first global determination of the deep interior of a planet. It confirmed geodetic observations from orbit concerning the core radius, as well as imaging-based inferences of current-day tectonics in Cerberus Fossae.
Furthermore, it made it possible to constrain previously inaccessible parameters, such as the crustal thickness or the rate of quakes. Finally, unexpected discoveries were made, such as the high amount of scattering in the shallow crust. The seismic observations, such as travel times, also provide a quantitative dataset against which future interior models of Mars can be tested for the coming decades. A similar dataset would be highly valuable for any of the other planets and moons of the solar system. Yet, given the difficulty and cost of landing, it needs to be weighed against orbital data (geodetic, radar or image based), which is easier to obtain. From a purely subjective perspective, four questions stand out that can only be addressed by seismology. \begin{enumerate} \item The tectonic context of samples on Ceres. A Ceres sample return mission in Occator crater would rely heavily on knowledge of the age and deposition mechanism of these briny samples. Even a low-sensitivity seismometer could pick up small quakes near the landing site to determine whether signals related to active cryovolcanism exist and suggest a young age of the deposits. \item The ice thickness and ocean depth of Europa and Titan. The Europa Clipper radar will have a good shot at finding a reflection from the ice-ocean interface, but the success of this measurement is subject to the poorly constrained electromagnetic absorption in the icy crust. Yet, the crustal thickness and ocean depth are the strongest quantitative constraints on the composition of the ocean and thus its habitability. A seismic experiment could give an independent constraint. Hopefully, the Dragonfly seismometer will perform well enough to prove this concept. \item The formation history and tidal heating of the large moons. The four Galilean moons differ so strongly in their interior profiles (as constrained from geodetic data) that it is difficult to judge whether our formation models work on them.
Given that tidally heated planets and moons in resonance might be significantly more common in other planetary systems (think of Trappist-1), constraining even only the interior of Ganymede or Io seismically would go a long way in calibrating the models we apply to exoplanets. \item The meteoroid impact rate throughout the solar system. At the time of writing, there are no confirmed detections of meteoroid impacts by InSight, even though candidate signals exist and are under review. The rate of crater formation is assumed to be known when determining the age of planetary surfaces, yet the rate of new impacts cannot be constrained well, due to the strong effect of the target material on the observability of fresh craters. A seismic observatory on a planet can constrain this rate directly, which also has direct implications for estimating the density of small bodies in the solar system and the resulting hazard for Earth and interplanetary spaceflight. \end{enumerate} The experience from InSight will make it possible to build smaller, low-power seismometers, which can be added to any future landed mission at a low cost in terms of complexity and energy. \printbibliography \end{document}
\section{Introduction}\label{introduction} Network Intrusion Detection Systems (NIDSs) are an important defence mechanism to protect computer networks against an increasing diversity, sophistication and volume of cyber attacks. Due to the tremendous progress in Machine Learning (ML) over the last few years, in particular Deep Neural Networks (DNNs), there has been a lot of recent research into leveraging the power of novel ML models for Network Intrusion Detection Systems. In particular, supervised ML methods have shown great potential over traditional signature-based NIDSs. As with any supervised ML problem, the availability of high quality labelled datasets is absolutely critical. For many years, the KDD99 dataset \cite{KDD99} has been the most widely used benchmark dataset for the evaluation of NIDSs. However, it is well established that the dataset has significant limitations~\cite{UNSW-NB15}, the main one being its age: it was created over two decades ago. Given that Youtube, Facebook, Spotify, mainstream cloud computing and smartphones did not exist when the dataset was created, one can appreciate that the pattern of network traffic has undergone a profound change since then. Furthermore, the type and sophistication of network attacks have undergone an equally dramatic change in the last 20+ years\footnote{It is quite surprising that the KDD99 dataset is still used today.}. The need for more recent and relevant NIDS datasets has been clearly identified~\cite{UNSW-NB15}, and has led to the development of a range of new datasets over the last few years. In contrast to ML application areas such as image classification, where high quality benchmark datasets can relatively easily be generated, this is a much harder problem in the context of NIDSs. Ideally, we would have datasets collected from real production networks, with realistic network patterns of benign traffic, together with a wide range of correctly labelled attack traffic.
Since such ideal NIDS datasets are not readily available, researchers have recently developed a range of new synthetic datasets, which have become the new benchmarks. These synthetic datasets are typically generated in a controlled and relatively small simulation or test-bed environment, where both normal traffic and attack traffic are created and labelled. Each of these datasets typically has its own dedicated feature set, which is collected and represented in a flow-based format. Over the past few years, researchers have extensively used these synthetic datasets to evaluate a wide range of newly proposed ML-based intrusion detection models and methodologies. Recently published results show increasingly excellent classification performance, approaching 100\% across the key performance metrics, such as accuracy, F1-score, etc. Consequently, one could assume that the problem of ML-based NIDS has been largely solved. Arguably, this is not quite the case, and the excellent results achieved in recently published academic research have not yet translated into practical, near-perfect intrusion detection systems deployed in real-world production networks. This apparent gap has motivated the research presented in this paper. ML generally assumes that the statistical properties of the training data are the same as those of the testing data. Therefore, in order for the performance of an ML-classifier trained and evaluated on synthetic datasets to generalise and translate to a real network scenario, the statistical properties of both datasets would have to be similar. Our aim was therefore to compare the statistical properties of synthetic NIDS datasets with those of network traffic obtained from real, large-scale production networks.
In our analysis, we focused on benign (non-attack) traffic, due to the lack of attack labels in the production network traffic available to us. For our analysis, we have considered three recently published and widely used synthetic datasets: UNSW-NB15~\cite{UNSW-NB15}, CIC-IDS2017~\cite{CIC} and TON-IOT~\cite{TON-IOT}. The datasets contain between 44 and 85 features, in different formats. We further used two datasets from large-scale production networks, one collected in 2017 from a medium-sized Australian ISP, and the other in 2019 from the University of Queensland network, with $\sim$100 and $\sim$700 flows per second respectively. The real-world datasets were in NetFlow/IPFIX format, which is widely supported by network devices. In order to enable the comparison of the five datasets, we require them to be in the same format. For this, we leveraged our previous work \cite{netflow_datasets}, where we converted synthetic NIDS datasets from their proprietary feature sets and formats to the standard NetFlow/IPFIX format\footnote{Datasets are available here ...}. In \cite{sarhan2021standard}, we argue the benefits of having a standard feature set and dataset format, and also show that, somewhat surprisingly, ML-classifiers achieve higher performance using the NetFlow feature set and format, compared to the original versions. For our comparison, we considered 9 practically relevant statistical features, such as flow duration, flow size and packet size, plus a number of IP address and port number related features. We compared the statistical distributions via box plots and CDFs of each feature across the five considered datasets. We further quantified the distance between the different feature distributions using the Wasserstein metric~\cite{Ramdas2017}. Finally, we calculated and visualised the embedding of the 9 features into a two-dimensional feature space, using four different embedding algorithms. Our analysis provided the following key findings.
The two real-world datasets, despite the fact that they had been collected from quite different networks, exhibit a high degree of similarity in their traffic feature distributions. Similarly, the synthetic datasets are quite similar amongst themselves with regard to most traffic features. However, and most interestingly, our analysis found a highly significant difference between the synthetic datasets and the real-world datasets with regard to most of the considered feature statistics. To the best of our knowledge, this paper provides the first analysis of recent synthetic NIDS datasets and their comparison to real-world traffic. We believe our results are relevant due to the extensive use of these synthetic datasets as a benchmark to evaluate ML-based NIDS models and algorithms, and they motivate future research into the development of new datasets that more closely match the properties of traffic in large-scale real-world networks. This is an important goal in order to allow the translation and generalisation of the excellent NIDS performance achieved in academic research into NIDSs that are practically relevant and widely deployed in real-world settings. The rest of this paper is organised as follows. After explaining the background in the next section, the three synthetic datasets, along with the two real-world network traffic datasets that are used as references for this purpose, are introduced in Section \ref{Datasets}. The next section provides various qualitative comparisons between the feature distributions of the synthetic datasets and the real-world datasets. Then, Section \ref{Analysis} quantifies the differences in feature distributions between the two groups of datasets. Section \ref{Related Works} discusses related work in the field, and Section \ref{Conclusion} concludes the paper and summarises the insights from our study.
\section{Background}\label{Background} \subsection{Network Intrusion Detection Systems} The main approaches for building network-based intrusion detection systems (IDSs) include \textit{misuse detection} or \textit{rule-based} detection, \textit{anomaly detection}, and a third, \textit{hybrid}, approach, which tries to combine the benefits of the first two \cite{MonowarH.Bhuyan2014}. In misuse detection systems, detection is based on known attack signatures or rules. In anomaly detection, most methods define a normal traffic model and then identify anomalies as deviations from this normal model. While misuse-based IDSs have low false positive ratios, they fail to detect new types of attacks / intrusions, since they are designed to detect specific types of intrusions. Anomaly-based IDSs, on the other hand, have higher false positive ratios but are potentially capable of detecting unknown attacks / intrusions. Anomaly-detection systems are implemented using various techniques, and hence there are various categories of anomaly-based IDSs \cite{Hung-JenLiao2013}. The main approaches for network anomaly detection include \textit{Statistical}, \textit{Machine Learning}, \textit{Soft Computing}, \textit{Knowledge-based}, and \textit{Combination Learner} methods. In the statistical methods, a model is generally fitted to the given data (usually normal behaviour), either via parametric or non-parametric techniques. Then, by applying statistical inference tests, the probability that an instance is generated by the model is calculated. If this probability is low, the instance is considered an anomaly \cite{MonowarH.Bhuyan2017}. In the soft computing approach, which includes methods based on Genetic Algorithms (GA), Fuzzy Set Theoretic (FST), Rough Set (RS), and Ant Colony and Artificial Immune Systems (AIS), persistent features of data are detected and categorized without environmental feedback.
For instance, in methods based on GA, heuristic search techniques based on evolutionary ideas are used to learn the user profiles, and in methods based on FST, fuzzy rules are used to determine the likelihood of specific or general network attacks \cite{MonowarH.Bhuyan2014}. In knowledge-based methods, network events are matched against predefined rules or patterns of attacks, i.e. attack signatures. Several approaches have been taken in the knowledge-based methods, including expert systems, rule-based, ontology-based, logic-based and state-transition analysis \cite{MonowarH.Bhuyan2017}. The main ingredient of all combination learner methods, which include ensemble-based, fusion and hybrid approaches, is combining the advantages of different methods to enhance the performance of anomaly detection. In the case of ensemble-based and fusion approaches, various classifiers are combined, while the hybrid approach enhances anomaly detection by adding the advantages of misuse detection methods~\cite{MonowarH.Bhuyan2014}. Among the different methods utilised for implementing anomaly-detection IDSs, systems based on \textit{machine learning} have been very common~\cite{kumar}. This approach includes two classes of methods: clustering / outlier-based and classification-based. While clustering-based methods do not require labelled datasets for their training / operation, their evaluation necessitates using labelled datasets. Training and evaluation of the second sub-category of the machine learning based approach, i.e. classification-based methods, require labelled datasets in which the classes / categories of data are known and included in the dataset as \textit{labels}. This allows machine learning algorithms to learn the patterns of the various classes and accordingly classify the dataset records. In the field of anomaly detection, these classes are mainly the \textit{normal} and \textit{anomaly} classes.
The anomaly class can be further divided into different types of anomalies, such as network failures, intrusions and other types of attacks. \section{Datasets Explored in This Study}\label{Datasets} In this study, we have used three synthetic / testbed-based IDS benchmark datasets, which we compare with two real-world network traffic records. \subsection{Synthetic Datasets} The synthetic / testbed-based datasets selected for this study are among the most recent IDS benchmark datasets, published between 2015 and 2019. \subsubsection{UNSW-NB15 Dataset} This is the oldest among the three selected datasets, published in 2015 by the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS)~\cite{UNSW-NB15}. The dataset consists of 2,540,044 flows, including 87.35\% benign and 12.65\% attack flows. The traffic flows are represented by 49 features generated by Argus and Bro-IDS from the traffic of a test bed in which 9 different attack types are combined with normal background testbed traffic. This dataset is also published in the form of raw pcap files. \subsubsection{CIC-IDS2017 Dataset} This dataset has been generated by the Canadian Institute for Cybersecurity in 2017 \cite{CIC}. The authors state that their main priority in generating this dataset was realistic background traffic. They used their own network analysis tool, CICFlowMeter, to generate a dataset in which flows are labeled based on time stamp, source and destination IPs, source and destination ports, protocols and attack. They also provide the dataset represented by the extracted features, along with the feature definitions. They have included 21 different network attacks in this dataset, covering most of the major network breaches such as DoS, DDoS, different scans, Botnet, and various types of Web and Infiltration attacks. They ran each class of attacks separately and provided them in separate CSV files, as they did for the normal background traffic of their test bed.
The dataset is also published in raw pcap format. \subsubsection{TON-IOT Dataset} This is a newer dataset published by ACCS in 2019, encompassing a broader scope compared to their previous dataset~\cite{UNSW-NB15}; it includes traffic records from their testbed network and IoT devices, along with logs of the operating systems~\cite{TON-IOT}. The network traffic records, explored in our work, are represented in the form of 22,339,021 network flows using 44 features extracted by Bro-IDS. The background / benign traffic constitutes 3.56\% of the dataset, and the remaining flows (96.44\%) are records of 9 different attacks. This dataset is also published in the raw pcap format. \subsection{Real-World Network Traffic Records} In order to compare the above testbed-based datasets with real-world traffic, we collected two sets of network traffic records. Since we are looking for aspects of network traffic that must be valid in any network type, we selected two completely different networks. This is an important basis for our study, to avoid biased judgement and to be able to generalise the results to any network type. As such, the two network traffic records explained in this section are not only collected from two different types of networks, but they also belong to two different organizations with different administration requirements and policies, which serve different types of customers. In addition, these two sets of network traffic have been recorded in two different time periods, 2017 and 2019, to maximise the possibility of result generalisation. \subsubsection{Internet Service Provider Traffic Dataset} The first real-world dataset utilised in this study consists of the NetFlow records from an Australian Internet Service Provider (ISP) backbone network. This ISP is a provider of enterprise-grade managed services such as Internet, MPLS, VoIP, and cloud, with dual points of presence (PoPs) in all major Australian capital cities, New Zealand, Hong Kong and Manila.
Most of the customers of the ISP are businesses and companies with multiple offices, sometimes thousands of offices around the country, that use the ISP's services for connectivity and other network and Internet services. For collecting the traffic records, we used the nprobe \cite{Ntopng2017} software on a server to extract NetFlow records with about 20 fields. The whole traffic of the ISP's backbone network was mirrored and aggregated towards the nprobe server, where NetFlow records were extracted and then stored on a separate collector server. These records reflect the whole traffic in the monitored part of the ISP's backbone network, without any sampling or exemptions. We collected these NetFlow records for about 30 days in June 2017, resulting in about 400 GB of flow records. \subsubsection{University of Queensland (UQ) Dataset} The second real-world dataset we have used in this study includes NetFlow records collected from the LAN of the Faculty of Engineering, Architecture and Information Technology (EAIT), The University of Queensland. The EAIT faculty consists of five schools: Architecture, Chemical Engineering, Civil Engineering, Information Technology and Electrical Engineering, and Mechanical and Mining Engineering. We used a similar setup for collecting this dataset, i.e. the nprobe NetFlow exporter with the same set of fields that we used for the ISP NetFlows. The data collection at this site started in early February 2019 and continued for about 50 days. The recorded data includes all the traffic flows in the monitored part of the network, including all wired and wireless communications, the servers' traffic and all workstations in all subsidiary schools of the EAIT faculty, totalling about 4 TB. Table~\ref{tab: datasets} shows summary information for the five datasets studied in this paper. This information includes the type of dataset, i.e.
whether it is a synthetic or real-world dataset, the percentage of attack and benign flows among the total records of the dataset, the number of features, the year the dataset was collected / published, the format, and the tools used to generate / collect the dataset.
\begin{table}[!t]
\tiny
\caption{Summary information of datasets studied in this paper}
\label{tab: datasets}
\centering
\begin{tabularx}{0.85\columnwidth}{ |m{2.2cm} |>{\centering\arraybackslash}X |>{\centering\arraybackslash}X |>{\centering\arraybackslash}X |m{0.8cm} |>{\centering\arraybackslash}X |>{\centering\arraybackslash}X| }
\hline
&&&&&&\\
\centering{\textbf{Dataset}} & \textbf{Synthetic / Real-world} & \textbf{Attack (\%)} & \textbf{Number of Features} & \centering\textbf{Year} & \textbf{Format} & \textbf{Generation Tool} \\
&&&&&& \\
\hline
&&&&&& \\
\centering\textbf{UNSW-NB15}~\cite{UNSW-NB15} & Synthetic & 12.65 & 49 & \centering2015 & proprietary flow format & Argus / Bro \\
&&&&&& \\
\hline
&&&&&& \\
\centering\textbf{CIC-IDS2017}~\cite{CIC} & Synthetic & 28.16 & 85 & \centering2017 & proprietary flow format & CICFlowMeter \\
&&&&&& \\
\hline
&&&&&& \\
\centering\textbf{TON\_IOT}~\cite{TON-IOT} & Synthetic & 96.44 & 44 & \centering2019 & proprietary flow format & Security Onion / Bro \\
&&&&&& \\
\hline
&&&&&& \\
\centering{\textbf{ISP}~[Our]} & Real-World & 0 & 20 & \centering2017 & NetFlow & nprobe \\
&&&&&& \\
\hline
&&&&&& \\
\centering{\textbf{UQ}~[Our]} & Real-World & 0 & 20 & \centering2019 & NetFlow & nprobe \\
&&&&&& \\
\hline
\end{tabularx}
\end{table}
\section{Statistical Analysis of Network Traffic Features}\label{Comparison} As discussed in Section \ref{NAC}, anomalies typically change the statistical distributions of some traffic features~\cite{Lakhina, Soule2007, histo_anomaly}.
Hence, comparing the statistical distributions of benchmark datasets that contain anomalies / attacks against real-world traffic records with an unknown attack status cannot be expected to yield meaningful results. We therefore needed to make sure that the selected parts of both groups of datasets are attack free (normal traffic). As such, we had two issues to address before starting our analysis: first, to select attack-free parts of the benchmark datasets, and second, to make sure the selected parts of the real-world traffic do not include attacks. The first problem was relatively easy, as normal / background traffic was provided in separate files in UNSW-NB15~\cite{UNSW-NB15}, CIC-IDS2017~\cite{CIC} and TON\_IOT~\cite{TON-IOT}, and we used the provided metadata to exclude all the attack flows. The second task was rather difficult. Initially, we asked for the attack / anomaly logs of the ISP and our university IT department relating to the selected part of the traffic. Then, we applied our under-development AI-based anomaly detection algorithms to these recordings. Finally, any anomalies detected by the algorithms were investigated in complete detail and discussed with the corresponding network administrators for a final decision. In this way, an anomaly-free part of the traffic, equivalent to 24 hours of both the ISP and UQ traffic records, was selected. Table~\ref{tab: features} shows the list of traffic features used in this comparison of statistical distributions. The first column is the name of the feature, the second column lists the NetFlow (V9) fields used to calculate the feature, and the third column gives the formula / equation applied for the calculation. Since some of the needed NetFlow fields were not provided in the originally published benchmark datasets, we had to use their provided packet captures (pcap files) to generate the NetFlow records.
To this end, after determining the attack-free parts of the benchmark datasets, as explained above, the corresponding pcap files were identified and fed to the nprobe \cite{Ntopng2017} software to generate NetFlow records. All the NetFlow fields utilised in our study have previously been used for characterising anomalous traffic in at least one study~\cite{Lakhina, Soule2007, histo_anomaly}, except \texttt{L7\_PROTO}, which was not easily provided by devices in the past. As such, comparing these datasets with real-world traffic, in terms of the statistical distributions of these features, provides a realistic measure of similarity to real-world traffic. This is a meaningful benchmark, so far missing, that can indicate the suitability of these datasets for evaluating IDS algorithms and systems in real-world scenarios. In this study we provide two sets of comparisons. Initially, in this section, we qualitatively investigate the distributions of these features by comparing their \textit{boxplots} and \textit{Cumulative Distribution Functions (CDFs)}, and later their embeddings. This enables us to observe the difference in the distribution of these features between the real-world traffic and the testbed-based datasets. We then quantify these qualitative comparisons using quantitative distance metrics in the next section.
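To make the feature definitions in Table~\ref{tab: features} concrete, the sketch below derives them from flow records with pandas. It is a minimal illustration rather than the paper's actual pipeline, and it assumes the records have been loaded into a DataFrame whose columns are named after the NetFlow V9 fields (a hypothetical layout; actual nprobe output may differ):

```python
import pandas as pd

def derive_features(flows: pd.DataFrame) -> pd.DataFrame:
    """Derive per-flow and per-key features from NetFlow-style records.

    Assumes columns named after the NetFlow V9 fields: FIRST_SWITCHED,
    LAST_SWITCHED, IN_BYTES, OUT_BYTES, IN_PKTS, OUT_PKTS, SRC_IP,
    DST_IP, SRC_PORT, DST_PORT, L7_PROTO.
    """
    out = pd.DataFrame(index=flows.index)
    pkts = flows["IN_PKTS"] + flows["OUT_PKTS"]
    byts = flows["IN_BYTES"] + flows["OUT_BYTES"]
    # Per-flow features.
    out["flow_duration"] = flows["LAST_SWITCHED"] - flows["FIRST_SWITCHED"]
    out["flow_size"] = byts
    out["packet_time"] = out["flow_duration"] / pkts   # average time per packet
    out["packet_size"] = byts / pkts                   # average bytes per packet
    # Cardinality features: distinct counterparts per key.
    out["n_src_ip_per_dst_ip"] = flows.groupby("DST_IP")["SRC_IP"].transform("nunique")
    out["n_src_ip_per_dst_port"] = flows.groupby("DST_PORT")["SRC_IP"].transform("nunique")
    out["n_dst_ip_per_src_port"] = flows.groupby("SRC_PORT")["DST_IP"].transform("nunique")
    out["n_dst_port_per_src_port"] = flows.groupby("SRC_PORT")["DST_PORT"].transform("nunique")
    out["n_l7_per_dst_port"] = flows.groupby("DST_PORT")["L7_PROTO"].transform("nunique")
    return out
```

The `groupby(...).transform("nunique")` calls broadcast the per-key distinct counts back to every flow, so the result aligns row-for-row with the input records.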
\begin{table}[!t] \caption{List of traffic features investigated in this study, along with NetFlow fields utilised to calculate these features and how they are calculated} \label{tab: features} \resizebox{\columnwidth}{5cm}{ \begin{tabularx}{\columnwidth}{ | p{6.4cm} | X | p{3.4cm} | } \hline \textbf{Feature} & \textbf{NetFlow Fields} & \textbf{How to } \textbf{Calculate} \\ \hline \texttt{flow duration} & \texttt{FIRST\_SWITCHED(FS)}, \hspace{2cm} \texttt{LAST\_SWITCHED(LS)} & \texttt{LS - FS} \\ \hline \texttt{flow size (in Bytes)} & \texttt{IN\_BYTES(IB)}, \hspace{2cm} \texttt{OUT\_BYTES(OB)} & \texttt{IB + OB} \\ \hline \texttt{packet time (average)} & \texttt{FIRST\_SWITCHED(FS)}, \texttt{IN\_PKTS(IP)}, \texttt{LAST\_SWITCHED(LS)}, \texttt{OUT\_PKTS(OP)} & \[\textstyle\frac{\text{\texttt{LS - FS}}}{\text{\texttt{IP + OP}}} \hspace{3cm} \] \\ \hline \rule{0pt}{10pt} \texttt{packet size} & \texttt{IN\_BYTES(IB)}, \hspace{4cm} \texttt{OUT\_BYTES(OB)}, \hspace{4cm} \texttt{IN\_PKTS(IP)}, \hspace{4cm} \texttt{OUT\_PKTS(OP)}& \[\textstyle\frac{\text{\texttt{IB + OB}}}{\text{\texttt{IP + OP}}} \hspace{3cm} \] \\ \hline \texttt{number of source IPs per destination IP} & \texttt{SRC\_IP}, \texttt{DST\_IP} & \texttt{COUNT(SRC\_IP)} \hspace{4cm} per \texttt{DST\_IP} \\ \hline \texttt{number of source IPs per destination PORT} & \texttt{SRC\_IP}, \texttt{DST\_PORT} & \texttt{COUNT(SRC\_IP)} \hspace{4cm} per \texttt{DST\_PORT}\\ \hline \texttt{number of destination IPs per source PORT} & \texttt{DST\_IP}, \texttt{SRC\_PORT}& \texttt{COUNT(DST\_IP)} \hspace{4cm} per \texttt{SRC\_PORT}\\ \hline \texttt{number of destination Ports per source port} & \texttt{SRC\_PORT}, \texttt{DST\_PORT}& \texttt{COUNT(DST\_PORT)} \hspace{4cm} per \texttt{SRC\_PORT} \\ \hline \texttt{number of L7 protocols per destination port} & \texttt{L7\_PROTO}, \texttt{DST\_PORT} & \texttt{COUNT(L7\_PROTO)} \hspace{4cm} per \texttt{DST\_PORT} \\ \hline \end{tabularx} } \end{table} \subsection{Qualitative
Comparison of Feature Distributions} \label{Basic Statistical Analysis} In this section, we provide boxplots and CDFs of all the features listed in Table~\ref{tab: features} for all five datasets, including the three testbed-based datasets and the two real-world traffic records, side by side. \begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.5\columnwidth, height=5cm]{Files/figs/flow_duration_Comscentre.png}}% \subfloat[\centering ] {\includegraphics[width=0.5\columnwidth, height=5cm]{Files/figs/flow_duration_UQ_4samples.png}}% \qquad \subfloat[\centering ] {\includegraphics[width=0.5\columnwidth, height=5cm]{Files/figs/flow_size_Comscentre.png}}% \subfloat[\centering ] {\includegraphics[width=0.5\columnwidth, height=5cm]{Files/figs/flow_size_UQ_4samples.png}}% \caption{Comparing the flow durations and flow sizes within the two real-world datasets. a) and b) show the flow durations of the ISP and sampled UQ datasets, and c) and d) show their flow sizes correspondingly. In each figure, the selected sample day is highlighted in blue}% \label{within datasets} \end{figure} \subsubsection{Feature Variability Within the Dataset} Before comparing the feature distributions between groups, we investigated the feature distributions within various samples of the same dataset. Figure \ref{within datasets} shows two examples of comparing feature distributions within individual datasets. Figures \ref{within datasets}-a and b show the distribution of \textit{flow durations} for different days of the ISP and sampled UQ datasets ($\sim 20$ samples per minute) respectively, and Figures \ref{within datasets}-c and d show the distribution of \textit{flow sizes} for different days of the ISP and sampled UQ datasets respectively. In all figures, the red circles show the median, the red crosses show the mean, and the lines / bars show the standard deviation (STD) of the feature.
The y-axis is shown in logarithmic scale for the flow sizes (Figures \ref{within datasets}-c and d) due to the presence of very large flows, which compress the main part of the feature range. As seen, while there are variations in the distribution of features among different days of both datasets, the main characteristics of the distributions are similar, i.e. the means and medians are very close and the STDs are in roughly the same range. This clearly indicates that the distribution of these features is an inherent quality of these datasets and that its comparison is indeed meaningful. In addition, it shows that the statistical parameters of the selected sample day from each dataset, highlighted in blue, are not significantly different from those of the rest of the dataset. \subsubsection{Flow Duration} Figure \ref{flow duration} shows the distribution of flow duration for all five datasets. In Figure \ref{flow duration}-a the three \begin{figure}[t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/flow_duration_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/flow_duration_CDF_c.pdf}}% \caption{Comparing the flow duration in five datasets a) boxplots, and b) CDFs}% \label{flow duration} \end{figure} \noindent left-most boxplots are the three synthetic datasets and the two right-most are the real-world traffic records. % In all boxplots shown in this paper, the \textit{whiskers} extend a maximum distance of 1.5 times the IQR above and below the IQR. % As seen, while the two real-world datasets have IQRs completely different from all three synthetic datasets, they overlap significantly with each other. This is further confirmed by Figure \ref{flow duration}-b, in which the CDFs of the two real-world traffic records move close together, and far from the three synthetic datasets, all the way to the right corner of the figure.
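The empirical CDFs compared in these figures can be computed directly from the per-flow samples. The following sketch uses illustrative numbers, not the paper's data, to show the standard construction:

```python
import numpy as np

def empirical_cdf(values):
    """Return (x, F(x)): sorted sample values and the fraction of the
    sample less than or equal to each value."""
    x = np.sort(np.asarray(values, dtype=float))
    f = np.arange(1, x.size + 1) / x.size
    return x, f

# Toy example: a heavy-tailed sample vs. a constant one. The heavy tail
# drags the CDF's right edge out to x = 100, which is why some features
# are plotted on a logarithmic horizontal axis.
a = np.r_[np.full(90, 1.0), np.full(10, 100.0)]
b = np.full(100, 1.0)
xa, fa = empirical_cdf(a)
xb, fb = empirical_cdf(b)
```

Overlaying `(xa, fa)` and `(xb, fb)` as step plots for each dataset yields curves comparable to the CDF panels in the figures.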
\subsubsection{Flow Size (in Bytes)} The next feature distribution compared between these five datasets is the flow size (in Bytes). Figure \ref{flow size} compares the distribution of flow sizes in (a) by boxplots and in (b) by CDFs. While in Figure \ref{flow size}-a there is a meaningful distinction between the IQRs of the two synthetic datasets TON\_IOT and UNSW\_NB15 and the two real-world datasets, the IQR of CIC has a major overlap with the two real-world datasets. In addition, the overlap between the IQRs of the two real-world datasets is still significant. This is further confirmed by investigating the CDF of flow size in these datasets, as shown in Figure \ref{flow size}-b. Here, we have used a logarithmic scale for the horizontal axis, because the very large outlier values would otherwise compress the main part of the curves. The conclusion from Figure \ref{flow size}-a is confirmed here as well: the two real-world datasets stay close together from beginning to end, and CIC also moves closely with them, but TON\_IOT and UNSW\_NB15 follow a separate path. It is worth mentioning that we also computed similar graphs for the distribution of flow sizes in number of packets, i.e. the total number of packets in each flow \texttt{(IP + OP)}, but since the results were very similar to the flow sizes in Bytes (Figure \ref{flow size}) we chose not to include them. \begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/Total_flow_size_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/Total_flow_size_CDF_c.pdf} }% \caption{Comparing the flow sizes (in Bytes) in five datasets a) boxplots, and b) CDFs}% \label{flow size} \end{figure} \subsubsection{Packet Time (average)} Figure \ref{packet time} shows the distribution of packet times for the five datasets.
The average packet time / duration is computed by dividing the flow duration by the total number of packets in the flow, as shown in Table~\ref{tab: features}. Since it takes into account the flow duration and the number of packets at the same time, it reflects traffic characteristics in two dimensions: time and volume. Again, the similarity between the two real-world datasets, and their distance from the synthetic datasets, is very clear. \begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/avg_Total_packet_duration_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/avg_Total_packet_duration_CDF_c.pdf}}% \caption{Comparing the packet times in five datasets a) boxplots, and b) CDFs}% \label{packet time} \end{figure} \subsubsection{Packet Size} Figure \ref{packet size} shows the packet size distribution for the five datasets. Although it is hard to separate the real-world and synthetic datasets in Figure \ref{packet size}-a, due to the major overlap between the IQRs of both groups, further investigation, as seen in Figure \ref{packet size}-b, reveals a pattern similar to the previous features. While the distinction between the groups is clear, the difference in this case is not as significant as for the previous features. Still, the phenomenon of the two real-world datasets moving close together, separate from the others, from the beginning of the CDF to its end, is observed here as well. The horizontal axis in Figure~\ref{packet size}-b is again in logarithmic scale, to highlight the main part of the graph, which is otherwise compressed by outliers with very large values.
\begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/Total_packet_size_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/Total_packet_size_CDF_c.pdf}}% \caption{Comparing the packet sizes in five datasets a) boxplots, and b) CDFs}% \label{packet size} \end{figure} \subsubsection{Number of Source IPs per Destination IP} The next row of Table \ref{tab: features} specifies a feature computed by counting the number of source IP addresses per destination IP address. This feature is particularly important for the detection and identification of DDoS attacks. \begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nsrcIP_per_dstIP_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nsrcIP_per_dstIP_CDF_c.pdf}}% \caption{Comparing the number of source IPs per destination IP in five datasets a) boxplots, and b) CDFs}% \label{num src IP per dest IP} \end{figure} Figure \ref{num src IP per dest IP} shows the distribution of this feature for the five datasets, in (a) using boxplots and in (b) using CDFs. As seen in Figure \ref{num src IP per dest IP}-a, the two real-world datasets have very similar IQRs that are totally different from those of UNSW\_NB15 and TON\_IOT, while the IQR of the CIC dataset overlaps with the real-world datasets. Figure \ref{num src IP per dest IP}-b also confirms this, showing that the CDF curves of the two real-world datasets move close together from beginning to end. The CDF curve of CIC starts close to the real-world datasets but separates from them near the end of the range. The CDF curves of UNSW\_NB15 and TON\_IOT, however, take a distinct path from the very beginning to the end. These results are completely in line with the results for the previous features.
As mentioned earlier, the results in this section constitute the initial, qualitative analysis; the precise quantitative comparisons of all feature distributions are provided in Section~\ref{Analysis}. \begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nsrcIP_per_dstPORT_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nsrcIP_per_dstPORT_CDF_c.pdf}}% \caption{Comparing the number of source IP addresses per destination port in five datasets a) boxplots, and b) CDFs}% \label{num src IP per dest Port} \end{figure} \subsubsection{Number of Source IPs per Destination Port} This feature is again important for detecting DoS, DDoS and scanning attacks. The feature distributions for the five datasets are depicted in Figure \ref{num src IP per dest Port}-a as boxplots and in Figure \ref{num src IP per dest Port}-b as CDFs. The distinction between the real-world and synthetic datasets is very clear in both the boxplots and the CDF curves. In Figure \ref{num src IP per dest Port}-a, the three synthetic datasets (red boxplots) have very small IQRs, in contrast to the two real-world datasets (blue boxplots), which have large, overlapping IQRs. The CDF curves show exactly the same situation, i.e. the two real-world datasets stay relatively close together from the beginning all the way to the end, and distinct from the synthetic datasets.
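The boxplot conventions used throughout (quartiles, IQR, and whiskers extending at most 1.5 times the IQR beyond the quartiles) can be reproduced in a few lines of numpy. This is a generic sketch, not tied to the paper's data:

```python
import numpy as np

def box_stats(values):
    """Quartiles and Tukey whisker bounds (1.5 x IQR beyond Q1 / Q3)."""
    v = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    # Whiskers extend to the most extreme points still inside the bounds;
    # anything beyond them would be drawn as an outlier.
    whisk_lo = v[v >= lo].min()
    whisk_hi = v[v <= hi].max()
    return {"q1": q1, "median": med, "q3": q3,
            "whisker_low": whisk_lo, "whisker_high": whisk_hi}

stats = box_stats(range(1, 101))   # the integers 1..100
# numpy's default linear interpolation gives Q1 = 25.75, Q3 = 75.25
```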
\begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/ndstIP_per_srcPORT_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/ndstIP_per_srcPORT_CDF_c.pdf}}% \caption{Comparing the number of destination IP addresses per source port in five datasets a) boxplots, and b) CDFs}% \label{num dest IP per src Port} \end{figure} \subsubsection{Number of Destination IPs per Source Port} Again, this feature is important for detecting DoS, DDoS and scanning attacks. The boxplots and CDF curves of the feature distributions for the five datasets are depicted in Figures \ref{num dest IP per src Port}-a and \ref{num dest IP per src Port}-b respectively. The feature distributions closely mirror those of the previous feature: the two real-world datasets have very close distributions in both boxplots and CDFs, and they are distinct from the synthetic datasets. \subsubsection{Number of Destination Ports per Source Port} This feature is also important for detecting DDoS and scanning attacks. The feature distributions for the five datasets are depicted as boxplots (Figure \ref{num dest Port per src Port}-a) and CDF curves (Figure \ref{num dest Port per src Port}-b). As with the two previous features, the distributions of this feature in the two real-world datasets are very close in both boxplots and CDFs, and distinct from the synthetic datasets, with the exception of UNSW\_NB15, which gets close to the real-world datasets in parts of its CDF curve.
\begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/ndstPORT_per_srcPORT_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/ndstPORT_per_srcPORT_CDF_c.pdf}}% \caption{Comparing the number of destination ports per source port in five datasets a) boxplots, and b) CDFs}% \label{num dest Port per src Port} \end{figure} \subsubsection{Number of L7 Protocols per Destination Port} This is the last feature used in this study for comparing the synthetic datasets with real-world traffic. Although the L7 protocol was not available among the fields exported by NetFlow exporters in the past, and has therefore not been utilised in the anomaly detection studies referred to above, it is an important feature in network security. Intuitively, many well-known protocols are used with specific destination ports, such as HTTP and port 80. As such, the number of L7 protocols used with each port can be attributed to the normal behavior, i.e. the characteristics, of a network, and changes in its distribution can be an indicator of an abnormal network situation. \begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nL7_per_dstPORT_boxplot.pdf} }% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/nL7_per_dstPORT_CDF_c.pdf}}% \caption{Comparing the number of L7 protocols per destination port in five datasets a) boxplots, and b) CDFs}% \label{num L7 per dst Port} \end{figure} The distribution of this feature for the five datasets is shown in Figure \ref{num L7 per dst Port}. The boxplots are shown in Figure \ref{num L7 per dst Port}-a, in which the three synthetic datasets are depicted in red on the left-most side and the two real-world datasets in blue on the right-most side of the figure.
As seen, the IQRs of the two real-world datasets are totally separate from those of the synthetic datasets. Although the real-world datasets seem to have non-overlapping IQRs, their distances to the synthetic datasets' IQRs are much larger than the distance between them. A similar situation can be observed in the CDF curves shown in Figure \ref{num L7 per dst Port}-b, which confirms the relative closeness of the real-world datasets compared to the synthetic datasets. While this qualitative analysis gives us a sense of how the feature distributions compare between these five datasets, the quantitative analysis in the following sections provides the final findings. \subsection{Comparing Dimensionality Reduced Feature Distributions} Here we visualize the feature distributions after applying various embedding methods to the set of all nine features listed in Table~\ref{tab: features}, and compare the five datasets. For this purpose, we use four different dimensionality reduction techniques: Linear Discriminant Analysis (LDA), Multi-dimensional Scaling (MDS), Spectral Embedding and Principal Component Analysis (PCA). The resulting feature embeddings after applying dimensionality reduction to each dataset are plotted in Figure~\ref{embeddings}-a, b, c and d respectively. The embeddings of each dataset are plotted using a different marker and color to illustrate the differences between their distributions.
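A minimal sklearn sketch of the four embeddings is given below. It runs on a synthetic stand-in for the nine-feature matrix (three toy groups with offset means, not the actual datasets), with the group identity serving as the class label that LDA requires:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import MDS, SpectralEmbedding
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for the 9-feature matrix: three toy "datasets" with offset means.
X = np.vstack([rng.normal(m, 1.0, (50, 9)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)            # dataset identity as the class label

Xs = StandardScaler().fit_transform(X)  # scale features before embedding

embeddings = {
    "lda": LinearDiscriminantAnalysis(n_components=2).fit_transform(Xs, y),
    "pca": PCA(n_components=2).fit_transform(Xs),
    "mds": MDS(n_components=2).fit_transform(Xs),
    "spectral": SpectralEmbedding(n_components=2).fit_transform(Xs),
}
# Each entry is an (n_samples, 2) array; scattering the two columns,
# colored by label, produces plots analogous to the embedding figure.
```

LDA is supervised (it needs the labels `y`), while PCA, MDS and Spectral Embedding are unsupervised; the labels are used only for coloring in the latter three.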
\begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/lda_sample_size_10000.png}}% \hspace{0.5cm} \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/mds_sample_size_10000.png}}% \qquad \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/spectral_sample_size_4000.png}}% \hspace{0.5cm} \subfloat[\centering ] {\includegraphics[width=0.45\columnwidth, height=5cm]{Files/figs/pca.png}}% \caption{Embedding samples of all five datasets using a) Linear Discriminant Analysis (LDA), b) Multi-dimensional Scaling (MDS), c) Spectral Embedding, and d) Principal Component Analysis (PCA)}% \label{embeddings} \end{figure} In Figure~\ref{embeddings}-a, LDA is the dimensionality reduction method applied to the features. The horizontal axis represents the first embedded component and the vertical axis the second. As in the previous figures, dark and light blue are used for UQ and ISP, and orange, light red and dark red are used for UNSW\_NB15, TON\_IOT and CIC\_IDS respectively. As seen, the embedded points of the two real-world datasets are widely spread along the first embedded component, while the embedded points of the synthetic datasets are mostly located in the region of small values. Similar results can be seen in Figure~\ref{embeddings}-d, where PCA is applied for dimensionality reduction and the horizontal and vertical axes represent the first and second principal components respectively. The other two embeddings show another pattern, in which the real-world and synthetic datasets are separated across both embedded components. In Figure~\ref{embeddings}-b and Figure~\ref{embeddings}-c, which illustrate the results of the MDS and spectral embedding methods, the horizontal and vertical axes represent the first and second embedded components respectively.
In both figures, the embedded points of the synthetic datasets are mostly concentrated in a small area, while the embedded points of the real-world datasets are spread over a much larger area. These results are another indication of the vast differences between the statistical features of the real-world datasets and the \textbf{normal} part of these synthetic datasets. \section{Quantifying the Distance Between Distributions}\label{Analysis} In the previous section we showed that the distributions of traffic characteristic features in the benign / attack-free parts of the three selected IDS datasets are statistically different from those of real-world traffic. This distributional difference was visualised, through boxplot and CDF comparisons, for several features commonly used for implementing anomaly and intrusion detection systems. While these visualisations are clear indications of the difference between the two groups of datasets, they do not provide a measure of how far an individual dataset is from another.
\begin{figure}[!t] \centering \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/flow_duration.pdf}}% \hspace{0.25cm} \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/Total_flow_size.pdf}}% \hspace{0.25cm} \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/avg_Total_packet_duration.pdf}}% \qquad \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/Total_packet_size.pdf}}% \hspace{0.25cm} \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/ndstPORT_per_srcPORT.pdf}}% \hspace{0.25cm} \subfloat[\centering ] {\includegraphics[width=5cm] {Files/figs/ndstIP_per_srcPORT.pdf}}% \caption{Wasserstein distances between the samples of the five datasets in terms of a) flow duration, b) flow size, c) (average) packet time, d) packet size, e) number of destination ports per source port, and f) number of destination IP addresses per source port}% \label{Wasserstein distance} \end{figure} In this section, we quantify the difference between these distributions. A range of metrics can be used to measure the distance between two distributions / probability density functions, as reviewed in~\cite{Sung-HyukCha2007}. Each metric comes with its own assumptions about the distributions, which define the accuracy of the measurement. The distance metric we use in this section is the \textit{Wasserstein distance}, also known as the \textit{Earth Mover’s distance}, which is commonly used in machine learning applications. The Wasserstein distance $W$ of two distributions $u$ and $v$ is given by~\cite{Ramdas2017} \begin{equation} W(u,v) = \inf_{\pi \in \Gamma(u,v) } \int_{\mathbb{R} \times \mathbb{R} } |x-y| \, d\pi(x,y) \end{equation} \\ \noindent where $\Gamma(u,v)$ is the set of (probability) distributions on $\mathbb{R} \times \mathbb{R}$ whose marginals are $u$ and $v$ on the first and second factors respectively. The \textbf{inf} stands for \textit{infimum}, also known as the greatest lower bound.
If $S$ is a subset of a partially ordered set $T$, then $\inf(S)$ is the greatest element of $T$ that is less than or equal to all elements of $S$. The Wasserstein distance of $u$ and $v$ can also be stated in terms of their CDFs, $U$ and $V$, as~\cite{Ramdas2017} \begin{equation} W(u,v) = \int_{-\infty}^{+\infty} |U(x)-V(x)| \, dx \end{equation} \\ \noindent which in the case of our study can be applied to the CDFs of the features computed in Section \ref{Basic Statistical Analysis}. Figure~\ref{Wasserstein distance} shows the Wasserstein distances between each pair of the five datasets used in this study, in the form of heatmap diagrams, for six features. In Figure~\ref{Wasserstein distance}-a the distance between the flow duration distributions is measured. The values of the Wasserstein distances are shown on each entry, in addition to being visualised by color. Since the Wasserstein distance metric is symmetric, the distance from distribution \textbf{a} to \textbf{b} is the same as the distance from \textbf{b} to \textbf{a}. For instance, the distance between UNSW\_NB15 and UQ (and vice versa) is $0.51$. As seen, except for CIC-IDS, the distances between the synthetic datasets and the real-world datasets are larger than the real-world to real-world distances. Figures~\ref{Wasserstein distance}-b, c, d, e, and f show the Wasserstein distances for the distributions of flow size, packet time, packet size, number of destination ports per source port and number of destination IPs per source port, respectively. The differences between the real-world and all synthetic datasets are even more significant for these remaining features.
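In practice, the one-dimensional Wasserstein distance between two empirical samples can be computed directly, e.g. with \texttt{scipy.stats.wasserstein\_distance}. The sketch below uses synthetic exponential samples as stand-ins for one feature of the datasets; the names and distribution parameters are illustrative only:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
# Illustrative stand-ins for one feature (e.g. flow duration): the two
# "real-world" samples come from nearly identical distributions, the
# "synthetic" one from a clearly different scale.
samples = {
    "ISP": rng.exponential(1.0, 5000),
    "UQ": rng.exponential(1.1, 5000),
    "synthetic": rng.exponential(5.0, 5000),
}

names = list(samples)
# Symmetric pairwise distance table, analogous to one heatmap panel.
dist = {(a, b): wasserstein_distance(samples[a], samples[b])
        for a in names for b in names}
```

Averaging such pairwise tables over all nine features yields a single summary number per dataset pair, which is how the averaged-distance figure condenses the comparison.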
\begin{figure}[!b] \centering \subfloat[\centering ] {\includegraphics[width=0.4\columnwidth, height=5cm]{Files/figs/average_over_all_features.pdf} }% \hspace{1cm} \subfloat[\centering ] {\includegraphics[width=0.4\columnwidth, height=5cm]{Files/figs/distance_to_real_average_over_all_features.pdf} }% \caption{Averaged distances between datasets over all 9 features in Table \ref{tab: features} using a) a heatmap, and b) distances to UQ and ISP}% \label{summary WD} \end{figure} In order to summarise the overall differences between these datasets in a single number, the Wasserstein distances of all features are calculated and averaged. Figure~\ref{summary WD}-a shows the Wasserstein distance averaged over all features listed in Table \ref{tab: features}. Since the main subject of this paper is the distance between the real-world and synthetic datasets, these values are summarised in Figure~\ref{summary WD}-b. The horizontal axis indicates the distance to the UQ dataset, and the vertical axis the distance to the ISP dataset. As the distance of the UQ dataset to itself is zero, it is placed on the vertical axis $(x=0)$; similarly, the ISP dataset is placed on the horizontal axis $(y=0)$. In this way, the coordinates of each dataset represent its averaged distances to the two real-world datasets. As seen, the CIC-IDS dataset is the closest dataset to UQ and ISP, while TON\_IOT and UNSW\_NB15 are placed farther away. The main purpose of this representation is to clearly show the distances between the real-world and synthetic datasets: while the two real-world datasets lie at a small distance from each other, all three synthetic datasets are at much larger distances. \section{Related Works}\label{Related Works} In selecting previous works on this subject, we have considered three aspects. First, the role of network traffic characteristics, i.e. the features of network traffic that shape normal network behavior.
Second, we looked for the features considered for detecting and identifying network traffic anomalies. Finally, we looked for other works in the field of IDS dataset evaluation and summarised their methods and how they evaluated the publicly available IDS datasets. \subsection{Network Traffic Characteristics} Many previous works that discuss the characteristics of network traffic, such as~\cite{KevinThompson1997} and~\cite{Liu2015}, focus on traffic volume variations in time to explain the main features of network traffic. In \cite{Lakhina2004a}, Principal Component Analysis (PCA) is used to investigate the origin-destination flows of a network as an essential part of network traffic modelling, and to find solutions to a wide variety of problems including traffic engineering, capacity planning and anomaly detection. While timeseries and time variations of traffic volume play a significant role in many aspects of networking, such as design and implementation, there are other features of network traffic that are equally important when discussing traffic characteristics. This has been clearly illustrated in two other studies,~\cite{Kandula2009} and~\cite{Benson2010}, which investigate network traffic not only via its timeseries, but also by analysing its statistical distributions. In~\cite{Kandula2009}, by collecting a petabyte of measurement data over two months, the authors discovered and reported traffic characteristic patterns. They investigated not only features related to traffic volume, but also other features such as flow duration and flow inter-arrival time. In~\cite{Benson2010}, the traffic characteristics of 10 distinct datacentre networks under different organisational administrations, such as universities, enterprises, and cloud service providers, are studied.
They studied the traffic patterns of these datacentres by investigating the flow and packet-level properties of various layer-7 applications, and the impact of these applications on network congestion and link and network utilisation. In their study, they investigated a range of packet and flow statistical distributions, such as the number of flows per second, flow inter-arrival time, flow size, flow duration, and packet size. Unlike previous works, they took into account not only the time-domain distribution of the traffic features, but also statistical measures such as the \textit{Cumulative Distribution Function (CDF)}. In addition, they investigated these measures per layer-7 application, and provided the distribution of the corresponding statistical measures for various layer-7 applications. They used these measures to understand the normal patterns of network traffic at various levels, such as edge, aggregation and core links. \subsection{Network Anomalies Characteristics} \label{NAC} The next group of studies investigates network traffic characteristics to understand abnormal network behavior and detect anomalies. In this group of studies, various statistics, measures and distributions of several network traffic features have been utilised for detecting abnormal network behaviors. These works not only take into account a broader range of network traffic features, but also consider the statistical measures and distributions of these features. Furthermore, some of the studies in this group apply their methods to traffic records collected from the real world, mainly backbone networks, which provides a stronger evaluation of their proposed methods. In~\cite{Lakhina}, this argument is explained by illustrating the role and importance of the distributions of packet features, such as the IP addresses and ports observed in flow traces, in detecting and identifying the structure of a wide range of network anomalies.
They state that clustering network traffic based on the distribution of these features creates meaningful clusters of anomalies, and that such clusters can be utilised for detecting new anomalies and for automatic anomaly classification. In this way, by investigating these distributions, they are able to detect not only volume-based anomalies, but also many other anomalies that do not change the traffic volume significantly. They validate their proposed methods on data collected from two backbone networks (Abilene and Geant) and conclude that feature distributions are a key ingredient in a general network anomaly detection framework. In~\cite{Soule2007}, the authors monitored anomalies and collected network traffic records of the same backbone networks (Abilene and Geant) in order to determine which network parameters affect the detectability of network anomalies, and how these parameters influence the characteristics of anomalies. They recorded 3 weeks of traffic and routing data from both networks and detected three specific anomalies by applying a Kalman filter. While they stated that detecting anomalies is significantly dependent on the network design, monitoring infrastructure, and anomaly-detection technique, they investigated variations of the entropy of four features of network traffic, namely source and destination IP addresses and source and destination (L4) ports. They concluded that it is not possible to detect all anomalies in a network based on a single Internet-wide anomaly detection method. In the last study of this group~\cite{histo_anomaly}, based on the idea proposed in~\cite{Lakhina} of using traffic feature distributions for anomaly detection, histograms of eight network traffic features were investigated. These features include the source and destination IP addresses, source and destination (L4) ports, TCP flags, (L4) protocol number, packet size, and flow duration.
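The entropy-based characterisation used in these studies can be sketched in a few lines; the feature values and time windows below are purely illustrative, not data from any of the cited traces:

```python
import math
from collections import Counter

def feature_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a traffic feature."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative source-IP samples from two time windows: a benign window with
# diverse sources, and a window dominated by a single source (e.g. a flood).
benign = ["10.0.0.%d" % (i % 50) for i in range(1000)]
flood = ["10.0.0.1"] * 950 + ["10.0.0.%d" % i for i in range(50)]

# Entropy drops sharply when one address dominates the window.
print(feature_entropy(benign) > feature_entropy(flood))  # True
```

The same computation applies unchanged to ports, protocol numbers, or any other discrete feature from the lists above.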
While they have listed the possible benefits of each feature for detecting specific types of anomalies, they stated that combinations of features can reveal changes in the network traffic which would be invisible otherwise. They applied their proposed method to data collected from one datacentre network, one campus network, and an IDS benchmark dataset. The results of these studies, collectively, show that network traffic feature distributions play a significant role in the detection of network anomalies in both real-world and benchmark datasets. However, this has rarely been investigated in the evaluation of publicly available datasets. In the next subsection, we briefly explore the available review papers that have evaluated these benchmark datasets. \subsection{IDS Dataset Evaluation} While there are many publicly available IDS benchmark datasets that have been frequently utilised for the evaluation of new and existing IDS algorithms, studies that evaluate these datasets themselves are very rare. In this subsection, we explore the two studies that we have found in this space. The first study investigates the usefulness of the DARPA dataset for the evaluation of IDSs~\cite{Thomas2008}. The authors use two signature-based IDSs, Snort~\cite{snort} and Cisco IDS, along with two anomaly detection methods, to evaluate the DARPA datasets based on the methodology proposed by MIT Lincoln Laboratory for IDS evaluation. Their results indicate that this dataset is useful for the evaluation of intrusion detection systems. Since this evaluation includes only a single dataset, and it is a rather old study investigating an even older dataset, the results cannot be generalised to other datasets. The report in~\cite{Gharib2017} is the only study we found evaluating multiple IDS datasets. The authors systematically evaluate 11 publicly accessible IDS datasets published between 1998 and 2016, and conclude that most of these datasets are out of date and unreliable for the evaluation of IDS algorithms.
By investigating the shortcomings of the existing datasets, they provide a framework for the evaluation of IDS datasets. This framework consists of 11 features which a benchmark IDS dataset should possess, and hence defines a corresponding score for each of the studied benchmark datasets. These features include complete network configuration, complete traffic, labelled dataset, complete interaction, complete capture, available protocols, attack diversity, anonymity, heterogeneity, feature set, and metadata. The main approach in defining these features, which are proposed based on observations of the existing benchmark datasets, is to avoid the shortcomings of previous datasets. However, it is not discussed how having these features guarantees or enhances the fitness of a synthetic IDS dataset for real-world evaluation scenarios. As such, we propose, for the first time, criteria for a benchmark dataset that can be used to evaluate the similarity of a synthetic / testbed-based IDS dataset to real-world network traffic records. We explain this methodology in the coming sections by first describing the datasets utilised in our study. \section{Conclusion}\label{Conclusion} The benchmark datasets for Network Intrusion Detection Systems (NIDSs) are commonly used by academic and industrial network security researchers for the evaluation of anomaly detection and NIDS algorithms. In most methods based on machine learning techniques, statistical features of the network traffic play a significant role in the performance of the classification of normal and anomalous traffic. However, these benchmark datasets have never been analysed in terms of the statistical features of their traffic records. The statistical distributions of the attack / anomaly traffic features are usually different from those of normal traffic.
As such, the normal / benign traffic records of these datasets are the main parts, which should mimic the real-world benign traffic in which the anomaly detection algorithms are supposed to work. The main purpose of this paper is to introduce tools and methodologies that can measure how realistic the evaluation of an NIDS algorithm via a synthetic dataset is. Currently, this has not been addressed in the network security research field. This paper tries to address this gap not only by proposing the required tool and methodology, but also by applying the proposed methodology to three recent NIDS datasets and comparing them to two diverse real-world network traffic datasets. We initially propose nine traffic features that can be used for this purpose. Then, we illustrate the statistical distributions of these features for the synthetic and real-world datasets. We also use four different dimensionality reduction methods to embed the set of nine features in a 2-dimensional Euclidean space and visualise the resulting embeddings for all datasets. In both cases, i.e. the statistical distributions and the dimensionality-reduced visualisations, there are significant distinctions between the two groups of datasets, the real-world and the synthetic. In addition, the differences among the members of each group are much less significant; e.g. the two real-world datasets are more similar to each other than to the synthetic datasets, despite their diverse origins. While the illustrated distributions clearly indicate the differences in the traffic features of the synthetic and real-world datasets, the differences between the real-world datasets and each of the synthetic datasets vary from one dataset to another. We propose a metric to quantify these differences between the statistical distributions of features of each pair of datasets.
The proposed metric also clearly indicates that the difference between the two real-world datasets, despite the fact that they represent two different types of networks from different organisations and geographical locations, is much smaller than their difference to either of the synthetic datasets. Based on these results, we believe that the evaluation of anomaly detection and NIDS algorithms using synthetic datasets, which are not created in the context of real-world traffic, does not guarantee their classification performance in real-world scenarios. Addressing this gap constitutes the future work of our research, in which we will propose a methodology for creating NIDS benchmark datasets whose traffic feature distributions comply with those of real-world traffic.
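As a generic illustration of how differences between feature distributions can be quantified (this is not necessarily the specific metric proposed above), an earth-mover-style distance between equal-size samples can be computed directly; the sample data below are synthetic stand-ins, not values from any of the studied datasets:

```python
import numpy as np

def emd_1d(a, b):
    """1-D earth-mover's distance between two equal-size samples
    (mean absolute difference of the sorted values)."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
# Illustrative "packet size" samples from three hypothetical datasets.
real_a = rng.normal(800, 120, 5000)  # real-world-like
real_b = rng.normal(790, 125, 5000)  # second real-world-like
synth = rng.normal(500, 60, 5000)    # synthetic-like

# The two real-world-like samples are closer to each other than to the synthetic one.
print(emd_1d(real_a, real_b) < emd_1d(real_a, synth))  # True
```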
\subsection{Autofocusing} The basic idea of software-based autofocusing is to capture one or more out-of-focus images and use them to determine the ideal focal position. The most popular approach of this kind is focus map surveying \cite{liao2017rapid}. It captures a z-stack along the optical axis, including a series of out-of-focus images with different relative distance offsets \cite{subbarao1998selecting}, and then maximizes image contrast to determine the ideal focal plane. Software-based methods are usually slow due to the requirement of a full focal stack. Moreover, image contrast does not always serve as a good image quality metric. For example, some pathology samples that are weakly stained have low image contrast. Hardware-based methods attempt to directly measure the distance from the objective lens to the sample, and are thus rapid. Among hardware-based approaches, Liron \textit{et al.} \cite{liron2006laser} proposed to use an external light source or laser to measure the position of a reference point. Montalto \textit{et al.} \cite{montalto2011autofocus} proposed to utilize a secondary camera to decouple image acquisition from focusing and allow parallel processing. Liao \textit{et al.} \cite{liao2017rapid} developed a novel focus map surveying method using additional LED illumination and autocorrelation image analysis. These methods need to introduce hardware modifications to the microscope, which can be expensive and incompatible with current WSI systems. With the emergence of deep learning in microscopy, convolutional neural network (CNN) based approaches have appeared for autofocusing. The work in \cite{jiang2018transform} is the first one in the literature that uses a CNN to predict the focal position.
Specifically, Jiang \textit{et al.} \cite{jiang2018transform} first acquired $\sim$130,000 images with different defocus distances as the training dataset, and used an end-to-end deep residual network to build the relationship between the input image and its focal distance. This approach is able to capture images on the fly without focus map surveying. However, although this method achieves remarkable autofocusing performance, from the perspective of methodology it is not easy to derive a model that accurately describes the relationship between an image with complex contents and a numerical value (the defocus distance). More recently, Pinkard \textit{et al.} \cite{pinkard2019deep} proposed to combine hardware modification and deep learning. Their method requires the addition of one or a few off-axis LEDs to a conventional transmitted light microscope. The defocus distance is then estimated and corrected based on a single image under this LED illumination using a neural network. \subsection{Optical Model} In optical microscopy, the point spread function (PSF) can be formulated by the Born \& Wolf model \cite{born2013principles,hosseini2018focus}: \begin{equation}\label{PSF} h(r, \Delta D)=\left|C \int_{0}^{1} J_{0}\left(k \frac{\mathrm{NA}}{n} r \rho\right) e^{-\frac{1}{2} i k \rho^{2} \Delta D\left(\frac{\mathrm{NA}}{n}\right)^{2}} \rho d \rho\right|^{2}, \end{equation} where $r$ is the radial distance along the lateral plane; $\Delta D$ is the distance between the in-focus position and the imaging plane along the optical axis, \textit{i.e.}, the defocus distance; $C$ is a normalization constant; $J_{0}$ is the zero-order Bessel function of the first kind; $k$ is the angular wave number of the light source; $n$ is the refractive index; $i$ is the imaginary unit; $\rho$ is the normalized coordinate in the exit pupil. The axial PSF model is shown in Fig. \ref{fig:PSF} (a) and the lateral planes with different $\Delta D$ are shown in Fig. \ref{fig:PSF} (b) and (c).
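Eq.~(\ref{PSF}) can be evaluated by direct numerical integration over the pupil coordinate; the sketch below uses illustrative parameter values (the NA, wavelength, and refractive index are not taken from the paper):

```python
import numpy as np
from scipy.special import j0

def psf(r, delta_d, na=0.75, n=1.0, wavelength=0.55, c=1.0, samples=2000):
    """Numerically evaluate the Born & Wolf PSF h(r, Delta D) of Eq. (1).
    Lengths are in micrometres; parameter values are illustrative only."""
    k = 2 * np.pi / wavelength  # angular wave number of the light source
    rho = np.linspace(0.0, 1.0, samples)
    integrand = (j0(k * (na / n) * r * rho)
                 * np.exp(-0.5j * k * rho**2 * delta_d * (na / n)**2)
                 * rho)
    # trapezoidal rule over the normalized pupil coordinate rho
    drho = rho[1] - rho[0]
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * drho
    return float(np.abs(c * integral)**2)

# The on-axis intensity is maximal in focus and decays with defocus distance.
print(psf(0.0, 0.0) > psf(0.0, 0.5) > psf(0.0, 2.0))  # True
```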
It can be seen that the amplitude of the blue line ($\Delta D=0.5$ $\mu$m) is lower than that of the red one ($\Delta D=0$) due to the out-of-focus aberration, which becomes larger as $\Delta D$ increases. Accordingly, to recover the in-focus image, it is reasonable to assume that the most reliable knowledge comes from the two nearest out-of-focus planes of the in-focus plane \cite{mcnally1999three, agard1984optical}. In contrast, a single out-of-focus image has complex features which need to be restrained, due to the effects of sample thickness and optical aberrations. Therefore, the two-shot method with feature fusion and compensation performs better than the single-shot one. \subsection{In-focus and Defocus Imaging Model} In WSI, samples of pathological tissue slices exhibit uneven depth variations, and thus the ensuing PSFs vary spatially. Based on the layered depth of field model \cite{scofield1992212}, the continuous depth map is translated to discrete depth layers (image planes), and the PSF $h(r, \Delta D)$ is replaced by $h_{m}$, where $m$ stands for the position of each depth layer and $h_{0}$ is the PSF of the in-focus depth. Each depth layer is blurred by its corresponding PSF with a convolution operation and the blurred depth layers are integrated to form the captured image. Therefore, the in-focus imaging model can be expressed as: \begin{equation} X = \sum_{m} x_{m} \otimes h_{m}, \end{equation} where $x_{m}$ is the discrete depth layer of the sample at depth $m$ and $x_{0}$ is the in-focus object plane of the sample, $\otimes$ is the convolution operator, and $X$ is the underlying in-focus image of $x_{0}$. When the sample is shifted by offset $\Delta D_1$ from the in-focus object plane, we denote the new in-focus object plane as $x_{i}$. Similarly, the new in-focus object plane is denoted as $x_{j}$ when the sample is shifted by offset $\Delta D_2$.
Accordingly, the captured out-of-focus images $Y_{1}$ and $Y_{2}$ can be represented as: \begin{equation} Y_{1} = \sum_{m} x_{m+i} \otimes h_{m},\hspace{0.3cm} Y_{2} = \sum_{m} x_{m+j} \otimes h_{m}. \end{equation} The defocus imaging model indicates that recovering the in-focus image from out-of-focus images in WSI is more challenging than the conventional inverse imaging problem. Fortunately, the two-shot out-of-focus images each retain partial information about the underlying in-focus image, which inspires us to fuse them to derive the sharp image with a deep neural network. \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{fig3.png} \end{center} \vspace{0.2cm} \caption{The architecture of the proposed TSVA network. Each blue box corresponds to a multi-channel feature map. The number of channels is denoted at the side edge of the box. The x-y-size is provided at the top edge of the box. White boxes represent copied feature maps of the left contracting path. Black boxes represent copied feature maps of the right contracting path. The colorful arrows denote the different operations. $\times$ 2 stands for an additional convolution.} \label{fig:framework} \vspace{-0.2cm} \end{figure*} \subsection{Virtual Autofocusing} In view of the above, we propose a learning-based virtual autofocusing strategy relying on two-shot images, taken from the two nearest out-of-focus planes on either side of the initial focal plane, as illustrated in Fig.~\ref{fig:imaging}. Specifically, at the very beginning, for the first tile we collect a full z-stack to derive the initial focal position $F$. It is worth noting that, since different tiles have uneven topography, this position is usually not the focal position of the other tiles. Then, for the remaining tiles, two out-of-focus images are captured with relative defocus offsets $\Delta D_1$ and $\Delta D_2$ to $F$, respectively.
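The layered imaging model of Eqs. (2) and (3) can be simulated directly; in the sketch below, Gaussian blurs stand in for the true PSFs $h_m$ and the three-layer impulse sample is a toy example, both purely illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def captured_image(layers, offset, sigma_per_layer=0.8):
    """Shift the stack by `offset` layers, blur each layer by a PSF whose
    width grows with its distance from focus, and integrate the layers.
    Gaussian blurs stand in for the true PSFs h_m (illustrative only)."""
    depths = np.arange(len(layers)) - len(layers) // 2
    out = np.zeros_like(layers[0], dtype=float)
    for m, layer in zip(depths, layers):
        sigma = sigma_per_layer * abs(m + offset)  # depth relative to new focus
        blurred = gaussian_filter(layer.astype(float), sigma) if sigma > 0 \
            else layer.astype(float)
        out += blurred
    return out

# Toy 3-layer sample; only the middle layer carries structure.
layers = np.zeros((3, 32, 32))
layers[1, 16, 16] = 1.0

X = captured_image(layers, 0)    # in-focus image
Y1 = captured_image(layers, -1)  # shot at -Delta D
Y2 = captured_image(layers, +1)  # shot at +Delta D

# Defocus spreads energy laterally: the peak intensity is highest in focus.
print(X.max() > Y1.max() and X.max() > Y2.max())  # True
```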
This setting is inspired by the operation of manual microscopy, which first performs coarse tuning to get a reasonably sharp picture and then conducts fine tuning to acquire the best one. In conclusion, the practical workflow of the proposed method is as follows: \begin{itemize} \item \textbf{Initial focal plane prediction:} For the first tile, we collect a z-stack and obtain the initial focal position. \item \textbf{Two-shot imaging:} For the remaining tiles, we perform two-shot imaging, capturing images on both sides of the initial focal plane with relative defocus offsets. \item \textbf{Algorithm processing:} The in-focus image can be recovered directly offline by algorithm processing. \end{itemize} The following task is to recover $X$ by fusing its two observations $Y_{1}$ and $Y_{2}$. This is done in our scheme through a U-Net-inspired deep neural network, which will be elaborated in the next section. In practical implementation, we set $\Delta D_{1} = \Delta D_{2} = \Delta D$. \subsection{Network Architecture} The proposed deep neural network that is tailored for two-shot virtual autofocusing (TSVA) is illustrated in Fig. \ref{fig:framework}. Specifically, the TSVA network consists of two contracting paths (left and right sides) that take the two out-of-focus images $Y_1$ and $Y_2$ as inputs, and an expansive path (middle) that outputs the recovered in-focus image $X$. The sharper of the two captured images is chosen as $Y_1$, according to the Brenner gradient metric \cite{sun2005autofocusing}. \begin{itemize} \item \textbf{Contracting paths design:} The contracting paths employ the typical convolutional architecture, including the repeated use of two $3 \times 3$ convolutions followed by a rectified linear unit (ReLU) and a $2 \times 2$ max pooling downsampling layer with stride 2. We double the number of feature channels at every downsampling step. These two paths share the same parameters.
Finally, we combine the deepest layers of the two paths into a cascaded one. \item \textbf{Expansive path design:} The expansive path in each step includes an upsampling feature layer followed by a $2 \times 2$ convolution (up-convolution), which halves the number of feature channels. We build a concatenation with the corresponding feature maps from the left contracting path (white layer) and the right contracting path (black layer), and employ two $3 \times 3$ convolutions followed by ReLU. At the final residual layer, $Y_{1}$ is added to generate the recovered in-focus image $X$. In total the network has 27 convolutional layers. \end{itemize} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.9\linewidth]{test3.png} \end{center} \caption{Subjective performance comparison on $Sample 1$ to $Sample 6$. Please enlarge the PDF for more details.} \label{fig:test3} \end{figure*} \subsection{Network Training} \subsubsection{Training Dataset} We use a part of the dataset collected by Jiang \textit{et al.} \cite{jiang2018transform} to train our network. The dataset includes 35 research-grade human pathology slides with Hematoxylin and eosin stains (Omano OMSK-HP50), and contains 162 pathological tissue z-stack tiles. For each tile there is a stack of 41 images taken at different focal distances with a step size of 0.5$\mu m$, ranging from -10$\mu m$ to 10$\mu m$, with 0$\mu m$ corresponding to the image in focus. The in-focus image is identified by maximizing the Brenner gradient of the z-stack images. In the image stacks of all tiles, the focal distance of an out-of-focus image is given as the defocus offset to the image in focus. But in our system, the microscope camera makes two shots of each tile at two prefixed focal distances. Therefore, we need out-of-focus images with absolute focal distances to train our TSVA network.
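As an aside, the Brenner-gradient focus measure referred to above (used both to pick the sharper shot $Y_1$ and to locate the in-focus slice in a z-stack) can be sketched as follows; the toy edge stack is illustrative:

```python
import numpy as np

def brenner_gradient(img):
    """Brenner focus measure: sum of squared intensity differences
    between pixels two positions apart along the horizontal axis."""
    img = np.asarray(img, dtype=float)
    diff = img[:, 2:] - img[:, :-2]
    return float(np.sum(diff ** 2))

def sharpest_index(stack):
    """Index of the sharpest image in a stack (maximum Brenner gradient)."""
    return int(np.argmax([brenner_gradient(im) for im in stack]))

# Toy z-stack: a hard edge and two progressively softer versions of it.
x = np.linspace(-1, 1, 64)[None, :]
sharp = np.repeat((x > 0).astype(float), 64, axis=0)
soft = np.repeat(1 / (1 + np.exp(-10 * x)), 64, axis=0)
softer = np.repeat(1 / (1 + np.exp(-3 * x)), 64, axis=0)

print(sharpest_index([softer, soft, sharp]))  # 2
```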
We convert the training images with relative focal distances in the dataset of \cite{jiang2018transform} to images with absolute focal distances by simply adding a Gaussian random variable $n\sim{\cal N}(0,1)$ to the relative focal distance. This is because, according to the observation of \cite{hart2014focal}, the focal positions follow a Gaussian distribution, as shown in Fig. \ref{fig:distribution} (a). Specifically, the images of the slides are divided into $224 \times 224$ patches, as shown in Fig. \ref{fig:distribution} (b). Then, we convert the dataset to discrete patches whose focal distances follow a Gaussian distribution. There are 3240 patches in the initial dataset and we enlarge the dataset by rotation. \subsubsection{Implementation Details} Here we clarify some details of the implementation. In network training, the loss function is defined as follows: \begin{equation} L=\frac{1}{N}\sum_{i=1}^{N}(X_{i}-\tilde{X_{i}})^{2}, \end{equation} where $X_{i}$ is the ground-truth in-focus image, $\tilde{X_{i}}$ is the network output, and $N$ is the number of training images in each batch. We select 85\% of the patches with labeled relative defocus offset $\Delta D$ as our training set and 15\% of the patches for validation. We utilize batch normalization with a batch size of 20 to accelerate training. The network is trained using the ADAM optimizer with a learning rate of 0.0005 for 50 epochs. The network training is run on a single NVIDIA GTX 1080Ti. \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{cell-counting.png}\\ \vspace{0.20cm} \caption{Influence of image quality on the accuracy of cell counting. For (S1) to (S6), the cell counting results on our generated image are at the top and the corresponding results of the ground-truth are at the bottom. From left to right: the input image for cell counting, the cell segmentation image, and the image of cell outline counting.
Please enlarge the PDF for more details.} \label{fig:cell-counting} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{test1_1.png}\\ \vspace{0.20cm} \caption{Subjective Performance Comparison on images of Dataset 1. Please enlarge the PDF for more details. The in-focus images in red block are ground truth. The results of U-net and TSVA are shown in the two black blocks with the corresponding error maps on the bottom. } \label{fig:test1_1} \end{figure*} \begin{table*} \scriptsize \caption{The average numbers of counted cells with respect to different $\Delta D$ on all samples in Dataset 1.} \centering \vspace{0.20cm} \begin{tabular}{p{0.8cm}cc|cc|cc|cc|cc|cc|cc} \toprule \multirow{2}{*}{$\Delta D$} & \multicolumn{2}{c|}{Sample 1} & \multicolumn{2}{c|}{Sample 2}& \multicolumn{2}{c|}{Sample 3}& \multicolumn{2}{c|}{Sample 4}&\multicolumn{2}{c|}{Sample 5}&\multicolumn{2}{c|}{Sample 6} & \multicolumn{2}{c}{Average} \\ \cline{2-15} &Ours&GT&Ours&GT&Ours&GT&Ours&GT&Ours&GT&Ours&GT&Ours&GT\\ \hline 0.5$\mu$m & 15.17& 15.5 & 42.38 & 42.13 & 22.4 & 22.4 & 19.4 & 19.36 & 18.75 & 19 & 22.75 & 23 &22.38 &22.43 \\ 1$\mu$m & 12.83& 13.16 & 39.67 & 40 & 18.5 & 19.5 & 21.12 & 21.18 & 18.43 & 18.86 & 18.17 & 17.83 &20.25 &20.42 \\ 1.5$\mu$m & 14.44& 14.56 & 32 & 33 & 33 & 33 & 25.5 & 25.67 & 21 & 21 & 25 & 25.5 &21.78 &22.05 \\ 2$\mu$m & 15.33& 16 & 34.5 & 34 & 29 & 28 & 21 & 22 & 23 & 22 & 18 & 17 &22.70 &22.7 \\ 2.5$\mu$m & 10 & 10 & - & - & - & - & - & - & 28 & 27 & - & - &16 &15.67 \\ 3$\mu$m & - & - & - & - & - & - & - & - & 27 & 29 & - & - &27 &29 \\ \hline Average & 14.00& 14.27 & 39.40 & 39.40 & 23.44& 23.56 & 20.78 & 20.84 & 19.70 & 19.96 & 21.20 & 21.20 &21.57 &21.69 \\ \bottomrule \label{table:cell-counting} \vspace{-0.9cm} \end{tabular} \end{table*} \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{test1_2.png}\\ \vspace{0.20cm} \caption{Subjective Performance Comparison on images of Dataset 2. 
Please enlarge the PDF for more details. The in-focus images in red block are ground truth. The results of U-net and TSVA are shown in the two black blocks with the corresponding error maps on the bottom. } \label{fig:test1_2} \end{figure*} \begin{table*}[!h] \scriptsize \caption{PSNR performance comparison of U-net and TSVA on Dataset 1 and Dataset 2 with respect to different $\Delta D$.} \centering \vspace{0.20cm} \begin{tabular}{p{0.8cm}<{\centering}p{0.8cm}<{\centering}cccccccc} \toprule Dataset&Methods& \multicolumn{7}{c}{Relative Distance Offset $\Delta D$ (The mean on the top and standard deviation (SD) on the bottom in each methods) }&Average\\ \hline \multirow{8}{*}{Dataset 1}&\multirow{4}{*}{U-net} & $\Delta D$ &-3$\mu$m&-2.5$\mu$m&-2$\mu$m&-1.5$\mu$m&-1$\mu$m&-0.5$\mu$m & \multirow{4}{*}{39.44 $\pm$3.76}\\ &&PSNR & 27.96 $\pm$0& 33.46 $\pm$4.17 & 38.28 $\pm$1.28 & 37.42 $\pm$2.86 & 38.86 $\pm$1.97 & 41.61 $\pm$4.08\\ \cline{3-9} &&0$\mu$m&+0.5$\mu$m&+1$\mu$m&+1.5$\mu$m&+2$\mu$m&+2.5$\mu$m&+3$\mu$m\\ &&39.44 $\pm$1.58 & 41.88 $\pm$4.36& 38.74 $\pm$2.26& 36.05 $\pm$3.11& 36.11 $\pm$2.78& 32.28 $\pm$3.90& 34.88 $\pm$0 & \\ \cline{2-10} &\multirow{4}{*}{TSVA} & $\Delta D$ &-3$\mu$m&-2.5$\mu$m&-2$\mu$m&-1.5$\mu$m&-1$\mu$m&-0.5$\mu$m & \multirow{4}{*}{\textbf{42.25 $\pm$4.90}}\\ && PSNR & \textbf{30.11 $\pm$0} & \textbf{33.58 $\pm$2.74} & \textbf{38.60 $\pm$1.07} & \textbf{38.34 $\pm$1.87} & \textbf{39.82 $\pm$1.11} & \textbf{47.99 $\pm$1.59} \\ \cline{3-9} &&0$\mu$m&+0.5$\mu$m&+1$\mu$m&+1.5$\mu$m&+2$\mu$m&+2.5$\mu$m&+3$\mu$m\\ &&\textbf{39.61 $\pm$1.32}& \textbf{48.71 $\pm$1.64} & \textbf{39.93 $\pm$1.34} & \textbf{37.58 $\pm$2.02} & \textbf{36.35 $\pm$2.38} & \textbf{33.58 $\pm$3.23} & \textbf{36.12 $\pm$0 } & \\ \hline\hline \multirow{8}{*}{Dataset 2}& \multirow{4}{*}{U-net} & $\Delta D$ &-3$\mu$m&-2.5$\mu$m&-2$\mu$m&-1.5$\mu$m&-1$\mu$m&-0.5$\mu$m & \multirow{4}{*}{38.83 $\pm$2.95}\\ &&PSNR& 33.22 $\pm$0.34 & 34.82 $\pm$1.74 & 36.27 $\pm$1.65 & 37.76 
$\pm$1.29& 38.61 $\pm$0.99 & 41.28 $\pm$3.23& \\ \cline{3-9} &&0$\mu$m&+0.5$\mu$m&+1$\mu$m&+1.5$\mu$m&+2$\mu$m&+2.5$\mu$m&+3$\mu$m\\ &&38.80 $\pm$0.97 & 40.44 $\pm$3.68 & 37.71 $\pm$1.54 & 36.15 $\pm$1.66 & 35.31 $\pm$1.90 & 33.49 $\pm$1.81 & 33.00 $\pm$1.12& \\ \cline{2-10} &\multirow{4}{*}{TSVA} &$\Delta D$ &-3$\mu$m&-2.5$\mu$m&-2$\mu$m&-1.5$\mu$m&-1$\mu$m&-0.5$\mu$m & \multirow{4}{*}{\textbf{42.32 $\pm$4.67}}\\ &&PSNR& \textbf{34.87 $\pm$0.45} & \textbf{35.61 $\pm$1.20} & \textbf{37.20 $\pm$1.04} & \textbf{38.65 $\pm$0.63} &\textbf{ 39.78 $\pm$0.62}& \textbf{48.39 $\pm$0.91} \\ \cline{3-9} &&0$\mu$m&+0.5$\mu$m&+1$\mu$m&+1.5$\mu$m&+2$\mu$m&+2.5$\mu$m&+3$\mu$m\\ &&\textbf{39.55 $\pm$0.57} & \textbf{48.55 $\pm$0.88} &\textbf{39.77 $\pm$0.60}& \textbf{38.26 $\pm$0.79} & \textbf{36.61 $\pm$1.32} & \textbf{35.23 $\pm$0.79} & \textbf{33.89 $\pm$0.00} & \\ \bottomrule \label{table:test1_2} \end{tabular} \end{table*} \subsection{Influence of Image Quality on Downstream Image Analysis} According to Fig. \ref{fig:test3}, it is hard to differentiate the recovered in-focus images from the ground-truth with the naked eye. Another concern is whether a machine can differentiate them, \textit{i.e.}, whether the recovered in-focus images would significantly reduce the accuracy of downstream image analysis tasks. In this subsection, using cell counting, a typical task of pathology image analysis, as an example, we examine the influence of the quality of images yielded by the proposed virtual autofocusing approach on the counting accuracy. We utilize the widely used tool \textit{ImageJ}\footnote{https://imagej.nih.gov/ij/} \cite{schneider2012nih}, released by the National Institutes of Health (NIH), as the test platform, which conducts cell counting in the following four steps: 1) gray processing; 2) adjusting brightness and contrast; 3) thresholding; 4) analysis of cell counting.
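ImageJ's exact processing differs in detail, but a minimal pipeline mirroring these four steps (grayscale conversion, a data-driven threshold, binarization, connected-component counting) can be sketched as follows; the toy image and threshold choice are illustrative only:

```python
import numpy as np
from scipy import ndimage

def count_cells(rgb, threshold=None, min_area=4):
    """Toy counting pipeline mirroring the four ImageJ steps."""
    gray = rgb.mean(axis=2)                # 1) gray processing
    if threshold is None:                  # 2)-3) data-driven threshold
        threshold = gray.mean()
    binary = gray > threshold              # binarization
    labels, n = ndimage.label(binary)      # 4) connected-component counting
    if n == 0:
        return 0
    areas = ndimage.sum(binary, labels, range(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))  # ignore tiny specks

# Toy image: dark background with three bright square "cells".
img = np.zeros((64, 64, 3))
for r, c in [(10, 10), (30, 40), (50, 20)]:
    img[r:r + 5, c:c + 5, :] = 1.0

print(count_cells(img))  # 3
```

Real stained-tissue images need contrast adjustment and a proper threshold (e.g. Otsu), which ImageJ provides; the sketch only fixes the structure of the pipeline.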
The in-focus images recovered by our method and the ground-truth in-focus images are taken as input to \textit{ImageJ}, respectively. The results of cell counting are illustrated in Fig. \ref{fig:cell-counting}. It can be seen that the cell counting results on our recovered images are very close to the results on the corresponding ground-truth images. In Table \ref{table:cell-counting}, we also show a comparison of the numbers of counted cells with respect to different $\Delta D$ on $Sample 1$ to $Sample 6$. It can be seen that, compared with the results on the ground-truth (GT), the average cell counting error on our recovered in-focus images is 0.12, which is too small to reduce the accuracy of downstream analysis significantly. \subsection{Ablation Study} In this subsection, we provide an empirical ablation analysis of the proposed TSVA network. According to the TSVA architecture, there are two input out-of-focus images with relative defocus offsets $\Delta D$. Therefore, it is essential to analyze the influence of the dual input images and the relative defocus offsets on the final performance. Moreover, we study the robustness of the proposed scheme on different test sets. Considering that our TSVA is built upon the U-net, we employ the traditional U-net \cite{U-NET} as the baseline, which takes $Y_{1}$ as input with different relative distance offsets. \subsubsection{Influence of dual input images} In this part, we empirically analyze whether the dual captured images really help to improve the quality of the recovered in-focus images compared with a single one. Table \ref{table:test1_2} shows an objective performance comparison of U-net, which takes $Y_{1}$ as input, and our TSVA, which takes $Y_{1}$ and $Y_{2}$ as inputs. It can be seen that, on Datasets 1 and 2, TSVA achieves better PSNR performance than U-net in all cases. The average PSNR gains are 2.81dB and 3.49dB over U-net, respectively.
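The PSNR figures reported here follow the standard definition $10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; a minimal implementation, with illustrative data, is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(1)
gt = rng.uniform(0, 255, (64, 64))
# A reconstruction with smaller errors scores a higher PSNR.
print(psnr(gt, gt + 1.0) > psnr(gt, gt + 5.0))  # True
```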
We also provide a subjective performance comparison of U-net and TSVA on Dataset 1 in Fig. \ref{fig:test1_1}. For clear display, we show the error maps between the recovered in-focus images and the corresponding ground-truth. It can be seen that, compared with U-net, the structure errors produced by TSVA are smaller, in particular when $\Delta D$ ranges from -1$\mu m$ to +1$\mu m$. Therefore, the proposed TSVA network achieves superior performance to U-net, benefiting from the dual inputs. \subsubsection{Influence of different relative defocus offsets} In this part, we examine the influence of different relative defocus offsets on the final performance. The PSNR histograms with respect to $\Delta D$ on Dataset 1 and Dataset 2 are shown in Fig. \ref{fig:h1} and Fig. \ref{fig:h2}, respectively. It can be found that: i) For different $\Delta D$, the proposed TSVA always achieves higher PSNR values than U-net. This demonstrates that the performance of our scheme is robust with respect to $\Delta D$. ii) The highest PSNR gains appear when $\Delta D=+0.5$ $\mu$m and $\Delta D=-0.5$ $\mu$m. In practical cases, most estimated focal positions also lie in the region of $\pm$0.5 $\mu$m. Therefore, the TSVA network realizes virtual autofocusing with high accuracy. \subsubsection{Influence of different test sets} In this part, we examine the robustness of our method on different test sets. In Table \ref{table:test1_2}, we provide an objective performance evaluation with respect to PSNR on samples of Dataset 2. It can be found that, for test samples from sources different from those of the training set, our method still achieves the best PSNR performance in all cases. The average PSNR gain over U-net is 3.49dB. The subjective performance comparison of U-net and TSVA on Dataset 2 is shown in Fig. \ref{fig:test1_2}. Similar to the results on Dataset 1, the structure errors produced by TSVA are also much smaller than those of U-net.
These results demonstrate that the proposed TSVA network has a strong generalization capability. \begin{table} \scriptsize \caption{The workflow comparison between the conventional methods and the proposed method.} \centering \vspace{0.20cm} \begin{tabular}{ccc} \toprule Step & Conventional Methods & Proposed Method \\ \hline (a) & \emph{Create a z-stack for the first tile} & \emph{Create a z-stack for the first tile} \\ (b) & \emph{Predict the initial focal plane} & \emph{Predict the initial focal plane} \\ (c) & Repeat z-stack creating for other tiles & \textbf{Repeat two-shot for other tiles} \\ (d) & Create a focus map& \textbf{Algorithm processing offline} \\ (e) & Shift platform for in-focus shooting & \textbf{Generate in-focus image directly} \\ \bottomrule \label{table:workflow} \vspace{-0.9cm} \end{tabular} \end{table} \section{Introduction} \input{1_Introduction.tex} \section{Related Work} \label{sec:related} \input{2_related} \section{Problem Formulation} \label{sec:preliminaries} \input{3_preliminaries} \section{The Proposed In-focus Image Recovery Method} \label{sec:method} \input{4_method} \section{Experiments} \label{sec:experiments} \input{5_experiments} \section{Conclusion} In this paper, we presented a high-speed and high-throughput whole slide imaging system. Traditional autofocusing methods rely on repetitive mechanical adjustment to conduct refocusing, which is time-consuming. Instead, our scheme does not perform autofocusing during the process of tissue slide scanning, but recovers the in-focus image from two-shot ones in an offline learning-based manner, as shown in Table~\ref{table:workflow}. The proposed method is built upon the well-known U-Net, which is modified and extended such that it can work with two input images and yield a recovered in-focus image. Experimental results demonstrate that our scheme achieves satisfactory performance on in-focus image recovery. \section*{Acknowledgment} The authors would like to thank Prof. G.
Zheng and Dr. S. Jiang from UCOON for sharing the real measurements for WSI setup and beneficial discussions about \cite{jiang2018transform} as well as the following research. \ifCLASSOPTIONcaptionsoff \newpage \fi { \bibliographystyle{IEEEtran}
\section{Introduction} Recently, it has become clear that our universe has not only undergone a period of early-time accelerated expansion (inflation), but is also currently in the so-called late-time accelerating epoch (dark energy era). An extremely powerful way to describe the early-time inflation and the late-time acceleration in a unified manner is modified gravity. This approach does not require the introduction of new dark components like the inflaton and dark energy. The unified description of inflation and dark energy is achieved by modifying the gravitational action in the very early universe as well as at very late times (for a review of such models, see \cite{Nojiri:2006ri}). A number of viable modified gravity theories have been suggested. Despite some indications of a possible connection with string/M-theory \cite{Nojiri:2003rz}, such theories remain mainly phenomenological. It is a challenge to investigate their origin from some (not yet constructed) fundamental quantum gravity theory. Among the recent attempts to construct a consistent theory of quantum gravity much attention has been paid to the quite remarkable Ho\v{r}ava-Lifshitz quantum gravity \cite{Horava:2009uw}, which appears to be power-counting renormalizable in four dimensions. In this theory the local Lorentz invariance is abandoned, but it is restored as an approximate symmetry at low energies. Despite its partial success as a candidate for a fundamental theory of gravity, there are a number of unresolved problems related to the detailed balance and projectability conditions, consistency, its general relativity (GR) limit, realistic cosmological applications, the relation to other modified gravities, etc. Since its spatially-flat FRW cosmology \cite{cosmology} is almost the same as in GR, it is difficult to obtain a unified description of the early-time inflation and the late-time acceleration in the standard Ho\v{r}ava-Lifshitz gravity.
Recently the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity has been proposed \cite{Chaichian:2010yi}. Such a modification may be easily related to the traditional modified gravity approach, but turns out to be much richer in terms of the possible cosmological solutions. For instance, the unification of inflation with dark energy seems to be possible in such Ho\v{r}ava-Lifshitz gravity due to the presence of multiple de Sitter solutions. Moreover, on the one hand, there is the hope that the generalization of Ho\v{r}ava-Lifshitz gravity may lead to new classes of renormalizable quantum gravity. On the other hand, one may hope to formulate the dynamical scenario for the Lorentz symmetry violation/restoration, caused by the expansion of the universe, in terms of such a generalized theory. In the present work (section \ref{sec:2}) we propose the most general modified first-order Ho\v{r}ava-Lifshitz-like theory, without higher derivative terms which are normally responsible for the presence of ghosts. The general form of the action in the spatially-flat FRW space-time is found, and the Hamiltonian structure of the action is analyzed in section \ref{sec:3}. As a specific example of such a first-order action we introduce the modified Ho\v{r}ava-Lifshitz $F(R)$ theory which is more general than the model of ref.~\cite{Chaichian:2010yi}. Nevertheless, its spatially-flat FRW cosmology turns out to be the same as for the model \cite{Chaichian:2010yi} (this is not the case for black hole solutions, etc.). Therefore it also coincides with the conventional $F(R)$ spatially-flat cosmology for a specific choice of the parameters. The ultraviolet structure of the new Ho\v{r}ava-Lifshitz $F(R)$ gravity is carefully investigated. It is shown that such models can have very nice ultraviolet behaviour at $z=2$. Moreover, for $z=3$ a large class of renormalizable models is suggested (section \ref{sec:2}).
The Hamiltonian analysis of the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity is presented in section \ref{sec:4}. The fixed-gauge modified Ho\v{r}ava-Lifshitz $F(R)$ gravity is analyzed in section \ref{sec:5}. Section \ref{sec:6} is devoted to the investigation of spatially-flat FRW cosmology for power-law $F(R)$ gravity. The general equation for the de Sitter solutions is obtained. It acquires an extremely simple form for a special choice of parameters, when de Sitter solutions are roots of the equation $F=0$. The existence of multiple de Sitter solutions indicates the principal possibility of attaining the unification of the early-time inflation with the late-time acceleration in the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity. The reconstruction technique is developed for the study of analytical and accelerating FRW cosmologies in power-law models. A number of explicit analytical solutions are presented. It is shown by explicit examples that some of the quintessence/phantom-like cosmologies may develop a future finite-time singularity of any of the known four types, precisely in the same way as for traditional dark energy models. The possible curing of such singularities could be achieved in a similar way as in the case of traditional modified gravity. Some remarks about small corrections to the Newton law are made in section \ref{sec:7}. A summary and outlook are given in the last section \ref{sec:8}. In appendix \ref{appendix} we propose a covariant $F(R)$ gravity that is quite similar to the corresponding Ho\v{r}ava-Lifshitz version but remains a covariant theory. It seems that it could also be made renormalizable. \section{General action for Ho\v{r}ava-Lifshitz-like gravity and renormalizability} \label{sec:2} In this section we propose essentially the most general Ho\v{r}ava-Lifshitz-like gravity action, which does not contain derivatives with respect to the time coordinate higher than the second order. Its ultraviolet properties are discussed.
By using the ADM decomposition \cite{Arnowitt:1962hi} (for reviews and mathematical background, see \cite{ADMreviewmath}), one can write the metric of space-time in the following form: \begin{equation} \label{HLF2} \mathrm{d} s^2 = - N^2 \mathrm{d} t^2 + g^{(3)}_{ij}\left(\mathrm{d} x^i + N^i \mathrm{d} t \right)\left(\mathrm{d} x^j + N^j \mathrm{d} t \right), \quad i,j=1,2,3\, . \end{equation} Here $N$ is called the lapse variable and $N^i$ is the shift 3-vector. Then the scalar curvature $R$ has the following form: \begin{equation} \label{HLF3} R= K^{ij} K_{ij} - K^2 + R^{(3)} + 2 \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) \, . \end{equation} Here $R^{(3)}$ is the three-dimensional scalar curvature defined by the metric $g^{(3)}_{ij}$ and $K_{ij}$ is the extrinsic curvature defined by \begin{equation} \label{HLF4} K_{ij}=\frac{1}{2N}\left(\dot g^{(3)}_{ij}-\nabla^{(3)}_iN_j-\nabla^{(3)}_jN_i\right) \, ,\quad K =K^i_{\ i}\, . \end{equation} $n^\mu$ is the unit vector perpendicular to the three-dimensional space-like hypersurface $\Sigma_t$ defined by $t=\text{constant}$ and $\nabla^{(3)}_i$ is the covariant derivative on the hypersurface $\Sigma_t$. From the determinant of the metric \eqref{HLF2} one obtains $\sqrt{-g} = \sqrt{g^{(3)}} N$. For general Ho\v{r}ava-Lifshitz-like gravity models, we do not require the full diffeomorphism-invariance, but only invariance under ``foliation-preserving'' diffeomorphisms: \begin{equation} \label{fpd1} \delta x^i=\zeta^i(t,\bm{x})\,, \, \quad \delta t=f(t)\, . 
\end{equation} Therefore, there are many invariants or covariant quantities made from the metric, in particular $K$, $K_{ij}$, $\nabla^{(3)}_i K_{jk}$, $\cdots$, $\nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} K_{jk}$, $\cdots$, $R^{(3)}$, $R^{(3)}_{ij}$, $R^{(3)}_{ijkl}$, $\nabla^{(3)}_i R^{(3)}_{jklm}$, $\cdots$, $\nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} R^{(3)}_{jklm}$, $\cdots$, $\nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right)$, etc. Then the general consistent action composed of invariants constructed from such covariant quantities, \begin{eqnarray} \label{HLF26} S_\mathrm{gHL} &=& \int \mathrm{d}^4 x \sqrt{g^{(3)}} N F \left(g^{(3)}_{ij}, K, K_{ij}, \nabla^{(3)}_i K_{jk}, \cdots, \nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} K_{jk}, \cdots, \right. \nonumber \\ && \left. R^{(3)}, R^{(3)}_{ij}, R^{(3)}_{ijkl}, \nabla^{(3)}_i R^{(3)}_{jklm}, \cdots, \nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} R^{(3)}_{jklm}, \cdots, \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) \right)\, , \end{eqnarray} serves as a rather general action for the generalized Ho\v{r}ava-Lifshitz gravity. Note that one can also include the (cosmological) constant in the above action. Here it has been assumed that the action does not contain derivatives higher than the second order with respect to the time coordinate $t$. In the usual $F(R)$ gravity, there appears an extra scalar mode, since the equations given by the variation over the metric tensor contain the fourth derivative. By assuming that the action does not contain derivatives higher than the second order with respect to the time coordinate $t$, we avoid any extra modes beyond the single scalar mode that appears in the usual $F(R)$ gravity.
For example, if we consider the action containing the terms like \begin{equation} \label{example} \left(\nabla^\mu \nabla_\mu\right)^{n+1} R^{(3)}\, ,\quad \left(\nabla^\rho \nabla_\rho \right)^n \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right)\, , \end{equation} the equations given by the variation over the metric tensor contain the fifth or higher derivatives (for a review of Hamiltonian structure of higher derivative modified gravity, see \cite{Woodard:2007}). If we define new fields recursively \begin{equation} \label{example2} \chi_R^{(m+1)} = \nabla^\mu \nabla_\mu \chi_R^{(m)}\, ,\quad \chi_R^{(0)} = R^{(3)}\, ,\quad \chi_n^{(m+1)} = \nabla^\mu \nabla_\mu \chi_n^{(m)}\, ,\quad \chi_n^{(0)} = \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) \, , \end{equation} the equations can be rewritten so that only second derivatives appear. The scalar fields in (\ref{example2}), however, often become ghost fields that generate states of negative norm. Thus, we only consider actions of the form given by (\ref{HLF26}) in this paper. In the Ho\v{r}ava-Lifshitz-type gravity, we assume that $N$ can only depend on the time coordinate $t$, which is called the \emph{projectability condition}. The reason is that the Ho\v{r}ava-Lifshitz gravity does not have the full diffeomorphism-invariance, but is invariant only under the foliation-preserving diffeomorphisms (\ref{fpd1}). If $N$ depended on the spatial coordinates, we could not fix $N$ to be unity ($N=1$) by using the foliation-preserving diffeomorphisms. Moreover, there are strong reasons to suspect that the non-projectable version of the Ho\v{r}ava-Lifshitz gravity is generally inconsistent \cite{nonprojectable}. Therefore we prefer to assume that $N$ is projectable. 
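The reduction of higher time derivatives to second order through auxiliary fields, as in (\ref{example2}), can be illustrated on a one-dimensional toy model. The following sympy sketch (an illustration of the mechanism only, not the gravitational case; the toy Lagrangian $L_1=\frac{1}{2}\ddot q^2$ is our own choice) shows that the auxiliary-field form produces only second-order equations which reproduce the fourth-order equation of motion:

```python
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')(t)
chi = sp.Function('chi')(t)

# toy higher-derivative Lagrangian L1 = (1/2)(q'')^2  ->  equation of motion q'''' = 0
L1 = sp.Rational(1, 2) * sp.diff(q, t, 2) ** 2
eom1 = sp.euler_equations(L1, [q], [t])[0]
assert sp.simplify(eom1.lhs - eom1.rhs - sp.diff(q, t, 4)) == 0

# auxiliary-field form L2 = chi q'' - (1/2) chi^2: only second-order equations appear,
# chi = q'' on shell, and chi'' = 0 reproduces q'''' = 0
L2 = chi * sp.diff(q, t, 2) - sp.Rational(1, 2) * chi ** 2
eom_q, eom_chi = sp.euler_equations(L2, [q, chi], [t])
assert sp.simplify(eom_chi.lhs - eom_chi.rhs - (sp.diff(q, t, 2) - chi)) == 0
assert sp.simplify(eom_q.lhs - eom_q.rhs - sp.diff(chi, t, 2)) == 0
```

As noted above, such auxiliary scalars often carry negative-norm (ghost) states, which is why actions of the form (\ref{HLF26}) are preferred.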
In the FRW space-time with the flat spatial part and the non-trivial lapse $N(t)$, \begin{equation} \label{HLF8} \mathrm{d} s^2 = - N(t)^2 \mathrm{d} t^2 + a(t)^2 \sum_{i=1}^3 \left( \mathrm{d} x^i \right)^2\, , \end{equation} we find \begin{eqnarray} \label{HLF27} && \Gamma^0_{00} = \frac{\dot N}{N}\, , \quad \Gamma^0_{ij} = \frac{a^2 H}{N^2}\delta_{ij}\, , \quad \Gamma^i_{j0} = H\delta^i_{\ j}\, \quad \mbox{other}\ \Gamma^\mu_{\nu\rho} = 0\, , \nonumber \\ && K_{ij} = \frac{a^2H}{N}\delta_{ij}\, ,\quad \nabla^{(3)}_i = 0\, , \quad R^{(3)}_{ijkl}=0\, ,\quad \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) = \frac{3}{a^3 N}\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{a^3 H}{N}\right)\, , \end{eqnarray} where $H=\frac{\dot{a}}{a}$ is the Hubble parameter. Then one gets \begin{eqnarray} \label{HLF27b} && g^{(3)}_{ij}=a^2\delta_{ij}\, , \quad K=\frac{3 H}{N}\, , \quad K_{ij}K^{ij}=3\left(\frac{H}{N}\right)^2\, ,\quad \nabla^{(3)}_i K_{jk}= \cdots = \nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} K_{jk} = \cdots = 0 \, ,\nonumber \\ && R^{(3)}=R^{(3)}_{ij}=R^{(3)}_{ijkl}=\nabla^{(3)}_i R_{jklm}= \cdots = \nabla^{(3)}_{i_1} \nabla^{(3)}_{i_2} \cdots \nabla^{(3)}_{i_n} R^{(3)}_{jklm} = \cdots =0\, , \end{eqnarray} and since $F$ must be a scalar under the spatial rotation, the action (\ref{HLF26}) reduces to \begin{eqnarray} \label{HLF28} S_\mathrm{gHL} &=& \int \mathrm{d}^4 x \sqrt{g^{(3)}} N F \left(\frac{H}{N}, \frac{3}{a^3 N}\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{a^3 H}{N}\right)\right)\, . \end{eqnarray} Therefore, if we consider the FRW cosmology, the function $F$ should depend on only two variables, $\frac{H}{N}$ and $\frac{3}{a^3 N}\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{a^3 H}{N}\right)$. 
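The flat-FRW reduction in (\ref{HLF27b}) can be checked symbolically. The following sympy sketch (a sanity check of $K=3H/N$ and $K_{ij}K^{ij}=3(H/N)^2$, not part of the derivation) treats $a$ and $N$ as arbitrary functions of $t$:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)
N = sp.Function('N')(t)
H = sp.diff(a, t) / a  # Hubble parameter

# flat FRW: g^(3)_ij = a^2 delta_ij, and with N^i = 0 the extrinsic
# curvature is K_ij = (1/2N) d/dt g^(3)_ij
g3 = a**2 * sp.eye(3)
K_dn = sp.diff(g3, t) / (2 * N)

K = (g3.inv() * K_dn).trace()                     # K = g^(3)ij K_ij
KK = (g3.inv() * K_dn * g3.inv() * K_dn).trace()  # K_ij K^ij

assert sp.simplify(K - 3 * H / N) == 0
assert sp.simplify(KK - 3 * (H / N) ** 2) == 0
```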
As a specific example of the above general theory, we may consider the following modified Ho\v{r}ava-Lifshitz $F(R)$ gravity, whose action is given by \begin{equation} \label{HLF11} S_{F(\tilde R)} = \frac{1}{2\kappa^2}\int \mathrm{d}^4 x \sqrt{g^{(3)}} N F(\tilde R)\, , \quad \tilde R \equiv K^{ij} K_{ij} - \lambda K^2 + 2 \mu \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) - \mathcal{L}_R^{(3)}\left(g^{(3)}_{ij}\right) \, . \end{equation} Here $\lambda$ and $\mu$ are constants and $\mathcal{L}_R^{(3)}$ is a function of the three-dimensional metric $g^{(3)}_{ij}$ and the covariant derivatives $\nabla^{(3)}_i$ defined by this metric. Note that this action (\ref{HLF11}) is more general than the one introduced in ref.~\cite{Chaichian:2010yi} due to the presence of the last term in $\tilde{R}$. We normalize $F(\tilde R)$ or redefine $\kappa^2$ so that \begin{equation} \label{HLF11b} F'(0) = 1\, . \end{equation} In \cite{Horava:2009uw}, $\mathcal{L}_R^{(3)}$ is chosen to be \begin{equation} \label{HLFrg0} \mathcal{L}_R^{(3)} \left(g^{(3)}_{ij}\right) = E^{ij}\mathcal{G}_{ijkl} E^{kl}\, , \end{equation} where $\mathcal{G}_{ijkl}$ is the ``generalized De~Witt metric'' or ``super-metric'' (``metric on the space of metrics''), \begin{equation} \label{HLF6} \mathcal{G}^{ijkl} = \frac{1}{2}\left( g^{(3) ik} g^{(3) jl} + g^{(3) il} g^{(3) jk} \right) - \lambda g^{(3) ij} g^{(3) kl}\, , \end{equation} defined on the three-dimensional hypersurface $\Sigma_t$.
$E^{ij}$ can be defined by the so-called \emph{detailed balance condition} by using an action $W[g^{(3)}_{kl}]$ on the hypersurface $\Sigma_t$ \begin{equation} \label{HLF7} \sqrt{\g}E^{ij}=\frac{\delta W[g^{(3)}_{k l}]}{\delta g^{(3)}_{ij}}\, , \end{equation} and the inverse of $\mathcal{G}^{ijkl}$ is written as \begin{equation} \label{HLF7b0} \mc{G}_{ijkl} = \frac{1}{2}\left( g^{(3)}_{ik} g^{(3)}_{jl} + g^{(3)}_{il} g^{(3)}_{jk} \right) - \tilde{\lambda} g^{(3)}_{ij} g^{(3)}_{kl}\, ,\quad \tilde{\lambda} = \frac{\lambda}{3 \lambda - 1}\, . \end{equation} The action $W[g^{(3)}_{kl}]$ is assumed to be defined by the metric and the covariant derivatives on the hypersurface $\Sigma_t$. There is an anisotropy between space and time in the Ho\v{r}ava-Lifshitz gravity. In the ultraviolet (high energy) region, the time coordinate and the spatial coordinates are assumed to behave as \begin{equation} \label{HLF7b} \bm{x}\to b\bm{x}\, ,\quad t\to b^z t\, ,\quad z=2,3,\cdots\, , \end{equation} under the scale transformation. In \cite{Horava:2009uw}, $W[g^{(3)}_{kl}]$ is explicitly given for the case $z=2$, \begin{equation} \label{HLF7c} W=\frac{1}{\kappa_W^2}\int \mathrm{d}^3\vect{x}\,\sqrt{\g}\left(\R-2\Lambda_W^{}\right)\, , \end{equation} and for the case $z=3$, \begin{equation} \label{HLF7d} W=\frac{1}{w^2}\int_{\Sigma_t}\omega_3(\Gamma)\, , \end{equation} where \begin{equation} \label{HLF7e} \omega_3(\Gamma) = \mathrm{Tr}\left(\Gamma\wedge \mathrm{d}\Gamma+\frac{2}{3}\Gamma\wedge\Gamma \wedge\Gamma\right) \equiv \varepsilon^{ijk}\left(\Gamma^{m}_{il}\partial_j \Gamma^{l}_{km}+\frac{2}{3}\Gamma^{n}_{il}\Gamma^{l}_{jm} \Gamma^{m}_{kn}\right)\mathrm{d}^3\vect{x}\, . \end{equation} Here $\kappa_W$ in (\ref{HLF7c}) is a coupling constant of dimension $-1/2$ and $w^2$ in (\ref{HLF7d}) is a dimensionless coupling constant. A general $E^{ij}$ consists of all contributions to $W$ up to the chosen value of $z$.
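One can verify directly that (\ref{HLF7b0}) is the inverse of the super-metric (\ref{HLF6}) on symmetric tensors, i.e. $\mathcal{G}^{ijkl}\mathcal{G}_{klmn}=\frac{1}{2}\left(\delta^i_m\delta^j_n+\delta^i_n\delta^j_m\right)$, which is where the relation $\tilde{\lambda}=\lambda/(3\lambda-1)$ comes from. A small sympy check (the flat spatial metric is taken purely for simplicity; the identity is tensorial):

```python
import sympy as sp

lam = sp.symbols('lambda')
lam_t = lam / (3 * lam - 1)  # tilde-lambda of (HLF7b0)
d = sp.eye(3)

def G_up(i, j, k, l):   # super-metric (HLF6) for g^(3)_ij = delta_ij
    return sp.Rational(1, 2) * (d[i, k] * d[j, l] + d[i, l] * d[j, k]) - lam * d[i, j] * d[k, l]

def G_dn(i, j, k, l):   # its claimed inverse (HLF7b0)
    return sp.Rational(1, 2) * (d[i, k] * d[j, l] + d[i, l] * d[j, k]) - lam_t * d[i, j] * d[k, l]

# the contraction over kl must be the identity on symmetric tensors
for i in range(3):
    for j in range(3):
        for m in range(3):
            for n in range(3):
                c = sum(G_up(i, j, k, l) * G_dn(k, l, m, n)
                        for k in range(3) for l in range(3))
                ident = sp.Rational(1, 2) * (d[i, m] * d[j, n] + d[i, n] * d[j, m])
                assert sp.simplify(c - ident) == 0
```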
The original motivation for the detailed balance condition is its ability to simplify the quantum behaviour and renormalization properties of theories that respect it. Otherwise there is no a priori physical reason to restrict $\mc{L}_R^{(3)}$ to be defined by (\ref{HLFrg0}). In the following we abandon the detailed balance condition and consider a more general form of $\mc{L}_R^{(3)}$, since the detailed balance condition is not always relevant even for the renormalizability problem. We now investigate the renormalizability and the unitarity of the model (\ref{HLF11}). For this purpose, by introducing an auxiliary field $A$, we rewrite the action (\ref{HLF11}) in the following form: \begin{equation} \label{HLFrg1} S_{F(\tilde R)} = \frac{1}{2\kappa^2}\int \mathrm{d}^4 x \sqrt{g^{(3)}} N \left\{F'(A) (\tilde R - A) + F(A)\right\}\, . \end{equation} For simplicity, the following gauge condition is used: \begin{equation} \label{HLFrg2} N=1\, ,\quad N^i = 0\, . \end{equation} Then one finds \begin{eqnarray} \label{HLFrg2b} && \Gamma^0_{ij} = - \frac{1}{2} {\dot g}^{(3)}_{ij}\, ,\quad \Gamma^i_{j0} = \Gamma^i_{0j} = \frac{1}{2} g^{(3) ik} {\dot g}^{(3)}_{kj}\, ,\quad \Gamma^{i}_{jk} = \Gamma^{(3) i}_{jk} \equiv \frac{1}{2} g^{(3) il}\left( g^{(3)}_{lk,j} + g^{(3)}_{jl,k} - g^{(3)}_{jk,l} \right)\, ,\nonumber \\ && \mbox{other components of\ } \Gamma^\mu_{\nu\rho} = 0\, , \end{eqnarray} and therefore \begin{equation} \label{HLFrg3} \left(n^\mu\right) = \left( 1, 0, 0, 0 \right)\, , \quad K_{ij}=\frac{1}{2}\dot g^{(3)}_{ij}\, , \quad \nabla_\mu \left( n^\mu \nabla_\nu n^\nu - n^\nu \nabla_\nu n^\mu \right) = \frac{1}{2}\partial_0 \left(g^{(3) ij}{\dot g}^{(3)}_{ij} \right) + \frac{1}{4}\left(g^{(3) ij}{\dot g}^{(3)}_{ij} \right)^2\, .
\end{equation} We define a new field by \begin{equation} \label{varphi} \varphi \equiv \frac{1}{3}\ln F'(A) \, , \end{equation} which can be algebraically solved as $A=A(\varphi)$, so that \begin{equation} \label{Avarphi} \varphi = \frac{1}{3}\ln F'(A(\varphi)) \quad\Leftrightarrow\quad F'(A(\varphi)) = {\rm e}^{3\varphi} \, . \end{equation} The spatial metric is redefined as \begin{equation} \label{HLFrg4} g^{(3)}_{ij} = {\rm e}^{-\varphi} {\bar g}^{(3)}_{ij}\, . \end{equation} Then the action (\ref{HLFrg1}) has the following form: \begin{eqnarray} \label{HLFrg1b} S_{F(\tilde R)} &=& \frac{1}{2\kappa^2}\int \mathrm{d}^4 x \sqrt{{\bar g}^{(3)}} \left\{ \frac{1}{4}{\bar g}^{(3) ij}{\bar g}^{(3) kl}\dot{\bar g}^{(3)}_{ik} \dot{\bar g}^{(3)}_{jl} - \frac{\lambda}{4} \left( {\bar g}^{(3) ij}\dot{\bar g}^{(3)}_{ij} \right)^2 \right. \nonumber \\ && \left. + \left( - \frac{1}{2} + \frac{3\lambda}{2} - \frac{3\mu}{2} \right) {\bar g}^{(3) ij}\dot{\bar g}^{(3)}_{ij} \dot\varphi + \left( \frac{3}{4} - \frac{9\lambda}{4} + \frac{9\mu}{2} \right) {\dot\varphi}^2 + \bar{\mathcal{L}}_R^{(3)}\left( {\bar g}^{(3)}_{ij}, \varphi \right) - V(\varphi) \right\}\, . \end{eqnarray} Here \begin{equation} \label{HLFrg5} \bar{\mathcal{L}}_R^{(3)}\left({\bar g}^{(3)}_{ij}, \varphi \right) \equiv \mathcal{L}_R^{(3)}\left( {\rm e}^{-\varphi} {\bar g}^{(3)}_{ij} \right)\, ,\quad V(\varphi) \equiv A\left(\varphi\right)F'\left(A\left(\varphi\right)\right) - F\left(A\left(\varphi\right)\right)\, . \end{equation} If we insert $\varphi=0$ into the action (\ref{HLFrg1b}), the standard Ho\v{r}ava-Lifshitz gravity emerges. On the other hand, if we choose \begin{equation} \label{HLFrg6} \mu = \lambda - \frac{1}{3} \, , \end{equation} $\dot \varphi$ decouples from ${\dot g}^{(3)}_{ij}$. When the decoupling (\ref{HLFrg6}) is assumed and \begin{equation} \label{HLFrg7} \lambda > \frac{1}{3} \, , \end{equation} $\varphi$ becomes canonical and the theory becomes unitary.
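The decoupling condition (\ref{HLFrg6}) and the unitarity bound (\ref{HLFrg7}) follow from elementary algebra on the coefficients in (\ref{HLFrg1b}); a short sympy check (illustrative only):

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')

# coefficient of the cross term g-dot * phi-dot in (HLFrg1b)
cross = sp.Rational(-1, 2) + 3 * lam / 2 - 3 * mu / 2
mu_dec = sp.solve(cross, mu)[0]
assert sp.simplify(mu_dec - (lam - sp.Rational(1, 3))) == 0  # mu = lambda - 1/3

# phi kinetic coefficient once the cross term is removed
kin = sp.Rational(3, 4) - 9 * lam / 4 + 9 * mu_dec / 2
assert sp.simplify(kin - (9 * lam / 4 - sp.Rational(3, 4))) == 0
# kin = (9 lambda - 3)/4 > 0 exactly when lambda > 1/3, so phi is canonical
```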
In the case \begin{equation} \label{HLFrg8} \lambda= \frac{1}{3} \, , \end{equation} the ${\dot\varphi}^2$ term vanishes and therefore $\varphi$ becomes non-dynamical, i.e. an auxiliary field. Eq. (\ref{HLFrg6}) tells us that $\mu=0$ when $\lambda=1/3$. In order to clarify the renormalizability issue, we need to explicitly construct $\mathcal{L}_R^{(3)} \left(g^{(3)}_{ij}\right)$ in (\ref{HLF11}). As a model corresponding to $z=2$ in (\ref{HLF7b}), which is still not renormalizable, we may propose \begin{equation} \label{HLFrg9} \mathcal{L}_R^{(3)} \left(g^{(3)}_{ij}\right) = c_2 \left( R^{(3) ij}R^{(3)}_{ij} + \alpha \left(R^{(3)}\right)^2 \right)\, , \end{equation} where $c_2$ and $\alpha$ are constants. Since the action (\ref{HLFrg9}) induces higher derivative terms in the propagators, which then behave as $1/\left| \bm{k} \right|^4$ in the high energy region, the ultraviolet behavior is improved, although the theory is still not renormalizable. By the scale transformation (\ref{HLFrg4}), the curvatures are transformed as \begin{eqnarray} \label{HLFrg10} R^{(3)}_{ij} &=& {\bar R}^{(3)}_{ij} + \frac{1}{2}\left( {\bar \nabla}^{(3)}_i {\bar \nabla}^{(3)}_j \varphi + {\bar g}^{(3)}_{ij} {\bar \triangle}^{(3)} \varphi \right) + \frac{1}{4} \left( {\bar \nabla}^{(3)}_i \varphi {\bar \nabla}^{(3)}_j \varphi - {\bar g}^{(3)}_{ij} {\bar g}^{(3) kl} {\bar \nabla}^{(3)}_k \varphi {\bar \nabla}^{(3)}_l \varphi \right)\, , \nonumber \\ R^{(3)} &=& {\rm e}^\varphi \left( {\bar R}^{(3)} + 2 {\bar \triangle}^{(3)} \varphi - \frac{1}{2} {\bar g}^{(3) kl} {\bar \nabla}^{(3)}_k \varphi {\bar \nabla}^{(3)}_l \varphi \right)\, . \end{eqnarray} Here ${\bar R}^{(3)}_{ij}$, ${\bar R}^{(3)}$, ${\bar \nabla}^{(3)}_i$, and ${\bar \triangle}^{(3)}$ are the Ricci curvature, the scalar curvature, the covariant derivative, and the Laplacian defined by the metric ${\bar g}^{(3)}_{ij}$, respectively.
Then if we consider the perturbation from the flat background, where $\varphi\sim 0$ due to (\ref{HLF11b}), \begin{equation} \label{HLFrg11} {\bar g}^{(3)}_{ij} = \delta_{ij} + {\bar h}^{(3)}_{ij}\, ,\quad \left| {\bar h}^{(3)}_{ij} \right|,\, \left| \varphi \right| \ll 1\, , \end{equation} we find \begin{eqnarray} \label{HLFrg12} && \int \mathrm{d}^4 x \sqrt{g^{(3)}} N \bar {\mathcal{L}}_R^{(3)}\left( {\bar g}^{(3)}_{ij}, \varphi \right) \nonumber \\ && = \int \mathrm{d}^4 x \left[ \frac{1}{4} \left\{ \partial_i \partial^k {\bar h}^{(3)}_{kj} + \partial_j \partial^k {\bar h}^{(3)}_{ki} - \partial_i \partial_j {\bar h}^{(3)k}_k - \triangle {\bar h}^{(3)}_{ij} \right\} \left\{ \partial^i \partial^l {\bar h}^{(3)j}_l + \partial^j \partial^l {\bar h}^{(3)i}_l - \partial^i \partial^j {\bar h}^{(3)l}_l - \triangle {\bar h}^{(3) ij} \right\} \right. \nonumber \\ && \left. + \alpha \left\{ \partial_i \partial^j {\bar h}^{(3)}_{ij} - \triangle {\bar h}^{(3)k}_k \right\}^2 + \left( - \frac{1}{2} + 4\alpha \right) \left\{ \partial_i \partial^j {\bar h}^{(3)}_{ij} - \triangle {\bar h}^{(3)k}_k \right\} \triangle \varphi + \left( \frac{1}{2} + 4\alpha \right) \left( \triangle \varphi \right)^2 \right]\, . \end{eqnarray} Therefore if one chooses \begin{equation} \label{HLFrg13} \alpha = \frac{1}{8}\, , \end{equation} $\varphi$ decouples with ${\bar h}^{(3)}_{ij}$. Eq. (\ref{HLFrg12}) shows that the propagators of $\varphi$ and ${\bar h}^{(3)}_{ij}$ behave as $1/\left| \bm{k} \right|^4$ in the high energy region, so that the ultraviolet behavior is improved. Similarly, a model corresponding to $z=3$ in (\ref{HLF7b}), which could be power-counting renormalizable, can be obtained by choosing \begin{equation} \label{HLFrg14} \mathcal{L}_R^{(3)} \left(g^{(3)}_{ij}\right) = c_3 \left( {\bar \nabla}^{(3) k} R^{(3) ij} {\bar \nabla}^{(3)}_k R^{(3)}_{ij} + \frac{1}{8} {\bar \nabla}^{(3) k} R^{(3)} {\bar \nabla}^{(3)}_k R^{(3)} \right)\, . 
\end{equation} For the $z=3$ model, the dimension of $\varphi$ vanishes and therefore all the interactions in $\bar{\mathcal{L}}_R^{(3)}\left( {\bar g}^{(3)}_{ij}, \varphi \right)$ and $V(\varphi)$ in (\ref{HLFrg1b}) become power-counting renormalizable. The propagators of $\varphi$ and ${\bar h}^{(3)}_{ij}$ behave as $1/\left| \bm{k} \right|^6$ in the high energy region, so that the ultraviolet behavior is improved to be renormalizable. We have shown that by requiring (\ref{HLFrg6}) and (\ref{HLFrg13}), the scalar field decouples from the gravity modes in the Einstein frame. The decoupling itself is not directly related to the renormalizability, but the decoupling makes it much easier to discuss the renormalizability of the model. The choice (\ref{HLFrg14}) for $\mathcal{L}_R^{(3)}$ gives the renormalizable model. The renormalizability does not essentially depend on the functional form of $F(R)$. \section{Hamiltonian analysis of the general action in the FRW space-time} \label{sec:3} Let us analyze the proposed general action \eqref{HLF28} in the FRW space-time \eqref{HLF8} with the flat spatial part and the non-trivial lapse $N=N(t)$. Introducing four auxiliary variables $\alpha, A, \beta, B$ enables us to write the action \eqref{HLF28} as \begin{equation} \label{action_FRW} S_\mr{gHL} = \int \mathrm{d}^4 x \sqrt{\g} N \left[ \alpha \left( A - \frac{H}{N} \right) + \beta \left( B - \frac{3}{a^3 N}\frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{a^3 H}{N}\right) \right) + F \left( A, B \right) \right] \, . \end{equation} The variations of the action \eqref{action_FRW} with respect to $\alpha$ and $\beta$ yield \begin{equation} \label{AandB} A = \frac{H}{N} \quad\text{and}\quad B = \frac{3}{a^3 N}\frac{\mathrm{d}}{\mathrm{d} t} \left(\frac{a^3 H}{N}\right)\, ,\end{equation} respectively.
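The auxiliary variables in \eqref{action_FRW} work in the standard Lagrange-multiplier fashion: eliminating $\alpha$, $\beta$, $A$, $B$ through their equations of motion returns $F$ evaluated on the original variables. A toy sympy check (the test function $F=A+B^2$ is an arbitrary choice for illustration, and $x$, $y$ stand in for $H/N$ and its companion variable):

```python
import sympy as sp

x, y, alpha, beta, A, B = sp.symbols('x y alpha beta A B')
F = A + B**2  # arbitrary test function, for illustration only

# Lagrange-multiplier form: alpha*(A - x) + beta*(B - y) + F(A, B)
L = alpha * (A - x) + beta * (B - y) + F

# stationarity in alpha, beta enforces A = x, B = y;
# stationarity in A, B fixes the multipliers alpha = -F_A, beta = -F_B
sol = sp.solve([sp.diff(L, v) for v in (alpha, beta, A, B)],
               (alpha, beta, A, B), dict=True)[0]

assert sol[A] == x and sol[B] == y
assert sp.simplify(L.subs(sol) - F.subs({A: x, B: y})) == 0  # back to F(x, y)
```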
Integration by parts permits the removal of the second-order time derivative of $a$ and the time derivative of $N$, assuming the boundary terms vanish, but at the price that $\beta$ becomes a dynamical variable. Thus the action \eqref{action_FRW} can be written as \begin{equation} \label{action_FRW_final} S_\mr{gHL} = \int \mathrm{d}^4 x \sqrt{\g} N \left[ \alpha \left( A - \frac{H}{N} \right) + \beta B + \frac{3\dot{\beta} H}{N^2} + F \left( A, B \right) \right] \, . \end{equation} The action \eqref{action_FRW_final} is equivalent to \eqref{action_FRW} and consequently to the original action \eqref{HLF28}. The advantage of the action \eqref{action_FRW_final} over \eqref{HLF28} is the simpler dependence on the variables $a$ and $N$, which will be crucially important in the following Hamiltonian analysis. For the Hamiltonian analysis of constrained systems and their quantization we refer to the monographs \cite{Dirac:1964,Chaichian:1984,Gitman:1990,Henneaux:1994}. In the Hamiltonian formalism the generalized coordinates $\g_{ij}$, $N$, $\alpha$, $A$, $\beta$ and $B$ of the action \eqref{action_FRW_final} have the canonically conjugate momenta $\pi^{ij}$, $\pi_N$, $\pi_\alpha$, $\pi_A$, $\pi_\beta$ and $\pi_B$, respectively. We consider $N$ to be projectable, $N = N(t)$, and therefore also the momentum $\pi_N = \pi_N(t)$ is constant on the hypersurface $\Sigma_t$ for each $t$.
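The step from \eqref{action_FRW} to \eqref{action_FRW_final} is a single integration by parts in $t$; the two integrands indeed differ by a total time derivative, as the following sympy sketch confirms:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)
N = sp.Function('N')(t)
beta = sp.Function('beta')(t)
H = sp.diff(a, t) / a

# beta coupling in (action_FRW):  sqrt(g) N * beta * (-3/(a^3 N)) d/dt(a^3 H / N)
orig = -3 * beta * sp.diff(a**3 * H / N, t)
# after integration by parts, as in (action_FRW_final):  sqrt(g) N * 3 beta-dot H / N^2
ibp = 3 * sp.diff(beta, t) * a**3 * H / N
# boundary term that is dropped
boundary = sp.diff(3 * beta * a**3 * H / N, t)

assert sp.simplify(orig - (ibp - boundary)) == 0
```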
The Poisson brackets are postulated in the form (equal time $t$ is understood) \begin{eqnarray} \label{PB} &&\{ \g_{ij}(\vect{x}), \pi^{kl}(\vect{y}) \} = \frac{1}{2} \left( \delta_i^k \delta_j^l + \delta_i^l \delta_j^k \right) \delta(\vect{x} - \vect{y}) \,,\quad \{ N, \pi_N \} = 1 \, ,\nonumber \\ &&\{ \alpha(\vect{x}), \pi_\alpha(\vect{y}) \} = \delta(\vect{x} - \vect{y}) \,, \quad \{ A(\vect{x}), \pi_A(\vect{y}) \} = \delta(\vect{x} - \vect{y})\,,\nonumber \\ &&\{ \beta(\vect{x}), \pi_\beta(\vect{y}) \} = \delta(\vect{x} - \vect{y}) \,, \quad \{ B(\vect{x}), \pi_B(\vect{y}) \} = \delta(\vect{x} - \vect{y})\, , \end{eqnarray} with all the other Poisson brackets vanishing. We are considering the FRW metric \eqref{HLF8} with the flat spatial part $\g_{ij} = a^2 \delta_{ij}$ and therefore the Poisson bracket for the scale factor $a$ and the momenta conjugate to the 3-metric takes the form \begin{equation} \label{PB_for_a} \int\mathrm{d}^3 \vect{y}\, \{ a, \pi^{ij}(\vect{y}) \} = \frac{\delta^{ij}}{2a} \, . \end{equation} Let us find the momenta and the primary constraints. The action \eqref{action_FRW_final} does not depend on the time derivative of $N$, $\alpha$, $A$ or $B$. Thus we have the primary constraints \begin{equation} \label{p_constraints_FRW} \Phi_1 \equiv \pi_N \approx 0\, ,\quad \Phi_2(\vect{x}) \equiv \pi_\alpha(\vect{x}) \approx 0\, ,\quad \Phi_3(\vect{x}) \equiv \pi_A(\vect{x}) \approx 0\, , \quad \Phi_4(\vect{x}) \equiv \pi_B(\vect{x}) \approx 0 \, . \end{equation} The momenta conjugated to $\beta$ and $\g_{ij}$ are \begin{eqnarray} \pi_\beta &=& \frac{\delta S_\mr{gHL}}{\delta \dot{\beta}} = \frac{3a^3 H}{N} \, , \label{pi_beta}\\ \pi^{ij} &=& \frac{\delta S_\mr{gHL}}{\delta \dg_{ij}} = \frac{a}{6} \left( -\alpha + \frac{3\dot{\beta}}{N} \right) \delta^{ij} \, , \label{pi^ij} \end{eqnarray} respectively. The ``velocities'' $\dot\beta$ and $\dg_{ij}$ can be solved in terms of the canonical variables, so there are no more primary constraints. 
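The momenta \eqref{pi_beta} and \eqref{pi^ij} follow by differentiating the Lagrangian density of \eqref{action_FRW_final} with respect to the velocities. A sympy cross-check, treating $\dot a$ and $\dot\beta$ as independent symbols (since $\dot g_{ij} = 2a\dot a\,\delta_{ij}$, each diagonal component of $\pi^{ij}$ is $(\partial L/\partial\dot a)/(6a)$):

```python
import sympy as sp

a, N, alpha, beta, A, B, adot, betadot = sp.symbols('a N alpha beta A B adot betadot')
F = sp.Function('F')(A, B)
H = adot / a

# Lagrangian density of (action_FRW_final): sqrt(g^(3)) N [ ... ] with sqrt(g^(3)) = a^3
L = a**3 * N * (alpha * (A - H / N) + beta * B + 3 * betadot * H / N**2 + F)

pi_beta = sp.diff(L, betadot)
assert sp.simplify(pi_beta - 3 * a**3 * H / N) == 0          # eq. (pi_beta)

pi_diag = sp.diff(L, adot) / (6 * a)                          # diagonal entry of pi^ij
assert sp.simplify(pi_diag - (a / 6) * (-alpha + 3 * betadot / N)) == 0  # eq. (pi^ij)
```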
Then we define the Hamiltonian \begin{equation} \label{H} H = \int\mathrm{d}^3 \vect{x} \left( \pi^{ij} \dg_{ij} + \pi_\beta \dot{\beta} \right) - L = \int\mathrm{d}^3 \vect{x} N \mc{H} \, , \end{equation} where the Lagrangian $L$ is defined by \eqref{action_FRW_final}, $S_\mr{gHL} = \int \mathrm{d} t L$, and the so-called Hamiltonian constraint is found to be \begin{equation} \mc{H} = \frac{\pi_\beta}{3} \left( \frac{2}{a} \sum_{i=1}^3 \pi^{ii} + \alpha \right) - a^3 \left( \alpha A + \beta B + F(A, B) \right) \, . \end{equation} The primary constraints \eqref{p_constraints_FRW} can be included into the Hamiltonian \eqref{H} by using the Lagrange multipliers $\lambda_k$, $k=1,2,3,4$. We define the total Hamiltonian by \begin{equation} \label{H_T_FRW} H_T = H + \lambda_1 \Phi_1 + \sum_{n=2}^4 \int\mathrm{d}^3 \vect{x} \lambda_n(\vect{x}) \Phi_n(\vect{x}) \, . \end{equation} Note that there is no space integral over the product $\lambda_1 \Phi_1 = \lambda_1 \pi_N$, since these variables depend only on the time coordinate $t$. The consistency of the system requires that every constraint has to be preserved under time evolution. Since the Poisson brackets of the primary constraints \eqref{p_constraints_FRW} are zero, \begin{equation} \{ \Phi_k, \Phi_l \} = 0 \, ,\ k,l \in \{1,2,3,4\} \, , \end{equation} the time evolution of the primary constraints is determined by the Hamiltonian $H$ alone \begin{equation} \label{p_time_evo} \dot{\Phi}_k = \{ \Phi_k, H_T \} = \{ \Phi_k, H \} \, ,\ k=1,2,3,4 \, . 
\end{equation} Thus the following time derivatives of the primary constraints have to vanish: \begin{eqnarray} \label{primary_preserved} && \dot{\Phi}_1 = \dot{\pi}_N = \{ \pi_N, H \} = - \int\mathrm{d}^3 \vect{x} \mc{H} \, ,\nonumber \\ && \dot{\Phi}_2 = \dot{\pi}_\alpha = \{ \pi_\alpha, H \} = N \left( -\frac{\pi_\beta}{3} + a^3 A \right) \, ,\nonumber \\ && \dot{\Phi}_3 = \dot{\pi}_A = \{ \pi_A, H \} = N a^3 \left( \alpha + \frac{\partial F(A, B)}{\partial A} \right) \, ,\nonumber \\ && \dot{\Phi}_4 = \dot{\pi}_B = \{ \pi_B, H \} = N a^3 \left( \beta + \frac{\partial F(A, B)}{\partial B} \right) \, . \end{eqnarray} Since none of these expressions \eqref{primary_preserved} vanish due to the primary constraints \eqref{p_constraints_FRW}, we must impose the secondary constraints: \begin{eqnarray} \label{s_constraints_FRW} \Phi_0 &\equiv& \int\mathrm{d}^3 \vect{x} \mc{H} \approx 0 \, ,\nonumber \\ \Phi_5(\vect{x}) &\equiv& - \frac{\pi_\beta}{3} + a^3 A \approx 0 \, ,\nonumber \\ \Phi_6(\vect{x}) &\equiv& \alpha + \frac{\partial F(A, B)}{\partial A} \approx 0 \, ,\nonumber \\ \Phi_7(\vect{x}) &\equiv& \beta + \frac{\partial F(A, B)}{\partial B} \approx 0 \, . \end{eqnarray} Here the position argument $\vect{x}$ has been omitted on the right-hand side of the local constraints. Note that neither $N$ nor $a$ can be constrained to vanish, since they are the essential physical quantities in this theory. Here the actual Hamiltonian constraint $\Phi_0$ is global due to the projectability condition, $N=N(t)$. Note that the Hamiltonian (\ref{H}) is simply this constraint multiplied by $N$, i.e. \begin{equation} \label{H_as_constraint} H = N \Phi_0 \, . \end{equation} Also the secondary constraints \eqref{s_constraints_FRW} have to be preserved under time evolution.
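The right-hand sides of \eqref{primary_preserved} are, up to the overall lapse factor $N$, just $-\partial\mathcal{H}/\partial q$ for $q=\alpha, A, B$, which is how the secondary constraints \eqref{s_constraints_FRW} arise. A sympy check on the Hamiltonian density (the momentum trace $\sum_i\pi^{ii}$ is kept as a single symbol, and the field-theoretic delta functions are suppressed):

```python
import sympy as sp

a, N, alpha, beta, A, B, pibeta, trpi = sp.symbols('a N alpha beta A B pibeta trpi')
F = sp.Function('F')(A, B)

# Hamiltonian density: H = (pi_beta/3)(2 tr(pi)/a + alpha) - a^3 (alpha A + beta B + F)
Hd = (pibeta / 3) * (2 * trpi / a + alpha) - a**3 * (alpha * A + beta * B + F)

# {pi_q, H} = -dH/dq reproduces Phi_5, a^3 Phi_6 and a^3 Phi_7
assert sp.simplify(-sp.diff(Hd, alpha) - (-pibeta / 3 + a**3 * A)) == 0
assert sp.simplify(-sp.diff(Hd, A) - a**3 * (alpha + sp.diff(F, A))) == 0
assert sp.simplify(-sp.diff(Hd, B) - a**3 * (beta + sp.diff(F, B))) == 0
```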
The time evolution of the secondary constraints is \begin{equation} \label{s_time_evo} \dot{\Phi}_m = \{ \Phi_m, H_T \} = N \{ \Phi_m, \Phi_0 \} + \sum_{n=2}^4 \int\mathrm{d}^3 \vect{y}\, \lambda_n(\vect{y}) \{ \Phi_m, \Phi_n(\vect{y}) \} \, ,\ m=0,5,6,7 \, , \end{equation} where we have used \eqref{H_as_constraint} and the fact that none of the constraints $\Phi_j$, $j=0,1,2,\ldots,7$, depend on the lapse $N$, and that the secondary constraints (\ref{s_constraints_FRW}) do not depend on $\pi_N$. For the global Hamiltonian constraint $\Phi_0$ we find the following Poisson brackets with the primary constraints \eqref{p_constraints_FRW} \begin{equation} \label{Phi0_PBs} \{ \Phi_0, \Phi_2(\vect{x}) \} = - \Phi_5(\vect{x}) \, ,\quad \{ \Phi_0, \Phi_3(\vect{x}) \} = - a^3 \Phi_6(\vect{x}) \, ,\quad \{ \Phi_0, \Phi_4(\vect{x}) \} = - a^3 \Phi_7(\vect{x}) \, , \end{equation} which all vanish due to the other secondary constraints. Thus, according to \eqref{s_time_evo} and \eqref{Phi0_PBs}, the Hamiltonian constraint $\Phi_0$ is preserved under time evolution, $\dot{\Phi}_0 \approx 0$. For the secondary constraint $\Phi_5$ we obtain the non-vanishing Poisson brackets with the primary constraints \eqref{p_constraints_FRW} and the Hamiltonian constraint $\Phi_0$ \begin{equation} \{\Phi_5(\vect{x}), \Phi_3(\vect{y})\} = a^3 \delta(\vect{x}-\vect{y}) \, ,\quad \{\Phi_5(\vect{x}), \Phi_0\} = - \frac{a^3 B}{3} + 3\pi_\beta A \, . \end{equation} For the next secondary constraint $\Phi_6$ we obtain the non-vanishing Poisson brackets \begin{eqnarray} \{\Phi_6(\vect{x}), \Phi_2(\vect{y})\} &=& \delta(\vect{x}-\vect{y}) \, ,\nonumber \\ \{\Phi_6(\vect{x}), \Phi_3(\vect{y})\} &=& \frac{\partial^2 F(A, B)}{\partial A^2}\, \delta(\vect{x}-\vect{y}) \, ,\nonumber \\ \{\Phi_6(\vect{x}), \Phi_4(\vect{y})\} &=& \frac{\partial^2 F(A, B)}{\partial A \partial B}\, \delta(\vect{x}-\vect{y}) \, .
\end{eqnarray} For the last secondary constraint $\Phi_7$ we obtain the non-vanishing Poisson brackets \begin{eqnarray} \{\Phi_7(\vect{x}), \Phi_3(\vect{y})\} &=& \frac{\partial^2 F(A, B)}{\partial A \partial B}\, \delta(\vect{x}-\vect{y}) \, ,\nonumber \\ \{\Phi_7(\vect{x}), \Phi_4(\vect{y})\} &=& \frac{\partial^2 F(A, B)}{\partial B^2}\, \delta(\vect{x}-\vect{y})\, ,\nonumber \\ \{\Phi_7(\vect{x}), \Phi_0\} &=& \frac{1}{3} \left( \frac{2}{a}\sum_{i=1}^3 \pi^{ii} + \alpha \right) \, . \end{eqnarray} Inserting all these Poisson brackets into \eqref{s_time_evo} gives the tertiary constraints \begin{eqnarray} \dot{\Phi}_5 &=& N \left( - \frac{a^3 B}{3} + 3\pi_\beta A \right) + \lambda_3 a^3 \approx 0 \, .\label{t_constraint1_FRW}\\ \dot{\Phi}_6 &=& \lambda_2 + \lambda_3 \frac{\partial^2 F(A, B)}{\partial A^2} + \lambda_4 \frac{\partial^2 F(A, B)}{\partial A \partial B} \approx 0 \, .\label{t_constraint2_FRW}\\ \dot{\Phi}_7 &=& \frac{N}{3} \left( \frac{2}{a}\sum_{i=1}^3 \pi^{ii} + \alpha \right) + \lambda_3 \frac{\partial^2 F(A, B)}{\partial A \partial B} + \lambda_4 \frac{\partial^2 F(A, B)}{\partial B^2} \approx 0 \, .\label{t_constraint3_FRW} \end{eqnarray} We assume that all the second partial derivatives of $F(A, B)$ do not vanish.\footnote{This is the case for example in the modified Ho\v{r}ava-Lifshitz gravity model $F(\tilde{R}) \propto \tilde{R} + b\tilde{R}^2$ discussed in \cite{Chaichian:2010yi} that corresponds to \[ F(A, B) = F \bigl( (3 - 9\lambda) A^2 + 2\mu B \bigr) \propto b (3 - 9\lambda)^2 A^4 + 4b\mu(3 - 9\lambda) A^2 B + 4b\mu^2 B^2 + (3 - 9\lambda) A^2 + 2\mu B \, ,\nonumber \] so that we would have \[ \frac{\partial^2 F(A, B)}{\partial A^2} \propto 12b (3 - 9\lambda)^2 A^2 + 8b\mu(3 - 9\lambda) B + 2(3 - 9\lambda) \, ,\quad \frac{\partial^2 F(A, B)}{\partial A \partial B} \propto 8b\mu(3 - 9\lambda) A \, ,\quad \frac{\partial^2 F(A, B)}{\partial B^2} \propto 8b\mu^2 \, .
\] } In this case the equations \eqref{t_constraint1_FRW}--\eqref{t_constraint3_FRW} are restrictions on the Lagrange multipliers, constituting an inhomogeneous linear equation for the unknown multipliers $\lambda_i, i=2,3,4$. Since the homogeneous part of this equation has only the null solution $\lambda_2 = \lambda_3 = \lambda_4 = 0$, the most general solution is the solution of the inhomogeneous equation: \begin{eqnarray} \label{lambda234_solved} \lambda_2 &=& N u_2 \equiv - \frac{N}{3} \left( B - \frac{9\pi_\beta A}{a^3} \right) \frac{\partial^2 F(A, B)}{\partial A^2} \nonumber \\ &+& \frac{N}{3} \left[ \frac{2}{a}\sum_{i=1}^3 \pi^{ii} + \alpha + \left( B - \frac{9\pi_\beta A}{a^3} \right) \frac{\partial^2 F(A, B)}{\partial A \partial B} \right] \frac{\partial^2 F(A, B)}{\partial A \partial B} \left( \frac{\partial^2 F(A, B)}{\partial B^2} \right)^{-1} \, ,\nonumber \\ \lambda_3 &=& N u_3 \equiv \frac{N}{3} \left( B - \frac{9\pi_\beta A}{a^3} \right) \, ,\nonumber \\ \lambda_4 &=& N u_4 \equiv - \frac{N}{3} \left[ \frac{2}{a}\sum_{i=1}^3 \pi^{ii} + \alpha + \left( B - \frac{9\pi_\beta A}{a^3} \right) \frac{\partial^2 F(A, B)}{\partial A \partial B} \right] \left( \frac{\partial^2 F(A, B)}{\partial B^2} \right)^{-1} \, . \end{eqnarray} The multiplier $\lambda_1$ is arbitrary, as is the non-dynamical variable $N$ that also is a multiplier in the Hamiltonian \eqref{H_T_FRW} with \eqref{H_as_constraint}. The total Hamiltonian \eqref{H_T_FRW} can be written as a sum of two first-class constraints multiplied by the two arbitrary time-dependent multipliers $N$ and $\lambda_1$: \begin{equation} H_T = N H_0 + \lambda_1 \Phi_1 \, ,\end{equation} where we have defined the first-class Hamiltonian constraint by \begin{equation} \label{H_0_FRW} H_0 = \Phi_0 + \sum_{n=2}^4 \int\mathrm{d}^3 \vect{x}\, u_n(\vect{x}) \Phi_n(\vect{x}) \, , \end{equation} with the fields $u_n$ ($n=2,3,4$) given by \eqref{lambda234_solved}. 
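The solution \eqref{lambda234_solved} can be cross-checked numerically: inserting $\lambda_2, \lambda_3, \lambda_4$ back into \eqref{t_constraint1_FRW}--\eqref{t_constraint3_FRW}, the residuals must vanish. All numerical values below, including the second derivatives of $F$, are arbitrary illustrative choices.

```python
# Numerical sanity check that the multipliers of eq. (lambda234_solved)
# solve the consistency conditions dot(Phi_5) = dot(Phi_6) = dot(Phi_7) = 0.
# All sample values are arbitrary (generic, nonzero second derivatives of F).

N, a, A, B = 1.7, 1.2, 0.6, -0.8
pi_beta, tr_pi, alpha = 0.9, 0.4, 0.3            # tr_pi = sum_i pi^{ii}
FAA, FAB, FBB = 1.1, -0.5, 2.3                   # d^2F/dA^2, d^2F/dAdB, d^2F/dB^2

lam3 = (N/3.0)*(B - 9.0*pi_beta*A/a**3)
lam4 = -(N/3.0)*(2.0*tr_pi/a + alpha
                 + (B - 9.0*pi_beta*A/a**3)*FAB)/FBB
lam2 = -(N/3.0)*(B - 9.0*pi_beta*A/a**3)*FAA \
       + (N/3.0)*(2.0*tr_pi/a + alpha
                  + (B - 9.0*pi_beta*A/a**3)*FAB)*FAB/FBB

# residuals of eqs. (t_constraint1_FRW)-(t_constraint3_FRW)
res5 = N*(-a**3*B/3.0 + 3.0*pi_beta*A) + lam3*a**3
res6 = lam2 + lam3*FAA + lam4*FAB
res7 = (N/3.0)*(2.0*tr_pi/a + alpha) + lam3*FAB + lam4*FBB

assert abs(res5) < 1e-12 and abs(res6) < 1e-12 and abs(res7) < 1e-12
```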
It is easy to see that $\Phi_1 = \pi_N$ is first-class, since it clearly has a vanishing Poisson bracket with every constraint. From \eqref{p_time_evo} and \eqref{s_time_evo} we see that the sum of constraints $H_0$ is first-class by construction. Note that \eqref{H_0_FRW} is a combination of secondary and primary constraints. Usually a secondary first-class constraint would require us to define an extended Hamiltonian where the constraint would be added with an additional arbitrary multiplier. In this case, however, that would only lead to a redefinition of the multiplier $N$, and such a change '$N \rightarrow N + \text{an arbitrary function of time}$' does not bring anything new to the description. As always, the first-class constraints are associated with the gauge symmetries of the system \cite{Cabo:1993}. The first-class constraints $H_0$ and $\Phi_1$ generate the (gauge) transformations that do not change the physical state of the system. The constraints $\chi_k = (\Phi_2, \Phi_3, \Phi_4, \Phi_5, \Phi_6, \Phi_7)$ form the set of second-class constraints of the system.\footnote{The index of $\chi_k$ runs over $k=1,2,\ldots,6$, so that $\chi_k = \Phi_{k+1}$.} For details on the classification and representation of second-class constraints, one can see \cite{Chaichian:1994}. 
The Poisson brackets of the second-class constraints define the matrix: \begin{equation} C_{kl}(\vect{x}, \vect{y}) \equiv \{ \chi_k(\vect{x}), \chi_l(\vect{y}) \} = C_{kl}(\vect{x}) \delta(\vect{x}-\vect{y}) \, , \end{equation} where \begin{equation} C_{kl}(\vect{x}) = \left(\begin{array}{cccccc} 0 & 0 & 0 & 0 & - 1 & 0 \\ 0 & 0 & 0 & - a^3 & - F_{A^2} & - F_{AB} \\ 0 & 0 & 0 & 0 & - F_{AB} & - F_{B^2} \\ 0 & a^3 & 0 & 0 & 0 & \frac{1}{3} \\ 1 & F_{A^2} & F_{AB} & 0 & 0 & 0 \\ 0 & F_{AB} & F_{B^2} & - \frac{1}{3} & 0 & 0 \end{array}\right) \end{equation} and we denote \begin{equation} F_{A^2} \equiv \frac{\partial^2 F(A, B)}{\partial A^2} \, ,\quad F_{AB} \equiv \frac{\partial^2 F(A, B)}{\partial A \partial B} \, ,\quad F_{B^2} \equiv \frac{\partial^2 F(A, B)}{\partial B^2} \, .\label{s_p_derivatives_of_F} \end{equation} This matrix has the inverse \begin{equation} C^{kl}(\vect{x}, \vect{y}) = C^{kl}(\vect{x}) \delta(\vect{x}-\vect{y}) \, , \end{equation} \begin{equation} \label{inverse_of_C} C^{kl}(\vect{x}) = \left(\begin{array}{cccccc} 0 & \frac{F_{AB}}{3a^3 F_{B^2}} & - \frac{F_{A^2}}{3a^3 F_{B^2}} & \frac{F_{AB}^2-F_{A^2}F_{B^2}}{a^3 F_{B^2}} & 1 & - \frac{F_{AB}}{F_{B^2}} \\ - \frac{F_{AB}}{3a^3 F_{B^2}} & 0 & \frac{1}{3a^3 F_{B^2}} & \frac{1}{a^3} & 0 & 0 \\ \frac{F_{A^2}}{3a^3 F_{B^2}} & - \frac{1}{3a^3 F_{B^2}} & 0 & - \frac{F_{AB}}{a^3 F_{B^2}} & 0 & \frac{1}{F_{B^2}} \\ \frac{F_{A^2}F_{B^2}-F_{AB}^2}{a^3 F_{B^2}} & - \frac{1}{a^3} & \frac{F_{AB}}{a^3 F_{B^2}} & 0 & 0 & 0 \\ - 1 & 0 & 0 & 0 & 0 & 0 \\ \frac{F_{AB}}{F_{B^2}} & 0 & - \frac{1}{F_{B^2}} & 0 & 0 & 0 \end{array}\right) \, , \end{equation} which satisfies \begin{equation} \int \mathrm{d}^3 \vect{z}\, C_{kl}(\vect{x}, \vect{z}) C^{lm}(\vect{z}, \vect{y}) = C_{kl}(\vect{x}) C^{lm}(\vect{x}) \delta(\vect{x}-\vect{y}) = \delta_k^m \delta(\vect{x}-\vect{y}) \, . \end{equation} Now it is possible to impose the second-class constraints $\chi_k$ by replacing the Poisson bracket with the Dirac bracket. 
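Before proceeding, the inverse \eqref{inverse_of_C} can be verified directly: for generic sample values of $a^3$ and of the second derivatives \eqref{s_p_derivatives_of_F} (assumed nonzero, in particular $F_{B^2}\neq 0$), the product $C_{kl}C^{lm}$ should give the identity.

```python
# Numerical check that the matrix of eq. (inverse_of_C) is the inverse of
# C_{kl}. Sample values of a^3 and the second derivatives of F are arbitrary.

a3 = 1.3**3                    # a^3
FAA, FAB, FBB = 0.7, -1.1, 2.9

C = [
    [0, 0, 0, 0, -1, 0],
    [0, 0, 0, -a3, -FAA, -FAB],
    [0, 0, 0, 0, -FAB, -FBB],
    [0, a3, 0, 0, 0, 1/3],
    [1, FAA, FAB, 0, 0, 0],
    [0, FAB, FBB, -1/3, 0, 0],
]
Cinv = [
    [0, FAB/(3*a3*FBB), -FAA/(3*a3*FBB), (FAB**2 - FAA*FBB)/(a3*FBB), 1, -FAB/FBB],
    [-FAB/(3*a3*FBB), 0, 1/(3*a3*FBB), 1/a3, 0, 0],
    [FAA/(3*a3*FBB), -1/(3*a3*FBB), 0, -FAB/(a3*FBB), 0, 1/FBB],
    [(FAA*FBB - FAB**2)/(a3*FBB), -1/a3, FAB/(a3*FBB), 0, 0, 0],
    [-1, 0, 0, 0, 0, 0],
    [FAB/FBB, 0, -1/FBB, 0, 0, 0],
]

prod = [[sum(C[i][k]*Cinv[k][j] for k in range(6)) for j in range(6)]
        for i in range(6)]
for i in range(6):
    for j in range(6):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```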
For any two functions or functionals $f$ and $h$ of the canonical variables, the Dirac bracket is defined by \begin{equation} \label{DB_FRW} \{ f(\vect{x}), h(\vect{y}) \}_\mr{DB} = \{ f(\vect{x}), h(\vect{y}) \} - \int \mathrm{d}^3 \vect{z} \mathrm{d}^3 \vect{z'} \{ f(\vect{x}), \chi_k(\vect{z}) \} C^{kl}(\vect{z}, \vect{z'}) \{ \chi_l(\vect{z'}), h(\vect{y}) \} \, . \end{equation} The Dirac bracket fully takes into account how the second-class constraints impose relations between the canonical variables. Therefore it enables us to set these constraints to vanish strongly, $\chi_k(\vect{x}) = 0$. So we have the identities \begin{equation} \pi_\alpha = \pi_A = \pi_B = 0 \, ,\quad A = \frac{\pi_\beta}{3a^3} \end{equation} and \begin{equation} \label{alpha_and_beta} \alpha = - \left. \frac{\partial F(A, B)}{\partial A} \right|_{A = \frac{\pi_\beta}{3a^3}} \, ,\quad \beta = - \frac{\partial F(\frac{\pi_\beta}{3a^3}, B)}{\partial B} \, . \end{equation} When the function $F$ is known, from \eqref{alpha_and_beta} we can solve for the variable $B$ in terms of $\beta$ and $\frac{\pi_\beta}{3a^3} = \frac{\pi_\beta}{3\sqrt{g}}$: \begin{equation} \label{Psi} B = \tilde{B} \left(\beta, \frac{\pi_\beta}{3a^3}\right)\, . \end{equation} Then $\alpha$ can also be solved: \begin{equation} \label{alpha} \alpha = - \left. \frac{\partial F\left(A, \tilde{B} \left(\beta, \frac{\pi_\beta}{3a^3}\right)\right)}{\partial A} \right|_{A = \frac{\pi_\beta}{3a^3}} \, . \end{equation} Introducing these strong constraints into the Hamiltonian gives \begin{equation} \mc{H} = \frac{2\pi_\beta}{3a} \sum_{i=1}^3 \pi^{ii} - a^3 \left[ \beta\, \tilde{B} \left(\beta, \frac{\pi_\beta}{3a^3}\right) + F\left(\frac{\pi_\beta}{3a^3}, \tilde{B} \left(\beta, \frac{\pi_\beta}{3a^3}\right)\right) \right] \, .
\end{equation} The first-class Hamiltonian \eqref{H_0_FRW} reduces to $H_0 = \Phi_0$ and the total Hamiltonian becomes \begin{equation} \label{H_T_final} H_T = N\Phi_0 + \lambda_1 \Phi_1 = N \int\mathrm{d}^3 \vect{x} \mc{H} + \lambda_1 \pi_N \, . \end{equation} The canonical variables are $N, \pi_N, \g_{ij}, \pi^{ij}, \beta, \pi_\beta$. In other words, $\alpha, A, B$ and their conjugated momenta have been eliminated. In order to obtain the equations of motion, \begin{equation} \dot{f} = \{ f, H_T \}_\mr{DB} = N \{ f, \Phi_0 \}_\mr{DB} + \lambda_1 \{ f, \pi_N \}_\mr{DB} \, , \end{equation} for the canonical variables we have to work out all the Dirac brackets \eqref{DB_FRW} between the variables. We find that the Dirac bracket \eqref{DB_FRW} reduces to the Poisson bracket \eqref{PB} for all the canonical variables $N, \pi_N, \g_{ij}, \pi^{ij}, \beta, \pi_\beta$, and consequently for any functions of these variables. Of the first pair of variables, $N$ is arbitrary and $\pi_N$ does not evolve due to the equations of motion: \begin{eqnarray} && \dot{N} = \{ N, H_T \}_\mr{DB} = \lambda_1 \{ N, \pi_N \} = \lambda_1 \, ,\nonumber \\ && \dot{\pi}_N = \{ \pi_N, H_T \}_\mr{DB} = \{ \pi_N, N \} \int\mathrm{d}^3 \vect{x} \mc{H} = - \int\mathrm{d}^3 \vect{x} \mc{H} \approx 0 \, , \end{eqnarray} where, as before, $\lambda_1$ is an arbitrary function of time. For the spatial metric we get \begin{equation} \dg_{ij} = \{ \g_{ij}, H_T \}_\mr{DB} = \frac{2N\pi_\beta}{3a} \delta_{ij} \, ,\end{equation} where $\dg_{ij}=2a\dot{a}\delta_{ij}$. Solving for $a$ gives \begin{equation} a(t)^3 = a(t_0)^3 + \int_{t_0}^t \mathrm{d} t'\, N\pi_\beta \, .\end{equation} Hence we need $\pi_\beta$ in order to get $a(t)$. This reveals that $\pi_\beta$ does not depend on the spatial coordinate $\vect{x}$, because both $a$ and $N$ depend only on the time coordinate $t$.
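Equivalently, $\dg_{ij} = 2a\dot{a}\delta_{ij}$ gives $\dot{a} = N\pi_\beta/(3a^2)$, i.e. $\mathrm{d}(a^3)/\mathrm{d}t = N\pi_\beta$, which is the quadrature above. The sketch below checks this numerically for an arbitrarily chosen lapse; a constant $\pi_\beta$ is assumed purely to make the integral elementary (in general $\pi_\beta$ evolves).

```python
# Consistency check of the quadrature a(t)^3 = a(t0)^3 + int_{t0}^t N pi_beta dt'
# by direct RK4 integration of adot = N(t) pi_beta / (3 a^2).
# N(t) and pi_beta below are illustrative assumptions.

pi_beta = 0.9
a0, t0, t1 = 1.0, 0.0, 2.0

def N(t):
    return 1.0 + 0.1*t           # arbitrary lapse, a function of time only

def adot(t, a):
    return N(t)*pi_beta/(3.0*a*a)

a, t, h = a0, t0, 1e-3           # standard fixed-step RK4
while t < t1 - 1e-12:
    k1 = adot(t, a)
    k2 = adot(t + h/2, a + h*k1/2)
    k3 = adot(t + h/2, a + h*k2/2)
    k4 = adot(t + h, a + h*k3)
    a += h*(k1 + 2*k2 + 2*k3 + k4)/6.0
    t += h

# closed form with int_{0}^{2} N dt' = t1 + 0.05 t1^2
closed_form = a0**3 + pi_beta*(t1 - t0 + 0.05*(t1**2 - t0**2))
assert abs(a**3 - closed_form) < 1e-8
```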
For the conjugated momenta we obtain the equations of motion \begin{equation} \dot{\pi}^{ij} = \{ \pi^{ij}, H_T \}_\mr{DB} = \delta^{ij} N \left( \frac{\pi_\beta}{3a^3} \sum_{k=1}^3 \pi^{kk} + \frac{3a}{2} \left[ \beta \tilde{B} + F(A, \tilde{B}) \right] - \frac{\pi_\beta}{2a^2} \frac{\partial F(A, \tilde{B})}{\partial A} \right)_{A=\frac{\pi_\beta}{3a^3}} \, , \end{equation} where the arguments of $\tilde{B}$ are omitted for brevity, $\tilde{B} \equiv \tilde{B}(\beta, A) = \tilde{B}\left(\beta, \frac{\pi_\beta}{3a^3}\right)$, as they will be in the next equation. For the variable $\beta$ we obtain the equation of motion \begin{equation} \dot{\beta} = \{ \beta, H_T \}_\mr{DB} = \frac{N}{3} \left( \frac{2}{a} \sum_{i=1}^3 \pi^{ii} - \left. \frac{\partial F(A, \tilde{B})}{\partial A} \right|_{A=\frac{\pi_\beta}{3a^3}} \right) \, .\end{equation} For its conjugated momentum $\pi_\beta$ we obtain the equation of motion \begin{equation} \dot{\pi}_\beta = \{ \pi_\beta, H_T \}_\mr{DB} = N a^3 \tilde{B} \left(\beta, \frac{\pi_\beta}{3a^3}\right) \, . \end{equation} Further progress in the study of dynamics practically requires one to specify the form of the function $F$, and then solve \eqref{Psi} from \eqref{alpha_and_beta}. We can conclude that when the second partial derivatives \eqref{s_p_derivatives_of_F} of the function $F$ are non-zero, the proposed general action \eqref{HLF28} defines a consistent constrained theory. Let us then briefly consider the cases when some of the second partial derivatives \eqref{s_p_derivatives_of_F} of the function $F$ are zero. In such cases the tertiary constraints \eqref{t_constraint1_FRW}--\eqref{t_constraint3_FRW} are no longer mere restrictions on the Lagrange multipliers, but in addition impose constraints on the canonical variables. As an example we consider the case when $F_{B^2}=0$ and $F_{A^2}\neq 0, F_{AB}\neq 0$.
Then we obtain the tertiary constraint \begin{equation} \Phi_8 \equiv \frac{1}{3} \left( \frac{2}{a}\sum_{i=1}^3 \pi^{ii} + \alpha \right) + \left(\frac{B}{3} - \frac{3\pi_\beta A}{a^3}\right) F_{AB} \approx 0 \, , \end{equation} and solve two of the Lagrange multipliers, say $\lambda_3$ and $\lambda_4$: \begin{equation} \lambda_3 = N \left(\frac{B}{3} - \frac{3\pi_\beta A}{a^3}\right)\, ,\quad \lambda_4 = - \frac{1}{F_{AB}} \left[ \lambda_2 + N\left(\frac{B}{3} - \frac{3\pi_\beta A}{a^3}\right) F_{A^2} \right]\, , \end{equation} where the third multiplier $\lambda_2$ is arbitrary. The consistency condition $\dot{\Phi}_8 \approx 0$ of the tertiary constraint $\Phi_8$ imposes a quartic constraint on the canonical variables, because $\dot{\Phi}_8$ turns out to be independent of the Lagrange multiplier $\lambda_2$ and non-vanishing due to the constraints established so far. Further constraints may follow from the consistency condition of the quartic constraint. This has to be checked explicitly after choosing the form of the function $F$. These additional constraints are a serious threat to the viability and consistency of the action, since they may eliminate the physical degrees of freedom. If we also have $F_{AB}=0$, we would solve the Lagrange multipliers $\lambda_2, \lambda_3$ from \eqref{t_constraint1_FRW}--\eqref{t_constraint3_FRW} and obtain a tertiary constraint that restricts the field $\beta$ to be a constant, $\dot{\beta} \approx 0$. Thus in the latter case we should not have introduced the auxiliary fields $\beta$ and $B$ in the first place, since $F$ in the action is already linear in its second argument. We do not discuss the case $F_{A^2}=0$, because it appears to have very little if any practical application.
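The structure of this degenerate case can also be checked numerically: with $F_{B^2}=0$, the given $\lambda_3$ and $\lambda_4$ solve \eqref{t_constraint1_FRW} and \eqref{t_constraint2_FRW} for any $\lambda_2$, while \eqref{t_constraint3_FRW} reduces to $N\Phi_8$, independent of the multipliers. All numerical values are illustrative assumptions.

```python
# Illustration of the degenerate case F_{B^2} = 0: lambda_3, lambda_4 as given
# solve dot(Phi_5) = dot(Phi_6) = 0, while dot(Phi_7) equals N*Phi_8
# independently of the remaining arbitrary multiplier lambda_2.

N, a, A, B = 1.4, 1.1, 0.5, -0.7
pi_beta, tr_pi, alpha = 0.8, 0.3, 0.2
FAA, FAB, FBB = 0.9, -1.3, 0.0        # F_{B^2} = 0, the others nonzero
lam2 = 0.37                           # arbitrary: stays undetermined

lam3 = N*(B/3.0 - 3.0*pi_beta*A/a**3)
lam4 = -(lam2 + N*(B/3.0 - 3.0*pi_beta*A/a**3)*FAA)/FAB

res5 = N*(-a**3*B/3.0 + 3.0*pi_beta*A) + lam3*a**3
res6 = lam2 + lam3*FAA + lam4*FAB
res7 = (N/3.0)*(2.0*tr_pi/a + alpha) + lam3*FAB + lam4*FBB

Phi8 = (1.0/3.0)*(2.0*tr_pi/a + alpha) + (B/3.0 - 3.0*pi_beta*A/a**3)*FAB
assert abs(res5) < 1e-12
assert abs(res6) < 1e-12
assert abs(res7 - N*Phi8) < 1e-12
assert abs(Phi8) > 1e-3   # generically nonzero: a genuine tertiary constraint
```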
As a specific example of the above general theory, one can consider the FRW cosmology in the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity studied in \cite{Chaichian:2010yi} and its further generalization considered in the present work, as action \eqref{HLF11}. However, this analysis can be used to study FRW cosmology in any theory with an action of the general form \eqref{HLF26}. Moreover, the methods presented in this section can be used to analyze any action of the form (\ref{HLF26}) in a general way, without assuming any particular space-time. The proposed modified Ho\v{r}ava-Lifshitz $F(R)$ gravity will be studied in the next section. \section{Hamiltonian analysis of the $F(\tilde{R})$ gravity} \label{sec:4} Let us then consider the Hamiltonian analysis of the proposed action (\ref{HLF11}) for the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity. The analysis is similar to the Hamiltonian analysis presented in ref.~\cite{Chaichian:2010yi}, where a special case of this theory given by the choice (\ref{HLFrg0}) was proposed (see also the analysis of ref.~\cite{Kluson:2010xx}). This special case with the further restriction to the parameter value $\mu=0$ has been proposed and analyzed in ref.~\cite{Kluson:2009xx}. In this section we generalize the analysis of ref.~\cite{Chaichian:2010yi}. By introducing two auxiliary fields $A$ and $B$ we can write the action (\ref{HLF11}) in a form that is linear in $\tilde{R}$: \begin{equation} S_{F(\tilde{R})} = \int\mathrm{d}^4 x \sqrt{\g} N \left[ B(\tilde{R} - A) + F(A) \right] \, .
\label{action_aux} \end{equation} Then we can write $\tilde{R}$ as \begin{equation} \tilde R = K_{ij} \mc{G}^{ijkl} K_{kl} + 2\mu \nabla_\mu \left( n^\mu K \right) - \frac{2\mu}{N} \g[ij] \nabla^{(3)}_i \nabla^{(3)}_j N - \mc{L}^{(3)}_R \left(\g_{ij}\right)\ .\label{tildeR_2nd} \end{equation} Introducing (\ref{tildeR_2nd}) into (\ref{action_aux}) and performing integrations by parts yields the action \begin{eqnarray} S_{F(\tilde{R})} &=& \int\mathrm{d} t \mathrm{d}^3 \vect{x} \sqrt{\g} \Bigl\{ N \left[ B \left( K_{ij} \mc{G}^{ijkl} K_{kl} - \mc{L}^{(3)}_R \left(\g_{ij}\right) - A \right) + F(A) \right] \nonumber \\ &&\qquad\qquad\qquad \left. - 2\mu K \left( \dot{B} - N^i \partial_i B \right) - 2\mu N \g[ij] \nabla^{(3)}_i \nabla^{(3)}_j B \right\} \, , \label{action_aux_final} \end{eqnarray} where the integral is taken over the union $\mc{U}$ of the $t=\text{constant}$ hypersurfaces $\Sigma_t$ with $t$ ranging over some interval in $\mathbb{R}$. We assume that the boundary integrals on $\partial\mc{U}$ and $\partial\Sigma_t$ vanish. The difference compared to the action studied in ref.~\cite{Chaichian:2010yi} is that the potential part $\mc{L}^{(3)}_R(\g_{ij})$ may have any form that satisfies the correct scaling property under (\ref{HLF7b}). In other words, it is not necessarily defined by (\ref{HLFrg0}) and the detailed balance condition (\ref{HLF7}). However, due to the projectability condition $N = N(t)$ the specific form of $\mc{L}^{(3)}_R(\g_{ij})$ has very little effect on our analysis. Indeed, the analysis of ref.~\cite{Chaichian:2010yi} carries over to the present more general case by replacing the right-hand side of (\ref{HLFrg0}) with its left-hand side. Therefore we only present the main points of the generalized analysis. In the Hamiltonian formalism the field variables $g_{ij}$, $N$, $N_i$, $A$ and $B$ have the canonically conjugated momenta $\pi^{ij}$, $\pi_N$, $\pi^i$, $\pi_A$ and $\pi_B$, respectively.
For the spatial metric and the field $B$ we have the momenta \begin{eqnarray} \pi^{ij} &=& \frac{\delta S_{F(\tilde R)}}{\delta \dot{g}_{ij}} = \sqrt{\g} \left[ B \mc{G}^{ijkl} K_{kl} - \frac{\mu}{N} \g[ij] \left( \dot{B} - N^i \partial_i B \right) \right]\, ,\label{metric_momenta}\\ \pi_B &=& \frac{\delta S_{F(\tilde R)}}{\delta \dot{B}} = - 2\mu \sqrt{\g} K \, .\label{pi_B} \end{eqnarray} We assume $\mu\neq 0$ so that the momentum (\ref{pi_B}) does not vanish. Because the action does not depend on the time derivative of $N$, $N^i$ or $A$, the rest of the momenta form the set of primary constraints: \begin{equation} \pi_N \approx 0\, ,\quad \pi^i(\vect{x}) \approx 0\, , \quad \pi_A(\vect{x}) \approx 0\, . \label{p_constraints} \end{equation} Due to the projectability condition, the momentum $\pi_N$ is likewise constant on $\Sigma_t$ for each $t$, $\pi_N = \pi_N(t)$. Then the Hamiltonian is found to be \begin{equation} H = \int \mathrm{d}^3 \vect{x} \left( N \mc{H}_0 + N_i \mc{H}^i \right)\, ,\label{Ha} \end{equation} where the so-called Hamiltonian constraint and the momentum constraints are \begin{eqnarray} \mc{H}_0 &=& \frac{1}{\sqrt{\g}} \left[ \frac{1}{B} \left( \g_{ik} \g_{jl} \pi^{ij} \pi^{kl} - \frac{1}{3}\left( \g_{ij} \pi^{ij} \right)^2 \right) - \frac{1}{3\mu} \g_{ij} \pi^{ij} \pi_B - \frac{1-3\lambda}{12\mu^2} B \pi_B^2 \right] \nonumber \\ && +\sqrt{\g} \left[ B \left( \mc{L}^{(3)}_R \left(\g_{ij}\right) + A \right) - F(A) + 2\mu \g[ij] \nabla^{(3)}_i \nabla^{(3)}_j B \right] \, ,\nonumber \\ \mc{H}^i &=& - 2\nabla^{(3)}_j \pi^{ij} + \g[ij] \partial_j B \pi_B \nonumber \\ &=& -2\partial_j \pi^{ij} - \g[ij] \left( 2\partial_k \g_{jl} - \partial_j \g_{kl} \right) \pi^{kl} + \g[ij] \partial_j B \pi_B \, ,\label{Hb} \end{eqnarray} respectively.
We define the total Hamiltonian by \begin{equation} H_T = H + \lambda_N \pi_N + \int \mathrm{d}^3 \vect{x} \left( \lambda_i \pi^i + \lambda_A \pi_A \right)\, , \label{H_T} \end{equation} where the primary constraints (\ref{p_constraints}) are multiplied by the Lagrange multipliers $\lambda_N$, $\lambda_i$, $\lambda_A$. The primary constraints (\ref{p_constraints}) have to be preserved under time evolution of the system. Therefore we impose the secondary constraints: \begin{equation} \Phi_0 \equiv \int \mathrm{d}^3 \vect{x} \mc{H}_0 \approx 0 \, ,\quad \Phi_S^i(\vect{x}) \equiv \mc{H}^i(\vect{x}) \approx 0 \, ,\quad \Phi_A(\vect{x}) \equiv B(\vect{x}) - F'(A(\vect{x})) \approx 0 \, .\label{s_constraints} \end{equation} Here the Hamiltonian constraint $\Phi_0$ is global and the other two, the momentum constraint $\Phi_S^i(\vect{x})$ and the constraint $\Phi_A(\vect{x})$, are local. It is convenient to introduce a globalised version of the momentum constraints: \begin{equation} \Phi_S(\xi_i) \equiv \int \mathrm{d}^3 \vect{x}\xi_i \mc{H}^i \approx 0 \, , \end{equation} where $\xi_i, i=1,2,3$ are arbitrary smearing functions. It can be shown that the momentum constraints $\Phi_S(\xi_i)$ generate the spatial diffeomorphisms for the canonical variables $B, \pi_B, \g_{ij}, \pi^{ij}$, and consequently for any function or functional constructed from these variables, while treating the variables $A, \pi_A$ as constants. The consistency of the system requires that the secondary constraints $\Phi_0$, $\Phi_S(\xi_i)$ and $\Phi_A(\vect{x})$ also be preserved under the time evolution defined by the total Hamiltonian (\ref{H_T}), which can be written in terms of the constraints as \begin{equation} H_T = N\Phi_0 + \Phi_S(N_i) + \lambda_N \pi_N + \int \mathrm{d}^3 \vect{x} \left( \lambda_i \pi^i + \lambda_A \pi_A \right)\, .
\label{H_T_as_constraints} \end{equation} The Poisson brackets for the constraints $\Phi_0$ and $\Phi_S(\xi_i)$ are \begin{equation} \{ \Phi_0, \Phi_0 \} = 0 \, ,\quad \{ \Phi_S(\xi_i), \Phi_0 \} = 0 \, , \quad \{ \Phi_S(\xi_i), \Phi_S(\eta_i) \} = \Phi_S(\xi^j \partial_j \eta_i - \eta^j \partial_j \xi_i) \approx 0 \, .\label{Phi0_PhiS_PBs} \end{equation} For the constraints $\pi_A$ and $\Phi_A(\vect{x})$ the Poisson brackets that do not vanish strongly are: \begin{eqnarray} &&\{ \pi_A(\vect{x}), \Phi_0 \} = - \sqrt{\g} \Phi_A(\vect{x}) \approx 0 \, ,\quad \{ \pi_A(\vect{x}), \Phi_A(\vect{y}) \} = F''(A(\vect{x})) \delta(\vect{x}-\vect{y}) \, ,\nonumber \\ &&\{ \Phi_0, \Phi_A(\vect{x}) \} = \frac{1}{3\mu\sqrt{\g}} \left(\g_{ij} \pi^{ij} + \frac{1-3\lambda}{2\mu}B\pi_B \right) \, , \quad \{ \Phi_S(\xi_i), \Phi_A(\vect{x}) \} = - \xi^i \partial_i B \, . \label{piA_PhiA_PBs} \end{eqnarray} Since $F''(A)=0$ would essentially reproduce the original projectable Ho\v{r}ava-Lifshitz gravity, we assume that $F''(A) \neq 0$. The constraint $\Phi_A(\vect{x})$ can be made consistent by fixing the Lagrange multiplier $\lambda_A$: \begin{equation} \lambda_A = \frac{1}{F''(A)} \left( N^i \partial_i B - \frac{N}{3\mu\sqrt{\g}} \left(\g_{ij} \pi^{ij} + \frac{1-3\lambda}{2\mu}B\pi_B \right) \right)\, .\label{lambda_A} \end{equation} Now all the constraints of the system are consistent under dynamics. According to the Poisson brackets (\ref{Phi0_PhiS_PBs})--(\ref{piA_PhiA_PBs}) between the constraints, we can set the second-class constraints $\pi_A(\vect{x})$ and $\Phi_A(\vect{x})$ to vanish strongly, and as a result turn the Hamiltonian constraint $\Phi_0$ and the momentum constraint $\Phi_S(\xi_i)$ into first-class constraints, by replacing the Poisson bracket with the Dirac bracket. It turns out that the Dirac bracket reduces to the Poisson bracket for any functions of the canonical variables. Assuming we can solve the constraint $\Phi_A(\vect{x})=0$, i.e.
$B=F'(A)$, for $A=\tilde{A}(B)$, where $\tilde{A}$ is the inverse of the function $F'$, we can eliminate the variables $A$ and $\pi_A$. Thus the final variables of the system are $\g_{ij}, \pi^{ij}, B, \pi_B$. The lapse $N$ and the shift vector $N_i$, together with $\lambda_N$ and $\lambda_i$, are non-dynamical multipliers. Finally the total Hamiltonian is the sum of the first-class constraints \begin{equation} H_T = N\Phi_0 + \Phi_S(N_i) + \lambda_N \pi_N + \int\mathrm{d}^3 \vect{x} \lambda_i \pi^i \, .\label{H_T_sum_of_first-class} \end{equation} We conclude that the proposed action (\ref{HLF11}) of the more general modified Ho\v{r}ava-Lifshitz $F(R)$ gravity likewise defines a consistent constrained theory when the projectability condition is postulated. For additional details and discussion on the analysis see ref.~\cite{Chaichian:2010yi}. \section{Hamiltonian analysis of the $F(\tilde{R})$ gravity in fixed gauge} \label{sec:5} Let us then analyze the action \eqref{HLF11} when the gauge is fixed by \eqref{HLFrg2}, so that we obtain the action \eqref{HLFrg1b}. First we find the momenta canonically conjugated to the generalized coordinates $\bg_{ij}$ and $\varphi$ of the action \eqref{HLFrg1b}. For the fields $\varphi$ and $\bg_{ij}$ we find the momenta \begin{equation} \label{pi_varphi} \pi_\varphi = \frac{\delta S_{F(\tilde{R})}}{\delta \dot{\varphi}} = \frac{\sqrt{\bg}}{4\kappa^2} \left( - (1-3\lambda+3\mu) \bg[ij] \dbg_{ij} + 3 (1-3\lambda+6\mu) \dot{\varphi} \right) \end{equation} and \begin{equation} \label{bpi} \bar{\pi}^{ij} = \frac{\delta S_{F(\tilde{R})}}{\delta \bg_{ij}} = \frac{\sqrt{\bg}}{4\kappa^2} \left( \bg[ik] \bg[jl] \dbg_{kl} - \lambda \bg[ij] \bg[kl] \dbg_{kl} - (1-3\lambda+3\mu) \bg[ij] \dot{\varphi} \right)\,, \end{equation} respectively.
In the following analysis we will first assume \begin{equation} 1-3\lambda+3\mu \neq 0 \, ,\quad 1-3\lambda+6\mu \neq 0 \, ,\quad \mu \neq 0 \, , \end{equation} so that the kinetic term for $\varphi$ does not vanish. Later we will consider the special cases where these conditions do not hold. First we solve $\dot{\varphi}$ from \eqref{pi_varphi}, \begin{equation} \label{dotvarphi} \dot{\varphi} = \frac{\frac{4\kappa^2}{\sqrt{\bg}} \pi_\varphi + (1-3\lambda+3\mu) \bg[ij] \dbg_{ij}}{3 (1-3\lambda+6\mu)}\,, \end{equation} and introduce it into \eqref{bpi} \begin{equation} \label{bpi2} \bar{\pi}^{ij} = \frac{\sqrt{\bg}}{4\kappa^2} \left[ \bg[ik] \bg[jl] \dbg_{kl} - \left(\frac{1}{3} + \frac{3\mu^2}{1-3\lambda+6\mu}\right) \bg[ij] \bg[kl] \dbg_{kl} \right] - \frac{1-3\lambda+3\mu}{3 (1-3\lambda+6\mu)} \bg[ij] \pi_\varphi \, . \end{equation} Then we find the velocities in terms of the coordinates and momenta. First we contract \eqref{bpi2} by $\bg_{ij}$ and solve for \begin{equation} \bg[ij] \dbg_{ij} = - \frac{4\kappa^2}{\sqrt{\bg}} \left( \frac{1-3\lambda+6\mu}{9\mu^2} \bg_{ij} \bar{\pi}^{ij} + \frac{1-3\lambda+3\mu}{9\mu^2} \pi_\varphi \right) \, . \end{equation} This is inserted back into \eqref{bpi2} as well as into \eqref{dotvarphi}, which enables us to obtain the velocities in terms of the canonical variables: \begin{gather} \label{varphi_eom} \dot{\varphi} = \frac{4\kappa^2}{\sqrt{\bg}} \left( \frac{3\lambda-1}{27\mu^2} \pi_\varphi - \frac{1-3\lambda+3\mu}{27\mu^2} \bg_{ij} \bar{\pi}^{ij} \right) \, ,\\ \label{bg_eom} \dbg_{ij} = \frac{4\kappa^2}{\sqrt{\bg}} \left\{ \bg_{ik} \bg_{jl} \bar{\pi}^{kl} - \bg_{ij} \left[ \left(\frac{1-3\lambda+6\mu}{27\mu^2}+\frac{1}{3}\right) \bg_{kl} \bar{\pi}^{kl} + \frac{1-3\lambda+3\mu}{27\mu^2} \pi_\varphi \right] \right\} \, . \end{gather} Thus there are no primary constraints. It is expected that there are no first-class constraints, since the gauge has been completely fixed by setting $N=1, N^i=0$. 
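The inversion \eqref{varphi_eom}--\eqref{bg_eom} can be checked numerically by computing the momenta from \eqref{pi_varphi}--\eqref{bpi} for given velocities and then recovering the velocities. The sketch below assumes a diagonal spatial metric and arbitrary sample parameters, purely to keep it short.

```python
# Check that eqs. (varphi_eom)-(bg_eom) invert the momenta (pi_varphi)-(bpi).
# The diagonal metric and all numerical values are illustrative assumptions.

lam, mu, kappa2 = 0.8, 0.4, 1.0
c3 = 1.0 - 3.0*lam + 3.0*mu             # 1 - 3 lambda + 3 mu
c6 = 1.0 - 3.0*lam + 6.0*mu             # 1 - 3 lambda + 6 mu

g = [1.0, 2.0, 0.5]                     # diagonal bar-g_{ij}
gdot = [[0.3, 0.1, -0.2], [0.1, -0.4, 0.25], [-0.2, 0.25, 0.6]]
phidot = 0.7
u = (g[0]*g[1]*g[2])**0.5/(4.0*kappa2)  # sqrt(bar-g)/(4 kappa^2)

tr_gdot = sum(gdot[i][i]/g[i] for i in range(3))        # g^{ij} gdot_{ij}
pi_phi = u*(-c3*tr_gdot + 3.0*c6*phidot)
pi = [[u*(gdot[i][j]/(g[i]*g[j])
          - ((lam*tr_gdot + c3*phidot)/g[i] if i == j else 0.0))
      for j in range(3)] for i in range(3)]
tr_pi = sum(g[i]*pi[i][i] for i in range(3))            # g_{ij} pi^{ij}

phidot_rec = ((3.0*lam - 1.0)*pi_phi - c3*tr_pi)/(27.0*mu**2)/u
gdot_rec = [[(pi[i][j]*g[i]*g[j]
              - (g[i]*((c6/(27.0*mu**2) + 1.0/3.0)*tr_pi
                       + c3/(27.0*mu**2)*pi_phi) if i == j else 0.0))/u
             for j in range(3)] for i in range(3)]

assert abs(phidot_rec - phidot) < 1e-10
assert all(abs(gdot_rec[i][j] - gdot[i][j]) < 1e-10
           for i in range(3) for j in range(3))
```

The determinant of the $2\times 2$ kinetic block works out to $27\mu^2$, which is why the inversion above requires only $\mu \neq 0$ when the stated non-degeneracy conditions hold.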
Due to the non-vanishing kinetic terms of $\varphi$, there are no second-class constraints either. The Hamiltonian is defined by \begin{equation} H = \int\mathrm{d}^3 \vect{x} \left( \bar{\pi}^{ij} \dbg_{ij} + \pi_\varphi \dot{\varphi} \right) - L \, . \end{equation} After a lengthy algebraic calculation we find \begin{eqnarray} \label{H_fixed_gauge} H &=& \int\mathrm{d}^3 \vect{x} \Biggl\{ \frac{4\kappa^2}{\sqrt{\bg}} \biggl[ \frac{1}{2} \bg_{ik} \bg_{jl} \bar{\pi}^{ij} \bar{\pi}^{kl} - \left(\frac{1-3\lambda+6\mu}{54\mu^2} +\frac{1}{6}\right) \left( \bg_{ij} \bar{\pi}^{ij} \right)^2 \nonumber \\ &&\qquad\quad - \frac{1-3\lambda+3\mu}{27\mu^2} \bg_{ij} \bar{\pi}^{ij} \pi_\varphi + \frac{3\lambda-1}{54\mu^2} \pi_\varphi^2 \biggr] - \frac{\sqrt{\bg}}{2\kappa^2} \left[ \bar{\mathcal{L}}^{(3)}_R \left(\bg_{ij}, \varphi\right) - V(\varphi) \right] \Biggr\} \, . \end{eqnarray} In fact we find that the Hamiltonian \eqref{H_fixed_gauge} is correct for any parameters $\lambda$ and $\mu$ as long as it is defined, i.e. when $\mu \neq 0$. This can be seen by considering the two cases when the kinetic cross-term vanishes ($\dot{\varphi}$ and $\dbg_{ij}$ decouple), $1-3\lambda+3\mu = 0, \mu \neq 0$, and when the kinetic $\dot{\varphi}^2$ term vanishes, $1-3\lambda+6\mu = 0, \mu \neq 0$, separately. Even the formulas \eqref{varphi_eom}--\eqref{bg_eom} hold in these cases, though the details of their calculation are quite different. Note that the vanishing of both kinetic terms of $\varphi$ implies $\mu = 0$ and $\lambda=1/3$. The Poisson bracket is postulated by (equal time $t$ is understood) \begin{equation} \{ \bg_{ij}(\vect{x}), \bar{\pi}^{kl}(\vect{y}) \} = \frac{1}{2} \left( \delta_i^k \delta_j^l + \delta_i^l \delta_j^k \right) \delta(\vect{x} - \vect{y})\, ,\quad \{ \varphi(\vect{x}), \pi_\varphi(\vect{y}) \} = \delta(\vect{x} - \vect{y}) \, , \end{equation} with all the other Poisson brackets vanishing. Now we can work out the Hamiltonian equations of motion.
Because there are no constraints, the Hamiltonian \eqref{H_fixed_gauge} defines the dynamics of any function or functional $f$ of the variables $\bg_{ij}, \bar{\pi}^{ij}, \varphi, \pi_\varphi$ by: \begin{equation} \dot{f} = \{ f, H \} \, . \end{equation} For the generalized coordinates $\bg_{ij}$ and $\varphi$ we obtain the equations of motion \eqref{bg_eom} and \eqref{varphi_eom} respectively. For the momenta $\bar{\pi}^{ij}$ we obtain \begin{eqnarray} \label{dot_bpi} \dot{\bar{\pi}}^{ij} &=& \frac{4\kappa^2}{\sqrt{\bg}} \biggl[ \bg[ij] \left( \frac{1}{4} \bg_{km} \bg_{ln} \bar{\pi}^{kl} \bar{\pi}^{mn} - \frac{a}{4} \left( \bg_{kl} \bar{\pi}^{kl} \right)^2 - \frac{b}{2} \bg_{kl} \bar{\pi}^{kl} \pi_\varphi + \frac{c}{4} \pi_\varphi^2 \right)\nonumber \\ &&\qquad\qquad - \bg_{kl} \bar{\pi}^{ik} \bar{\pi}^{jl} + a \bar{\pi}^{ij} \bg_{kl} \bar{\pi}^{kl} + b \bar{\pi}^{ij} \pi_\varphi \biggr] \nonumber \\ &&+ \frac{\sqrt{\bg}}{2\kappa^2} \left( \frac{1}{2} \bg[ij] \left[ \bar{\mathcal{L}}^{(3)}_R \left(\bg_{ij}, \varphi\right) - V(\varphi) \right] + \frac{\partial \bar{\mathcal{L}}^{(3)}_R \left(\bg_{kl}, \varphi\right)}{\partial \bg_{ij}} \right) \, , \end{eqnarray} where we have introduced the constants: \begin{equation} a = \frac{1-3\lambda+6\mu}{27\mu^2}+\frac{1}{3} \, ,\quad b = \frac{1-3\lambda+3\mu}{27\mu^2} \, ,\quad c = \frac{3\lambda-1}{27\mu^2} \, . \end{equation} The equation of motion for $\pi_\varphi$ is \begin{equation} \dot{\pi}_\varphi = \frac{\sqrt{\bg}}{2\kappa^2} \left( \frac{\partial \bar{\mathcal{L}}^{(3)}_R \left(\bg_{kl}, \varphi\right)}{\partial \varphi} - 3A(\varphi) {\rm e}^{3\varphi} \right) \, , \end{equation} where the last term is obtained from the derivative of $V(\varphi)$ from \eqref{HLFrg5} \begin{equation} \frac{\mathrm{d} V(\varphi)}{\mathrm{d}\varphi} = A(\varphi) \frac{\mathrm{d} A(\varphi)}{\mathrm{d}\varphi} F''(A(\varphi)) = 3A(\varphi) F'(A(\varphi)) = 3A(\varphi) {\rm e}^{3\varphi} \, . 
\end{equation} Here we have also used the definition \eqref{varphi} of $\varphi$ and \eqref{Avarphi} of $A(\varphi)$ to calculate \begin{equation} 1 = \frac{\mathrm{d} \varphi}{\mathrm{d}\varphi} = \frac{1}{3} \frac{\frac{\mathrm{d} A(\varphi)}{\mathrm{d}\varphi} F''(A(\varphi))}{F'(A(\varphi))} \ \Rightarrow\ \frac{\mathrm{d} A(\varphi)}{\mathrm{d}\varphi} F''(A(\varphi)) = 3 F'(A(\varphi)) \, . \end{equation} In particular, the equations of motion \eqref{dot_bpi} for the momenta $\bar{\pi}^{ij}$ are rather involved. However, as always, they are first-order differential equations. The cases with $\mu=0$ are less interesting and we only consider them briefly. When $\lambda=1/3$ the field $\varphi$ is non-dynamical and hence one has the primary constraint $\pi_\varphi \approx 0$. The momenta conjugate to $\g_{ij}$ are given by \begin{equation} \bar{\pi}^{ij} = \frac{\sqrt{\bg}}{4\kappa^2} \left( \bg[ik] \bg[jl] \dbg_{kl} - \frac{1}{3}\bg[ij] \bg[kl] \dbg_{kl} \right) \, .\label{bpi_traceless} \end{equation} It has zero trace $\bg_{ij}\bar{\pi}^{ij}=0$ and can be trivially solved for $\dbg_{ij}=\frac{4\kappa^2}{\sqrt{\bg}}\bg_{ik}\bg_{jl}\bar{\pi}^{kl}$. When $\lambda\neq 1/3$, one is forced to impose the constraint $\pi_\beta=-\bg_{ij}\bar{\pi}^{ij} \approx 0$ that again leads to (\ref{bpi_traceless}) and makes $\varphi$ non-dynamical. \section{FRW cosmology in power-like models: cosmic acceleration and future singularities} \label{sec:6} We now consider the FRW cosmology of the action (\ref{HLF11}). In the spatially-flat FRW space-time (\ref{HLF8}), since the spatial curvature vanishes, $R^{(3)}_{ij}=R^{(3)}=0$, there is no contribution from $\mathcal{L}_R^{(3)}$, as it vanishes according to (\ref{HLFrg9}) or (\ref{HLFrg14}). In other words, the choice of $\mathcal{L}_R^{(3)}$ in (\ref{HLFrg9}) or (\ref{HLFrg14}) gives the same FRW cosmology.
Of course, this situation changes when one considers black holes or other solutions with non-trivial dependence on the spatial coordinates. Let us first review the spatially-flat FRW equations obtained in ref. \cite{Chaichian:2010yi}. Varying the action (\ref{HLF11}) with respect to $g^{(3)}_{ij}$ and setting $N=1$ one obtains: \begin{equation} \label{HLF13} 0 = F\left(\tilde R\right) - 2 \left(1 - 3\lambda + 3\mu \right) \left(\dot H + 3 H^2\right) F'\left(\tilde R\right) - 2\left(1 - 3\lambda \right) H \frac{\mathrm{d} F'\left(\tilde R\right)}{\mathrm{d} t} + 2\mu \frac{\mathrm{d}^2 F'\left(\tilde R\right)}{\mathrm{d} t^2} + p\, , \end{equation} where $F'$ denotes the derivative of $F$ with respect to its argument. Here, the matter contribution (the pressure $p$) is included. On the other hand, the variation over $N$ gives the global constraint: \begin{equation} \label{HLF14} 0 = \int \mathrm{d}^3 x \left[ F\left(\tilde R\right) - 6 \left\{ \left(1 - 3\lambda + 3\mu\right) H^2 + \mu \dot H \right\} F'\left(\tilde R\right) + 6 \mu H \frac{\mathrm{d} F'\left(\tilde R\right)}{\mathrm{d} t} - \rho \right]\, , \end{equation} after setting $N=1$. Here $\rho$ is the energy density of matter. It is important to stress that, because of the projectability condition $N=N(t)$, the above equation is a global constraint. If the standard conservation law is used, \begin{equation} \label{HLF15} 0= \dot \rho + 3H \left(\rho + p\right)\, , \end{equation} Eq. (\ref{HLF13}) can be integrated to give:\footnote{Note that, as already shown in \cite{Carloni:2009jc} for the standard case, the parameter $\lambda$ plays a crucial role in theories of Ho\v{r}ava-Lifshitz type. In fact, from the second equation above one realizes that this solution is physical only if $1-3 \lambda +3\mu>0$. It should also be pointed out that in Ho\v{r}ava-Lifshitz gravity the role of standard matter and its conservation properties are not well understood yet.
We will proceed with our discussion supposing that it is possible to couple matter and gravity in the same way in which one does in GR.} \begin{equation} \label{HLF16} 0 = F\left(\tilde R\right) - 6 \left\{ \left(1 - 3\lambda + 3\mu\right) H^2 + \mu \dot H \right\} F'\left(\tilde R\right) + 6 \mu H \frac{\mathrm{d} F'\left(\tilde R\right)}{\mathrm{d} t} - \rho - \frac{C}{a^3}\, . \end{equation} Here $C$ is an integration constant and can be set to zero. In \cite{Mukohyama:2009mz}, however, it has been claimed that $C$ does not necessarily need to vanish in a local region, since (\ref{HLF14}) needs to be satisfied only in the whole universe. In this sense, in a limited region one can have $C>0$ and the $Ca^{-3}$ term in (\ref{HLF16}) can be regarded as dark matter. Note that Eq. (\ref{HLF16}) corresponds to the first FRW equation and (\ref{HLF13}) to the second one. Specifically, if we choose $\lambda=\mu=1$ and $C=0$, Eq. (\ref{HLF16}) reduces to \begin{eqnarray} \label{HLF17} 0 &=& F\left(\tilde R\right) - 6 \left(H^2 + \dot H \right) F'\left(\tilde R\right) + 6 H \frac{\mathrm{d} F'\left(\tilde R\right)}{\mathrm{d} t} - \rho \nonumber \\ &=& F\left(\tilde R\right) - 6 \left(H^2 + \dot H \right) F'\left(\tilde R\right) + 36 \left(4H^2 \dot H + H \ddot H\right) F''\left(\tilde R\right) - \rho \, , \end{eqnarray} which is identical to the corresponding equation in the standard $F(R)$ gravity (see Eq. (2) in \cite{Nojiri:2009kx} where a reconstruction of the theory has been made). In the following we will explore the properties of the equations (\ref{HLF13}) and (\ref{HLF16}), especially looking for solutions that represent accelerated expansion. Solutions of this type are very important because they represent the key evolutionary phases of the universe, namely the inflationary era and the dark energy era.
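The equivalence of the two lines of (\ref{HLF17}) rests on the chain rule $6H\,\mathrm{d}F'(\tilde R)/\mathrm{d}t = 6H F''(\tilde R)\,\mathrm{d}\tilde R/\mathrm{d}t$, with $\tilde R = 12H^2 + 6\dot H$ for $\lambda=\mu=1$ (the standard flat-FRW Ricci scalar, consistent with the reduction to ordinary $F(R)$ gravity). A short symbolic cross-check of the coefficient multiplying $F''(\tilde R)$ (a sketch using sympy):

```python
import sympy as sp

t = sp.symbols('t')
H = sp.Function('H')(t)
Hd, Hdd = sp.diff(H, t), sp.diff(H, t, 2)

# flat-FRW scalar for lambda = mu = 1 (the standard Ricci scalar)
R = 12*H**2 + 6*Hd

# coefficient of F''(R) in 6 H dF'(R)/dt = 6 H F''(R) dR/dt
lhs = sp.expand(6*H*sp.diff(R, t))        # = 144 H^2 H' + 36 H H''
rhs = sp.expand(36*(4*H**2*Hd + H*Hdd))
assert sp.simplify(lhs - rhs) == 0
```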
The connection with dark energy is particularly important in order to understand whether the Newtonian nature of the quantum theory of gravitation implicit in Ho\v{r}ava-Lifshitz gravity is the direct cause of cosmic acceleration and, as a consequence, of dark energy. \subsection{de Sitter cosmology} Let us investigate the properties of the de Sitter solutions in this class of theories. This issue was considered for the first time in \cite{Chaichian:2010yi}, but in the following a more general treatment is given. These solutions are of great importance in cosmology because they have the potential to describe both inflationary phase(s) as well as dark energy era(s). In standard $F(R)$ gravity it has been proven that it is possible to construct a viable model unifying inflation and late time acceleration in the form of double or multiple de Sitter solutions \cite{NO2003,Nojiri:2007as,Cognola:2007zu,Cognola:2008zp}. In vacuum ($\rho_m = p_m=0$), substituting the de Sitter metric \begin{equation} \label{dSmetric} \mathrm{d} s^2= -\mathrm{d} t^2+\exp{(\gamma t)}\sum_{i=1}^{3}\left(\mathrm{d} x^i\right)^2\,, \end{equation} the equations (\ref{HLF13}) and (\ref{HLF16}) reduce to the single equation \begin{equation} \label{EqdS} F+6 \gamma ^2 \left(3 \lambda -3 \mu -1\right) F'=0\,. \end{equation} In Table \ref{dSTab} we show the values of the time constant $\gamma$ of the de Sitter metric of some popular $F(R)$ theories and their Ho\v{r}ava-Lifshitz versions. It is interesting to note that, in contrast to what happens in standard $F(R)$ gravity, the power-law function $F=\tilde{R}^m$ is not degenerate for $m=2$, but only for \begin{equation} m=\frac{2}{1-3 \lambda +3 \mu }\,, \end{equation} i.e. it depends on the Lorentz-violation parameters. It is also interesting to note that in general the solution of (\ref{HLF13}) and (\ref{HLF16}) is not unique. Thus a given theory can have multiple de Sitter solutions.
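For a monomial $F(x)=x^m$, Eq. (\ref{EqdS}) fixes the de Sitter value of the curvature algebraically: $x_{\rm dS}=6m\gamma^2(1-3\lambda+3\mu)$. A short symbolic sketch of this step (sympy; $x$ denotes $\tilde R$ as in Table \ref{dSTab}):

```python
import sympy as sp

x, gam, lam, mu, m = sp.symbols('x gamma lambda mu m', positive=True)

F = x**m
# eq. (EqdS): F + 6 gamma^2 (3 lambda - 3 mu - 1) F' = 0
eq = F + 6*gam**2*(3*lam - 3*mu - 1)*sp.diff(F, x)

# divide out the overall factor x^(m-1) and solve the remaining linear equation
x_dS = sp.solve(sp.simplify(eq/x**(m - 1)), x)[0]
assert sp.simplify(x_dS - 6*m*gam**2*(1 - 3*lam + 3*mu)) == 0
```

For the more complicated models of Table \ref{dSTab} the same equation can be solved numerically once $x$ is expressed through $\gamma$ on the de Sitter background.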
This means that also in this case the cosmologies of these theories can admit both inflation and dark energy phases. However, since the Ho\v{r}ava-Lifshitz parameters are in principle only present in the coefficients of the equation (\ref{EqdS}) and the number of solutions of (\ref{EqdS}) is determined by the powers of $\tilde{R}$ appearing in $F$, two corresponding theories will have in general the same number of de Sitter solutions. Obvious exceptions are the case $3 \lambda -3 \mu -1=0$ for which Eq. (\ref{EqdS}) becomes $F=0$ and the case $3 \lambda -3 \mu -1=g(\gamma)$. In the first case, we see that the structure of the cosmological equations is essentially changed. In particular, the equation (\ref{HLF13}) is modified and the constraint (\ref{HLF16}) loses the linear $H^2$ term: \begin{equation} \label{HLFrg15} 0 = F\left(\tilde R\right) - 6 \mu \dot H F'\left(\tilde R\right) + 6 \mu H \frac{\mathrm{d} F'\left(\tilde R\right)}{\mathrm{d} t} - \rho\, \end{equation} (we have considered $C=0$). Consequently, the equation (\ref{EqdS}) becomes \begin{equation} \label{HLFrg16} 0 = F\left(\tilde R\right)\, , \end{equation} which is never obtained in standard $F(R)$ gravity. In the second case, instead, the choice of the function $g$ can radically change the number and type of solutions in these theories. In this sense, the solution space for Ho\v{r}ava-Lifshitz $F(R)$ gravity can be considered larger than that of its standard counterpart. Such a fact will be even more apparent in the case of the FRW-type solutions that will be examined in the next section. \begin{table}[htdp] \caption{Some of the values of the time constant of de Sitter backgrounds for standard $F(R)$ gravity models and their Ho\v{r}ava-Lifshitz counterparts. When writing the form of the function $F$, the Ricci scalar of both types of theories is indicated by $x$.
For the more complex forms of $F(R)$ an implicit equation has to be solved for the time parameter $\gamma$ in order to find its values.}\label{dSTab} \begin{center} \scriptsize{\begin{tabular}{cccc} \hline\hline Function $F$& $ \mbox{Standard Case} $ & $\mbox{Ho\v{r}ava-Lifshitz case}$ \\\hline $x+\chi x^n$ & $\gamma=\pm\left(2^{2 n-1} 3^{n-1} \alpha -3^n 4^{n-1} n \chi \right)^{\frac{1}{2-2 n}}$ & $\gamma=\pm\left(\frac{2^{2 n} 3^n \chi \left(3 n \lambda -3 n \mu -n+2\right)}{-36 \lambda +36 \mu -12}\right)^{\frac{1}{2 (1-n)}}$ \\&&&\\ $x^n\exp(\chi x^m)$ & $\gamma=\frac{1}{2 \sqrt{3}}\left(\frac{2-n}{m \chi}\right)^{\frac{1}{2 m}}$ & $\gamma=\frac{1}{2 \sqrt{3}}\left[\frac{n \left(3 \lambda -3 \mu -1\right)+2}{ \chi m \left(3 \mu +1-3 \lambda \right)} \right]^{\frac{1}{2 m}}$ \\&&&\\ $\frac{x^m+\chi}{1+\xi x^n}$ & $\frac{A}{2 \left(12^n \gamma ^{2 n}+\xi \right)^2}=0$ &$\frac{B}{2 \left(12^n \gamma ^{2 n}+\xi \right)^2}=0$ \\&&&\\ $x+\chi+\frac{\chi }{\alpha \left[(x \xi -1)^{2 n+1}+1\right]+1}$& $-C+6 \gamma ^2+\chi=0$& $-3 D+6 \gamma ^2 \left(3 \lambda -3 \mu +1\right)+\chi=0$ \\&&&\\ \hline \multicolumn4c{$A=2^{2 (m+n)} 3^{m+n} (-m+n+2) \gamma ^{2 (m+n)}+2^{2 m} 3^m (2-m) \xi \gamma ^{2 m}+2^{2 n} 3^n (n+2) \chi \gamma ^{2 n}+2 \xi \chi$}\\\multicolumn4c{$B=2^{2 (m+n)} 3^{m+n} \gamma ^{2 (m+n)} \left(m \left(3 \lambda -3 \mu -1\right)-3 n \lambda +3 n \mu +n+2\right)+~~~~~~~~~~~~~~~~~~~~~~~~~$}\\ \multicolumn4c{$+2^{2 m} 3^m \xi \gamma ^{2 m} \left(m \left(3 \lambda -3 \mu -1\right)+2\right)+2^{2 n} 3^n \chi \gamma ^{2 n} \left(-3 n \lambda +3 n \mu +n+2\right)+2 \xi \chi$}\\ \multicolumn4c{$C=\frac{\chi \left(-6 (-2 n-3) \gamma ^2 \xi -1\right)-6 (2 n+1) (\alpha +1) \gamma ^2 \xi \chi}{\left(12 \gamma ^2 \xi -1\right) \left(\alpha \left(12 \gamma ^2 \xi -1\right)^{2 n+1}+\alpha +1\right)^2}$}\\ \multicolumn4c{$D=\frac{6 (2 n+1) (\alpha +1) \gamma ^2 \xi \chi \left(3 \lambda -3 \mu -1\right)+\chi \left(-6 \gamma ^2 \xi \left(n \left(6 \lambda -6 \mu 
-2\right)+3 \left(\lambda -\mu -1\right)\right)-1\right)}{\left(12 \gamma ^2 \xi -1\right) \left(\alpha \left(12 \gamma ^2 \xi -1\right)^{2 n+1}+\alpha +1\right)^2}$}\\\hline\hline \end{tabular}} \end{center} \end{table} \subsection{Power law solutions and reconstruction technique}\label{SectionReconstruction} In addition to the de Sitter solution described above one can also look for accelerated expansion phases in the form of power law solutions. These solutions can play a double role: they behave as Friedmannian cosmologies if the exponent of the power law is in the interval $]0,1[$, while they can realize the so-called ``power law inflation'', or a ``power law dark energy'', if the exponent is bigger than one. If we look for the presence of Friedmann solutions of (\ref{HLF13}) and (\ref{HLF16}), we quickly realize that, as in $F(R)$ gravity, there is little chance to find power law solutions, unless one considers a function $F$ of trivial form. However, due to the additional parameters, the set of solutions of this type is larger in the Ho\v{r}ava-Lifshitz case than in the standard one. For example, in the simple case $F(\tilde{R})=\tilde{R}+\chi \tilde{R}^m$ we find that, in the presence of a barotropic fluid ($p=w\rho$), the spatially-flat solution \begin{equation} \label{} a=a_0 t^{\frac{2}{3(1+w)}}\,, \qquad \rho=\rho_0 t^{-2}\,, \end{equation} satisfies (\ref{HLF13}) and (\ref{HLF16}) if \begin{equation} \mu =\frac{\left(w^2-1\right) \gamma \left(3 \lambda -1\right)}{2 w (3 (w+1) \gamma -w+1)}\,, \quad \mbox{and} \quad \rho _0= \frac{4 \left(3 \lambda -1\right)}{3 (w+1)^2 \kappa ^2}\,. \end{equation} This corresponds to the standard Friedmann solution. It is well known that in standard $F(R)$ theories, the case $F(R)=R+\chi R^m$ possesses only power law solutions of the type $a=a_0 t$ or $a=a_0 t^{1/2}$ (see e.g. \cite{Carloni:2007br}).
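As a consistency check, the quoted Friedmann-type solution indeed satisfies the conservation law (\ref{HLF15}) for a barotropic fluid $p=w\rho$; a brief sympy verification:

```python
import sympy as sp

t, w, a0, rho0 = sp.symbols('t w a_0 rho_0', positive=True)

a = a0*t**(2/(3*(1 + w)))    # a = a_0 t^{2/[3(1+w)]}
rho = rho0/t**2              # rho = rho_0 t^{-2}
p = w*rho                    # barotropic equation of state

H = sp.diff(a, t)/a          # Hubble parameter H = 2/[3(1+w) t]
conservation = sp.diff(rho, t) + 3*H*(rho + p)   # eq. (HLF15)
assert sp.simplify(conservation) == 0
```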
In order to facilitate the analysis in the next sections, we also consider some solutions for this model in the case of very small and very large scalar curvature. In the first case, the theory reduces to GR plus a cosmological constant and its solutions are approximated by the Friedmann ones. In the case of high curvature, instead, the theory reduces to $F(\tilde{R})\approx\chi\tilde{R}^m$. Such a theory possesses three exact solutions. The first two \begin{eqnarray} \label{SolMat} &&a=a_0 t^{\gamma}\,, \qquad \rho=\rho_0 t^{-2}\,,\qquad \gamma=\frac{2 m}{3 (1 + w)}\,,\\ &&\rho_0 =\frac{\chi \left[3 \mu(m-1) (2 m (w+2)-w-1)-m (2 m-1) \left(3 \lambda -1\right)\right] }{\kappa ^2 \left[m \left(3 \lambda -6 \mu -1\right)+3 (w+1) \mu \right]}\left(\frac{4 m^2 \left(-3 \lambda +6 \mu +1\right)-12 m (w+1)\mu }{(w+1)^2}\right)^m\,,\nonumber \end{eqnarray} and \begin{eqnarray} \label{SolVac} &&a=a_0 t^{\gamma}\,, \qquad \rho=\rho_0 t^{-2}\,,\qquad \gamma=\frac{2 (m-1) (2 m-1) \mu }{(2 m-1) \left(3 \lambda -1\right)-6 (m-1) \mu }\,,\qquad \rho_0 =0\,, \end{eqnarray} correspond to the solutions in the standard $F(R)$ case. A third solution, valid only for $m>1$, is \begin{eqnarray} \label{Sol HL} &&a=a_0 t^{\gamma}\,, \qquad \rho=\rho_0 t^{-2}\,,\qquad \gamma=\frac{2 \mu }{-3 \lambda +6 \mu +1}\,,\qquad \rho_0 =0\,, \end{eqnarray} which is characteristic of Ho\v{r}ava-Lifshitz gravity and does not depend on $m$. In the analysis of the singularities of this simple model, we will refer to these solutions. Note that, as often happens in theories of this type \cite{Carloni:2004kp}, the value of $\rho_0$ can be negative (or even undefined) for certain combinations of variables. This implies that matter is not always compatible with $F(R)$ gravity, not even in the Ho\v{r}ava-Lifshitz case.
In our specific example $\rho_0>0$ implies \begin{eqnarray} &&m<0,\qquad 0\leq w\leq 1, \quad\left\{ \begin{array}{ccc} \chi <0 & \mu >0 &\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m}<\lambda <\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w\mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3 m} \\ \chi >0&\mu <0 &\lambda >\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \\ \chi >0&\mu \geq 0 & \lambda >\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m} \end{array} \right. \\ &&\chi >0, \qquad \mu >0 , \quad\left\{ \begin{array}{ccc} w=0&0<m<\frac{1}{2} &\lambda <\frac{6 m \mu +m-3 \mu }{3 m} \\ w=0&m>\frac{1}{2} &\frac{12 m^2 \mu +2 m^2 -15 m \mu -m +3 \mu }{6 m^2-3m}<\lambda <\frac{6 m \mu +m-3 \mu }{3 m} \\ 0<w\leq 1&0<m< \frac{1}{2} &\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \\ 0<w\leq 1&\frac{1}{2}<m< \frac{1+w}{2 w} &\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m}<\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \end{array} \right. \\ &&\chi <0, \qquad \mu >0 , \quad\left\{ \begin{array}{ccc} w=0&m>\frac{1}{2} &\lambda <\frac{12 m^2 \mu +2 m^2 -15 m \mu -m +3 \mu }{6 m^2-3m} \\ 0<w\leq 1&\frac{1}{2}<m< \frac{1+w}{2 w} &\lambda <\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m}\\ 0<w\leq 1&m>\frac{1+w}{2 w} &\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \end{array} \right. 
\\ &&\chi >0, \qquad \mu <0 , \quad\left\{ \begin{array}{ccc} w=0&m>\frac{1}{2} &\lambda <\frac{12 m^2 \mu +2 m^2 -15 m \mu -m +3 \mu }{6 m^2-3m} \\ 0<w\leq 1&\frac{1}{2}<m< \frac{1+w}{2 w} &\lambda <\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m}\\ 0<w\leq 1&m>\frac{1+w}{2 w} &\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m}<\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \end{array} \right.\\ &&\chi <0, \qquad \mu <0 , \quad\left\{ \begin{array}{ccc} w=0&0<m<\frac{1}{2} &\frac{12 m^2 \mu +2 m^2 -15 m \mu -m +3 \mu }{6 m^2-3m}<\lambda <\frac{6 m \mu +m-3 \mu }{3 m} \\ w=0&m>\frac{1}{2}&\lambda <\frac{6 m \mu +m-3 \mu }{3 m} \\ 0<w\leq 1&0<m< \frac{1}{2} &\frac{6 m^2 w \mu +12 m^2 \mu +2 m^2-9 m w \mu -15 m \mu -m+3 w \mu +3 \mu }{6 m^2-3m}<\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \\ 0<w\leq 1&\frac{1}{2}<m< \frac{1+w}{2 w} &\lambda <\frac{6 m \mu +m-3 w \mu -3 \mu }{3 m} \end{array} \right. \end{eqnarray} Another result which will be useful for our purposes is an exact solution for the theory $F(\tilde{R})=\tilde{R}+\xi \tilde{R}^2+\chi \tilde{R}^m$. For $m>2$, this solution reads \begin{eqnarray} \label{SolR2corr} &&a=a_0 t^{\gamma}\,, \qquad \gamma=\frac{2}{3(1+w)}\,,\qquad \mu=\frac{1-3\lambda}{3 (w-1)}\,,\nonumber \\ && \rho=\rho_0 t^{-2}\,,\qquad \rho _0= \frac{4 (3 \lambda -1)}{3 (w+1)^2 \kappa ^2}\,. \end{eqnarray} This solution is obviously present only in the Ho\v{r}ava-Lifshitz version of this theory, as one can check directly. One of the most important methods used to investigate power law solutions in higher order gravity is the reconstruction of a theory starting from a specific background. In the following we will adapt this technique to reconstruct the form of the function $F(\tilde{R})$ that admits flat FRW power law solutions \cite{Nojiri:2009xh,Cognola:2009za,Nojiri:2009kx,Nojiri:2006be,Nojiri:2006je,Elizalde:2010jx}. 
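The positivity conditions above are easy to probe numerically. The sketch below evaluates $\rho_0$ of the solution (\ref{SolMat}) at one sample point of the branch $\chi>0$, $\mu>0$, $w=0$, $0<m<1/2$, $\lambda<(6m\mu+m-3\mu)/(3m)$; the numerical values and the function name are our illustrative choice, with $\kappa=1$:

```python
import math

def rho0_SolMat(m, w, chi, lam, mu, kappa=1.0):
    """rho_0 of the solution (SolMat) for F(R) ~ chi R^m with p = w rho."""
    num = chi*(3*mu*(m - 1)*(2*m*(w + 2) - w - 1) - m*(2*m - 1)*(3*lam - 1))
    den = kappa**2*(m*(3*lam - 6*mu - 1) + 3*(w + 1)*mu)
    base = (4*m**2*(-3*lam + 6*mu + 1) - 12*m*(w + 1)*mu)/(w + 1)**2
    # a negative base would make rho_0 complex for non-integer m;
    # at the sample point below base = 1/4 > 0
    return (num/den)*base**m

val = rho0_SolMat(m=0.25, w=0.0, chi=1.0, lam=-2.0, mu=1.0)
print(val > 0)   # True: the sample point lies in the rho_0 > 0 region
```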
Let us then consider a cosmological solution characterized by the Hubble parameter \begin{equation} \label{HubRec} H= \frac{\gamma}{t}\,, \end{equation} and again assuming the energy density of a barotropic fluid \begin{equation} \label{RhoRec} \rho= \rho_0 t^{-3\gamma(1+w)}\, . \end{equation} In this case the Ricci scalar is \begin{equation} \label{} \tilde{R}=\frac{3 \gamma \left(-3 \gamma \lambda +6 \gamma \mu +\gamma -2 \mu \right)}{t^2}\, , \end{equation} so that we can express the time $t$ as a function of $\tilde R$. Substituting (\ref{HubRec}) and (\ref{RhoRec}) into (\ref{HLF13}) and (\ref{HLF16}) and expressing $t$ in terms of $\tilde R$, one obtains \begin{eqnarray} \label{EqRec} &&A_1 \tilde{R}^3 F^{(3)}+A_2 \tilde{R}^2 F''+A_3 \tilde{R} F'+F+A_4 \tilde{R}^{\frac{3}{2} (w+1) \gamma}=0\,,\\ && B_1 \tilde{R}^2 F''+B_2 \tilde{R} F'+B_3 F+B_4 \tilde{R}^{\frac{3}{2} (w+1) \gamma }=0\,, \end{eqnarray} with \begin{eqnarray} &&A_1=\frac{8 \mu }{3 \gamma \left(\gamma-3 \gamma \lambda +6 \gamma \mu -2 \mu \right)}, \\&&A_2=-\frac{4 \left(\gamma -3 \gamma \lambda +3 \mu \right)}{3 \gamma \left(\gamma \left(3 \lambda -6 \mu -1\right)+2 \mu \right)}, \\&&A_3=\frac{2 (3 \gamma -1) \left(3 \lambda -3 \mu -1\right)}{3 \left(\gamma-3 \gamma \lambda +6 \gamma \mu -2 \mu \right)}, \\&&A_4= 3^{-\frac{3}{2} (w+1) \gamma } \kappa ^2 w \rho _0 \left[\gamma^2 (1-3 \lambda +6 \mu ) -2 \mu\gamma\right]^{-\frac{3}{2} (w+1) \gamma } \end{eqnarray} and \begin{eqnarray} &&B_1=\frac{4 \mu }{3 \gamma \lambda -6 \gamma \mu -\gamma +2 \mu }, \\&&B_2=\frac{\gamma \left(3 \lambda -1\right)}{\gamma -3 \gamma \lambda +6 \gamma \mu -2 \mu }-1, \\&&B_3=1, \\&&B_4=\kappa ^2 \rho _0 \left(-3^{-\frac{3}{2} (w+1) \gamma }\right) \left(\gamma \left(\gamma-3 \gamma \lambda +6 \gamma \mu -2 \mu\right)\right)^{-\frac{3}{2} (w+1) \gamma }\,.
\end{eqnarray} These equations admit the solution \begin{equation} \label{SolFRec} F(\tilde{R})=C_1 \tilde{R}^{\alpha _-}+C_2 \tilde{R}^{\alpha _+}+C_3 \tilde{R}^{\frac{3}{2} (1+w)\gamma }\, , \end{equation} where \begin{equation} \label{} \alpha_\pm =\frac{\gamma \left(3 \lambda -3 \mu-1\right)+3 \mu\pm\sqrt{\gamma ^2 \left(-3 \lambda +3 \mu +1\right)^2+2 \gamma \mu \left(3 \lambda +3 \mu -1\right)+\mu ^2} }{4 \mu } \end{equation} and \begin{equation} \label{} C_3=\frac{3^{-\frac{3}{2} (w+1) \gamma } \kappa ^2 \rho_0 \left[\gamma \left(-3 \gamma \lambda +6 \gamma \mu +\gamma -2 \mu\right)\right]^{1-\frac{3}{2} (w+1) \gamma }}{\gamma \left[\gamma \left(3 \lambda -1\right) (3 (w+1) \gamma -1)-\mu (3 (w+1) \gamma -2) (3 (w+2) \gamma-1)\right]}\, . \end{equation} Note that the coefficients $\alpha$ are real only for $$\gamma ^2 \left(-3 \lambda +3 \mu +1\right)^2+2 \gamma \mu \left(3 \lambda +3 \mu -1\right)+\mu ^2>0\, ,$$ which is satisfied for \begin{eqnarray} \label{} && \gamma\geq0\\ &&\gamma<0, \quad \mbox{and}\quad \lambda <\frac{3 \gamma \mu +\gamma -\mu }{3 \gamma }-2 \sqrt{-\frac{\mu ^2}{3\gamma }}\quad \lambda >\frac{3 \gamma \mu +\gamma -\mu }{3 \gamma } +2 \sqrt{-\frac{\mu ^2}{3\gamma}}\, . \end{eqnarray} Therefore also in the Ho\v{r}ava-Lifshitz case, the only type of function $F$ that is able to generate analytical power law solutions is a combination of powers of the Ricci scalar. The connection between the equations (\ref{HubRec}) and (\ref{SolFRec}) allows one to make some general considerations on the relation between the structure of the function $F$ and the cosmology. If one plots the behavior of the exponents of (\ref{SolFRec}) as a function of $\gamma$ for various values of $\mu$ and $\lambda$ (see figure \ref{FigRec}), one finds that there is a correlation between the existence of a specific type of solutions and the sign of the modes of the reconstructed theory. 
For example, if one chooses only positive values for the exponents of (\ref{SolFRec}) in order to avoid instabilities, none of the permitted values of the parameters are able to generate a contracting solution. On the other hand, both a Friedmann expansion and a power-law inflation regime can be obtained if $\mu <0$ and $\lambda <\frac{6 \gamma \mu +\gamma -2 \mu }{3 \gamma }$ or $\mu >0$ and $\lambda >\frac{6 \gamma \mu+\gamma -2 \mu }{3 \gamma }$. The behavior of the exponents of (\ref{SolFRec}) is shown in Figure \ref{FigRec} for different values of the parameters. As a final remark, it is interesting to note that the case $F(\tilde{R})=C_2 \tilde{R}+C_3 \tilde{R}^{m}$ corresponds to the solution (\ref{SolMat}) via the reconstruction method. This conclusion confirms the correctness of this approach and its utility in the search for exact solutions. \begin{figure}[htbp] \subfigure[The exponents of (\ref{SolFRec}) for $w=0$, $\mu=-3$ and $\lambda=-1/2$.]{\includegraphics[scale=1]{Plot1.eps}} \subfigure[The exponents of (\ref{SolFRec}) for $w=0$, $\mu=-3$ and $\lambda=1$.]{\includegraphics[scale=1]{Plot2.eps}} \subfigure[The exponents of (\ref{SolFRec}) for $w=0$, $\mu=-2/5$ and $\lambda=1/2$.]{\includegraphics[scale=1]{Plot3.eps}} \caption{Plot of the curves representing the values of the exponents of the reconstructed $F(R)$ theory corresponding to a background $H=\gamma/t$, for $w=0$ and some specific values of the parameters $\lambda$ and $\mu$. From these plots one can infer, for example, that some backgrounds of this type can only be realized in $F(R)$ theories with poles.}\label{FigRec} \end{figure} \subsection{Explicit model for the unification of inflation with dark energy} It is interesting to try to formulate an explicit model which may unify early-time inflation with late-time acceleration.
First we consider the case $3 \lambda -3 \mu -1=0$, which is specific to Ho\v{r}ava-Lifshitz $F(R)$ gravity. An example is \begin{equation} \label{F0} F(R) = \frac{1}{2\kappa^2} \left( 1 + c_1 \ln \kappa^2 R \right) \left( 1 - c_2 \kappa^2 R \right)\, . \end{equation} Here $c_1$ and $c_2$ are dimensionless positive constants. Then, following (\ref{HLFrg16}), we find two de Sitter solutions \begin{equation} \label{F0a} R = R_L \equiv \kappa^{-2} {\rm e}^{ - \frac{1}{c_1}}\, , \quad R = R_I \equiv \frac{1}{c_2 \kappa^2}\, . \end{equation} If one chooses $c_1 \sim 1/280$, we find $R_L \sim \left( 10^{-33}\, \mathrm{eV} \right)^2$, which may describe the accelerating expansion of the present universe. On the other hand, if $c_2 \sim \mathcal{O}(1) - \mathcal{O}(100)$, $R=R_I$ may describe inflation. In the general case $3 \lambda -3 \mu -1 \neq 0$, we may consider the following form of $F(\tilde R)$: \begin{equation} \label{F1} F\left(\tilde R\right) = \tilde R + f\left(\tilde R\right)\,, \quad f\left(\tilde R\right) = R_I \tanh \frac{ \tilde R - R_1}{\Lambda} + R_L \tanh \frac{ \tilde R - R_2}{\Lambda} + R_I \tanh \frac{ R_1}{\Lambda} + R_L \tanh \frac{R_2}{\Lambda}\, . \end{equation} Here $R_I$, $R_L$, $R_1$, $R_2$, and $\Lambda$ are positive constants and we assume \begin{equation} \label{F2} R_I \gg R_L \gg \Lambda\,,\quad R_I\gg R_1\,,\quad R_L \gg R_2\, . \end{equation} Then, when $\tilde R \sim R_I$, we find \begin{equation} \label{F3} f(\tilde R) \sim R_I\, , \end{equation} such that $f(\tilde R)$ plays the role of a large cosmological constant, which generates inflation. On the other hand, when $\tilde R \sim R_L$, $f(\tilde R)$ becomes a small constant, \begin{equation} \label{F4} f(\tilde R) \sim R_L\, , \end{equation} and the late time acceleration could be generated. Note that \begin{equation} \label{F5} f(0) = 0\, , \end{equation} therefore $f(\tilde R)$ is not a genuine cosmological constant.
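The limiting behavior of $f(\tilde R)$ in (\ref{F1}) is easily illustrated numerically. In the sketch below the parameter values are purely illustrative (our choice, in arbitrary units) and obey $R_I \gg R_1 \gg R_L \gg R_2 \gg \Lambda$, a hierarchy compatible with, though stronger than, the conditions (\ref{F2}):

```python
import math

R_I, R_1, R_L, R_2, Lam = 1e8, 1e4, 1e2, 1.0, 1e-2   # illustrative values only

def f(R):
    """f(R) of eq. (F1)."""
    return (R_I*math.tanh((R - R_1)/Lam) + R_L*math.tanh((R - R_2)/Lam)
            + R_I*math.tanh(R_1/Lam) + R_L*math.tanh(R_2/Lam))

print(f(0.0))   # 0: no bare cosmological constant, eq. (F5)
print(f(R_L))   # O(R_L): small effective constant at late times, eq. (F4)
print(f(R_I))   # O(R_I): large effective constant driving inflation, eq. (F3)
```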
Hence, explicit construction of realistic models for the unification of inflation with dark energy is possible. The remaining freedom in the choice of parameters gives the possibility to make the model quite satisfactory from the cosmological point of view. \subsection{Finite-time future singularities} Attempts at constructing accelerating cosmological models that include a dark component have revealed that such models often contain some unexpected phenomenology. One of the most striking features of dark energy cosmologies is that, regardless of the way in which the dark component is introduced, they can become singular. By ``become singular'' we mean that there exists a specific time $t_s$ at which one or more key quantities of the model become divergent. Some of these singularities, like the so-called ``Big Rip'' \cite{ref5}, are realized only far in the future (i.e. $t_s \gg 1$). However, as was recently pointed out \cite{barrow,singularity}, under special circumstances one can have quintessence-like cosmologies that present softer singularities at a smaller finite time (``sudden singularities''). Although in pure GR cosmologies this pathological behavior can be cured by specifying, for example, the equation of state of the fluid, this is not the case in models that include dark fluids or in modified gravity. In fact, it has been proved that these theories can admit up to four different types of singularities at finite time \cite{ConformalAnomaly1}. In the following, we will analyze the presence of these singularities in Ho\v{r}ava-Lifshitz $F(R)$ gravity using some specific examples. Let us define the effective density and the effective pressure associated with the Ho\v{r}ava-Lifshitz $F(R)$ gravity.
We have \begin{eqnarray}\label{mateff} \rho_\mathrm{eff} &=& \frac{1}{\kappa ^2}\left[6 \mu H \dot{\tilde{R}} F''-6 \mu \dot{H} F'+6(3\lambda -3 \mu -1 )H^2 F'+F\right] \,,\nonumber \\ p_\mathrm{eff} &=& \frac{1}{\kappa ^2}\left[-2 \mu F^{(3)} \dot{\tilde{R}}^2- 2(3\lambda +\mu-1)H \dot{\tilde{R}} F''-2(3\lambda -3 \mu -1 ) \dot{H} F'-6(3\lambda -3 \mu -1 )H^2 F'-F\right]\,. \end{eqnarray} The different types of finite-time singularities can be classified by looking at the behavior of the quantities (\ref{mateff}) together with the scale factor $a$, $H$ and its derivatives. In particular, in \cite{ConformalAnomaly1} they are classified as \begin{itemize} \item Type I (``Big Rip'') : For $t \to t_s$, $a \to \infty$ and $(\rho,|p|) \to \infty$ or $\rho$ and $|p|$ are finite; \item Type II (``sudden'') : For $t \to t_s$, $a \to a_s$, $\rho \to \rho_s$ and $|p| \to \infty$; \item Type III : For $t \to t_s$, $a \to a_s$, $\rho \to \infty$ and $|p| \to \infty$; \item Type IV : For $t \to t_s$, $a \to a_s$, $(\rho, |p|) \to $ constant (or zero) and higher derivatives of $H$ diverge. \end{itemize} To classify the singularities in our case, let us consider a vacuum spatially-flat cosmology, and let us imagine that close to the time $t_s$ the Hubble parameter can be written as \begin{equation}\label{Hsing} H\approx h_0(t_s-t)^{-\gamma}\,. \end{equation} This means that the scale factor is \begin{equation} \label{} a\approx a_0\exp\left[\frac{h_0(t_s-t)^{1-\gamma}}{\gamma-1}\right]\,. \end{equation} The above expression tells us that there are two possible behaviors of the scale factor that depend on the value of $\gamma$: if $\gamma\geq1$, $a$ will diverge as $t$ approaches $t_s$, while if $\gamma<1$, $a$ will converge. Therefore it is clear that the singularity of type I is realized when $\gamma>1$, the others for $\gamma<1$. The value of $\gamma$ also influences the form of the Ricci scalar.
In general, one has \begin{equation} \tilde{R}\approx h_0^2 \left(-9 \lambda +18 \mu +3\right)(t_s-t)^{-2 \gamma } +6 \gamma h_0 \mu (t_s-t)^{-(\gamma +1)}\,, \end{equation} but, depending on the value of $\gamma$, the above expression can be reduced to \begin{equation} \label{RicciSing} \tilde{R}\approx \left\{ \begin{array}{ll} h_0^2 \left(-9 \lambda +18 \mu +3\right) (t_s-t)^{-2 \gamma }& \mbox{if}\quad \gamma>1\,, \\ 6 h_0 \gamma \mu (t_s-t)^{-\gamma -1} & \mbox{if}\quad \gamma<1\,. \\ \end{array} \right. \end{equation} This property of the curvature also indicates that for $t\rightarrow t_s$ one has \begin{equation} \left\{ \begin{array}{ll} \tilde{R}\gg 1 & \gamma>-1\,,\\ \tilde{R}\ll 1 & \gamma<-1\,,\\ \end{array} \right. \end{equation} i.e. the curvature may become divergent or very small depending on the value of $\gamma$. This property will turn out to be very useful for simplifying the calculations. Finally, because of the structure of the effective energy density and pressure, theories with the different types of singularities are associated directly with the values of $\gamma$. Specifically: \begin{itemize} \item Type I $\Rightarrow \gamma>1$; \item Type II $\Rightarrow -1<\gamma<0$; \item Type III $\Rightarrow 0<\gamma<1$; \item Type IV $\Rightarrow \gamma<-1$. \end{itemize} Let us now consider some simple examples in the Ho\v{r}ava-Lifshitz $F(R)$ gravity and their comparison with the standard case. \subsubsection{The case $F(\tilde{R})=\tilde{R}+\chi \tilde{R}^m$} In the case \begin{equation} \label{F=RRm} F(\tilde{R})=\tilde{R}+\chi \tilde{R}^m\,, \end{equation} substituting (\ref{Hsing}) and (\ref{RicciSing}), one finds the necessary conditions for the presence of the singularities: \begin{equation} \label{singCond_beta} \begin{array}{lc} \mbox{Type I} & \gamma>1 \\ \mbox{Type II} & m<0, \quad -1<\gamma<0\\ \mbox{Type III} & 0<\gamma<1,\quad m\neq0\\ \mbox{Type IV} & \gamma \in \mathbb{Q}-\mathbb{Z},\quad \gamma <-1\,.
\\ \end{array} \end{equation} These are compatible with the solutions found in \cite{singularity}. It is important to stress that the conditions (\ref{singCond_beta}) are only necessary for the existence of the singularity. The reason is that we have implicitly postulated that the solution (\ref{Hsing}) satisfies (\ref{HLF13}) and (\ref{HLF16}) at least in a specific time interval, which may not be the case due to the non-linearity of the theory. Then the only way to proceed is to find some exact solutions of the theory we are examining and see if the parameters have values for which the conditions (\ref{singCond_beta}) are satisfied. Unfortunately, finding exact solutions in these theories can be problematic. However, close to the singularity one can solve this problem by using the fact that, as we have seen, the magnitude of the Ricci scalar also changes when $t \rightarrow t_s$. This means that we can approximate the function $F$ with a simpler form that admits simple exact solutions. Then these solutions can be used to obtain necessary and sufficient conditions for the singularity to be realized. In our simple example it is clear that one has \[ F\approx \left\{ \begin{array}{lc} \chi\tilde{R}^{m} & \gamma>-1\,, \\ \tilde{R} & \gamma<-1\,. \end{array} \right. \] This means that we can approximate our $F$ with $\chi\tilde{R}^{m}$ in all the cases of interest and that we can use the exact solutions found in Section \ref{SectionReconstruction} in order to understand the presence of singularities.
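The mapping between the exponent $\gamma$ of (\ref{Hsing}) and the singularity type, together with the finite or divergent behavior of the scale factor near $t_s$, can be summarized in a small numerical sketch (function and parameter names are our own):

```python
import math

def singularity_type(gamma):
    """Singularity type for H ~ h0 (t_s - t)^(-gamma); the borderline
    values gamma = -1, 0, 1 are excluded from the classification above."""
    if gamma > 1:
        return "I"    # Big Rip: a diverges as t -> t_s
    if 0 < gamma < 1:
        return "III"
    if -1 < gamma < 0:
        return "II"   # sudden singularity
    if gamma < -1:
        return "IV"
    return None

def scale_factor(t, gamma, h0=1.0, a0=1.0, t_s=1.0):
    """a(t) = a0 exp[h0 (t_s - t)^(1-gamma)/(gamma - 1)] near the singularity."""
    return a0*math.exp(h0*(t_s - t)**(1 - gamma)/(gamma - 1))

print(singularity_type(2.0), scale_factor(0.99, 2.0))   # Type I: a is huge near t_s
print(singularity_type(0.5), scale_factor(0.99, 0.5))   # Type III: a stays close to a0
```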
In particular, one sees that the solution (\ref{SolVac}) can present a singularity of Type I for \begin{eqnarray} &&\lambda <\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} 0<m<\frac{1}{2}& \frac{6 m \lambda -2 m-3 \lambda +1}{8 m^3-16 m^2+16 m-8}<\mu <\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8} \,,\\ \frac{1}{2}<m<1& \frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}<\mu <\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12m+8}\,,\\ 1<m<2& \frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}<\mu <\frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6} \,,\\ m=2 & \mu <\frac{1}{6} (9 \lambda -3)\,,\\ m>2 & \mu <\frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}\quad \mu >\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8} \,, \end{array} \right.\\ &&\lambda =\frac{1}{3}\,,\qquad m>2\qquad\mu\neq 0\\ &&\lambda >\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} 0<m<\frac{1}{2}& \frac{-6 m \lambda +2 m+3 \lambda-1}{4 m^2-12 m+8}<\mu <\frac{6 m \lambda -2 m-3 \lambda +1}{8 m^3-16 m^2+16 m-8}\,, \\ \frac{1}{2}<m<1& \frac{-6 m \lambda +2 m+3 \lambda-1}{4 m^2-12 m+8}<\mu <\frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}\,,\\ 1<m<2& \frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}<\mu <\frac{-6 m\lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\,,\\ m=2 & \mu >\frac{1}{6} (9 \lambda -3)\,,\\ m>2 & \mu <\frac{-6 m \lambda+2 m+3 \lambda -1}{4 m^2-12 m+8}\quad \mu> \frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6} \,, \end{array} \right. \end{eqnarray} a singularity of Type II for \begin{eqnarray} &&\lambda <\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} m<-1& 0<\mu <\frac{6 m \lambda -2 m-3 \lambda +1}{4 m^2-4}\,,\\ m=-1& \mu >0\,,\\ -1<m<0& \mu <\frac{6 m \lambda -2 m-3 \lambda +1}{4 m^2-4}\qquad \mu >0\,, \end{array} \right.\\ &&\lambda =\frac{1}{3}\,,\qquad -1<m<0\qquad \mu\neq 0\\ &&\lambda >\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} m<-1 & \frac{6 m \lambda -2 m-3 \lambda +1}{4 m^2-4}<\mu<0\,,\\ m=-1& \mu <0\,, \\ -1<m<0& \mu <0 \qquad \mu >\frac{6 m \lambda -2 m-3 \lambda +1}{4 m^2-4} \,, \end{array} \right. 
\end{eqnarray} and a singularity of Type III for \begin{eqnarray} &&\lambda <\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} m<0& \mu <\frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}\,,\\ 0<m<\frac{1}{2}& \frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}<\mu <0\,,\\ \frac{1}{2}<m<1& \mu <0\quad \mu >\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\,,\\ 1<m<2& \mu<\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\quad \mu >0\,,\\ m=2 & \mu >0\,, \\ m>2 & 0<\mu <\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\,, \end{array} \right.\\ &&\lambda =\frac{1}{3}\,,\qquad \frac{1}{2}<m<2\qquad m\neq1\qquad\mu\neq 0\\ &&\lambda >\frac{1}{3}\,,\qquad \left\{ \begin{array}{cc} m<0 & \mu <0\quad \mu >\frac{6 m \lambda -2 m-3 \lambda +1}{6 m-6}\,,\\ 0<m<\frac{1}{2}& 0<\mu <\frac{-6 m \lambda +2 m+3\lambda -1}{4 m^2-12 m+8}\,, \\ \frac{1}{2}<m<1& \mu <\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\qquad \mu>0\,,\\ 1<m<2& \mu <0\quad \mu >\frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}\,,\\ m=2 & \mu <0\\ m>2 & \frac{-6 m \lambda +2 m+3 \lambda -1}{4 m^2-12 m+8}<\mu <0 \,. \end{array} \right. \end{eqnarray} This is very different from the standard case, where we have a singularity of Type I only for $m>2$, and a Type III for $\frac{1}{2}<m<1$ and $m<0$. For the solution (\ref{SolMat}) we have a singularity of Type III when $$\left\{ \begin{array}{ccc} -3<m\leq -\frac{3}{2}& \frac{1}{3} (-2 m-3)<w\leq 1\,,\\ -\frac{3}{2}<m<\frac{1}{2}\left(1-\sqrt{13}\right)& 0\leq w\leq 1\,. 
\end{array} \right.$$ For the new solution (\ref{Sol HL}), which exists only in the Ho\v{r}ava-Lifshitz version of $F(R)$ gravity, we have instead a singularity of Type I for $$\left\{ \begin{array}{ccc} \lambda <\frac{1}{3}&\frac{1}{6} (3 \lambda -1)<\mu <\frac{1}{8} (3 \lambda -1)&m>1\,,\\ \lambda >\frac{1}{3}&\frac{1}{8} (3 \lambda -1)<\mu <\frac{1}{6} (3 \lambda -1)& m>1\,, \end{array} \right.$$ and a Type III for $$\left\{ \begin{array}{ccc} \lambda <\frac{1}{3}& \frac{1}{8} (3 \lambda -1)<\mu <0 & m>1\,,\\ \lambda >\frac{1}{3} & 0<\mu <\frac{1}{8} (3 \lambda -1)& m>1\,. \end{array} \right.$$ The results above show clearly that the presence of singularities is deeply altered in the Ho\v{r}ava-Lifshitz version of $F(R)$ gravity. In particular, it seems that the additional parameters make it much easier to realize the singularities. The intervals we have presented above for the parameters can then be interpreted as constraints on this type of Ho\v{r}ava-Lifshitz $F(R)$ theories of gravity. Thus, we have demonstrated that modified Ho\v{r}ava-Lifshitz gravity admits phantom-like or quintessence-like accelerating cosmologies, which might lead to singularities of Type I, II, or III. \subsubsection{Eliminating the singularities} Using conformal techniques, it was first argued in \cite{Abdalla:2004sw} that in standard $F(R)$ gravity the singularities of the type we have found can be cured when additional powers of the Ricci scalar are added to the Lagrangian. We can then verify whether something similar happens also in the Ho\v{r}ava-Lifshitz case. Let us therefore consider the case \begin{equation} \label{F=RR2Rm} F(\tilde{R})=\tilde{R}+\xi \tilde{R}^2+\chi \tilde{R}^m\,. \end{equation} The action of this theory contains a correction of order $\tilde{R}^2$ and, in the standard case, it is able to cure the singularities of theories of the type $\tilde{R}+\chi \tilde{R}^m$. 
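A quick way to see why the $\tilde{R}^2$ term can tame the corrections responsible for Type II singularities is that it dominates both the linear term and a $\chi\tilde{R}^m$ term with $m<2$ at large curvature. The following sympy check (our own illustrative sketch) confirms this for a sample negative exponent, the range relevant for Type II in (\ref{singCond_beta}):

```python
import sympy as sp

R, xi, chi = sp.symbols('R xi chi', positive=True)
m = -1  # sample exponent in the Type II range m < 0

# At large curvature the xi*R**2 term of F = R + xi*R**2 + chi*R**m
# dominates both the linear term and the chi*R**m correction.
print(sp.limit(R / (xi * R**2), R, sp.oo))           # 0
print(sp.limit(chi * R**m / (xi * R**2), R, sp.oo))  # 0
```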
If one derives the necessary conditions for the presence of a singularity, one finds: \begin{equation} \begin{array}{lc} \mbox{Type I} & \gamma>1, \\ \mbox{Type II} & \mbox{never}, \\ \mbox{Type III} & 0<\gamma<1\quad m\neq0\,,\\ \mbox{Type IV} & \gamma \in \mathbb{Q}-\mathbb{Z}\quad \gamma <-1 \,. \\ \end{array} \end{equation} One can already see at this level that in this kind of theory singularities of Type II never occur. This can be interpreted as the correction $\tilde{R}^2$ being able to compensate for the offending terms. As said before, the conditions above are only necessary, and the only way to actually determine whether a singularity is present is to consider an exact solution of the theory and see if the conditions above can be satisfied by that solution. Let us then consider the solution (\ref{SolR2corr}) found in the previous section. Applying the conditions above, one finds that none of them is satisfied for this background. In other words, the addition of the $\tilde{R}^2$ term compensates for the singularities one would find in a similar background of (\ref{F=RRm}). This supports the claim made in \cite{Abdalla:2004sw} that (\ref{F=RR2Rm}) is more regular than (\ref{F=RRm}) and that, in general, the introduction of additional curvature invariants into the action can help cure the singularities of an $F(R)$ theory of gravity. On the other hand, it is also quite possible that taking into account quantum gravity effects \cite{Elizalde:2004mq} may cure the future singularities. Summarizing, because of the additional parameters, the Ho\v{r}ava-Lifshitz version of $F(R)$ gravity has a larger space of de Sitter solutions than its standard counterpart. Using a simple theory and both a direct resolution of the cosmological equations and a reconstruction technique, we have also verified that this is true in the case of power-law solutions. 
In general, the presence of additional parameters also means that one has greater freedom in the choice of the features of these solutions, e.g.\ in realizing accelerated expansion. Therefore, in these theories the realization of a dark-energy era (and of inflation) is comparatively easier. This is interesting because it draws a direct connection between the non-relativistic nature of quantum gravity and the observed behavior of the Universe. Unfortunately, the additional number of parameters also increases the probability that these cosmologies will become singular not only at $t\to \infty$, but also at finite time. Using exact solutions, we have shown that there are many combinations of values of the parameters of the theory which are able to induce the appearance of singularities. We have also verified, via a specific example, that by adding an invariant of the type $R^2$ to the Lagrangian of a theory, one obtains a theory whose solutions are much more regular. Consequently, also in the Ho\v{r}ava-Lifshitz case one can compensate for such ill behavior of these models by the introduction of additional powers of the Ricci scalar into the action. \section{Corrections to the Newton law} \label{sec:7} Let us now consider the possible corrections to the Newton law. For this purpose, we consider the infrared region, where the higher derivative terms like (\ref{HLFrg9}) or (\ref{HLFrg14}) can be neglected, and we find \begin{equation} \label{KLFrg17} \mathcal{L}_R^{(3)} \left(g^{(3)}_{ij}\right) \sim R^{(3)}\, . \end{equation} The $R^{(3)}$ term can be added from the beginning, or this term might be induced at the infrared fixed point \cite{Horava:2009uw}. 
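Looking ahead, the infrared estimates obtained in this section for a model of the form $F(A) = A - \Lambda_\mathrm{eff} - c/A^n$ can be spot-checked symbolically. The following sympy sketch (our own, keeping only powers of ten and dropping order-one prefactors) reproduces the linearization of $\varphi$ and the exponents of the scalar-mass estimates derived below:

```python
import sympy as sp

n, x = sp.symbols('n x', positive=True)

# phi = (1/3) log F'(A) with F'(A) = 1 + c*n/A**(n+1); when c*n/A**(n+1) << 1,
# the linearization log(1 + x) ~ x gives phi ~ c*n/(3*A**(n+1)).
assert sp.series(sp.log(1 + x) / 3, x, 0, 2).removeO() == x / 3

# Exponent of ten in m_phi^2 ~ A**(3n+2) / c**((3n+1)/(n+1)), dropping O(1)
# prefactors, with c = mu**(2(n+1)) and mu ~ 10**-33 eV.
log10_mu = -33
log10_c = 2 * (n + 1) * log10_mu

def mass_sq_exponent(log10_A):
    return sp.simplify((3 * n + 2) * log10_A - (3 * n + 1) / (n + 1) * log10_c)

print(mass_sq_exponent(-61))  # solar system, A ~ 10**-61 eV^2: gives 15*n - 56
print(mass_sq_exponent(-50))  # on the earth, A ~ 10**-50 eV^2: gives 48*n - 34
```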
Then by the transformation (\ref{HLFrg4}), by using (\ref{HLFrg10}) and (\ref{HLFrg5}), in (\ref{HLFrg1b}), one gets \begin{eqnarray} \label{KLFrg18} && \int \mathrm{d}^4 x \sqrt{{\bar g}^{(3)}} \left\{ \bar{\mathcal{L}}_R^{(3)}\left( {\bar g}^{(3)}_{ij}, \varphi \right) - V(\varphi) \right\} \nonumber \\ && = \int \mathrm{d}^4 x \sqrt{{\bar g}^{(3)}} \left\{ {\rm e}^\varphi \left( {\bar R}^{(3)} - \frac{5}{2} {\bar g}^{(3) kl} {\bar \nabla}^{(3)}_k \varphi {\bar \nabla}^{(3)}_l \varphi \right) - \left(A\left(\varphi\right)F' \left(A\left(\varphi\right)\right) - F\left(A\left(\varphi\right)\right)\right) \right\} \, . \end{eqnarray} The usual Newton law can be generated through the exchange of the graviton. Furthermore, by the exchange of the scalar field $\varphi$, an extra force might be generated. We now consider the case \begin{equation} \label{KLFrg19} F(A) = A - \Lambda_\mathrm{eff} - \frac{c}{A^n} + \mathcal{O}\left(A^{-n-1}\right)\, . \end{equation} Here $\Lambda_\mathrm{eff}$ is an effective cosmological constant. Then \begin{equation} \label{KLFrg20} F'(A) = 1 + \frac{cn}{A^{n+1}} + \cdots \, . \end{equation} In the solar system or on the earth, the second term in (\ref{KLFrg20}) is much smaller than the first term, which is unity. Hence, we find \begin{equation} \label{KLFrg21} \varphi\equiv \frac{1}{3}\ln F'(A) \sim \frac{cn}{3A^{n+1}} \, , \end{equation} and therefore \begin{equation} \label{KLFrg22} V(\varphi) \sim \Lambda_\mathrm{eff} + \frac{c(n+1)}{A^n} \sim \Lambda_\mathrm{eff} + \frac{\left(n+1\right) c^{\frac{1}{n+1}}}{n^{\frac{n}{n+1}}} \varphi^{- \frac{n}{n+1}} \, . 
\end{equation} Then, since ${\rm e}^\varphi\sim 1$, the effective mass $m_\varphi$ of $\varphi$ is given by \begin{equation} \label{KLFRg23} m_\varphi^2 \equiv \frac{V''(\varphi)}{5} \sim \frac{n^{\frac{1}{n+1}}(2n+1) c^{\frac{1}{n+1}}}{n+1} \varphi^{- \frac{3n+2}{n+1}} \sim \frac{3^{\frac{3n+2}{n+1}} (2n+1)}{n^{\frac{3n+1}{n+1}}(n+1) c^{\frac{3n+1}{n+1}}} A^{3n+2}\, . \end{equation} In the ``realistic'' model, $c$ is chosen to be $c=\mu^{2(n+1)}$ with $\mu\sim 10^{-33}\,\mathrm{eV}$. On the other hand, we find $A=R \sim 10^{-61}\,\mathrm{eV}^2$ in the solar system and $A=R \sim 10^{-50}\,\mathrm{eV}^2$ on the earth. Thus, one finds $m_\varphi^2 \sim 10^{15n - 56}\,\mathrm{eV}^2$ in the solar system and $m_\varphi^2 \sim 10^{48n - 34}\,\mathrm{eV}^2$ on the earth. Hence, if $n$ is large enough, the mass of $\varphi$ becomes large and the Compton length small, so that the correction to the Newton law is not observed. \section{Discussion and conclusions} \label{sec:8} We have proposed a first-order modified Ho\v{r}ava-Lifshitz-like gravity action and studied its Hamiltonian structure. As a large explicit class of such models, we considered the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity that is more general than the one introduced in ref.~\cite{Chaichian:2010yi}, and which for the special choice of parameter $\mu=0$ coincides with the degenerate model introduced in ref.~\cite{Kluson:2009xx}. Its ultraviolet properties are discussed, and it is demonstrated that such $F(R)$ gravity may be renormalizable for the case $z=3$ in a similar way as the original proposal for Ho\v{r}ava-Lifshitz gravity. The Hamiltonian analysis of the proposed modified Ho\v{r}ava-Lifshitz $F(R)$ gravity shows that this theory is generally consistent under reasonable assumptions. The $F(R)$ gravity action has also been analyzed in the fixed-gauge form, where the presence of the extra scalar is particularly illustrative. 
The methods presented in the Hamiltonian analyses of sections \ref{sec:3} and \ref{sec:4} can be used to study any action of the general form (\ref{HLF26}). The spatially-flat FRW cosmology of the modified Ho\v{r}ava-Lifshitz $F(R)$ gravity is studied. It is shown that it coincides with that of the earlier model \cite{Chaichian:2010yi}, but only in the spatially-flat FRW case. For a specific choice of the parameters of the theory, its FRW equations of motion coincide with the ones of the traditional $F(R)$ gravity. The presence of multiple de Sitter solutions shows the possibility, in principle, of unifying the early-time inflation with the late-time acceleration in the Ho\v{r}ava-Lifshitz background, which proves that it can have rich cosmological applications. The power-law theories are investigated in detail. A number of analytical FRW solutions are found, including ones with behavior relevant for the early/late cosmic acceleration. The quintessence/phantom-like cosmologies derived in our work may exhibit all four possible types of finite-time future singularities, as in the case of standard dark energy. The conditions to cure such future singularities are discussed in analogy with the traditional $F(R)$ gravity. It is also interesting that the correction to the Newton law in the $F(R)$ gravity under discussion can be made unobservably small. Finally, a covariant proposal for $F(R)$ gravity in the Ho\v{r}ava-Lifshitz spirit has been made. Despite some successes in the formulation of modified Ho\v{r}ava-Lifshitz $F(R)$ gravity, which can be made renormalizable, and in its cosmological applications, a number of unsolved questions remain. What is the appropriate way to introduce matter in the theory? Is the theory itself fundamental (or at least, fully consistent) or does it descend from another more fundamental proposal? Can it comply with all the local tests in the Solar system as well as with cosmological bounds? 
What is the dynamical scenario for the restoration of the Lorentz invariance at late times? What are the cosmological and astrophysical consequences of the first-order modified Ho\v{r}ava-Lifshitz gravity when compared with those of the traditional modified gravity \cite{tradModGrav}? Moreover, the traditional questions about the properties of black holes in such a theory can be straightforwardly investigated. Nevertheless, even at the present stage some surprises can be expected from the theory. While the universe has likely undergone a period of inflation in its early moments, it is interesting to note that Ho\v{r}ava-Lifshitz gravity could produce cosmological perturbations that are almost scale-invariant even without inflation \cite{Mukohyama:2009gg}. Ho\v{r}ava-Lifshitz gravity has also been considered in the presence of scalar fields \cite{Chen:2009ka,Lee:2010iu}. In principle, it is possible to extend our Ho\v{r}ava-Lifshitz $F(R)$ gravity by including its coupling with scalar fields. We would also like to mention a recent paper \cite{Kluson:2010aw}, where a new class of Lorentz-invariance breaking non-relativistic string theories, inspired by the Ho\v{r}ava-Lifshitz gravity, has been presented and analyzed. Using the $F(R)$ version of gravity one can propose an even more general formulation of string theory in the Ho\v{r}ava-Lifshitz background: for instance, rigid strings, membranes and $p$-branes, etc. On the other hand, it may suggest unusual solutions for the known cosmological problems. There also exists an attempt to explain the homogeneity of our universe in a model with varying speed of light \cite{Albrecht:1998ir}. Having in mind that in the ultraviolet region the speed of the Ho\v{r}ava-Lifshitz graviton changes, one may speculate that the homogeneity of the universe may be described without the need for inflation. In any case, such a theory is both theoretically and cosmologically rich, and it deserves further study. 
\section*{Acknowledgments} This research has been supported in part by MEC (Spain) project FIS2006-02842 and AGAUR (Catalonia) 2009SGR-994 (SDO), by Global COE Program of Nagoya University (G07) provided by the Ministry of Education, Culture, Sports, Science \& Technology (SN). M. O. is supported by the Finnish Cultural Foundation. The support of the Academy of Finland under the Projects No. 121720 and 127626 is gratefully acknowledged.
\section{Introduction} \subsection{To the trailhead: classic and stake-governed random-turn games} Many combinatorial games, such as chess, Go and Hex, are zero-sum two-player games in which players alternate in making moves. Positions in these games have complex geometric aspects from which experienced players may surmise strong choices for the party who has the right to move next. {\em Random-turn} versions of certain combinatorial games were considered in~\cite{PSSW07}: in these games, a fair coin toss starts each turn, with one player or other winning the right to move according to the outcome. In some such games, including random-turn Hex, the optimal strategies for the two players were found explicitly, even though the game in its original form is extremely complex. The article~\cite{PSSW09} introduced a random-turn game, {\em tug-of-war}, in which a counter resides among the vertices of a graph, and players vie to move the counter along adjacent edges until it arrives in a certain boundary set of vertices, on which a payment function is defined; one given player then pays the other the evaluation of the payment function at the terminal vertex of the counter. The value of the game tug-of-war was determined in~\cite{PSSW09} as a function of the counter's initial location: it is the infinity harmonic extension of the boundary data given by the payment function. By considering a suitable Euclidean version of tug-of-war involving moves of small step size,~\cite{PSSW09} forged an attractive connection between game theory and the infinity Laplacian on domains in $\ensuremath{\mathbb{R}}^d$: the latter is a degenerate elliptic operator with a subtle uniqueness~\cite{Jensen93} and regularity~\cite{Savin05,EvansSavin} theory. Tug-of-war on graphs has also been considered in a biased case~\cite{PPS10,PeresSunic}, where a given player wins each turn according to the toss of a coin with given bias. 
In these random-turn games, the coin is either fair or of a given bias. In~\cite{HP2022}, a class of {\em stake-governed} random-turn games was introduced in which each of the two players, Mina and Maxine, has a limited capacity to determine the bias of the coin at each turn. Mina and Maxine are each allocated a given budget at the start of a game of several (and perhaps many) turns. Before each turn, each draws from her remaining budget to offer a stake. Her probability of winning the impending turn is the ratio of the stake she has just offered to the total stake offered at the turn. Stakes are not returned, during or after the game. In stake-governed tug-of-war (as in the original or `classic' version of this game), Mina pays Maxine the evaluation of the payment function at the counter's terminal location. In this way, the budgets allocated at the outset are an irredeemable resource whose sole role is to afford capacity to the player to win moves throughout the lifetime of the game. Maxine and Mina's initial budgets are given finite quantities whose values are part of the game design. The finiteness of these values is what makes the resource precious. Classic random-turn games, and the stake-governed versions in~\cite{HP2022}, are two-person zero-sum games. In this article, we introduce a further stake-governed random-turn game that has a natural definition. The new game has two players but is not zero-sum. The change in specification from stake-governed tug-of-war is simple. Rather than giving the two players limited budgets from which they each make stakes, we make no such offering; instead, we simply invite Mina and Maxine to spend their own money in making stakes at each turn of the game. Both players are supposed to be wealthy, so that there is no absolute constraint on either's spending. The new stake-governed games may be called `self-funded' in contrast to the original `allocated budget' version. 
The stakes are swept away as before, and they constitute a running cost to each player which must be considered against the potential benefit that higher expenditure will bring the counter to a more favourable terminal location in the boundary set. The players incur different running costs insofar as they place different stakes (as certainly they may). Given that the resulting game is not zero-sum, we also generalize the nature of the terminal receipts that Mina and Maxine receive. On the boundary are now specified two real-valued payment functions, $p_-$ and $p_+$: Mina receives the evaluation of $p_-$ at the terminal location of the counter, while Maxine receives this evaluation for $p_+$. The special case where $p_- = -p_+$ that is seen in classic, and allocated-budget stake-governed, tug-of-war is thus generalized to a form suitable for the study of non-zero-sum games. The existing tug-of-war games make little sense on certain infinite graphs. Consider classic tug-of-war on the integers~$\ensuremath{\mathbb{Z}}$ with nearest-neighbour edges, and a payment of one unit from Mina to Maxine if the counter tends to $\infty$, and of minus one unit if instead it tends to $-\infty$. The players have no choices for strategy and the counter evolves as simple random walk, so no terminal payment is made (or perhaps a default rule stipulates the payment). And likewise the game on $\ensuremath{\mathbb{Z}}$ makes no real sense for the allocated-budget stake-governed games in~\cite{HP2022}: roughly put, since the game will require infinitely many turns, any positive expenditure of the globally finite budget of a given player is unjustified at any given turn; but if both players consistently stake nothing, then (at least if a symmetric rule is adopted for this circumstance) the counter will again evolve as simple random walk. 
In contrast to these trivial outcomes on $\ensuremath{\mathbb{Z}}$, self-funded stake-governed tug-of-war---the new game---has a subtle theory on this set. Indeed, it is the aim of this article to investigate the new game when the underlying graph is either $\ensuremath{\mathbb{Z}}$ or a finite interval therein. We call the game on these graphs the Trail of Lost Pennies---either on $\ensuremath{\mathbb{Z}}$, or on a given finite integer interval. Our principal focus will be on the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$, a game that is necessarily of infinite duration. By treating infinite-turn non-zero-sum games, we explore a new aspect of the theory of random-turn games. We will specify strategies and classify Nash equilibria for the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$, finding them to have a very different structure to counterparts in the theory of classic, or allocated-budget stake-governed, random-turn games. Since our focus is the gameboard~$\ensuremath{\mathbb{Z}}$ (or a finite interval therein), we will carefully specify only the Trail of Lost Pennies, rather than a more general version of self-funded stake-governed tug-of-war. To this specification we turn next. This done, we will state our main results in several further sections of the introduction. \subsection{Game setup, strategies and Nash equilibria}\label{s.gamespec} The Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ will be denoted by ${\rm Trail}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. The data that specifies the game takes the form of \begin{equation}\label{e.quadruple} \textrm{a quadruple $\big(m_{-\infty},m_\infty,n_{-\infty},n_\infty \big) \in \ensuremath{\mathbb{R}}^4$ that satisfies $m_{-\infty} < m_\infty$ and $n_{-\infty} < n_\infty$} \, . \end{equation} For any given $k \in \ensuremath{\mathbb{Z}}$, we will specify the gameplay of ${\rm Trail}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ where the initial location of the counter is equal to $k$. 
The counter's location~$X$ will evolve from its initial location $X_0 = k$ in discrete time-steps, the result being a stochastic process $X:\N \to \ensuremath{\mathbb{Z}}$ whose law is determined by~$k$ (we take $\ensuremath{\mathbb{N}}$ to include zero). This random process is specified by the pair of strategies adopted by Mina (who plays to the left) and Maxine (who plays to the right). For either player, a strategy is a map $S:\ensuremath{\mathbb{Z}} \times \N_+ \to [0,\infty)$. A player who follows the strategy~$S$ stakes $S(X_{i-1},i)$ at the turn with index~$i \geq 1$. Let $\mathcal{S}$ denote the set of strategies. An element of $\mathcal{S}^2$ is called a strategy pair. A generic element of $\mathcal{S}^2$ will be written $(S_-,S_+)$, where the respective components are the strategies of Mina and Maxine. We wish then to specify the gameplay process $X:\N \to \ensuremath{\mathbb{Z}}$ as a function of a given element $(S_-,S_+) \in \mathcal{S}^2$ and the initial location $X_0 = k \in \ensuremath{\mathbb{Z}}$. We will write $\pgameplay{S_-}{S_+}{k}$ for a probability measure that specifies this gameplay process and accompanying aspects of the game; the associated expectation operator will be written $\egameplay{S_-}{S_+}{k}[\cdot]$. At the turn with index $i \in \N_+$, Mina stakes $S_-(X_{i-1},i)$ and Maxine stakes $S_+(X_{i-1},i)$. By sampling of independent randomness, the turn victor is declared to be Maxine with probability $\tfrac{S_+(X_{i-1},i)}{S_-(X_{i-1},i) + S_+(X_{i-1},i)}$; in the other event, it is declared to be Mina. Maxine will elect to move the counter one place to the right if she is the turn victor; Mina, one place to the left. (It is intuitive given the rules of the game that we are specifying that the two players will always elect to move the counter in the said directions, and we will not furnish the straightforward details to the effect that permitting other options changes nothing essential about the game.) 
Should neither player make a stake at the given turn---that is, if $S_-(X_{i-1},i) = S_+(X_{i-1},i) = 0$---then a further rule is needed to permit play to continue. We will declare that, in this event, each player wins the right to move with equal probability (with Maxine moving right, and Mina left, as usual). Formally, then, our counter evolution satisfies the condition that, for $(k,i,\ell) \in \ensuremath{\mathbb{Z}} \times \N_+ \times \ensuremath{\mathbb{Z}}$, $$ \pgameplay{S_-}{S_+}{k} \Big( X_i - X_{i-1} = \ell \, \Big\vert \, X_j, j \in \llbracket 0, i-1 \rrbracket \Big) \, = \, \tfrac{S_-(X_{i-1},i)}{S_-(X_{i-1},i) + S_+(X_{i-1},i)} {\bf 1}_{\ell = - 1} + \tfrac{S_+(X_{i-1},i)}{S_-(X_{i-1},i) + S_+(X_{i-1},i)} {\bf 1}_{\ell = 1} \, , $$ where we use the interval-interval notation $\llbracket i,j \rrbracket = \big\{ \ell \in \ensuremath{\mathbb{Z}}: i \leq \ell \leq j \big\}$, $i,j \in \ensuremath{\mathbb{Z}}$. Note that, in reading the ratios on the right-hand side in the display, we adopt the convention that $0/0 = 1/2$. We further wish to specify the other pertinent features of the game when the strategy pair $(S_-,S_+)$ is played. These features are the resulting payoffs to Mina and Maxine. Mina's payoff $P_-$ is the sum of a negative term given by the total costs incurred to Mina during gameplay, and a further term that is the terminal payment that is made to her. Indeed, we may write \begin{equation}\label{e.minapayoff} P_- \, = \, - \sum_{t = 1}^\infty C_-(t) \, \, + \, \, T_- \, , \end{equation} where $C_-(t)$ denotes the cost incurred to Mina at the turn with index $t \in \N_+$, and $T_-$ equals the terminal payment to Mina. We have then that the cost $C_-(t)$ is equal to Mina's stake $S_-(X_{t-1},t)$. The terminal payment $T_-$ is in essence given by $n_{-\infty}$ if Mina wins the game by eventually bringing the counter infinitely far to the left; and to $n_\infty$ in the opposing event. 
However, a precise formulation is needed to make sense of this. We define the {\em escape} event $E$ according to \begin{equation}\label{e.escape} E = \big\{ \lim_n \vert X_n \vert = \infty \big\} \, . \end{equation} The {\em left} and {\em right} escape events are given by \begin{equation}\label{e.leftrightescape} E_- = \big\{ \limsup_n X_n = - \infty \big\} \, \, \, \, \textrm{and} \, \, \, \, E_+ = \big\{ \liminf_n X_n = \infty \big\} \, . \end{equation} Note that $E_-$ and $E_+$ are disjoint events whose union equals $E$. We regard them as victory events for Mina and Maxine respectively, and accordingly set the terminal payment to Mina as follows: \begin{equation}\label{e.terminalmina} T_- \, = \, \begin{cases} \, \, n_{-\infty} & \text{when $E_-$ occurs} \, , \\ \, \, n_\infty & \text{when $E_+$ occurs} \, , \\ \, \, n_* & \text{when $E^c$ occurs} \, . \end{cases} \end{equation} Here $n_*$ is a given real value that is at most $n_\infty$. By assigning this terminal payment to Mina in the event of non-escape, we ensure that this payment is no more generous than that made in the event~$E_+$ of her defeat. We may specify Maxine's payoff \begin{equation}\label{e.maxinepayoff} P_+ \, = \, - \sum_{t = 1}^\infty C_+(t) \, \, + \, \, T_+ \end{equation} with counterpart interpretations for the right-hand terms: the cost~$C_+(t)$ incurred to Maxine at the turn with index $t \in \N_+$ equals Maxine's stake $S_+(X_{t-1},t)$, while the terminal payment $T_+$ that she receives is given by \begin{equation}\label{e.terminalmaxine} T_+ \, = \, \begin{cases} \, \, m_{-\infty} & \text{when $E_-$ occurs} \, , \\ \, \, m_\infty & \text{when $E_+$ occurs} \, , \\ \, \, m_* & \text{when $E^c$ occurs} \, , \end{cases} \end{equation} where $m_*$ is a given real value\footnote{Note that $(m_*,n_*) \in \ensuremath{\mathbb{R}}^2$ and $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$ are the parameters that specify the Trail of Lost Pennies on~$\ensuremath{\mathbb{Z}}$. 
We thus speak imprecisely when we refer to the latter quadruple as the game's data. Given the upper bounds that we impose on them, the values of $m_*$ and $n_*$ will be immaterial for our analysis.} that is at most $m_{-\infty}$. The quantities labelled $P$, $C$ and $T$ for Mina and Maxine are determined by the gameplay $X:\N \to \ensuremath{\mathbb{Z}}$. The gameplay and these other random variables are thus coupled together under the law $\ensuremath{\mathbb{P}}_{S_-,S_+}^k$. Of course, the starting location $X_0 = k$ and the strategy pair $(S_-,S_+)$ are fundamental for determining the game outcome, including the above described quantities. In our notation, this dependence is communicated by the labels of the law $\ensuremath{\mathbb{P}}_{S_-,S_+}^k$, rather than in the annotations $P_-$, $T_-$, and so on. A strategy is {\em time-invariant} if $S(i,j)$ is independent of $j \in \N_+$ for every $i \in \ensuremath{\mathbb{Z}}$. The set of time-invariant strategies will be denoted by~$\mc{S}_0$. A time-invariant strategy pair $(S_-,S_+) \in \mathcal{S}^2$ may be identified with a pair of sequences $\big\{ a_i: i \in \ensuremath{\mathbb{Z}} \big\}$ and $\big\{ b_i: i \in \ensuremath{\mathbb{Z}} \big\}$, where $a_i = S_+(i,j)$ and $b_i = S_-(i,j)$ for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$. A strategy pair $(S_-,S_+) \in \mathcal{S}^2$ is a Nash equilibrium if $$ \egameplay{S_-}{S_+}{k} [P_+] \geq \egameplay{S_-}{S}{k} [P_+] \, \, \, \, \textrm{and} \, \, \, \, \egameplay{S_-}{S_+}{k} [P_-] \geq \egameplay{S}{S_+}{k} [P_-] $$ for all $S \in \mathcal{S}$ and $k \in \ensuremath{\mathbb{Z}}$. Note that this condition takes a strong form, in that it stipulates the displayed bound for any initial condition $X_0 = k \in \ensuremath{\mathbb{Z}}$ for the counter location. Let $\mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \subset \mathcal{S}^2$ denote the set of Nash equilibria. 
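The gameplay law just specified can be illustrated by a minimal Monte Carlo sketch (all function names here are ours; games are truncated at a finite horizon, since actual gameplay is infinite). It implements the stake-ratio move rule with the convention $0/0 = 1/2$, and accumulates each player's running costs under a time-invariant strategy pair:

```python
import random

def step(x, stake_minus, stake_plus, rng):
    """One turn at site x: Maxine (stake_plus) wins the move with probability
    s_+ / (s_- + s_+), interpreting 0/0 as 1/2; she moves right, Mina left."""
    total = stake_minus + stake_plus
    p_right = 0.5 if total == 0 else stake_plus / total
    return x + 1 if rng.random() < p_right else x - 1

def play(b, a, k, horizon, rng):
    """Finite-horizon gameplay from X_0 = k for a time-invariant pair:
    b(x) is Mina's stake and a(x) is Maxine's stake at site x. Returns the
    trajectory and the total costs incurred by (Mina, Maxine)."""
    X, cost_minus, cost_plus = [k], 0.0, 0.0
    for _ in range(horizon):
        s_m, s_p = b(X[-1]), a(X[-1])
        cost_minus += s_m
        cost_plus += s_p
        X.append(step(X[-1], s_m, s_p, rng))
    return X, cost_minus, cost_plus

rng = random.Random(0)
# If Maxine stakes 3 and Mina 1 at every site, each move is rightward with
# probability 3/4, so the counter drifts right at rate 2*(3/4) - 1 = 1/2.
X, cm, cp = play(lambda x: 1.0, lambda x: 3.0, 0, 10_000, rng)
print(X[-1] / 10_000)  # empirical drift per turn, near 1/2
```

With both stake maps identically zero, the same routine produces simple random walk, in line with the $0/0 = 1/2$ convention.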
Consider a time-invariant Nash equilibrium, namely an element~$(S_-,S_+)$ of $\mc{S}_0^2$ that satisfies the above condition: when such a strategy pair is played, neither player would gain by altering strategy, even if the proposed alternative strategy is not time-invariant. In an abuse of notation, generic elements of $\mc{S}_0$, for respective use by Maxine and Mina, will be called $\big(a_i:i\in \ensuremath{\mathbb{Z}} \big)$ and $\big(b_i:i\in \ensuremath{\mathbb{Z}} \big)$. In a further abuse, the accompanying element of $\mc{S}_0^2$ will be denoted\footnote{In the strategy-pair notation $(S_-,S_+) \in \mathcal{S}^2$, governed by $- < +$, Mina precedes Maxine. Thus the notation $(b,a)$ for strategy pairs will be standard. We will shortly introduce an $(a,b,m,n)$-quadruple notation for stakes and mean payoffs, in which Maxine precedes Mina (in the sense of `$a$ before $b$' and `$m$ before $n$'). As a result, usages of the form `$(a,b,m,n)$ is the quadruple associated to the Nash equilibrium $(b,a)$' will be made.} $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$. \subsection{Time-invariant Nash equilibria and the \textrm{ABMN} equations} We now begin to present our main results. We first introduce the \textrm{ABMN} equations, which will be fundamental to this study. Theorems~\ref{t.positiveabmn} and~\ref{t.minamarginvalues} present basic properties of the equations' solution set, and Theorem~\ref{t.nashabmn} is the result that bridges between the equations and the trail game (as we will sometimes informally call the Trail of Lost Pennies). In Theorem~\ref{t.nashequil.prelim}, these theorems are leveraged to characterize when the trail game has time-invariant Nash equilibria in terms of a condition on boundary data involving an important basic quantity, the {\em Mina margin}, which is introduced here.
The section ends with Theorem~\ref{t.ajbj}, which offers precise asymptotic decay estimates for Nash equilibria as the index varies away from the {\em battlefield index}, at which the players are most likely to decide the ultimate outcome of a given game; and with its consequence Theorem~\ref{t.unanimity}, which describes gameplay at any Nash equilibrium. \begin{definition}\label{d.quadruple} Let $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$ denote a time-invariant strategy pair: namely, $$ \big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\} \in \mc{S}_0^2 \, . $$ Let $S_-,S_+ \in \mathcal{S}$ be strategies such that $S_-(i,j) = b_i$ and $S_+(i,j) = a_i$ whenever $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$. Set $m_i = \egameplay{S_-}{S_+}{i} [ P_+]$ and $n_i = \egameplay{S_-}{S_+}{i} [P_-]$ for $i \in \ensuremath{\mathbb{Z}}$. By this means, we have associated to any element $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\} \in \mc{S}_0^2$ a $\ensuremath{\mathbb{Z}}$-indexed quadruple $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ of elements taking values in $[0,\infty)^2 \times \big(\ensuremath{\mathbb{R}} \cup \{ -\infty\}\big)^2$. \end{definition} \begin{definition}\label{d.abmn} The ABMN system on $\ensuremath{\mathbb{Z}}$ is the set of equations in the four real variables $a_i$, $b_i$, $ m_i$ and $n_i$, indexed by~$i \in \ensuremath{\mathbb{Z}}$, \begin{align*} (a_i + b_i)(m_i+a_i) & = a_i m_{i+1} + b_i m_{i-1} && \qquad \textrm{ABMN}(1) \\ (a_i + b_i)(n_i+b_i) & = a_i n_{i+1} + b_i n_{i-1} &&\qquad \textrm{ABMN}(2) \\ (a_i + b_i)^2 & = b_i \big( m_{i+1} - m_{i-1} \big) &&\qquad \textrm{ABMN}(3) \\ (a_i + b_i)^2 & = a_i \big( n_{i-1} - n_{i+1} \big) &&\qquad \textrm{ABMN}(4) \, , \end{align*} where $i$ ranges over $\ensuremath{\mathbb{Z}}$. We will refer to the above equations throughout in the form \textrm{ABMN}$(i)$, for $i \in \{1,2,3,4\}$, rather than by a conventional numerical labelling. 
It is always supposed that $a_i$ and~$b_i$ are non-negative for $i \in \ensuremath{\mathbb{Z}}$. A solution is said to have boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ when \begin{equation}\label{e.boundarydata} \lim_{k \to \infty} m_{-k} = m_{-\infty} \, \, \, , \, \, \, \lim_{k \to \infty} m_k = m_\infty \, \, \, , \, \, \, \lim_{k \to \infty} n_{-k} = n_{-\infty} \,\,\,\, \textrm{and} \,\,\,\, \lim_{k \to \infty} n_k = n_\infty \, . \end{equation} For such a solution, the {\em Mina margin} is set equal to $\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}}$. A solution is called {\em positive} if $a_i > 0$ and $b_i > 0$ for all $i \in \ensuremath{\mathbb{Z}}$. It is called {\em strict} if $m_{i+1} > m_i$ and $n_i > n_{i+1}$ for such $i$. \end{definition} \begin{theorem}\label{t.positiveabmn} Let $\big\{ (a_i,b_i,m_i,n_i) \in (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2: i \in \ensuremath{\mathbb{Z}} \big\}$ be a positive \textrm{ABMN} solution. \begin{enumerate} \item The solution is strict. \item The solution has boundary conditions $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ that satisfy $m_\infty > m_{-\infty}$ and $n_{-\infty} > n_\infty$. \item The values $m_{-\infty}$, $m_\infty$, $n_{-\infty}$ and $n_\infty$ are real numbers. As such, the Mina margin $\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}}$ exists and is a positive and finite real number. \end{enumerate} \end{theorem} The Mina margin has a fundamental role to play in determining whether the \textrm{ABMN} system can be solved, as we now see. \begin{theorem}\label{t.minamarginvalues} Invoking Theorem~\ref{t.positiveabmn}(3), we may set $I \subset (0,\infty)$ equal to the set of values of the Mina margin $\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}}$, where $\big\{ (a_i,b_i,m_i,n_i) \in (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2: i \in \ensuremath{\mathbb{Z}} \big\}$ ranges over the set of positive \textrm{ABMN} solutions. 
\begin{enumerate} \item There exists a value $\lambda \in (0,1]$ such that the set $I$ is equal to the interval $[\lambda,\lambda^{-1}]$. \item Moreover, a positive \textrm{ABMN} solution exists with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$ if and only if $m_{-\infty} < m_\infty$, $n_\infty < n_{-\infty}$ and $\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} \in [\lambda,\lambda^{-1}]$. \item The value of $\lambda$ is at most $0.999904$. \end{enumerate} \end{theorem} Theorem~\ref{t.minamarginvalues}(3) eliminates the possibility that $\lambda =1$, which would mean that Nash equilibria exist only when the players have symmetric roles. Were the bound on $\lambda$ proved in this result close to sharp, this quantity, which is canonically associated to the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$---a natural and simple enough game---would then be remarkably close to one, differing from it by less than $10^{-4}$. This is exactly what we expect. \begin{conjecture}\label{c.lambda} The value of $\lambda$ is at least $0.999902$. \end{conjecture} Evidence for this conjecture will be presented in~Section~\ref{s.minamarginmap}. \begin{theorem}\label{t.nashabmn} Let $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$ satisfy $m_{-\infty}<m_\infty$ and $n_\infty < n_{-\infty}$. \begin{enumerate} \item Suppose that an element $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$ of $\mc{S}_0^2$ lies in~$\mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. Then $$ \textrm{the quadruple $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ associated to the element by Definition~\ref{d.quadruple}} $$ is a positive \textrm{ABMN} solution with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$.
\item Conversely, suppose that $\big\{ (a_i,b_i,m_i,n_i) \in (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2 : i \in \ensuremath{\mathbb{Z}} \big\}$ is a positive \textrm{ABMN} solution with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. Then $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\} \in \mc{S}_0^2$ lies in $\mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. \end{enumerate} \end{theorem} \begin{definition}\label{d.standard} In its {\em standard} form, the trail game has boundary data that satisfies $m_{-\infty} = 0$, $n_\infty = 0$ and $m_\infty = 1$. For a game in this form, the game's data is thus specified by one parameter, $n_{-\infty} \in (0,\infty)$. This parameter equals the Mina margin~$\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}}$. \end{definition} Let $x \in (0,\infty)$. By ${\rm Standard}(x)$, we denote the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ in its standard form, with the Mina margin equal to $x$. That is, ${\rm Standard}(x)$ equals ${\rm Trail}(0,1,x,0)$, as this game has been specified in~Section~\ref{s.gamespec}. Suppose that $x$ exceeds one. In playing ${\rm Standard}(x)$, Maxine has more to play for than does Mina. Maxine may be tempted to outstake Mina, perhaps staking a certain constant multiple $f(x) >1$ of the stake that Mina offers at any given turn. The resulting gameplay is a walk with a constant bias to the right, making Mina's defeat inevitable---she may as well (or better) have staked nothing. If instead it is $x^{-1}$ that exceeds one, then it is of course Mina who may be tempted by such an approach. Perhaps an argument can be fashioned along these lines to the effect that the game is competitive precisely when $x$ lies in an interval of the form $[\mu,\mu^{-1}]$ for some $\mu \in (0,1]$. This heuristic hardly lacks shortcomings, and it is quite unclear what the value of $\mu$ should be. 
However, the next result, which anyway follows from Theorems~\ref{t.minamarginvalues} and~\ref{t.nashabmn}, validates its conclusion in a certain sense, with the value of $\mu$ equal to $\lambda$. \begin{theorem}\label{t.nashequil.prelim} Recall the quantity $\lambda \in (0,1)$, which is specified and described by Theorem~\ref{t.minamarginvalues}. For $x \in (0,\infty)$, the game ${\rm Standard}(x)$ has a time-invariant Nash equilibrium precisely when $x$ lies in $[\lambda,\lambda^{-1}]$. \end{theorem} The shift operator on $\ensuremath{\mathbb{Z}}$ has a basic role to play as we analyse the Trail of Lost Pennies on this set. \begin{definition}\label{d.shiftone} Consider two time-invariant strategy pairs $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$ and $\big\{ (b'_i,a'_i): i \in \ensuremath{\mathbb{Z}} \big\}$. These pairs are called {\em shift equivalent} if there exists $k \in \ensuremath{\mathbb{Z}}$ for which $(b_i,a_i) = (b'_{i+k},a'_{i+k})$ for all $i \in \ensuremath{\mathbb{Z}}$. It is straightforward to see that an element $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$ of $\mathcal{S}_0^2$ lies in $\mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ if and only if every shift equivalent element does so. \end{definition} Let $Q:(0,\infty) \to \ensuremath{\mathbb{N}}$ be such that $Q(x)$ is the maximum cardinality of a set of mutually shift inequivalent time-invariant Nash equilibria for the game ${\rm Standard}(x)$ for $x \in (0,\infty)$. The preceding result implies that the set of $x \in (0,\infty)$ for which $Q(x) > 0$ is equal to the interval $[\lambda,\lambda^{-1}]$---which interval is non-degenerate in view of Theorem~\ref{t.minamarginvalues}(3). In the next result, we assert that a pair of shift inequivalent solutions exist when the Mina margin lies in the interval's interior. \begin{theorem}\label{t.solutions} For $x \in (\lambda,\lambda^{-1})$, $Q(x) \geq 2$. 
\end{theorem} We conjecture that no further time-invariant Nash equilibria exist. \begin{conjecture}\label{c.solutions} We have that $Q(x) = 2$ when $x \in (\lambda,\lambda^{-1})$ and $Q(x) =1$ when $x \in \{ \lambda,\lambda^{-1} \}$. \end{conjecture} This conjecture will be discussed in Section~\ref{s.conjectureroute}. The next result describes precise asymptotic estimates on four sequences associated to any positive \textrm{ABMN} solution. In light of Theorem~\ref{t.nashabmn}, it also describes decay rates for the pair of sequences given by any time-invariant Nash equilibrium. \begin{definition}\label{d.deltai} Let $(a,b,m,n)$ be an \textrm{ABMN} solution. For $i \in \ensuremath{\mathbb{Z}}$, set $\phi_i = \frac{n_{i-1} - n_i}{m_i - m_{i-1}}$. \end{definition} \begin{definition}\label{d.battlefield} For an \textrm{ABMN} solution $(a,b,m,n)$, the {\em battlefield index} is the unique value $k \in \ensuremath{\mathbb{Z}}$ such that $\phi_k \in (1/3,3]$. \end{definition} In Lemma~\ref{l.battlefield}, we will prove the existence and uniqueness claims implicit in the last definition, thus showing that the battlefield index is well-defined. \begin{theorem}\label{t.ajbj} Let $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ be a positive \textrm{ABMN} solution, and let $k \in \ensuremath{\mathbb{Z}}$ denote its battlefield index.
\begin{enumerate} \item There exist positive constants $A$ and $F$ such that, for $j \geq k$, \begin{eqnarray*} a_j & = & (m_k - m_{k-1})\cdot 2F \cdot 2^{2(j-k)} \exp \big\{ - 2 \cdot 2^{j-k}A \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, ; \\ b_j & = & (m_k - m_{k-1})\cdot 4F \cdot 2^{2(j-k)} \exp \big\{ - 3 \cdot 2^{j-k}A \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, ; \\ m_j - m_{j-1} & = & (m_k - m_{k-1})\cdot F \cdot 2^{2(j-k)} \exp \big\{ - 2^{j-k}A \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, ; \, \, \, \textrm{and} \\ n_{j-1} - n_j & = & (m_k - m_{k-1})\cdot 2F \cdot 2^{2(j-k)} \exp \big\{ - 2^{j-k+1}A \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, . \end{eqnarray*} The constants $A$ and $F$ may be chosen to lie in a compact interval of $(0,\infty)$ that does not depend on the choice of the solution $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$. The positive constant that is implicit in the $O$-notation in the four displayed expressions may be chosen independently of this solution. \item There exist positive constants $B$ and $G$ such that, for $j \leq k-1$, \begin{eqnarray*} a_j & = & (n_{k-1} - n_k)\cdot 4G \cdot 2^{2(k-j)} \exp \big\{ - 3 \cdot 2^{k-j}B \big\} \big( 1 + e^{-O(1) 2^{k-j}}\big) \, ; \\ b_j & = & (n_{k-1} - n_k)\cdot 2G \cdot 2^{2(k-j)} \exp \big\{ - 2 \cdot 2^{k-j}B \big\} \big( 1 + e^{-O(1) 2^{k-j}}\big) \, ; \\ m_j - m_{j-1} & = & (n_{k-1} - n_k)\cdot 2G \cdot 2^{2(k-j)} \exp \big\{ - 2^{k-j+1}B \big\} \big( 1 + e^{-O(1) 2^{k-j}}\big) \, ; \, \, \, \textrm{and} \\ n_{j-1} - n_j & = & (n_{k-1} - n_k)\cdot G \cdot 2^{2(k-j)} \exp \big\{ - 2^{k-j}B \big\} \big( 1 + e^{-O(1) 2^{k-j}}\big) \, . \end{eqnarray*} The conditions on $B$ and $G$ satisfy those set out for $A$ and $F$ in the preceding part; the constant implicit in the $O$-notation satisfies the condition recorded in this part. \end{enumerate} \end{theorem} When $X = k$---when the counter is at the battlefield index---both players spend big to try to win the next move.
For example, when $m_\infty - m_{-\infty} = n_{-\infty} - n_\infty =1$, so that the difference in terminal receipt between victory and defeat is one unit for each player, then values of Maxine's stake $a_k$ lie in the interval $[0.12,0.20]$ and values of Mina's stake $b_k$ lie in $[0.025,0.18]$. (We will shortly present explicit solutions to the \textrm{ABMN} equations, which validate this assertion: we may use Theorem~\ref{t.altstand}(1), for example. Maxine's expense interval is displaced to the right from Mina's, but the situation is reversed if the counter reaches $k-1$, one place to the left.) These are big expenditures in a single turn of a game with infinitely many turns. The expenditures drop rapidly as the counter moves away from the battlefield, however. Indeed, if we write $g_i \ll h_i$ to denote that $g_i \leq \exp \big\{ - e^{ci} \big\} h_i$ for $i \in \N_+$ (where $c$ is some given positive constant), then Theorem~\ref{t.ajbj} implies that, $$ \textrm{for} \, \, \, i \in \N_+ \, \, \, , \, \, \, 0 < b_{k+i} \ll a_{k+i} \ll 1 \, \, \, \textrm{and} \, \, \, 0 < a_{k-i} \ll b_{k - i} \ll 1 \, \, \, : $$ to the right of the battlefield, both expenditures drop suddenly; but Maxine, eyeing victory, makes sure to vastly outspend Mina; while to the left of the battlefield, the roles are reversed. We also have that $0 < n_{k+i} - n_{k+i+1} \ll m_{k+i+1} - m_{k+i} \ll 1$ and $0 < m_{k-i} - m_{k-i-1} \ll n_{k-i-1} - n_{k-i} \ll 1$; and, by extension, $$ 0 < n_{k+i} - n_\infty \ll m_\infty - m_{k+i} \ll 1 \, \, \, \textrm{and} \, \, \, 0 < m_{k-i} - m_{-\infty} \ll n_{-\infty} - n_{k-i} \ll 1 \, . $$ Indeed, in the left part of the last display, which is to the right of the battlefield, Mina has essentially (but not absolutely!) thrown in the towel, and her expected payoff $n_{k+i}$ is minutely above her defeat terminal receipt of $n_\infty$.
Maxine's average payoff $m_{k+i}$ is just slightly below her victory receipt of $m_\infty$, but her need to keep moving the counter rightwards provides some lower bound on the difference. In the right part of the display, roles are naturally reversed. The players may dread the return of the counter to the battlefield index because this is an expensive occasion for both of them. The next result, a consequence of Theorem~\ref{t.ajbj}, shows that they are typically saved from witnessing this event repeatedly when a Nash equilibrium is played. Let $(S_-,S_+) \in \mathcal{S}^2$ and $i \in \ensuremath{\mathbb{Z}}$. Under $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$, the {\em unanimity} event $U$ occurs when all but finitely many of the differences $X_{j+1} - X_j \in \{-1,1\}, j \in \N$, of the gameplay process $X:\ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{Z}}$, $X_0 = i$, adopt a given value. Writing $U_-$ and $U_+$ for the respective events specified when the given value is $-1$ and $1$, the occurrences of these events correspond to victories for Mina and Maxine, and $U$ is the disjoint union of $U_-$ and $U_+$. \begin{theorem}\label{t.unanimity} Let $(a,b,m,n)$ denote a positive \textrm{ABMN} solution on $\ensuremath{\mathbb{Z}}$ with given boundary data of the form~(\ref{e.quadruple}). Suppose that the solution has battlefield index $k \in \ensuremath{\mathbb{Z}}$, and let $i \in \ensuremath{\mathbb{Z}}$. Let $(S_-,S_+) \in \mc{S}_0^2$ be given by $(b,a)$, with the usual abuse of notation. \begin{enumerate} \item We have that $\ensuremath{\mathbb{P}}_{S_-,S_+}^i(U) = 1$. \end{enumerate} There exist positive constants $C$ and $c$ that may be chosen independently of the element $(S_-,S_+)$ and the index $i \in \ensuremath{\mathbb{Z}}$ for which the following hold. \begin{enumerate} \setcounter{enumi}{1} \item If $i \geq k$ then $\ensuremath{\mathbb{P}}_{S_-,S_+}^i(U_-) \leq C \exp \big\{- c 2^{i-k}\big\}$.
\item If $i \leq k - 1$ then $\ensuremath{\mathbb{P}}_{S_-,S_+}^i(U_+) \leq C \exp \big\{- c 2^{k-i}\big\}$. \end{enumerate} Consider the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ with given boundary data~(\ref{e.quadruple}). Redefine $(S_-,S_+)$ to be an element of $\mathcal{S}_0^2 \cap \mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. Writing $(b,a)$ for $(S_-,S_+)$, the data $(a,b,m,n)$ specified by Definition~\ref{d.quadruple} determines the battlefield index. Suppose that this index is $k \in \ensuremath{\mathbb{Z}}$, and let $i \in \ensuremath{\mathbb{Z}}$. \begin{enumerate} \setcounter{enumi}{3} \item The preceding three parts remain valid in this framework. \end{enumerate} \end{theorem} We see then that the player who wins a local victory at (or around) the battlefield index typically comes to entirely dominate the later moves of the game. By playing at a time-invariant Nash equilibrium, players thereby forge an implicit consensus to avoid the mutually destructive circumstance of many returns to the battlefield. In this paper, we will not attempt to recapitulate the conclusions of~\cite{HP2022} regarding Nash equilibria in allocated-budget stake-governed games. But it is safe to say that these results indicate that, for suitable graphs and at Nash equilibrium, each budget must be spent via staking in a more-or-less regular flow, so that the concerned player is competitive throughout the lifetime of the game. Connections to PDE in classic random-turn games arise for a similar reason: game value has a certain regularity as a function of initial counter location. The above results show how different is the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ (and perhaps suggest this difference more broadly for self-funded stake-governed games). The empirical stake process of a player at any Nash equilibrium is punctuated by a few brief intense periods as the counter passes through the battlefield.
In the large, the only concern for outcome is the answer to the question: is the battlefield to the left or the right of where the counter lies? \subsection{Explicit \textrm{ABMN} solutions}\label{s.solvingabmn} Here we present an explicit form for all positive \textrm{ABMN} solutions. It is useful to begin by classifying the solutions into classes, where members of a given class differ in simple ways. If one or other player receives, or must pay, some given amount before a game begins, play will be unaffected---or at least the Nash equilibria will not be. If the unit currency is revalued before play, the outcome will be a mere scaling of all quantities. We identify \textrm{ABMN} solutions that differ according to translations $\chi_{x,0}$ or $\chi_{0,y}$ or dilations $\tau_u$ (where $x,y \in \ensuremath{\mathbb{R}}$ and $u \in (0,\infty)$) that correspond to such operations. If we can describe one element in each equivalence class, we will be able to describe all solutions. Equivalence classes are naturally parametrized by the positive real quantity $\phi_0 = \tfrac{n_{-1}-n_0}{m_0 - m_{-1}}$, which we call the {\em central ratio}, specified by Definition~\ref{d.deltai}. So there is a one-parameter family of essentially different positive \textrm{ABMN} solutions. In each equivalence class, we will distinguish two special solutions---the default solution, which has a simpler explicit formula; and the standard solution, which corresponds to a convenient choice of boundary data for the trail game. We will set up this structure and then state the explicit form of the default solution in each equivalence class. We consider $\ensuremath{\mathbb{Z}}$-indexed sequences $g = \{ g_i: i \in \ensuremath{\mathbb{Z}} \}$. A sequence is {\em monotone} if it is non-decreasing or non-increasing. 
A bounded monotone sequence $g$ has left and right limits $$ g_{-\infty} = \lim_{k \to \infty} g_{-k} \, \, \, \, \textrm{and} \, \, \, \, g_\infty = \lim_{k \to \infty} g_k $$ that are elements of~$\ensuremath{\mathbb{R}}$. We will specify certain bounded monotone sequences~$g$ by giving one of the limiting values, $g_{-\infty}$ or $g_\infty$, alongside the difference sequence $\big\{ g_{i+1} - g_i: i \in \ensuremath{\mathbb{Z}} \big\}$. Let $u \in (0,\infty)$ and $v \in \ensuremath{\mathbb{R}}$. For any sequence $g:\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}$, we write $u \cdot g:\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}$ for the sequence given by $(u \cdot g)_i = u \cdot g_i$. Let $\Theta$ denote the space of quadruples of sequences; thus, when $(a,b,m,n) \in \Theta$, each component $* \in \{ a,b,m,n \}$ has the form $*:\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}$. For $u \in (0,\infty)$ and $v_1,v_2 \in \ensuremath{\mathbb{R}}$, define $\tau_u,\chi_{v_1,v_2}:\Theta \to \Theta$ so that $\tau_u(a,b,m,n) = \big( u \cdot a, u \cdot b, u \cdot m, u \cdot n \big)$ and $\chi_{v_1,v_2}(a,b,m,n) = (a,b,m+v_1,n+v_2)$. Two solutions of the \textrm{ABMN} equations on $\ensuremath{\mathbb{Z}}$ are called {\em equivalent} if one is the image of the other under a composition of the form $\tau_u \circ \chi_{v_1,v_2}$ for such $u$, $v_1$ and $v_2$ as above. The relation of two such solutions will be denoted by~$\sim$; Proposition~\ref{p.abmnclassify} asserts that~$\sim$ is indeed an equivalence relation. Let $(a,b,m,n)$ be an \textrm{ABMN} solution on $\ensuremath{\mathbb{Z}}$. Recall from Definition~\ref{d.abmn} that the solution's {\em Mina margin} is defined to be $\tfrac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}}$. The solution's {\em central ratio} ${\rm CenRatio}$ is set equal to $\frac{n_{-1} - n_0}{m_0 - m_{-1}}$. The solution is called {\em standard} if $m_{-\infty} = 0$, $n_\infty = 0$ and $m_\infty = 1$. 
It is called {\em default} if $m_{-\infty} = 0$, $n_\infty = 0$ and $m_0 - m_{-1} = 1$. Compatibly with the usage of Definition~\ref{d.standard}, the Mina margin of a standard solution equals $n_{-\infty}$; note further that the central ratio of a default solution equals $n_{-1} - n_0$. \begin{proposition}\label{p.default} For any default \textrm{ABMN} solution, the value of the central ratio ${\rm CenRatio}$ lies in $(0,\infty)$. For any $x \in (0,\infty)$, there is exactly one default solution for which ${\rm CenRatio}$ equals $x$. \end{proposition} \begin{proposition}\label{p.abmnclassify} \leavevmode \begin{enumerate} \item The space of \textrm{ABMN} solutions is partitioned into equivalence classes by the relation~$\sim$. \item Each equivalence class contains a unique standard solution and a unique default solution. \end{enumerate} \end{proposition} Propositions~\ref{p.default} and~\ref{p.abmnclassify} provide a natural labelling of \textrm{ABMN} solution equivalence classes: any given class is labelled by the value of the central ratio of the unique default solution in the class. The labelling parametrizes the equivalence classes by a copy of $(0,\infty)$. According to the latter assertion of Proposition~\ref{p.default}, there is a unique default solution to the \textrm{ABMN} equations whose central ratio equals a given value $x \in (0,\infty)$. The next definitions will enable us to record the form of this solution in Theorem~\ref{t.defaultexplicit}. \begin{definition}\label{d.acs} Set $\omega:(0,\infty) \to (1,\infty)$, $\omega(x) = \sqrt{8x+1}$. Writing $\omega = \omega(x)$, we further set $$ c(x) = \frac{(\omega + 3)^2}{16} \, \, , \, \, d(x) = \frac{(\omega + 3)^2}{8(\omega + 1)} \, \, \, \, \textrm{and} \,\, \, \, s(x) = \frac{(\omega - 1)^2}{4(\omega + 7)} \,\,\,\, \textrm{for} \, \, \, \, x \in (0,\infty) \, . $$ \end{definition} \begin{definition}\label{d.stabc} Let $s_{-1}:(0,\infty) \to (0,\infty)$ be given by $s_{-1}(x) = 1/s(1/x)$. 
We now define a collection of functions $s_i:(0,\infty) \to (0,\infty)$ indexed by $i \in \ensuremath{\mathbb{Z}}$. We begin by setting $s_0(x) = x$ for $x \in (0,\infty)$. We then iteratively specify that $s_i(x) = s \big( s_{i-1}(x) \big)$ and $s_{-i}(x) = s_{-1} \big( s_{-(i-1)}(x) \big)$ for $i \in \N_+$ and $x \in (0,\infty)$. Note that $s_1$ equals $s$ and that the two specifications of $s_{-1}$ coincide. Set $c_j,d_j:(0,\infty) \to (0,\infty)$, $j \in \ensuremath{\mathbb{Z}}$, by means of $c_j(x) = c (s_j(x))$ and $d_j(x) = d (s_j(x))$. \end{definition} To get a sense of the maps $s_i$, $i \in \ensuremath{\mathbb{Z}}$, a few points are worth noting. First, as we will see in Proposition~\ref{p.sminusone}, $s_{-1}$ is the inverse of $s$. Second, as Lemma~\ref{l.acsfacts}(5) attests, $s(x) < x$ for $x \in (0,\infty)$; the orbit $s_i(x)$ thus decreases or increases from $x$ as $i$ grows to the right or the left. And third, note that $s(3) = 1/3$. In view of the second and third points, we see that $(0,\infty) = \cup_{k \in \ensuremath{\mathbb{Z}}} \, s_k (1/3,3]$ is a partition whose interval elements are arranged in decreasing order in the index~$k$. \begin{definition}\label{d.zdefault} For a sequence $h$, we may naturally write $\prod_{i=0}^k h_i = h_0 \cdots h_k$ for $k \in \ensuremath{\mathbb{N}}$. A convenient device extends this notation to cases where $k \in \ensuremath{\mathbb{Z}}$ is negative: we set $$\prod_{i=0}^k h_i \, = \, \begin{cases} \, 1 & \text{for $k=-1$} \\ \, h_{k+1}^{-1} \cdots h_{-1}^{-1} & \text{for $k \leq -2$} \, . \end{cases} $$ Let $x \in (0,\infty)$. This parameter will index four real-valued sequences $$ a^{\rm def}(x),b^{\rm def}(x),m^{\rm def}(x),n^{\rm def}(x):\ensuremath{\mathbb{Z}} \to (0,\infty) $$ which we denote in the form $\big\{ *^{\rm def}_i(x): i \in \ensuremath{\mathbb{Z}} \big\}$ for $* \in \{a,b,m,n\}$. We begin by specifying $m^{\rm def}(x):\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}$.
This is the increasing sequence such that $$ m^{\rm def}_{-\infty}(x) = 0 \, , \, \, \, \, \textrm{and} \, \, \, \, m^{\rm def}_{k+1}(x)- m^{\rm def}_k(x) \, = \, \prod_{i=0}^k \big( c_i(x) - 1 \big) \, \, \, \textrm{for $k \in \ensuremath{\mathbb{Z}}$} \, . $$ Note that $m^{\rm def}_0(x) - m^{\rm def}_{-1}(x) = 1$ in view of the notation for products. Next we set $n^{\rm def}(x):\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}$. This is the decreasing sequence with $$ n^{\rm def}_\infty(x) = 0 \, , \, \, \, \, \textrm{and} \, \, \, \, n^{\rm def}_k(x)- n^{\rm def}_{k+1}(x) \, = \, x \prod_{i=0}^k \big( d_i(x) - 1 \big) \, \, \, \textrm{for $k \in \ensuremath{\mathbb{Z}}$} \, . $$ Note that $n^{\rm def}_{-1}(x) - n^{\rm def}_0(x) = x$. To specify $a^{\rm def}(x),b^{\rm def}(x):\ensuremath{\mathbb{Z}} \to (0,\infty)$, we set $$ M_i(x) = m^{\rm def}_{i+1}(x) - m^{\rm def}_{i-1}(x) \, \, \, \, \textrm{and} \, \, \, \, N_i(x) = n^{\rm def}_{i-1}(x) - n^{\rm def}_{i+1}(x) $$ for $i \in \ensuremath{\mathbb{Z}}$. For such~$i$, we take $$ a^{\rm def}_i(x) = \frac{M_i(x)^2 N_i(x)}{\big(M_i(x)+N_i(x)\big)^2} \, \, \, \, \textrm{and} \, \, \, \, b^{\rm def}_i(x) = \frac{M_i(x) N_i(x)^2}{\big(M_i(x)+N_i(x)\big)^2} \, . $$ \end{definition} \begin{theorem}\label{t.defaultexplicit} Let $x \in (0,\infty)$. The unique default \textrm{ABMN} solution with ${\rm CenRatio} = x$ is the quadruple $\big( a^{\rm def}_i(x),b^{\rm def}_i(x),m^{\rm def}_i(x),n^{\rm def}_i(x): i \in \ensuremath{\mathbb{Z}} \big)$ specified in Definition~\ref{d.zdefault}. \end{theorem} For $x \in (0,\infty)$, we write $\mathcal{C}(x)$ for the equivalence class of \textrm{ABMN} solutions that contains the element $\big( a^{\rm def}_i(x),b^{\rm def}_i(x),m^{\rm def}_i(x),n^{\rm def}_i(x): i \in \ensuremath{\mathbb{Z}} \big)$. Let $\big(a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x) : i \in \ensuremath{\mathbb{Z}} \big)$ denote the unique standard solution in $\mathcal{C}(x)$. {\em Remark.} Let $x \in (0,\infty)$.
Set $Z(x) = m^{\rm def}_\infty(x)$, which is to say, $Z(x) = \sum_{k \in \ensuremath{\mathbb{Z}}} \prod_{i=0}^k \big( c_i(x) - 1 \big)$. It is straightforward that \begin{equation}\label{e.remark} \Big( a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x) : i \in \ensuremath{\mathbb{Z}} \Big) \, \, \, \, \textrm{equals} \, \, \, \, Z(x)^{-1} \cdot \Big( a^{\rm def}_i(x),b^{\rm def}_i(x),m^{\rm def}_i(x),n^{\rm def}_i(x) : i \in \ensuremath{\mathbb{Z}} \Big) \, . \end{equation} \subsection{The Mina margin map}\label{s.mmm} According to Theorem~\ref{t.defaultexplicit}, the central ratio $\phi_0$ is a convenient parameter for indexing \textrm{ABMN} solution equivalence classes. And Theorem~\ref{t.nashequil.prelim} tells us that the Mina margin is a fundamental parameter for locating Nash equilibria in the trail game. The map $(0,\infty) \to (0,\infty)$ from equivalence class index to the Mina margin of any member solution is a natural object that we will use to organize and prove results. We call this function the {\em Mina margin map}. Here, we define it, and state basic properties in Theorem~\ref{t.relativereward}. Theorem~\ref{t.nashequil} shows how to solve the trail game with given boundary data by finding time-invariant Nash equilibria indexed by the map's level sets. Theorem~\ref{t.phithetainverse} states that a reparametrization of the Mina margin map's domain leads to a periodic form for the map that commutes with the shift operator on $\ensuremath{\mathbb{Z}}$. \begin{definition}\label{d.r} Let the Mina margin map $\mathcal{M}:(0,\infty) \to (0,\infty)$ be given by $\mathcal{M}(x) =n^{\rm st}_{-\infty}(x)$ for $x \in (0,\infty)$. Namely, $\mathcal{M}(x)$ is the Mina margin of $\big(a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x): i \in \ensuremath{\mathbb{Z}} \big)$.
\end{definition} \begin{theorem}\label{t.relativereward} \leavevmode \begin{enumerate} \item The function $\mathcal{M}:(0,\infty) \to (0,\infty)$ satisfies $\mathcal{M}(s(x)) = \mathcal{M}(x)$ for $x \in (0,\infty)$. \item The function $\mathcal{M}$ is continuous on $(0,\infty)$ and satisfies $$ \mathcal{M}(x) \, \, = \, \, \Bigg( \sum_{k \in \ensuremath{\mathbb{Z}}} \, \, \prod_{i=0}^k \big( c_i(x) - 1 \big) \Bigg)^{-1} \, \cdot \, x \, \sum_{k \in \ensuremath{\mathbb{Z}}} \, \, \prod_{i=0}^k \big( d_i(x) - 1 \big) \, . $$ \item The range $\mathcal{M}(0,\infty)$ takes the form $[\lambda,\lambda^{-1}]$, where $\lambda \in (0,0.999904]$ is specified in Theorem~\ref{t.minamarginvalues}. \end{enumerate} \end{theorem} \begin{theorem}\label{t.nashequil} Let $x \in [\lambda,\lambda^{-1}]$. Set $X = \big\{ z \in (0,\infty): \mathcal{M}(z) = x \big\}$, and let $Y = X \cap (1/3,3]$, so that, as noted after Definition~\ref{d.stabc}, $X = \cup_{k \in \ensuremath{\mathbb{Z}}} s_k(Y)$. \begin{enumerate} \item The collection of time-invariant Nash equilibria in the game ${\rm Standard}(x)$ is given by the set of maps $$ \ensuremath{\mathbb{Z}} \to (0,\infty)^2: i \to \big(b^{\rm st}_i(z),a^{\rm st}_i(z) \big) $$ indexed by $z$ in $X$. \item Alternatively, this collection is the set of maps $$ \ensuremath{\mathbb{Z}} \to (0,\infty)^2: i \to \big(b^{\rm st}_{i+j}(z),a^{\rm st}_{i+j}(z) \big) \, , $$ where now the index $(z,j)$ ranges over $Y \times \ensuremath{\mathbb{Z}}$. \end{enumerate} \end{theorem} We now develop the notation for the symbolic shift map that was mooted in Definition~\ref{d.shiftone}.
\begin{definition}\label{d.shiftmap} We let $\mathcal{S}_1$ denote the left shift by one place: this is the map that sends the space of quadruples $(\ensuremath{\mathbb{R}}^4)^\ensuremath{\mathbb{Z}}$ to itself by the action $$ \mathcal{S}_1 \big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\} = \big\{ (a_{i+1},b_{i+1},m_{i+1},n_{i+1}): i \in \ensuremath{\mathbb{Z}} \big\} \, . $$ By iterating this map, we specify the left shift $\mathcal{S}_k$ by $k$ places, for $k \geq 2$; and by specifying $\mathcal{S}_{-1} = \mathcal{S}_1^{-1}$ and iterating, we specify the right shift $\mathcal{S}_{-k}$ by $k$ places, for $k \geq 1$. \end{definition} What is the effect of applying the shift $\mathcal{S}_k$ to a standard solution? It takes the form of a replacement $x \to s_k(x)$ in the ${\rm CenRatio}$-variable, as we will see in the short proof of the next result. \begin{proposition}\label{p.shift} For $x \in (0,\infty)$ and $k \in \ensuremath{\mathbb{Z}}$, $$ \mathcal{S}_k \big( a^{\rm st}(x), b^{\rm st}(x), m^{\rm st}(x), n^{\rm st}(x) \big) \, = \, \Big( a^{\rm st}\big(s_k(x)\big), b^{\rm st} \big(s_k(x)\big), m^{\rm st} \big(s_k(x)\big), n^{\rm st} \big(s_k(x)\big) \Big) \, . $$ \end{proposition} {\bf Proof.} The symbolic shift map leaves invariant the boundary quadruple of any \textrm{ABMN} solution. Thus, the displayed left-hand quadruple is a standard solution of the \textrm{ABMN} system. To identify it as the right-hand quadruple, it is thus enough to show that its ${\rm CenRatio}$-value equals $s_k(x)$. But this amounts to $\frac{n_{k-1} - n_k}{m_k - m_{k-1}} = s_k(x)$, because $x = \frac{n_{-1} - n_0}{m_0 - m_{-1}}$. \qed Theorem~\ref{t.relativereward}(1) leads directly to $\mathcal{M}(s_k(x)) = \mathcal{M}(x)$ for $x \in (0,\infty)$ and $k \in \ensuremath{\mathbb{Z}}$. To understand the map $\mathcal{M}$, we see then that the asymptotics in highly positive and negative $k$ of the orbits $s_k(x)$ are important. 
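To gain a concrete feel for these orbits, here is a brief Python sketch that uses a toy stand-in for $s$: the map $x \to x^2/2$, which matches only the small-$x$ asymptotic of $s$ recorded in Lemma~\ref{l.acsfacts} (the genuine map of Definition~\ref{d.acs} is not reproduced here, so this is an illustrative assumption, not the actual dynamics). For the toy map, the forward orbit of a point of $(0,2)$ collapses to zero at a doubly exponential rate, in the spirit of the doubly exponential bounds on $\theta^{-1}$ recorded in Theorem~\ref{t.phithetainverse} below.

```python
def toy_s(x):
    """A stand-in for the map s of Definition d.acs, matching only the
    small-x asymptotic s(x) ~ x^2 / 2; it is NOT the genuine map."""
    return x * x / 2.0

def forward_orbit(x, steps):
    # Returns [x, s(x), s_2(x), ..., s_steps(x)] for the toy map.
    orbit = [x]
    for _ in range(steps):
        orbit.append(toy_s(orbit[-1]))
    return orbit

# From x = 1 the toy forward orbit is 1, 1/2, 1/8, 1/128, ...; in closed
# form, the k-th entry is 2^(1 - 2^k), a doubly exponential collapse.
print(forward_orbit(1.0, 4))
# -> [1.0, 0.5, 0.125, 0.0078125, 3.0517578125e-05]
```

The closed form $2^{1-2^k}$ for the toy orbit makes the doubly exponential collapse explicit; the behaviour of the genuine orbits is governed by the asymptotics of $s$ stated next.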
As we will see in Lemma~\ref{l.acsfacts}(2,3), $s(x) \sim x^2/2$ for $0 < x \ll 1$ and $s(x) \sim 2^{-1/2} x^{1/2}$ for $x \gg 1$. Thus the forward orbit $s_k(x)$, $k \to \infty$, converges rapidly to zero, while the backward orbit, $k \to -\infty$, grows quickly towards infinity. We now undertake a change of coordinates of the Mina margin map $\mathcal{M}:(0,\infty) \to (0,\infty)$. The domain $(0,\infty)$ will be identified with $\ensuremath{\mathbb{R}}$ by an increasing bijection $\theta^{-1}$. The goal of the coordinate change is to ensure that the original action $(0,\infty) \to (0,\infty): x \to s_1(x)$ becomes the map $\ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}: x \to x-1$. The action of the symbolic sequence shift $\mathcal{S}_1$ on the $x$-variable, as stated in Proposition~\ref{p.shift}, comes to correspond to a left shift by a unit in the new real variable. This leads to an attractive representation of the Mina margin map in the guise $\ensuremath{\mathbb{R}} \to (0,\infty): x \to \mathcal{M}\big( \theta^{-1}(x)\big)$. \begin{definition} Let $q:[1/3,3) \to [0,1)$ be an increasing surjection; for definiteness, we may take $q(x) = 3(x -1/3)/8$. We specify $\theta:(0,\infty) \to \ensuremath{\mathbb{R}}$ so that, for $x \in (0,\infty)$, $\theta(x) = k+q\big(s_k(x)\big)$, where $k \in \ensuremath{\mathbb{Z}}$ is the unique integer such that $s_k(x) \in [1/3,3)$. Since $\theta:(0,\infty) \to \ensuremath{\mathbb{R}}$ is an increasing surjection, the inverse $\theta^{-1}: \ensuremath{\mathbb{R}} \to (0,\infty)$ is well defined. We may thus represent the Mina margin map after domain coordinate change by the function $\psi$, where $$ \psi: \ensuremath{\mathbb{R}} \to (0,\infty) \, \, \, \, , \, \, \, \, \psi(x) = \mathcal{M} \big( \theta^{-1}(x) \big) \, . 
$$ We define the {\em standard solution} map $\mathsf{StSol}:\ensuremath{\mathbb{R}} \to (\ensuremath{\mathbb{R}}^4)^\ensuremath{\mathbb{Z}}$, $$ \mathsf{StSol}(x) \, = \, \Big( a^{\rm st}\big( \theta^{-1}(x)\big), b^{\rm st}\big( \theta^{-1}(x)\big), m^{\rm st}\big( \theta^{-1}(x)\big), n^{\rm st}\big( \theta^{-1}(x)\big) \Big) \, \, \, \, \textrm{for $x \in \ensuremath{\mathbb{R}}$} \, . $$ \end{definition} For $u \in (0,\infty)$ and $j \in \ensuremath{\mathbb{Z}}$, $\theta\big(s_{-j}(u)\big) - \theta(u) = j$. For $z$ of unit order, the value of $\theta^{-1}(z+k)$ thus tracks that of $s_{-k}(z)$ as $k$ rises, either by growing to infinity (if $k$ is positive), or by decaying to zero (if $k$ is negative). To understand the transformation $\theta^{-1}$, it is thus useful to introduce a simple explicit function $\Theta:\ensuremath{\mathbb{R}} \to (0,\infty)$ which is designed so that $s_{-k}(z)$ grows or decays roughly as does $\Theta(k)$ for $\vert k \vert$ large; here $z \in [1/3,3)$, say. Let $\rm Sign:\ensuremath{\mathbb{R}} \to \{-1,1\}$ equal ${\rm Sign}(x) = {\bf 1}_{x \geq 0} - {\bf 1}_{x < 0}$. Then set $$ \Theta:\ensuremath{\mathbb{R}} \to (0,\infty) \, \, \, \, , \, \, \, \, \Theta(x) = 2^{{\rm Sign}(x)(2^{\vert x \vert} -1)} \, . $$ We now present our result concerning the Mina margin map after domain coordinate change. The transformed function $\psi$ is periodic, of unit period; symbolic shift by one place corresponds to unit translation of the domain; and coordinate change asymptotics are, crudely at least, described by $\Theta$. \begin{theorem}\label{t.phithetainverse} \leavevmode \begin{enumerate} \item For $x \in \ensuremath{\mathbb{R}}$, $\psi(x+1) = \psi(x)$. \item For $x \in \ensuremath{\mathbb{R}}$ and $k \in \ensuremath{\mathbb{Z}}$, $$ \mathsf{StSol}(x+k) = \mathcal{S}_{-k} \circ \mathsf{StSol} (x) \, .
$$ \item There exists a positive constant $C$ such that, for $z \geq 0$, $$ 2^{2^{z-C}} \leq \theta^{-1}(z) \leq 2^{2^{z+C}} \, ; $$ and, for $z < 0$, $$ 2^{-2^{\vert z \vert +C}} \leq \theta^{-1}(z) \leq 2^{-2^{\vert z \vert -C}} \, . $$ \end{enumerate} \end{theorem} The map $\Theta$ is a simple and explicit surrogate for $\theta^{-1}$, and the transformed Mina margin map $\ensuremath{\mathbb{R}} \to (0,\infty): x \to \mathcal{M}\big(\Theta(x)\big)$ shares the periodicity property of $\psi$ in Theorem~\ref{t.phithetainverse}(1) up to a domain perturbation that decays rapidly away from zero. And this surrogate has a more practical version, in which the Mina margin map $\mathcal{M}$ is replaced by a counterpart for a trail that is a finite interval, rather than all of $\ensuremath{\mathbb{Z}}$. These counterpart Mina margin maps $\mathcal{M}_{j+1,k+1}$ will be presented in the next section. Plots of several of these maps, indexed by different finite trails, appear in Figure~\ref{f.tmmm}. \subsection{The Trail of Lost Pennies on a finite interval}\label{s.finite} The principal aim of this article is to study the trail game in the infinite setting, with gameboard~$\ensuremath{\mathbb{Z}}$. Even with this purpose, it is instructive to introduce and discuss the game whose trail is a finite interval. This setting is more practical if two people are to play the game, taking decisions turn-by-turn because, at least for short intervals, the game will end (by the token reaching one end of the interval or the other) in a limited number of moves. The theoretical aspects of the game---time-invariant Nash equilibria; \textrm{ABMN} solutions and their standard solutions; the Mina margin map---share many basic aspects between the infinite and finite settings. The finite setting permits important objects, such as the Mina margin map, to be plotted in Mathematica, and such investigation has informed several of our main results (in the infinite setting). 
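The $\Theta$-transformed plots just mentioned arise by composing a Mina margin map with the explicit function $\Theta(x) = 2^{{\rm Sign}(x)(2^{\vert x \vert}-1)}$ of Section~\ref{s.mmm}. For a reader wishing to experiment with such plots, here is $\Theta$ in Python, together with a check of its doubly exponential growth and reciprocal decay.

```python
def sign(x):
    # Sign(x) = 1 for x >= 0 and Sign(x) = -1 for x < 0.
    return 1 if x >= 0 else -1

def theta_surrogate(x):
    """The explicit surrogate Theta(x) = 2^(Sign(x) * (2^|x| - 1))."""
    return 2.0 ** (sign(x) * (2.0 ** abs(x) - 1))

# Theta grows doubly exponentially to the right of the origin and decays
# reciprocally to the left: Theta(-x) = 1 / Theta(x).
print([theta_surrogate(k) for k in range(-2, 4)])
# -> [0.125, 0.5, 1.0, 2.0, 8.0, 128.0]
```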
Our goal then in this section is to communicate the principal aspects of the finite setting so that the reader can interpret pertinent Mathematica plots and understand how these suggest some of our principal results and conjectures. We will also present a conjecture concerning the number of time-invariant Nash equilibria in a symmetric version of the finite game; we will seek to explain why we believe it during the section. The section contains one result, Proposition~\ref{p.rkvalues}, which we will use and whose proof appears in Section~\ref{s.rolereversal}. Our basic aim is heuristic, however, and at times our presentation will be informal. \subsubsection{Gameplay, strategies and Nash equilibria for the finite trail}\label{s.gsn} Let $j,k \in \N$. The Trail of Lost Pennies with trail (or gameboard) $\llbracket -j-1,k+1\rrbracket$ is specified by $\big( m_{-j-1},m_{k+1},n_{-j-1},n_{k+1} \big) \in \ensuremath{\mathbb{R}}^4$, boundary data on which the conditions $m_{-j-1} < m_{k+1}$ and $n_{k+1} < n_{-j-1}$ are imposed. Begun from $\ell$, an element in the field of open play $\llbracket -j,k \rrbracket$, gameplay is a stochastic process $X: \llbracket 0, K \rrbracket \to \llbracket -j-1,k+1 \rrbracket$, $X_0 = \ell$, where $$ K \, = \, \inf \, \Big\{ \, i \in \N_+: X_i \in \{ -j-1,k+1 \} \, \Big\} \, . $$ Indeed, with Mina and Maxine playing to the left and right, the game will end with victory to these respective players when the counter arrives at $-j-1$ or at $k+1$. The gameplay is specified by a strategy pair, where a strategy is a map $S: \llbracket - j,k \rrbracket \times \N_+ \to [0,\infty)$. The construction of $X$ from a given location $X_0 = \ell \in \llbracket -j,k \rrbracket$ coincides with that explained in Section~\ref{s.gamespec}, where instances of the trail $\ensuremath{\mathbb{Z}}$ are replaced by $\llbracket -j,k \rrbracket$, it being understood that the construction stops when $X$ arrives in $\{ - j-1,k+1 \}$.
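Gameplay on a finite trail is also straightforward to simulate. The Python sketch below is illustrative only: it assumes the move rule of Section~\ref{s.gamespec} (under which, at each turn, the counter steps right with probability equal to Maxine's stake divided by the combined stake; that rule is stated there, not here), and the constant stakes employed are placeholders rather than an equilibrium pair.

```python
import random

def play_finite_trail(j, k, start, stake_minus, stake_plus, rng):
    """Simulate one gameplay X on [[-j-1, k+1]] begun from `start`, under
    time-invariant stake functions stake_minus (Mina) and stake_plus
    (Maxine). Assumed move rule: the counter steps right with probability
    a / (a + b), where a and b are Maxine's and Mina's stakes at the
    current site. Returns (final site, Mina's costs, Maxine's costs)."""
    x, costs_minus, costs_plus = start, [], []
    while -j - 1 < x < k + 1:
        b, a = stake_minus(x), stake_plus(x)
        costs_minus.append(b)
        costs_plus.append(a)
        x += 1 if rng.random() < a / (a + b) else -1
    return x, costs_minus, costs_plus

# With equal constant stakes (a placeholder pair, not an equilibrium),
# the counter performs simple random walk; from 0 on [[-2, 2]] each
# player wins with probability 1/2.
rng = random.Random(7)
trials = 20000
wins_plus = sum(
    play_finite_trail(1, 1, 0, lambda i: 1.0, lambda i: 1.0, rng)[0] == 2
    for _ in range(trials)
)
print(wins_plus / trials)
```

The payoffs of~(\ref{e.finitepayoff}) may then be read off from the returned cost lists together with the terminal payments.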
A strategy $S$ for which $S(\ell,i)$ is independent of $i \in \N_+$ for all $\ell \in \llbracket -j,k \rrbracket$ is said to be {\em time-invariant}. Let $\mathcal{S}[j,k]$ denote the space of strategies. For a strategy pair $(S_-,S_+) \in \mathcal{S}[j,k]^2$, we may reuse notation from the $\ensuremath{\mathbb{Z}}$-indexed trail game, and speak of the law $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ of gameplay $X:\ensuremath{\mathbb{N}} \to \llbracket -j-1,k+1 \rrbracket$, $X_0 = i$, governed by the pair $(S_-,S_+)$, and stopped on arrival in $\{ -j-1,k+1 \}$. Counterparts to~(\ref{e.minapayoff}) and~(\ref{e.maxinepayoff}) are the $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$-almost sure payoff identities \begin{equation}\label{e.finitepayoff} P^{j,k}_- \, = \, - \sum_{i = 1}^\infty C^{j,k}_-(i) \, \, + \, \, T^{j,k}_- \, \, \, \, \textrm{and} \, \, \, \, P^{j,k}_+ \, = \, - \sum_{i = 1}^\infty C^{j,k}_+(i) \, \, + \, \, T^{j,k}_+ \, , \end{equation} where the cost $C^{j,k}_\pm(i)$ incurred by each player at the $i$\textsuperscript{th} turn, $i \in \N_+$, equals $S_{\pm}(X_{i-1},i)$, as in the original case. To specify the terminal payments $T^{j,k}_\pm$, we let $E_-$ denote the event that $X$ arrives at the vertex $-j-1$ at some positive time, and~$E_+$ denote the event that this process instead reaches $k+1$ at some such time. We then adopt~(\ref{e.terminalmina}) and~(\ref{e.terminalmaxine}) for $T_\pm^{j,k}$, where $m_*$ and $n_*$ denote given real values that satisfy $m_* \leq m_{-j-1}$ and $n_* \leq n_{k+1}$. Definitions concerning Nash equilibria continue to be specified as they are at the end of Section~\ref{s.gamespec}. A collection of quadruples $\big\{ (a_i,b_i,m_i,n_i):i \in \llbracket -j,k \rrbracket \big\}$ is associated to any element $\big\{ (b_i,a_i): i \in \llbracket -j,k \rrbracket \big\}$ by Definition~\ref{d.quadruple} after evident changes in notation have been made.
Let $j,k \in \N$. The \textrm{ABMN} system on $\llbracket -j,k \rrbracket$ is the set of equations~\textrm{ABMN}$(1,2,3,4)$ in the real variables $a_i$, $b_i$, $m_i$ and $n_i$, where the index $i$ varies over $\llbracket -j,k \rrbracket$. These equations refer to the components of the quadruple $\big( m_{-j-1},m_{k+1},n_{-j-1},n_{k+1} \big) \in \ensuremath{\mathbb{R}}^4$ which acts as boundary data and for which we suppose a fixed value that satisfies $m_{-j-1}< m_{k+1}$ and $n_{-j-1} > n_{k+1}$. Similarly to Definition~\ref{d.abmn}, a solution is {\em positive} if $a_i$ and $b_i$ exceed zero for $i \in \llbracket -j,k \rrbracket$. \subsubsection{A result and a conjecture for the finite trail}\label{s.resultandconjecture} The basic relation between time-invariant Nash equilibria $\big\{ (b_i,a_i): i \in \llbracket -j,k \rrbracket \big\}$ and positive ABMN solutions $\big\{ (a_i,b_i,m_i,n_i): i \in \llbracket -j,k \rrbracket \big\}$ embodied in Theorem~\ref{t.nashabmn} is maintained. The trail game on $\llbracket -j-1,k+1 \rrbracket$ is in its {\em standard form} when its boundary data satisfies $m_{-j-1} = n_{k+1} = 0$ and $m_{k+1}=1$. This class of games is thus parametrized by the Mina margin $n_{-j-1} \in (0,\infty)$. If further $n_{-j-1} =1$, then we speak of the {\em symmetric} standard game. Likewise a solution of the ABMN equations on $\llbracket -j,k \rrbracket$ is {\em standard} when $m_{-j-1} = n_{k+1} = 0$ and $m_{k+1}=1$. The space of standard solutions may be parametrized by the central ratio ${\rm CenRatio} = \tfrac{n_{-1}- n_0}{m_0 - m_{-1}} \in (0,\infty)$. The Mina margin map $\mathcal{M}_{j+1,k+1}:(0,\infty) \to (0,\infty)$ associates to $x \in (0,\infty)$ the value of the Mina margin $n_{-j-1}$ of the unique standard \textrm{ABMN} solution on $\llbracket -j,k \rrbracket$ for which ${\rm CenRatio} =x$. Standard solutions may be computed explicitly, similarly as was~(\ref{e.remark}) in the infinite setting.
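One ingredient of such computations is the normalization to standard form, which is mechanical and may be sketched in code. In the Python fragment below, the profiles $m$ and $n$ are arbitrary monotone placeholders (not genuine \textrm{ABMN} components): shifting each by a constant and scaling both by a common factor produces the standard boundary data $m_{-j-1} = n_{k+1} = 0$, $m_{k+1} = 1$ while leaving ${\rm CenRatio}$ unchanged.

```python
def cen_ratio(m, n):
    # CenRatio = (n_{-1} - n_0) / (m_0 - m_{-1}).
    return (n[-1] - n[0]) / (m[0] - m[-1])

def standardize(m, n, j, k):
    """Shift the m- and n-profiles by constants and scale both by the
    common factor 1 / (m_{k+1} - m_{-j-1}), so that the boundary data
    becomes m_{-j-1} = n_{k+1} = 0 and m_{k+1} = 1."""
    scale = 1.0 / (m[k + 1] - m[-j - 1])
    ms = {i: scale * (v - m[-j - 1]) for i, v in m.items()}
    ns = {i: scale * (v - n[k + 1]) for i, v in n.items()}
    return ms, ns

# Placeholder monotone profiles on [[-3, 3]] (j = k = 2), standing in for
# the m- and n-components of an ABMN solution.
m = {-3: 1.0, -2: 1.4, -1: 2.0, 0: 2.9, 1: 3.3, 2: 4.0, 3: 5.0}
n = {-3: 6.0, -2: 5.1, -1: 4.0, 0: 3.2, 1: 2.5, 2: 2.1, 3: 1.5}
ms, ns = standardize(m, n, 2, 2)
```

After normalization, the Mina margin of the standardized profiles is the ratio $(n_{-j-1}-n_{k+1})/(m_{k+1}-m_{-j-1})$ computed from the original ones.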
To obtain the standard solution on $\llbracket -j,k \rrbracket$ with ${\rm CenRatio} = x \in (0,\infty)$, we start with the restriction of the default solution from Theorem~\ref{t.defaultexplicit} to $\llbracket -j-1, k+1 \rrbracket$. By adding a suitable constant to each $m$-term, and another such to each $n$-term, and then multiplying the result by a suitable scaling factor, we obtain a standard solution whose ${\rm CenRatio}$ remains equal to $x$ because the additions and the scaling leave this value unchanged. We thus see that, for $x \in (0,\infty)$, \begin{equation}\label{e.minammfinite} \mathcal{M}_{j+1,k+1}(x) \, = \, \frac{n_{-j-1} - n_{k+1}}{m_{k+1} - m_{-j-1}} \, , \end{equation} where $\big\{ (a_i,b_i,m_i,n_i): i \in \llbracket -j,k \rrbracket \big\}$ is any \textrm{ABMN} solution on $\llbracket -j,k \rrbracket$ such that $\tfrac{n_{-1}- n_0}{m_0 - m_{-1}} = x$. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{ThreeNash.pdf} \caption{The shortest trail with non-unique Nash equilibria for at least some boundary conditions has length six, with five sites in open play. The values $x_1 = 1.63$, $x_2 = 3$ and $x_3 = 5.64$ approximate the three solutions of $\mathcal{M}_{3,3}(x) = 1$. (There is no error for $x_2$.) The $(a,b)$ and $(m,n)$ data on $\llbracket - 2,2 \rrbracket$ for the standard solution on $\llbracket -3,3 \rrbracket$ corresponding to $x_1$ appears in the top row; to $x_2$ in the middle; and to $x_3$ in the lower row. The left column thus depicts the three Nash equilibria in the standard symmetric game on the shortest trail for which this game may be expected to have several equilibria. Note that the $x_3$-solution is formed from the $x_1$-solution by role-reversal: that is, by interchanging the roles of $a$ and $b$, and of $m$ and $n$, and by reflecting in the origin.
}\label{f.threenash} \end{figure} The trail game on trails $\llbracket -k,k \rrbracket$ of even length differs from that on trails $\llbracket -k-1,k \rrbracket$ of odd length, because the trails in the two classes are reflection symmetric about different objects (the vertex $0$ or the edge $\llbracket -1,0 \rrbracket$). The next result records outcomes of these symmetries for the finite trail Mina margin map. \begin{proposition}\label{p.rkvalues} Let $k \in \N_+$ and $x \in (0,\infty)$. \begin{enumerate} \item We have that $\mathcal{M}_{k,k}(x) \cdot \mathcal{M}_{k,k} \big( 1/s(x)\big) = 1$. \item And that $\mathcal{M}_{k+1,k}(x) \cdot \mathcal{M}_{k+1,k}(x^{-1}) = 1$. \end{enumerate} \end{proposition} Here is our conjecture concerning the symmetric form of the finite trail game. \begin{conjecture}\label{c.tine} Consider the Trail of Lost Pennies on $\llbracket -j-1,k+1 \rrbracket$ in its symmetric standard form. The number of time-invariant Nash equilibria equals $\max \big\{ 2(j+k) - 5,1 \big\}$. \end{conjecture} Figure~\ref{f.threenash} depicts data for the three Nash equilibria predicted by this conjecture for the trail~$\llbracket -3,3 \rrbracket$ (when $j=k=2$). We mention also that the number of Nash equilibria is odd in almost all finite games~\cite{Wilson1971}. We offer an explanation of why we believe Conjecture~\ref{c.tine}. By a counterpart to Theorem~\ref{t.nashabmn} (which we have roughly indicated), any time-invariant Nash equilibrium of the symmetric trail game on $\llbracket -j-1,k+1 \rrbracket$ corresponds to a positive \textrm{ABMN} solution on $\llbracket -j,k \rrbracket$. This solution must have $m_{-j-1}=n_{k+1}=0$ and $m_{k+1}=1$, as well as $n_{-j-1}=1$. That is, the solution must be standard, and it must satisfy $\mathcal{M}_{j+1,k+1}(x) = 1$, where $x \in (0,\infty)$ is the solution's value of ${\rm CenRatio}$.
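The conjectured count is elementary to tabulate; the following Python fragment merely restates the formula of Conjecture~\ref{c.tine} and records, for instance, that three equilibria are predicted when $j=k=2$, in accordance with Figure~\ref{f.threenash}.

```python
def conjectured_nash_count(j, k):
    """Conjecture c.tine: the symmetric standard game on [[-j-1, k+1]]
    is conjectured to have max{2(j + k) - 5, 1} time-invariant Nash
    equilibria."""
    return max(2 * (j + k) - 5, 1)

# The count equals one for short trails and first exceeds one at
# j + k = 4, e.g. for j = k = 2, the trail [[-3, 3]]. Every value of the
# formula is odd, in keeping with the parity of equilibrium counts in
# almost all finite games.
print([[conjectured_nash_count(j, k) for k in range(4)] for j in range(4)])
```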
We may thus obtain the set of time-invariant Nash equilibria by recording, for each solution $x \in (0,\infty)$ of the equation $\mathcal{M}_{j+1,k+1}(x) = 1$, the reverse-ordered $(a,b)$-component pair of the unique standard \textrm{ABMN} solution on $\llbracket -j,k \rrbracket$ whose ${\rm CenRatio}$-value equals $x$. The case for Conjecture~\ref{c.tine} thus rests on advancing an argument for the equality \begin{equation}\label{e.finitenash} \# \big\{ x \in (0,\infty): \mathcal{M}_{j+1,k+1}(x) = 1 \big\} \, = \, \max \big\{ 2(j+k) - 5,1 \big\} \, . \end{equation} Plots of several finite-trail Mina margin maps $(0,\infty) \to (0,\infty): x \to \mathcal{M}_{j+1,k+1}(x)$ led to the conjecture. The pattern begins to emerge in the four plots displayed in Figure~\ref{f.mmm}, for which $j+k \in \llbracket 2,5 \rrbracket$. To see the pattern continue, we need higher values of $j+k$. For these, a suitable device is the finite-trail $\Theta$-transformed Mina margin map $\ensuremath{\mathbb{R}} \to (0,\infty): x \to (\mathcal{M}_{j+1,k+1} \circ \Theta)(x)$ mentioned at the end of Section~\ref{s.mmm}: see Figure~\ref{f.tmmm} for five depictions, where $j+k \in \llbracket 6,10 \rrbracket$. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{MinaMarginMaps.pdf} \caption{Four finite-trail Mina margin maps $\mathcal{M}_{j+1,k+1}:(0,\infty) \to (0,\infty)$ are depicted, for values of $j+k$ in $\llbracket 2,5 \rrbracket$. {\em Top left.} The four functions are plotted together. {\em Top right.} This is a `Tube Map' of the left-hand graph (a distorted but practical depiction), in which the green curve has been artificially displaced to separate it, so that the viewer may watch the different lines as they run. \\ The green and red curves seem to suggest that the curves converge to the constant function one as $j+k$ rises, but this impression is false.
Indeed, the middle and lower graphs plot the four functions in turn, each on a scale that shows the finer journey of the map as it rises through height one. The maps lose injectivity in the $(j+1,k+1)$-index change $(3,2) \to (3,3)$. }\label{f.mmm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1.05\textwidth]{TransformedMinaMarginMaps.pdf} \caption{{\em Left:} Five $\Theta$-transformed finite-trail Mina margin maps $\mathcal{M}_{j+1,k+1} \circ \Theta:\ensuremath{\mathbb{R}} \to (0,\infty)$ are depicted, for increasing values of $j+k$ in $\llbracket 6,10 \rrbracket$. The graphs join and leave a shared highway, which is (up to visually negligible discrepancies) the graph of the limiting transformed map $\mathcal{M} \circ \Theta:\ensuremath{\mathbb{R}} \to (0,\infty)$. {\em Right:} As in Figure~\ref{f.mmm}(top,right), curves have been artificially displaced so that their routes can be clearly seen.}\label{f.tmmm} \end{figure} \subsection{Some further formulas}\label{s.formulas} In this article, we study a new game, presenting conjectures as well as results. We have derived some formulas of which we do not make use, and we choose to present them as our final results in the introduction because they appear interesting and could be of value in further study of the Trail of Lost Pennies. First we state Theorem~\ref{t.altstand}, an alternative explicit form for standard \textrm{ABMN} solutions. Then we present the \textrm{A} system, which is a closed $\ensuremath{\mathbb{Z}}$-indexed set of equations that we find in Theorem~\ref{t.symmetric} to describe the $a$- (or $b$-)variables in any time-invariant Nash equilibrium in the special case of the game with a symmetric form of boundary data. 
\subsubsection{Alternative formulas for standard solutions and their Mina margins} Recall the function $Z:(0,\infty) \to (0,\infty)$, $Z(x) = m^{\rm def}_\infty(x) = \sum_{k \in \ensuremath{\mathbb{Z}}} \prod_{i=0}^k \big( c_i(x) - 1 \big)$, from the remark that concludes Section~\ref{s.solvingabmn}. \begin{theorem}\label{t.altstand} Let $f$, $g$ and $h$ mapping $(0,\infty)$ to itself be specified by $$ f(x) = \frac{x c(x)d(x)}{\big( c(x) + x d(x) \big)^2} \, \, ; \, \, \, g(x) = Z(x)^{-1}c(x) f(x) \, \, ; \, \textrm{and} \, \, \, h(x) = Z(x)^{-1} x d(x) f(x) \, . $$ Let $x \in (0,\infty)$. \begin{enumerate} \item For $k \in \ensuremath{\mathbb{Z}}$, $a^{\rm st}_k(x) = g \big( s_k(x) \big)$ and $b^{\rm st}_k(x) = h \big( s_k(x) \big)$. \item For $j,k \in \ensuremath{\mathbb{Z}}$ such that $j < k$, $$ m^{\rm st}_k(x) - m^{\rm st}_j(x) \, = \, \sum_{i = j+1}^k \frac{1}{Z \big( s_i(x) \big)} \, \, \, \, \textrm{and} \, \, \, \, n^{\rm st}_j(x) - n^{\rm st}_k(x) \, = \, \sum_{i = j+1}^k \frac{s_i(x)}{Z \big( s_i(x) \big)} \, . $$ In particular, $m^{\rm st}_k(x) - m^{\rm st}_{k-1}(x) =Z \big( s_k(x) \big)^{-1}$ and $n^{\rm st}_{k-1}(x) - n^{\rm st}_k(x) = s_k(x) Z \big( s_k(x) \big)^{-1}$. \item For $j,k \in \N$, the finite trail Mina margin map $\mathcal{M}_{j+1,k+1}:(0,\infty) \to (0,\infty)$ satisfies the equation $$ \mathcal{M}_{j+1,k+1}(x) \, \, = \, \, \Bigg( \, \sum_{i = -j}^{k+1} \frac{1}{Z \big( s_i(x) \big)} \, \Bigg)^{-1} \, \cdot \, \sum_{i = -j}^{k+1} \frac{s_i(x)}{Z \big( s_i(x) \big)} \, \, . $$ \item The Mina margin map $\mathcal{M}:(0,\infty) \to (0,\infty)$ satisfies $$ \mathcal{M}(x) \, \, = \, \, \sum_{i \in \ensuremath{\mathbb{Z}}} \frac{s_i(x)}{Z \big( s_i(x) \big)} \, \, . 
$$ \end{enumerate} \end{theorem} \subsubsection{The game with symmetric boundary data} The \textrm{A} system on $\ensuremath{\mathbb{Z}}$ is the set of equations in the real variables $A_i$, $i \in \ensuremath{\mathbb{Z}}$: \begin{equation}\label{e.a} A_{-i-1} (2 A_i + A_{-i}) = A_{i+1}^2 \, , \end{equation} where the index ranges over $\ensuremath{\mathbb{Z}}$. We will also speak of the \textrm{A} system on $\ensuremath{\mathbb{Z}} + 1/2$. In this case, the real variables $A_i$ are indexed by $i$ in the one-half-offset lattice $\ensuremath{\mathbb{Z}} + 1/2$; the set of equations is given by~(\ref{e.a}) with the index ranging over $\ensuremath{\mathbb{Z}} + 1/2$. By the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ in its symmetric form is meant the game ${\rm Trail}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$, where the boundary parameters are supposed to satisfy $m_{-\infty} = n_\infty = 0$ and $m_\infty = n_{-\infty}$. There is thus a one-parameter family of such games, indexed by $m_\infty \in (0,\infty)$. \begin{theorem}\label{t.symmetric} \leavevmode \begin{enumerate} \item For $\lambda \in (0,\infty)$, there is a unique solution $\big\{ a_i(\lambda): i \in \ensuremath{\mathbb{Z}} \big\}$ of the \textrm{A} system on $\ensuremath{\mathbb{Z}}$ such that $a_0(\lambda) = \lambda$. The solutions satisfy $a_i(\lambda) = \lambda a_i(1)$ for $\lambda \in (0,\infty)$ and $i \in \ensuremath{\mathbb{Z}}$. \item For $\lambda \in (0,\infty)$, there is a unique solution $\big\{ A_i(\lambda): i \in \ensuremath{\mathbb{Z}} + 1/2 \big\}$ of the \textrm{A} system on $\ensuremath{\mathbb{Z}}+1/2$ such that $A_{1/2}(\lambda) = \lambda$. The solutions satisfy $A_i(\lambda) = \lambda A_i(1)$ for $\lambda \in (0,\infty)$ and $i \in \ensuremath{\mathbb{Z}} + 1/2$.
\item In the notation of the first part, let $S_1$ denote the set of strategy pairs $\big( a_{-i+k}(\lambda), a_{i+k}(\lambda) : i \in \ensuremath{\mathbb{Z}} \big)$ indexed by $k \in \ensuremath{\mathbb{Z}}$ and $\lambda \in (0,\infty)$. In the notation of the second part, let $S_2$ denote the set of strategy pairs $\big( A_{-i-1/2 +k}(\lambda) ,A_{i+1/2 + k}(\lambda): i \in \ensuremath{\mathbb{Z}} \big)$ with the same index set. The elements of $S_1 \cup S_2$ are pairwise distinct time-invariant Nash equilibria for the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ in its symmetric form. \item Admit Conjecture~\ref{c.solutions} in the special case that $x=1$: namely, suppose that $Q(1)=2$. Then there are no other time-invariant Nash equilibria for the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ in its symmetric form than those identified in the preceding part. \end{enumerate} \end{theorem} \subsection{The article's structure} There are five further sections and an appendix. Two basic aspects of later use are treated in Section~\ref{s.tools}: a role-reversal symmetry satisfied by the \textrm{ABMN} system; and the solution of the simplest of the finite trail games, with just one site in open play. The fundamental relationship Theorem~\ref{t.nashabmn} between Nash equilibria and the \textrm{ABMN} equations is proved in Section~\ref{s.nashabmn}, where the asymptotic decay estimate Theorem~\ref{t.ajbj} is also derived, along with the eventual gameplay unanimity Theorem~\ref{t.unanimity}. The Mina margin map~$\mathcal{M}$ is addressed in Section~\ref{s.allminamm}: its approximation by finite-trail counterparts, Theorem~\ref{t.relativereward} and several consequences among our main results; the map's $\Theta$-transformed version, and Theorem~\ref{t.phithetainverse}; and an explicitly recorded computation showing that $\mathcal{M}$ evaluated at $0.58$ is bounded away from one, yielding Theorem~\ref{t.minamarginvalues}(3).
In Section~\ref{s.prospects}, we discuss several aspects of our results and proofs and some prospects for further study. The appendix contains the proofs of the further formulas from Section~\ref{s.formulas}. \subsubsection{Acknowledgments} The author thanks G\'abor Pete for many discussions about stake-governed games. He thanks Judit Z\'ador for help with Mathematica and in preparing the article's figures. He is supported by the National Science Foundation under DMS grants~$1855550$ and~$2153359$ and by the Simons Foundation as a $2021$ Simons Fellow. \section{Some basic tools}\label{s.tools} Role-reversal symmetry is treated in Section~\ref{s.rolereversal} and the trail game on $\llbracket -1,1 \rrbracket$ in Section~\ref{s.pennyforfeit}. Later subsections introduce some further basic notation and properties. \subsection{Role-reversal symmetry}\label{s.rolereversal} \begin{definition}\label{d.rolereversal} The {\em role-reversal map} $\mathcal{R}$ sends the space of quadruples $\ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}}^4$ to itself by mapping $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ to $\big\{ (b_{-i},a_{-i},n_{-i},m_{-i}): i \in \ensuremath{\mathbb{Z}} \big\}$. \end{definition} \begin{proposition}\label{p.rolereversal} Suppose that $(a,b,m,n) = \big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ is an \textrm{ABMN} solution. Then so is $\mathcal{R} (a,b,m,n)$. \end{proposition} {\bf Proof.} The result may be verified by inspecting the \textrm{ABMN} equations. We instead indicate in rough terms a more conceptual, game-theoretic, argument which is available for positive \textrm{ABMN} solutions if we admit their connection to the trail game via Theorem~\ref{t.nashabmn}. Suppose that a time-invariant Nash equilibrium $(b,a): \ensuremath{\mathbb{Z}} \to (0,\infty)$ is played in the first instance. 
If Mina and Maxine swap roles, so that the strategy pair $(a,b)$ is played, each acts in diametric opposition to her interests. But if the gameboard is then reflected through the origin, these interests are reversed, and each plays optimally once more. It is the strategy pair $\ensuremath{\mathbb{Z}} \to (0,\infty)^2: i \to (a_{-i},b_{-i})$ that is now being played. This pair is a Nash equilibrium (for the game whose boundary data in $\ensuremath{\mathbb{R}}^4$ is specified by this pair), and the associated quadruple is an \textrm{ABMN} solution. This quadruple is $\mathcal{R} (a,b,m,n)$. \qed We will obtain Proposition~\ref{p.rkvalues} by using the role-reversal map $\mathcal{R}$ on quadruples whose index set is finite; to do so, we extend our notation to handle this circumstance. Let $j,k \in \N_+$ and let $(a,b,m,n) = \big\{ (a_i,b_i,m_i,n_i) \in \ensuremath{\mathbb{R}}^4: i \in \llbracket -j,k\rrbracket \big\}$ be given. We set $\mathcal{R}(a,b,m,n)$ equal to $\big\{ (b_{-i},a_{-i},n_{-i},m_{-i}) \in \ensuremath{\mathbb{R}}^4: i \in \llbracket -k,j\rrbracket \big\}$. Proposition~\ref{p.rolereversal} has a counterpart in the finite case which asserts that \begin{eqnarray} & & (a,b,m,n): \llbracket -j,k \rrbracket \to \ensuremath{\mathbb{R}}^4 \, \, \, \textrm{is an \textrm{ABMN} solution} \label{e.rolereversalfinite} \\ & \implies & \mathcal{R}(a,b,m,n): \llbracket -k,j \rrbracket \to \ensuremath{\mathbb{R}}^4 \, \, \, \textrm{is an \textrm{ABMN} solution} \nonumber \, . \end{eqnarray} The \textrm{ABMN} equations can again be inspected to verify this statement. We will further consider the left shift $\mathcal{S}_1$, which sends any quadruple $(a,b,m,n): \llbracket -j,k \rrbracket \to \ensuremath{\mathbb{R}}^4$ to the quadruple $\llbracket -j-1,k-1 \rrbracket \to \ensuremath{\mathbb{R}}^4: i \to (a_{i+1},b_{i+1},m_{i+1},n_{i+1})$.
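These finite-index operations are simple enough to render in code. The Python sketch below (acting on an arbitrary placeholder quadruple collection) implements $\mathcal{R}$ and $\mathcal{S}_1$, and records that $\mathcal{R}$ carries the index set $\llbracket -j,k\rrbracket$ to $\llbracket -k,j\rrbracket$ and is an involution.

```python
def role_reverse(quad):
    """The map R: {(a_i, b_i, m_i, n_i) : i in [[-j, k]]} is sent to
    {(b_{-i}, a_{-i}, n_{-i}, m_{-i}) : i in [[-k, j]]}."""
    return {-i: (b, a, n, m) for i, (a, b, m, n) in quad.items()}

def left_shift(quad):
    """The shift S_1: entry i of the output is entry i + 1 of the input,
    so the index set [[-j, k]] becomes [[-j-1, k-1]]."""
    return {i - 1: q for i, q in quad.items()}

# A placeholder quadruple collection on [[-1, 2]] (so j = 1, k = 2).
quad = {i: (1.0 + i, 5.0 - i, 0.1 * i, -0.2 * i) for i in range(-1, 3)}
```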
{\bf Proof of Proposition~\ref{p.rkvalues}(1).} For $x \in (0,\infty)$, let $\big\{ (a_i,b_i,m_i,n_i): i \in \llbracket -k,k\rrbracket \big\}$ be an \textrm{ABMN} solution on $\llbracket -k,k \rrbracket$ such that $\tfrac{n_{-1} - n_0}{m_0 - m_{-1}}$ equals $x$; that such a solution may be found has been explained in Subsection~\ref{s.resultandconjecture}. We {\em claim} that \begin{equation}\label{e.rkktwo} \mathcal{M}_{k,k} \big( \tfrac{m_1 - m_0}{n_0 - n_1} \big) = \mathcal{M}_{k,k} \big( \tfrac{n_{-1} - n_0}{m_0 - m_{-1}} \big)^{-1} \, . \end{equation} Admitting this claim, we see that $s(x)= \tfrac{n_0 - n_1}{m_1 - m_0}$ by~(\ref{e.rolereversalfinite}); thus, $1/s(x) = \tfrac{m_1 - m_0}{n_0 - n_1}$. Using the claim, we confirm Proposition~\ref{p.rkvalues}(1). To confirm~(\ref{e.rkktwo}), we let $\hat\phi_i$ denote the $\phi_i$-value of $\mathcal{R}(a,b,m,n)$. The claim follows from $$ \mathcal{M}_{k,k} \big( \tfrac{m_1 - m_0}{n_0 - n_1} \big) \, = \, \mathcal{M}_{k,k} (\hat\phi_0) \, = \, \frac{\hat{n}_{-k} - \hat{n}_k}{\hat{m}_k - \hat{m}_{-k}} \, = \, \frac{m_k - m_{-k}}{n_{-k} - n_k} \, = \, \mathcal{M}_{k,k} \big( \tfrac{n_{-1} - n_0}{m_0 - m_{-1}} \big)^{-1} \, , $$ where the second and fourth equalities are due to~(\ref{e.minammfinite}). {\bf (2).} We now let $\big\{ (a_i,b_i,m_i,n_i): i \in \llbracket -k-1,k\rrbracket \big\}$ be an \textrm{ABMN} solution on $\llbracket -k-1,k \rrbracket$ such that $\tfrac{n_{-1} - n_0}{m_0 - m_{-1}}= x$. We consider the operator $\mathcal{A} = \mathcal{S}_1 \circ \mathcal{R}$; note that, directly from~(\ref{e.rolereversalfinite}), $\mathcal{A}(a,b,m,n)$ is an \textrm{ABMN} solution, also on the index set $\llbracket -k-1,k\rrbracket$. We denote $\mathcal{A}(a,b,m,n) = \big\{ (\tilde{a}_i,\tilde{b}_i,\tilde{m}_i,\tilde{n}_i): i \in \llbracket -k-1,k\rrbracket \big\}$; and we let $\tilde\phi_i$ denote the $\phi_i$-value of $\mathcal{A}(a,b,m,n)$ for $i \in \llbracket -k,k-1 \rrbracket$.
By~(\ref{e.minammfinite}), $$ \mathcal{M}_{k+1,k}(\tilde\phi_0) = \frac{\tilde{n}_{-k-1} - \tilde{n}_k}{\tilde{m}_k - \tilde{m}_{-k-1}} = \frac{m_k - m_{-k-1}}{ n_{-k-1} - n_k} \, . $$ And again by~(\ref{e.minammfinite}), $\mathcal{M}_{k+1,k}\big( \tfrac{n_{-1} - n_0}{m_0 - m_{-1}} \big) = \tfrac{n_{-k-1} - n_k}{m_k - m_{-k-1}}$. Hence, we obtain \begin{equation}\label{e.tworinter} \mathcal{M}_{k+1,k}(\tilde\phi_0) = \mathcal{M}_{k+1,k}\big( \tfrac{n_{-1} - n_0}{m_0 - m_{-1}} \big)^{-1} \, . \end{equation} Note further that $$ \tilde\phi_0 = \frac{\tilde{n}_{-1} - \tilde{n}_0}{\tilde{m}_0 - \tilde{m}_{-1}} = \frac{m_0 - m_{-1}}{n_{-1} - n_0} \, . $$ Since $x = \tfrac{n_{-1} - n_0}{m_0 - m_{-1}}$, we have that $\tilde\phi_0 = 1/x$. From~(\ref{e.tworinter}), we thus obtain Proposition~\ref{p.rkvalues}(2). \qed \begin{corollary}\label{c.rkvalues} For $k \in \N_+$, $\mathcal{M}_{k,k}(3) = \mathcal{M}_{k+1,k}(1) = 1$. \end{corollary} {\bf Proof.} By Proposition~\ref{p.rkvalues}(1) and $s(3) = 1/3$, we have that $\mathcal{M}_{k,k}(3)^2 = 1$. Since $\mathcal{M}_{k,k} > 0$, we obtain $\mathcal{M}_{k,k}(3) = 1$. By Proposition~\ref{p.rkvalues}(2), $\mathcal{M}_{k+1,k}(1)^2 = 1$. Since $\mathcal{M}_{k+1,k} > 0$, we confirm that $\mathcal{M}_{k+1,k}(1) = 1$. \qed The form of the inverse of the map $s$ may be obtained by use of role-reversal symmetry. \begin{proposition}\label{p.sminusone} The function $s:(0,\infty) \to (0,\infty)$ from Definition~\ref{d.acs} is invertible, and its inverse is given by $$ s^{-1}(x) \, = \, \frac{1}{s(1/x)} \, \, \, \, , \, \, \, \, \textrm{for $x \in (0,\infty)$} \, . $$ \end{proposition} {\bf Proof.} It is enough to show that $h:(0,\infty) \to (0,\infty)$ given by $h(x) = 1/s(1/x)$ satisfies \begin{equation}\label{e.shhs} (s \circ h)(x) = (h \circ s)(x) = x \, . 
\end{equation} Set $$ (a_i,b_i,m_i,n_i) = \big( a^{\rm st}_i(x), b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x) \big) \, \, \, , \, \, \, \, \textrm{for $i \in \ensuremath{\mathbb{Z}}$} \, . $$ We have that $\phi_0 = \tfrac{n_{-1}-n_0}{m_0 - m_{-1}} = x$. First note that, by Proposition~\ref{p.shift}, $s(\phi_{-1}) = \phi_0$; or, in other words, \begin{equation}\label{e.xformula} s \, \bigg( \frac{n_{-2} - n_{-1}}{m_{-1} - m_{-2}} \bigg) \, = \, \frac{n_{-1} - n_0}{m_0 - m_{-1}} \, = \, x \, . \end{equation} Let $\hat{\phi}:\ensuremath{\mathbb{Z}} \to (0,\infty)$ be such that, for $i \in \ensuremath{\mathbb{Z}}$, $\hat\phi_i$ is the value of $\phi_i$ for the quadruple $\mathcal{R}(a,b,m,n)$. Note then that \begin{equation}\label{e.hatphione} \hat\phi_1 \, =\, \frac{\hat{n}_0 - \hat{n}_1}{\hat{m}_1 - \hat{m}_0} \, =\, \frac{m_0 - m_{-1}}{n_{-1} - n_0} \, = \, 1/x \, . \end{equation} Thus, note that $$ s(1/x) = s(\hat\phi_1) \, = \, \hat\phi_2 \, = \, \frac{\hat{n}_1 - \hat{n}_2}{\hat{m}_2 - \hat{m}_1} \, =\, \frac{m_{-1} - m_{-2}}{n_{-2} - n_{-1}} \, , $$ where the second equality is justified by Propositions~\ref{p.rolereversal} and~\ref{p.shift}. Applying $s$, we find from~(\ref{e.xformula}) that $s\big(1/s(1/x) \big) = x$. We have confirmed that $s\big(h(x)\big) = x$ for $x \in (0,\infty)$. Next we note that $s(x) = \phi_1 = \tfrac{n_0 - n_1}{m_1 - m_0}$, so that $1/s(x) = \tfrac{m_1 - m_0}{n_0 - n_1} = \hat\phi_0$. But $s(\hat\phi_0) = \hat\phi_1 = 1/x$, by Propositions~\ref{p.shift} and~\ref{p.rolereversal}, and~(\ref{e.hatphione}). Which is to say, $1/s(1/s(x)) = x$, or $h\big(s(x)\big) = x$ for $x \in (0,\infty)$. This completes the derivation of~(\ref{e.shhs}) and thus the proof of Proposition~\ref{p.sminusone}. \qed \subsection{Penny Forfeit}\label{s.pennyforfeit} The simplest case of the finite trail game from Section~\ref{s.finite} has $j=k=0$, when the first move is the last. 
The straightforward solution of this case is already instructive, and we provide it now, calling this game Penny Forfeit. Here is an explicit description of this one-turn game. Maxine and Mina are asked to stake non-negative quantities $a$ and $b$. After these stakes have been submitted, the game victor is declared: this will be Maxine, with probability $\tfrac{a}{a+b}$; otherwise, it will be Mina. If Maxine wins, she receives $m_1$, and Mina $n_1$; if Mina wins, Maxine receives $m_{-1}$ and Mina $n_{-1}$. These four values act as boundary data. They are supposed to be real values that satisfy $m_{-1} < m_1$ and $n_1 < n_{-1}$. Maxine and Mina's mean winnings in the game are \begin{equation}\label{e.maxineminawinnings} \tfrac{a}{a+b} m_1 + \tfrac{b}{a+b} m_{-1} -a \, \, \, \textrm{and} \, \, \, \tfrac{b}{a+b} n_{-1} + \tfrac{a}{a+b} n_1 - b \, , \end{equation} where in each expression the respective terms are mean terminal receipt in the event of turn victory; such receipt in the event of turn defeat; and the negative contribution from the forfeited stake. The pair $(b,a)$ is a Nash equilibrium---a notion that is specified by suitably adapting the definition in Section~\ref{s.gamespec}---when these last two expressions are both global maxima as the variables $a$ and $b$ are respectively varied over $[0,\infty)$. \begin{lemma}\label{l.pennyforfeit} There is a unique pair $(a,b) \in [0,\infty)^2$ at which the two expressions in~(\ref{e.maxineminawinnings}) are both global maxima as the variables $a$ and $b$ are respectively varied over $[0,\infty)$. It is given by \begin{equation}\label{e.absolution} (a,b) \, = \, \bigg(\frac{M^2 N}{(M+N)^2},\frac{M N^2}{(M+N)^2} \bigg) \, , \, \, \, \, \textrm{with} \, \, \, \, M = m_1 - m_{-1} \, \, \, \, \textrm{and} \, \, \, \, N = n_{-1} - n_1 \, . \end{equation} Note that $a$ and $b$ are strictly positive.
\end{lemma} {\bf Proof.} A critical point $(a,b)$ is given by setting the respective partial derivatives in $a$ and~$b$ of the two expressions equal to zero: the conditions are $$ \tfrac{b}{(a+b)^2}(m_1 - m_{-1}) - 1 \, = \, \tfrac{a}{(a+b)^2}(n_{-1} - n_1) - 1 \, = \, 0 \, . $$ At least one component in the desired pair $(a,b)$ must be non-zero. Indeed, if both components equal zero, then an infinitesimal increase of $b$ from zero will increase Mina's expected payoff from $\tfrac{n_{-1}+n_1}{2}$ to a value close to $n_{-1}$. Restricting then, as we may, to solutions with at least one positive component, we see that there exists a unique solution in $(a,b) \in [0,\infty)^2$ of the last displayed equations, and that this solution is given by~(\ref{e.absolution}). This is indeed a global maximum for the pair of expressions in~(\ref{e.maxineminawinnings}) under respective variation of $a$ and~$b$ in~$[0,\infty)$. \qed {\em Remark.} We see then that Penny Forfeit has a unique Nash equilibrium~$(b,a)$, with $(a,b)$ as just specified. It is straightforward to see that this Nash equilibrium is unique even if we permit the players to offer random stakes. \subsection{The game with a delayed start}\label{s.delayedstart} We will wish to consider the finite and infinite trail games begun at a turn whose index $\ell \in \N_+$ is general. For $(i,\ell) \in \ensuremath{\mathbb{Z}} \times \N_+$ and $(S_-,S_+) \in \mathcal{S}^2$ we will write $\ensuremath{\mathbb{P}}_{S_-,S_+}^{i,\ell}$ and $\E_{S_-,S_+}^{i,\ell}[\cdot]$ for the law and expectation operator of gameplay $X: \llbracket \ell,\infty) = \ensuremath{\mathbb{Z}} \cap [\ell,\infty) \to \ensuremath{\mathbb{Z}}$, $X_\ell = i$, in the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$, begun at the $(\ell + 1)$\textsuperscript{st} turn at $\ell$. Payoffs, costs and terminal receipts $P_\pm$, $C_\pm(u)$ (for $u \in \llbracket \ell,\infty)$) and $T_{\pm}$ remain as specified by Section~\ref{s.gamespec}.
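Returning briefly to Penny Forfeit: the closed form~(\ref{e.absolution}) can be checked numerically against the first-order conditions appearing in the proof of Lemma~\ref{l.pennyforfeit}. The following Python sketch is an informal aside, with arbitrary illustrative values of $M$ and $N$; it is not part of the formal development.

```python
# Informal check of Lemma l.pennyforfeit: with M = m_1 - m_{-1} > 0 and
# N = n_{-1} - n_1 > 0, the stakes a = M^2 N / (M+N)^2, b = M N^2 / (M+N)^2
# satisfy the stationarity conditions b M / (a+b)^2 = 1 and a N / (a+b)^2 = 1.
M, N = 4.0, 6.0                      # arbitrary positive illustrative values
a = M**2 * N / (M + N)**2
b = M * N**2 / (M + N)**2
assert a > 0 and b > 0
assert abs(b * M / (a + b)**2 - 1) < 1e-12
assert abs(a * N / (a + b)**2 - 1) < 1e-12
# The combined stake simplifies to M N / (M + N).
assert abs((a + b) - M * N / (M + N)) < 1e-12
```

The simplification in the final line, $a + b = \tfrac{MN}{M+N}$, is what makes the stationarity conditions immediate to verify by hand.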
Mina and Maxine's payoff identities~(\ref{e.minapayoff}) and~(\ref{e.maxinepayoff}) now take the $\ensuremath{\mathbb{P}}_{S_-,S_+}^{i,\ell}$-almost sure form \begin{equation}\label{e.delayedpayoff} P_\pm \, = \, - \sum_{j = \ell +1}^\infty C_\pm(j) \,\, + \,\, T_\pm \, . \end{equation} Note that $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ equals $\ensuremath{\mathbb{P}}_{S_-,S_+}^{i,0}$. \subsection{Lack of escape entails infinite costs} \begin{lemma}\label{l.dontlookback} Let $(S_1,S_2) \in \mathcal{S} \times \mathcal{S}_0$ be a strategy pair whose second component is time-invariant. Writing $a_i = S_2(i,j)$ for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$, suppose that $a_i > 0$ for $i \in \ensuremath{\mathbb{Z}}$. For $(i,\ell) \in \ensuremath{\mathbb{Z}} \times \N_+$, $\ensuremath{\mathbb{P}}_{S_1,S_2}^{i,\ell}(E^c) > 0$ implies that $\E_{S_1,S_2}^{i,\ell} [P_-] = -\infty$. \end{lemma} {\bf Proof.} For $j \in \ensuremath{\mathbb{Z}}$, let $W_-(j)$ denote the event that Mina wins infinitely many turns at which the counter is at $j$. We claim that, up to a $\pgameplay{S_1}{S_2}{i,\ell}$-null set, \begin{equation}\label{e.ecomplement} E^c \, \subseteq \, \bigcup_{j \in \ensuremath{\mathbb{Z}}} W_-(j) \, . \end{equation} To see this, let $V_j$ denote the event that the counter visits $j \in \ensuremath{\mathbb{Z}}$ infinitely often. The occurrence of $E^c$ entails that of $\cup_{j \in \ensuremath{\mathbb{Z}}} V_j$. If $V_j$ occurs and Mina wins infinitely many of the turns at which $X$ visits $j$, then $W_-(j)$ occurs. If $V_j$ occurs but Mina does not thus succeed, there are infinitely many occasions on which $X$ leaves $j$ to the right, only to return to $j$ at some later time. Consider the set of turns that occur just before each of these returns. At each, $X$ is at $j+1$ and Mina wins the turn, so that $X$ passes to $j$. Thus, $W_-(j+1)$ occurs. We have derived~(\ref{e.ecomplement}).
For $j \in \ensuremath{\mathbb{Z}}$, let $\textrm{TotalCost}_-(j) = \sum_{t=\ell}^\infty {\bf 1}_{X_t =j} C_-(t+1)$ denote Mina's running cost expended at $j$ under $\pgameplay{S_1}{S_2}{i,\ell}$. Let $N_-(j,j-1) = \sum_{t=\ell}^\infty {\bf 1}_{X_t =j,X_{t+1}=j-1}$ denote the number of turns with index at least $\ell+1$ that are won by Mina and at whose start $X$ visits $j$. Since $C_-(t+1) = S_1(X_t,t+1)$, we have that \begin{eqnarray*} \egameplay{S_1}{S_2}{i,\ell} \big[ N_-(j,j-1) \big] & = & \sum_{t=\ell}^\infty \pgameplay{S_1}{S_2}{i,\ell} (X_t =j) \cdot \tfrac{S_1(j,t+1)}{S_1(j,t+1) + a_j} \, \leq \, a_j^{-1} \sum_{t=\ell}^\infty \pgameplay{S_1}{S_2}{i,\ell} (X_t =j) S_1(j,t+1) \\ & = & a_j^{-1} \egameplay{S_1}{S_2}{i,\ell} \big[ \textrm{TotalCost}_-(j) \big] \, \leq \, a_j^{-1} \egameplay{S_1}{S_2}{i,\ell} \sum_{t=\ell}^\infty C_-(t) \, . \end{eqnarray*} By~(\ref{e.ecomplement}) and $a_j > 0$ for $j \in \ensuremath{\mathbb{Z}}$, we see then that, if $\pgameplay{S_1}{S_2}{i,\ell}(E^c) > 0$, then $\egameplay{S_1}{S_2}{i,\ell} [N_-(j,j-1)]$ is infinite for some $j \in \ensuremath{\mathbb{Z}}$, and thus so is Mina's mean total running cost $\egameplay{S_1}{S_2}{i,\ell} \sum_{t=\ell}^\infty C_-(t)$. Applying $\egameplay{S_1}{S_2}{i,\ell}$ to~(\ref{e.delayedpayoff}) with $\pm = -1$ and noting that terminal receipts $T_-$ are almost surely bounded, we find that Mina's mean payoff $\egameplay{S_1}{S_2}{i,\ell} [P_-]$ equals minus infinity. This completes the proof of Lemma~\ref{l.dontlookback}. \qed \subsection{Relating the finite and infinite trail games}\label{s.relating} Let $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$ satisfy $m_{-\infty} < m_\infty$ and $n_\infty < n_{-\infty}$. It is useful to specify a coupling of the Trail of Lost Pennies ${\rm Trail}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ and its finite trail counterparts. 
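The coupling about to be specified amounts to running the infinite-trail gameplay and freezing it on first exit from $\llbracket -j,k \rrbracket$. As an informal illustration, with hypothetical names and not part of the formal development, the stopping operation can be sketched on a finite initial segment of gameplay in Python.

```python
# Informal sketch: the finite-trail gameplay X^{j,k} is the infinite-trail
# gameplay X stopped at tau^{j,k}, the first positive time at which X
# reaches -j-1 or k+1; thereafter the path is frozen at that boundary value.
# It is assumed (as in the coupling) that the path begins inside [-j, k].
def stopped_gameplay(path, j, k):
    out, frozen = [], None
    for x in path:
        out.append(x if frozen is None else frozen)
        if frozen is None and x in (-j - 1, k + 1):
            frozen = x
    return out

# A path that exits [-2, 2] on the right at value 3 = k + 1:
assert stopped_gameplay([0, 1, 2, 3, 2, 1, 0, -1], j=2, k=2) == [0, 1, 2, 3, 3, 3, 3, 3]
```

Under this coupling, the finite-trail costs agree with the infinite-trail costs up to the stopping time, which is the source of the cost comparisons recorded next.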
\begin{definition}\label{d.coupling} Let $i \in \ensuremath{\mathbb{Z}}$ and $(S_-,S_+) \in \mathcal{S}^2$. Recall that the gameplay $X:\ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{Z}}$, $X_0 = i$, of the infinite trail game governed by $(S_-,S_+)$ is specified under the law $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$. For $j,k \in \N_+$, strategy pairs in $\mathcal{S}[j,k]^2$ for the game with trail $\llbracket -j -1,k+1\rrbracket$ result by restricting the domain of $S_-$ and $S_+$ to $\llbracket -j ,k \rrbracket$. Copies of the gameplay $X^{j,k}:\N \to \llbracket -j-1,k+1 \rrbracket$, $X^{j,k}_0 = i$, that result from use of these restricted pairs may be coupled under $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ whenever $j,k \in \N_+$ are such that $i \in \llbracket -j ,k \rrbracket$. To specify these copies, set \begin{equation}\label{e.taujk} \tau^{j,k} \, = \, \inf \big\{ t \in \N_+: X_t \in \{-j-1,k+1 \} \big\} \, . \end{equation} Writing $\wedge$ for minimum, we then take $X^{j,k}(u) = X(u \wedge \tau^{j,k})$ for $u \in \N$. \end{definition} The finite and infinite trail payoffs, costs and terminal receipts $*_\pm^{j,k}$ and $*_\pm$, $* \in \{P,C,T\}$, are coupled under $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ by this definition. We note some basic relationships that result. \begin{lemma}\label{l.couplingproperties} Let $(S_-,S_+) \in \mathcal{S}^2$. Suppose that $i \in \ensuremath{\mathbb{Z}}$ and $j,k \in \N_+$ satisfy $i \in \llbracket -j,k \rrbracket$. \begin{enumerate} \item We have that $P_- - P^{j,k}_- = T_- - T^{j,k}_- - \sum_{t = \tau^{j,k}}^\infty C_-(t)$. \item And that $P_- - P^{j,k}_- \leq T_- - T^{j,k}_-$. \item For $\ell \in \N$, it is $\ensuremath{\mathbb{P}}_{S_-,S_+}^{i,\ell}$-almost certain that $P_- - P^{j,k}_- \leq n_{-\infty} - n_\infty$. \end{enumerate} \end{lemma} {\bf Proof: (1).} This follows from~(\ref{e.minapayoff}),~(\ref{e.finitepayoff}) and $C^{j,k}_-(t) = C_-(t)$ for $t \in \llbracket 0, \tau^{j,k}-1 \rrbracket$.
\\ {\bf (2).} Due to the preceding part and the non-negativity of costs $C_-(t)$. \\ {\bf (3).} The receipt $T^{j,k}_-$ is a weighted average of $n_{-j-1}$ and $n_{k+1}$. Since $\big\{ n_i: i \in \ensuremath{\mathbb{Z}} \big\}$ is decreasing (this due to Theorem~\ref{t.positiveabmn}(1), because this sequence is the $n$-component of a positive \textrm{ABMN} solution), we find that $T^{j,k}_- \geq n_\infty$. Also note that $P_- \leq n_{-\infty}$. The preceding part of the lemma thus implies the stated result. \qed \begin{lemma}\label{l.stopping} Let $(S_-,S_+) \in \mathcal{S}^2$, $k \in \ensuremath{\mathbb{Z}}$ and $\ell \in \N$. Let $Q \in \N \cup \{ \infty \}$ be a stopping time with respect to gameplay $X: \N \to \ensuremath{\mathbb{Z}}$ under the law $\pgameplay{S_-}{S_+}{k,\ell}$ specified in Section~\ref{s.delayedstart}. Then \begin{equation}\label{e.stopping} \egameplay{S_-}{S_+}{k,\ell} [P_-] \, = \, - \, \egameplay{S_-}{S_+}{k,\ell} \sum_{t=\ell + 1}^{Q - 1} C_-(t) \,\, + \,\, \egameplay{S_-}{S_+}{k,\ell} \big[ \egameplay{S_-}{S_+}{X(Q)} [P_-] \big] \, . \end{equation} In reading this display in the event that $Q = \infty$, we adopt the conventions that $\egameplay{S_-}{S_+}{\infty,\ell}[P_-] = n_\infty$ and $\egameplay{S_-}{S_+}{-\infty,\ell}[P_-] = n_{-\infty}$, as well as $Q-1 = \infty$. We also have the counterpart identity for Maxine, given by $P_- \to P_+$ and $C_- \to C_+$. \end{lemma} {\bf Proof.} The right-hand side of~(\ref{e.delayedpayoff}) with $\pm = -1$ may be written $A_1 + A_2$, where $A_1$ is the sum of costs $C_-(t)$ with $\ell + 1 \leq t < Q$; and $A_2$ is the sum of the higher indexed costs (in the case that $Q$ is finite) and the terminal receipt~$T_-$. Since $T_-$ equals $n_{-\infty}$ or $n_\infty$ when the events $E_-$ or $E_+$ occur, we find that, when the mean $\egameplay{S_-}{S_+}{k,\ell}$ of~(\ref{e.delayedpayoff}) thus represented is taken, the two right-hand terms in the lemma result. 
\qed We have used Theorem~\ref{t.positiveabmn}(1), and we will use it again in a moment. We now give the simple proofs of Theorem~\ref{t.positiveabmn}(1,2). {\bf Proof of Theorem~\ref{t.positiveabmn}(1).} Since $a_i + b_i > 0$, \textrm{ABMN}$(3)$ implies that $m_{i+1} > m_{i-1}$. We may rearrange \textrm{ABMN}$(1)$ in the form $m_i = \tfrac{a_i}{a_i + b_i} m_{i+1} + \tfrac{b_i}{a_i + b_i} m_{i-1} - a_i$. Using $m_{i-1} < m_{i+1}$ and $b_i > 0$, we find that $m_i < m_{i+1} - a_i$. Since $a_i > 0$, $m_i < m_{i+1}$. That $n_{i+1} < n_i$ follows similarly. We have shown that the \textrm{ABMN} solution $(a,b,m,n)$ is strict. {\bf (2).} The sequences $\big\{ m_i: i \in \ensuremath{\mathbb{Z}} \big\}$ and $\big\{ n_i: i \in \ensuremath{\mathbb{Z}} \big\}$ are increasing and decreasing, by the preceding part. Thus, the limiting values~(\ref{e.boundarydata}) exist, at least as elements of $\ensuremath{\mathbb{R}} \cup \{ \infty\} \cup \{ - \infty \}$; they satisfy $m_\infty > m_{-\infty}$ and $n_{-\infty} > n_\infty$. \qed Note that Theorem~\ref{t.positiveabmn}(1,2) do not exclude the possibilities that $m_\infty$ or $n_{-\infty}$ equals $\infty$ or that $n_\infty$ or $m_{-\infty}$ equals $-\infty$. These possibilities will be excluded when we prove Theorem~\ref{t.positiveabmn}(3). This result will be derived in Section~\ref{s.consequences} as a consequence of the asymptotic decay estimate Theorem~\ref{t.ajbj}. The next result interprets the $m$- and $n$-components of an \textrm{ABMN} solution as mean payoffs. It is couched in the notation of delayed-start games from Section~\ref{s.delayedstart}. \begin{lemma}\label{l.minipayoff} Let $\big\{ (a_i,b_i,m_i,n_i) : i \in \ensuremath{\mathbb{Z}} \big\}$ denote a positive solution of the \textrm{ABMN} equations with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$. Let $S_-,S_+ \in \mathcal{S}$ satisfy $S_-(i,j) = b_i$ and $S_+(i,j) = a_i$ for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$.
\begin{enumerate} \item Let $i \in \ensuremath{\mathbb{Z}}$ and $\ell \in \N$. Then $\pgameplay{S_-}{S_+}{i,\ell}(E) =1$. \item For $i \in \ensuremath{\mathbb{Z}}$ and $\ell \in \N$, $$ m_i = \E_{(S_-,S_+)}^{i,\ell}[P_+] \, \, \, \, \textrm{and} \, \, \, \, n_i = \E_{(S_-,S_+)}^{i,\ell} [P_-] \, . $$ \item Let $j,k \in \N$ and $\ell \in \N$. For $i \in \llbracket -j,k \rrbracket$, $$ m_i = \E_{(S_-,S_+)}^{i,\ell} [P^{j,k}_+] \, \, \, \, \textrm{and} \, \, \, \, n_i = \E_{(S_-,S_+)}^{i,\ell} [P^{j,k}_-] \, . $$ \end{enumerate} \end{lemma} {\bf Proof: (1).} By Theorem~\ref{t.positiveabmn}(1), $n_i > n_\infty$. But $n_\infty > -\infty$ by assumption. Thus Lemma~\ref{l.dontlookback} implies the sought statement. \\ {\bf (2).} Since $a_i + b_i>0$, $\textrm{ABMN}(1)$ may be written in the form $m_i = \tfrac{a_i}{a_i+b_i}m_{i+1} + \tfrac{b_i}{a_i+b_i}m_{i-1} - a_i$ or equivalently $m_i = \egameplay{S_-}{S_+}{i} [m(X_1)] \, - \, a_i$. Iterating, we find that \begin{equation}\label{e.mexpand} m_i \, = \, \egameplay{S_-}{S_+}{i,\ell} \, [m(X_{u+1})] \, - \, \egameplay{S_-}{S_+}{i,\ell} \, \sum_{t=\ell}^u a_{X(t)} \end{equation} for any $u \in \N$, $u \geq \ell$. The value of $\lim_{u \to \infty} m(X_u)$ exists on the event $E$, equalling $m_\infty$ or $m_{-\infty}$ according to whether $E_+$ or $E_-$ occurs. By Lemma~\ref{l.minipayoff}(1), we see that $\lim_{u \to \infty} \egameplay{S_-}{S_+}{i,\ell} \, [ m(X_{u+1})]$ equals $m_\infty \cdot \pgameplay{S_-}{S_+}{i,\ell} (E_+) + m_{-\infty} \cdot \pgameplay{S_-}{S_+}{i,\ell} (E_-)$. In the notation of Lemma~\ref{l.stopping}, we find by taking the high-$u$ limit of the preceding display that $m_i$ equals the right-hand side of~(\ref{e.stopping}) with $k=i$ and $Q$ identically equal to infinity. Thus, Lemma~\ref{l.stopping} implies that $m_i = \E_{(S_-,S_+)}^{i,\ell} [P_+]$. That $n_i = \E_{(S_-,S_+)}^{i,\ell} [ P_-]$ is similarly proved.
\\ {\bf (3).} We may obtain~(\ref{e.mexpand}) with $X$ replaced by its stopped version $X^{j,k}$. By taking the high-$u$ limit, we find that $m_i$ equals the right-hand side of~(\ref{e.stopping}) with $k=i$ and $Q = \tau^{j,k}$. From Lemma~\ref{l.stopping} we thus find that $m_i = \E_{(S_-,S_+)}^{i,\ell} [P^{j,k}_+]$. That $n_i = \E_{(S_-,S_+)}^{i,\ell} [ P^{j,k}_-]$ follows similarly. This completes the proof of Lemma~\ref{l.minipayoff}(3). \qed Some simple relationships between escape in the finite and infinite trail games are now recorded. We define the events $E_-[j,k] = \big\{ X(\tau^{j,k}) = - j-1 \big\}$ and $E_+[j,k] = \big\{ X(\tau^{j,k}) = k+1 \big\}$. \begin{lemma}\label{l.eminuseplus} We have that $$ E_- \, = \, \bigcup_{k=1}^\infty \bigcap_{j=1}^\infty E_-[j,k] \, \, \, \, \textrm{and} \, \, \, \, E_+ \, = \, \bigcup_{j=1}^\infty \bigcap_{k=1}^\infty E_+[j,k] \, . $$ \end{lemma} {\bf Proof.} These follow from the definitions of the events $E_-$ and $E_+$. \qed \begin{lemma}\label{l.mn} We have that \begin{equation}\label{e.mn} \lim_{k \to \infty } \ensuremath{\mathbb{P}}_{S_-,S_+}^i \bigg( E_- \setminus \Big\{ \lim_{j \to \infty} m\big( X_{\tau^{j,k}} \big) = m_{-\infty} \Big\} \bigg) = 0 \end{equation} and $$ \lim_{j \to \infty } \ensuremath{\mathbb{P}}_{S_-,S_+}^i \bigg( E_+ \setminus \Big\{ \lim_{k \to \infty} m\big( X_{\tau^{j,k}} \big) = m_\infty \Big\} \bigg) = 0 \, . $$ These statements are also valid if we replace all instances of $m$ by $n$. \end{lemma} {\bf Proof}. By Lemma~\ref{l.eminuseplus} for $E_-$, we see that, on this event, there exists a random value $K \in \N_+$ such that, for all $j \in \N_+$, $X(\tau^{j,K}) = -j-1$. Since $m_{-i} \to m_{-\infty}$ as $i \to \infty$, we see that, on $E_-$, $\lim_j m\big( X(\tau^{j,K})\big) = m_{-\infty}$. Thus, we obtain~(\ref{e.mn}). The other three assertions made by the lemma have the same proof up to evident notational changes.
\qed \section{The structure of time-invariant Nash equilibria}\label{s.nashabmn} The aim of this section is to prove Theorem~\ref{t.nashabmn}, our result that relates time-invariant Nash equilibria and positive \textrm{ABMN} solutions. On the way to this result, we will establish some basic properties of time-invariant Nash equilibria. In the first subsection, we prove Theorem~\ref{t.nashabmn}(1) alongside some simple properties of strategy pairs. The second proves Theorem~\ref{t.nashabmn}(2). \subsection{Time-invariant Nash equilibria result in positive \textrm{ABMN} solutions} Here, we prove Theorem~\ref{t.nashabmn}(1). Our style of argument is hands-on: we build up inferences on the behaviour of a time-invariant Nash equilibrium step-by-step. With one exception: to close out the proof, we will invoke the unanimity Theorem~\ref{t.unanimity}(2,3), which is argued independently by explicit solution of the $\textrm{ABMN}$ system in Section~\ref{s.battlefield}. Recall the mean payoff notation~(\ref{e.minapayoff}) and~(\ref{e.maxinepayoff}). A strategy pair $(S_-,S_+) \in \mathcal{S}^2$ has {\em finite mean costs} if neither $\E^k_{S_-,S_+}[P_-]$ nor $\E^k_{S_-,S_+}[P_+]$ equals minus infinity, for any $k \in \ensuremath{\mathbb{Z}}$. Let $(S_-,S_+) \in \mc{S}_0^2$. We adopt our standard convention of writing $b_i = S_-(i,j)$ and $a_i = S_+(i,j)$ for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$. The {\em idle zone} $\mathcal{I} \subset \ensuremath{\mathbb{Z}}$ is given by $\mathcal{I} = \big\{ j \in \ensuremath{\mathbb{Z}}: a_j = b_j = 0 \big\}$. \begin{lemma}\label{l.idlezone} Let $(S_-,S_+) \in \mc{S}_0^2$ be such that $\mathcal{I} \not= \emptyset$. For $k \in \ensuremath{\mathbb{Z}}$, consider the gameplay $X:\ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{Z}}$ under $\pgameplay{S_-}{S_+}{k}$. For $i \in \N$ given, condition on the event that $X_i$ is a given element of $\mathcal{I}$. (If $i = 0$, suppose that $k \in \mathcal{I}$.)
Let $j = \inf \big\{ m \geq i: X_m \not\in \mathcal{I} \big\}$. Then the conditional law of $X: \llbracket i,j\rrbracket \to \ensuremath{\mathbb{Z}}$ is that of a simple random walk begun at the given value of $X_i$ and stopped on leaving~$\mathcal{I}$. \end{lemma} {\bf Proof.} At each turn with index in $\llbracket i,j-1 \rrbracket$, neither Mina nor Maxine offers a positive stake, since the $b$ and $a$ values vanish in the idle zone. The gameplay increments~$X(h+1) - X(h)$ for $h \in \llbracket i,j-1 \rrbracket$ are thus unbiased $\pm 1$ steps as determined by the $0/0 = 1/2$ rule that was specified in Section~\ref{s.gamespec}. \qed Recall the escape event $E$ from~(\ref{e.escape}). An element of $\mathcal{S}_0^2$ is non-zero when at least one of its components is not identically zero. \begin{proposition}\label{p.fmc} Let $(S_-,S_+) \in \mc{S}_0^2$ be non-zero, with finite mean costs. Then escape occurs almost surely: $\pgameplay{S_-}{S_+}{k}(E)=1$ for $k \in \ensuremath{\mathbb{Z}}$. \end{proposition} {\bf Proof.} Let $k \in \ensuremath{\mathbb{Z}}$ and suppose that $\pgameplay{S_-}{S_+}{k}(E^c)>0$. We may find $\ell \in \ensuremath{\mathbb{Z}}$ such that it is with positive probability that the process $X$, under the law $\pgameplay{S_-}{S_+}{k}$, visits $\ell$ infinitely often. Since $(S_-,S_+)$ is time-invariant, the strong Markov property implies that \begin{equation}\label{e.infinitelyoften} \pgameplay{S_-}{S_+}{k} \Big( \textrm{$X$ visits $\ell$ infinitely often} \, \Big\vert \, \textrm{$X$ visits $\ell$ at least once} \, \Big) \, = \, 1 \, . \end{equation} Let $i \in \ensuremath{\mathbb{Z}} \cup \{ - \infty \}$, $j \in \ensuremath{\mathbb{Z}} \cup \{ \infty\}$, $i \leq j$, be such that at least one of $i$ and $j$ is finite; $\ell \in \llbracket i,j \rrbracket$; $a_m = b_m = 0$ for $m \in \ensuremath{\mathbb{Z}} \cap (i,j)$; and at least one of $a_m$ and $b_m$ is positive for any endpoint $m \in \{ i,j \}$ that is finite.
(It may be that $i=j=\ell$; in this case, some of these conditions are vacuous. In the other event, $i < \ell < j$.) Suppose that $i < \ell < j$. Note that $\llbracket i+1,j-1 \rrbracket \subset \mathcal{I}$. We now consider the conditional law of $X$ under $\pgameplay{S_-}{S_+}{k}$ given that $X$ visits $\ell$ infinitely often. We invoke~(\ref{e.infinitelyoften}) to note that the conditioning disappears at the first visit of $X$ to $\ell$. Lemma~\ref{l.idlezone} thus implies that, on each occasion that $X$ visits~$\ell$, $X$ pursues a simple random walk until it reaches $i$ or $j$. Suppose, without loss of generality, that the index $i$ is finite, and that $a_i > 0$. It is with probability at least $2^{-(\ell-i)}$ that $X$ proceeds from a visit to $\ell$ by means of a string of leftward steps to reach $i$. Later, the conditioned walk $X$ inevitably returns to $\ell$, and a further opportunity to reach $i$ directly ensues. Thus, $X$ will infinitely often visit~$i$, a location to which $a$ assigns positive value. (Note that this conclusion also holds trivially in the opposing case, where $i=j=\ell$.) The cost $\sum_{t \geq 1} C_+(t)$ incurred by Maxine (which is specified in Section~\ref{s.gamespec}) is thus seen to be almost surely infinite on the $\pgameplay{S_-}{S_+}{k}$-positive probability event that $X$ visits $\ell$ infinitely often. (Were $b_i$ instead supposed positive, then it would be Mina's cost $\sum_{t \geq 1}C_-(t)$ that is found to be infinite.) This is contrary to our assumption that $(S_-,S_+)$ has finite mean costs. We conclude, as desired, that $\pgameplay{S_-}{S_+}{k}(E) = 1$. \qed For $S \in \mc{S}_0$, let $\textrm{Left}(S) \in \ensuremath{\mathbb{Z}} \cup \{ -\infty\} \cup \{\infty\}$ denote $\inf \{ i \in \ensuremath{\mathbb{Z}} : S(i,1) > 0 \}$; and let $\textrm{Right}(S) \in \ensuremath{\mathbb{Z}} \cup \{ -\infty\} \cup \{\infty\}$ denote $\sup \{ i \in \ensuremath{\mathbb{Z}} : S(i,1) > 0 \}$. 
We say that $S$ is {\em wide} if $\textrm{Left}(S) = -\infty$ and $\textrm{Right}(S) = \infty$; if $S$ is not wide, it is {\em narrow}. The right rocket $\eta \cdot \textrm{Rocket}^{i\rightarrow}$ at $i \in \ensuremath{\mathbb{Z}}$ of strength $\eta \in (0,\infty)$ is the element of $\mc{S}_0$ given by $$ \eta \cdot \textrm{Rocket}^{i\rightarrow}_j \, = \, \eta \cdot 2^{-(j-i)-1} {\bf 1}_{j \geq i} \, \, \, , \, \, \, j \in \ensuremath{\mathbb{Z}} \, . $$ The counterpart left rocket $\eta \cdot \textrm{Rocket}^{\leftarrow i} \in \mc{S}_0$ is $$ \eta \cdot \textrm{Rocket}^{\leftarrow i}_j \, = \, \eta \cdot 2^{-(i-j)-1} {\bf 1}_{j \leq i} \, \, \, , \, \, \, j \in \ensuremath{\mathbb{Z}} \, . $$ The right boost at $i \in \ensuremath{\mathbb{Z}}$ of strength $\eta$ is the map $\textrm{Boost}_\eta^{i\rightarrow}:\mc{S}_0 \to \mc{S}_0$ that sends $q = (q_i: i \in \ensuremath{\mathbb{Z}}) \in \mc{S}_0$ to $q + \eta \cdot \textrm{Rocket}^{i\rightarrow}$. The corresponding left boost $\textrm{Boost}_\eta^{\leftarrow i}:\mc{S}_0 \to \mc{S}_0$ sends $q$ to $q + \eta \cdot \textrm{Rocket}^{\leftarrow i}$. The right drag at $i \in \ensuremath{\mathbb{Z}}$ is the map $\textrm{Drag}^{i\rightarrow}:\mc{S}_0 \to \mc{S}_0$ that sends $q \in \mc{S}_0$ to the map $$ \ensuremath{\mathbb{Z}} \to [0,\infty ): j \to \, \, \begin{cases} \, q_j/2 & \text{if $j \geq i$} \\ \, q_j & \text{if $j < i$} \, . \end{cases} $$ The counterpart left drag $\textrm{Drag}^{\leftarrow i}:\mc{S}_0 \to \mc{S}_0$ sends $q \in \mc{S}_0$ to $$ \ensuremath{\mathbb{Z}} \to [0,\infty ): j \to \, \, \begin{cases} \, q_j/2 & \text{if $j \leq i$} \\ \, q_j & \text{if $j > i$} \, . \end{cases} $$ \begin{lemma}\label{l.boostdrag} Let $(S_-,S_+) \in \mc{S}_0^2$. \begin{enumerate} \item Suppose that the quantities $\textrm{Right}(S_-)$ and $\textrm{Right}(S_+)$ are finite. Let $i \in \ensuremath{\mathbb{Z}}$ exceed their maximum.
Then $\egameplay{S_-}{\textrm{Boost}_\eta^{i\rightarrow}(S_+)}{i}[P_+] > \egameplay{S_-}{S_+}{i}[P_+]$ for $\eta \in (0,m_\infty - m_{-\infty})$. \item Suppose that $\textrm{Right}(S_+) = \infty$ and $\textrm{Right}(S_-) < \infty$. Let $i \in \ensuremath{\mathbb{Z}}$, $i > \textrm{Right}(S_-)$, satisfy $S_+(i,1) > 0$. Then $\egameplay{S_-}{\textrm{Drag}^{i\rightarrow}(S_+)}{i}[P_+] > \egameplay{S_-}{S_+}{i}[P_+]$. \item If $\textrm{Left}(S_-)$ and $\textrm{Left}(S_+)$ exceed $-\infty$ and $i \in \ensuremath{\mathbb{Z}}$ is less than their minimum, then, provided that $\eta \in (0,n_{-\infty} - n_\infty)$, we have that $\egameplay{\textrm{Boost}_\eta^{\leftarrow i}(S_-)}{S_+}{i}[P_-] > \egameplay{S_-}{S_+}{i}[P_-]$. \item If $\textrm{Left}(S_-) = -\infty$ and $\textrm{Left}(S_+) > - \infty$ and $i \in \ensuremath{\mathbb{Z}}$, $i < \textrm{Left}(S_+)$, satisfies $S_-(i,1) > 0$, then $\egameplay{\textrm{Drag}^{\leftarrow i}(S_-)}{S_+}{i}[P_-] > \egameplay{S_-}{S_+}{i}[P_-]$. \end{enumerate} \end{lemma} {\bf Proof: (1).} The idle zone $\mathcal{I}$ determined by $(S_-,S_+)$ includes $\llbracket i,\infty)$. By Lemma~\ref{l.idlezone}, $X$ under $\pgameplay{S_-}{S_+}{i}$ thus behaves as a simple random walk when it visits $\llbracket i,\infty)$. Right escape $E_+$ is thus impossible, so Maxine's mean terminal payoff $\egameplay{S_-}{S_+}{i} [T_+]$ is at most $m_{-\infty}$ because it is a weighted average of $m_*$ and $m_{-\infty}$. Since $P_+ \leq T_+$ in view of running costs $C_+$ in~(\ref{e.maxinepayoff}) being non-negative, we find that $\egameplay{S_-}{S_+}{i}[P_+] \leq m_{-\infty}$. Now consider $\pgameplay{S_-}{\textrm{Boost}_\eta^{i\rightarrow}(S_+)}{i}$. Under this law, $X$ proceeds non-randomly by rightward steps, so that $E_+$ occurs almost surely. Since $E_+$ occurs, we have $T_+ = m_\infty$ almost surely. By the non-random rightward movement, we further have that $\sum_{t=1}^\infty C_+(t)$ equals $\sum_{t=1}^\infty \eta \cdot 2^{-t} = \eta$.
By~(\ref{e.maxinepayoff}), we see then that $\egameplay{S_-}{\textrm{Boost}_\eta^{i\rightarrow}(S_+)}{i}[P_+] = m_\infty - \eta$. This confirms Lemma~\ref{l.boostdrag}(1). {\bf (2).} Under gameplay governed by the law $\pgameplay{S_-}{S_+}{i}$, Maxine offers a positive stake at $i$, and at infinitely many locations to its right, while Mina offers no stake at or to the right of $i$. Thus $X:\ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{Z}}$, $X_0 = i$, remains always at or to the right of $i$, and tends to infinity. If Maxine switches from $S_+$ to $\textrm{Drag}^{i\rightarrow}(S_+)$, the law of gameplay $X$ is unaffected, because the original and altered gameplays may be coupled so that Maxine's altered stake process is one-half of her original one, while Mina's remains identically zero---with the result that Maxine wins exactly the same turns in the altered gameplay as she did in the original one. The value of $T_+$ is thus almost surely equal to $m_\infty$ under $\pgameplay{S_-}{\textrm{Drag}^{i\rightarrow}(S_+)}{i}$ as well as under $\pgameplay{S_-}{S_+}{i}$. But $\egameplay{S_-}{\textrm{Drag}^{i\rightarrow}(S_+)}{i} \sum_{t=1}^\infty C_+(t) = \tfrac{1}{2} \egameplay{S_-}{S_+}{i} \sum_{t=1}^\infty C_+(t) $ and $\egameplay{S_-}{S_+}{i} \sum_{t=1}^\infty C_+(t) \geq \egameplay{S_-}{S_+}{i} [C_+(1)] > 0$, so that $\egameplay{S_-}{\textrm{Drag}^{i\rightarrow}(S_+)}{i} \sum_{t=1}^\infty C_+(t) < \egameplay{S_-}{S_+}{i} \sum_{t=1}^\infty C_+(t)$. In summary, the switch to the altered strategy has maintained Maxine's terminal receipt but has reduced her running costs, so that Lemma~\ref{l.boostdrag}(2) holds by~(\ref{e.maxinepayoff}). {\bf (3,4).} The preceding proofs may be readily adapted to prove these statements.
\qed \begin{lemma}\label{l.zeronotnash} \leavevmode \begin{enumerate} \item Any element of $\mathcal{N}$ has finite mean costs.\footnote{When the value of $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ is clear---and it is usually a generic quadruple satisfying~(\ref{e.quadruple})---we will often omit to record this notation when we denote~$\mathcal{N}$. This includes the present case, where such a generic value is specified by the result, Theorem~\ref{t.nashabmn}(1), that we are seeking to prove.} \item If $(S_-,S_+) \in \mc{S}_0^2$ is an element of~$\mathcal{N}$ then $S_-$ and $S_+$ are wide. \end{enumerate} \end{lemma} In the ensuing proof and later, we denote the identically zero strategy by $0$. {\bf Proof of Lemma~\ref{l.zeronotnash}(1).} Let $(S_-,S_+) \in \mathcal{N}$, and let $i \in \ensuremath{\mathbb{Z}}$. Note that $ \egameplay{S_-}{S_+}{i} [P_+] \geq \egameplay{S_-}{0}{i} [P_+]$. In evaluating the latter term, note that no running costs to Maxine have been incurred, so that the quantity is an average of terminal receipts $m_\infty$, $m_{-\infty}$ and $m_*$ to Maxine in the events $E_+$, $E_-$ and $E^c$. We see that $\egameplay{S_-}{0}{i} [P_+] \geq \min \{ m_{-\infty},m_\infty,m_* \} = m_* > -\infty$, the latter inequality by assumption. Likewise, $\egameplay{S_-}{S_+}{i}[ P_-] > -\infty$. Since terminal receipts are bounded, these lower bounds on the mean payoffs entail that the mean running costs of both players are finite. {\bf (2).} We argue by contradiction and suppose, without loss of generality---for the other case is similar---that $S_-$ is narrow. (Lemma~\ref{l.boostdrag}(4) is not used in the ensuing proof. It is needed for the case whose proof we omit.) Either $\textrm{Left}(S_-) > -\infty$ or $\textrm{Right}(S_-) < \infty$. Suppose that $\textrm{Right}(S_-) < \infty$. If $\textrm{Right}(S_+) < \infty$, then Lemma~\ref{l.boostdrag}(1) provides a strategy $\hat{S}_+$ to Maxine along with a value of $i \in \ensuremath{\mathbb{Z}}$ such that $\egameplay{S_-}{\hat{S}_+}{i}[P_+] > \egameplay{S_-}{S_+}{i}[P_+]$. But this is contrary to $(S_-,S_+) \in \mathcal{N}$. 
If $\textrm{Right}(S_+) = \infty$, then it is Lemma~\ref{l.boostdrag}(2) that provides such $\hat{S}_+ \in \mc{S}_0$ and $i \in \ensuremath{\mathbb{Z}}$. A contradiction has thus been found in the case that $\textrm{Right}(S_-) < \infty$. Suppose now that $\textrm{Left}(S_-) > -\infty$. If $\textrm{Left}(S_+) > -\infty$, then Lemma~\ref{l.boostdrag}(3) furnishes a strategy $\hat{S}_-$ for Mina and an index $i \in \ensuremath{\mathbb{Z}}$ for which $\egameplay{\hat{S}_-}{S_+}{i}[P_-] > \egameplay{S_-}{S_+}{i}[P_-]$ holds, contrary to $(S_-,S_+) \in \mathcal{N}$. The case that $\textrm{Left}(S_-) > -\infty$ and $\textrm{Left}(S_+) = -\infty$ remains. The pair $(S_-,S_+) \in \mc{S}_0^2 \cap \mathcal{N}$ is non-zero, because $S_+$ is; it has finite mean costs by Lemma~\ref{l.zeronotnash}(1). Thus $\pgameplay{S_-}{S_+}{i}(E^c) = 0$ by Proposition~\ref{p.fmc}. Select $i \in \ensuremath{\mathbb{Z}}$ for which $S_+(i,1) > 0$ and $S_-(j,1) = 0$ for $j \in (-\infty, i \rrbracket$. Note that $\pgameplay{S_-}{S_+}{i}(E_-) = 0$ because gameplay $X$ is at least $i$ almost surely. Thus, $\pgameplay{S_-}{S_+}{i}(E_+) = 1$, so that $T_+ = m_\infty$ almost surely. If Maxine drags down her strategy $S_+$ by replacing the stake she offers at $i$ with one-half of its value, the resulting strategy $\hat{S}_+$ is such that gameplay $X:\ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{Z}}$ is equal under the laws $\pgameplay{S_-}{S_+}{i}$ and $\pgameplay{S_-}{\hat{S}_+}{i}$; $T_+ = m_\infty$ almost surely under each of them; but $\sum_{t=1}^\infty C_+(t)$ is almost surely less under $\pgameplay{S_-}{\hat{S}_+}{i}$ than it is under $\pgameplay{S_-}{S_+}{i}$, because the value of $C_+(1)$ is lower. Thus~(\ref{e.maxinepayoff}) shows that $\egameplay{S_-}{\hat{S}_+}{i}[P_+] > \egameplay{S_-}{S_+}{i}[P_+]$. Again, we have a contradiction to $(S_-,S_+) \in \mathcal{N}$. This completes the proof of Lemma~\ref{l.zeronotnash}(2). 
\qed \begin{corollary}\label{c.nashescape} For $(S_-,S_+) \in \mathcal{N} \cap \mc{S}_0^2$ and $i \in \ensuremath{\mathbb{Z}}$, $\pgameplay{S_-}{S_+}{i}(E) = 1$. \end{corollary} {\bf Proof.} Due to Proposition~\ref{p.fmc} and Lemma~\ref{l.zeronotnash}(1,2). \qed Recall that an element $(S_-,S_+) \in \mc{S}_0^2$ may be identified as a sequence $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\}$ to which Definition~\ref{d.quadruple} associates the sequence of quadruples $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$. \begin{lemma}\label{l.mnincdec} Suppose that $(S_-,S_+) \in \mathcal{N} \cap \mc{S}_0^2$. Then $m_i \leq m_{i+1}$ and $n_{i+1} \leq n_i$ for $i \in \ensuremath{\mathbb{Z}}$. \end{lemma} {\bf Proof.} Recall that $\pgameplay{S_-}{S_+}{i}$ denotes the law of gameplay when $X_0 = i$. Let $\sigma_{i+1} \in \N_+ \cup \{ \infty \}$ denote the stopping time $\inf \big\{ \ell \in \N_+ : X_\ell = i+1 \big\}$. Noting the non-negativity of running costs $C_+(t)$, we apply Lemma~\ref{l.stopping} with $k=i$ and $Q = \sigma_{i+1}$ to find that $$ \egameplay{S_-}{S_+}{i} [P_+] \, \leq \, \egameplay{S_-}{S_+}{i} \big[ \egameplay{S_-}{S_+}{X(\sigma_{i+1})}[P_+] \big] \, , $$ whose left-hand side equals $m_i$ by definition and whose right-hand side takes the form $$ m_{i+1} \pgameplay{S_-}{S_+}{i} \big( \sigma_{i+1} < \infty \big) + m_{-\infty} \pgameplay{S_-}{S_+}{i} \big( \sigma_{i+1} = \infty, E \big) + m_* \pgameplay{S_-}{S_+}{i} \big( \sigma_{i+1} = \infty, E^c \big) \, . $$ However, the third term vanishes in view of Corollary~\ref{c.nashescape}. Thus, $m_i$ is seen to be a weighted average of $m_{-\infty}$ and $m_{i+1}$. To conclude, as we seek to do, that $m_i \leq m_{i+1}$, it is thus enough to argue that $m_{-\infty} \leq m_{i+1}$. To obtain this bound, we first {\em claim} that $\egameplay{S_-}{0}{i+1} [P_+] = m_{-\infty}$. To check this, we invoke Lemma~\ref{l.zeronotnash}(2) to say that $S_-$ is wide. 
Thus, $E_-$, and $T_+ = m_{-\infty}$, are $\pgameplay{S_-}{0}{i+1}$-almost certain. The absence of running costs for Maxine means that $P_+ = T_+$ under $\pgameplay{S_-}{0}{i+1}$. This yields the claim. Using it, and $(S_-,S_+) \in \mathcal{N}$, we find that $$ m_{i+1} \, = \, \egameplay{S_-}{S_+}{i+1}[ P_+] \, \geq \, \egameplay{S_-}{0}{i+1} [P_+] = m_{-\infty} \, . $$ We have confirmed that $m_i \leq m_{i+1}$. We omit the similar proof that $n_{i+1} \leq n_i$. This completes the proof of Lemma~\ref{l.mnincdec}. \qed \begin{lemma}\label{l.firstrearranged} Let $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\} \in \mathcal{N} \cap \mc{S}_0^2$. Recall from Definition~\ref{d.quadruple} that $m_i$ equals Maxine's mean receipt when the counter starts at $i \in \ensuremath{\mathbb{Z}}$. Suppose that $a_i + b_i > 0$. Then \begin{equation}\label{e.firstrearranged} m_i = \tfrac{a_i}{a_i + b_i} m_{i+1} + \tfrac{b_i}{a_i + b_i} m_{i-1} - a_i \, . \end{equation} \end{lemma} {\bf Proof.} Maxine will spend $a_i$ at the first turn; she will win the turn with probability $\tfrac{a_i}{a_i + b_i}$; if she does so, the counter will reach $i+1$, and her resulting conditional mean receipt will be $m_{i+1}$; if she does not, this receipt will instead be $m_{i-1}$. Note that the two ratios on the right-hand side of~(\ref{e.firstrearranged}) are well defined, because $a_i + b_i > 0$. \qed \begin{lemma}\label{l.condpositive} Let $(S_-,S_+) \in \mathcal{N} \cap \mc{S}_0^2$, and let $i \in \ensuremath{\mathbb{Z}}$. Then $a_i > 0$ implies that $m_{i+1} > m_i$. And $b_i > 0$ implies that $n_{i-1} > n_i$. \end{lemma} {\bf Proof.} Lemma~\ref{l.firstrearranged} and $a_i > 0$ imply that $m_i < \max \{ m_{i-1},m_{i+1} \}$. But the maximum is attained by $m_{i+1}$ in view of Lemma~\ref{l.mnincdec}. The second assertion in the lemma is similarly obtained. \qed \begin{proposition}\label{p.allpositive} Let $(S_-,S_+) \in \mathcal{N} \cap \mc{S}_0^2$. 
Then $a_i > 0$, $b_i > 0$, $m_{i+1} > m_i$ and $n_i > n_{i+1}$ for all~$i \in \ensuremath{\mathbb{Z}}$. \end{proposition} {\bf Proof.} By Lemma~\ref{l.zeronotnash}(2), $S_+$ is wide. To show that every $a$-coefficient is positive, it is thus enough to argue that $a_i > 0$ implies $a_{i+1} > 0$ for $i \in \ensuremath{\mathbb{Z}}$, because every index $i \in \ensuremath{\mathbb{Z}}$ has a positive $a$-coefficient indexed somewhere to its left. Suppose to the contrary that $a_i > 0$ but $a_{i+1} = 0$. Applying~(\ref{e.firstrearranged}) at index~$i+1$, we see that $b_{i+1} > 0$ implies that $m_{i+1} = m_i$. But Lemma~\ref{l.condpositive} and $a_i > 0$ imply that $m_{i+1} > m_i$. Thus, $b_{i+1} = 0$. In view of $a_{i+1} = 0$, we see from~(\ref{e.firstrearranged}) at index $i+1$ (with use of the $0/0 = 1/2$ rule) that $m_{i+1} = \tfrac{m_i + m_{i+2}}{2}$. However: given that $b_{i+1} = 0$, the same equation shows that a sufficiently small positive choice of $a_{i+1}$ would yield a value for $m_{i+1}$ which is arbitrarily close to $m_{i+2}$, a quantity that exceeds $\tfrac{m_i + m_{i+2}}{2}$ because (in view of Lemma~\ref{l.condpositive} applied with $a_i > 0$, alongside Lemma~\ref{l.mnincdec}) we have the bound $m_{i+2} > m_i$. Thus, $(S_-,S_+) \not\in \mathcal{N}$, contrary to assumption. We have confirmed that $a_{i+1} > 0$, and thus that every $a$-coefficient is positive. The argument that $b_i > 0$ for $i \in \ensuremath{\mathbb{Z}}$ is no different. Lemma~\ref{l.condpositive} then shows that each difference $m_{i+1} - m_i$ and $n_i - n_{i+1}$ is positive. This completes the proof of Proposition~\ref{p.allpositive}. \qed We may now prove the first part of Theorem~\ref{t.nashabmn}. {\bf Proof of Theorem~\ref{t.nashabmn}(1).} Suppose that $(S_-,S_+) \in \mc{S}_0^2$ is a time-invariant Nash equilibrium for ${\rm Trail}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. 
We abusively identify $(S_-,S_+)$ with the sequence $\big\{ (b_i,a_i): i \in \ensuremath{\mathbb{Z}} \big\} \in \mc{S}_0^2$ as usual (and, by doing so, we conform notation with the theorem's statement). We note at the outset that, in view of Proposition~\ref{p.allpositive}, each $a_i$ and $b_i$, and each difference $m_{i+1} - m_i$ and $n_i - n_{i+1}$, is positive. Equation \textrm{ABMN}$(1)$ results from rearranging the formula in Lemma~\ref{l.firstrearranged}. Equation \textrm{ABMN}$(2)$ is similarly derived. Next we derive \textrm{ABMN}$(3,4)$. Recall $S_-(i,j) = b_i$ and $S_+(i,j)=a_i$ for each $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$. For given $i \in \ensuremath{\mathbb{Z}}$, we will consider a perturbed strategy $\hat{S}_+ \in \mathcal{S}$ for Maxine in which only her first-turn stake is altered, and only then if the counter is at $i$. In this way, $\hat{S}_+(j,k) = a_j$ for $j \in \ensuremath{\mathbb{Z}}$ and $k \geq 2$; and also for $k=1$ and $j \in \ensuremath{\mathbb{Z}}$, $j \not= i$. We let $\eta > -a_i$ be small in absolute value, and set $\hat{S}_+(i,1) = a_i + \eta$. The {\em original} scenario refers to the law $\pgameplay{S_-}{S_+}{i}$, which records counter evolution~$X:\N \to \ensuremath{\mathbb{Z}}$ given the initial condition $X_0 = i$ under the strategy pair $(S_-,S_+)$. The {\em altered} scenario refers to the same law, instead governed by the pair $(S_-,\hat{S}_+)$. Let $O_+$ and $A_+$ denote the mean payoff to Maxine in the original and altered scenarios: that is, $O_+ = \egameplay{S_-}{S_+}{i} [P_+]$ and $A_+ = \egameplay{S_-}{\hat{S}_+}{i} [P_+]$. 
Then $$ O_+ = \tfrac{a_i}{a_i+b_i} m_{i+1} + \tfrac{b_i}{a_i+b_i} m_{i-1} - a_i \, \, \, \textrm{and} \, \, \, A_+ = \tfrac{a_i+\eta}{a_i+\eta+ b_i} m_{i+1} + \tfrac{b_i}{a_i+\eta+b_i} m_{i-1} - a_i - \eta \, , $$ so that \begin{equation}\label{e.aodifference} A_+ - O_+ \, = \, \Big( \tfrac{b_i}{(a_i+b_i)^2} (m_{i+1} - m_{i-1}) - 1 \Big) \cdot \eta \cdot \big( 1 + o(1) \big) \, , \end{equation} where the $o(1)$ term is small in the sense of $\vert \eta \vert \to 0$. Since $(S_-,S_+) \in \mathcal{N}$, $A_+$ is at most $O_+$, whatever the value of $\eta > - a_i$. The derivative in $\eta$ of $A_+ - O_+$ thus vanishes at zero, so that $\tfrac{b_i}{(a_i+b_i)^2} (m_{i+1} - m_{i-1}) - 1 = 0$ or equivalently \begin{equation}\label{e.bma} b_i (m_{i+1} - m_{i-1}) = (a_i+b_i)^2 \, . \end{equation} We now consider the same original scenario alongside a new altered scenario in which it is Mina who adopts a perturbed strategy $\hat{S}_-$ (as a function of a given choice of $i \in \ensuremath{\mathbb{Z}}$). Analogously to what we have done, we choose $\eta > - b_i$, and set $\hat{S}_-(j,k) = b_j$ for $j \in \ensuremath{\mathbb{Z}}$ and $k \geq 2$ or when $k=1$ and $j \in \ensuremath{\mathbb{Z}}$, $j\not=i$; and then we set $\hat{S}_-(i,1) = b_i + \eta$. We denote by $O_-$ and $A_-$ Mina's mean payoff in the original and in the newly altered scenarios; to wit, $O_- = \egameplay{S_-}{S_+}{i} [P_-]$ and $A_- = \egameplay{\hat{S}_-}{S_+}{i} [P_-]$. We find then that $$ O_- = \tfrac{b_i}{a_i+b_i} n_{i-1} + \tfrac{a_i}{a_i+b_i} n_{i+1} - b_i \, \, \, \textrm{and} \, \, \, A_- = \tfrac{b_i+\eta}{a_i+\eta+ b_i} n_{i-1} + \tfrac{a_i}{a_i+\eta+b_i} n_{i+1} - b_i - \eta \, ; $$ and, analogously to~(\ref{e.aodifference}), $$ A_- - O_- \, = \, \Big( \tfrac{a_i}{(a_i+b_i)^2} (n_{i-1} -n_{i+1}) - 1 \Big) \cdot \eta \cdot \big( 1 + o(1) \big) \, . $$ The condition that $(S_-,S_+) \in \mathcal{N}$ ensures that $O_- \geq A_-$, whatever the value of $\eta > - b_i$. 
Thus, \begin{equation}\label{e.anb} a_i (n_{i-1} - n_{i+1}) = (a_i+b_i)^2 \, . \end{equation} The derived equations~(\ref{e.bma}) and~(\ref{e.anb}) are \textrm{ABMN}$(3,4)$ with index $i$. We have established that $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ solves the \textrm{ABMN} equations. To complete the proof of Theorem~\ref{t.nashabmn}(1), it remains to confirm that the boundary values~(\ref{e.boundarydata}) are achieved. We will argue that $\lim_{i \to \infty} m_{-i} = m_{-\infty}$; the three other limits are similarly shown. The sequence $\big\{ m_{-i}: i \in \N \big\}$ decreases by Proposition~\ref{p.allpositive} to a limiting value that we call $\mathfrak{m}_{-\infty}$. Since $m_i = \egameplay{S_-}{S_+}{i} [P_+] \geq \egameplay{S_-}{0}{i} [P_+] = m_{-\infty}$, we have that $\mathfrak{m}_{-\infty} \geq m_{-\infty}$; we wish to obtain the opposite inequality. By removing non-negative running costs from the right-hand side of the expression for $m_i$ in Lemma~\ref{l.minipayoff}(2), we see that $m_i \leq \pgameplay{S_-}{S_+}{i}(E_-)\cdot m_{-\infty} + \pgameplay{S_-}{S_+}{i}(E_+) \cdot m_\infty$, where we invoked Corollary~\ref{c.nashescape}. Thus $\mathfrak{m}_{-\infty} \leq m_{-\infty}$ provided that we argue that $\lim_{i \to -\infty} \pgameplay{S_-}{S_+}{i}(E_+) = 0$: far to the left is the domain of Mina's likely victory. It would be of interest to argue this directly; and to do so would be more in keeping with the style of this section. It is quicker however to simply invoke the eventual gameplay unanimity Theorem~\ref{t.unanimity}(3), which will be proved by independent arguments when we find an explicit solution of the \textrm{ABMN} system in Section~\ref{s.battlefield}. (Theorem~\ref{t.unanimity}(2) is invoked in the corresponding place in two of the three omitted limit derivations.) We have thus obtained Theorem~\ref{t.nashabmn}(1). \qed \subsection{The reverse implication} Here we prove Theorem~\ref{t.nashabmn}(2). 
It is here that the infinite-turn nature of the game has to be tamed by comparison with finite-trail counterparts. We begin by developing definitions and results that will lead to the proof of the desired result at the end of the section. As such, we now enforce the notation in the hypothesis of Theorem~\ref{t.nashabmn}(2), so that, from now on, $$ \text{$\big\{ (a_i,b_i,m_i,n_i) : i \in \ensuremath{\mathbb{Z}} \big\}$ denotes a positive solution of the \textrm{ABMN} equations} $$ with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ that satisfies~(\ref{e.quadruple}). Let $S_-,S_+ \in \mathcal{S}$ satisfy \begin{equation}\label{e.ba} S_-(i,j) = b_i \, \, \,\,\textrm{and} \, \, \, \, S_+(i,j) = a_i \, \, \, \, \textrm{for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$} \, . \end{equation} Recall that $S_+ = a$ with the usual notational abuse. \begin{definition}\label{d.mds} Let $i \in \ensuremath{\mathbb{Z}}$. The forward play-cone $F_i$ of $i$ is given by $$ F_i \, = \, \Big\{ \, (k,\ell) \in \ensuremath{\mathbb{Z}} \times \N_+: \vert k - i \vert \leq \ell \, , \, \vert k-i \vert + \ell \in 2\N \, \Big\} \, . $$ This is the set of space-time sites that are in principle accessible for gameplay $X:\N \to \ensuremath{\mathbb{Z}}$ under $\pgameplay{S_1}{S_2}{i}$ for some strategy pair $(S_1,S_2) \in \mathcal{S}^2$. Let $S \in \mathcal{S}$. An element $(q,\ell) \in F_i$ such that $S(q,\ell+1) \not= b_q$ is called a {\em Mina deviation point}. The Mina deviation set $\mathsf{D}_-(S,i) \subseteq F_i$ is the collection of Mina deviation points. The strategy $S$ is called {\em deviating for Mina} if $\mathsf{D}_-(S,i)$ is non-empty. A {\em Maxine deviation point} $(q,\ell) \in F_i$ satisfies $S(q,\ell+1) \not= a_q$. The set $\mathsf{D}_+(S,i)$ of such points is the Maxine deviation set; if $\mathsf{D}_+(S,i) \not= \emptyset$, then $S$ is deviating for Maxine. 
\end{definition} When gameplay under $\pgameplay{S}{S_+}{i}$ runs through a Mina deviation point---when $X_\ell = q$ for $(q,\ell) \in \mathsf{D}_-(S,i)$---her stake according to strategy $S$---namely, $S(q,\ell+1)$---may be viewed as a mistake when her opponent plays her element $S_+$ of the putative Nash equilibrium $(S_-,S_+)$. The next result, which is fundamental to proving Theorem~\ref{t.nashabmn}(2), validates this notion. It measures the magnitude of the mistakes that result from a player's deviation in the sense of decrease in mean payoff in finite trail games. It finds the mistakes to be uniformly costly as the finite trails vary. \begin{proposition}\label{p.jksup} Let $i \in \ensuremath{\mathbb{Z}}$ be given. \begin{enumerate} \item Let $S_-^{\textrm{dev}} \in \mathcal{S}$ be deviating for Mina. Suppose that $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i(E) = 1$. Then $$ \sup \, \E_{S_-^{\textrm{dev}},S_+}^i [P_-^{j,k}] \, < \, \E_{S_-,S_+}^i [P_-] \, , $$ where the supremum is taken over $j,k \in \N_+$ such that $i \in \llbracket -j,k \rrbracket$ and for which there exists an element $(u,\ell)$ of $\mathsf{D}_-(S_-^{\textrm{dev}},i)$ with $u \in \llbracket -j,k \rrbracket$. \item Now suppose that $S_+^{\textrm{dev}} \in \mathcal{S}$ is deviating for Maxine, and $\ensuremath{\mathbb{P}}_{S_-,S_+^{\textrm{dev}}}^i(E) = 1$. Then $$ \sup \, \E_{S_-,S_+^{\textrm{dev}}}^i [P_+^{j,k}] \, < \, \E_{S_-,S_+}^i [P_+] \, , $$ where now the supremum is taken over $j,k \in \N_+$ with $i \in \llbracket -j,k \rrbracket$ and for which there exists $(u,\ell) \in \mathsf{D}_+(S_+^{\textrm{dev}},i)$ such that $u \in \llbracket -j,k \rrbracket$. \end{enumerate} \end{proposition} It is a short step from the just stated result to the next conclusion, which asserts that a player's deviation will cost her in the infinite trail game. This is in essence what it means for $(S_-,S_+)$ to be a Nash equilibrium. 
Indeed, we next close out the proof of Theorem~\ref{t.nashabmn} by first deriving Proposition~\ref{p.sminuscomp} from Proposition~\ref{p.jksup}; and second showing how Proposition~\ref{p.sminuscomp} leads to the desired conclusion. These tasks done, we will turn to the remaining and more substantial one: to prove Proposition~\ref{p.jksup}. \begin{proposition}\label{p.sminuscomp} Let $i \in \ensuremath{\mathbb{Z}}$. \begin{enumerate} \item Let $S_-^{\textrm{dev}} \in \mathcal{S}$ be deviating for Mina. Then $$ \E_{S_-^{\textrm{dev}},S_+}^i [P_-] < \E_{S_-,S_+}^i [P_-] \, . $$ \item Now let $S_+^{\textrm{dev}} \in \mathcal{S}$ be deviating for Maxine. Then $$ \E_{S_-,S_+^{\textrm{dev}}}^i [P_+] < \E_{S_-,S_+}^i [P_+] \, . $$ \end{enumerate} \end{proposition} {\bf Proof: (1).} Suppose first that $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i(E^c) > 0$. Lemma~\ref{l.dontlookback} implies that $\E_{S_-^{\textrm{dev}},S_+}^i [P_-] = -\infty$. But $ \E_{S_-,S_+}^i [P_-] = n_i$ by Lemma~\ref{l.minipayoff}(1). We have that $n_i \geq n_\infty$ since the sequence $\big\{ n_i: i \in \ensuremath{\mathbb{Z}} \big\}$ decreases for any positive \textrm{ABMN} solution by Theorem~\ref{t.positiveabmn}(1). And we know that $n_\infty > -\infty$ by hypothesis. Thus we see that $ \E_{S_-,S_+}^i [P_-] > -\infty$, so that Proposition~\ref{p.sminuscomp}(1) has been established in this case. Now we suppose instead that $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i(E) = 1$. Let $\eta > 0$ be arbitrary. Note that $T_-^{j,k} = n\big(X(\tau^{j,k}) \big)$ for $j,k \in \N_+$; and that $T_-$ equals $n_{-\infty}$ on $E_-$, and $n_\infty$ on $E_+$. By Lemma~\ref{l.mn} and $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i(E) = 1$, we may thus find $j_0,k_0 \in \N_+$ such that, when $j \geq j_0$ and $k \geq k_0$, $$ \ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i \Big( \big\vert T_- - T_-^{j,k} \big\vert \geq \eta \Big) \leq \eta \, . 
$$ By Lemma~\ref{l.couplingproperties}(2), we see that $$ \ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i \Big( P_- \, \leq \, P^{j,k}_- + \eta \Big) \geq 1 - \eta \, . $$ By Lemma~\ref{l.couplingproperties}(3), $$ \E_{S_-^{\textrm{dev}},S_+}^i [P_-] \leq \E_{S_-^{\textrm{dev}},S_+}^i [P^{j,k}_-] + (1+n_{-\infty}- n_\infty)\eta \, . $$ By taking $\eta > 0$ small enough that $(1+n_{-\infty}-n_\infty)\eta$ is at most one-half of the difference of the two sides in the conclusion of Proposition~\ref{p.jksup}(1), the latter result is seen to imply Proposition~\ref{p.sminuscomp}(1). {\bf (2).} We omit this similar argument. \qed {\bf Proof of Theorem~\ref{t.nashabmn}(2).} Recall that $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ is a positive \textrm{ABMN} solution with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$. Further recall that $(S_-,S_+) = (b,a)$, with the usual notational abuse. Let $S \in \mathcal{S}$. If $S$ is not deviating for Mina, then $\egameplay{S}{S_+}{i} [P_-] = \egameplay{S_-}{S_+}{i} [P_-]$ since the laws $\pgameplay{S}{S_+}{i}$ and $\pgameplay{S_-}{S_+}{i}$ are equal. Otherwise, $\egameplay{S}{S_+}{i}[P_-] < \egameplay{S_-}{S_+}{i} [P_-]$ by Proposition~\ref{p.sminuscomp}(1). (We recall that implicit in the notation $\pgameplay{S_1}{S_2}{i}$ and $\egameplay{S_1}{S_2}{i}[\cdot]$ are the values $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$, because these values appear as terminal receipts.) By Proposition~\ref{p.sminuscomp}(2), it follows similarly that $\egameplay{S_-}{S}{i} [P_+] < \egameplay{S_-}{S_+}{i} [P_+]$ if $S$ is deviating for Maxine. Further, $\egameplay{S_-}{S}{i} [P_+] = \egameplay{S_-}{S_+}{i} [P_+]$ if Maxine's $S$ is not deviating. We have confirmed that $(S_-,S_+) \in \mathcal{N}(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$ and thus obtain Theorem~\ref{t.nashabmn}(2). \qed We now prepare to prove Proposition~\ref{p.jksup}(1). (The proof of Proposition~\ref{p.jksup}(2) is essentially the same.) 
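Before doing so, we record in passing a consequence of~(\ref{e.bma}) and~(\ref{e.anb}) that will not be needed in what follows: these two equations determine the stakes in terms of the increments of the mean payoffs. Writing, in temporary notation, $M_i = m_{i+1} - m_{i-1} > 0$ and $N_i = n_{i-1} - n_{i+1} > 0$, the two equations assert that $b_i M_i = (a_i+b_i)^2 = a_i N_i$, so that $a_i = b_i M_i N_i^{-1}$; substituting this expression into $b_i M_i = (a_i+b_i)^2$ yields $$ a_i \, = \, \frac{M_i^2 N_i}{(M_i+N_i)^2} \, \, \, \, \textrm{and} \, \, \, \, b_i \, = \, \frac{M_i N_i^2}{(M_i+N_i)^2} \, , $$ whence $a_i + b_i = \tfrac{M_i N_i}{M_i + N_i}$; squaring this last expression confirms that it equals both $b_i M_i$ and $a_i N_i$, consistently with~(\ref{e.bma}) and~(\ref{e.anb}). 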
Henceforth, Proposition~\ref{p.jksup}(1)'s hypotheses are understood to be in force: $S_-$ and $S_+$ are the non-deviating strategies given by~(\ref{e.ba}); $i \in \ensuremath{\mathbb{Z}}$ is given; and $S_-^{\textrm{dev}} \in \mathcal{S}$ is deviating for Mina, with $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i(E) = 1$. Let $j,k \in \N$ satisfy $i \in \llbracket -j,k \rrbracket$. Developing Definition~\ref{d.mds}, we set $$ \mathsf{D}_-^{j,k}(S,i) \, = \, \Big\{ (q,\ell) \in \mathsf{D}_-(S,i): q \in \llbracket -j,k \rrbracket \Big\} $$ for $S \in \mathcal{S}$. It may be that $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$ is infinite. It serves our purpose to approximate $S_-^{\textrm{dev}}$ by strategies for which the counterpart set is finite. We now specify these strategies. Enumerate $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$ in increasing order of the vertical component, using an arbitrary rule to break the ties that arise when elements share the same height. For $v \in \N_+$, let $\mathsf{D}_{-,v}^{j,k}(S_-^{\textrm{dev}},i)$ denote the set whose elements are the first $v$ members of $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$. Let $S_-^{\textrm{dev}}[v]$ denote the strategy that equals $S_-^{\textrm{dev}}$ on $\mathsf{D}_{-,v}^{j,k}(S_-^{\textrm{dev}},i)$ and $S_-$ otherwise; note that $\mathsf{D}_{-}^{j,k}(S_-^{\textrm{dev}}[v],i)$ equals $\mathsf{D}_{-,v}^{j,k}(S_-^{\textrm{dev}},i)$. We make another basic comparison in terms of the next definition. \begin{definition}\label{d.ground} For $S \in \mathcal{S}$, let $\textrm{ground}^{j,k}(S,i) \in \N$ denote the minimum vertical coordinate assumed by an element of $\mathsf{D}_-^{j,k}(S,i)$. \end{definition} Note then that $\textrm{ground}^{j,k}(S_-^{\textrm{dev}}[v],i)$ is independent of $v \in \N_+$. We wish to argue that Mina's deviant play under the strategies $S_-^{\textrm{dev}}[v]$, $v \in \N_+$, and $S_-^{\textrm{dev}}$, is suitably penalized in the trail game on $\llbracket -j-1,k+1 \rrbracket$. 
In the notation of the next definition, Lemma~\ref{l.baseconseq} establishes such a conclusion for the finitely deviating strategies $S_-^{\textrm{dev}}[v]$: there is a penalty incurred by use of these strategies; and, in a suitable sense, the penalty is uniform among them, and is governed by the limiting strategy~$S_-^{\textrm{dev}}$. After we prove Lemma~\ref{l.baseconseq}, it will remain to address the penalty suffered by using $S_-^{\textrm{dev}}$ itself. Definition~\ref{d.merit} speaks of a `strong' penalty as a contrast with a modified definition that will be used to treat the perhaps infinitely deviating $S_-^{\textrm{dev}}$, this appearing after the proof of Lemma~\ref{l.baseconseq}. \begin{definition}\label{d.merit} Let $S_1,S_2 \in \mathcal{S}$. Consider the following conditions: \begin{enumerate} \item We have that $\E^{u,\ell}_{S_1,S_+} \big[ P^{j,k}_- \big] \leq n_u$ for all $\ell \in \N_+$ and $u \in \llbracket -j,k \rrbracket$. \item Writing $g = \textrm{ground}^{j,k}(S_1,i)$, consider any $u \in \llbracket -j,k \rrbracket$ for which $(u,g) \in \mathsf{D}_-^{j,k}(S_1,i)$. Then the value $n_u - \E^{u,g}_{S_1,S_+} \big[ P^{j,k}_- \big]$ is positive, and indeed is bounded below by a positive quantity that is determined solely by $S_2(u,g+1)$. \end{enumerate} If these conditions are met, we say that {\em $S_1$ receives the strong $(i,j,k)$-penalty merited by $S_2$}. Let $S \in \mathcal{S}$. If $S$ receives the strong $(i,j,k)$-penalty merited by $S$, we say that {\em $S$ justly receives a strong $(i,j,k)$-penalty}. \end{definition} (Although it is omitted from the notation of the strong $(i,j,k)$-penalty, it is the strategy $S_+ = a$ that Mina is facing when she plays $S_1$ or $S_2$. The above definition and the next result are intended to capture the sense of Mina's mistake when she declines to stake at the $b$-level dictated by $S_-$ against Maxine's $a$-stake offered by $S_+$.) 
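To indicate in a simple worked form the computation that underlies this (a sketch of the relevant first-order condition; the precise statement is Lemma~\ref{l.pennyforfeit}, treated in Section~\ref{s.pennyforfeit}), consider Mina's mean receipt from a single turn at $u \in \ensuremath{\mathbb{Z}}$ when she stakes $b \geq 0$ against Maxine's stake $a_u$, with continuation values $n_{u-1}$ and $n_{u+1}$: $$ f(b) \, = \, \frac{b}{a_u + b} \, n_{u-1} \, + \, \frac{a_u}{a_u + b} \, n_{u+1} \, - \, b \, . $$ Then $f'(b) = \tfrac{a_u (n_{u-1} - n_{u+1})}{(a_u+b)^2} - 1$ and $f''(b) = - \tfrac{2 a_u (n_{u-1} - n_{u+1})}{(a_u+b)^3} < 0$ when $n_{u-1} > n_{u+1}$, so that $f$ is strictly concave, with unique maximizer specified by $a_u (n_{u-1} - n_{u+1}) = (a_u + b)^2$. This condition is~(\ref{e.anb}) with index $u$, so the maximizer is $b = b_u$: any other stake yields a strictly smaller value of $f$. 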
\begin{lemma}\label{l.baseconseq} Let $j,k \in \N$ satisfy $i \in \llbracket -j,k \rrbracket$. Let $v$ be at least the number of elements of $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$ of minimum height. Then $S_-^{\textrm{dev}}[v]$ receives the strong $(i,j,k)$-penalty merited by $S_-^{\textrm{dev}}$. \end{lemma} (The value of $g$ implicit in Lemma~\ref{l.baseconseq} does not depend on the value of $v \in \N_+$ used in $S_-^{\textrm{dev}}[v]$, because $\textrm{ground}^{j,k}(S_-^{\textrm{dev}}[v],i)$ is independent of $v \in \N_+$.) The finite-error strategies $S_-^{\textrm{dev}}[v]$ have been introduced because they may be analysed using the fundamental game-theoretic technique of backwards induction. When Mina uses $S_-^{\textrm{dev}}[v]$ for any given $v \in \N_+$, she never deviates at late enough times. Lemma~\ref{l.minipayoff}(2) then serves to show that she incurs no penalty at these late times. As the turn index retreats in the backwards induction, Mina will make deviating moves. At the heart of the analysis of the inductive step is the consideration of one turn when Mina deviates. What is being played here is a game of Penny Forfeit, treated in Section~\ref{s.pennyforfeit}. The next result gathers what we need to know about one step in the game. \begin{lemma}\label{l.onestep} \leavevmode \begin{enumerate} \item Let $j,k \in \N$ satisfy $i \in \llbracket -j,k \rrbracket$. For $\ell \in \N_+$, let $S_1,S_2 \in \mathcal{S}$ be such that, if $(u,h) \in \ensuremath{\mathbb{Z}} \times \N_+$ satisfies $S_1(u,h) \not= S_2(u,h)$, then $h \leq \ell$. Then $\E^{u,h}_{S_1,S_+} \big[ P^{j,k}_- \big] = \E^{u,h}_{S_2,S_+} \big[ P^{j,k}_- \big]$ for any $(u,h) \in \llbracket -j,k \rrbracket \times \llbracket \ell, \infty)$. \end{enumerate} Let $S \in \mathcal{S}$ and $(u,\ell) \in \llbracket -j,k \rrbracket \times \N$. Suppose that $\E^{v,\ell+1}_{S,S_+} \big[ P^{j,k}_- \big] \leq n_v$ for $v \in \{u-1,u+1\}$. 
\begin{enumerate} \setcounter{enumi}{1} \item We have that $\E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big] \leq n_u$. \item Suppose further that $(u,\ell) \in \mathsf{D}^{j,k}_-(S,i)$. Then $n_u - \E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big]$ is bounded below by a positive quantity that is determined solely by the value of $S(u,\ell+1) \not= b_u$. \end{enumerate} \end{lemma} {\bf Proof: (1).} The laws $\ensuremath{\mathbb{P}}^{u,h}_{S_1,S_+}$ and $\ensuremath{\mathbb{P}}^{u,h}_{S_2,S_+}$ are identical because $S_1$ and $S_2$ coincide at any point $(w,r) \in \ensuremath{\mathbb{Z}} \times \N_+$ with $r \geq h+1 > \ell$. \\ {\bf (2).} Note that $$ \E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big] \, = \, \tfrac{S(u,\ell+1)}{a_u+S(u,\ell+1)} \E^{u-1,\ell+1}_{S,S_+} \big[ P^{j,k}_- \big] + \tfrac{a_u}{a_u+S(u,\ell+1)} \E^{u+1,\ell+1}_{S,S_+} \big[ P^{j,k}_- \big] - S(u,\ell+1) \, . $$ Since $\E^{u-1,\ell+1}_{S,S_+} \big[ P^{j,k}_- \big] \leq n_{u-1}$ and $\E^{u+1,\ell+1}_{S,S_+} \big[ P^{j,k}_- \big] \leq n_{u+1}$, we see that $$ \E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big] \, \leq \, \tfrac{S(u,\ell+1)}{a_u+S(u,\ell+1)} n_{u-1} + \tfrac{a_u}{a_u+S(u,\ell+1)} n_{u+1} - S(u,\ell+1) \, . $$ By Lemma~\ref{l.pennyforfeit}, this right-hand side, viewed as a function of $b = S(u,\ell+1)$, has a unique maximum at $b = b_u$, when it assumes the value $n_u$.\\ {\bf (3).} Since $S(u,\ell+1)$ is not equal to $b_u$, we see that the above right-hand side, and thus $\E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big]$, is less than $n_u$. The difference between $n_u$ and the displayed right-hand side is positive and is determined solely by $S(u,\ell+1)$; it furnishes the claimed lower bound on $n_u - \E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big]$. \qed The next result leads quickly to Lemma~\ref{l.baseconseq}. Indeed, its proof (in a perhaps slightly disguised form) is the backwards inductive argument that underlies Lemma~\ref{l.baseconseq}. \begin{lemma}\label{l.backwardformal} Suppose that $S \in \mathcal{S}$ is such that $\mathsf{D}_-^{j,k}(S,i)$ is finite. Then $S$ justly receives a strong $(i,j,k)$-penalty. \end{lemma} {\bf Proof.} We will induct on the cardinality of $\mathsf{D}_-^{j,k}(S,i)$. 
Let $S \in \mathcal{S}$. Set $g = \textrm{ground}^{j,k}(S,i)$. For $\ell \in \N$, $\ell \not= g$, let $\textrm{IH}(S,\ell)$ denote the assertion that \begin{equation}\label{e.basicineq} \E^{u,\ell}_{S,S_+} \big[ P^{j,k}_- \big] \leq n_u \, \, \, \, \textrm{for} \, \, \, \, u \in \llbracket -j,k \rrbracket \, . \end{equation} For $\ell =g$, let $\textrm{IH}(S,\ell)$ denote the assertion that the preceding display holds and so does the following. \begin{eqnarray*} & & \textrm{Consider any $u \in \llbracket -j,k \rrbracket$ for which $(u,g) \in \mathsf{D}_-^{j,k}(S,i)$.}\\ & & \textrm{Then the value $n_u - \E^{u,g}_{S,S_+} \big[ P^{j,k}_- \big]$ is positive,} \\ & & \textrm{and indeed is bounded below by a positive quantity that is determined solely by $S(u,g+1)$.} \end{eqnarray*} We take the inductive hypothesis indexed by $q \in \N$ to be the assertion that the statements $\textrm{IH}(S,\ell)$, $\ell \in \N$, are true for each $S \in \mathcal{S}$ such that $\char"0023 \, \mathsf{D}_-^{j,k}(S,i) \leq q$. The base case will be $q = 0$. This is the assertion that~(\ref{e.basicineq}) holds for $\ell \in \N$, when $S \in \mathcal{S}$ is such that $\mathsf{D}_-^{j,k}(S,i)$ is empty. The base case holds by Lemma~\ref{l.minipayoff}(3). Let $q \in \N$ and assume the inductive hypothesis indexed by $q$. Let $S \in \mathcal{S}$ be such that $\char"0023 \, \mathsf{D}_-^{j,k}(S,i) = q+1$. Again set $g = \textrm{ground}^{j,k}(S,i)$. Let $\hat{S} \in \mathcal{S}$ be given by $$ \hat{S}(w,\ell) \, = \, \begin{cases} \, S_-(w,g+1) = b_w & \text{if $\ell = g+1$} \\ \, S(w,\ell) & \text{if $\ell \in \N_+$, $\ell \not= g+1$} \, , \end{cases} $$ for $w \in \ensuremath{\mathbb{Z}}$. The set $\mathsf{D}_-^{j,k}(\hat{S},i)$ is formed from $\mathsf{D}_-^{j,k}(S,i)$ by the removal of the elements of minimum height---which is height $g$. 
Hence, $\char"0023 \, \mathsf{D}_-^{j,k}(\hat{S},i) < \char"0023 \, \mathsf{D}_-^{j,k}(S,i)$; the hypotheses $\textrm{IH}(\hat{S},\ell)$, $\ell \in \N$, are thus available. By Lemma~\ref{l.onestep}(1) with $S_1 = \hat{S}$, $S_2 = S$ and $\ell = g+1$, we find that $\textrm{IH}(S,\ell)$ holds for $\ell \geq g+1$. Now consider $u \in \llbracket -j,k \rrbracket$ such that $(u,g) \in \mathsf{D}_-^{j,k}(S,i)$. Lemma~\ref{l.onestep}(3) (and Lemma~\ref{l.onestep}(2) for other $u \in \llbracket -j,k \rrbracket$) implies $\textrm{IH}(S,g)$. To complete the inductive step, it remains to verify $\textrm{IH}(S,\ell)$ for $\ell \in \llbracket 0,g-1 \rrbracket$. We do so iteratively in decreasing~$\ell$. It is Lemma~\ref{l.onestep}(2) that demonstrates the generic step in this iteration. This completes the proof of Lemma~\ref{l.backwardformal}. \qed {\bf Proof of Lemma~\ref{l.baseconseq}.} Apply Lemma~\ref{l.backwardformal} with $S = S_-^{\textrm{dev}}[v]$ for $v \in \N_+$. We learn that Definition~\ref{d.merit} holds with $S_1 = S_2 = S_-^{\textrm{dev}}[v]$. Thus, the positive quantity in Definition~\ref{d.merit}(2) is determined by $S_-^{\textrm{dev}}[v](u,g+1)$. When $v$ satisfies the bound in Lemma~\ref{l.baseconseq}, we have that $S_-^{\textrm{dev}}[v](u,g+1)$ equals $S_-^{\textrm{dev}}(u,g+1)$. As a result, Definition~\ref{d.merit} holds with $S_1 = S_-^{\textrm{dev}}[v]$ and $S_2 = S_-^{\textrm{dev}}$. This is what Lemma~\ref{l.baseconseq} asserts. \qed Lemma~\ref{l.baseconseq} is a stepping stone to a counterpart that describes the penalty incurred by use of the perhaps infinitely deviating strategy $S_-^{\textrm{dev}} \in \mathcal{S}$. The counterpart, Lemma~\ref{l.baseconseqtwo}, depends on a variation of Definition~\ref{d.merit}. \begin{definition}\label{d.just} Let $S \in \mathcal{S}$. An element $(q,\ell) \in F_i$ is said to be {\em $(S,S_+)$-accessible from $(i,0)$} if $\pgameplay{S}{S_+}{i}(X_\ell = q) > 0$.
Let $\mathsf{A}(S,i)$ denote the set of elements of $F_i$ that are $(S,S_+)$-accessible from $(i,0)$. Alter Definition~\ref{d.merit} by taking $S_1$ and $S_2$ equal to $S$; the first part to include the condition that the point $(u,\ell)$ belongs to $\mathsf{A}(S,i)$; and the second to include the condition that $(u,g) \in \mathsf{A}(S,i)$. Thus, no requirement is imposed by a given part when $(u,\ell)$ or $(u,g)$ is not $(S,S_+)$-accessible from $(i,0)$. When the altered set of conditions is satisfied, we say that {\em $S$ justly receives a weak $(i,j,k)$-penalty}. \end{definition} \begin{lemma}\label{l.baseconseqtwo} Let $j,k \in \N$ satisfy $i \in \llbracket -j,k \rrbracket$. The strategy $S_-^{\textrm{dev}}$ justly receives a weak $(i,j,k)$-penalty. \end{lemma} To prove this result, we intend to make use of Proposition~\ref{p.jksup}(1)'s hypothesis that $\pgameplay{S_-^{\textrm{dev}}}{S_+}{i}(E)=1$. Since escape is certain under $\pgameplay{S_-^{\textrm{dev}}}{S_+}{i}$, gameplay will exit $\llbracket -j,k \rrbracket$ in finite time, so that Mina's choice between $S_-^{\textrm{dev}}$ and $S_-^{\textrm{dev}}[v]$, for high $v$, will typically leave gameplay unaffected. Thus we aim to reduce the proof of the new result to quoting Lemma~\ref{l.baseconseq}. To do this, it is useful to state a consequence of $\pgameplay{S_-^{\textrm{dev}}}{S_+}{i}(E)=1$. \begin{lemma}\label{l.escapepropagate} Let $(u,\ell) \in \mathsf{A}(S_-^{\textrm{dev}},i)$. Then $\pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}(E) = 1$. \end{lemma} {\bf Proof.} We have that $$ 1 \, = \, \pgameplay{S_-^{\textrm{dev}}}{S_+}{i}(E) \, = \, \sum_{u \in \ensuremath{\mathbb{Z}}} \pgameplay{S_-^{\textrm{dev}}}{S_+}{i}(X_\ell = u) \cdot \pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}(E) \, . $$ Since $\pgameplay{S_-^{\textrm{dev}}}{S_+}{i}(X_\ell = u) > 0$ if and only if $(u,\ell) \in \mathsf{A}(S_-^{\textrm{dev}},i)$, we see that $\pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}(E)$ equals one when this condition is satisfied.
\qed {\bf Proof of Lemma~\ref{l.baseconseqtwo}.} Let $h(v)$ be the vertical coordinate of the $v$\textsuperscript{th} element of $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$. Let $(u,\ell) \in \llbracket -j,k \rrbracket \times \N$. Under $\pgameplay{S_-^{\textrm{dev}}[v]}{S_+}{u,\ell}$ given $\tau^{j,k} \geq h(v)$, Mina does not deviate after time $\tau^{j,k}$. By Lemma~\ref{l.minipayoff}(2) and Theorem~\ref{t.positiveabmn}(1), the conditional mean of $P^{j,k}_-$ under $\pgameplay{S_-^{\textrm{dev}}[v]}{S_+}{u,\ell}$ given that $\tau^{j,k} > h(v)$ is thus seen to be at least $n_{k+1}$. Now consider (\ref{e.delayedpayoff}) with $\pm = -1$ and $(P,S_-) \to (P^{j,k},S_-^{\textrm{dev}})$; note that running costs here are non-negative, and that terminal receipt is at most $n_{-j-1}$ by Theorem~\ref{t.positiveabmn}(1). We see then that the conditional mean of $P^{j,k}_-$ under $\pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}$ given that $\tau^{j,k} > h(v)$ is at most $n_{-j-1}$. We find then that \begin{equation}\label{e.tau} \egameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}[P^{j,k}_-] - \egameplay{S_-^{\textrm{dev}}[v]}{S_+}{u,\ell}[P^{j,k}_-] \, \leq \, \pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell} \big(\tau^{j,k} \geq h(v)\big) \cdot (n_{-j-1} - n_{k+1}) \, . \end{equation} Lemma~\ref{l.baseconseqtwo} will follow from Lemma~\ref{l.baseconseq} provided that we show that the right-hand side of this display vanishes in high~$v$ whenever $(u,\ell) \in \mathsf{A}(S_-^{\textrm{dev}},i)$. By Lemma~\ref{l.escapepropagate}, and the hypothesis of Proposition~\ref{p.jksup}(1), we know that $\pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}(E) = 1$. Thus, $\tau^{j,k}$ is finite, $\pgameplay{S_-^{\textrm{dev}}}{S_+}{u,\ell}$-almost surely. The right-hand side of~(\ref{e.tau}) thus indeed tends to zero in the limit of high $v$. Lemma~\ref{l.baseconseq} implies Lemma~\ref{l.baseconseqtwo}, as we sought to show. \qed We are ready for the following proof. 
{\bf Proof of Proposition~\ref{p.jksup}(1).} For $j,k \in \N_+$ such that $i \in \llbracket -j,k \rrbracket$, let $g$ denote the minimum vertical coordinate among elements of $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$. Any element $(u,g) \in F_i$ belongs to $\mathsf{A}(S_-^{\textrm{dev}},i)$ because, under $(S_-^{\textrm{dev}},S_+)$, gameplay is governed before the $g$\textsuperscript{th} turn by the positive-element pair $(S_-,S_+)$. Lemma~\ref{l.baseconseqtwo} thus implies that, when $(u,g) \in F_i$, \begin{equation}\label{e.starone} \E^{u,g}_{S_-^{\textrm{dev}},S_+} \big[ P^{j,k}_- \big] \leq n_u \end{equation} and \begin{equation}\label{e.startwo} \E^{u,g}_{S_-^{\textrm{dev}},S_+} \big[ P^{j,k}_- \big] < n_u \, \, \, \, \textrm{if $(u,g) \in \mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$} \, . \end{equation} Now note that $$ \egameplay{S_-^{\textrm{dev}}}{S_+}{i} [P_-^{j,k}] \, = \, - \, \egameplay{S_-^{\textrm{dev}}}{S_+}{i} \sum_{t=1}^{g-1} C_-(t) {\bf 1}_{t < \tau^{j,k}} \,\, + \,\, \sum_{\substack{u \in \llbracket -j,k \rrbracket : \\ (u,g) \in F_i}} \pgameplay{S_-^{\textrm{dev}}}{S_+}{i} (X^{j,k}_g = u) \cdot \egameplay{S_-^{\textrm{dev}}}{S_+}{u,g} [P_-^{j,k}] \, . $$ The joint law of $C_-(t)$, $t \in \intint{g-1}$, is equal under $\pgameplay{S_-^{\textrm{dev}}}{S_+}{i}$ and $\pgameplay{S_-}{S_+}{i}$, because $S_-^{\textrm{dev}}$ and $S_-$ coincide on $\ensuremath{\mathbb{Z}} \times \intint{g-1}$. The costs $C_-(t)$ are non-negative, and upper bounds on the conditional mean payoffs in the preceding display are offered by~(\ref{e.starone}) and~(\ref{e.startwo}).
By way of comparison, $$ \egameplay{S_-}{S_+}{i} [P_-^{j,k}] \, = \, - \, \egameplay{S_-}{S_+}{i} \sum_{t=1}^{g-1} C_-(t) {\bf 1}_{t < \tau^{j,k}} \, \, + \, \, \sum_{\substack{u \in \llbracket -j,k \rrbracket : \\ (u,g) \in F_i}} \pgameplay{S_-}{S_+}{i} (X^{j,k}_g = u) \cdot \egameplay{S_-}{S_+}{u,g} [P_-^{j,k}] \, , $$ with $$ \egameplay{S_-}{S_+}{u,g} [P_-^{j,k}] = n_u \, \, \, \textrm{for $u \in \llbracket -j,k \rrbracket$} $$ by Lemma~\ref{l.minipayoff}(3). Consider a pair $(j,k)$ over which the supremum in Proposition~\ref{p.jksup}(1) is taken. Since $\mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$ is non-empty, we may find $q \in \llbracket -j,k \rrbracket$ such that $(q,g) \in \mathsf{D}_-^{j,k}(S_-^{\textrm{dev}},i)$. Since $X: \llbracket 0,g \rrbracket \to \ensuremath{\mathbb{Z}}$ coincides under $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ and $\ensuremath{\mathbb{P}}_{S_-^{\textrm{dev}},S_+}^i$, we see that $$ \egameplay{S_-}{S_+}{i} [P_-^{j,k}] - \egameplay{S_-^{\textrm{dev}}}{S_+}{i} [P_-^{j,k}] \, \, = \, \, \sum_{\substack{u \in \llbracket -j,k \rrbracket : \\ (u,g) \in F_i}} \pgameplay{S_-}{S_+}{i} (X^{j,k}_g = u) \cdot \Big(n_u - \egameplay{S_-^{\textrm{dev}}}{S_+}{u,g} [P_-^{j,k}] \Big) \, , $$ where the term in parentheses on the right-hand side is strictly positive if $u = q$ (by~(\ref{e.startwo})), and is non-negative if $u \in \llbracket -j,k \rrbracket$, $u \not= q$ (by~(\ref{e.starone})). This implies that $$ \egameplay{S_-}{S_+}{i} [P_-^{j,k}] - \egameplay{S_-^{\textrm{dev}}}{S_+}{i} [P_-^{j,k}] \, \, \geq \, \, \pgameplay{S_-}{S_+}{i} (X^{j,k}_g = q) \cdot \Big( \, n_q - \egameplay{S_-^{\textrm{dev}}}{S_+}{q,g} \big[P_-^{j,k} \big] \, \Big) \, . $$ We claim that $\pgameplay{S_-}{S_+}{i} (X^{j,k}_g = q) > 0$. 
Indeed, it is enough to find any access route for $X$ from $(i,0)$ to $(q,g)$ that never leaves $\llbracket -j,k \rrbracket \times \llbracket 0,g \rrbracket$, because the strategies in the pair $(S_-,S_+) = (b,a)$ have positive coefficients; that such a route exists is due to $(q,g) \in F_i$, $i,q \in \llbracket -j,k \rrbracket$ and $k >-j$. For example, $\pgameplay{S_-}{S_+}{i} (X^{j,k}_g = q)$ is at least $\eta^g$, where $\eta = \min \big\{ \tfrac{a_i \wedge b_i}{a_i + b_i}: i \in \llbracket -j,k \rrbracket \big\}$ is a lower bound on the probability of any given step of the route. That is, $$ \egameplay{S_-}{S_+}{i} [P_-^{j,k}] - \egameplay{S_-^{\textrm{dev}}}{S_+}{i} [P_-^{j,k}] \, \geq \, \eta^g \cdot \Big( \, n_q - \egameplay{S_-^{\textrm{dev}}}{S_+}{q,g} \big[P_-^{j,k}\big] \, \Big) \, . $$ Since the positive right-hand side is independent of the choice of the pair $j,k \in \N_+$ over which the supremum is taken in Proposition~\ref{p.jksup}(1), we have obtained this result. \qed {\bf Proof of Proposition~\ref{p.jksup}(2).} The essentially identical argument is omitted. \qed \section{Explicit \textrm{ABMN} solutions and their consequences}\label{s.battlefield} Here we explicitly solve the \textrm{ABMN} system, proving Theorem~\ref{t.defaultexplicit}, and its softer cousin Proposition~\ref{p.default}. Then we analyse the asymptotic decay in high index values of \textrm{ABMN} solutions, proving Theorem~\ref{t.ajbj}. Two consequences of this decay---finiteness of boundary data in Theorem~\ref{t.positiveabmn}(3), and the almost sure eventual unanimity of gameplay in Theorem~\ref{t.unanimity}---are derived. \subsection{Explicit \textrm{ABMN} solutions} Fundamental to deriving Theorem~\ref{t.defaultexplicit} is an alternative representation of the \textrm{ABMN} system that we offer first, in Proposition~\ref{p.abmnsolvesmn}.
The real-valued variables $\big\{ m_i,n_i: i \in \ensuremath{\mathbb{Z}} \big\}$ satisfy the \textrm{MN} system on $\ensuremath{\mathbb{Z}}$ if \begin{align*} (m_i - m_{i-1}) (m_{i+1} - m_{i-1} + n_{i-1} - n_{i+1})^2 & \, = \, (m_{i+1} - m_{i-1})^3 && \qquad \textrm{MN}(1) \\ (n_i - n_{i+1}) (m_{i+1} - m_{i-1} + n_{i-1} - n_{i+1})^2 & \, = \, (n_{i-1} - n_{i+1})^3 && \qquad \textrm{MN}(2) \, , \end{align*} for $i \in \ensuremath{\mathbb{Z}}$. As for \textrm{ABMN}$(1,2,3,4)$ from Definition~\ref{d.abmn}, we refer to the above equations as $\textrm{MN}(1)$ and $\textrm{MN}(2)$ rather than by the usual convention of numbered equations. \begin{proposition}\label{p.abmnsolvesmn} A positive solution of the \textrm{ABMN} system on $\ensuremath{\mathbb{Z}}$ solves the \textrm{MN} system on $\ensuremath{\mathbb{Z}}$. \end{proposition} {\bf Proof.} For $i \in \ensuremath{\mathbb{Z}}$, set $M_i = m_{i+1} - m_{i-1}$ and $N_i = n_{i-1} - n_{i+1}$. We claim that \begin{equation}\label{e.abclaim} a_i = \frac{M_i^2 N_i}{(M_i+N_i)^2} \, \, \, , \, \, \, b_i = \frac{M_i N_i^2}{(M_i+N_i)^2} \, \, \, \, \textrm{and} \, \, \, \, \frac{a_i}{a_i+b_i} = \frac{M_i}{M_i+N_i} \, . \end{equation} These follow from \textrm{ABMN}$(3,4)$. Expressing \textrm{ABMN}$(1)$ in the form~(\ref{e.firstrearranged}), we find from~(\ref{e.abclaim}) that $$ m_i \, = \, m_{i-1} + \frac{M_i^2}{M_i+N_i} - \frac{M_i^2 N_i}{(M_i+N_i)^2} \, = \, m_{i-1} + \frac{M_i^3}{(M_i+N_i)^2} \, , $$ whence \textrm{MN}$(1)$ holds. Equation \textrm{MN}$(2)$ is obtained similarly, from \textrm{ABMN}$(2)$. \qed Recall $c,d,s:(0,\infty) \to (0,\infty)$ from Definition~\ref{d.acs}. \begin{definition}\label{d.alphagamma} Let $\gamma,\delta:(0,\infty) \to (0,\infty)$ be given by $\gamma(x) = c(x)^{-1}$ and $\delta(x) = d(x)^{-1}$. Set $\beta:(0,\infty) \to (0,\infty)$, $\beta(x) = \tfrac{\omega - 1}{4}$, where recall that $\omega = \sqrt{8x+1}$ for $x \in (0,\infty)$.
\end{definition} \begin{lemma}\label{l.acsfacts} \leavevmode \begin{enumerate} \item The functions $c,d,s:(0,\infty) \to (0,\infty)$ are increasing.\footnote{Let $* \in \{ c,d,s \}$. By `Lemma~\ref{l.acsfacts}(1:$*$)' will be meant `$*$ is increasing'.} \item We have that $s(x) = x^2/2 + O(x^3)$ as $x \searrow 0$. \item For $x \in (0,\infty)$, $s(x) = \tfrac{\beta(x)^2}{\beta(x)+2}$. \item For $x \in (0,\infty)$, $\beta(x) \leq x$. \item For $x \in (0,\infty)$, $s(x) < x$. \end{enumerate} \end{lemma} {\bf Proof: (1).} The expressions for $c(x)$, $d(x)$ and $s(x)$ in Definition~\ref{d.acs} are readily seen to be increasing in the variable $\omega \in (1,\infty)$; since $\omega = \sqrt{8x +1}$, they are also increasing in $x \in (0,\infty)$.\\ {\bf (2).} We have that $\omega = \sqrt{8x +1} = 1 + 4x + O(x^2)$, whence $$ s(x) = \tfrac{(\omega-1)^2}{4(\omega +7)}= \tfrac{16 x^2 + O(x^3)}{4(8 + O(x))} = x^2/2 + O(x^3)\, . $$ {\bf (3).} This is due to $s(x) = \tfrac{(\omega-1)^2}{4(\omega+7)}$ and $\beta(x) = (\omega-1)/4$. \\ {\bf (4).} Since $\omega(x) = \sqrt{8x +1} \leq 4x+1$, $\beta(x) \leq x$. \\ {\bf (5).} Lemma~\ref{l.acsfacts}(3), $\beta > 0$ and Lemma~\ref{l.acsfacts}(4) imply that $$ s(x) = \tfrac{\beta(x)^2}{\beta(x) + 2} < \beta(x) \leq x $$ as desired. \qed Recall Definition~\ref{d.deltai}. \begin{proposition}\label{p.alphagammaess} For $i \in \ensuremath{\mathbb{Z}}$, we have that\footnote{Let $* \in \{\gamma,\delta,s\}$. By `Proposition~\ref{p.alphagammaess}($*$)', we will mean the statement made concerning the labelled quantity.} $$ \gamma(\phi_i) = \frac{m_i - m_{i-1}}{m_{i+1} - m_{i-1}} \, \, , \, \, \delta(\phi_i) = \frac{n_{i-1} - n_i}{n_{i-1} - n_{i+1}} \, \, \, \, \textrm{and} \, \, \, \, s(\phi_i) = \phi_{i+1} \, . 
$$ \end{proposition} Notation to be used only in the proof of this proposition\footnote{In particular, the temporary usage of $s_i$ introduced in Definition~\ref{d.subscripti} is an abuse, because the denoted quantity is not the function $s_i$; nor is it the value $s_i(x)$ for $x = \phi_0$. Indeed, $s_i(x)$ equals $\phi_i$, while $s_i$ with the temporary usage equals $\phi_{i+1}$.} recasts the task as showing that $*(\phi_i)$ equals $*_i$ for $* \in \{\gamma,\delta,s\}$. \begin{definition}\label{d.subscripti} For $i \in \ensuremath{\mathbb{Z}}$, set $\gamma_i = \tfrac{m_i - m_{i-1}}{m_{i+1} - m_{i-1}}$, $\delta_i = \frac{n_{i-1} - n_i}{n_{i-1} - n_{i+1}}$ and $s_i = \phi_{i+1}$. We also set $\beta_i = \frac{n_{i-1}-n_{i+1}}{m_{i+1} - m_{i-1}}$, and write $\omega_i = \omega(\phi_i) = \sqrt{8\phi_i +1}$. \end{definition} \begin{lemma}\label{l.fourfacts} We have that $$ (1 + \beta_i)^2 \gamma_i = 1 \, \, , \, \, 1 - \delta_i = \tfrac{\beta_i^2}{(1+\beta_i)^2} \, \, , \, \, \phi_i = \delta_i\beta_i/\gamma_i \, \, , \, \, \phi_{i+1} = \tfrac{\beta_i(1-\delta_i)}{1-\gamma_i} \, . $$ \end{lemma} {\bf Proof.} Equation~\textrm{MN}(1) implies that $(1 + \beta_i)^2 \gamma_i = 1$. Equation \textrm{MN}(2) implies $1 - \delta_i = \tfrac{\beta_i^2}{(1+\beta_i)^2}$. That $\phi_i = \delta_i\beta_i/\gamma_i$ follows by the definitions of the concerned quantities. Noting that $$ 1 - \delta_i = \tfrac{n_i - n_{i+1}}{n_{i-1} - n_{i+1}} \, \, \, \, \textrm{and} \, \, \, \, 1 - \gamma_i = \tfrac{m_{i+1} - m_i}{m_{i+1} - m_{i-1}} \, , $$ we find from the definitions of $\beta_i$ and $\phi_{i+1}$ that $\phi_{i+1} = \tfrac{\beta_i(1-\delta_i)}{1-\gamma_i}$ holds. \qed \begin{lemma}\label{l.omegai} For $i \in \ensuremath{\mathbb{Z}}$, $$ \gamma_i^{-1} = \tfrac{1}{16} (\omega_i + 3)^2 \, \, , \, \, \delta_i^{-1} = \frac{(\omega_i + 3)^2}{8(\omega_i + 1)} \, \, , \, \, s_i = \frac{(\omega_i - 1)^2}{4(\omega_i + 7)} \, \, \, \textrm{and} \, \, \, \beta_i = \tfrac{1}{4} (\omega_i - 1) \, .
$$ \end{lemma} {\bf Proof.} Omitting $i$ subscripts, consider the four equations stated in Lemma~\ref{l.fourfacts} when we take $\phi \in (0,\infty)$ given. The first and third equations imply that $\delta\beta(1+\beta)^2 = \phi$. Using the second equation, we find that $(2\beta+1)\beta = \phi$; since $\beta$ is positive, we confirm that $\beta = (\omega -1)/4$. From the first equation, we then obtain $\gamma = 16(\omega +3)^{-2}$. The third equation $\delta = \phi\gamma/\beta$ then yields $\delta = \tfrac{16\phi}{(\omega +3)^2} \cdot \tfrac{4}{\omega -1}$ which equals $\tfrac{8(\omega +1)}{(\omega +3)^2}$ in view of $\omega^2 -1 = 8\phi$. Finally, $s = \phi_{i+1}$ by definition, so that the fourth equation implies that $s = \tfrac{\omega -1}{4} \cdot \tfrac{(\omega+3)^2 - 8\omega - 8}{(\omega+3)^2 - 16}$ whose right-hand side is seen to equal $\tfrac{(\omega -1)^2}{4(\omega+7)}$ after cancellation of $\omega - 1 > 0$ from numerator and denominator. \qed \begin{lemma}\label{l.omega.asymptotic} \leavevmode \begin{enumerate} \item We have that $\gamma_i^{-1} -1 = 2\phi_i + O(\phi_i^2)$. \item And that $\beta_i = \phi_i + O(\phi_i^2)$. \end{enumerate} \end{lemma} {\bf Proof: (1).} From Lemma~\ref{l.omegai}, note that $\gamma_i^{-1} = \tfrac{1}{16} (\omega_i + 3)^2 = \big( 1 + \phi_i + O(\phi_i^2)\big)^2$. \\ {\bf (2).} By the same result, $\beta_i = \tfrac{1}{4}(\omega_i - 1) = \phi_i + O(\phi_i^2)$. \qed {\bf Proof of Proposition~\ref{p.alphagammaess}.} By Lemma~\ref{l.omegai} and Definitions~\ref{d.acs},~\ref{d.alphagamma} and~\ref{d.subscripti}, $$\gamma_i = 16(\omega_i + 3)^{-2} =c(\phi_i)^{-1} = \gamma(\phi_i) \, \, ; \, \, \, \, \delta_i = \tfrac{8(\omega_i +1)}{(\omega_i + 3)^2} =d(\phi_i)^{-1} = \delta(\phi_i) \, \, ; $$ and $s_i = \tfrac{(\omega_i - 1)^2}{4(\omega_i +7)} = s(\phi_i)$. 
\qed {\bf Proofs of Proposition~\ref{p.default} and Theorem~\ref{t.defaultexplicit}.} For given $x \in (0,\infty)$, let $(a,b,m,n)$ be an \textrm{ABMN} solution with $\tfrac{n_{-1} - n_0}{m_0 - m_{-1}} =x$. Since $c_i(x) = c(s_i(x)) = c(\phi_i)$, Definition~\ref{d.alphagamma} and Proposition~\ref{p.alphagammaess}($\gamma$) imply that \begin{equation}\label{e.ciformula} c_i(x) - 1 = \frac{1 - \gamma(\phi_i)}{\gamma(\phi_i)} = \frac{m_{i+1} - m_i}{m_i - m_{i-1}} \, . \end{equation} Adopting the notation in Definition~\ref{d.zdefault}, we find that \begin{equation}\label{e.mdifferenceratio} \frac{m_{j+1} - m_j}{m_0 - m_{-1}} \, = \, \prod_{i=0}^j \big( c_i(x) - 1 \big) \end{equation} for any $j \in \ensuremath{\mathbb{Z}}$. Since a default solution has $m_0 - m_{-1} = 1$ by definition, we deduce that the formula for $m^{\rm def}_{k+1} - m^{\rm def}_k$ in Definition~\ref{d.zdefault} holds. Similarly to~(\ref{e.ciformula}), we find via Proposition~\ref{p.alphagammaess}($\delta$) that $$ d_i(x) - 1 = \frac{1 - \delta(\phi_i)}{\delta(\phi_i)} = \frac{n_i - n_{i+1}}{n_{i-1} - n_i} \, , $$ whence $$ \frac{n_j - n_{j+1}}{n_{-1} - n_0} \, = \, \prod_{i=0}^j \big( d_i(x) - 1 \big) $$ for $j \in \ensuremath{\mathbb{Z}}$. Since $n_{-1} - n_0 = x(m_0 - m_{-1}) = x$ for any default solution, we find that the formula for $n^{\rm def}_k - n^{\rm def}_{k+1}$ in Definition~\ref{d.zdefault} is valid. Proposition~\ref{p.abmnsolvesmn} implies that the sought formulas for $a^{\rm def}_i$ and $b^{\rm def}_i$ for $i \in \ensuremath{\mathbb{Z}}$ hold. The exhibited solution exists and is unique. This completes the proof of Theorem~\ref{t.defaultexplicit}. The noted existence and uniqueness also prove Proposition~\ref{p.default}. \qed \subsection{Asymptotic decay of solutions} Here we prove Theorem~\ref{t.ajbj}. 
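To fix ideas before turning to the proof, here is a numerical illustration; the specific trajectory is ours (with rounded values, for a hypothetical solution), and is not part of the formal development. Iterating $\phi_{i+1} = s(\phi_i)$, where $s(x) = \tfrac{(\omega-1)^2}{4(\omega+7)}$ and $\omega = \sqrt{8x+1}$, from $\phi_0 = 10$ yields
$$
\phi_1 \, = \, s(10) \, = \, \tfrac{64}{64} \, = \, 1 \, \, , \, \, \, \, \phi_2 \, = \, s(1) \, = \, \tfrac{4}{40} \, = \, \tfrac{1}{10} \, \, , \, \, \, \, \phi_3 \, = \, s(1/10) \, \approx \, 3.5 \times 10^{-3} \, \, , \, \, \, \, \phi_4 \, \approx \, 6.0 \times 10^{-6} \, ,
$$
since $\omega$ equals $9$, $3$ and roughly $1.34$ at the first three steps. Here the battlefield index of Definition~\ref{d.battlefield} is $k = 1$, because $\phi_1 = 1$ is the unique value lying in $(1/3,3]$; beyond it, the $\phi$-values collapse at the doubly exponential rate $\phi_{i+1} \approx \phi_i^2/2$ that drives Theorem~\ref{t.ajbj}.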
\begin{lemma}\label{l.deltadecay} \leavevmode \begin{enumerate} \item For $\phi_i \in (0,1)$, we have that $$ \phi_i^2/2 - O (\phi_i^3) \leq \phi_{i+1} \leq \phi_i^2/2 \, , $$ where the positive constant implied by the $O$-notation is bounded above in terms of any $h \in (0,1)$ for which $\phi_i \in (0,1-h)$. \item For any $i \in \ensuremath{\mathbb{Z}}$, $\phi_{i+1} < \phi_i$. \end{enumerate} \end{lemma} {\bf Proof: (1).} From Proposition~\ref{p.alphagammaess}(s) and Lemma~\ref{l.acsfacts}(3,4), we see that $$ \phi_{i+1} \leq \beta(\phi_i)^2/2 \leq \phi_i^2/2 \, . $$ By Proposition~\ref{p.alphagammaess}(s) and Lemma~\ref{l.acsfacts}(2), $\phi_{i+1} = s(\phi_i) = \phi_i^2/2 + O(\phi_i^3)$. {\bf (2).} By Lemma~\ref{l.acsfacts}(5), and Proposition~\ref{p.alphagammaess}(s), $\phi_{i+1} = s(\phi_i) < \phi_i$. \qed We are about to prove Theorem~\ref{t.ajbj}. Since this result uses the notion of the battlefield index specified in Definition~\ref{d.battlefield}, we now offer a proof that this index is well-defined. \begin{lemma}\label{l.battlefield} Let $\big\{ (a_i,b_i,m_i,n_i): i \in \ensuremath{\mathbb{Z}} \big\}$ be a positive \textrm{ABMN} solution on $\ensuremath{\mathbb{Z}}$. There is a unique value of $k \in \ensuremath{\mathbb{Z}}$ for which $\phi_k \in (1/3,3]$. \end{lemma} {\bf Proof.} By Lemma~\ref{l.deltadecay}(2), the sequence $\big\{ \phi_i: i \in \ensuremath{\mathbb{Z}} \big\}$ is decreasing. Taking $s(0) = 0$, the value $\lim_{i \to \infty}\phi_i$ is a fixed point of $s:[0,\infty) \to [0,\infty)$ because $s$ is continuous and $s(\phi_i) = \phi_{i+1}$ (the latter by Proposition~\ref{p.alphagammaess}(s)). But $s(x) < x$ for $x > 0$ by~Lemma~\ref{l.acsfacts}(5). Thus, $\phi_i \searrow 0 $ as $i \to \infty$. The opposite limiting value $\lim_{i \to \infty} \phi_{-i}$ would also be a fixed point for $s:[0,\infty) \to [0,\infty)$ were it to be finite; we see then that $\lim_{i \to \infty} \phi_{-i}$ is infinite.
We may thus set $k \in \ensuremath{\mathbb{Z}}$ so that $k = \inf \big\{ i \in \ensuremath{\mathbb{Z}}: \phi_i \leq 3 \big\}$ and be assured that $k$ is well-defined. Now, $\phi_j > 3$ for $j \leq k-1$, while $\phi_k$, being $s(\phi_{k-1})$, exceeds $s(3) = 1/3$ by Lemma~\ref{l.acsfacts}(1:$s$). On the other hand, if $j \geq k+1$, then $\phi_j \leq \phi_{k+1} = s(\phi_k) \leq s(3) = 1/3$. Thus, $k \in \ensuremath{\mathbb{Z}}$ is the unique index whose $\phi$-value exceeds one-third and is at most three. \qed {\bf Proof of Theorem~\ref{t.ajbj}(1).} For $i \in \N$, set $\e_i = \phi_{k+i}/2$ and $g_i = - \log \e_i$. By $s(3) = 1/3$ and Lemma~\ref{l.acsfacts}(1:$s$), we have that $s(x) \leq 1/3$ for $x \in (0,3]$. Definition~\ref{d.battlefield} and $s(\phi_i) =\phi_{i+1}$ (from Proposition~\ref{p.alphagammaess}(s)) thus imply that $\phi_{k+j} \leq 1/3$ for $j \geq 1$. We may then apply Lemma~\ref{l.deltadecay}(1) to find that $\e_i^2 \big( 1 - O(\e_i) \big) \leq \e_{i+1} \leq \e_i^2$, where the positive constant implicit in the $O$-notation may be chosen independently of the ABMN solution $\big\{ (a_j,b_j,m_j,n_j):j \in \ensuremath{\mathbb{Z}} \big\}$ and the value of the index $i \geq 1$. (We say that a positive constant is universal, or is bounded universally, if it may be so chosen.) We learn that \begin{equation}\label{e.twogi} 2 g_i \, \leq \, g_{i+1} \, \leq \, 2 g_i + O\big(e^{-g_i} \big) \, , \end{equation} where the implicit positive constant is again universal. Thus, $g_i > \log 6$ for $i \geq 1$, and we may write $g_i = 2^{\macell_i}$ for a real-valued sequence $\{ \macell_i: i \in \N_+ \}$ whose terms are bounded below by $\tfrac{\log \log 6}{\log 2} > 0$. 
From~(\ref{e.twogi}), we find that $$ 0 \leq g_{i+1} - 2g_i = \big( 2^{\macell_{i+1} - \macell_i - 1} - 1 \big) 2^{\macell_i +1} \, = \, O \big( \exp \{ - 2^{\macell_i} \} \big) \, ; $$ using $\macell_i > 0$, we readily obtain $$ 0 \, \leq \, \macell_{i+1} - \macell_i - 1 \, = \, O \big( \exp \{ - 2^{\macell_i} \} \big) \, . $$ Since $\macell_1 > 0$ and $\macell_{i+1} \geq \macell_i +1$, we have that $\macell_i > i -1$ for $i \geq 1$. Thus, $$ 0 \, \leq \, \macell_{i+1} - \macell_i - 1 \, = \, O \big( \exp \{ - 2^{i-1} \} \big) \, . $$ We may find $B \in \ensuremath{\mathbb{R}}$ so that $\macell_i = B + i + O \big( \exp \{ - 2^{i-1} \} \big)$ for $i \in \N_+$. The universal form of $O$ and the fact that $\macell_1$ is bounded (since $\e_1 = s(\phi_k)/2 \in \big( s(1/3)/2 , 1/6 \big]$) imply that $B$ is bounded in a universal sense. Set $A = 2^B$ (so that $A$ is bounded away from zero and infinity in a universal sense), and exponentiate with base two to obtain $$ g_i = A \cdot 2^{i + O\big(\exp ( -2^{i-1} ) \big)} $$ for $i \geq 1$. Since $\phi_{k+i} = 2e^{-g_i}$, we see then that, for $i \geq k+1$, \begin{equation}\label{e.deltaiformula} \phi_i \, = \, 2 \exp \Big\{ -A \cdot 2^{{i-k} +O\big(\exp ( -2^{i-k-1} ) \big)} \Big\} \, . \end{equation} Similarly as we derived~(\ref{e.mdifferenceratio}), we find that $$ m_j - m_{j-1} \, = \, (m_k - m_{k-1}) \prod_{i = k}^{j-1} \big( \gamma_i^{-1} - 1 \big) $$ for $j \geq k+1$.
By Lemma~\ref{l.omega.asymptotic}(1), \begin{eqnarray*} m_j - m_{j-1} & = & (m_k - m_{k-1}) \prod_{i = k}^{j-1} \Big( \, 4 \exp \Big\{ -A \cdot 2^{i -k+ \kappa_i \exp ( -2^{i-k-1} )} \Big\} + O(1) e^{-A \cdot 2^{i-k + 1/2 }} \, \Big) \\ & = & (m_k - m_{k-1}) 4^{j-k} E_{k,j} \prod_{i = k}^{j-1} \exp \Big\{ -A \cdot 2^{i -k+ \kappa_i \exp ( -2^{i-k-1} )} \Big\} \, , \end{eqnarray*} where the values of $\kappa_i$ are bounded above in absolute value (in a universal sense), and where \begin{eqnarray*} E_{k,j} & = & \prod_{i=k}^{j-1} \Big( 1 + O(1) \exp \big\{ - A 2^{i-k} \big(2^{1/2} - 2^{\kappa_i \exp \{- 2^{i-k-1} \}} \big) \big\} \Big) \\ & = & \prod_{i=k}^{j-1} \Big( 1 + \exp \big\{ - O(1)A \cdot 2 ^{i-k} \big\} \Big) \end{eqnarray*} satisfies $E_{k,j} = E \big( 1 + e^{-O(1)A 2^{j-k}}\big)$ with $$ E = \prod_{i=k}^\infty \Big( 1 + \exp \big\{ - O(1)A \cdot 2 ^{i-k} \big\} \Big) \, . $$ The quantity $E$ is positive and bounded away from zero and infinity universally. Note that \begin{eqnarray*} \sum_{i= k}^{j-1} 2^{i -k+ \kappa_i \exp \{ - 2^{i-k-1} \} } & = & 2^{j-k} - 1 + \sum_{i= k}^{j-1} 2^{i-k} \big( 2^{\kappa_i \exp \{- 2^{i-k-1} \} } - 1 \big) \\ & = & 2^{j-k} - 1 + \rho - \sum_{i= j}^\infty 2^{i-k} \big( 2^{\kappa_i \exp \{ - 2^{i-k-1} \} } - 1 \big) \\ & = & 2^{j-k} - 1 + \rho + O(1) e^{-2^{j-k}O(1)} \, , \end{eqnarray*} where $\rho = \sum_{i= k}^\infty 2^{i-k} \big( 2^{\kappa_i \exp \{ - 2^{i-k -1} \} } - 1 \big)$.
Thus, $m_j - m_{j-1}$ equals \begin{eqnarray*} & & (m_k - m_{k-1}) 4^{j-k} \exp \big\{ - 2^{j-k}A \big\} E \exp \big\{ A ( 1 - \rho ) \big\} \big( 1 + e^{-O(1)A 2^{j-k}}\big) \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \times \, \, \, \Big( 1 + e^{-O(1)A 2^{j-k}}\Big) \exp \Big\{ A \cdot O(1) e^{-2^{j-k}O(1)} \Big\} \\ & = & (m_k - m_{k-1}) 4^{j-k} \exp \big\{ - 2^{j-k}A \big\} E \exp \big\{ A ( 1 - \rho ) \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, , \end{eqnarray*} where we used that $A = \Theta(1)$---namely, $A$ is bounded away from zero and infinity in a universal sense---for the displayed equality. Set $F$ equal to $E \exp \big\{ A ( 1 - \rho ) \big\}$, and note that this positive expression is bounded away from zero and infinity universally. We find that \begin{equation}\label{e.mjmjminusone} m_j - m_{j-1} = (m_k - m_{k-1})\cdot F \cdot 2^{2(j-k)} \exp \big\{ - 2^{j-k}A \big\} \big( 1 + e^{-O(1) 2^{j-k}}\big) \, , \end{equation} which is the inference that Theorem~\ref{t.ajbj} makes for the sequence of $m$-differences. With $M = m_{j+1} - m_{j-1}$ and $N = n_{j-1} - n_{j+1}$, we have that $a_j = \tfrac{M^2 N}{(M+N)^2}$ and $b_j = \tfrac{M N^2}{(M+N)^2}$ from~(\ref{e.abclaim}). Using the definition of $\beta_j$ in the guise $N = \beta_j M$, and Lemma~\ref{l.omega.asymptotic}(2) with $i=j$, we find that $$ a_j = ( m_{j+1} - m_{j-1}) \tfrac{\beta_j}{(1+\beta_j)^2} \, = \, ( m_{j+1} - m_{j-1}) \big( \phi_j + O(\phi_j^2) \big) $$ and $$ b_j = ( m_{j+1} - m_{j-1}) \tfrac{\beta_j^2}{(1+\beta_j)^2} \, = \, ( m_{j+1} - m_{j-1}) \big( \phi_j^2 + O(\phi_j^3) \big) \, . $$ We may use (\ref{e.mjmjminusone}) to replace the quantity $m_{j+1} - m_{j-1}$ in these expressions. 
The expressions in terms of $\phi_i$ may be bounded by means of~(\ref{e.deltaiformula}): \begin{eqnarray*} \phi_j & = & 2 \exp \big\{ - A \cdot 2^{j-k} \big( 1 + O ( \exp \{ - 2^{j-k-1} \} ) \big) \big\} \\ & = & 2 \exp \big\{ - A \cdot 2^{j-k} \big\} \exp \big\{ O (e^{- 2^{j-k}c}) \big\} \, = \, 2 \exp \big\{ - A \cdot 2^{j-k} \big\} \big( 1 + O (e^{- 2^{j-k}c}) \big) \, . \end{eqnarray*} Here, the value of $c$ is positive (and universal) in the second line. We thus obtain the expressions for $a_j$ and $b_j$ in Theorem~\ref{t.ajbj}(1). It remains to derive the asymptotic expression for the quantity $n_{j-1} - n_j$. Here, we use $n_{j-1} - n_j = \phi_j(m_j - m_{j-1})$,~(\ref{e.mjmjminusone}) and the preceding display. {\bf (2).} According to Definition~\ref{d.battlefield}, the battlefield index $k \in \ensuremath{\mathbb{Z}}$ is the unique solution of $\phi_k \in (1/3,3]$. Consider the role-reversal transformation that replaces index $i$ by $2k-i$, and $(a,b,m,n)$ by $(b,a,n,m)$. The resulting system is also a solution of the \textrm{ABMN} system by a minor variation of Proposition~\ref{p.rolereversal}. Write $\hat\phi_i$ for the value of $\phi_i$ in the transformed solution. Then $\hat\phi_i = 1/\phi_{2k+1 -i}$ for $i \in \ensuremath{\mathbb{Z}}$. We see then that $\hat\phi_{k+1} \in [1/3,3)$ (so that $k+1$ is the battlefield index of the transformed system except when $\hat\phi_{k+1} = 1/3$). Theorem~\ref{t.ajbj}(2) thus reduces to Theorem~\ref{t.ajbj}(1), because the proof of the latter operates as well when $\hat\phi_{k+1} = 1/3$ as when $\hat\phi_{k+1} \in (1/3,3)$. \qed \subsection{Consequences of asymptotic decay}\label{s.consequences} We may now complete the proof of Theorem~\ref{t.positiveabmn}. {\bf Proof of Theorem~\ref{t.positiveabmn}(3).} By Theorem~\ref{t.positiveabmn}(2), we know that $m_\infty$, $m_{-\infty}$, $n_\infty$ and $n_{-\infty}$ exist as elements of $\ensuremath{\mathbb{R}} \cup \{ \infty \} \cup \{ - \infty \}$.
Since we know that $m_0$ and $n_0$ belong to $\ensuremath{\mathbb{R}}$, it is enough, in order to exclude the possibility that one of the four quantities is infinite, to argue that $\lim_{i \to \infty} (m_i - m_0) < \infty$, $\lim_{i \to \infty} (m_{-i} - m_0) > - \infty$, $\lim_{i \to \infty} (n_i - n_0) > - \infty$ and $\lim_{i \to \infty} (n_{-i} - n_0) < \infty$. These results follow from the asymptotic expressions for $m_j - m_{j-1}$ and $n_{j-1} - n_j$ in Theorem~\ref{t.ajbj}(1,2). \qed The almost sure occurrence of the unanimity event $U$ is a consequence of Theorem~\ref{t.ajbj}, and we prove it now. {\bf Proof of Theorem~\ref{t.unanimity}(4).} By Theorem~\ref{t.nashabmn}(1), this reduces to Theorem~\ref{t.unanimity}(1,2,3). {\bf (1,2,3).} We abusively write $(S_-,S_+)= (b,a)$ as usual. Theorem~\ref{t.nashabmn} and Theorem~\ref{t.ajbj}(1) imply that, for $i \geq k$, $\tfrac{a_i}{a_i + b_i} = 1 - 2\exp \{ - 2^{i-k}A \} \big( 1 + e^{-O(1) 2^{i-k}}\big)$. Thus, the $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$-probability that every move is won by Maxine equals $$ \prod_{j=i}^\infty \tfrac{a_j}{a_j + b_j} \, = \, \prod_{j=i}^\infty \Big( 1 - 2\exp \{ - 2^{j-k}A \} \big( 1 + e^{-O(1) 2^{j-k}}\big) \Big) \, = \, 1 - 2\exp \{ - 2^{i-k}A \} \big( 1 + e^{-O(1) 2^{i-k}}\big) \, . $$ This bound proves Theorem~\ref{t.unanimity}(2). The corresponding bound for $i \leq k-1$, and the proof of Theorem~\ref{t.unanimity}(3), are similar. It remains then to derive Theorem~\ref{t.unanimity}(1). The displayed and omitted bounds permit us to choose $L \in \ensuremath{\mathbb{N}}$ such that \begin{equation}\label{e.outer} \textrm{if $\vert i - k \vert > L$, then $\ensuremath{\mathbb{P}}_{S_-,S_+}^ i(U) \geq 1/2$} \, . \end{equation} The {\em status report} $\mathsf{Stat}:\ensuremath{\mathbb{N}} \to \{ I,O,F\}$ is a random process defined under the law $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ that we will use to prove Theorem~\ref{t.unanimity}(1).
This process takes values in a three-point set whose labels denote `inner', `outer' and `final'. To record the status report, we will iteratively specify an increasing sequence $\big\{ \tau_i: i \in \ensuremath{\mathbb{N}} \big\}$ of times valued in $\N \cup \{ \infty \}$. We set $\tau_0 = 0$. We check whether $\vert X_0 - k \vert \leq L$, where the value of $L$ was specified in the preceding paragraph. If this condition is met, then we set $\mathsf{Stat}(0) = I$. If the condition is not met, we set $\mathsf{Stat}(0) = O$. Let $i \in \N_+$. Suppose that an initial status report $\mathsf{Stat}(j) \in \{I,O,F \}$, $j \in \llbracket 0,i-1 \rrbracket$, and an increasing sequence $\tau_j \in \ensuremath{\mathbb{N}} \cup \{ \infty\}$, $j \in \llbracket 0,i-1 \rrbracket$, have been recorded. If $\mathsf{Stat}(i-1) = F$, we set $\tau_i = \infty$ and $\mathsf{Stat}(i) = F$. If $\mathsf{Stat}(i-1)= I$, we set $\tau_i = \tau_{i-1} + L$. We set $$ \mathsf{Stat}(i) \, = \, \begin{cases} \, I & \text{if $\vert X_{\tau_i} - k \vert \leq L$} \\ \, O & \text{in the other case} \, . \end{cases} $$ If $\mathsf{Stat}(i-1) = O$, we begin to view the process $X$ run forward from time~$\tau_{i-1}$. We watch for the first time~$\sigma \geq \tau_{i-1} + 2$ at which the sequence of observed differences $X_{j+1}-X_j$, $\tau_{i-1} \leq j \leq \sigma - 1$, has assumed both values $-1$ and $1$. If this occasion never occurs, so that $\sigma = \infty$, we set $\mathsf{Stat}(i) = F$ and $\tau_i = \infty$. If the occasion does occur, we set $\tau_i = \sigma$. The preceding display is then used to set $\mathsf{Stat}(i)$. This completes the description of the iterative scheme for the generic later step indexed by $i \geq 1$. The status report $\mathsf{Stat}:\N \to \{I,O,F\}$ is not a Markov process, but it has simple properties that serve to prove that unanimity~$U$ is an almost sure event under $\ensuremath{\mathbb{P}}_{S_-,S_+}^i$ for any $i \in \ensuremath{\mathbb{Z}}$. 
Consider then the process $\mathsf{Stat}$ under the just-mentioned law. By construction, $\mathsf{Stat}$ arrives, and is absorbed, in $F$ precisely when the event $U$ occurs. To prove Theorem~\ref{t.unanimity}(1), our task is thus to show that $\mathsf{Stat}$ almost surely reaches $F$. Two properties suffice to show this. {\em Property~$I$.} Let $j \in \N_+$. Suppose given a status report history $\mathsf{Stat}(i)$, $i \in \llbracket 0,j-1\rrbracket$, for which $\mathsf{Stat}(j-1) = I$. There exists a constant $c > 0$ that does not depend on this history such that the conditional probability that $\mathsf{Stat}(j) = O$ is at least $c$. {\em Property~$O$.} Let $j \in \N_+$. Suppose given a history $\mathsf{Stat}(i)$, $i \in \llbracket 0,j-1\rrbracket$, for which $\mathsf{Stat}(j-1) = O$. The conditional probability that $\mathsf{Stat}(j) = F$ is at least one-half. Properties $I$ and $O$ show that, whatever the status report history up to a given moment, there is probability at least $c/2$ that one of the next two entries in the report is~$F$. Thus, it is inevitable that the report will eventually contain an entry equal to~$F$. The proof of Theorem~\ref{t.unanimity}(1) has thus been reduced to the task of deriving the two properties. The proofs of Properties~$I$ and~$O$ depend on a {\em claim}. This states that all the information in any report history $\mathsf{Stat}(i)$, $i \in \llbracket 0,j-1\rrbracket$, in which $F$ is not recorded, is contained in the gameplay history $X_i$, $i \in \llbracket 0,\tau_{j-1} \rrbracket$. The claim may be proved by induction on $j$. The times $\tau_j$ specified above are stopping times for the process $X$, and they are finite when $\mathsf{Stat}(j) \in \{ I,O\}$. This proves the claim. We now prove Property~$I$. The coefficients $a_i$ and $b_i$ are positive by Theorem~\ref{t.nashabmn} and bounded by Theorem~\ref{t.ajbj}. Consider then the event that $X$ makes $L$ rightward jumps from time~$\tau_{j-1}$. 
To find a lower bound on the conditional probability of this event given the circumstance of Property~$I$, note that the claim permits us to further condition on $X$ until time $\tau_{j-1}$. Since $\vert X_{\tau_{j-1}} - k \vert \leq L$, a lower bound is offered by the minimum over $\ell \in \llbracket k-L,k+L \rrbracket$ of the product $\prod_{i=0}^{L-1} \tfrac{a_{\ell+i}}{a_{\ell + i} + b_{\ell + i}}$. This minimum is positive because only finitely many of the positive quantities $a_i$ and $b_i$ are involved. And now we prove Property~$O$. Again, by the claim, we may condition on $X$ until time $\tau_{j-1}$. Since $\vert X_{\tau_{j-1}} - k \vert > L$, we may invoke~(\ref{e.outer}) to show the sought property. This completes the proof of Theorem~\ref{t.unanimity}(1). \qed \section{The Mina margin map}\label{s.allminamm} Here we prove our results concerning the Mina margin map in three subsections. Finite-trail counterparts to the map are defined and estimated in Section~\ref{s.approxmmm}, where Theorem~\ref{t.relativereward} and several consequences are derived. In Section~\ref{s.mmmtransform}, the $\theta^{-1}$- and $\Theta$-transforms of the map are compared, and Theorem~\ref{t.phithetainverse} is proved. In Section~\ref{s.minamarginmap}, the bound $\lambda \leq 0.999904$ of Theorem~\ref{t.minamarginvalues}(3) is derived by explicitly approximating a suitable finite-trail counterpart of $\mathcal{M}$ at a well-chosen value. \subsection{Approximating the Mina margin by its finite-trail counterpart}\label{s.approxmmm} Here we prove Theorem~\ref{t.relativereward}, with the third part contingent on Theorem~\ref{t.minamarginvalues}(3). At the end of the section, we prove the consequent Theorems~\ref{t.minamarginvalues}(1,2);~\ref{t.nashequil.prelim};~\ref{t.solutions}; and~\ref{t.nashequil}(1,2). \subsubsection{An explicit form for the finite-trail Mina margin map} \begin{lemma}\label{l.ecinvariance} Let $x \in (0,\infty)$. 
For $k,\ell \in \ensuremath{\mathbb{Z}} \cup \{ \infty\} \cup \{ - \infty\}$, $k < \ell$, the value of $\frac{n_k - n_\ell}{m_\ell - m_k} \in (0,\infty)$ is a constant function of the element $\big\{ (a_i,b_i,m_i,n_i) \in (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2 : i \in \ensuremath{\mathbb{Z}} \big\}$ in the equivalence class~$\mathcal{C}(x)$. \end{lemma} {\em Remark.} When we write expressions $\frac{n_k - n_\ell}{m_\ell - m_k}$ in this section, we refer to the quantities $\frac{n_k - n_\ell}{m_\ell - m_k}(x)$ that the above lemma identifies; the value of $x \in (0,\infty)$ is often understood. {\bf Proof of Lemma~\ref{l.ecinvariance}.} That each expression $\frac{n_k - n_\ell}{m_\ell - m_k}$ is a finite number follows from Theorem~\ref{t.positiveabmn}(1,3). Each expression $\frac{n_k - n_\ell}{m_\ell - m_k}$ is invariant under the translation maps $\chi_{u,v}$, $u,v \in \ensuremath{\mathbb{R}}$, and the dilation maps $\tau_w$, $w \in \ensuremath{\mathbb{R}}$, which are the maps that must be used to interpolate between any two elements of~$\mathcal{C}(x)$. \qed Recall the functions $s_j,c_j,d_j:(0,\infty) \to (0,\infty)$, $j \in \ensuremath{\mathbb{Z}}$, from Definition~\ref{d.stabc}. Set $P_0 = S_0 = 1$. For $k \in \N$, we iteratively specify \begin{equation}\label{e.prodp} P_{k+1}(x) - P_k(x) = \prod_{i=0}^k \big( c_i(x) - 1 \big) \, , \end{equation} and \begin{equation}\label{e.prods} S_{k+1}(x) - S_k(x) = \prod_{i=0}^k \big( d_i(x) - 1 \big) \, . \end{equation} Set $Q_1 = T_1 = 0$. For $k \in \N_+$, we then set \begin{equation}\label{e.prodq} Q_{k+1}(x) - Q_k(x) = \prod_{i=1}^k \big( c_{-i}(x) - 1 \big)^{-1} \, , \end{equation} and \begin{equation}\label{e.prodt} T_{k+1}(x) - T_k(x) = \prod_{i=1}^k \big( d_{-i}(x) - 1 \big)^{-1} \, . \end{equation} \begin{lemma}\label{l.prodinterpret} Let $x$ equal $\phi_0$ from Definition~\ref{d.deltai}. For $k \in \N$, $$ P_k(x) = \tfrac{m_k - m_{-1}}{m_0 - m_{-1}} \, \, \, \, \textrm{and} \, \, \, \, S_k(x) = \tfrac{n_{-1} - n_k}{n_{-1} - n_0} \, . 
$$ For $\ell \in \N_+$, $$ Q_\ell(x) = \tfrac{m_{-1} - m_{-\ell}}{m_0 - m_{-1}} \, \, \, \, \textrm{and} \, \, \, \, T_\ell(x) = \tfrac{n_{-\ell} - n_{-1}}{n_{-1} - n_0} \, . $$ \end{lemma} {\bf Proof.} The claimed formula for $P_k(x)$ is trivial when $k=0$. To prove the general formula for $P_k(x)$, it suffices to argue that $\prod_{i=0}^k \big( c_i(x) - 1 \big)$ equals $\tfrac{m_{k+1} - m_k}{m_0 - m_{-1}}$ for $k \in \N$, and we do this by induction on~$k$. The generic step in the induction is enabled by showing that $c_k(x) -1 = \tfrac{m_{k+1} - m_k}{m_k - m_{k-1}}$, which we obtain as follows: $$ c_k(x) -1 \, = \, \tfrac{1 - \gamma(s_k(x))}{\gamma(s_k(x))} \, = \, \tfrac{1 - \gamma(\phi_k)}{\gamma(\phi_k)} \, = \, \tfrac{m_{k+1} - m_k}{m_k - m_{k-1}} \, , $$ the respective equalities by Definition~\ref{d.alphagamma}; by iterating Proposition~\ref{p.alphagammaess}(s); and by Proposition~\ref{p.alphagammaess}($\gamma$). Likewise, the claimed formula for $Q_\ell(x)$ is trivial when $\ell=1$. Establishing the formula in the general case is a matter of showing that $\prod_{i=1}^\ell \big( c_{-i}(x) - 1 \big)^{-1}$ equals $\tfrac{m_{-\ell} - m_{-\ell-1}}{m_0 - m_{-1}}$ for $\ell \geq 1$. The generic inductive step here amounts to showing that $\big( c_{-\ell}(x) - 1 \big)^{-1} = \tfrac{m_{-\ell} - m_{-\ell-1}}{m_{-\ell + 1} - m_{-\ell}}$ for such $\ell$, and follows, similarly as above, from $\big( c_{-\ell}(x) - 1 \big)^{-1} = \tfrac{\gamma(s_{-\ell}(x))}{1-\gamma(s_{-\ell}(x))}$. The formulas for $S$ and $T$ follow when the changes $$ \textrm{$P \to S$, $Q \to T$, $k \to \ell$, $c \to d$, $\gamma \to \delta$ and $m_i \to -n_i$} $$ are made. \qed Recall from~(\ref{e.minammfinite}) that the finite-trail Mina margin map $\mathcal{M}_{\ell,k}:(0,\infty) \to (0,\infty)$ satisfies $\mathcal{M}_{\ell,k}(x) \, = \, \frac{n_{-\ell} - n_k}{m_k - m_{-\ell}}$ for $k \in \N$ and $\ell \in \N_+$, where $x = \phi_0$. 
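As a concrete check on the telescoping that drives the proof of Lemma~\ref{l.prodinterpret}, the case $k=2$ of the formula for $P_k(x)$ may be expanded by hand, using only the identity $c_i(x) - 1 = \tfrac{m_{i+1} - m_i}{m_i - m_{i-1}}$ derived above:
$$ P_2 \, = \, 1 + \big( c_0(x) - 1 \big) + \big( c_0(x) - 1 \big)\big( c_1(x) - 1 \big) \, = \, \frac{(m_0 - m_{-1}) + (m_1 - m_0) + (m_2 - m_1)}{m_0 - m_{-1}} \, = \, \frac{m_2 - m_{-1}}{m_0 - m_{-1}} \, , $$
in agreement with the claimed formula $P_k(x) = \tfrac{m_k - m_{-1}}{m_0 - m_{-1}}$ at $k = 2$.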
\begin{lemma}\label{l.ratiointerpret} We have that $$ \mathcal{M}_{\ell,k}(x) \, = \, \frac{x(S_k + T_\ell)}{P_k + Q_\ell} $$ for $k \in \N$ and $\ell \in \N_+$. \end{lemma} In reading the proof of this result, recall the notation explained in the remark that follows Lemma~\ref{l.ecinvariance}. {\bf Proof of Lemma~\ref{l.ratiointerpret}.} By Lemma~\ref{l.prodinterpret}, \begin{equation}\label{e.pqstformulas} m_k - m_{-\ell} = (P_k+Q_\ell)(m_0 - m_{-1}) \, \, \, \, \textrm{and} \, \, \, \, n_{-\ell} - n_k = (S_k + T_\ell) (n_{-1} - n_0) \, . \end{equation} But $x = \phi_0$, which is to say, $x = \tfrac{n_{-1} - n_0}{m_0 - m_{-1}}$. We find then that $$ \mathcal{M}_{\ell,k}(x) = \frac{(S_k + T_\ell) (n_{-1} - n_0)}{(P_k+Q_\ell)(m_0 - m_{-1})} = \frac{x(S_k + T_\ell)}{P_k+Q_\ell} \, , $$ as we sought to show. \qed \subsubsection{Estimates for the finite-trail Mina margin map} In this subsection, we derive the following compact-uniform Cauchy sequence property of the finite-trail Mina margin maps. \begin{proposition}\label{p.rkrell} For $k \geq 0$, $\ell \geq 2$ and $1/3 \leq x \leq 3$, $$ \sup_{\substack{i \geq \ell+1 \\ j \geq k+1}} \big\vert \mathcal{M}_{i,j}(x) - \mathcal{M}_{\ell,k}(x) \big\vert \, \leq \, 3^5 2^{2k-2} 6^{1-2^k} + 3^3 2^{\ell-2} 6^{\ell -2^{\ell-1}} \, . $$ \end{proposition} The next lemma assembles key elements for the proof of Proposition~\ref{p.rkrell}. We omit the argument `$(x)$' of $\mathcal{M}_{\cdot,\cdot}$, $P$, $Q$, $S$ and $T$ as we derive this proposition. \begin{lemma}\label{l.pqst} Let $k \in \N$ and $x \in \ensuremath{\mathbb{R}}$. \begin{enumerate} \item For $k \geq 0$ and $x \leq 3$, $P_{k+1} - P_k \leq 2^{2k} 6^{1-2^k}$. \item For $k \geq 1$ and $x \geq 1/3$, $Q_{k+1} - Q_k \leq 2^{2k} 6^{1-2^k}$. \item For $k \geq 0$ and $x \leq 3$, $S_{k+1} - S_k \leq 2^{2k+1} 6^{1-2^{k+1}}$. \item For $\ell \geq 2$ and $x \geq 1/3$, $T_{\ell+1} - T_\ell \leq 3 (12)^{\ell-1} 6^{1-2^{\ell-1}}$. 
\end{enumerate} \end{lemma} Two simple lemmas gather estimates needed to prove Lemma~\ref{l.pqst}. \begin{lemma}\label{l.abounds} \leavevmode \begin{enumerate} \item For $x \in (0,\infty)$, $s(x) \leq x^2/2$. \item For $x \in (0,\infty)$, $c(x) \leq 1 + 2x$. \item For $x \in (0,\infty)$, $c(x) \geq 1+x/2$. \item For $x \in (0,3]$, $d(x) - 1 \leq 1/3$. \item For $x \in (0,\infty)$, $d(x) \geq 2^{-3/2} x^{1/2}$. \end{enumerate} \end{lemma} {\bf Proof: (1).} Since $\omega \geq 1$, $\beta(x) \geq 0$. Thus, Lemma~\ref{l.acsfacts}(3) implies that $s(x) \leq \beta(x)^2/2$. So the result reduces to Lemma~\ref{l.acsfacts}(4). \\ {\bf (2).} By Definition~\ref{d.acs}, $c(x) = \tfrac{(\omega+3)^2}{16} = \tfrac{8x+10+6\omega}{16} \leq 1 + 2x$ where the inequality is due to $\omega = \sqrt{1+ 8x} \leq 1+4x$ for $x \geq 0$. \\ {\bf (3).} We have that $c(x) = \tfrac{8x+10+6\omega}{16} \geq 1 + x/2$ from $\omega \geq 1$. \\ {\bf (4).} By Lemma~\ref{l.acsfacts}(1:$d$), $d(x) -1 \leq d(3) - 1 = 1/3$.\\ {\bf (5).} Recall that $d(x) = \tfrac{(\omega + 3)^2}{8(\omega+1)}$ where $\omega = \sqrt{8x+1}$. Thus, $d(x) \geq (\omega +3)/8 \geq 2^{-3/2}x^{1/2}$. \qed \begin{lemma}\label{l.stbounds} Let $j \in \N_+$. \begin{enumerate} \item For $x \leq 3$, $s_j(x) \leq 2 \cdot 6^{-2^{j-1}}$. \item For $x \geq 1/3$, $s_{-j}(x) \geq 2^{-1} 6^{2^{j-1}}$. \item For $i \geq 1$ and $x \leq 3$, $c_i(x) \leq 1 + 2^2 6^{-2^{i-1}}$. \item For $i \in \ensuremath{\mathbb{Z}}$ and $x \in (0,\infty)$, $d_i(x) - 1 \leq s_i(x)^2$. \item For $i \geq 2$ and $x \geq 1/3$, $d_{-i}(x)-1 \geq 2^{-1}6^{2^{i-2}-1}$. \end{enumerate} \end{lemma} {\bf Proof: (1).} Note that $s(3) = 1/3$ since $\omega(3) = 5$. We may thus use Lemma~\ref{l.abounds}(1) to prove the desired statement by induction. \\ {\bf (2).} Due to the preceding and $s_{-j}(x) = 1/s_j(1/x)$. 
\\ {\bf (3).} By Lemma~\ref{l.acsfacts}(1:$c$), Lemma~\ref{l.stbounds}(1) and Lemma~\ref{l.abounds}(2), $$ c_i(x) = c \big( s_i(x) \big) \leq c \big( 2 \cdot 6^{-2^{i-1}} \big) \leq 1 + 2^2 6^{-2^{i-1}} $$ for $i \geq 1$ and $x \leq 3$. \\ {\bf (4).} It is enough to show that $d(x) \leq 1 + x^2$. To see this, note that $d(x) - 1 = \delta(x)^{-1} - 1$. From $1 - \delta(x) = \big( 1- \tfrac{1}{\beta(x) +1} \big)^2$ and Lemma~\ref{l.acsfacts}(4), we find that $\delta(x) \geq 1 - \tfrac{x^2}{(1+x)^2}$, so that $d(x) - 1 \leq \tfrac{x^2}{1+2x} \leq x^2$. \\ {\bf (5).} Note that $d_{-i}(x) = d\big(s_{-i}(x)\big) \geq d\big(2^{-1}6^{2^{i-1}}\big) \geq 2^{-2}6^{2^{i-2}}$, where the first inequality is due to Lemma~\ref{l.acsfacts}(1:$d$) and Lemma~\ref{l.stbounds}(2), and the second to Lemma~\ref{l.abounds}(5). From this, the sought result follows. \qed {\bf Proof of Lemma~\ref{l.pqst}: (1).} Note that $c(x) \leq 4$ for $x \in (0,3]$ by Lemma~\ref{l.acsfacts}(1:$c$) and $c(3)=4$. Thus we bound the first term in the product in~(\ref{e.prodp}). Bounding the latter terms by Lemma~\ref{l.stbounds}(3), we find that $$ P_{k+1} - P_k \, = \, \prod_{i=0}^k \big( c_i(x) - 1 \big) \, \leq \, 3 \prod_{i=1}^k 2^2 6^{-2^{i-1}} \, , $$ whence the sought result. {\bf (2).} Note that $$ c_{-i}(x) -1 = c \big( s_{-i}(x)\big) -1 \geq c \big( 2^{-1} 6^{2^{i-1}} \big) -1 \geq 2^{-2}6^{2^{i-1}} \, , $$ where the first inequality holds when $x \geq 1/3$ in view of Lemma~\ref{l.acsfacts}(1:$c$) and Lemma~\ref{l.stbounds}(2); the second is due to Lemma~\ref{l.abounds}(3). By~(\ref{e.prodq}), $Q_{k+1} -Q_k \leq \prod_{i=1}^k 2^2 6^{-2^{i-1}} = 2^{2k} 6^{1 - 2^k}$, whence Lemma~\ref{l.pqst}(2). 
{\bf (3).} Note that $$ S_{k+1} - S_k \leq 3^{-1}\prod_{i=1}^k 2^2 6^{-2^i} = 2^{2k+1} 6^{1-2^{k+1}} \, , $$ where, in the first inequality, the first term in the product expression in~(\ref{e.prods}) is bounded by use of Lemma~\ref{l.abounds}(4), and the later terms are taken care of by the bounds $d_i(x) -1 \leq s_i(x)^2 \leq 2^2 6^{-2^i}$, which are valid for $x \leq 3$ and $i \geq 1$ in view of Lemma~\ref{l.stbounds}(1,4). {\bf (4).} Since $s_{-1}(1/3) = 3$, Proposition~\ref{p.sminusone} and Lemma~\ref{l.acsfacts}(1:$s$) imply that $s_{-1}(x) \geq 3$ for $x \geq 1/3$. And since $d(3) = 4/3$, the same result implies that $\big(d_{-1}(x) - 1\big)^{-1} \leq 3$ for such $x$. Applying these bounds alongside Lemma~\ref{l.stbounds}(5) to~(\ref{e.prodt}), we see that $$ T_{\ell+1} - T_\ell \, \leq \, 3 \cdot \prod_{i=2}^\ell 2 \cdot 6^{1 - 2^{i-2}} \, = \, 3 (12)^{\ell-1} 6^{1-2^{\ell-1}} $$ for $x \geq 1/3$ and $\ell \geq 2$. Whence Lemma~\ref{l.pqst}(4). \qed Two further lemmas will permit the derivation of Proposition~\ref{p.rkrell} from Lemma~\ref{l.pqst}. \begin{lemma}\label{l.lub} We have that $$ x^{-1} \big\vert \mathcal{M}_{\ell,k+1} - \mathcal{M}_{\ell,k} \big\vert \, \leq \, \max \, \Big\{ \, S_{k+1} - S_k \, , \, (S_k + T_\ell)(P_{k+1}-P_k) \, \Big\} \, , $$ and $$ x^{-1} \big\vert \mathcal{M}_{\ell+1,k} - \mathcal{M}_{\ell,k} \big\vert \, \leq \, \max \, \Big\{ \, T_{\ell+1} - T_\ell \, , \, (S_k + T_\ell)(Q_{\ell+1}-Q_\ell) \, \Big\} \, . 
$$ \end{lemma} {\bf Proof.} Since $P_j \geq 1$ for $j \in \N$ and $Q_j \geq 0$ for $j \in \N_+$, the denominators in~(\ref{e.lubone}) and~(\ref{e.lubtwo}) below are at least one, so it is enough to show that \begin{equation}\label{e.lubone} x^{-1} \big\vert \mathcal{M}_{\ell,k+1} - \mathcal{M}_{\ell,k} \big\vert \, \leq \, \max \, \bigg\{ \, \frac{S_{k+1} - S_k}{P_{k+1}+Q_\ell} \, , \, \frac{(S_k + T_\ell)(P_{k+1}-P_k)}{(P_{k+1}+Q_\ell)(P_k+Q_\ell)} \, \bigg\} \, , \end{equation} and \begin{equation}\label{e.lubtwo} x^{-1} \big\vert \mathcal{M}_{\ell+1,k} - \mathcal{M}_{\ell,k} \big\vert \, \leq \, \max \, \bigg\{ \, \frac{T_{\ell+1} - T_\ell}{P_k+Q_{\ell+1}} \, , \, \frac{(S_k + T_\ell)(Q_{\ell+1}-Q_\ell)}{(P_k+Q_{\ell+1})(P_k+Q_\ell)} \, \bigg\} \, . \end{equation} Note that \begin{eqnarray} x^{-1} \big( \mathcal{M}_{\ell,k+1} - \mathcal{M}_{\ell,k} \big) & = & \frac{S_{k+1}+T_\ell}{P_{k+1} + Q_\ell} - \frac{S_k+T_\ell}{ P_k + Q_\ell} \nonumber \\ & = & \frac{(S_{k+1} + T_\ell)(P_k + Q_\ell) - (S_k + T_\ell)(P_{k+1} + Q_\ell)}{(P_{k+1} + Q_\ell)(P_k + Q_\ell)} \, . \nonumber \end{eqnarray} The numerator in the latter term equals $ (P_k + Q_\ell) (S_{k+1} - S_k) - (S_k+T_\ell)(P_{k+1} - P_k)$. Since this is a difference of positive terms, and the right-hand denominator above is positive, we obtain~(\ref{e.lubone}). Note further that \begin{eqnarray} x^{-1} \big( \mathcal{M}_{\ell+1,k} - \mathcal{M}_{\ell,k} \big) & = & \frac{S_k+T_{\ell+1}}{P_k+ Q_{\ell+1}} - \frac{S_k+T_\ell}{ P_k + Q_\ell} \nonumber \\ & = & \frac{(S_k + T_{\ell+1})(P_k + Q_\ell) - (S_k + T_\ell)(P_k + Q_{\ell+1})}{(P_k + Q_{\ell+1})(P_k + Q_\ell)} \, . \nonumber \end{eqnarray} In this case, the numerator in the last line is $(P_k + Q_\ell) (T_{\ell+1} - T_\ell) - (S_k+T_\ell)(Q_{\ell+1} - Q_\ell)$. By reasoning as we did above, we obtain~(\ref{e.lubtwo}). This completes the proof of Lemma~\ref{l.lub}. \qed \begin{lemma}\label{l.sup} \leavevmode \begin{enumerate} \item For $x \leq 3$, $\sup_{k \geq 1} S_k \leq 3/2$. \item For $x \geq 1/3$, $\sup_{k \geq 1} T_k \leq 12$. 
\end{enumerate} \end{lemma} {\bf Proof: (1).} By $S_0 =1$ and Lemma~\ref{l.pqst}(3), $\sup_{k \geq 1} S_k \leq 1 + \sum_{k=0}^\infty 2^{2k+1}6^{1-2^{k+1}} = 1 + 3^{-1} + 3^{-3} + 2^{-2}3^{-7} + \cdots = 1.37048\cdots \leq 3/2$. {\bf (2).} Recall that, by definition, $T_1 = 0$. The quantity $T_2$ equals $\big(d_{-1}(x) - 1\big)^{-1}$ which is at most $3$ when $x \geq 1/3$, as we noted in the proof of Lemma~\ref{l.pqst}(4). Using these alongside Lemma~\ref{l.pqst}(4), we find that $\sup_{k \geq 1} T_k \leq 3 + \sum_{\ell = 2}^\infty 3 (12)^{\ell-1} 6^{1-2^{\ell-1}} = 3 + \sum_{k=0}^\infty 2^k 6^{k+3 - 2^{k+1}} \leq 12$. \qed {\bf Proof of Proposition~\ref{p.rkrell}.} By the first bound of Lemma~\ref{l.lub}, Lemma~\ref{l.sup}(1,2) and Lemma~\ref{l.pqst}(1,3), we have that $$ x^{-1} \big\vert \mathcal{M}_{\ell,k+1} - \mathcal{M}_{\ell,k} \big\vert \leq \max \big\{ 2^{2k +1} 6^{1 - 2^{k+1}} , 2^{2k-4} 6^{4-2^k} \big\} $$ for $k \geq 0$ and $\ell \geq 1$, where here we used $\tfrac{27}{2} \cdot 2^{2k} 6^{1 - 2^k} = 2^{2k-4} 6^{4-2^k}$. And by the second bound of Lemma~\ref{l.lub}, Lemma~\ref{l.sup}(1,2) and Lemma~\ref{l.pqst}(2,4), $$ x^{-1} \big\vert \mathcal{M}_{\ell+1,k} - \mathcal{M}_{\ell,k} \big\vert \leq \max \big\{ 3 (12)^{\ell - 1} 6^{1 - 2^{\ell-1}} , \tfrac{27}{2} 2^{2\ell} 6^{1-2^\ell} \big\} $$ for $k \geq 0$ and $\ell \geq 2$. In the first of the two displayed maximums, it is the second expression which is the greater for the stated ranges of $k$ and $\ell$; in the second maximum, it is the first expression. Set $g(i) = 2^{2i-1} 6^{1-2^i}$ and $h(j) = (12)^{j-1} 6^{1 - 2^{j-1}}$, and note that $g(i+1)/g(i) \leq 1/3$ and $h(j+1)/h(j) \leq 1/3$ provided that $i \geq 1$ and $j \geq 2$. What we learn is that $$ x^{-1} \sup_{\substack{i \geq \ell+1 \\ j \geq k +1}} \big\vert \mathcal{M}_{i,j} - \mathcal{M}_{\ell,k} \big\vert \, \leq \, \tfrac{3}{2} \cdot \Big( 3^3 2^{2k-1} 6^{1-2^k} + 3(12)^{\ell - 1} 6^{1-2^{\ell - 1}} \Big) $$ for $k \geq 0$, $\ell \geq 2$ and $1/3 \leq x \leq 3$. 
Using $x \leq 3$, and rewriting the product of three with the right-hand side of this display, we obtain Proposition~\ref{p.rkrell}. \qed \subsubsection{Proofs via the finite-trail Mina margin map} {\bf Proof of Theorem~\ref{t.relativereward}(1).} Note that \begin{equation}\label{e.rkkconvergence} \mathcal{M}_{k,k}(x) \, = \, \frac{n_{-k} - n_k}{m_k - m_{-k}} \, \to \, \frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} \, = \, \mathcal{M}(x) \end{equation} where the convergence, which is in the limit $k \to \infty$, is explained by the proof of Theorem~\ref{t.positiveabmn}(3); the latter equality here is due to the specification of $\mathcal{M}(x)$ in Definition~\ref{d.r} and to Lemma~\ref{l.ecinvariance}. Note next that, for $i \in \ensuremath{\mathbb{Z}}$, the standard element in $\mathcal{C}\big( s_i(x) \big)$ is equal to the left shift by $i$ places of the standard element in $\mathcal{C}(x)$. Thus, $$ \mathcal{M}_{k,k}\big( s_i(x) \big) \, = \, \frac{n_{-k+i} - n_{k+i}}{m_{k+i} - m_{-k+i}} \, . $$ The left-hand side converges to $\mathcal{M}\big( s_i(x) \big)$ in the limit of high $k$, by~(\ref{e.rkkconvergence}). The right-hand side converges to $\frac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} = \mathcal{M}(x)$ since $m$- and $n$-differences vanish asymptotically at high values of the index by Theorem~\ref{t.ajbj}. Thus we find that $\mathcal{M}\big( s_i(x) \big) = \mathcal{M}(x)$ for $i \in \ensuremath{\mathbb{Z}}$ and $x \in [1/3,3]$. Since $\cup_{i \in \ensuremath{\mathbb{Z}}} s_i[1/3,3] = (0,\infty)$, we see that $\mathcal{M}(x)$ exists for all $x \in (0,\infty)$, and that in fact $\mathcal{M}\big( s_i(x) \big) = \mathcal{M}(x)$ for $i \in \ensuremath{\mathbb{Z}}$ and $x \in (0,\infty)$. This completes the proof of Theorem~\ref{t.relativereward}(1). {\bf Proof of Theorem~\ref{t.relativereward}(2).} We first show that $\mathcal{M}$ is continuous on $(0,\infty)$. 
Proposition~\ref{p.rkrell} shows that $\mathcal{M}_{k,k}$ converges uniformly as $k \to \infty$ on $[1/3,3]$. By~(\ref{e.rkkconvergence}), the limiting function is the restriction of $\mathcal{M}$ to $[1/3,3]$. Since the constituent functions $c_i,d_i:(0,\infty) \to (0,\infty)$, $i \in \ensuremath{\mathbb{Z}}$, are continuous, we see that the map $\mathcal{M}_{k,k}:[1/3,3] \to (0,\infty)$ is continuous for any $k \in \N_+$. Thus, $\mathcal{M}$ is continuous on this interval. But $\mathcal{M}(x) = \mathcal{M}(s(x))$ for $x \in (0,\infty)$ by Theorem~\ref{t.relativereward}(1), and $\mathcal{M}(3) = \mathcal{M}(1/3)$ since $s(3) = 1/3$. Since $s:(0,\infty) \to (0,\infty)$ is seen to be continuous from its specification in Definition~\ref{d.acs}, we confirm that $\mathcal{M}$ is continuous on~$(0,\infty)$. To derive the formula for $\mathcal{M}(x)$ claimed in Theorem~\ref{t.relativereward}(2), note, by decoding the notation for products in Definition~\ref{d.zdefault}, that this formula may be expressed in our present notation in the form $\tfrac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} = \tfrac{x(S_\infty +T_\infty)}{P_\infty + Q_\infty}$, where in fact we have extended this notation to write $*_\infty$ for $\lim_{k \to \infty}*_k$ with $* \in \{P,Q,S,T\}$. Since $x = \tfrac{n_{-1} - n_0}{m_0 - m_{-1}}$, the sought formula is a consequence of $$ m_\infty - m_{-\infty} = (P_\infty + Q_\infty) (m_0 - m_{-1}) \, \, \, \, \textrm{and} \, \, \, \, n_{-\infty} -n_\infty = (S_\infty + T_\infty) (n_{-1} - n_0) \, . $$ To obtain these identities, we take the limit in high $k$ and $\ell$ of the two formulas in~(\ref{e.pqstformulas}), using Theorem~\ref{t.positiveabmn}(3) to justify that the limiting expressions are finite real numbers. This completes the proof of Theorem~\ref{t.relativereward}(2). 
\qed \subsubsection{Some further consequences} In order to prove Theorem~\ref{t.minamarginvalues} and Theorem~\ref{t.relativereward}(3), we now offer a definition of the quantity $\lambda \in (0,1]$ to which these results refer. \begin{definition}\label{D.lambda} We set $\lambda = \inf \{ \mathcal{M}(x): x \in [1/3,3] \}$. \end{definition} \begin{lemma}\label{l.infimumminamarginmap} There exists $x_0 \in [1/3,3]$ such that $\mathcal{M}(x_0) = \lambda$. We have that $$ \lambda \, = \, \inf \{ \mathcal{M}(x): x \in (0,\infty) \} \, . $$ \end{lemma} {\bf Proof.} Since $\mathcal{M}:[1/3,3] \to (0,\infty)$ is continuous by Theorem~\ref{t.relativereward}(2), the infimum is attained on $[1/3,3]$, and we may find $x_0 \in [1/3,3]$ so that $\mathcal{M}(x_0) = \lambda \in (0,\infty)$. The proof of Lemma~\ref{l.battlefield} shows that $\cup_{i \in \ensuremath{\mathbb{Z}}} s_i[1/3,3] = (0,\infty)$. By Theorem~\ref{t.relativereward}(1), we thus see that $\lambda = \inf \{ \mathcal{M}(x): x \in (0,\infty) \}$. \qed \begin{lemma}\label{l.rangeminamarginmap} For $x \in (0,\infty)$, $\mathcal{M}(x^{-1}) = \mathcal{M}(x)^{-1}$. \end{lemma} {\bf Proof.} Recall from Definition~\ref{d.r} that $\mathcal{M}(x) = n^{\rm st}_{-\infty}(x)$ for $x \in (0,\infty)$ is the Mina margin of the standard solution $\big(a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x): i \in \ensuremath{\mathbb{Z}} \big)$. The $\phi_0$-value of this solution is equal to $\tfrac{n^{\rm st}_{-1}(x) - n^{\rm st}_0(x)}{m^{\rm st}_0(x) - m^{\rm st}_{-1}(x)} = x$. By Proposition~\ref{p.rolereversal} and dilation, the quadruple $$ n^{\rm st}_{-\infty}(x)^{-1} \cdot \Big( \, b^{\rm st}_{-i}(x) \, , \, a^{\rm st}_{-i}(x) \, , \, n^{\rm st}_{-i}(x) \, , \, m^{\rm st}_{-i}(x): i \in \ensuremath{\mathbb{Z}} \, \Big) $$ is also a standard \textrm{ABMN} solution. Its $\phi_1$-value equals $\tfrac{m^{\rm st}_0(x) - m^{\rm st}_{-1}(x)}{n^{\rm st}_{-1}(x) - n^{\rm st}_0(x)} = x^{-1}$. 
The left shift by one place of the displayed quadruple is thus a standard \textrm{ABMN} solution whose $\phi_0$-value equals $x^{-1}$. The quantity $\mathcal{M}(x^{-1})$, which by definition equals $n^{\rm st}_{-\infty}(x^{-1})$, is thus found to be equal to $n^{\rm st}_{-\infty}(x)^{-1} \cdot m^{\rm st}_\infty(x) = n^{\rm st}_{-\infty}(x)^{-1} = \mathcal{M}(x)^{-1}$. Here, we used that $m^{\rm st}_\infty(x) = 1$ since the solution $\big(a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x): i \in \ensuremath{\mathbb{Z}} \big)$ is standard. \qed \begin{lemma}\label{l.supremumminamarginmap} We have that $\lambda^{-1} = \sup \{ \mathcal{M}(x): x \in [1/3,3] \} = \sup \{ \mathcal{M}(x): x \in (0,\infty) \}$. Further, there exists $y_0 \in [1/3,3]$ such that $\mathcal{M}(y_0) = \lambda^{-1}$. \end{lemma} {\bf Proof.} The ranges $\mathcal{M}[1/3,3]$ and $\mathcal{M}(0,\infty)$ are invariant under the transformation $z \to z^{-1}$ in view of Lemma~\ref{l.rangeminamarginmap}. The supremum of the continuous function $\mathcal{M}$ is attained on $[1/3,3]$. \qed {\bf Proof of Theorem~\ref{t.relativereward}(3).} The range $\mathcal{M}[1/3,3]$ has maximum $\lambda^{-1}$ and minimum $\lambda$, by Lemmas~\ref{l.infimumminamarginmap} and~\ref{l.supremumminamarginmap}. By the continuity of $\mathcal{M}$ on $[1/3,3]$, $\mathcal{M}[1/3,3]$ is seen to equal $[\lambda,\lambda^{-1}]$. Since $\cup_{i \in \ensuremath{\mathbb{Z}}} s_i[1/3,3] = (0,\infty)$, $\mathcal{M}(0,\infty)$ equals $\mathcal{M}[1/3,3]$. Note that $\lambda \in (0,1]$ since $\mathcal{M}[1/3,3] = [\lambda,\lambda^{-1}]$ and $\mathcal{M}$ is continuous. The remaining assertion that we need to validate, which is that $\lambda$ is at most $0.999904$, is Theorem~\ref{t.minamarginvalues}(3), whose proof will appear in Section~\ref{s.minamarginmap}. 
\qed {\bf Proof of Theorem~\ref{t.minamarginvalues}(1).} Theorem~\ref{t.relativereward}(3) shows that the set of values of the Mina margins of standard positive \textrm{ABMN} solutions is equal to $[\lambda,\lambda^{-1}]$. Now consider an arbitrary positive \textrm{ABMN} solution. The value of the Mina margin is shared between this solution and the equivalent standard solution. Thus, no new values for the Mina margin emerge as the solution set is enlarged from standard to general. {\bf (2).} Consider a positive \textrm{ABMN} solution $(a,b,m,n)$ with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$. By Theorem~\ref{t.positiveabmn}(1), $(a,b,m,n)$ is strict. Thus, $m_{-\infty} < m_\infty$ and $n_\infty < n_{-\infty}$. By Theorem~\ref{t.minamarginvalues}(1), $\tfrac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} \in [\lambda,\lambda^{-1}]$. Conversely, suppose that $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) \in \ensuremath{\mathbb{R}}^4$ satisfies $m_{-\infty} < m_\infty$, $n_\infty < n_{-\infty}$ and $\tfrac{n_{-\infty} - n_\infty}{m_\infty - m_{-\infty}} \in [\lambda,\lambda^{-1}]$. Set $x$ equal to the latter quantity. In the notation of Section~\ref{s.solvingabmn}, the image of the standard solution $\big( a^{\rm st}_i(x),b^{\rm st}_i(x),m^{\rm st}_i(x),n^{\rm st}_i(x) : i \in \ensuremath{\mathbb{Z}} \big)$ under the transformation $\chi_{u,v} \circ \tau_w$, where $u = m_{-\infty}$, $v = n_\infty$ and $w = m_\infty - m_{-\infty}$, is a positive \textrm{ABMN} solution with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty)$. This completes the proof of Theorem~\ref{t.minamarginvalues}(2). \qed {\bf Proof of Theorem~\ref{t.nashequil.prelim}.} Let $x \in (0,\infty)$. By Theorem~\ref{t.nashabmn}, the game ${\rm Standard}(x)$ has a time-invariant Nash equilibrium if and only if there exists a positive \textrm{ABMN} solution whose Mina margin equals $x$. 
The latter condition is equivalent to $x \in [\lambda,\lambda^{-1}]$ by Theorem~\ref{t.minamarginvalues}(1). \qed {\bf Proof of Theorem~\ref{t.solutions}.} Let $y \in [1/3,3]$ with $\mathcal{M}(y) = \lambda^{-1}$ be the value~$y_0$ whose existence is assured by Lemma~\ref{l.supremumminamarginmap}. Let $w \leq y$ be maximal such that $\mathcal{M}(w) = \lambda$, where Theorem~\ref{t.relativereward} assures the existence of this quantity. We have that $w < y$ because $\mathcal{M}$ assigns different values to these two points. Set $z = s_{-1}(w)$. By Proposition~\ref{p.sminusone} and Lemma~\ref{l.acsfacts}(5), $z > w$. Since $\mathcal{M}(z) = \mathcal{M}(w) = \lambda$ by Theorem~\ref{t.relativereward}(1), the maximality of $w$ implies that $z > y$. Now let $x \in (\lambda,\lambda^{-1})$. By the continuity of $\mathcal{M}$, we may find $u \in (w,y)$ and $v \in (y,z)$ such that $\mathcal{M}(u) = \mathcal{M}(v) =x$. Note that $w < u < v < z = s_{-1}(w)$. The quadruples $$ \big(a^{\rm st}_i(u),b^{\rm st}_i(u),m^{\rm st}_i(u),n^{\rm st}_i(u) : i \in \ensuremath{\mathbb{Z}} \big) \, \, \, \, \textrm{and} \, \, \, \, \big(a^{\rm st}_i(v),b^{\rm st}_i(v),m^{\rm st}_i(v),n^{\rm st}_i(v) : i \in \ensuremath{\mathbb{Z}} \big) $$ are standard \textrm{ABMN} solutions of Mina margin~$x$. They are shift inequivalent because $u$ is not equal to $s_i(v)$ for any $i \in \ensuremath{\mathbb{Z}}$. Indeed, the condition $s_i(v) \in [w,s_{-1}(w))$ implies that $i =0$, but $s_0(v) = v \not= u$. This pair of solutions demonstrates that $Q(x) \geq 2$, as required to obtain Theorem~\ref{t.solutions}. \qed We end this subsection by proving Theorem~\ref{t.nashequil}, in part as a consequence of Theorem~\ref{t.relativereward}(1). Now is a convenient moment to derive the next result, which renders rigorous a verbal argument in the first paragraph of Section~\ref{s.solvingabmn}. 
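Before the proof, a minimal numeric sketch may help to fix ideas about the normalization it employs. The code below is an illustration with hypothetical boundary values only: it tracks just the boundary data $(m_{-\infty}, m_\infty, n_{-\infty}, n_\infty)$, whereas in the paper the translation and dilation maps act on entire solutions. A translation sends $m_{-\infty}$ and $n_\infty$ to zero, and a dilation then scales $m_\infty$ to one; the Mina margin is unchanged by both steps.

```python
def mina_margin(m_minus, m_plus, n_minus, n_plus):
    # Mina margin of boundary data: (n_{-inf} - n_inf) / (m_inf - m_{-inf}).
    return (n_minus - n_plus) / (m_plus - m_minus)

def to_standard(m_minus, m_plus, n_minus, n_plus):
    # Translate so that m_{-inf} = 0 and n_inf = 0; then dilate so that m_inf = 1.
    m0, m1 = 0.0, m_plus - m_minus   # translated m-boundary values
    n0, n1 = n_minus - n_plus, 0.0   # translated n-boundary values
    w = 1.0 / m1                     # dilation factor
    return (m0 * w, m1 * w, n0 * w, n1 * w)

# Hypothetical boundary data with m_{-inf} < m_inf and n_inf < n_{-inf}.
data = (2.0, 6.0, 7.0, 1.0)
std = to_standard(*data)
print(std)                                    # (0.0, 1.0, 1.5, 0.0)
print(mina_margin(*data), mina_margin(*std))  # 1.5 1.5
```

After normalization, the Mina margin of the boundary data survives as the sole remaining free parameter, which is the sense in which no new margin values appear when passing from standard solutions to general ones.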
{\bf Proof of Proposition~\ref{p.abmnclassify}.} To prove the two parts of this result, it is enough to argue that there is a unique standard solution, and a unique default solution, to which any positive \textrm{ABMN} solution is equivalent. Suppose then that $(a,b,m,n) = \big\{ (a_i,b_i,m_i,n_i) \in (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2 : i \in \ensuremath{\mathbb{Z}} \big\}$ is a positive \textrm{ABMN} solution. The boundary values $m_{-\infty}$ and $n_\infty$ exist as real numbers by Theorem~\ref{t.positiveabmn}(3). Note then that the translation $(a',b',m',n') = \chi_{-m_{-\infty},-n_\infty}(a,b,m,n)$ has $m'_{-\infty} = n'_\infty = 0$. Write $x = \frac{n'_{-1} - n'_0}{m'_0 - m'_{-1}}$ and $y = m'_\infty - m'_{-\infty}$. By applying the dilation $\tau_u$ to $(a',b',m',n')$, we obtain a default solution if $u = x^{-1}$ and a standard solution if $u = y^{-1}$. We have seen that $\tau_{x^{-1}} \circ \chi_{-m_{-\infty},-n_\infty}(a,b,m,n)$ is a default \textrm{ABMN} solution. It is clear that any variation of the parameters $(x^{-1},m_{-\infty},n_\infty)$ will result in an \textrm{ABMN} solution that fails to be default. Likewise, $\tau_{y^{-1}} \circ \chi_{-m_{-\infty},-n_\infty}(a,b,m,n)$ has been shown to be a standard \textrm{ABMN} solution. Any variation of $(y^{-1},m_{-\infty},n_\infty)$ will result in an \textrm{ABMN} solution that fails to be standard. Thus we complete the proof of Proposition~\ref{p.abmnclassify}. \qed {\bf Proof of Theorem~\ref{t.nashequil}(1).} By Theorem~\ref{t.nashabmn}, a time-invariant Nash equilibrium in ${\rm Standard}(x)$ is the reverse-ordered $(a,b)$-component of a standard \textrm{ABMN} solution whose Mina margin equals $x$. Since Proposition~\ref{p.abmnclassify} implies that standard \textrm{ABMN} solutions are indexed by the value $z \in (0,\infty)$ of their ${\rm CenRatio}$, we obtain Theorem~\ref{t.nashequil}(1).
{\bf (2).} By Theorem~\ref{t.relativereward}(1) and Proposition~\ref{p.sminusone}, $\mathcal{M}(s_k(x))$ equals $\mathcal{M}(x)$ for all $x \in (0,\infty)$ and $k \in \ensuremath{\mathbb{Z}}$. Thus, the set $X$ is the disjoint union of $s_k(Y)$ as $k$ ranges over~$\ensuremath{\mathbb{Z}}$. Proposition~\ref{p.shift} then yields Theorem~\ref{t.nashequil}(2). \qed \subsection{The Mina margin map after domain coordinate change}\label{s.mmmtransform} In this section, we prove Theorem~\ref{t.phithetainverse}. The map $\Theta:\ensuremath{\mathbb{R}} \to (0,\infty)$ is an increasing surjection, so we may set $\Psi = \Theta^{-1}:(0,\infty) \to \ensuremath{\mathbb{R}}$. The proof of the theorem will harness the next result. \begin{lemma}\label{l.thetapsi} There exists a constant $C > 0$ such that $\vert \theta(x) - \Psi(x) \vert \leq C$ for $x \geq 1/3$. \end{lemma} Two further results will serve to prove Lemma~\ref{l.thetapsi}. \begin{lemma}\label{l.psi} We have that $$ \Psi(x)\, = \, \begin{cases} \, \, \log_2 \big( \log_2(x) +1 \big) & \text{if $x \in [1,\infty)$} \, , \\ \, \, - \log_2 \big( - \log_2(x) +1 \big) & \text{if $x \in (0,1)$} \, . \end{cases} $$ \end{lemma} {\bf Proof.} The formulas follow from the expressions $2^{2^x - 1}$ and $2^{-(2^{-x} -1)}$ for $\Theta(x)$ that are respectively valid when $x \geq 0$ and $ x < 0$. \qed For $x \in [1/3,3]$ and $i \in \N$, we write $s_{-i}(x)$ in the form $2^{2^i c_i(x) - 1}$. \begin{lemma}\label{l.sminusi} There exists a constant $C > 0$ and a function $c:[1/3,3] \to (0,\infty)$ such that $\vert c_i(x) - c(x) \vert \leq C 2^{-i}$ for $x \in [1/3,3]$ and $i \in \N$. Further, we have that $\inf \big\{ c(x): x \in [1/3,3] \big\} > 0$.
\end{lemma} {\bf Proof.} From the relation $$ s_{-j}(x) = 2 s_{-(j-1)}(x)^2 + O \big( s_{-(j-1)}(x) \big) $$ and the form $s_{-j}(x) = 2^{2^j c_j - 1}$ (where we write $c_j = c_j(x)$), we find that $$ 2^{2^j c_j - 1} \, = \, 2^{2^j c_{j-1} - 1} + O \big( 2^{2^{j-1} c_{j-1} - 1} \big) \, = \, 2^{2^j c_{j-1} - 1} \Big( 1+ O(1) 2^{-2^{j-1} c_{j-1}} \Big) \, , $$ so that $$ 2^j c_j = 2^j c_{j-1} + O(1) 2^{-2^{j-1}c_{j-1}} $$ and $$ c_j = c_{j-1} + O(1) 2^{-j - 2^{j-1}c_{j-1}} \, , \, \, \, \, \textrm{whence} \, \, \, \, \vert c_j - c_{j-1} \vert \leq O(1) 2^{-j} \, . $$ We thus learn that there exists $c = c(x) \in [0,\infty)$ such that $\vert c_j - c \vert \leq O(1) 2^{-j}$. We may exclude the possibility that $c$ equals zero because, in this case, we would have that $c_j \leq O(1) 2^{-j}$, which would imply the false assertion that $s_{-j}(x) = 2^{2^j c_j - 1} = O(1)$ is bounded above independently of $j \in \N$. We now argue that $\inf \big\{ c(x): x \in [1/3,3] \big\} > 0$. This follows from $c(1/3) > 0$ and the fact that $s_{-i}(x)$ is increasing in $x \in [1/3,3]$ for each $i \in \N$, so that $c(x) \geq c(1/3)$ for such $x$. This completes the proof of Lemma~\ref{l.sminusi}. \qed {\bf Proof of Lemma~\ref{l.thetapsi}.} Note that $s_{-i}(x) \geq 3$ for $i \in \N_+$ and $x \in [1/3,3]$. By Lemma~\ref{l.psi} and $s_{-i}(x) = 2^{2^i c_i(x) - 1}$, we find that $\Psi\big(s_{-i}(x)\big) = i + \log_2 c_i$. Thus, $$ \big\vert \Psi \big( s_{-i}(x) \big) - i - \log_2 c \big\vert = \big\vert \log_2 \big( c_i/c \big) \big\vert = \big\vert \log_2 \big( 1 + \tfrac{c_i - c}{c} \big) \big\vert \leq O(1) 2^{-i} \ , $$ the inequality due to Lemma~\ref{l.sminusi}. On the other hand, $\theta\big( s_{-i}(x) \big)$ is equal to the unique value $J \in \ensuremath{\mathbb{Z}}$ such that $s_J \big( s_{-i}(x) \big) \in [1/3,3)$. When $x \in [1/3,3)$, we see then that $J = i$. Since any $y \geq 1/3$ may be written as $y = s_{-i}(x)$ with $i = \theta(y) \in \N$ and $x = s_i(y) \in [1/3,3)$, and since $\log_2 c$ is bounded on $[1/3,3]$ (by Lemma~\ref{l.sminusi} and the boundedness of $c_0(x) = \log_2 x + 1$ on this interval), we find then that $\vert \Psi(y) - \theta(y) \vert = O(1)$ for $y \geq 1/3$, as we sought to do in proving Lemma~\ref{l.thetapsi}.
\qed {\bf Proof of Theorem~\ref{t.phithetainverse}(1,2).} We first claim that \begin{equation}\label{e.claimsk} s_k \big( \theta^{-1}(x+k) \big) = \theta^{-1}(x) \, . \end{equation} To check this, note that $\theta \big( s_k(z) \big) = \theta(z) - k$, so that $$ \theta \Big( s_k \big( \theta^{-1}(x+k) \big) \Big) = \theta \big( \theta^{-1}(x+k)\big) - k = (x+k) - k = x \, , $$ as desired. Note then that $$ \psi(x) = \mathcal{M} \big( \theta^{-1}(x) \big) = \mathcal{M} \Big( s \big( \theta^{-1}(x+1) \big) \Big) = \mathcal{M} \big( \theta^{-1}(x+1) \big) = \psi(x+1) $$ where the respective equalities are due to the definition of $\psi$; the above claim with $k=1$; Theorem~\ref{t.relativereward}(1); and the definition of~$\psi$ once more. We have obtained Theorem~\ref{t.phithetainverse}(1). Note next that $\mathcal{S}_k \mathsf{StSol}(x+k)$ equals \begin{eqnarray*} & & \mathcal{S}_k \Big( a^{\rm st}\big( \theta^{-1}(x+k) \big), b^{\rm st}\big( \theta^{-1}(x+k) \big), m^{\rm st}\big( \theta^{-1}(x+k) \big), n^{\rm st}\big( \theta^{-1}(x+k) \big) \Big) \\ & = & \bigg( a^{\rm st} \Big( s_k\big(\theta^{-1}(x+k) \big) \Big) , b^{\rm st} \Big( s_k\big(\theta^{-1}(x+k) \big) \Big) , m^{\rm st} \Big( s_k\big(\theta^{-1}(x+k) \big) \Big) , n^{\rm st} \Big( s_k\big(\theta^{-1}(x+k) \big) \Big) \bigg) \, , \end{eqnarray*} the latter equality by Proposition~\ref{p.shift}. Applying~(\ref{e.claimsk}), we find that $$ \mathcal{S}_k \mathsf{StSol}(x+k) = \Big( a^{\rm st}\big( \theta^{-1}(x) \big), b^{\rm st}\big( \theta^{-1}(x) \big), m^{\rm st}\big( \theta^{-1}(x) \big), n^{\rm st}\big( \theta^{-1}(x) \big) \Big) = \mathsf{StSol}(x) \, . $$ This implies that $\mathsf{StSol}(x+k) = \mathcal{S}_{-k} \mathsf{StSol}(x)$, which is what Theorem~\ref{t.phithetainverse}(2) asserts. {\bf (3).} Let $z \in \ensuremath{\mathbb{R}}$.
Since $\vert \Psi(x) - \theta(x) \vert = O(1)$ for all $x \geq 1/3$ by Lemma~\ref{l.thetapsi}, and $\Psi$ and $\theta$ are increasing, we have that $$ \Theta\big(z - O(1)\big) \leq \theta^{-1}(z) \leq \Theta\big(z + O(1)\big) \, . $$ Substituting the expressions $2^{2^z - 1}$ and $2^{-(2^{-z} -1)}$ for $\Theta(z)$, valid when $z \geq 0$ and $z < 0$, we obtain Theorem~\ref{t.phithetainverse}(3). \qed \subsection{The Mina margin map is not identically equal to one}\label{s.minamarginmap} Here we prove Theorem~\ref{t.minamarginvalues}(3). We will obtain evidence for Conjecture~\ref{c.lambda} as we do so. \begin{proposition}\label{p.thevalueofminamargin} The value of $\mathcal{M}_{5,4}(0.58)$ lies in the interval $[0.9999032032 , 0.9999032038]$. \end{proposition} {\bf Proof of Theorem~\ref{t.minamarginvalues}(3).} Since $\lambda$ is equal to the infimum of $\mathcal{M}(x)$ over $x \in (0,\infty)$, we have that $\lambda \leq \mathcal{M}(0.58)$. Note that Proposition~\ref{p.rkrell} with $(\ell,k) = (5,4)$ implies that the value $\mathcal{M}(z) = \lim_j \mathcal{M}_{j,j}(z)$ (where $z = 0.58 \in [1/3,3]$) satisfies $$ \big\vert \mathcal{M}(z) - \mathcal{M}_{5,4}(z) \big\vert \, \leq \, 3.3 \times 10^{-8} + 5.95 \times 10^{-7} \, \leq \, 6.3 \times 10^{-7} \, . $$ Applying the upper bound on $\mathcal{M}_{5,4}(z)$ in Proposition~\ref{p.thevalueofminamargin}, we find that $$ \mathcal{M}(0.58) \, \leq \, 0.9999032038 + 6.3 \times 10^{-7} \, = \, 0.9999038338 \, . $$ We confirm then that $\lambda$, being at most $\mathcal{M}(0.58)$, is bounded above by $0.999904$. This completes the proof of Theorem~\ref{t.minamarginvalues}(3).
\qed Numerical work with Mathematica indicates that $\mathcal{M}_{5,4}(0.5809)$ equals $0.999903202726$ to twelve decimal places; that $\mathcal{M}_{5,4}(0.5809)$ equals $\min \big\{ \mathcal{M}_{5,4}(x): x \in [1/3,3] \cap 10^{-4}\ensuremath{\mathbb{Z}} \big\}$; and that the error between $\inf \big\{ \mathcal{M}_{5,4}(x): x \in [1/3,3] \big\}$ and this minimum may jeopardise only the final digit of the twelve. If this evidence is admitted, then the preceding proof yields that $\inf \big\{ \mathcal{M}(x): x \in [1/3,3] \big\}$ is at least $\mathcal{M}_{5,4}(0.5809) -10^{-11} - 6.3 \times 10^{-7} \geq 0.99990257 \geq 0.999902$; whence Conjecture~\ref{c.lambda}. In fact, the conjecture is cautious: $\lambda$ is likely to exceed $0.999903$, as an estimate on a higher indexed $\mathcal{M}_{\ell,k}$ might show. The formula for $(0,\infty) \to \ensuremath{\mathbb{R}}: x \mapsto \mathcal{M}_{5,4}(x)$ in Lemma~\ref{l.ratiointerpret} may be recorded explicitly---it involves several applications of such operations as inverse and square-root---but it is messy, and would occupy several pages of standard print. Arguably a claim that mathematical software evaluates this function at $0.58$ to be within the range claimed by Proposition~\ref{p.thevalueofminamargin} may be admitted as a proof of this result. But a diligent reader who is given this information has no practical way to confirm it. In the following proof, we provide an approximation scheme, from above and below, for computing $\mathcal{M}_{5,4}(0.58)$. All quantities in the scheme are values in $10^{-10}\ensuremath{\mathbb{Z}}$, and the proof is reduced to verifying about fifty explicit statements of the form `if $x=u$, then $f(x)=v$', where $u$ and $v$ are given elements of $10^{-10}\ensuremath{\mathbb{Z}}$, and $f$ is the application of a function such as $s$, $c$ or $d$ from Definition~\ref{d.acs} followed by a rounding down or up on to the lattice $10^{-10}\ensuremath{\mathbb{Z}}$.
In this way, the diligent reader has a mundane but manageable task to verify every detail of the derivation of Proposition~\ref{p.thevalueofminamargin}. We note that, were $\mathcal{M}_{5,4}$ shown to be differentiable, and a suitable bound on its derivative found, then a similarly explicit record of the values of $\mathcal{M}_{5,4}$ on a fine enough mesh of points in $[1/3,3]$ would furnish a proof of Conjecture~\ref{c.lambda}. If the number of points in the mesh were large, then a manual check on the explicit bounds would be impracticable, so that in such a case the proof would be at least modestly computer-assisted. We now turn to introducing and implementing the approximation scheme. Let $k \in \N$, and set $$ \lfloor x \rfloor_k = 10^{-k} \lfloor 10^k x \rfloor \, \, \, \, \textrm{and} \, \, \, \, \lceil x \rceil^k = 10^{-k} \lfloor 10^k x \rfloor + 10^{-k} \in \ensuremath{\mathbb{R}} \, . $$ Namely, the real line is partitioned $$ \ensuremath{\mathbb{R}} \, = \, \bigcup_{j \in \ensuremath{\mathbb{Z}}} \, 10^{-k} \cdot [j,j+1) $$ into intervals whose endpoints are consecutive elements in the lattice $10^{-k}\ensuremath{\mathbb{Z}}\,$; $\big[ \lfloor x \rfloor_k , \lceil x \rceil^k \big)$ is the unique interval in the partition that contains~$x$. From the outset, we set the parameter $k$ equal to ten, and omit to denote it. It should thus be understood that $\lfloor x \rfloor$ and $\lceil x \rceil$ denote $\lfloor x \rfloor_{10}$ and $\lceil x \rceil^{10}$, rather than the usual integer roundings of $x$. Recall Definition~\ref{d.acs}. We specify $s^\uparrow,s^\downarrow,c^\uparrow,c^\downarrow,d^\uparrow,d^\downarrow:(0,\infty) \to (0,\infty)$ by setting $$ *^\uparrow(x) = \lceil *(x) \rceil \, \, \, \textrm{and} \, \, \, *^\downarrow(x) = \lfloor *(x) \rfloor \, \, \, \textrm{for} \, \, \, * \in \{s,c,d\} \, .
$$ For $x \in (0,\infty)$, we specify $\big\{ s^\uparrow_i(x): i \in \ensuremath{\mathbb{Z}} \big\}$ and $\big\{ s^\downarrow_i(x): i \in \ensuremath{\mathbb{Z}} \big\}$, the upper and lower $s$-sequences evaluated at $x$. Indeed, we set $s_0^\uparrow(x) = s_0^\downarrow(x) = x$. For $i \geq 1$, we then iteratively set $s^\uparrow_i(x) = s^\uparrow \big(s^\uparrow_{i-1}(x)\big)$ and $s^\downarrow_i(x) = s^\downarrow \big(s^\downarrow_{i-1}(x)\big)$. We further set $s_{-1}^\uparrow (x) = \lceil s_{-1}(x) \rceil$ and $s_{-1}^\downarrow (x) = \lfloor s_{-1}(x) \rfloor$. For $i \geq 2$, we iteratively set $s_{-i}^\uparrow(x) = s_{-1}^\uparrow \big( s_{-i+1}^\uparrow(x) \big)$ and $s_{-i}^\downarrow(x) = s_{-1}^\downarrow \big( s_{-i+1}^\downarrow(x) \big)$. Set $z =0.58$. We will write $s_i^\uparrow = s_i^\uparrow(z)$ and $s_i^\downarrow = s_i^\downarrow(z)$ for $i \in \ensuremath{\mathbb{Z}}$. In this way, the value of $z = 0.58$ is understood. We further define the upper and lower $c$- and $d$-sequences, $\big\{c^\uparrow_i,c^\downarrow_i,d^\uparrow_i,d^\downarrow_i: i \in \ensuremath{\mathbb{Z}} \big\}$, where again the value of $z$ is understood. We set $c_i^\uparrow = c^\uparrow(s_i^\uparrow)$, $c_i^\downarrow = c^\downarrow(s_i^\downarrow)$, $d_i^\uparrow = d^\uparrow(s_i^\uparrow)$ and $d_i^\downarrow = d^\downarrow(s_i^\downarrow)$. The data $\big\{ s^\uparrow_i,s^\downarrow_i,c^\uparrow_i,c^\downarrow_i,d^\uparrow_i,d^\downarrow_i \big\}$, $i \in \llbracket -4,3 \rrbracket$, are forty-eight elements of $10^{-10}\ensuremath{\mathbb{Z}}$. These values are presented in Tables~\ref{t.one} and~\ref{t.two}. Two of the values are known without computation: $s_0^\uparrow = s_0^\downarrow =0.58$. The remaining values may be computed, one at a time, where each step is a computation ${\rm INPUT} \rightarrow {\rm OUTPUT}$ of one element of the lattice $10^{-10}\ensuremath{\mathbb{Z}}$ from another.
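The rounding maps and the iteration of the upper and lower sequences are simple to mechanise. The following Python sketch is ours and forms no part of the proof: the names \texttt{down}, \texttt{up} and \texttt{upper\_s\_sequence} are hypothetical, and the map $s$ of Definition~\ref{d.acs} is represented by a placeholder argument. Exact decimal arithmetic is used because most elements of $10^{-10}\ensuremath{\mathbb{Z}}$ are not exactly representable in binary floating point.

```python
from decimal import Decimal, ROUND_FLOOR

GRID = Decimal("1e-10")  # the lattice 10^{-10} Z; the parameter k equals ten

def down(x: Decimal) -> Decimal:
    # 10^{-10} * floor(10^{10} x): the left endpoint of x's lattice interval
    return x.quantize(GRID, rounding=ROUND_FLOOR)

def up(x: Decimal) -> Decimal:
    # down(x) + 10^{-10}: the right endpoint; note up(x) > x even when x lies on the lattice
    return down(x) + GRID

def upper_s_sequence(x: Decimal, s, n: int):
    # s_0 is taken without rounding; thereafter each application of the map s
    # (a placeholder here) is followed by a rounding up, so that errors
    # accumulate only in the favourable direction.
    seq = [x]
    for _ in range(n):
        seq.append(up(s(seq[-1])))
    return seq
```

The lower sequence is obtained by replacing \texttt{up} with \texttt{down}, and the backward sequences apply $s_{-1}$ in place of $s$.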
Each step takes one of the forms $s_i^\uparrow \to s^\uparrow_{i+1}$ for $i \in \llbracket 0,2 \rrbracket$; $s_i^\uparrow \to s^\uparrow_{i-1}$ for $i \in \llbracket -3,0 \rrbracket$; $s_i^\uparrow \to c_i^\uparrow$ or $s_i^\uparrow \to d_i^\uparrow$ for $i \in \llbracket -4,3 \rrbracket$; or it is formed by replacing $\uparrow$ by $\downarrow$ in one of these steps. Forty-six such steps lead to the completion of the two tables, given the two initial entries. \begin{table} \begin{center} \begin{tabular}{| c | c | c |} \hline $i$ & $s_i^\uparrow$ & $s_i^\downarrow$ \\ \hline -4 & 954911606.03 & 954911605.92 \\ -3 & 21848.5122538904 & 21848.5122525938 \\ -2 & 102.3071054647 &102.3071054616\\ -1 & 5.3556473847 & 5.3556473846 \\ 0 & 0.5800000000 & 0.5800000000 \\ 1 & 0.0504077253 & 0.0504077252 \\ 2 & 0.0010408205 & 0.0010408204 \\ 3 & 0.0000005392 & 0.0000005391 \\ \hline \end{tabular} \caption{Values of $s_i^\uparrow$ and $s_i^\downarrow$ for $i \in \llbracket -4,3 \rrbracket$}\label{t.one} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c |} \hline $i$ & $c_i^\uparrow$ & $c_i^\downarrow$ & $d_i^\uparrow$ & $d_i^\downarrow$ \\ \hline -4 & 477488579.78 & 477488579.73 & 10926.0060411432 & 10926.0060404948 \\ -3 & 11081.6603248978 & 11081.6603242447 & 52.8859257466 & 52.8859257450 \\ -2 & 62.5133614707 & 62.5133614689 & 4.2201465577 & 4.2201465576 \\ -1 & 5.7859121540 & 5.7859121538 & 1.5182994418 & 1.5182994417 \\ 0 & 1.8055756566 & 1.8055756565 & 1.0700124766 & 1.0700124765 \\ 1 & 1.0944264319 & 1.0944264316 & 1.0019497202 & 1.0019497201 \\ 2 & 1.0020784046 & 1.0020784043 & 1.0000010767 & 1.0000010766 \\ 3 & 1.0000010785 & 1.0000010782 & 1.0000000001 & 1.0000000000 \\ \hline \end{tabular} \caption{Values of $c_i^\uparrow$, $c_i^\downarrow$, $d_i^\uparrow$ and $d_i^\downarrow$ for $i \in \llbracket -4,3 \rrbracket$}\label{t.two} \end{center} \end{table} According to Lemma~\ref{l.ratiointerpret}, $$ \mathcal{M}_{5,4}(z) \, = \, 
\frac{z(S_4+T_5)}{P_4 + Q_5} \, , $$ where $$ P_4 = 1 + (c_0 - 1) + (c_0 - 1) (c_1 - 1) + (c_0 - 1) (c_1 - 1) (c_2 - 1) + (c_0 - 1) (c_1 - 1) (c_2 - 1) (c_3 - 1) \, ; $$ \begin{eqnarray*} Q_5 & = & \big( c_{-1} -1 \big)^{-1} + \big( c_{-1} -1 \big)^{-1} \big( c_{-2} -1 \big)^{-1} + \big( c_{-1} -1 \big)^{-1} \big( c_{-2} -1 \big)^{-1} \big( c_{-3} -1 \big)^{-1} \\ & & \qquad \qquad \qquad + \, \big( c_{-1} -1 \big)^{-1} \big( c_{-2} -1 \big)^{-1} \big( c_{-3} -1 \big)^{-1} \big( c_{-4} -1 \big)^{-1} \, ; \end{eqnarray*} $$ S_4 = 1 + (d_0 - 1) + (d_0 - 1) (d_1 - 1) + (d_0 - 1) (d_1 - 1) (d_2 - 1) + (d_0 - 1) (d_1 - 1) (d_2 - 1) (d_3 - 1) \, ; $$ and \begin{eqnarray*} T_5 & = & \big( d_{-1} -1 \big)^{-1} + \big( d_{-1} -1 \big)^{-1} \big( d_{-2} -1 \big)^{-1} + \big( d_{-1} -1 \big)^{-1} \big( d_{-2} -1 \big)^{-1} \big( d_{-3} -1 \big)^{-1} \\ & & \qquad \qquad \qquad + \, \big( d_{-1} -1 \big)^{-1} \big( d_{-2} -1 \big)^{-1} \big( d_{-3} -1 \big)^{-1} \big( d_{-4} -1 \big)^{-1} \, . \end{eqnarray*} We further specify quantities $*^\uparrow$ and $*^\downarrow$, where $* \in \{ P_4, Q_5, S_4, T_5 \}$. To do so, we record variable dependence in the form $P_4 = P_4(c_0,c_1,c_2,c_3)$, $S_4 = S_4(d_0,d_1,d_2,d_3)$, $Q_5 = Q_5(c_{-1},c_{-2},c_{-3},c_{-4})$ and $T_5 = T_5(d_{-1},d_{-2},d_{-3},d_{-4})$. 
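As a sketch of how the verification may be mechanised---this code is ours, with hypothetical names, and forms no part of the derivation---the quantities $P_4$, $Q_5$, $S_4$ and $T_5$ may be evaluated at the upper and lower values of $c_i$ and $d_i$ recorded in Tables~\ref{t.one} and~\ref{t.two}. Plain floating-point arithmetic suffices here, its relative error being far smaller than the $10^{-10}$ resolution of the tabulated data.

```python
# Upper (u) and lower (d) tabulated values of c_i and d_i, i in [-4,3] (Tables 1 and 2).
c_u = {-4: 477488579.78, -3: 11081.6603248978, -2: 62.5133614707, -1: 5.7859121540,
        0: 1.8055756566, 1: 1.0944264319, 2: 1.0020784046, 3: 1.0000010785}
c_d = {-4: 477488579.73, -3: 11081.6603242447, -2: 62.5133614689, -1: 5.7859121538,
        0: 1.8055756565, 1: 1.0944264316, 2: 1.0020784043, 3: 1.0000010782}
d_u = {-4: 10926.0060411432, -3: 52.8859257466, -2: 4.2201465577, -1: 1.5182994418,
        0: 1.0700124766, 1: 1.0019497202, 2: 1.0000010767, 3: 1.0000000001}
d_d = {-4: 10926.0060404948, -3: 52.8859257450, -2: 4.2201465576, -1: 1.5182994417,
        0: 1.0700124765, 1: 1.0019497201, 2: 1.0000010766, 3: 1.0000000000}

def forward_sum(v):
    # P_4 (with v the c-values) or S_4 (with v the d-values)
    total, prod = 1.0, 1.0
    for i in range(4):
        prod *= v[i] - 1.0
        total += prod
    return total

def backward_sum(v):
    # Q_5 (with v the c-values) or T_5 (with v the d-values)
    total, prod = 0.0, 1.0
    for i in range(1, 5):
        prod /= v[-i] - 1.0
        total += prod
    return total

z = 0.58
# Q_5 and T_5 are decreasing in their variables, whence the reversed roles of u and d.
M_up = z * (forward_sum(d_u) + backward_sum(d_d)) / (forward_sum(c_d) + backward_sum(c_u))
M_dn = z * (forward_sum(d_d) + backward_sum(d_u)) / (forward_sum(c_u) + backward_sum(c_d))
```

The resulting upper and lower brackets for $\mathcal{M}_{5,4}(0.58)$ are consistent with the interval claimed by Proposition~\ref{p.thevalueofminamargin}.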
We may then set $$ P_4^\uparrow = \lceil P_4(c^\uparrow_0,c^\uparrow_1,c^\uparrow_2,c^\uparrow_3) \rceil \, \, \, \, \textrm{and} \, \, \, \, P_4^\downarrow = \lfloor P_4(c^\downarrow_0,c^\downarrow_1,c^\downarrow_2,c^\downarrow_3) \rfloor \, ; $$ $$ S_4^\uparrow = \lceil S_4(d^\uparrow_0,d^\uparrow_1,d^\uparrow_2,d^\uparrow_3) \rceil \, \, \, \, \textrm{and} \, \, \, \, S_4^\downarrow = \lfloor S_4(d^\downarrow_0,d^\downarrow_1,d^\downarrow_2,d^\downarrow_3) \rfloor \, ; $$ $$ Q_5^\uparrow = \lceil Q_5(c^\downarrow_{-1},c^\downarrow_{-2},c^\downarrow_{-3},c^\downarrow_{-4}) \rceil \, \, \, \, \textrm{and} \, \, \, \, Q_5^\downarrow = \lfloor Q_5(c^\uparrow_{-1},c^\uparrow_{-2},c^\uparrow_{-3},c^\uparrow_{-4}) \rfloor \, ; $$ and $$ T_5^\uparrow = \lceil T_5(d^\downarrow_{-1},d^\downarrow_{-2},d^\downarrow_{-3},d^\downarrow_{-4}) \rceil \, \, \, \, \textrm{and} \, \, \, \, T_5^\downarrow = \lfloor T_5(d^\uparrow_{-1},d^\uparrow_{-2},d^\uparrow_{-3},d^\uparrow_{-4}) \rfloor \, . $$ (Note the reversals of the uses of $\downarrow$ and $\uparrow$ in the arguments for $Q_5$ and $T_5$: these quantities are decreasing in their variables.) The tables then permit us to record the values (all of which are elements of the lattice $10^{-10}\ensuremath{\mathbb{Z}}$) \begin{eqnarray} S_4^\uparrow & = & 1.0701489815 \, \, \, , \, \, \, S_4^\downarrow = 1.0701489813 \label{e.stpqvalues} \\ T_5^\uparrow& = & 2.5400964392 \, \, \, , \, \, \, T_5^\downarrow = 2.5400964386 \nonumber \\ P_4^\uparrow & = & 1.8818013910 \, \, \, , \, \, \, P_4^\downarrow =1.8818013906 \nonumber \\ Q_5^\uparrow & = & 0.2123436589 \, \, \, , \, \, \, Q_5^\downarrow = 0.2123436587 \, .
\nonumber \end{eqnarray} Next we specify two further elements of $10^{-10}\ensuremath{\mathbb{Z}}$: \begin{equation}\label{e.rupdown} \mathcal{M}^\uparrow_{5,4}(z) \, = \, \biggl\lceil \, \frac{z(S^\uparrow_4+T^\uparrow_5)}{P^\downarrow_4 + Q^\downarrow_5} \, \biggr\rceil \, \, \, \, \textrm{and} \, \, \, \, \mathcal{M}^\downarrow_{5,4}(z) \, = \, \biggl\lfloor \, \frac{z(S^\downarrow_4+T^\downarrow_5)}{P^\uparrow_4 + Q^\uparrow_5} \, \biggr\rfloor \, . \end{equation} \begin{lemma}\label{l.fiveshort} Let $i \in \ensuremath{\mathbb{Z}}$. \begin{enumerate} \item $s_i^\downarrow \leq s_i \leq s_i^\uparrow$. \item $c_i^\downarrow \leq c_i \leq c_i^\uparrow$. \item $d_i^\downarrow \leq d_i \leq d_i^\uparrow$. \item $P_4^\downarrow \leq P_4 \leq P_4^\uparrow$, $Q_5^\downarrow \leq Q_5 \leq Q_5^\uparrow$, $S_4^\downarrow \leq S_4 \leq S_4^\uparrow$ and $T_5^\downarrow \leq T_5 \leq T_5^\uparrow$. \item $\mathcal{M}_{5,4}^\downarrow(z) \leq \mathcal{M}_{5,4}(z) \leq \mathcal{M}_{5,4}^\uparrow(z)$. \end{enumerate} \end{lemma} {\bf Proof.} Note that $s^\downarrow(x) \leq s(x) \leq s^\uparrow(x)$ for $x \in (0,\infty)$ by the definitions of $s^\downarrow$ and $s^\uparrow$. By induction on $i \geq 1$, we will show that $s_i^\uparrow \geq s_i$. Indeed, note that $s_i^\uparrow = s^\uparrow(s_{i-1}^\uparrow) \geq s(s_{i-1}^\uparrow) \geq s(s_{i-1}) = s_i$, where the latter inequality is due to the inductive hypothesis at index $i-1$ and to Lemma~\ref{l.acsfacts}(1:$s$). We also prove that $s_{-i}^\uparrow \geq s_{-i}$ for $i \geq 1$ by induction on $i$.
In this regard, note that $s_{-i-1}^\uparrow = s_{-1}^\uparrow(s_{-i}^\uparrow) \geq s_{-1}(s_{-i}^\uparrow) \geq s_{-1}(s_{-i}) = s_{-1-i}$, where the first bound is due to $s_{-1}^\uparrow(x) \geq s_{-1}(x)$ for $x \in (0,\infty)$, which follows from the definition of $s_{-1}^\uparrow$; the second is due to the inductive hypothesis at index $i$ and $x \to s_{-1}(x)$ being increasing, which fact follows from Proposition~\ref{p.sminusone} and Lemma~\ref{l.acsfacts}(1:$s$). Similar arguments prove that $s_i^\downarrow \leq s_i$ for $i \in \ensuremath{\mathbb{Z}}$. {\bf (2).} Note that $c^\uparrow(s_i^\uparrow) \geq c(s_i^\uparrow) \geq c(s_i) = c_i$, where the first bound is due to the definition of $c^\uparrow$ and the second is due to $s_i^\uparrow \geq s_i$ and Lemma~\ref{l.acsfacts}(1:$c$). Similarly may we show that $c_i^\downarrow \leq c_i$. {\bf (3).} This is similar to the preceding part. {\bf (4).} Note that $P_4$ is an increasing function of the variables $c_i$, $i \in \llbracket 0,3 \rrbracket$; $Q_5$ is decreasing in $c_{-i}$, $i \in \intint{4}$; $S_4$ is increasing in $d_i$, $i \in \llbracket 0,3 \rrbracket$; and $T_5$ is decreasing in $d_{-i}$, $i \in \intint{4}$. (The noted properties of $Q_5$ and $T_5$ are valid only insofar as the variables $c_{-i}$ and $d_{-i}$ remain greater than one. But this condition is always met in applications, including the present one.) Given these monotonicities, Lemma~\ref{l.fiveshort}(2) shows that $$ P_4(c^\downarrow_0,c^\downarrow_1,c^\downarrow_2,c^\downarrow_3) \leq P_4 \leq P_4(c^\uparrow_0,c^\uparrow_1,c^\uparrow_2,c^\uparrow_3) \, , $$ so that the monotonicities of $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ prove the assertions concerning $P_4$. The derivation for $Q_5$ is similar. So are the others: Lemma~\ref{l.fiveshort}(3) is used in regard to $S_4$ and to $T_5$.
{\bf (5).} The expression $\mathcal{M}_{5,4}$ is an increasing function of $S_4$ and $T_5$, and it is decreasing in $P_4$ and $Q_5$---thus, we may use the preceding part to reach the desired conclusion. \qed {\bf Proof of Proposition~\ref{p.thevalueofminamargin}.} Using the data~(\ref{e.stpqvalues}), note that the expressions~(\ref{e.rupdown}) have the evaluations $$ \mathcal{M}^\uparrow_{5,4} \, = \, \biggl\lceil \, 0.58 \times \frac{1.0701489815 + 2.5400964392}{1.8818013906 + 0.2123436587} \, \biggr\rceil \, = \, 0.9999032038 $$ and $$ \mathcal{M}^\downarrow_{5,4} \, = \, \biggl\lfloor \, 0.58 \times \frac{1.0701489813 + 2.5400964386}{1.8818013910 + 0.2123436589} \, \biggr\rfloor \, = \, 0.9999032032 \, . $$ By Lemma~\ref{l.fiveshort}(5), we learn that $\mathcal{M}_{5,4}(0.58) \in [0.9999032032 , 0.9999032038]$, as Proposition~\ref{p.thevalueofminamargin} states. \qed \section{Trail prospects}\label{s.prospects} We discuss five topics prompted by the article. \subsection{Properties of the Mina margin map and prospective routes to conjectures}\label{s.conjectureroute} Conjecture~\ref{c.solutions} concerns the level sets of the Mina margin map, and, via~(\ref{e.finitenash}), Conjecture~\ref{c.tine} concerns the level sets of the finite trail counterparts to this map. Consider the $\Theta$-transformed finite-trail Mina margin maps $\mathcal{M}_{j+1,k+1} \circ \Theta: \ensuremath{\mathbb{R}} \to (0,\infty)$ depicted for several pairs~$(j+1,k+1)$ in Figure~\ref{f.tmmm}. In these sketches, there are a total of $2(j+k) - 5$ elements in any level set through which every swerve of the function passes; such level sets are indexed by $[\lambda,\lambda^{-1}]$ for $\lambda =0.999903 \cdots$ up to an error that vanishes in high $j$ and $k$; the functions converge to a limit $\mathcal{M} \circ \Theta: \ensuremath{\mathbb{R}} \to (0,\infty)$, and this limit has level sets with two elements in each period (such as in $\Theta^{-1}(1/3,3]$) for heights in $(\lambda,\lambda^{-1})$.
These claims constitute the content of the two conjectures and they can be said to be visually more-or-less evident. But can they be proved? In regard to Conjecture~\ref{c.solutions} at least, control on derivatives and explicit evaluation on a suitably fine mesh may be a tractable approach: see the discussion regarding Conjecture~\ref{c.lambda} in Section~\ref{s.minamarginmap}. \subsection{The possible existence of further Nash equilibria} We have studied time-invariant Nash equilibria. It is natural to ask whether further Nash equilibria exist. We discuss two directions. \subsubsection{Time-invariant random Nash equilibria} Our formulation of the notion of Nash equilibrium in Section~\ref{s.gamespec} is deterministic. What if time-invariant random play is permitted? A strategy would consist of a set of laws on the non-negative reals indexed by $\ensuremath{\mathbb{Z}}$. When such a strategy is played, the stake offered would be sampled from the law indexed by the present counter location, the sampling being independent of other randomness. To avoid extra notation, we have not formulated this notion in the main part of the article. We do not believe that non-trivial random time-invariant Nash equilibria exist. Indeed, we remarked after the Penny Forfeit Lemma~\ref{l.pennyforfeit} that random play is suboptimal for the one-step game. By iterating this result and invoking the monotonicity in Penny Forfeit argued in the proof of Lemma~\ref{l.onestep}(2), the possibility of a non-trivial role for randomness of the form we have discussed may be excluded. \subsubsection{Nash equilibria that are not time-invariant} A deterministic strategy pair that may not be time-invariant takes the form $(b,a):\ensuremath{\mathbb{Z}} \times \N_+ \to (0,\infty)^2$. 
We may anticipate that, were such a pair a Nash equilibrium, the naturally associated dynamical quadruple $(a,b,m,n)$, specified by suitably modifying Definition~\ref{d.quadruple}, would satisfy a dynamical form \textrm{dABMN} of the ABMN system on $\ensuremath{\mathbb{Z}}$. For simplicity, we describe these equations on a finite trail $\llbracket -K-1,K+1\rrbracket$ and for a finite time interval $\llbracket 0, T \rrbracket$ (so that $K+1,T \in \N_+$). Boundary data is a quadruple $(m_{-(K+1)},m_{K+1},n_{K+1},n_{-(K+1)}) \in \ensuremath{\mathbb{R}}^4$ which equals $(0,1,1,0)$ in the simple symmetric case; and two terminal functions $m_{\rm ter},n_{\rm ter}: \llbracket -K-1,K+1 \rrbracket \to \ensuremath{\mathbb{R}}$. If we write $*_i(j) = *(i,j)$ for $* \in \{a,b,m,n\}$, so that, for example, $a_i(j)$ is the stake offered by Maxine at the $j$\textsuperscript{th} turn in the event that $X_{j-1} =i$, the revised equations are \begin{align*} \big( a_i(j) + b_i(j) \big)\big(m_i(j) + a_i(j) \big) & = a_i(j) m_{i+1}(j+1) + b_i(j) m_{i-1}(j+1) && \qquad \textrm{dABMN}(1) \\ \big(a_i(j) + b_i(j) \big) \big(n_i(j)+b_i(j) \big) & = a_i(j) n_{i+1}(j+1) + b_i(j) n_{i-1}(j+1) &&\qquad \textrm{dABMN}(2) \\ \big(a_i(j) + b_i(j) \big)^2 & = b_i(j) \big( m_{i+1}(j+1) - m_{i-1}(j+1) \big) &&\qquad \textrm{dABMN}(3) \\ \big(a_i(j) + b_i(j) \big)^2 & = a_i(j) \big( n_{i-1}(j+1) - n_{i+1}(j+1) \big) &&\qquad \textrm{dABMN}(4) \, , \end{align*} where $(i,j)$ ranges over $\llbracket -K,K \rrbracket \times \llbracket 0, T-1 \rrbracket$. Boundary conditions enter via \begin{eqnarray*} & & m_{\pm (K+1)}(j) = m_{\pm (K+1)} \, \, , \, \, n_{\pm (K+1)}(j) = n_{\pm (K+1)} \, \textrm{for $j \in \llbracket 0, T \rrbracket$; and} \\ & & m_i(T) = m_{\rm ter}(i) \, \, , \, \, n_i(T) = n_{\rm ter}(i) \, \, \textrm{for $i \in \llbracket -K,K \rrbracket$} \, .
\end{eqnarray*} Of course, any \textrm{ABMN} solution $(a,b,m,n)$ solves $\textrm{dABMN}$ if we extend notation to set $*_i(j) = *_i$ for $(i,j) \in \ensuremath{\mathbb{Z}} \times \N_+$ and $* \in \{a,b,m,n\}$ (and then restrict the domain suitably). Do other solutions of $\textrm{dABMN}$ exist? Certainly there are some such. Points~$(i,j)$ in $\ensuremath{\mathbb{Z}} \times \N_+$ are odd or even according to whether $i+j$ is odd or even. The parity of $j + X_j$ never changes from its initial $j=0$ value in any instance of the trail game. If we select two solutions $(a',b',m',n')$ and $(\hat{a},\hat{b},\hat{m},\hat{n})$ of the ABMN equations, and set $$ (a,b,m,n)(i,j) \, = \, \begin{cases} \, \, (a'_i,b_i',m'_i,n'_i) & \text{when $i+j$ is even} \, , \\ \, \, (\hat{a}_i,\hat{b}_i,\hat{m}_i,\hat{n}_i) & \text{when $i+j$ is odd} \, , \end{cases} $$ then $(a,b,m,n)$ solves $\textrm{dABMN}$ (when suitably restricted in the domain) and Theorem~\ref{t.nashabmn} directly implies that $(b,a)$ is a Nash equilibrium in the trail game. Conceptually, this is not really a new solution, however. Gameplay resides on the odd or even lattice and use of any of these new Nash equilibria will coincide with that of a time-invariant Nash equilibrium in any given game. \begin{figure}[htbp] \centering \includegraphics[width=0.78\textwidth]{DynamicABMN.pdf} \caption{The dynamic ABMN equations $\textrm{dABMN}$ on trail $\llbracket -8,8\rrbracket$ and time interval $\llbracket 0,4200 \rrbracket$ are solved with $m_{\rm ter}:\llbracket -8,8 \rrbracket \to [0,1]$, $m_{\rm ter}(-8)=0$, $m_{\rm ter}(8)=1$, rising sharply from zero to run along a rough plateau at height one-half, and ending with a further sharp rise to height one. We set $n_{\rm ter}(\cdot) = m_{\rm ter}(-\cdot)$ and work with a standard symmetric boundary quadruple~$(0,1,1,0)$. The red curve on the right plot, which is most exposed both on the left and the right, is $m_{\rm ter}$.
The solutions of the equations are depicted at values of $j$ in $\llbracket 0,4200 \rrbracket$ that are multiples of $140$, so that thirty curves excepting the final condition are depicted in each plot. On the left, the $a$-components of $\textrm{dABMN}$ on the open-play set $\llbracket -7,7 \rrbracket$ for the $j$-values in question are shown (with linear interpolation between integers); on the right, the $m$-components on the trail $\llbracket -8,8\rrbracket$ with such interpolation are shown. The curves are coloured on a spectrum leading from red to black as time passes backwards. These curves make a staccato advance (with this flow of time) from the sides to the centre, with the final black curve in each plot, indexed by $j=0$, representing a single battlefield around the origin. The $b$- and $n$-components are formed by reflecting the $a$- and $m$-components in the vertical axis.}\label{f.dynamicabmn} \end{figure} For $k \in \N_+$, the system $\textrm{dABMN}$ can be solved on $\llbracket -K,K \rrbracket \times \llbracket 0,k \rrbracket$ by choosing a given terminal condition $\big\{ \big( m_i(k),n_i(k) \big): i \in \llbracket -K-1,K+1 \rrbracket \big\}$ and iteratively solving $\textrm{dABMN}$ for decreasing values of $j$. In searching for a solution that is not invariant in time, we seek a terminal condition such that, if this condition is imposed even for a very high value of $k$, the solved solution stabilises for bounded values of $j$ to a form that is not time-invariant. We have tested a few terminal conditions; Figure~\ref{f.dynamicabmn} depicts a solution of $\textrm{dABMN}$ on the trail $\llbracket -8,8 \rrbracket$. The $m$-component of the terminal condition, defined on this trail, rises sharply at both ends, and otherwise has the form of a rough plateau of height one-half; the $n$-component is the reflection of the $m$-component in the vertical axis. The region of each sharp rise for the $m$-component represents a battlefield which may be rather stable as time evolves.
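One backward time-step of this iteration admits a closed form: $\textrm{dABMN}(3)$ and $\textrm{dABMN}(4)$ give $b_i(j) \Delta m = a_i(j) \Delta n = \big(a_i(j)+b_i(j)\big)^2$, where $\Delta m = m_{i+1}(j+1) - m_{i-1}(j+1)$ and $\Delta n = n_{i-1}(j+1) - n_{i+1}(j+1)$, so that $a_i(j) = \Delta m^2 \Delta n (\Delta m + \Delta n)^{-2}$ and $b_i(j) = \Delta m \Delta n^2 (\Delta m + \Delta n)^{-2}$; then $\textrm{dABMN}(1)$ and $\textrm{dABMN}(2)$ yield $m_i(j)$ and $n_i(j)$. The following Python sketch records this step. It is ours, with a hypothetical function name, and it presumes $\Delta m, \Delta n > 0$ (that is, $m$ increasing and $n$ decreasing in the counter coordinate at time $j+1$).

```python
def dabmn_backward_step(m, n):
    # m, n: time-(j+1) payoff profiles on the trail [-K-1, K+1] (lists of length 2K+3).
    # Returns stakes a, b at the open-play sites [-K, K] and the time-j profiles,
    # with the boundary entries of m and n held fixed.
    dm = [m[i + 1] - m[i - 1] for i in range(1, len(m) - 1)]
    dn = [n[i - 1] - n[i + 1] for i in range(1, len(n) - 1)]
    # dABMN(3)-(4): b*dm = a*dn = (a+b)^2, solved in closed form
    a = [x * x * y / (x + y) ** 2 for x, y in zip(dm, dn)]
    b = [x * y * y / (x + y) ** 2 for x, y in zip(dm, dn)]
    m_prev, n_prev = list(m), list(n)
    for i in range(len(a)):
        tot = a[i] + b[i]  # equals dm*dn/(dm+dn)
        m_prev[i + 1] = (a[i] * m[i + 2] + b[i] * m[i]) / tot - a[i]  # dABMN(1)
        n_prev[i + 1] = (a[i] * n[i + 2] + b[i] * n[i]) / tot - b[i]  # dABMN(2)
    return a, b, m_prev, n_prev
```

Iterating this step from the terminal data $(m_{\rm ter}, n_{\rm ter})$ for $T$ steps produces the kind of backwards-in-time evolution depicted in Figure~\ref{f.dynamicabmn}.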
As the two plots in Figure~\ref{f.dynamicabmn} show, the two battlefields rapidly approach one another by a short distance, and then remain in a rough stasis in which a gradual movement towards the origin is discernible, before rapidly breaking towards a single central battlefield. A total of $4200$ time-steps are involved in the simulation, with the $j=140$ black curve in each plot (which is the penultimate depicted in the backwards-in-time evolution) showing an eruption towards the centre, and the final black curve in each plot adopting the central location which is in essence the fixed point of the evolution. Battlefield pairs with greater separation may endure much longer, and may present a metastable effect for $\textrm{dABMN}$ that causes these equations to converge very slowly to fixed points. There is no evidence however of non-convergence: our limited investigation has not produced examples that support the notion that further time-invariant Nash equilibria exist beyond the simple parity-based class discussed above. \subsection{Gameboards beyond $\ensuremath{\mathbb{Z}}$} By use of a setup involving directed graphs, self-funded stake-governed random-turn games derived from games such as Hex or chess may be considered. It would be of interest to determine for a suitable class of games whether some of the features of the Trail of Lost Pennies of $\ensuremath{\mathbb{Z}}$ are present more generally. The central ratio $\tfrac{n_{-1} -n_0}{m_0 - m_{-1}}$ is the ratio of changes in mean payoff for Mina and Maxine arising from Mina's victory at the first turn. This or similar quantities may be considered in suitable infinite games, permitting us to ask whether Theorem~\ref{t.nashequil.prelim} generalizes to these games: do Nash equilibria exist precisely when the quantity lies in an interval of the form $[\lambda,\lambda^{-1}]$? 
Do these game-determined $\lambda$-values differ from one by a notably small but positive quantity, as this value for the trail game on~$\ensuremath{\mathbb{Z}}$ appears to differ by about $10^{-4}$? Do more general games share with ours the notion of the battlefield, namely one (or perhaps several) bounded regions of the space of configurations on the gameboard, specified by any given Nash equilibrium, in which players concentrate their stake expenditure, with the outcome of turns occurring therein being highly influential in the overall game? \subsection{Playing the game when the Mina margin is away from one} Theorem~\ref{t.nashequil.prelim} shows that, when the Mina margin lies outside of the narrow interval $[\lambda,\lambda^{-1}]$, no time-invariant Nash equilibria exist for the Trail of Lost Pennies on~$\ensuremath{\mathbb{Z}}$. How then should the game be played in this case? This question can be addressed for a finite trail game, perhaps shedding light on the infinite version. Consider the gameboard $\llbracket -6,6 \rrbracket$ (whose set of open play is $\llbracket -5,5 \rrbracket$), and the associated $\Theta$-transformed Mina margin map $\mathcal{M}_{6,6} \circ \Theta :\ensuremath{\mathbb{R}} \to (0,\infty)$, which is depicted in Figure~\ref{f.tmmm}(left). Select $z = 1 + 10^{-4}$, a value that lies slightly above $\lambda^{-1}$ according to Conjecture~\ref{c.lambda} (so that the purple curve in Figure~\ref{f.tmmm}(left) has turned left off the highway to cross this height). Indeed, the equation $\mathcal{M}_{6,6}(x) = z$ is found (by some trial-and-error work in Mathematica) to have a unique solution in $x \in \ensuremath{\mathbb{R}}$, with this solution taking the form $x = 4.04493$ up to five decimal places. The corresponding standard solution $(a,b,m,n): \llbracket -5,5 \rrbracket \to (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2$ is depicted in Figure~\ref{f.uniquenash}.
\begin{figure}[htbp] \includegraphics[width=1\textwidth]{UniqueNash.pdf} \caption{The unique standard solution $(a,b,m,n): \llbracket -5,5 \rrbracket \to (0,\infty)^2 \times \ensuremath{\mathbb{R}}^2$ for the trail game on $\llbracket -6,6\rrbracket$ with Mina margin equal to $1 + 10^{-4}$ is depicted. The stakes $(a_i,b_i)$ and expected mean payoffs $(m_i,n_i)$ are displayed for $i \in \llbracket 0,5 \rrbracket$ in the left and right charts. Note that the leftmost displayed data is indexed by the origin: the data to the left, indexed by $\llbracket -5,-1 \rrbracket$, is visually indistinguishable from the zero-indexed data in the two displays.}\label{f.uniquenash} \end{figure} The solution has central ratio $\tfrac{n_{-1} - n_0}{m_0 - m_{-1}} = \Theta(x)$ equal to $46538$ up to rounding error: the origin is comfortably to the left of the battlefield index. Indeed, this index lies at four, since $\phi_4 = 0.719 \cdots \in [1/3,3)$. Mina's stake at vertex three is the greater, and she dominates staking, and mean payoffs, at vertices two and below. Were we to consider the analogous solution on longer gameboards $\llbracket -\ell,\ell \rrbracket$, $\ell > 6$, we would see that its battlefield index rises in $\ell$, and that the region around the origin falls progressively more into the territory that Mina controls. In this territory, she vastly outbids Maxine, even though all stakes are tiny. The weak limit in high $\ell$ of the gameplay starting at the origin that is governed by the reverse-ordered $(a,b)$-component of the solution is likely to be a deterministic left-moving walk on~$\ensuremath{\mathbb{Z}}$. The conclusion may seem to be that, when $x$ exceeds $\lambda^{-1}$, the Trail of Lost Pennies on~$\ensuremath{\mathbb{Z}}$ has become uncompetitive because Mina's position is too strong: she should, it appears, win without expenditure. And as usual likewise for Maxine in the opposing case, when $x$ is less than $\lambda$. 
But care is needed in this interpretation. After all, the limit in high~$\ell$ of the stakes offered near the origin is zero for both players, and the double-zero strategy will not gratify Mina's ambition to win without cost. Overall, then, the limit from finite gameboards creates a sense of utter dominance for the player with a favourable value of the Mina margin, but our formal results are agnostic: there are no Nash equilibria in the infinite game; since this is the solution concept we have studied, our results offer no guidance to Mina as she prepares to play the trail game on $\ensuremath{\mathbb{Z}}$ with boundary data specifying a value of the Mina margin that exceeds~$\lambda^{-1}$. \subsection{The game of chicken in the Trail of Lost Pennies} Theorems~\ref{t.nashequil.prelim} and~\ref{t.solutions} show that, for any $x \in (\lambda,\lambda^{-1})$, the trail game with boundary data $(m_{-\infty},m_\infty,n_{-\infty},n_\infty) = (0,1,x,0)$ has at least two distinct time-invariant Nash equilibria of any given integral battlefield index; if $x \in \{\lambda,\lambda^{-1}\}$, then there is at least one such. For any $x \in [\lambda,\lambda^{-1}]$, we may thus find an element $(S_-,S_+) \in \mathcal{S}_0^2 \cap \mathcal{N}$ of battlefield index zero. For $k \in \ensuremath{\mathbb{Z}}$, we denote by $(S_-(k),S_+(k))$ the right-shift by $k$ places of $(S_-,S_+)$. This is an element of $\mathcal{S}_0^2 \cap \mathcal{N}$ of battlefield index~$k$. Suppose that the counter begins at the origin. Game outcomes under the strategy pairs $(S_-(k),S_+(k))$ become more favourable to Mina, and less favourable to Maxine, as the index $k$ increases; as Theorem~\ref{t.unanimity} indicates, the probability of victory for Maxine decays rapidly as $k$ becomes positive. Suppose the game is about to begin, and players must commit to strategies. Mina may consider playing one of the strategies $S_-(k)$ for $k \in \ensuremath{\mathbb{Z}}$.
If her opponent elects to play $S_+(k)$, then Mina would much prefer that the shared value of $k$ be positive; Maxine would naturally prefer a negative choice. But the players must consider the case that the opposing player elects a different value of~$k$. What then? For simplicity, consider the symmetric game where $x=1$. Suppose that $S_+ = a$ and $S_-=b$ in the usual notational abuse, where $a_i = b_{-i}$ for $i \in \ensuremath{\mathbb{Z}}$. (The choice $(a,b) = (a^{\rm st}(3),b^{\rm st}(3))$ meets this condition, as we will see in Proposition~\ref{p.symmetric}(1).) Let $k \in \N_+$. Suppose that Mina chooses between the soft $S_-(-k)$ and the tough $S_-(k)$, while Maxine elects to play either the soft $S_+(k)$ or the tough $S_+(-k)$. By this restriction, we consider a two-person game where each player has two alternatives, and in Table~\ref{t.twobytwo}, we depict mean payoffs in a two-by-two array whose rows index Mina's choice, whose columns index Maxine's, and each of whose coordinates contains a list of Mina's and Maxine's mean payoffs when the indexing strategy pair is played. The good outcome $G$ has value $1 - \exp \{- 2^k O(1) \}$. The medium outcome $M$ takes the form $1/2 - \exp \{- 2^k O(1) \}$. The bad outcome~$B$ has value $\exp \{- 2^k O(1) \}$. And the value $C$ of the catastrophic outcome is ... minus infinity! We will illustrate how to obtain these assertions rather than present formal derivations. The outcomes of $G$ and $B$ arise in the off-diagonal cases, where Nash equilibria are played, so that the claimed forms for $G$ and $B$ arise from Theorem~\ref{t.ajbj} in the sense of the paragraph that follows this theorem. Consider the strategy pair $({\rm Soft}=S_-(-k),{\rm Soft}=S_+(k))$. At the first turn, Mina is playing $k$ units to the right of her presumed location of the battlefield vertex, as if she has as good as lost already. 
But Maxine is playing $k$ units to the left of where she is claiming the battlefield index to be, and also in effect nearly admits defeat. Maxine's and Mina's stakes are $a_{-k}$ and $b_k$: both very small, but equal in our special case. So the first turn victor is chosen by the outcome of a fair coin toss. And this early winner will lose even one later move only with probability $\exp \{- 2^k O(1) \}$ as the estimates in Theorem~\ref{t.ajbj} show, because the victor's stakes rise and her opponent's fall as the counter moves closer to the victor's presumed battlefield location. In the case of $({\rm Tough}=S_-(k),{\rm Tough}=S_+(-k))$ play, a phenomenon opposite to the eventual unanimity of gameplay in Theorem~\ref{t.unanimity} occurs. The counter location at late time has law approaching an equilibrium which heavily charges the origin and a few nearby sites. When the counter moves slightly to the left of the origin, it comes closer to Maxine's presumed battlefield index at $-k$ than it does to Mina's at $k$, so that Maxine's stake rises far higher than Mina's, and the counter is restored towards the origin. An opposing leftward force naturally acts on the counter when it is to the right of the origin. The implicit consensus against lengthy play in a bounded region discussed around Theorem~\ref{t.unanimity} has been broken with double-tough play, and the players are trapped in an unending mutually destructive cycle. \begin{table} \begin{center} \begin{tabular}{| c | c | c | } \hline & ${\rm Maxine \, \, Soft}: S_+(k)$ & ${\rm Maxine \, \, Tough}: S_+(-k)$ \\ \hline ${\rm Mina \, \,Soft}: S_-(-k)$ & M, M & B, G \\ ${\rm Mina \, \,Tough}: S_-(k)$ & G , B & C,C \\ \hline \end{tabular} \caption{Mina and Maxine choose between their components in two given Nash equilibria with battlefield indices $-k$ and $k$, for some $k \in \N_+$. The respective mean payoffs for Mina and Maxine for each of the four strategy pairs are recorded in each entry of the $2 \times 2$ array. 
The possible outcomes are $G = {\rm Good}$, ${M = \rm Medium}$, $B = {\rm Bad}$ and $C = {\rm Catastrophe}$.}\label{t.twobytwo} \end{center} \end{table} In the classic game of chicken~\cite[Chapter~$10$]{Poundstone2011}, two players choose between soft and tough options of swerving or driving straight. When one player drives straight and the other swerves, their payoffs are the pleasure $G$ of winning and the annoyance $B$ of showing weakness. When both swerve, both receive an intermediate value $M$. When both drive straight, the shared outcome is a highly negative $C$ as the cars crash. We see then that the Trail of Lost Pennies on $\ensuremath{\mathbb{Z}}$ embeds the game of chicken. The translation symmetry of $\ensuremath{\mathbb{Z}}$ makes the selection of which Nash equilibrium to play a difficult choice for players who may be infinitely punished for a perhaps unintentionally tough decision. The counterpart embedding of chicken occurs in the finite trail game, where the value of $C$, while often highly negative, is finite. \subsection{Play between people and algorithms} The finite trail $\llbracket -j,k \rrbracket$---perhaps for values of $j$ and $k$ somewhere between one and five---may provide an attractive context for investigating how people or algorithms play the trail game. Given the smallness of $1 - \lambda$ and the multiplicity of Nash equilibria for many of the games with longer trails, it seems fanciful to believe that two people who play the same game repeatedly will typically adhere to such an equilibrium (at least when $j +k$ is high enough). Other strategies may seem natural. \subsubsection{Cooperative behaviour} Trust could be established during iterated play. 
If two players each stake zero throughout a standard symmetric trail game on $\llbracket -k,k \rrbracket$, $k \in \N_+$, whose counter starts at zero, their running costs are zero, and their mean payoffs are one-half (this is because play ends in finite time on a finite trail; we use the $0/0 =1/2$ rule from Section~\ref{s.gamespec}). \subsubsection{Tit-for-tat} A consistent zero strategy has the flaw of being vulnerable to exploitation, and a player in an iterated game may prefer a tit-for-tat approach: stake zero in every game until the opponent makes a positive stake; in the next game, play more aggressively; revert to playing zero if the opponent reacts modestly to the aggressive play. Of course, there are degrees of aggression that may be adopted. The iterated prisoner's dilemma is a classic example where the Nash equilibrium (which proposes uncooperative play) often predicts wrongly how people will play, and where tit-for-tat and variants thereof are commonly adopted strategies for humans~\cite{DalBoFrechette} that have been found in computer-against-computer tournaments to be effective~\cite{AxelrodHamilton}. \subsubsection{The loadsamoney bully} On a finite trail, the loadsamoney bully chooses $\e > 0$ small and consistently stakes $\e$ against an opponent who stakes zero. He plays aggressively when the opponent makes a positive stake: he may react to a stake~$a > 0$ by staking $2a$ at the next turn, for example. This player wins games against a zero-staking opponent while incurring almost no running cost. He seeks to rain financial terror on the opponent who deviates from a zero strategy, by seeming prepared to win the concerned game no matter what the cost. Hoping to create a sense of formidable financial resources, his long-term plan for the iterated game is to cow the opponent into a submissive zero strategy.
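Returning to the chicken embedding recorded in Table~\ref{t.twobytwo}, the two-by-two structure can be checked mechanically. In the sketch below, the payoff values are illustrative stand-ins obeying $G > M > B > C$, with the catastrophic payoff $C$ taken to be minus infinity as in the text; the check confirms that precisely the two off-diagonal (one soft, one tough) strategy pairs are pure Nash equilibria.

```python
# Pure-Nash check for the 2x2 chicken structure of Table t.twobytwo.
# Numeric payoffs are illustrative stand-ins with G > M > B > C = -infinity.
G, M, B, C = 1.0, 0.5, 0.0, float("-inf")
# payoffs[i][j] = (Mina's payoff, Maxine's payoff); index 0 = Soft, 1 = Tough
payoffs = [[(M, M), (B, G)],
           [(G, B), (C, C)]]

def is_nash(i, j):
    # neither player can gain by unilaterally switching her own action
    mina, maxine = payoffs[i][j]
    return mina >= payoffs[1 - i][j][0] and maxine >= payoffs[i][1 - j][1]

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
# exactly the two off-diagonal (one soft, one tough) pairs are equilibria
assert equilibria == [(0, 1), (1, 0)]
```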
\section*{Calculational methods} First-principles calculations based on van der Waals density functional theory were performed within the plane-wave implementation of density functional theory in the ABINIT package~\cite{ABINIT}, which we have adapted from the Siesta~\cite{Siesta} code to incorporate the van der Waals interaction. We adopted Troullier-Martins pseudopotentials~\cite{TM} with a gradient-corrected functional. An energy cutoff of 50 Ry and $\Gamma$-point sampling were used for total energy calculations. \section*{Vibrational frequency} The four types of adsorption sites are shown in Fig.~\ref{fig:sites}. To calculate the stretch frequency for H$_2$ at each of these four sites, we performed a series of calculations varying the bond length of H$_2$, with the center of H$_2$ and the host atoms fixed at their equilibrium positions. The resulting potential-energy curve was used in the Schr{\"o}dinger equation to obtain the eigenvalues and excitation frequencies. A similar calculation was also carried out for isolated H$_2$ to obtain the frequency shift due to the MOF-H$_2$ interaction. The {\sl ab initio} total energies vs H$_2$ internuclear distance are tabulated in Tables~\ref{table:cup_vibpes}$-$\ref{table:bz_vibpes} for H$_2$ at the four types of adsorption sites. \section*{Rotational frequency} In order to calculate the rotational states, we first sample the solid angle to obtain the total energies. The spherical surface was sampled as follows: the polar angle was evenly divided into seven layers, and the azimuthal angle was then sampled by \{1,8,16,24,16,8,1\} points in the corresponding layers from pole to pole of the sphere. We next fit these potential energies with spherical harmonics \begin{equation} V(\theta,\phi) = \sum_{lm} c_{lm} Y_{lm}(\theta,\phi) \end{equation} which was then substituted into the rigid-rotor equation and diagonalized for the rotational energies. We found that fitting with $s$ and $d$ states gave results converged within 1 cm$^{-1}$.
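The one-dimensional eigenvalue solve behind the stretch frequencies can be sketched with a finite-difference Hamiltonian. In the sketch below, a harmonic model potential with known levels stands in for the tabulated {\sl ab initio} curves, and the reduced mass and frequency are illustrative values in atomic units.

```python
import numpy as np

# Finite-difference solve of the 1D Schrodinger equation on a potential
# curve, as used for the H2 stretch: build -(1/(2 mu)) d^2/dr^2 + V(r) on a
# uniform grid, diagonalize, and read off the excitation E1 - E0.
def vibrational_levels(r, V, mu):
    h = r[1] - r[0]
    n = len(r)
    kinetic = (np.diag(np.full(n, 2.0))
               - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / (2.0 * mu * h * h)
    return np.linalg.eigvalsh(kinetic + np.diag(V))

mu = 918.0                       # roughly the H2 reduced mass, in m_e
omega = 0.02                     # model harmonic frequency (hartree)
r = np.linspace(-1.2, 1.2, 801)  # displacement grid about equilibrium
E = vibrational_levels(r, 0.5 * mu * omega**2 * r**2, mu)
# for a harmonic well the lowest excitation is exactly omega
assert abs((E[1] - E[0]) - omega) < 1e-4
```

In the actual calculation the harmonic model is replaced by an interpolation of the tabulated energy-versus-distance data of Tables~\ref{table:cup_vibpes}$-$\ref{table:bz_vibpes}.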
The fitted coefficients are shown in Table~\ref{table:ylmcoefficients}. The calculated rotational energy states are shown in Tables~\ref{table:eigenE_cup}$-$\ref{table:eigenE_bz}. \section*{Wannier function approach for dipole moment and IR intensity} The Wannier functions were calculated with the Wannier90 code~\cite{Wannier90} embedded in ABINIT, and the Brillouin zone was sampled by a $2\times 2\times 2$ Monkhorst-Pack grid. The Wannier centers for the bare MOF were first calculated and used as the initial guess for the H$_2$-loaded system. The change of the MOF Wannier centers upon H$_2$ adsorption was obtained from \begin{equation} \delta \mathbf{r}_n = \mathbf{r}^{MOF+H_2}_n - \mathbf{r}^{MOF}_n \end{equation} where $\mathbf{r}_n$ is the center of the $n$-th Wannier function. Figs.~\ref{fig:wc_o2site}$-$\ref{fig:wc_bzsite} show these changes for the O2, O3 and benzene sites, while the cup site is given in the main text. To calculate the IR intensity, one first needs the derivative of the dipole with respect to the normal coordinate corresponding to the H$_2$ stretch vibration. We used the H$_2$ internuclear distance as an approximation to the stretching normal coordinate, and the derivative was approximated by finite differences. To reduce numerical errors, the bond stretch should be sufficiently large, but still in the linear regime. We found that a stretch of 0.05~\AA\ from the equilibrium bond length was appropriate, as shown in Fig.~\ref{fig:der2dif}.
The third term is the induced dipole on MOF due to $\mathbf{u}^{H_2}_0$ and the fourth term is the induced dipole on H$_2$ due to $\mathbf{u}^{MOF}_0$. These last two terms are second-order corrections. We now derive the expressions for the four terms for cup-site adsorption. \subsection{$\mathbf{u}^{H_2}_0$} Due to the 3-fold symmetry, the electric field ($\vec E$) at the cup site due to the MOF atoms is along the rotation axis, {\sl i.e.} Z. For H$_2$ with its bond oriented along ($\theta,\phi$), the projected fields along and perpendicular to the bond are \begin{subequations} \begin{align} E_{\|} & = E\cos \theta \\ E_{\bot}& = E\sin \theta \end{align} \end{subequations} and the corresponding induced dipole is \begin{subequations} \begin{align} u_{\|} & = E \alpha _{\|} \cos \theta \\ u_{\bot}& = E \alpha _{\bot} \sin \theta \end{align} \end{subequations} where $\alpha _{\|} $ and $\alpha _{\bot} $ are the H$_2$ polarizability along and perpendicular to the bond. In Cartesian coordinates, this gives \begin{subequations} \label{eq:uH20} \begin{align} u_{0X}^{H_2} &= E (\alpha _{\|} - \alpha _{\bot}) \sin \theta \cos \theta \cos \phi \\ u_{0Y}^{H_2} &= E (\alpha _{\|} - \alpha _{\bot}) \sin \theta \cos \theta \sin \phi \\ u_{0Z}^{H_2} &= E (\alpha _{\|} \cos ^2 \theta + \alpha _{\bot} \sin ^2 \theta ) \end{align} \end{subequations} \subsection{$\mathbf{u}^{MOF}_0$} The hydrogen molecule has a permanent quadrupole moment, with tensor $Q_{zz}=-2Q_{xx}=-2Q_{yy}=Q$. For H$_2$ at the origin with orientation ($\theta,\phi$), the quadrupole potential at position P$_1$(X,Y,Z) is \begin{equation} V(\mathbf{r}) = \frac{3Q}{2r^5} \tilde{Z} ^2 - \frac{Q}{2r^3} \end{equation} where $r=(X^2+Y^2+Z^2)^{1/2}$ and $ \tilde{Z} = X \sin \theta \cos \phi + Y \sin \theta \sin \phi + Z \cos \theta $.
The electric field of this potential is \begin{subequations}\label{eq:E0X} \begin{align} E_{0X}(\mathbf{r}) &= \frac{3Q}{2r^7} \left \{ -2r^2\sin \theta \cos \phi \, \tilde{Z} + 5X \tilde{Z}^2 \right \} - \frac{3Q}{2r^5}X \\ E_{0Y}(\mathbf{r}) &= \frac{3Q}{2r^7} \left \{ -2r^2\sin \theta \sin \phi \, \tilde{Z} + 5Y \tilde{Z}^2 \right \} - \frac{3Q}{2r^5}Y \\ E_{0Z}(\mathbf{r}) &= \frac{3Q}{2r^7} \left \{ -2r^2\cos \theta \, \tilde{Z} + 5Z \tilde{Z}^2 \right \} - \frac{3Q}{2r^5}Z \end{align} \end{subequations} The MOF charge density is shifted by this field, which thus leads to an induced dipole. To calculate this induced dipole on the MOF, one may picture an electron at position P$_1$ carrying a partial charge equal to the charge density at P$_1$; this charge has a certain polarizability. The total induced dipole can then be formally calculated by multiplying the electric field by the corresponding polarizability and integrating over the whole MOF space. This procedure is, in effect, an extension of the classical point-charge picture to the continuous-charge-density regime. Assuming the polarizability is isotropic, the final result will have the same dependence on ($\theta,\phi$) as the electric field, since the integration runs over (X,Y,Z) while ($\theta,\phi$) is left unchanged. In other words, we will have an equation of the form of Eq. (5) in the main text. Note that the isotropic assumption is not critical here except in making the final equations simpler. If one had used the whole polarizability tensor, the final result could still be cast into the form of Eq. (5) in the main text. We found that the isotropic assumption gave consistent results for our system. Due to the 3-fold rotation symmetry, there are three equivalent points with equal polarizability in the MOF.
Taking advantage of this symmetry, the sum of the electric fields at the three positions is \begin{subequations} \label{eq:EMOF0} \begin{align} \tilde E_{0X} & = \frac{9Q}{4 r^7}\left\{ \left(3 r^2 - 5Z^2 \right) Z \sin 2 \theta \cos \phi - \frac{5}{2}\left(3XY^2-X^3\right) \sin^2 \theta \cos 2 \phi + \frac{5}{2}\left(3X^2Y-Y^3\right) \sin^2 \theta \sin 2 \phi \right \} \\ \tilde E_{0Y} & = \frac{9Q}{4 r^7}\left \{ \left(3 r^2 - 5Z^2 \right)Z \sin 2 \theta \sin \phi + \frac{5}{2}\left(3X^2Y-Y^3\right) \sin^2 \theta \cos 2 \phi + \frac{5}{2}\left(3XY^2-X^3\right) \sin^2 \theta \sin 2 \phi \right \} \\ \tilde E_{0Z} & = \frac{9Q}{4} \frac{ (5Z^2 -3r^2 )Z}{r^7} (3\cos^2 \theta -1 ) \end{align} \end{subequations} The induced dipole on MOF is therefore given by \begin{subequations} \label{eq:uMOF0} \begin{align} u_{0X}^{MOF} & = C_{01}^{MOF} \sin 2 \theta \cos \phi -C_{02}^{MOF} \sin^2 \theta \cos 2 \phi +C_{03}^{MOF} \sin^2 \theta \sin 2 \phi \\ u_{0Y}^{MOF} & = C_{01}^{MOF} \sin 2 \theta \sin \phi +C_{03}^{MOF} \sin^2 \theta \cos 2 \phi +C_{02}^{MOF} \sin^2 \theta \sin 2 \phi \\ u_{0Z}^{MOF} & = C_{04}^{MOF} (3\cos^2 \theta -1 ) \end{align} \end{subequations} where \begin{subequations} \begin{align} C_{01}^{MOF} &= \int \frac{9Q}{4 r^7} \left(3 r^2 - 5Z^2 \right) Z \alpha^{MOF}(\mathbf{r}) \,d\mathbf{r} \\ C_{02}^{MOF} &= \int \frac{9Q}{4 r^7} \frac{5}{2}\left(3XY^2-X^3\right) \alpha^{MOF}(\mathbf{r}) \,d\mathbf{r} \\ C_{03}^{MOF} &= \int \frac{9Q}{4 r^7} \frac{5}{2}\left(3X^2Y-Y^3\right) \alpha^{MOF}(\mathbf{r}) \,d\mathbf{r} \\ C_{04}^{MOF} &= \int \frac{9Q}{4r^7} (5Z^2 -3r^2 )Z \alpha^{MOF}(\mathbf{r}) \,d\mathbf{r} \end{align} \end{subequations} and the integration runs over the 1/3 irreducible region of MOF, as a result of the 3-fold rotation symmetry. 
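As a consistency check, the symmetrised $Z$-component can be verified numerically by summing the quadrupole field over three points related by $120^\circ$ rotations about $Z$ and comparing against the closed form of Eq.~\eqref{eq:EMOF0}. The charge, orientation and test point below are arbitrary illustrative values.

```python
import numpy as np

# Sum the quadrupole field E_0 (the gradient of the quadrupole potential)
# over three points related by 120-degree rotations about Z, and compare the
# Z-component with its closed form in Eq. (EMOF0).
def quad_field(P, theta, phi, Q):
    X, Y, Z = P
    r = np.sqrt(X*X + Y*Y + Z*Z)
    zt = X*np.sin(theta)*np.cos(phi) + Y*np.sin(theta)*np.sin(phi) + Z*np.cos(theta)
    Ex = 1.5*Q/r**7*(-2*r*r*np.sin(theta)*np.cos(phi)*zt + 5*X*zt*zt) - 1.5*Q*X/r**5
    Ey = 1.5*Q/r**7*(-2*r*r*np.sin(theta)*np.sin(phi)*zt + 5*Y*zt*zt) - 1.5*Q*Y/r**5
    Ez = 1.5*Q/r**7*(-2*r*r*np.cos(theta)*zt + 5*Z*zt*zt) - 1.5*Q*Z/r**5
    return np.array([Ex, Ey, Ez])

def rotated(P, ang):
    # rotate the field point about the Z axis by angle ang
    X, Y, Z = P
    return (np.cos(ang)*X - np.sin(ang)*Y, np.sin(ang)*X + np.cos(ang)*Y, Z)

theta, phi, Q = 0.7, 1.1, 1.0
P = (1.3, -0.4, 2.0)
total = sum(quad_field(rotated(P, k*2*np.pi/3), theta, phi, Q) for k in range(3))
X, Y, Z = P
r = np.sqrt(X*X + Y*Y + Z*Z)
Ez_closed = 2.25*Q*(5*Z*Z - 3*r*r)*Z/r**7*(3*np.cos(theta)**2 - 1)
assert abs(total[2] - Ez_closed) < 1e-10
```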
\subsection{ 2$^{nd}$-order corrections: $\mathbf{u}^{MOF}_1$ and $\mathbf{u}^{H_2}_1$ } The induced dipoles on H$_2$ and MOF, $\mathbf{u}^{H_2}_0$ and $\mathbf{u}^{MOF}_0$, further polarize each other and produce second-order corrections. The dipole $\mathbf{u}^{H_2}_0$ gives an electric field at P$_1$(X,Y,Z) \begin{subequations} \begin{align} E_{1X}(\mathbf{r}) &= \frac{1}{r^5}\left(-u^{H_2}_{0X} r^2 +3\mathbf{u}^{H_2}_0 \cdot \mathbf{r} X \right) \\ E_{1Y}(\mathbf{r}) &= \frac{1}{r^5}\left(-u^{H_2}_{0Y} r^2 +3\mathbf{u}^{H_2}_0 \cdot \mathbf{r} Y \right) \\ E_{1Z}(\mathbf{r}) &= \frac{1}{r^5}\left(-u^{H_2}_{0Z} r^2 +3\mathbf{u}^{H_2}_0 \cdot \mathbf{r} Z \right) \end{align} \end{subequations} Similar to the derivation of $\mathbf{u}^{MOF}_0$, one obtains \begin{subequations}\label{eq:uMOF1} \begin{align} u^{MOF}_{1X} &= C_{11}^{MOF} \sin 2 \theta \cos \phi - C_{12}^{MOF} \sin^2 \theta \cos 2 \phi + C_{13}^{MOF} \sin ^2 \theta \sin 2 \phi \\ u^{MOF}_{1Y} &= C_{11}^{MOF} \sin 2 \theta \sin \phi + C_{12}^{MOF} \sin^2 \theta \sin 2 \phi + C_{13}^{MOF} \sin ^2 \theta \cos 2 \phi \\ u^{MOF}_{1Z} &= C_{14}^{MOF}\cos^2 \theta + C_{15}^{MOF} \end{align} \end{subequations} where \begin{subequations} \begin{align} C_{11}^{MOF} &= \int \left\{ \frac{9Q}{4}\left(\frac{3Z}{r^5}-\frac{5Z^3}{r^7}\right) +\frac{3E}{4}\left( \frac{1}{r^3}-\frac{3Z^2}{r^5}\right)\left( \alpha_{\|} -\alpha_{\perp} \right) \right \} \alpha^{MOF} (\mathbf{r}) \,d\mathbf{r} \\ C_{12}^{MOF} &= \int \left\{ \frac{45Q}{8r^7}\left( 3XY^2 -X^3 \right) \right \} \alpha^{MOF} (\mathbf{r}) \,d\mathbf{r} \\ C_{13}^{MOF} &= \int \left\{ \frac{45Q}{8r^7}\left( 3X^2Y -Y^3 \right) \right \} \alpha^{MOF} (\mathbf{r}) \,d\mathbf{r} \\ C_{14}^{MOF} &= \int \left\{ \frac{27Q}{4}\left(-\frac{3Z}{r^5}+\frac{5Z^3}{r^7}\right) - 3E\left(\frac{1}{r^3}-\frac{3Z^2}{r^5}\right)( \alpha_{\|} -\alpha_{\perp} ) \right \} \alpha^{MOF} (\mathbf{r}) \,d\mathbf{r} \\ C_{15}^{MOF} &= \int \left\{
\frac{9Q}{4}\left(\frac{3Z}{r^5}-\frac{5Z^3}{r^7}\right) - 3E\left(\frac{1}{r^3}-\frac{3Z^2}{r^5}\right)\alpha_{\perp} \right \} \alpha^{MOF} (\mathbf{r}) \,d\mathbf{r} \end{align} \end{subequations} and the integration again runs over 1/3 of the MOF region. Now let us look at the second-order correction on H$_2$, $\mathbf{u}^{H_2}_1$. The hydrogen quadrupole generates an electric field at P$_1$ as given by Eq.~\eqref{eq:E0X}. With the help of the partial charge and local polarizability concept, this field produces a local dipole $ \mathbf{u}^{MOF}_0(\mathbf{r}) = \alpha^{MOF}(\mathbf{r}) \mathbf{E}_0(\mathbf{r}) $ where $\mathbf{E}_0$($\mathbf{r}$) is given in Eq.~\eqref{eq:E0X} and $\alpha^{MOF}(\mathbf{r})$ is assumed to be isotropic. The electric field back at H$_2$ due to this local dipole is \begin{subequations}\label{eq:EH2_1} \begin{align} E^{H_2}_{1X} & = -\frac{\alpha}{r^5} \left \{ E^{MOF}_{0X} (r^2-3X^2) - 3E^{MOF}_{0Y} X Y - 3E^{MOF}_{0Z} X Z \right \} \\ E^{H_2}_{1Y} & = -\frac{\alpha}{r^5} \left \{ E^{MOF}_{0Y} (r^2-3Y^2) - 3E^{MOF}_{0X} X Y - 3E^{MOF}_{0Z} Y Z \right \} \\ E^{H_2}_{1Z} & = -\frac{\alpha}{r^5} \left \{ E^{MOF}_{0Z} (r^2-3Z^2) - 3E^{MOF}_{0X} X Z - 3E^{MOF}_{0Y} Y Z \right \} \end{align} \end{subequations} Inserting Eq.~\eqref{eq:E0X} into Eq.~\eqref{eq:EH2_1}, adding together the three rotationally equivalent points and integrating over the 1/3 MOF region, one finally obtains \begin{subequations}\label{eq:EH21} \begin{align} \tilde E^{H_2}_{1X} &= C^{H_2}_{11} \sin 2 \theta \cos \phi - C^{H_2}_{12} \sin^2 \theta \cos 2 \phi + C^{H_2}_{13} \sin^2 \theta \sin 2 \phi \\ \tilde E^{H_2}_{1Y} &= C^{H_2}_{11} \sin 2 \theta \sin \phi + C^{H_2}_{12} \sin^2 \theta \sin 2 \phi + C^{H_2}_{13} \sin^2 \theta \cos 2 \phi \\ \tilde E^{H_2}_{1Z} &= C^{H_2}_{14} \cos^2 \theta + C^{H_2}_{15} \end{align} \end{subequations} where \begin{subequations} \begin{align} C^{H_2}_{11} &= \int \frac{9\alpha^{MOF}(\mathbf{r}) QZ}{2r^8} \left ( \frac{33}{16} -
\frac{Z^2}{8r^2} - \frac{15Z^4}{16r^4} -\frac{5X^2Y^2}{4r^4} \right ) \,d\mathbf{r}\\ C^{H_2}_{12} &= \int \frac{9\alpha^{MOF}(\mathbf{r}) Q}{2r^{10}}(3Y^2-X^2)X \,d\mathbf{r} \\ C^{H_2}_{13} &= \int \frac{9\alpha^{MOF}(\mathbf{r}) Q}{2r^{10}}(3X^2-Y^2)Y \,d\mathbf{r} \\ C^{H_2}_{14} &= \int \frac{9\alpha^{MOF}(\mathbf{r}) QZ}{2r^8}\left (-\frac{9}{16}+\frac{45}{8}\frac{Z^2}{r^2} +\frac{15}{16}\frac{Z^4}{r^4} + \frac{5X^2Y^2}{4r^4} \right ) \,d\mathbf{r}\\ C^{H_2}_{15} &= \int \frac{9\alpha^{MOF}(\mathbf{r}) QZ}{2r^8} \left ( \frac{57}{16}-\frac{37}{8}\frac{Z^2}{r^2} -\frac{15}{16}\frac{Z^4}{r^4} -\frac{5X^2Y^2}{4r^4} \right )\,d\mathbf{r} \end{align} \end{subequations} and the integral is over 1/3 of the MOF space. Considering the anisotropy of the polarizability of H$_2$, we have \begin{equation} \label{eq:uH21} \mathbf{u}^{H_2}_1 = \left[ \alpha_{\perp} + (\alpha_{\|} - \alpha_{\perp}) \left( \begin{array}{lll} \sin^2 \theta \cos^2 \phi & \sin^2 \theta \sin \phi \cos \phi & \sin \theta \cos \theta \cos \phi \\ \sin^2 \theta \sin \phi \cos \phi & \sin^2 \theta \sin^2 \phi & \sin \theta \cos \theta \sin \phi \\ \sin \theta \cos \theta \cos \phi & \sin \theta \cos \theta \sin \phi & \cos^2 \theta \end{array} \right) \right] \tilde{\mathbf{E}}^{H_2}_1 \end{equation} The anisotropic term imposes a small correction to the first term inside the bracket. For simplicity, we neglect this second term, so that $\mathbf{u}^{H_2}_1$ and $\tilde{\mathbf{E}}^{H_2}_1$ have a simple linear relationship. In particular, they have the same form of dependence on ($\theta,\phi$) as given in Eq.~\eqref{eq:EH21}.
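The bracketed tensor in Eq.~\eqref{eq:uH21} is simply $\alpha_{\perp} I + (\alpha_{\|} - \alpha_{\perp})\,\hat u \hat u^{\mathsf T}$ for bond direction $\hat u(\theta,\phi)$; applied to the on-axis field $(0,0,E)$ it reproduces the closed form of Eq.~\eqref{eq:uH20}, which can be checked directly. The numerical values below are illustrative, not fitted quantities.

```python
import numpy as np

# Build the anisotropic polarizability tensor of Eq. (uH21) as
# a_perp*I + (a_par - a_perp) * outer(u, u) for bond direction u(theta, phi).
def alpha_tensor(a_par, a_perp, theta, phi):
    u = np.array([np.sin(theta)*np.cos(phi),
                  np.sin(theta)*np.sin(phi),
                  np.cos(theta)])
    return a_perp*np.eye(3) + (a_par - a_perp)*np.outer(u, u)

E, a_par, a_perp = 0.01, 6.4, 4.6    # illustrative field and polarizabilities (a.u.)
theta, phi = 0.8, 1.3
d = alpha_tensor(a_par, a_perp, theta, phi) @ np.array([0.0, 0.0, E])
# closed-form Cartesian components of Eq. (uH20)
closed = np.array([
    E*(a_par - a_perp)*np.sin(theta)*np.cos(theta)*np.cos(phi),
    E*(a_par - a_perp)*np.sin(theta)*np.cos(theta)*np.sin(phi),
    E*(a_par*np.cos(theta)**2 + a_perp*np.sin(theta)**2),
])
assert np.allclose(d, closed)
```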
\subsection{Coefficients} From Eq.~\eqref{eq:uH20}, \eqref{eq:uMOF0}, \eqref{eq:uMOF1} and \eqref{eq:EH21}, we conclude that the following equations hold \begin{subequations} \begin{align} u^s_X &= C_1^s \sin 2 \theta \cos \phi - (C_2^s \cos 2 \phi - C_3^s \sin 2 \phi ) \sin^2 \theta \label{eq:ux} \\ u^s_Y &= C_1^s \sin 2 \theta \sin \phi + (C_2^s \sin 2 \phi + C_3^s \cos 2 \phi ) \sin^2 \theta \label{eq:uy} \\ u^s_Z &= C_4^s\cos^2 \theta + C_5^s \label{eq:uz} \end{align} \end{subequations} where $s$ denotes the system and could be H$_2$, MOF or the total. To determine the coefficients C's, we calculated the dipole and the derivative of the dipole with respect to the H$_2$ internuclear distance with the Wannier-function approach for five hydrogen orientations. The Z components of the obtained values were used to fit C$_4$ and C$_5$ in Eq.~\eqref{eq:uz}. As shown in Fig.~\ref{fig:uz_linearity}, good linearity is obtained in agreement with our model. To compute C$_1$, C$_2$ and C$_3$, we pick three {\sl ab initio} calculated values, u$'_x$/u$'_y$ of orientation 4 and u$'_x$ of orientation 5, to solve a 3$\times$3 linear system for the coefficients. To check the values obtained, we substitute them back into Eq.~\eqref{eq:ux} and \eqref{eq:uy} for the other orientations and compare with the {\sl ab initio} results. The comparisons are shown in Tables~\ref{table:compare_deltauxy_MOF} and \ref{table:compare_deltauxy_H2}. Generally consistent results are obtained, although we do see some deviations in the induced dipole on H$_2$, which may be due to the neglect of the anisotropy in Eq.~\eqref{eq:uH21}. However, the absolute magnitude of these deviations is quite small ($<$ 10\%) compared to the total value, which is the sum of the induced dipoles on the MOF and on H$_2$.
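The coefficient extraction just described amounts to one small linear solve: $u_X$ and $u_Y$ are linear in $(C_1, C_2, C_3)$ at any orientation, so three computed components determine the coefficients. In the sketch below, the two orientations and the "true" coefficients are synthetic, chosen only to illustrate Eqs.~\eqref{eq:ux} and \eqref{eq:uy}.

```python
import numpy as np

# Closed forms of Eqs. (ux) and (uy) for given coefficients and orientation.
def ux(C1, C2, C3, th, ph):
    return C1*np.sin(2*th)*np.cos(ph) - (C2*np.cos(2*ph) - C3*np.sin(2*ph))*np.sin(th)**2

def uy(C1, C2, C3, th, ph):
    return C1*np.sin(2*th)*np.sin(ph) + (C2*np.sin(2*ph) + C3*np.cos(2*ph))*np.sin(th)**2

C_true = (0.8, -0.3, 0.1)            # synthetic "true" coefficients
th4, ph4 = 1.0, 0.4                  # hypothetical "orientation 4"
th5, ph5 = 0.5, 1.0                  # hypothetical "orientation 5"
# rows correspond to u_x, u_y at orientation 4 and u_x at orientation 5
A = np.array([
    [np.sin(2*th4)*np.cos(ph4), -np.cos(2*ph4)*np.sin(th4)**2, np.sin(2*ph4)*np.sin(th4)**2],
    [np.sin(2*th4)*np.sin(ph4),  np.sin(2*ph4)*np.sin(th4)**2, np.cos(2*ph4)*np.sin(th4)**2],
    [np.sin(2*th5)*np.cos(ph5), -np.cos(2*ph5)*np.sin(th5)**2, np.sin(2*ph5)*np.sin(th5)**2],
])
rhs = np.array([ux(*C_true, th4, ph4), uy(*C_true, th4, ph4), ux(*C_true, th5, ph5)])
C_fit = np.linalg.solve(A, rhs)
assert np.allclose(C_fit, C_true)
```

In the actual fit, the right-hand side is supplied by the Wannier-function dipole components rather than synthetic values.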
\newpage \begin{figure}[h] \subfigure{ \epsfig{file=./ucellsite.eps,width=3.3in,clip=true} } \\ \subfigure{ \epsfig{file=./pcellsite.eps,width=3.3in,clip=true} } \caption{Illustration of H$_2$ adsorption sites in the MOF-5 unit cell (top) and primitive cell (bottom). MOF-5 has an FCC structure with a lattice constant of 25.89~\AA~\cite{Rowselljacs2005}. The primitive cell has 106 atoms. The H atoms on the benzene rings are omitted for clarity.} \label{fig:sites} \end{figure} \newpage \begin{figure}[h] \epsfig{file=./coor.eps,width=4.5in,clip=true} \caption{Illustration of the MOF frame of reference. The origin is at the cup adsorption site, and {\sl Z} is along the \textless111\textgreater\ direction of the cubic crystal lattice shown in Fig.~\ref{fig:sites}. It is also the 3-fold rotation axis.} \end{figure} \newpage \begin{figure}[h] \epsfig{file=./o2_adsorp_half.eps,width=4.5in,clip=true} \caption{Illustration of the O2 adsorption site and the change of Wannier centers due to H$_2$ adsorption compared to the bare MOF and free H$_2$. The vector lengths are enlarged by a factor of 1200. } \label{fig:wc_o2site} \end{figure} \newpage \begin{figure}[h] \epsfig{file=./o3_adsorp_half.eps,width=4.5in,clip=true} \caption{Illustration of the O3 adsorption site and the change of Wannier centers due to H$_2$ adsorption compared to the bare MOF and free H$_2$. The vector lengths are enlarged by a factor of 1200. } \label{fig:wc_o3site} \end{figure} \newpage \begin{figure}[h] \epsfig{file=./bz_adsorp_half.eps,width=4.5in,clip=true} \caption{Illustration of the benzene adsorption site and the change of Wannier centers due to H$_2$ adsorption compared to the bare MOF and free H$_2$. The vector lengths are enlarged by a factor of 1200. } \label{fig:wc_bzsite} \end{figure} \newpage \begin{figure}[h] \epsfig{file=./H2_dipole_r.eps,width=4.0in,clip=true} \\ \epsfig{file=./MOF_dipole_r.eps,width=4.0in,clip=true} \caption{$\delta {\mathbf u} $ as a function of H$_2$ internuclear distance. u$_0$ is the dipole at the equilibrium distance.
}\label{fig:der2dif} \end{figure} \newpage \begin{table}[h] \caption{Calculated total energy vs H$_2$ internuclear distance at cup site} \label{table:cup_vibpes} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline R(a.u.) & E(a.u.) \\ \hline 0.63258 & -1147.23459646 \\ 0.70817 & -1147.34182739 \\ 0.78376 & -1147.41767267 \\ 0.85935 & -1147.47155388 \\ 0.93494 & -1147.50972612 \\ 1.01052 & -1147.53646724 \\ 1.08611 & -1147.55478553 \\ 1.16170 & -1147.56683964 \\ 1.23729 & -1147.57419775 \\ 1.31288 & -1147.57801104 \\ 1.38847 & -1147.57913028 \\ 1.48297 & -1147.57768502 \\ 1.57745 & -1147.57390705 \\ 1.67194 & -1147.56843515 \\ 1.76642 & -1147.56176098 \\ 1.86091 & -1147.55423624 \\ 1.95540 & -1147.54614531 \\ 2.04988 & -1147.53771266 \\ 2.14437 & -1147.52910151 \\ 2.23886 & -1147.52044322 \\ 2.33334 & -1147.51183591 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Calculated total energy vs H$_2$ internuclear distance at O2 site} \label{table:o2_vibpes} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline R(a.u.) & E(a.u.) \\ \hline 0.63142 & -1147.23167787 \\ 0.70701 & -1147.33946050 \\ 0.78259 & -1147.41567703 \\ 0.85818 & -1147.46981324 \\ 0.93377 & -1147.50816182 \\ 1.00936 & -1147.53502878 \\ 1.08495 & -1147.55343531 \\ 1.16054 & -1147.56555111 \\ 1.23613 & -1147.57295236 \\ 1.31172 & -1147.57679567 \\ 1.38731 & -1147.57793396 \\ 1.48182 & -1147.57650225 \\ 1.57630 & -1147.57273083 \\ 1.67079 & -1147.56726718 \\ 1.76528 & -1147.56059280 \\ 1.85976 & -1147.55307608 \\ 1.95425 & -1147.54499838 \\ 2.04873 & -1147.53657940 \\ 2.14322 & -1147.52798691 \\ 2.23771 & -1147.51934958 \\ 2.33219 & -1147.51076491 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Calculated total energy vs H$_2$ internuclear distance at O3 site} \label{table:o3_vibpes} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline R(a.u.) & E(a.u.) 
\\ \hline 0.63117 & -1147.23131641 \\ 0.70676 & -1147.33921650 \\ 0.78235 & -1147.41551509 \\ 0.85794 & -1147.46970566 \\ 0.93353 & -1147.50809162 \\ 1.00912 & -1147.53498512 \\ 1.08471 & -1147.55340776 \\ 1.16030 & -1147.56553250 \\ 1.23589 & -1147.57293612 \\ 1.31147 & -1147.57677507 \\ 1.38705 & -1147.57790325 \\ 1.48153 & -1147.57645368 \\ 1.57601 & -1147.57265607 \\ 1.67050 & -1147.56716130 \\ 1.76499 & -1147.56045531 \\ 1.85947 & -1147.55290340 \\ 1.95396 & -1147.54478993 \\ 2.04845 & -1147.53633172 \\ 2.14293 & -1147.52769561 \\ 2.23742 & -1147.51900844 \\ 2.33191 & -1147.51036692 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Calculated total energy vs H$_2$ internuclear distance at benzene site} \label{table:bz_vibpes} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline R(a.u.) & E(a.u.) \\ \hline 0.63097 & -1147.23028851 \\ 0.70656 & -1147.33826702 \\ 0.78215 & -1147.41460791 \\ 0.85774 & -1147.46882070 \\ 0.93333 & -1147.50721490 \\ 1.00891 & -1147.53410287 \\ 1.08450 & -1147.55251525 \\ 1.16009 & -1147.56462548 \\ 1.23568 & -1147.57201447 \\ 1.31127 & -1147.57584010 \\ 1.38686 & -1147.57695620 \\ 1.48135 & -1147.57549948 \\ 1.57583 & -1147.57170006 \\ 1.67032 & -1147.56620858 \\ 1.76480 & -1147.55950504 \\ 1.85929 & -1147.55195560 \\ 1.95378 & -1147.54384388 \\ 2.04826 & -1147.53538984 \\ 2.14275 & -1147.52676285 \\ 2.23724 & -1147.51809406 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{ Expansion coefficients (meV) of orientational potential energy surface in the basis of spherical harmonics. 
The equilibrium energy is set to be zero.} \label{table:ylmcoefficients} \begin{tabular*}{0.6\textwidth}{@{\extracolsep{\fill}}ccccccc} \hline \hline site & s &$d_{z^2}$ & $d_{xz}$ & $d_{yz}$ & $d_{xy}$ & $d_{x^2-y^2}$ \\ \hline cup & 17.0 & 0.06 & 8.45 & 8.41 & 8.55 &-0.006 \\ O2 & 25.0 & -3.74 & -6.36 & -6.90 &-5.76 &-4.88 \\ O3 & 7.73 & 0.27 & -2.43 & -2.39 &-2.38 & 0 \\ benzene & 1.46 & -0.52 & 0 & 0.27 & 0 & 0.80 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Rotational eigen energies (meV) at cup site} \label{table:eigenE_cup} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccccr} \hline \hline State \# & Energy & E$_i$-E$_1$ & j & m\\ \hline 1 & 4.428 & -- & 0 & 0\\ \hline 2 & 17.498 & 13.070 &\multirow{3}{*}{1} & $-$1\\ 3 & 17.555 & 13.127 & & 1\\ 4 & 23.014 & 18.586 & & 0\\ \hline 5 & 46.197 & 41.769 &\multirow{5}{*}{2} & $-$2\\ 6 & 46.197 & 41.769 & & 2\\ 7 & 50.094 & 45.666 & & $-$1\\ 8 & 50.135 & 45.707 & & 1\\ 9 & 51.781 & 47.353 & & 0\\ \hline 10&89.88 & 85.452 & \multirow{7}{*}{3} & $-$3\\ 11&89.88 & 85.452 & & 3\\ 12&92.934 & 88.506 & & $-$2\\ 13&92.934 & 88.506 & & 2\\ 14&94.856 & 90.428 & & $-$1\\ 15&94.894 & 90.466 & & 1\\ 16&95.552 & 91.124 & & 0\\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Rotational eigen energies (meV) at O2 site} \label{table:eigenE_o2} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline State \# & Energy & E$_i$-E$_1$ \\ \hline 1 & 6.76 & -- \\ \hline 2 &18.614 & 11.854 \\ 3 &22.306 & 15.546 \\ 4 &24.03 & 17.270 \\ \hline 5 &49.036 & 42.276 \\ 6 &49.39 & 42.630 \\ 7 &50.619 & 43.859 \\ 8 &53.265 & 46.505 \\ 9 &53.439 & 46.679 \\ \hline 10 &93.026 & 86.266 \\ 11 &93.146 & 86.386 \\ 12 &94.315 & 87.555 \\ 13 &95.211 & 88.451 \\ 14 &95.488 & 88.728 \\ 15 &97.788 & 91.028 \\ 16 &97.801 & 91.041 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Rotational eigen energies (meV) at O3 site} 
\label{table:eigenE_o3} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccccr} \hline \hline State \# & Energy & E$_i$-E$_1$ & j & m\\ \hline 1 & 2.148 & -- & 0 & 0 \\ \hline 2 & 15.815 & 13.667 &\multirow{3}{*}{1} & 0 \\ 3 & 17.356 & 15.208 & & $-$1 \\ 4 & 17.434 & 15.286 & & 1 \\ \hline 5 & 45.551 & 43.403 &\multirow{5}{*}{2} &0 \\ 6 & 45.869 & 43.721 & & $-$1 \\ 7 & 45.924 & 43.776 & & 1 \\ 8 & 47.026 & 44.878 & & $-$2 \\ 9 & 47.027 & 44.879 & & 2 \\ \hline 10 & 89.685 & 87.537 & \multirow{7}{*}{3} &0 \\ 11 & 89.831 & 87.683 & & $-$1 \\ 12 & 89.884 & 87.736 & & 1 \\ 13 & 90.375 & 88.227 & & $-$2 \\ 14 & 90.377 & 88.229 & & 2 \\ 15 & 91.253 & 89.105 & & $-$3 \\ 16 & 91.253 & 89.105 & & 3 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Rotational eigen energies (meV) at benzene site} \label{table:eigenE_bz} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccc} \hline \hline State \# & Energy & E$_i$-E$_1$ \\ \hline 1 & 0.409 & -- \\ \hline 2 & 14.929 & 14.520 \\ 3 & 15.051 & 14.642 \\ 4 & 15.35 & 14.941 \\ \hline 5 & 44.332 & 43.923 \\ 6 & 44.339 & 43.930 \\ 7 & 44.553 & 44.144 \\ 8 & 44.64 & 44.231 \\ 9 & 44.691 & 44.282 \\ \hline 10 & 88.408 & 87.999 \\ 11 & 88.409 & 88.000 \\ 12 & 88.595 & 88.186 \\ 13 & 88.611 & 88.202 \\ 14 & 88.693 & 88.284 \\ 15 & 88.773 & 88.364 \\ 16 & 88.787 & 88.378 \\ \hline \hline \end{tabular*} \end{table} \newpage \begin{figure}[h] \epsfig{file=./angle_dipole_z_st.eps,width=2.5in,clip=true} \caption{$\partial{u_Z}/\partial{R}$ vs $\cos^2 \theta $ for the induced dipole on H$_2$ and MOF. $u'_Z$ was calculated as the difference between the dipole moments at equilibrium bond length and a stretch of 0.05\AA. 
} \label{fig:uz_linearity} \end{figure} \newpage \begin{table}[t] \caption{Comparison of ($\delta u_X, \delta u_Y$) on MOF due to H$_2$ bond stretching of 0.05\AA~between the {\sl ab initio} and the fitted values} \label{table:compare_deltauxy_MOF} \begin{tabular*}{0.95\textwidth}{@{\extracolsep{\fill}}ccccccc} \hline \hline \multirow{2}{*}{orientation} & \multirow{2}{*}{$\theta$(degree)} & \multirow{2}{*}{$\phi$(degree)} & \multicolumn{2}{c}{\sl ab initio} & \multicolumn{2}{c}{fitted} \\ \cline{4-5} \cline{6-7} && & $\delta u_x $ & $\delta u_y $ & $\delta u_x $ & $\delta u_y $ \\ \hline 1& 99.650 & -95.723 & 5.690E-04 & 9.440E-04 & 5.690E-04 & 9.634E-04 \\ 2& 87.054 & -6.190 & -4.864E-04 & -8.400E-04 & -5.178E-04 & -8.720E-04 \\ 3& 10.097 & -112.811& -5.760E-05 & -1.008E-04 & -5.172E-05 & -9.052E-05 \\ 4& 54.741 & -98.278 & 2.774E-04 & 2.464E-04 & -- & -- \\ 5& 42.937 & -16.295 & 2.640E-04 & -6.126E-04 & -- & -5.754E-04 \\ \hline \hline \end{tabular*} \end{table} \begin{table}[h] \caption{comparison of ($\delta u_X, \delta u_Y$) on H$_2$ due to bond stretching of 0.05\AA~between the {\sl ab initio} and the fitted values} \label{table:compare_deltauxy_H2} \begin{tabular*}{0.95\textwidth}{@{\extracolsep{\fill}}ccccccc} \hline \hline \multirow{2}{*}{orientation} & \multirow{2}{*}{$\theta$(degree)} & \multirow{2}{*}{$\phi$(degree)} & \multicolumn{2}{c}{\sl ab initio} & \multicolumn{2}{c}{fitted} \\ \cline{4-5} \cline{6-7} && & $\delta u_x $ & $\delta u_y $ & $\delta u_x $ & $\delta u_y $ \\ \hline 1& 99.650 & -95.723 & -1.631E-05 & -4.322E-05 & -2.286E-05 & -6.552E-05\\ 2& 87.054 & -6.190 & 1.562E-05 & 2.220E-05 & 1.340E-05 & 4.522E-05\\ 3& 10.097 & -112.811 & 1.338E-05 & 3.052E-05 & 9.392E-06 & 2.030E-05\\ 4& 54.741 & -98.278 & -2.190E-06 & 3.340E-05 & -- & -- \\ 5& 42.937 & -16.295 & -6.386E-05 & 3.326E-05 & -- & 4.184E-05\\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table} \caption{ Theory vs experiment (Ref.~\onlinecite{FitzGerald2008}) for RV 
frequency shifts (cm$^{-1}$) of adsorbed H$_2$ relative to free H$_2$. The vibrational transition is from the ground state to the first excited state ($v$=0 $\rightarrow$ $v$=1). } \label{table:rovibfrequency} \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}ccll} \hline \hline \multirow{2}{*}{site} & &S(0) (para) & S(1) (ortho) \\ & & ($j$=0$\rightarrow j$=2) & ($j$=1$\rightarrow j$=3) \\ \hline \multirow{2}{*}{O2} &Th. &$-37$, $-34$, $-24$, $-3$, $-1$ & $-15$, $-14$, $-4$, $3$, $5$, $24$, $24$ \\ &Ex. &$-36.7$, $-27.3$, $-24.3$, $-7.4$ &$-12.9$ \\ \hline O3 &Th. &$-19$, $-16$, $-7$ & $-10$, $-9$, $-4$, $3$ \\ \hline \hline \end{tabular*} \end{table} \begin{table}[h] \caption{ Angular integral $\langle jm | \mathbf{u'} | jm \rangle $ for H$_2$ at the cup site. The energy of the $|00\rangle$ state is set as the reference. The unit of $\mathbf{u'}$ is 10$^{-3}$e. } \label{table:angular_integral} \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}cccccc} \hline \hline jm & u$'_X$ & u$'_Y$ & u$'_Z$ & u$'^2(\times 10^{-6}e^2)$ & E(meV) \\ \hline 00 & -0.009 & 0.02 & -2.23 & 5.0 & 0 \\ 1$\bar{1}$ & -2.38 & 7.69 & -1.41 & 66.7 &13.1\\ 11 & 2.36 & -7.67 & -1.40 & 66.4 &13.2\\ 10 & 0.003 & 0.01 & -4.67 & 21.8 &18.6\\ \hline \hline \end{tabular*} \end{table} \newpage \begin{table}[h] \caption{Theoretical predictions and experimental data \cite{FitzGerald2008} for $v=0 \rightarrow v=1$ transitions of H$_2$ at the cup site. The frequency shift $\Delta$v (cm$^{-1}$) is relative to the corresponding free H$_2$ value, and the angular integral is given by $I_A^2=|\langle j_fm_f | \mathbf{u'} | j_im_i \rangle|^2$ (10$^{-6}$e$^2$). The rotational energy (meV) $E^{rot}_i$ of the $|00\rangle$ state is set as a reference. The theoretical intensity is calculated from $I_A^2$ weighted by the 30~K Boltzmann factor and the 1:3 spin ratio between para and ortho H$_2$. 
The strongest line is normalized to 100.} \label{table:cup_v_intensitys} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}ccccccccc} \hline \hline & \multirow{2}{*}{m$_i$} & \multirow{2}{*}{m$_f$} & \multicolumn{4}{c}{Theory} & \multicolumn{2}{c}{Experiment} \\ \cline{4-7} \cline{8-9} & & &E$_i^{rot}$ &$\Delta$v & $I^2_A$ &Intensity &$\Delta$v & Intensity \\ \hline Q(0)($j_i$=0$\rightarrow j_f$=0) & 0 & 0 &0 & -23 & 5 & 2 & & absent \\ \hline \multirow{2}{*}{Q(1)($j_i$=1$\rightarrow j_f$=1)} & $\pm1$ & $\pm1$ &13.1 &\multirow{2}{*}{-23} &66 & \multirow{2}{*}{97} &\multirow{2}{*}{$-$27.5} & \multirow{2}{*}{strong} \\ & 0 & 0 &18.6& &22 & & \\ \cline{2-9} Q*(1)($j_i$=1$\rightarrow j_f$=1)& $\pm1$ & 0& 13.1 &22 &6 &9 &39 &weak \\ \hline \multirow{3}{*}{S(0) ($j_i$=0$\rightarrow j_f$=2)} &\multirow{3}{*}{0} & $\pm$2 &\multirow{3}{*}{13.1} &$-$44 & 115 & 58 &$-$49.3& strong \\ & & $\pm$1 & &$-12$ & 10 & 5 &$-$6.8 & weak \\ & & 0 & &$-$1 & 5 & 2 & & absent \\ \hline \multirow{8}{*}{S(1)($j_i$=1$\rightarrow j_f$=3)} &\multirow{4}{*}{$\pm1$} & $\pm$3 &\multirow{4}{*}{13.1} & $-$34 & 69& 100 &$-$36.8 & strong \\ & & $\pm$2 & & $-$9 & 4 & 6 &$-$0.8 & weak \\ & & $\pm$1 & & 6 & 6 & 9 & 21.6 & weak \\ & & 0 & & 11 & 2 & 3 & & absent \\ \cline{2-9} &\multirow{4}{*}{0} & $\pm$3 &\multirow{4}{*}{18.6} &$-$78& 0 &0 & &absent \\ & & $\pm$2 & &$-$53& 49&3&$-$61 &weak \\ & & $\pm$1 & &$-$50& 8 &$\sim$0& &absent \\ & & 0 & &$-$33& 6 &$\sim$0& &absent \\ \hline \hline \end{tabular*} \end{table} \newpage
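The intensity rule quoted in the table caption can be sketched as follows. This is illustrative, not the authors' code: each line's $I_A^2$ is weighted by the 30~K Boltzmann factor of its initial rotational level and by the para:ortho spin weight, and the strongest line is normalized to 100; $m$-degeneracy factors are omitted for brevity, and the $(E_i^{rot}, I_A^2)$ values below are the S(1) rows of the table.

```python
# Sketch (not the authors' code) of the intensity rule from the caption:
# weight I_A^2 by the 30 K Boltzmann factor of the initial rotational
# level and by the para:ortho spin ratio of 1:3, then normalize the
# strongest line to 100.  m-degeneracy factors are omitted for brevity.
import math

K_B = 0.08617  # Boltzmann constant in meV/K
T = 30.0       # temperature in K

def intensities(lines, temperature=T):
    kT = K_B * temperature
    # lines: (label, E_i^rot in meV, I_A^2, spin weight) tuples
    raw = {lab: w * ia2 * math.exp(-e_i / kT) for lab, e_i, ia2, w in lines}
    top = max(raw.values())
    return {lab: 100.0 * v / top for lab, v in raw.items()}

# S(1) (ortho, spin weight 3) lines from the table above
s1_lines = [
    ("pm1->pm3", 13.1, 69.0, 3),
    ("pm1->pm2", 13.1, 4.0, 3),
    ("0->pm2", 18.6, 49.0, 3),
]
s1_intensities = intensities(s1_lines)
```

Because of the normalization, only relative Boltzmann factors matter; lines starting from the higher-lying $|1,0\rangle$ level (18.6~meV) are suppressed relative to those from $|1,\pm1\rangle$ (13.1~meV) by roughly $e^{-5.5/k_BT}\approx 0.12$ at 30~K.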
\section{introduction} Recently the spin-orbit (SO) interaction in semiconductor mesoscopic systems has attracted a lot of interest\cite{1}. Due to the coupling of the electron orbital motion with the spin degree of freedom, it is possible to manipulate and control the electron spin in an SO-coupled system by applying an external electric field or a gate voltage, and it is believed that the SO effect will play an important role in future spintronic applications. Indeed, various interesting effects resulting from SO coupling have already been predicted, such as the Datta-Das spin field-effect transistor based on the Rashba SO interaction\cite{2} and the intrinsic spin Hall effect\cite{3}. In this paper we focus our attention on the persistent charge and spin currents in a mesoscopic semiconductor ring with SO interaction. The existence of a persistent charge current in a mesoscopic ring threaded by a magnetic flux was predicted decades ago\cite{4}; it has been extensively studied in theory\cite{5,6,7,8,9} and observed in various experiments\cite{10,11,12}. The persistent charge current may be understood as follows: the magnetic flux enclosed by the ring introduces an asymmetry between electrons with clockwise and anticlockwise momenta, and thus leads to a thermodynamic state carrying a dissipationless charge current. For a mesoscopic ring in a textured, inhomogeneous magnetic field, D. Loss et al.\cite{13} predicted that besides the charge current there is also a persistent spin current. The origin of the persistent spin current can be related to the Berry phase acquired as the electron spin precesses during its orbital motion. The persistent spin current has also been studied in semiconductor systems with a Rashba SO coupling term\cite{14,15,16}. Recently it was shown that a semiconductor ring with SO coupling can sustain a persistent spin current even in the absence of an external magnetic flux\cite{17}. 
For the system of a mesoscopic ring with a magnetic impurity, the persistent charge current has been investigated in the context of a mesoscopic ring coupled with a quantum dot\cite{18,19,20,21,22,23,24}, where the quantum dot acts as an impurity level and introduces charge or spin fluctuations to the electrons in the ring. The Kondo effect, arising from a localized electron spin interacting with a band of electrons, is essential for the charge transport in the ring. To our knowledge, however, the SO effect has not been considered in these systems. One may expect that the interplay between the Kondo effect and the SO coupling in the ring gives rise to new features in the persistent currents. In this paper we address this problem and investigate the SO effect on the persistent charge and spin currents in a ring with an Anderson impurity. The Anderson impurity acts as a magnetic impurity when the impurity level is singly occupied, and as a barrier potential in the empty-occupancy regime. The outline of this paper is as follows. In Section II we introduce the model Hamiltonian of the system and the method of calculation, the finite-U slave-boson approach\cite{25,26,27,28}. In Section III the results for the persistent charge and spin currents are presented and discussed. In Section IV we give a summary. \section{Mesoscopic ring with an Anderson impurity} The electrons in a closed ring with SO coupling of the Rashba type are described by the following Hamiltonian in polar coordinates\cite{14,29} \begin{equation} H_{ring}=\Delta(-i{\partial\over{\partial\varphi}}+{\Phi\over\Phi_0})^2 +{\alpha_R\over 2}[(\sigma_x\cos\varphi+\sigma_y\sin\varphi)(-i{\partial\over{\partial\varphi}}+{\Phi\over\Phi_0})+h.c.]\;, \end{equation} where $\Delta=\hbar^2/(2m_ea^2)$ and $a$ is the radius of the ring. $\alpha_R$ characterizes the strength of the Rashba SO interaction. 
$\Phi$ is the external magnetic flux enclosed by the ring, and $\Phi_0=2\pi\hbar c/e$ is the flux quantum. We can write the above Hamiltonian in terms of creation and annihilation operators of electrons in momentum space, \begin{equation} H_{ring}=\sum_{m,\sigma}\epsilon_m c^\dagger_{m\sigma}c_{m\sigma}+1/2\sum_m[t_m(c^\dagger_{m+1\downarrow}c_{m\uparrow} +c^\dagger_{m-1\uparrow}c_{m\downarrow})+h.c.]\;, \end{equation} where $\epsilon_m=\Delta(m+\phi)^2$ and $t_m=\alpha_R (m+\phi)$ ($m=0,\pm 1,\cdots,\pm M$), with $\phi=\Phi/\Phi_0$. One can see that the SO interaction couples the $m$-mode electrons to the $m+1$ and $m-1$ modes through a spin-flip process. We consider the system with a side-coupled impurity, described by the Anderson impurity model, \begin{equation} H_d=\sum_\sigma\epsilon_d d^\dagger_\sigma d_\sigma+Un_{d\uparrow}n_{d\downarrow}\;. \end{equation} The tunneling between the impurity level and the ring is given by \begin{equation} H_{d-ring}=t_D\sum_{m\sigma}(d^\dagger_\sigma c_{m\sigma}+h.c)\;. \end{equation} The total Hamiltonian of the system is then \begin{equation} H=H_{ring}+H_d+H_{d-ring}\;. \end{equation} In order to treat the strong on-site Coulomb interaction on the impurity level, we adopt the finite-U slave-boson approach\cite{25,26}. A set of auxiliary bosons $e, p_{\sigma}, d$ is introduced for the impurity level; these act as projection operators onto the empty, singly occupied (with spin up or spin down), and doubly occupied electron states on the impurity, respectively. The fermion operators $d_{\sigma}$ are then replaced by $d_{\sigma}\rightarrow f_{\sigma}z_{\sigma}$, with $z_{\sigma}=e^\dagger p_{\sigma}+p^\dagger_{\bar\sigma}d$. In order to eliminate unphysical states, the following constraints are imposed: $\sum_{\sigma} p^\dagger_{\sigma}p_{\sigma}+e^\dagger e+d^\dagger d=1$ and $f^\dagger_{\sigma}f_{\sigma}=p^\dagger_{\sigma}p_{\sigma}+d^\dagger d$ ($\sigma=\uparrow, \downarrow$). 
Therefore, the Hamiltonian can be rewritten as the following effective Hamiltonian in terms of the auxiliary bosons $e, p_{\sigma}, d$ and the pseudo-fermion operators $f_{\sigma}$: \begin{eqnarray} H_{eff}&=&\sum_{m,\sigma}\epsilon_m c^\dagger_{m\sigma}c_{m\sigma}+1/2\sum_m[t_m(c^\dagger_{m+1\downarrow}c_{m\uparrow} +c^\dagger_{m-1\uparrow}c_{m\downarrow})+h.c.] \nonumber\\ &+&\sum_{\sigma}\epsilon_d f^\dagger_{\sigma}f_{\sigma}+ Ud^\dagger d \nonumber\\ & +&\sum_{m,\sigma } (t_Dz^\dagger_{\sigma} f^\dagger_{\sigma}c_{m\sigma}+h.c.) + \lambda^{(1)}(\sum_{\sigma} p^\dagger_{\sigma}p_{\sigma}+e^\dagger e+d^\dagger d-1) \nonumber\\ &+&\sum_{\sigma}\lambda^{(2)}_{\sigma}(f^\dagger_{\sigma}f_{\sigma}-p^\dagger_{\sigma}p_{\sigma}-d^\dagger d )\;, \end{eqnarray} where the constraints are incorporated by the Lagrange multipliers $\lambda^{(1)}$ and $\lambda^{(2)}_{\sigma}$. The first constraint can be interpreted as a completeness relation of the Hilbert space on the impurity level, and the second one equates the two ways of counting the fermion occupancy for a given spin. In the framework of the finite-U slave-boson mean-field theory\cite{25,26}, the slave-boson operators $e, p_{\sigma}, d$ and the parameter $z_\sigma$ are replaced by real $c$-numbers. Thus the effective Hamiltonian is given as \begin{eqnarray} H^{MF}_{eff}&=&\sum_{m,\sigma}\epsilon_m c^\dagger_{m\sigma}c_{m\sigma}+1/2\sum_m[t_m(c^\dagger_{m+1\downarrow}c_{m\uparrow} +c^\dagger_{m-1\uparrow}c_{m\downarrow})+h.c.] \nonumber\\ &+&\sum_{\sigma}{\tilde\epsilon_{d\sigma}}f^\dagger_{\sigma}f_{\sigma} +\sum_{m\sigma } ({\tilde t_{D\sigma}} f^\dagger_{\sigma}c_{m\sigma}+h.c.)+E_g\;, \end{eqnarray} where ${\tilde t_{D\sigma}}=t_Dz_\sigma$ represents the renormalized tunnel coupling between the impurity and the mesoscopic ring. $z_\sigma$ can be regarded as the wave-function renormalization factor. 
${\tilde\epsilon_{d\sigma}}=\epsilon_d+\lambda^{(2)}_{\sigma}$ is the renormalized impurity level, and $E_g= \lambda^{(1)}( \sum_{\sigma} p_{\sigma}^2+e^2+d^2-1)-\sum_{\sigma}\lambda^{(2)}_\sigma(p_{\sigma}^2+d^2)+Ud^2$ is an energy constant. In this mean-field approximation the Hamiltonian is essentially that of a non-interacting system; hence the single-particle energy levels can be calculated by numerical diagonalization of the Hamiltonian matrix. The ground state of the system $|\psi_0>$ is then constructed by filling the lowest single-particle energy levels consecutively. By minimizing the ground-state energy with respect to the variational parameters, a set of self-consistent equations is obtained as in Refs.~[27,28], which determines the variational parameters in the effective Hamiltonian. \section{the persistent charge current and spin current } In this section we present the results of our calculation of the persistent charge and spin currents circulating in the mesoscopic ring. Since there is still some controversy in the literature over the definition of the spin-current operator in ring systems with an SO coupling term\cite{30}, we give the formulas for both the charge and spin currents used in this paper explicitly. One readily obtains the $\varphi$ component of the electron velocity operator in this SO-coupled ring: \begin{equation} v^\varphi={a\over\hbar}[2\Delta(-i{\partial\over{\partial\varphi}}+\phi) +\alpha_R(\sigma_x\cos\varphi+\sigma_y\sin\varphi)]\;. \end{equation} Thereby the charge current operator is defined as $\hat I=-e v^\varphi$, and in terms of creation and annihilation operators it can be written as \begin{equation} \hat I=-{e a\over \hbar}[2\Delta\sum_{m,\sigma}\ c^\dagger_{m\sigma}c_{m\sigma}(m+\phi)+\alpha_R\sum_m(c^\dagger_{m+1\downarrow}c_{m\uparrow} +c^\dagger_{m-1\uparrow}c_{m\downarrow})]\;. 
\end{equation} At zero temperature, the persistent charge current is given by the expectation value of the charge current operator in the ground state, $I={1\over{2\pi a}}<\psi_0|\hat I|\psi_0>$, and it can also be calculated from the expression \begin{equation} I=-c{\partial E_{gs}\over{\partial\Phi}}=-{e\over h}<\psi_0|{\partial H\over{\partial\phi}}|\psi_0>\;, \end{equation} where $E_{gs}$ is the ground-state energy. In Fig.~1 the persistent charge current vs. the enclosed magnetic flux is plotted for several values of the SO coupling strength. Here we take the model parameters $\Delta=0.01$, $t_D=0.3$, $U=2.0$, and a total number of electrons $N$ around $100$. In this case the Fermi energy of the system is $E_F=6.25$ and the level spacing around the Fermi surface is $\delta=0.5$. We consider the case in which the energy level of the Anderson impurity is well below the Fermi energy (with $\epsilon_d-E_F=-1.0$), so the Anderson impurity is in the Kondo regime. One can see in Fig.~1 that the characteristic features of the persistent charge current depend on the parity of the total number of electrons $N$, and two cases, $N$ odd and $N$ even, can be distinguished. This is attributed to the different occupation patterns of the highest occupied single-particle energy level of the mean-field effective Hamiltonian. The persistent charge current of a system with $N+2$ electrons differs from that with $N$ electrons by a $\pi$ phase shift, $I^{N+2}(\phi)=I^{N}(\phi+\pi)$. In case (I), where the electron number is odd ($N=4n-1$ and $N=4n+1$), one electron is almost localized on the impurity level, forming a singlet with the electron cloud in the conducting ring. This phenomenon leads to the well-known Kondo effect. Fig.~1 shows that the Kondo effect decreases the magnitude of the persistent charge current and also makes its shape nearly sinusoidal. 
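The flux-derivative formula for $I$ above can be illustrated on the bare SO ring (impurity decoupled), for which the Rashba term couples only $(m,\uparrow)$ with $(m+1,\downarrow)$, so the momentum-space Hamiltonian splits into 2$\times$2 blocks with a closed-form spectrum. The following sketch is illustrative, not the code used for the figures: $\Delta=0.01$ and $N=100$ follow the text, while the value of $\alpha_R$ and the mode cutoff $M$ are arbitrary choices, and the current is reported in units of $e/h$.

```python
# Illustrative sketch (not the code used for the figures): persistent
# charge current of the bare SO ring from I = -(e/h) dE_gs/dphi,
# evaluated by a central finite difference.  For the bare ring the
# Rashba term only couples (m,up) with (m+1,down), so the Hamiltonian
# splits into 2x2 blocks that diagonalize in closed form.
import math

DELTA, ALPHA, M, N = 0.01, 0.005, 40, 100  # ALPHA, M are illustrative

def levels(phi):
    eps = lambda m: DELTA * (m + phi) ** 2
    out = [eps(-M), eps(M)]              # unpaired edge modes
    for m in range(-M, M):               # block {(m,up),(m+1,down)}
        avg = 0.5 * (eps(m) + eps(m + 1))
        dif = 0.5 * (eps(m) - eps(m + 1))
        tau = ALPHA * (m + phi + 0.5)    # effective (t_m + t_{m+1})/2
        r = math.hypot(dif, tau)
        out += [avg - r, avg + r]
    return sorted(out)

def ground_energy(phi):
    return sum(levels(phi)[:N])          # fill the N lowest levels

def current(phi, h=1e-4):
    # persistent charge current in units of e/h: I = -dE_gs/dphi
    return -(ground_energy(phi + h) - ground_energy(phi - h)) / (2 * h)
```

The sketch reproduces the generic symmetries of the current: $I(\phi)$ is odd in $\phi$, vanishes at $\phi=0$, and is periodic with period one flux quantum.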
In the presence of finite SO coupling ($\alpha_R<\Delta$), the spin-up and spin-down electrons are coupled, which splits the twofold-degenerate energy levels of the effective Hamiltonian. It turns out that the Kondo effect is suppressed, and abrupt jumps of the persistent charge current, similar to those of the ideal ring, appear. It is explained in Ref.~[14] that the jumps of the persistent charge current for an odd number of electrons are due to a crossing of levels with opposite spin. In case (II), where $N$ is even ($N=4n$ and $N=4n+2$), the Kondo effect manifests itself in that the magnitude of the persistent charge current is significantly suppressed compared with the ideal ring, and the jumps of the persistent charge current due to the level crossings are rounded. In the presence of finite SO coupling, the persistent charge current decreases with increasing SO coupling strength for $\alpha_R<\Delta$. Fig.~2 displays the persistent charge current as a function of the SO coupling strength $\alpha_R$ at different values of the enclosed magnetic flux. The persistent charge current oscillates with increasing $\alpha_R$ for systems with both even and odd numbers of electrons. Therefore, by tuning the SO coupling strength, the magnetic response of this system can change from paramagnetic to diamagnetic and vice versa. This indicates that SO coupling can play an important role in electron transport in this mesoscopic ring. The curve of the persistent charge current for an odd number of electrons shows discontinuities in its derivative, which can be attributed to level crossings in the energy spectrum as $\alpha_R$ is varied. It is also noted that the positions of these discontinuities for odd $N$ correspond to the peaks or valleys in the even-$N$ case. Since the electron has the spin degree of freedom as well as charge, the electron motion in the ring may give rise to a spin current besides the charge current. 
Now we turn to the persistent spin current in the ground state. The spin-current operator is defined by $\hat J_v=(v^\varphi\sigma_v+\sigma_v v^\varphi)/2$, which can be written explicitly as \begin{equation} \hat J_v={a\over\hbar}\{2\Delta(-i{\partial\over{\partial\varphi}}+\phi)\sigma_v+{\alpha_R\over 2} [ (\sigma_x\cos\varphi+\sigma_y\sin\varphi)\sigma_v+h.c.]\}\;. \end{equation} The three components of the spin-current operator in terms of creation and annihilation operators are then \begin{equation} \hat J_z={a\over\hbar}[2\Delta\sum_{m}\ (c^\dagger_{m\uparrow}c_{m\uparrow}-c^\dagger_{m\downarrow}c_{m\downarrow})(m+\phi)]\;, \end{equation} \begin{equation} \hat J_x={a\over\hbar}[2\Delta\sum_{m}(c^\dagger_{m\uparrow}c_{m\downarrow} +c^\dagger_{m\downarrow}c_{m\uparrow})(m+\phi) +{\alpha_R\over 2}\sum_{m,\sigma}(c^\dagger_{m+1\sigma}+c^\dagger_{m-1\sigma})c_{m\sigma}]\;, \end{equation} \begin{equation} \hat J_y={a\over\hbar}[-2i\Delta\sum_{m}( c^\dagger_{m\uparrow}c_{m\downarrow}- c^\dagger_{m\downarrow}c_{m\uparrow})(m+\phi) -i{\alpha_R\over 2}\sum_{m,\sigma}(c^\dagger_{m+1\sigma}-c^\dagger_{m-1\sigma})c_{m\sigma}]\;. \end{equation} The expectation value of the spin current is $J_v={1\over {2\pi a}}<\psi_0|\hat J_v|\psi_0>$. In our calculation we find that only the $z$ component of the spin current is nonzero in the ground state. Fig.~3 shows the persistent spin current $J_z$ vs. the magnetic flux at different SO coupling strengths. The persistent spin current is a periodic function of the magnetic flux $\phi$, with the even-parity symmetry $J_z(-\phi)=J_z(\phi)$ and the additional symmetry $J_z(\phi)=J_z(\pi-\phi)$. It is noted that the dependence of the persistent spin current on the magnetic flux is quite different from that of the persistent charge current in Fig.~1. 
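The expectation value of $\hat J_z$ above can be evaluated explicitly for the bare ring (impurity decoupled), since the Rashba term couples only $(m,\uparrow)$ with $(m+1,\downarrow)$ and each 2$\times$2 block diagonalizes in closed form. The sketch below is illustrative, not the code behind the figures: it drops the overall $1/(2\pi\hbar)$-type prefactor, $\Delta=0.01$ follows the text, and $M$, $N$ and the $\alpha_R$ values used in the test are arbitrary choices ($N=98$ gives a closed shell at $\phi=0$).

```python
# Illustrative sketch (not the code used for the figures): ground-state
# expectation of J_z for the bare SO ring, up to an overall prefactor.
# Each Rashba block {(m,up),(m+1,down)} is diagonalized in closed form;
# u2 and v2 are the eigenstate weights on the up and down modes.
import math

DELTA, M, N = 0.01, 40, 98   # M, N are illustrative choices

def occupied_states(phi, alpha):
    eps = lambda m: DELTA * (m + phi) ** 2
    states = [(eps(-M), -M - 1, 0.0, 1.0),   # unpaired (-M,down), stored
              (eps(M), M, 1.0, 0.0)]         # with m=-M-1 so m+1 = -M
    for m in range(-M, M):
        a, b = eps(m), eps(m + 1)
        tau = alpha * (m + phi + 0.5)
        for sgn in (-1.0, 1.0):
            lam = 0.5 * (a + b) + sgn * math.hypot(0.5 * (a - b), tau)
            if abs(tau) < 1e-15:             # decoupled block
                u2 = 1.0 if (sgn < 0) == (a <= b) else 0.0
            else:
                r = (lam - a) / tau          # component ratio v/u
                u2 = 1.0 / (1.0 + r * r)
            states.append((lam, m, u2, 1.0 - u2))
    states.sort()
    return states[:N]                        # fill the N lowest states

def spin_current_z(phi, alpha):
    # <J_z> up to a prefactor: 2*Delta*sum_occ [u2*(m+phi) - v2*(m+1+phi)]
    return sum(2 * DELTA * (u2 * (m + phi) - v2 * (m + 1 + phi))
               for _, m, u2, v2 in occupied_states(phi, alpha))
```

The sketch reproduces two features stated in the text: $J_z(-\phi)=J_z(\phi)$, and $J_z$ vanishes identically when $\alpha_R=0$ but is nonzero at zero flux once the SO coupling is switched on.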
In the presence of finite SO coupling, the persistent spin current is nonzero at zero magnetic flux for systems with both odd and even $N$, indicating that a persistent spin current can be induced solely by the SO interaction, without an accompanying charge current. This phenomenon was also found in Ref.~[17], where an SO-coupled/normal hybrid ring was considered. In Fig.~4 the persistent spin current $J_z$ is plotted as a function of the SO coupling strength. In the absence of SO coupling ($\alpha_R=0$), the persistent spin current is exactly zero for both even and odd numbers of electrons. In the presence of SO coupling, the persistent spin current becomes nonzero and oscillates with increasing $\alpha_R$. It can change from positive to negative values and vice versa as the SO coupling strength is tuned. The sign of the persistent spin current also depends on the enclosed magnetic flux. For the system with odd $N$, there are abrupt jumps in the persistent spin current at certain values of $\alpha_R$; the reason for these jumps is the same as for the charge current, namely level crossings in the energy spectrum. It is noted that the positions of the jumps coincide with those of the persistent charge current. This characteristic feature of the persistent currents might provide a useful way to detect SO coupling effects in semiconductor ring systems. \section{conclusions} In summary, we have investigated the Rashba SO coupling effect on the persistent charge and spin currents in a mesoscopic ring with an Anderson impurity. The Anderson impurity leads to the Kondo effect and decreases the amplitudes of the persistent charge and spin currents in the ring. In a semiconducting ring with SO interaction, the persistent charge current changes significantly as the SO coupling strength is tuned, e.g., from a paramagnetic to a diamagnetic current. 
Besides the persistent charge current, there also exists a persistent spin current, which likewise oscillates with the SO coupling strength. It is shown that at zero magnetic flux a persistent spin current can exist even without a charge current. Since a persistent spin current can generate an electric field\cite{31}, one might expect that experiments on semiconductor rings with Rashba SO coupling can detect the persistent spin current. \begin{acknowledgments} This project is supported by the National Natural Science Foundation of China, the Shanghai Pujiang Program, and the Program for New Century Excellent Talents in University (NCET). \end{acknowledgments}
\section{Introduction} {\color{black} \subsection{Motivation} {\color{black} In} a series of papers \cite{Chr.1}-\cite{Chr.4}, Christodoulou studied singularity formation for \textcolor{black}{the} Einstein-scalar field system: \begin{equation}\label{ES} \begin{split} &\mbox{Ric}_{\mu\nu}-\f12Rg_{\mu\nu}={\color{black}8\pi } T_{\mu\nu},\\ &T_{\mu\nu}=\partial_{\mu}\phi \partial_{\nu}\phi-\f12g_{\mu\nu}\partial^{\sigma}\phi \partial_{\sigma}\phi. \end{split} \end{equation} \noindent \textcolor{black}{Through these papers}, Christodoulou proved {\color{black}in four steps} that \textcolor{black}{under spherical symmetry, the} \textit{weak cosmic censorship conjecture} holds. {\color{black} More precisely, Christodoulou proved that for (\ref{ES}) with} large initial data, {\color{black} a so-called naked singularity} may form{\color{black}; however,} for generic initial data, these singularities {\color{black} are} covered by a black hole region and are invisible for observers far away. {\color{black}These are celebrated results.} The Penrose diagram of a spherically symmetric gravitational collapse spacetime {\color{black}for (\ref{ES}) with generic initial data} is as {\color{black}{follows}}: \begin{center} \begin{minipage}[!t]{0.4\textwidth} \begin{tikzpicture}[scale=0.75] \draw [white](-1, -2.5)-- node[midway, sloped, above,black]{$\Gamma$}(0, -2.5); \draw [white](0, 0)-- node[midway, sloped, above,black]{$\mathcal{S}$}(4, 0); \draw [white](0, -0.75)-- node[midway, sloped, above,black]{$\mathcal{T}$}(4.5, -0.75); \draw [white](-1, 0)-- node[midway, sloped, above,black]{$\mathcal{S}_0$}(1, 0); \draw [white](5.5, 0.2)-- node[midway, sloped, above,black]{$i^+$}(7, 0.2); \draw [white](10, -4.8)-- node[midway, sloped, above,black]{$i^0$}(12.5, -4.8); \draw (0,0) to [out=-5, in=195] (5.5, 0.5); \draw (0,0) to [out=-40, in=215] (5.5, 0.5); \draw [white](0, -3)-- node[midway, sloped, above,black]{$\mathcal{H}$}(7, -3); \draw [white](7, -2)-- node[midway, sloped, 
above,black]{$\mathcal{I^+}$}(10, -2); \draw [white](0, -0.65)-- node[midway, sloped, below,black]{$\mathcal{A}$}(2.8, -0.65); \draw [thick] (0, -5)--(0,0); \draw [thick] (5.5, 0.5)--(0,-5); \draw[fill] (0,0) circle [radius=0.08]; \draw[fill] (5.5, 0.5) circle [radius=0.08]; \draw[fill] (11, -5) circle [radius=0.08]; \draw [thick] (5.5, 0.5)--(11,-5); \draw [thick] (0,-5) to [out=5, in=165] (11, -5); \end{tikzpicture} \end{minipage} \begin{minipage}[!t]{0.6\textwidth} \end{minipage} \hspace{0.05\textwidth} \end{center} \noindent Here{\color{black}{,}} $\Gamma$ is the center of symmetry$-$invariant under $SO(3)${\color{black}, and} $i^+, \mathcal{I}^+, i_0$ are timelike infinity, future null infinity{\color{black}{,}} and spacelike infinity {\color{black}respectively}. The boundary of the causal past of $i^+$ is $\mathcal{H}$, {\color{black}which} is called {\color{black}the} event horizon. $\mathcal{T}$ is the trapped region, where {\color{black}not even light can} escape to $\mathcal{I^+}$. $\mathcal{A}$ is called {\color{black}the} apparent horizon and it is the lower boundary of $\mathcal{T}$. $\mathcal{S}_0$ is the first singular point along $\Gamma$ and $\mathcal{S}$ is the singular boundary of $\mathcal{T}$. A crucial step of Christodoulou's {\color{black}proof of the weak cosmic censorship conjecture} is \cite{Chr.1}. There, Christodoulou established a sharp trapped surface\footnote{A trapped surface is a two-dimensional sphere, with both incoming and outgoing null expansions negative.} formation criterion for (\ref{ES}). Christodoulou's original proof in \cite{Chr.1} was based on a geometric Bondi coordinate {\color{black}system} with a null frame. {\color{black}However, at present, the double null foliation is a more popular choice of coordinate system.} {\color{black}There have been many recent works published in general relativity using a double null foliation}. 
{\color{black}In order to generalize Christodoulou's results} in \cite{Chr.1}-\cite{Chr.4} to other matter {\color{black}models}, here we {\color{black}adopt the} double null foliation. In our paper, we {\color{black}will review Christodoulou's result in the setting of a double null foliation. Then,} we {\color{black}will} generalize his result to {\color{black}the} Einstein-Maxwell-{\color{black}charged }scalar field system. {\color{black} Within {\color{black}the study of spherically symmetric systems}, there are interesting results {\color{black}on the formation of trapped surfaces and singularities} for other matter {\color{black}models}, e.g{\color{black}.,} Einstein-Vlasov studied by Andr\'easson \cite{And}, Andr\'easson-Rein \cite{AR}, Moschidis \cite{Mo}, Einstein-Euler studied by Burtscher and LeFloch \cite{BL}, Einstein-scalar field studied by Li-Liu \cite{LL}, An-Zhang \cite{AZ}, An-Gajic \cite{AG}, Einstein-null dust studied by Moschidis \cite{Mo2}, and Einstein-scalar field with positive cosmological constant studied by Costa \cite{JC}. For {\color{black}the} Einstein-Maxwell-(real) scalar field system, {\color{black}we refer interested readers} to \cite{Da1, Da2} by Dafermos and \cite{LO} by Luk and Oh on the recent development of proving strong cosmic censorship.
{\color{black}And for the Einstein-Maxwell-charged scalar field system}, we refer to \cite{VDM}-\cite{VDM3} by Van de Moortel.} \subsection{The Main Result} {\color{black}We consider the characteristic initial value problem for (\ref{ES}) in the {\color{black}rectangular} region.} \begin{minipage}[!t]{0.4\textwidth} \begin{tikzpicture}[scale=0.9] \node[] at (1.25,3.25) {\LARGE $\mathcal{R}$}; \begin{scope}[thick] \draw[->] (0,0) node[anchor=north]{$(u_0,0)$} -- (0,5)node[anchor = east]{$\Gamma$}; \draw[->] (0,0) --node[anchor = north]{$v$} (3,3); \draw[->] (1.75,1.75) node[anchor=west]{$(u_0,v_1)$} --node[anchor=north]{$u$} (0,3.5)node[anchor = east]{$(0,v_1)$}; \draw[->] (2.75,2.75) node[anchor=west]{$(u_0,v_2)$} -- (1,4.5); \end{scope} \begin{scope}[gray] \draw (2,2) -- (0.25,3.75); \draw (2.25,2.25) -- (0.5,4); \draw (2.5,2.5) -- (0.75,4.25); \draw(1.5,2) -- (2.5,3); \draw(1.25,2.25) -- (2.25,3.25); \draw(1,2.5) -- (2,3.5); \draw(0.75,2.75) -- (1.75,3.75); \draw(0.5,3) -- (1.5,4); \draw(0.25,3.25) -- (1.25,4.25); \draw(0,3.5) -- (1,4.5); \end{scope} \end{tikzpicture} \end{minipage} \begin{minipage}[!t]{0.58\textwidth} We {\color{black}employ} the double-null foliation {\color{black}with} $u$ and $v$ {\color{black}as} optical functions{\color{black}; that is,} {\color{black} $g^{\alpha\beta}\partial_{\alpha}u\partial_{\beta}u=0$ and $g^{\alpha\beta}\partial_{\alpha}v\partial_{\beta}v=0$. Thus, we have} $u=\mbox{{\color{black}constant}}$ {\color{black}as} the outgoing null hypersurface; $v=\mbox{{\color{black}constant}}$ {\color{black}as} the incoming null hypersurface. \\ \noindent {\color{black}Due to} spherical symmetry, {\color{black}we have a central axis $\Gamma$.} We prescribe initial data along {\color{black}the }outgoing cone $u=u_0$ and {\color{black}the} incoming cone $v=v_1$. 
\end{minipage} \hspace{0.05\textwidth} \noindent For the metric of the $3+1$-dimensional spacetime, we {\color{black}impose spherical symmetry and write it with double-null coordinates:} \begin{equation}\label{metric0} g_{\mu\nu}dx^{\mu}dx^{\nu}=-\Omega^2(u,v)dudv+r^2(u,v)\big(d\theta^2+\sin^2\theta d\phi^2\big). \end{equation} \noindent In {\color{black}the} above diagram every point $(u,v)$ {\color{black}represents} a $2$-sphere $S_{u,v}$. {\color{black}The Hawking mass of such a 2-sphere} is defined as \begin{equation}\label{Hawking mass} m(u,v)=\frac{r}{2}(1+4\Omega^{-2}\partial_u r \partial_v r). \end{equation} \noindent Along $u=u_0$, we also define {\color{black}{the}} initial mass input $$\eta_0:=\frac{m(u_0, v_2)-m(u_0, v_1)}{r(u_0, v_2)}, \mbox{ and denote } \delta_0:=\frac{r(u_0, v_2)-r(u_0, v_1)}{r(u_0, v_2)}.$$ {\color{black}Finally, let $u_*$ denote the value of $u$ such that ${\color{black}r(u_*, v_2)}=\frac{3\delta_0}{1+\delta_0}\cdot r(u_0, v_2)$.} \begin{theorem}{\textcolor{black}{(Christodoulou \cite{Chr.1} and reproved in the appendix)}}\label{thm1.1}\\ {\color{black}Define the function} \begin{align*} E(x):=\frac{x}{(1+x)^2}\bigg[\ln\bigg(\frac{1}{2x}\bigg)+5-x\bigg]. \end{align*} {\color{black}Consider the system (\ref{ES}) with characteristic initial data along $u=u_0$ and $v=v_1$.} For initial mass input $\eta_0$ along $u=u_0$, {\color{black}if the following lower bound holds:} \begin{align*} \eta_0>E(\delta_0), \end{align*} then a trapped surface {\color{black}$S_{u,v}$, with properties $\partial_v r(u,v)<0$} and $\partial_u r(u,v)< 0$, {\color{black}forms} in {\color{black}the region $[u_0,u_*]\times[v_1,v_2]\subset\mathcal{R}$}. \end{theorem} \begin{remark}For $0<\delta_0\ll 1$, {\color{black}we can check that the lower bound of $\eta_0$, $E(\delta_0)$, is of order $\delta_0\ln(\frac{1}{\delta_0})$.
Hence,} if $\eta_0\gtrsim\delta_0\ln\bigg(\frac{1}{\delta_0}\bigg)$, {\color{black}a trapped surface is guaranteed to form within $\mathcal{R}$}. \end{remark} \noindent The above theorem is crucial for Christodoulou's final proof of {\color{black}the} weak cosmic censorship in \cite{Chr.4}. There{\color{black},} Christodoulou studied the first singular point formed in {\color{black}the evolution of (\ref{ES})}: if that point is not covered by a trapped region, then a perturbation of the initial data would {\color{black}lead to} the condition in Theorem \ref{thm1.1} being satisfied{\color{black}. Hence,} a trapped surface would form to cover that singular point. \\ \noindent \textcolor{black} {We provide a reproof of Theorem \ref{thm1.1} in the appendix. While Christodoulou's proof was written in Bondi coordinates, here we have rewritten it in a double null foliation.} Double null foliations are widely used in studying both the exterior and interior regions of black holes for various matter models. Many results pertaining to spherical symmetry are also based on double null foliations. 
Hence, there is strong motivation to rewrite \cite{Chr.1} with a double null foliation.\\ \noindent \textcolor{black}{By strengthening the hypothesis on the initial data in Theorem \ref{thm1.1}, in the appendix we also improve Christodoulou's bound:} \begin{minipage}[!t]{0.4\textwidth} \begin{tikzpicture}[scale=0.9] \node[] at (1.25,3.25) {\LARGE $\mathcal{R}$}; \node[] at (0.75,1.65) { $\mathcal{D}(0,v_1)$}; \begin{scope}[thick] \draw[->] (0,0) node[anchor=north]{$(u_0,0)$} -- (0,5)node[anchor = east]{$\Gamma$}; \draw[->] (0,0) --node[anchor = north]{$v$} (3,3); \draw[->] (1.75,1.75) node[anchor=west]{$(u_0,v_1)$} --node[anchor=north]{$u$} (0,3.5)node[anchor = east]{$(0,v_1)$}; \draw[->] (2.75,2.75) node[anchor=west]{$(u_0,v_2)$} -- (1,4.5); \end{scope} \begin{scope}[gray] \draw (2,2) -- (0.25,3.75); \draw (2.25,2.25) -- (0.5,4); \draw (2.5,2.5) -- (0.75,4.25); \draw(1.5,2) -- (2.5,3); \draw(1.25,2.25) -- (2.25,3.25); \draw(1,2.5) -- (2,3.5); \draw(0.75,2.75) -- (1.75,3.75); \draw(0.5,3) -- (1.5,4); \draw(0.25,3.25) -- (1.25,4.25); \draw(0,3.5) -- (1,4.5); \end{scope} \end{tikzpicture} \end{minipage} \begin{minipage}[!t]{0.58\textwidth} {\color{black}\begin{theorem}\label{thm1.2} Assume that {\color{black}Minkowskian data with $\phi(u,v_1)=0$ are prescribed along $v=v_1$}. If the following lower bound on $\eta_0$ holds: \begin{align*} \eta_0>\f92\delta_0, \end{align*} then there exists a MOTS {\color{black}or a trapped surface} in $[u_0,u_*]\times[v_1,v_2]\subset\mathcal{R}$, i.e. $\partial_vr\leq 0$ at some point in $[u_0,u_*]\times[v_1,v_2]$. \end{theorem}} \end{minipage} \hspace{0.05\textwidth} \begin{remark} {\color{black}{Theorem \ref{thm1.2}}} improves the almost-scale-critical result in Theorem \ref{thm1.1}, i.e., $\eta_0>\delta_0 \ln \bigg(\frac{1}{\delta_0}\bigg)$ {\color{black}implies} trapped surface formation, to a scale-critical result, i.e., $\eta_0>\f92\delta_0$ {\color{black}{implies trapped surface formation}}.
{\color{black}In \cite{AL}, the first author and Luk first noted that by prescribing Minkowskian data along $v=v_1$, for the Einstein vacuum equations a scale-critical trapped surface formation criterion could be established.\footnote{For more discussion about scaling considerations, interested readers are also referred to \cite{An2012} and \cite{An2019} by the first author.} For a large universal constant $a$, the corresponding requirement on $\eta_0$ is $\eta_0\geq \delta a$. For the Einstein-scalar field system under spherical symmetry, Theorem \ref{thm1.2} improves the large universal constant $a$ into a \underline{concrete} number $9/2$.} \end{remark} \noindent The main result of our paper is the next theorem. We generalize {\color{black}{the}} above results to {\color{black}{the}} Einstein scalar field coupled with the electromagnetic field. More precisely, we consider the following {\color{black}Einstein-Maxwell-charged scalar field} system: \begin{gather*} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi T_{\mu\nu},\\ T_{\mu\nu}=T^{SF}_{\mu\nu}+T^{EM}_{\mu\nu},\\ T^{SF}_{\mu\nu} =\frac{1}{2}D_\mu\phi(D_\nu\phi)^\dag+\frac{1}{2}D_\nu\phi(D_{\mu}\phi)^\dag-\frac{1}{2}g_{\mu\nu}\big(g^{\alpha\beta} D_\alpha\phi(D_\beta\phi)^\dag\big),\\ T_{\mu\nu}^{EM}=\frac{1}{4\pi}\big(g^{\alpha\beta}F_{\alpha\mu}F_{\beta\nu}-\frac{1}{4}g_{\mu\nu}F^{\alpha\beta}F_{\alpha\beta}\big). \end{gather*} {\color{black}Here,} the Einstein scalar field is coupled to the electromagnetic field by the following {\color{black}{form of the}} Maxwell equation: \begin{align*} \nabla^\nu F_{\mu\nu} = 2\pi\mathfrak{e}i\big(\phi(D_\mu\phi)^\dag-\phi^\dag D_\mu\phi\big), \end{align*} where $D_\mu:={\color{black}\partial_{\mu}+\mathfrak{e}iA_\mu}$ is known as the gauge covariant derivative.
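{\color{black}(A standard computation, recorded here for the reader's convenience and not part of the original argument: under a local $U(1)$ transformation $\phi\mapsto e^{-\mathfrak{e}i\chi}\phi$, $A_{\mu}\mapsto A_{\mu}+\partial_{\mu}\chi$ with $\chi$ a real-valued function, one checks directly that
\begin{align*}
D_{\mu}\big(e^{-\mathfrak{e}i\chi}\phi\big)&=\partial_{\mu}\big(e^{-\mathfrak{e}i\chi}\phi\big)+\mathfrak{e}i\big(A_{\mu}+\partial_{\mu}\chi\big)e^{-\mathfrak{e}i\chi}\phi\\
&=e^{-\mathfrak{e}i\chi}\big(\partial_{\mu}\phi-\mathfrak{e}i\partial_{\mu}\chi\,\phi+\mathfrak{e}iA_{\mu}\phi+\mathfrak{e}i\partial_{\mu}\chi\,\phi\big)=e^{-\mathfrak{e}i\chi}D_{\mu}\phi,
\end{align*}
so $D_{\mu}\phi$ transforms by the same phase as $\phi$.)}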
{\color{black}Here $\mathfrak{e}$ is the coupling constant and $A_{\mu}$ is the electromagnetic potential.} {\color{black} Using the gauge covariant derivative instead of the usual derivative ensures} that the physical equations remain invariant under local $U(1)$ transformations on $\phi$. {\color{black}Recall that under spherical symmetry, we have the ansatz (\ref{metric0}). {\color{black}Using the $\Omega$ appearing in the ansatz}, we define the charge $Q(u,v)$ contained in a sphere $S(u,v)$ to be $Q:=2r^2\Omega^{-2}F_{uv}$.} \begin{theorem}{\color{black}(Main Theorem)}\label{thm1.3} {\color{black}Denoting the outgoing null hypersurface $u=u_0$ by $C$ and {\color{black}the} incoming null hypersurface $v=v_1$ by $\underline{C}$, we define $$\epsilon:=\sup_{C\cup\underline{C}}\frac{Q^2}{r^2}<1, \mbox{ and }L:=\sup_{\underline{C}}r|\phi|^2.$$} {\color{black}\noindent Let $\omega$ be \underline{any} positive constant in $(0,\frac{2}{3})$. Choose $v_2-v_1$ sufficiently small such that \begin{gather} \frac{9\mathfrak{e}^2}{4(1-\epsilon)^2}(v_2-v_1)^2+\frac{12\pi L\mathfrak{e}}{1-\epsilon}(v_2-v_1)\leq\frac{\omega}{4}\label{first assumption on v2-v1},\\ \frac{45\pi\mathfrak{e}^2(v_2-v_1)^2}{\pi(1-\epsilon)^2}+160\pi\mathfrak{e}^2{\color{black}r}(u_0, {\color{black}v_2})\frac{v_2-v_1}{1-\epsilon}|\phi_1|^2\leq 4\omega\label{second assumption on v2-v1}. \end{gather} } {\color{black}\noindent Further require that the initial data along $\underline{C}$ are \underline{not} supercharged, i.e. \begin{equation}\label{non supercharged} m(u,v_1)\geq |Q|(u,v_1). \end{equation}} \noindent Denote \begin{align*} g_\omega(x):=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{1}{(1+x)^2}\bigg(\bigg(\frac{2^{1-\frac{\omega}{2}}}{\omega}+\frac{1}{2^{1+\frac{\omega}{2}}(1+\frac{\omega}{2})}\bigg)x^{1-\frac{\omega}{2}}-\frac{2}{\omega}x-\frac{1}{1+\frac{\omega}{2}}x^2\bigg). 
\end{align*} \noindent Assume the following lower bound on $\eta_0$ holds: \begin{align*} \eta_0>\max\bigg\{\frac{13\epsilon}{\omega}+g_\omega(\delta_0),\frac{9}{2^{1+\frac{\omega}{2}}(1+\delta_0)^2}\delta_0^{1-\frac{\omega}{2}}+g_\omega(\delta_0)\bigg\}, \end{align*} \begin{minipage}[!t]{0.4\textwidth} \begin{tikzpicture}[scale=0.9] \node[] at (1.25,3.25) {\LARGE $\mathcal{R}$}; \node[] at (2.6,2.25) {C}; \node[] at (0.75,2.25) {\underline{C}}; \begin{scope}[thick] \draw[->] (0,0) node[anchor=north]{$(u_0,0)$} -- (0,5)node[anchor = east]{$\Gamma$}; \draw[->] (0,0) --node[anchor = north]{$$} (3,3); \draw[->] (1.75,1.75) node[anchor=west]{$(u_0,v_1)$} --node[anchor=north]{} (0,3.5)node[anchor = east]{$(0,v_1)$}; \draw[->] (2.75,2.75) node[anchor=west]{$(u_0,v_2)$} -- (1,4.5); \end{scope} \begin{scope}[gray] \draw (2,2) -- (0.25,3.75); \draw (2.25,2.25) -- (0.5,4); \draw (2.5,2.5) -- (0.75,4.25); \draw(1.5,2) -- (2.5,3); \draw(1.25,2.25) -- (2.25,3.25); \draw(1,2.5) -- (2,3.5); \draw(0.75,2.75) -- (1.75,3.75); \draw(0.5,3) -- (1.5,4); \draw(0.25,3.25) -- (1.25,4.25); \draw(0,3.5) -- (1,4.5); \end{scope} \end{tikzpicture} \end{minipage} \begin{minipage}[!t]{0.58\textwidth} \noindent then a trapped surface {\color{black}is guaranteed to form in $[u_0,u_*]\times[v_1,v_2]\subset\mathcal{R}$}. \end{minipage} \hspace{0.05\textwidth} \end{theorem} \begin{remark} By comparing the orders of the lower bounds on $\eta_0$ (when {\color{black}$0<\delta_0\ll 1$}) in the hypothesis of the theorem, we can interpret the theorem as follows: if $\eta_0\gtrsim\delta_0^{1-\frac{\omega}{2}}+\frac{13\epsilon}{\omega}$, a trapped surface {\color{black}forms in $\mathcal{R}$}.
{\color{black}Since $\omega$ can be chosen to be an arbitrarily small number in $(0,\f23)$, if we require that $\epsilon$ (the upper bound of $\frac{Q^2}{r^2}$ on $C\cup\underline{C}$) is small and satisfies $\frac{13\epsilon}{\omega}\leq \delta_0^{1-\frac{\omega}{2}}$, our theorem is also an \underline{almost-scale-critical} result.} \end{remark} \begin{remark} {\color{black} Moreover, although we use the symbol $\epsilon$ to denote the upper bound of $\frac{Q^2}{r^2}$ on $C\cup\underline{C}$, $\epsilon$ does not need to be small. In particular, we could choose $0<\delta_0\ll1, \omega=\f12$ and require $\epsilon$ to be of size $1$. This is \underline{not} in the perturbative regime of Christodoulou's result for the Einstein-scalar field system.} Intuitively, for this case our Theorem \ref{thm1.3} is saying: \textit{if the incoming mass contained between $v_1$ and $v_2$ is large enough to overcome the initial charge on $C\cup\underline{C}$, then we can guarantee the formation of a trapped surface}. \end{remark} {\color{black} \begin{remark}\label{nonsupercharged} For initial data along $v=v_1$, we require that the initial data {\color{black}are not} super-charged, i.e. \begin{equation}\label{m Q 1} m(u, v_1)\geq |Q|(u, v_1). \end{equation} {\color{black}It is natural to consider initial data that are not super-charged; otherwise, a non-physical super-charged naked singularity could be prescribed along $v=v_1$. At the same time, (\ref{m Q 1}) also implies an important inequality used in the proof of Proposition \ref{mixed derivatives of r}. \begin{lemma} Along $v=v_1$, condition (\ref{m Q 1}) implies \begin{equation}\label{m Q 2} \frac{m}{r}(u,v_1)\geq \frac{Q^2}{r^2}(u,v_1).
\end{equation} \end{lemma} } \begin{proof} Since there is no MOTS or trapped surface along $v=v_1$, we have $$\frac{2m}{r}(u,v_1)\leq 1, \mbox{ which gives } \frac{m}{r}(u,v_1)\leq \f12.$$ Together with the non-super-charged condition, we also have \begin{equation}\label{Q m} \frac{|Q|}{r}(u,v_1)\leq \frac{m}{r}(u,v_1)\leq \f12. \end{equation} Then, we have \begin{equation*} \begin{split} \frac{m}{r}(u,v_1)-\frac{Q^2}{r^2}(u,v_1)\geq& \frac{|Q|}{r}(u,v_1)-\frac{Q^2}{r^2}(u,v_1)\\ \geq& \frac{|Q|}{r}(1-\frac{|Q|}{r})(u,v_1)\geq 0. \end{split} \end{equation*} {\color{black} For the last inequality, we used (\ref{Q m})}. \end{proof} {\color{black}The inequality \eqref{m Q 2} is crucial in proving Proposition \ref{mixed derivatives of r}. All subsequent results in Section \ref{main section} depend on Proposition \ref{mixed derivatives of r}.} \end{remark} \section{Preliminaries and Set-up}\label{preliminary} To study the problem of trapped surface formation, we need to choose a convenient coordinate system {\color{black}in which} to express the {\color{black}Einstein field equations}. We describe the double null coordinate system for \textit{spherically symmetric spacetimes} in {\color{black}what} follows.\\ \begin{definition} A spacetime $(\mathcal{M},g)$ is called \textit{spherically symmetric} if $SO(3)$ acts on it by isometry, and the orbits of the group are (topological) 2-dimensional spheres $S$. We define the area-radius coordinate $r(S)$ such that $A = 4\pi r^2$, where $A$ is the area of $S$ determined by the induced metric $g|_S$.\footnote{This definition for $r$ implies that $g|_S = r^2(d\theta^2+\sin^2\theta d\phi^2)$.} \end{definition} Under the assumption of spherical symmetry, a spacetime can be represented by {\color{black}a} two-dimensional diagram by considering only the quotient $\mathcal{M}/S$. Hence, a point on such a diagram represents a 2-sphere in spacetime.
There is no loss of generality in assuming that the outgoing ($v$ coordinate) and incoming ($u$ coordinate) null geodesics make 45-degree angles with the horizontal and vertical axes. Furthermore, it is possible to bring the points at infinity to a finite region through a conformal transformation, so that we can visualize the entire spacetime in a bounded region. Such a representation of {\color{black}a} spacetime is called a \textit{Penrose diagram}.\\ {\color{black} We now introduce the setup of the coordinate system, along with important points of interest on the Penrose diagram. \begin{center} \begin{figure} \resizebox{4.5cm}{7cm}{ \begin{tikzpicture} \node[] at (1.5,3.5) {\LARGE $\mathcal{D}(0,v_1)$}; \begin{scope}[thick] \draw[->] (0,0) node[anchor=north]{$(u_0,0)$} -- (0,10)node[anchor = east]{$\Gamma$}; \draw[->] (0,0) --node[anchor = north][label={[label distance=2cm]40:$C$}]{} (5.5,5.5); \draw[->] (3.5,3.5) node[anchor=west]{$p: (u_0,v_1)$} --node[anchor=north]{$\underline{C}$} (0,7)node[anchor = east]{$(0,v_1)$}; \end{scope} \begin{scope}[gray] \draw (4,4) -- (0.5,7.5); \draw (4.5,4.5) -- (1,8); \draw (5,5) -- (1.5,8.5); \draw(3,4) -- (5,6); \draw(2.5,4.5) -- (4.5,6.5); \draw(2,5) -- (4,7); \draw(1.5,5.5) -- (3.5,7.5); \draw(1,6) -- (3,8); \draw(0.5,6.5) -- (2.5,8.5); \draw(0,7) -- (2,9); \end{scope} \end{tikzpicture} }\caption{Illustration of double null coordinate patch} \label{fig: doublenull} \end{figure} \end{center} \begin{enumerate} \item Let $\Gamma$ denote the axis of symmetry of the spacetime. \item Fix a point $p$ on the Penrose diagram. Label the incoming null geodesic intersecting $p$ by $\underline{C}$, and the outgoing null geodesic intersecting $p$ by $C$. On the actual spacetime, $C$ and $\underline{C}$ are therefore null hypersurfaces. \item Parametrize $\underline{C}$ with the variable $u$, and $C$ by the variable $v$. At the intersection of $\Gamma$ and $\underline{C}$, set $u = 0$. Extend $C$ backwards until it intersects $\Gamma$.
At the intersection of $\Gamma$ and $C$, we similarly set $v = 0$. Fixing these values determines the coordinate of $p$, which we call $(u_0,v_1)$. \item In the domain of dependence of $C\cup\underline{C}$, we can now establish a coordinate system: through every point in the domain of dependence runs an incoming and an outgoing null geodesic emanating from $C$ and $\underline{C}$ respectively. Using the parameters $u$ and $v$ defined on $\underline{C}$ and $C$ gives us a coordinate for the point in question. \item Finally, let $\mathcal{D}(0,v_1)$ denote the region in spacetime bounded by $C$, $\underline{C}$ and $\Gamma$. \end{enumerate}} The construction above is illustrated in Figure \ref{fig: doublenull}. With respect to the double null coordinate system, the spherically symmetric metric can be expressed as \begin{align} g=-\Omega^2(u,v)dudv + r^2(u,v)d\theta^2 +r^2(u,v)\sin^2\theta d\phi^2. \label{metric} \end{align} We now define several useful geometric quantities: \begin{definition} The \textit{Hawking mass} $m(u,v)$ contained inside a sphere $S(u,v)$ is defined to be the quantity {\color{black}$\frac{r}{2}(1+4\Omega^{-2}\partial_u r \partial_v r)$}. \end{definition} \begin{definition} We define the charge $Q(u,v)$ contained in a sphere $S(u,v)$ to be \begin{align} Q:=2r^2\Omega^{-2}F_{uv}.\label{chargeeqn} \end{align} \end{definition} \begin{gather} \mbox{Note: } F_{uv}=\textcolor{black}{\partial_uA_v}-\partial_vA_u, \text{ where $A$ is the electromagnetic potential.}\label{maxwelleqn} \end{gather} {\color{black} Due to gauge freedom in the electromagnetic potential, we can impose the condition $A_v\equiv 0$. Hence the above definition becomes: \begin{gather} F_{uv} = -\partial_vA_u\nonumber.
\end{gather} } By substituting the expressions {\color{black}$\eqref{metric}$, $\eqref{chargeeqn}$ and $\eqref{maxwelleqn}$} into the Einstein field equations and Maxwell equations, we arrive at the following system of equations with dynamical real-valued unknowns $r, A_u$ and $\Omega^2$, and {\color{black}complex-valued unknown} $\phi$. For a more comprehensive explanation of these variables, we refer to \cite{Ko}, from which the following {\color{black}Einstein-Maxwell-charged scalar field system} has been obtained. \begin{gather} r\partial_v\partial_ur+\partial_vr\partial_ur=-\frac{\Omega^2}{4}\bigg(1-\frac{Q^2}{r^2}\bigg)\label{EMS1},\\ r^2\partial_u\partial_v\log\Omega = -2\pi r^2\big(D_u\phi(\partial_v\phi)^\dag+\partial_v\phi(D_u\phi)^\dag\big)-\frac{1}{2}\Omega^2\frac{Q^2}{r^2}+\frac{1}{4}\Omega^2+\partial_ur\partial_v r\label{EMS2},\\ \partial_u(\Omega^{-2}\partial_ur) = -4\pi r\Omega^{-2}D_u\phi(D_u\phi)^\dag\label{EMS3},\\ \partial_v(\Omega^{-2}\partial_vr) = -4\pi r\Omega^{-2}\partial_v\phi(\partial_v\phi)^\dag\label{EMS4},\\ r\partial_u\partial_v\phi+\partial_ur\partial_v\phi+\partial_vr\partial_u\phi+\mathfrak{e}i\Psi(A)=0\label{EMS5},\\ \Psi(A) = A_u\partial_v(r\phi)-\frac{\Omega^2}{4}\frac{Q}{r}\phi\label{EMS6},\\ Q=-2r^2\Omega^{-2}\partial_vA_u\label{EMS7},\\ \partial_uQ=2\pi\mathfrak{e}ir^2\big(\phi(D_u\phi)^\dag-\phi^\dag D_u\phi\big)=4\pi\mathfrak{e}r^2Im\big(\phi^\dag D_u\phi\big)\label{EMS8},\\ \partial_vQ=2\pi\mathfrak{e}ir^2\big(\phi(\partial_v\phi)^\dag-\phi^\dag \partial_v\phi\big)=4\pi\mathfrak{e}r^2Im\big(\phi^\dag\partial_v\phi\big)\label{EMS9}, \end{gather} where $D_u:= \partial_u+i\mathfrak{e}A_u$, and $\mathfrak{e}$ is the coupling constant between the scalar and electromagnetic field. It is worth noting that to reduce the above system into that of an uncharged scalar field, it suffices to set $\mathfrak{e} = 0$.
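{\color{black}As a quick consistency check (ours, not taken from \cite{Ko}), one may verify that the Minkowski solution $\Omega^2\equiv 1$, $r(u,v)=\frac{v-u}{2}$, $\phi\equiv 0$, $A_u\equiv 0$, $Q\equiv 0$ satisfies, e.g., (\ref{EMS1}): since $\partial_ur=-\frac{1}{2}$, $\partial_vr=\frac{1}{2}$ and $\partial_v\partial_ur=0$, the left-hand side equals $\frac{1}{2}\cdot\big(-\frac{1}{2}\big)=-\frac{1}{4}$, which agrees with $-\frac{\Omega^2}{4}\big(1-\frac{Q^2}{r^2}\big)=-\frac{1}{4}$; similarly, both sides of (\ref{EMS4}) vanish, as $\partial_v(\Omega^{-2}\partial_vr)=\partial_v(\frac{1}{2})=0$.}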
Also, we can combine $\eqref{EMS5}$, $\eqref{EMS6}$ and $\eqref{EMS7}$, which gives us: \begin{align} &r\partial_u\partial_v\phi+\partial_ur\partial_v\phi+\partial_vr\partial_u\phi+\mathfrak{e}iA_u\partial_v(r\phi)-\mathfrak{e}i\frac{\Omega^2}{4}\frac{Q}{r}\phi=0\nonumber\\ &\implies \partial_v(r\partial_u\phi) + \partial_ur\partial_v\phi+\mathfrak{e}i\bigg(A_u\partial_v(r\phi)+r\phi\partial_vA_u\bigg)=-\mathfrak{e}i\frac{Q\phi\Omega^2}{4r}\nonumber\\ &\implies\partial_v(r\partial_u\phi)+\partial_ur\partial_v\phi+\mathfrak{e}i\partial_v(r\phi A_u)\nonumber\\&=\partial_v(rD_u\phi)+\partial_v\phi\partial_ur=-\mathfrak{e}i\frac{Q\phi\Omega^2}{4r}.\label{complex wave equation} \end{align} The above system \eqref{EMS1}-\eqref{complex wave equation} is subject to initial conditions. There are two types of initial conditions to be considered: \begin{enumerate} \item The first type of initial conditions are derived from geometrical considerations and are independent of the physical scenario. On the center of symmetry $\Gamma$, we must have $r=0$. In addition, by the spherical symmetry assumption, as we consider points infinitesimally close to the center, its incoming null geodesics essentially become outgoing (in the opposite direction). Hence we require that $\partial_vr(u_0,0) =-\partial_ur(u_0,0)$. The evolution of $r$ in the spacetime is then determined by equations $\eqref{EMS1}, \eqref{EMS3}, \text{ and } \eqref{EMS4}$.\\ \textcolor{black}{On $C\cup\underline{C}$, we set $\Omega^2 = 1$.} This amounts to fixing a normalization for the coordinate system. The evolution of $\Omega^2$ in the coordinate patch $[u_0,0]\times[v_1,\infty)$ is then given by equation $\eqref{EMS2}$.\\ \item The second type of initial conditions are those derived from quantities such as the scalar field $\phi$ and electromagnetic potential $A_u$. 
We can prescribe initial data of $\phi$ {\color{black}freely} on $C\cup\underline{C}$, which will completely determine its first derivatives as $C$ and $\underline{C}$ are characteristic hypersurfaces.\\ The electromagnetic potential $A_u$ along outgoing null hypersurfaces can be determined through equation $\eqref{EMS7}$ up to an arbitrary constant, which is in turn determined by $\eqref{EMS9}$. For completeness, it is worth mentioning that there is no loss in generality in letting $A_u = 0$ along $\Gamma$ \textcolor{black}{due to gauge freedom}, although we will not make use of this fact.\\ \end{enumerate} Using the above system of equations, we can compute the derivatives of the Hawking mass: \begin{align} \partial_um&= \partial_u\bigg(\frac{r}{2}(1+4\Omega^{-2}\partial_ur\partial_vr)\bigg)\nonumber\\ &=\frac{\partial_ur}{2}+2\partial_ur\Omega^{-2}\partial_ur\partial_vr+2r\partial_u(\Omega^{-2}\partial_ur)\partial_vr+2r\Omega^{-2}\partial_ur\partial_u\partial_vr\nonumber\\ &=\frac{\partial_ur}{2}+2\partial_ur\Omega^{-2}\partial_ur\partial_vr-8\pi r^2\Omega^{-2}\partial_vr|D_u\phi|^2\nonumber\\ &\hspace{0.5cm}+2\Omega^{-2}\partial_ur\bigg(-\frac{\Omega^2}{4}\big(1-\frac{Q^2}{r^2}\big)-\partial_vr\partial_ur\bigg)\nonumber\\ &=-8\pi r^2\Omega^{-2}\partial_vr|D_u\phi|^2+\frac{Q^2\partial_ur}{2r^2}\label{massu},\\ \partial_vm &= \partial_v\bigg(\frac{r}{2}(1+4\Omega^{-2}\partial_ur\partial_vr)\bigg)\nonumber\\ &=\frac{\partial_vr}{2}+2\partial_vr\Omega^{-2}\partial_ur\partial_vr+2r\partial_v(\Omega^{-2}\partial_vr)\partial_ur+2r\Omega^{-2}\partial_vr\partial_u\partial_vr\nonumber\\ &=\frac{\partial_vr}{2}+2\partial_vr\Omega^{-2}\partial_ur\partial_vr-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2\nonumber\\ &\hspace{0.5cm}+2\Omega^{-2}\partial_vr\bigg(-\frac{\Omega^2}{4}\big(1-\frac{Q^2}{r^2}\big)-\partial_vr\partial_ur\bigg)\nonumber\\ &=-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2+\frac{Q^2\partial_vr}{2r^2}\label{massv}.
\end{align} Finally, we define what is meant by a trapped surface. \begin{definition} A \textit{trapped surface} $S$ in a spherically symmetric spacetime is a point $(u,v)$ on the Penrose diagram (which represents a sphere) such that $\partial_ur(u,v)<0$ and $\partial_vr(u,v)<0$. If $\partial_vr(u,v) = 0$, we call $(u,v)$ a \textit{marginally outer trapped surface (MOTS)}. \end{definition} In {\color{black}the following}, we will only focus on a narrow strip of the double null coordinate patch $[u_0,0]\times [v_1,v_2]$, for some $v_2>v_1$. We are going to give conditions under which trapped surface formation is guaranteed in this strip. We {\color{black}introduce:} \begin{gather} r_i(u) := r(u,v_i), \hspace{0.5cm} m_i(u):=m(u,v_i),\hspace{0.5cm} i = 1,2\nonumber\\ \delta(u):=\frac{r_2(u)}{r_1(u)}-1,\hspace{0.5cm} \delta_0 := \delta(u_0)\nonumber\\ \eta(u):=\frac{2(m_2(u)-m_1(u))}{r_2(u)},\hspace{0.5cm} \eta_0 := \eta(u_0)\nonumber\\ x(u):=\frac{r_2(u)}{r_2(u_0)}\label{quantities} \end{gather} See Figure \ref{doublenull2} for an illustration. Henceforth, any dynamical quantity (except for $u$ and $v$) with the subscript $\{1,2\}$ shall be treated as a function of $u$ with $v = v_i, i = \{1,2\}$ fixed. 
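{\color{black}A standard observation (stated here in our normalization) explains the terminology ``scale critical'': under the rescaling $\tilde{u}=au$, $\tilde{v}=av$ with $a>0$, together with $\tilde{r}=ar$, $\tilde{\Omega}(\tilde{u},\tilde{v})=\Omega(u,v)$ and hence, by the definition of the Hawking mass, $\tilde{m}=am$ (which, when $\mathfrak{e}=0$ and $Q\equiv 0$, maps solutions of the system to solutions), the quantities $\delta$, $\eta$ and $x$ defined in (\ref{quantities}) are all invariant, being ratios of quantities with the same scaling. A criterion of the form $\eta_0\gtrsim\delta_0$ is therefore invariant under this rescaling, while $\eta_0\gtrsim\delta_0\ln\big(\frac{1}{\delta_0}\big)$ is not, which is why the latter is only called almost scale critical.}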
Furthermore, denote {\color{black} the region $[u_0,0]\times[v_1,v_2]$ by $\mathcal{R}$.}\\ \begin{figure} \centering \begin{tikzpicture} \node[] at (1.25,3.25) {\LARGE $\mathcal{R}$}; \node[] at (0.7,1.6) { $\mathcal{D}(0,v_1)$}; \begin{scope}[thick] \draw[->] (0,0) node[anchor=north]{$(u_0,0)$} -- (0,5)node[anchor = east]{$\Gamma$}; \draw[->] (0,0) --node[anchor = north]{$v$} (3,3); \draw[->] (1.75,1.75) node[anchor=west]{$(u_0,v_1)$} --node[anchor=north]{$u$} (0,3.5)node[anchor = east]{$(0,v_1)$}; \draw[->] (2.75,2.75) node[anchor=west]{$(u_0,v_2)$} -- (1,4.5); \end{scope} \begin{scope}[gray] \draw (2,2) -- (0.25,3.75); \draw (2.25,2.25) -- (0.5,4); \draw (2.5,2.5) -- (0.75,4.25); \draw(1.5,2) -- (2.5,3); \draw(1.25,2.25) -- (2.25,3.25); \draw(1,2.5) -- (2,3.5); \draw(0.75,2.75) -- (1.75,3.75); \draw(0.5,3) -- (1.5,4); \draw(0.25,3.25) -- (1.25,4.25); \draw(0,3.5) -- (1,4.5); \end{scope} \end{tikzpicture} \caption{Problem setup on Penrose Diagram} \label{doublenull2} \end{figure} \section{A Trapped Surface Formation Criterion for the Complex Scalar Field}\label{main section} \subsection{Outline} {\color{black}Before giving the complete proof, we briefly describe the main ideas. \begin{enumerate} \item First, we prove that $r(u,v)$ is decreasing with respect to $u$ in Lemma \ref{negativeincomingcharged}; hence the dimensionless length scale $x(u):=\frac{r_2(u)}{r_2(u_0)}$ decreases as $u$ increases, and $x(u_0) = 1$. \item Then we employ a proof-by-contradiction argument: assuming that $\mathcal{D}(0,v_1)\cup\mathcal{R}$ does not have a trapped surface, we derive an inequality for $\eta$ in terms of $x$ in the region $[u_0,u_*]\times[v_1,v_2]$. We further show that $\frac{d\eta}{dx}$ is bounded from above, i.e. $\frac{d\eta}{du}$ is bounded from below, and therefore we get a lower bound on $\eta(u_*)$.
\noindent If this lower bound is greater than $1$, i.e., $\eta(u_*)= \frac{2(m_2-m_1)}{r_2}(u_*)>1$, it implies $\frac{2m_2}{r_2}(u_*)>1$, which means that $S(u_*,v_2)$ is a trapped surface. Since $(u_*,v_2)$ is a point in $\mathcal{R}$, the above gives us the desired contradiction. \end{enumerate} The key to the above argument is to bound $\frac{d\eta}{du}$. A direct computation gives: \begin{align} \frac{d\eta}{dx}&=-\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\bigg(r_2^2|D_u\phi_2|^2-\frac{\Omega_1^{-2}\partial_vr_1}{\Omega_2^{-2}\partial_vr_2}r_1^2|D_u\phi_1|^2\bigg)+\frac{Q_2^2}{xr_2^2}.\label{1.10intro} \end{align} We show in Lemma \ref{bound for charge} that the \underline{non}-supercharged assumption along $v=v_1$ implies that ${Q_2^2}/{r_2^2}$ remains bounded by $\eta$, plus a small error term. If $\eta$ is large enough compared to ${Q_2^2}/{r_2^2}$, then the error can be absorbed into $\eta$. \noindent With the control of ${Q_2^2}/{r_2^2}$ in terms of $\eta$, we further have \begin{gather} \Theta^2:=\big(r_2|D_u\phi_2|-r_1|D_u\phi_1|\big)^2\lesssim \frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)\label{keylemma1intro}\\ \text{and } \frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}(u)\lesssim e^{-\eta(u)}\label{keylemma2intro}. \end{gather} We can substitute $\eqref{keylemma1intro}$ and $\eqref{keylemma2intro}$ into $\eqref{1.10intro}$ to obtain \begin{align} \frac{d\eta}{dx}\lesssim -\frac{\eta}{x}\bigg(1-\frac{\delta_0}{x(1+\delta_0)-\delta_0}\bigg)+\frac{1}{x}\frac{\delta_0}{x(1+\delta_0)-\delta_0}.\label{diffineq} \end{align} Integrating this will give us a lower bound on $\eta(u)$ for $u\in[u_0,u_*]$.
The resulting lower bound ensures that $\eta(u)$ remains large enough compared to $\frac{Q_2^2}{r_2^2}$ on any interval $[u_0,u']$ on which $\eqref{diffineq}$ has been established, so that $\eqref{diffineq}$ remains valid in a neighbourhood of $u'$; hence the domain of validity of $\eqref{diffineq}$ can be extended to the whole of $[u_0,u_*]$. Finally, the inequality also shows that $\eta(u_*)>1$, contradicting the no-trapped-surfaces assumption. Hence the initial assumption that $\mathcal{D}(0,v_1)\cup\mathcal{R}$ has no trapped surfaces cannot be true and this completes the argument.\\ } \subsection{Proof of Theorem \ref{thm1.3}} To begin the proof proper, we first give a (negative) upper bound for $\partial_ur$ in the region $[u_0,0]\times[v_1,\infty)$ as promised. The following lemma is the analog of Lemma \ref{negativeincoming} in the uncharged case. This has two important consequences described in the remarks.\\ \begin{lemma}\label{negativeincomingcharged} {\color{black}$\partial_ur\leq -\frac{1-\epsilon}{2}\Omega^2$ everywhere in $\mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$.} \begin{proof} Rewrite $\eqref{EMS1}$ as \begin{align*} \partial_v\big(r\partial_ur\big)=-\frac{\Omega^2}{4}\bigg(1-\frac{Q^2}{r^2}\bigg). \end{align*}Applying the assumptions that $\epsilon < 1$ {\color{black}and $\Omega^2=1$} on $C$, the following inequalities hold on $C$: \begin{align*} -\frac{1}{4}\leq\partial_v(r\partial_ur)(u_0,v)\leq-\frac{1}{4}+\frac{\epsilon}{4}. \end{align*} Integrating both sides in $v$ (using $r(u_0,0)=0$) and dividing by $r$: \begin{align} -\frac{v}{4r(u_0,v)}\leq \partial_ur(u_0,v)\leq -\frac{v(1-\epsilon)}{4r(u_0,v)}\label{estimate for nu}. \end{align} {\color{black}Letting $v\to 0$ in} the first inequality of $\eqref{estimate for nu}$, we get \begin{align}\label{boundoutgoing} -\frac{1}{4\partial_vr(u_0,0)}\leq\partial_ur(u_0,0) = -\partial_vr(u_0,0)\implies\partial_vr(u_0,0)\leq\frac{1}{2}. \end{align} Since $\Omega^2 = 1$ on $C$, $\eqref{EMS4}$ gives us $\partial_v\partial_vr\leq 0$, i.e.
$r$ is concave with respect to {\color{black}$v$}. Combining this with the fact that $r(u_0,0) = 0$, we have: \begin{align*} \frac{r}{v}(u_0,v)\leq\partial_vr(u_0,0). \end{align*} Substituting this into the second inequality of $\eqref{estimate for nu}$, followed by applying $\eqref{boundoutgoing}$, we get \begin{align*} \partial_ur(u_0,v)\leq - \frac{1-\epsilon}{4\partial_vr(u_0,0)}\leq -\frac{1-\epsilon}{2}. \end{align*} By $\eqref{EMS3}$, $\Omega^{-2}\partial_ur$ is decreasing along incoming null geodesics. Hence for a general point in {\color{black} $\mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$}, we have $\Omega^{-2}\partial_ur\leq-\frac{1-\epsilon}{2}$. \end{proof} \end{lemma} \begin{remark} Under the assumption of no trapped surfaces, $m(u,v)\geq 0$ for all {\color{black}$(u,v)\in \mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$}. \end{remark} \begin{proof} Given any point $(u,v)\in \mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$, we can extend the outgoing null geodesic backwards until it intersects $\Gamma$ at some coordinate $(u,v_c)$, so that $r(u,v_c) = 0$. Using $\eqref{massv}$, we have \begin{align*} \partial_vm = -8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2 + \frac{Q^2\partial_vr}{2r^2}. \end{align*} {\color{black}Since} $\partial_ur\leq 0$ by Lemma $\ref{negativeincomingcharged}$, and $\partial_vr> 0$ by the no trapped surface assumption, we get $\partial_vm\geq 0$. Combining this with the fact that $m(u,v_c) = 0$, we obtain the desired result. \end{proof} \subsection{Estimates for $Q,r$}\label{sec:nothing} In this section we will bound $Q$ in terms of the Hawking mass in $\mathcal{R}${\color{black}, and show that $\partial_u\partial_vr \leq 0$} under appropriate conditions. Obtaining a bound on $Q$ will require a bound on $\frac{r_2}{r_1}$, which in turn requires a bound on $Q$. Hence we will develop these bounds using a bootstrap argument. \begin{proposition}\label{A priori estimate on Q} {\color{black}Fix $0<\omega<\frac{2}{3}$.
Choose $v_2-v_1$ sufficiently small satisfying \eqref{first assumption on v2-v1} and \eqref{second assumption on v2-v1}.} Let $v_1<v_a\leq v_2$. Assume that $\frac{r_2(u)}{r_1(u)}\leq\frac{3}{2}$, {\color{black}i.e., $\delta(u)\leq \frac12$ } for $u\in[u_0,0]$, and that {\color{black}$\mathcal{R}:=[u_0,0]\times[v_1,v_2]$} is free of trapped surfaces. Then the following inequality holds: \begin{align*} \frac{Q_a^2(u)}{r_a^2(u)}\leq\frac{\omega}{4}\eta_a(u)+{\color{black}\frac{2Q_1^2(u)}{r^2_a(u)}}, \end{align*} where \begin{align*} {\color{black}\eta_a(u):=\frac{2\big(m_a(u)-m_1(u)\big)}{r_a(u)}}. \end{align*} Here the subscript $a$ indicates a quantity evaluated at the point $(u,v_a)$. \end{proposition} \begin{proof} {\color{black}We write $Q_a$ as the integral of its derivative:} \begin{align}\label{chargeestimate1} Q_a^2 &=\bigg(\int_{v_1}^{v_a}\partial_vQ\text{ }dv +Q_1\bigg)^2\leq 2\bigg(\int_{v_1}^{v_a}\partial_vQ\text{ }dv\bigg)^2+2Q_1^2\nonumber\\ &\leq 2\bigg(\int_{v_1}^{v_a}4\pi\mathfrak{e}|\phi||\partial_v\phi|r^2dv\bigg)^2+2Q_1^2,\text{ by applying \eqref{EMS9}}\nonumber\\ &\leq 32\pi^2\mathfrak{e}^2\int_{v_1}^{v_a}r^2|\phi|^2dv\cdot\int_{v_1}^{v_a}r^2|\partial_v\phi|^2dv+2Q_1^2. \end{align} {\color{black}The second integral in the previous line can be bounded as follows:} \begin{align}\label{chargeestimate2} \int_{v_1}^{v_a}r^2|\partial_v\phi|^2dv=\int_{v_1}^{v_a}\frac{-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2}{-8\pi\Omega^{-2}\partial_ur}dv\nonumber\\ \leq\frac{1}{4\pi(1-\epsilon)}\int_{v_1}^{v_a}\partial_vm-\frac{Q^2\partial_vr}{2r^2}\text{ }dv\leq\frac{m_a-m_1}{4\pi(1-\epsilon)}, \end{align} where we applied Lemma \ref{negativeincomingcharged} in the second-to-last inequality to pull out the term {\color{black}$\Omega^{-2}\partial_ur$} in the denominator{\color{black}, and} used the assumption that $\partial_vr\geq 0$ for the last inequality.
Next, we bound the first integral:\textcolor{black}{ \begin{align}\label{chargeestimate3} \int_{v_1}^{v_a}r^2|\phi|^2dv &=\int_{v_1}^{v_a}\bigg(r^2\bigg|\int_{v_1}^v\partial_{v'}\phi \text{ }dv'+\phi_1\bigg|^2\bigg)dv\nonumber\\ &\leq 2r_a^2\int_{v_1}^{v_a}\bigg[\bigg(\int_{v_1}^v\partial_{v'}\phi\text{ } dv'\bigg)^2+|\phi_1|^2\bigg]dv\nonumber\\ &\leq 2r_a^2\int_{v_1}^{v_a}\bigg[(v-v_1)\int_{v_1}^v|\partial_{v'}\phi|^2dv'+|\phi_1|^2 \bigg]dv\nonumber\\ &\leq 2r_a^2\int_{v_1}^{v_a}\bigg[(v-v_1)\int_{v_1}^v\bigg(\frac{-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2}{-8\pi r^2\Omega^{-2}\partial_ur}\bigg)dv'+|\phi_1|^2 \bigg]dv\nonumber\\ &\leq\frac{2r_a^2(v_a-v_1)}{r_1^2}\frac{1}{8\pi}\frac{2}{1-\epsilon}\int_{v_1}^{v_a}\int_{v_1}^v\partial_{v'}m\text{ }dv'dv+2r_a^2\int_{v_1}^{v_a}|\phi_1|^2dv,\nonumber \\ &\hspace{0.5cm}\text{ by Lemma }\ref{negativeincomingcharged}\nonumber\\ &\leq\frac{v_a-v_1}{2\pi(1-\epsilon)}\bigg(\frac{r_a}{r_1}\bigg)^2\int_{v_1}^{v_a}(m_a-m_1)dv+2r_a^2(v_a-v_1)|\phi_1|^2\nonumber\\ &\leq\frac{v_a-v_1}{2\pi(1-\epsilon)}\bigg(\frac{r_a}{r_1}\bigg)^2(v_a-v_1)(m_a-m_1)+2r_a^2(v_a-v_1)|\phi_1|^2. \end{align}} Substituting $\eqref{chargeestimate2}$ and $\eqref{chargeestimate3}$ back into $\eqref{chargeestimate1}$ and rearranging, we get \begin{align*} Q_a^2\leq \frac{4\mathfrak{e}^2(v_a-v_1)^2}{(1-\epsilon)^2}\bigg(\frac{r_a}{r_1}\bigg)^2(m_a-m_1)^2+\frac{16\pi\mathfrak{e}^2}{1-\epsilon}r_a^2(m_a-m_1)(v_a-v_1)|\phi_1|^2+2Q_1^2. 
\end{align*} Dividing both sides by $r_a^2$, we get\textcolor{black}{ \begin{align*} \frac{Q_a^2}{r_a^2}&\leq\frac{\mathfrak{e}^2(v_a-v_1)^2}{(1-\epsilon)^2}\bigg(\frac{r_a}{r_1}\bigg)^2\eta_a^2+\frac{8\pi\mathfrak{e}^2(v_a-v_1)}{1-\epsilon}\bigg(\frac{r_a}{r_1}\bigg)(r_1|\phi_1|^2)\eta_a+2\frac{Q_1^2}{r_a^2}\\ &\leq\bigg(\frac{9\mathfrak{e}^2}{4(1-\epsilon)^2}(v_a-v_1)^2\eta_a+\frac{12\pi L\mathfrak{e}^2}{1-\epsilon}(v_a-v_1)\bigg)\eta_a+2\frac{Q_1^2}{r_a^2}, \hspace{0.5cm}\text{since }\frac{r_a}{r_1}\leq\frac{3}{2}\\ &\leq\bigg(\frac{9\mathfrak{e}^2}{4(1-\epsilon)^2}(v_a-v_1)^2+\frac{12\pi L\mathfrak{e}^2}{1-\epsilon}(v_a-v_1)\bigg)\eta_a+2{\color{black}\frac{Q_1^2}{r_a^2}}, \end{align*}} where in the last inequality we have used the fact that $\eta_a<1$. This is because the no trapped surface or MOTS assumption, together with $m_1\geq 0$, gives: \begin{align*} \eta_a=\frac{2(m_a-m_1)}{r_a}\leq\frac{2m_a}{r_a}< 1. \end{align*} Hence by the assumption \eqref{first assumption on v2-v1} on $v_2-v_1$, we have $\frac{Q_a^2}{r_a^2}\leq \frac{\omega}{4}\eta_a+\frac{2Q_1^2}{r_a^2}$; since $r_a\geq r_1$ and $\frac{Q_1^2}{r_1^2}\leq\epsilon$, this in particular implies $\frac{Q_a^2}{r_a^2}\leq \frac{\omega}{4}\eta_a+2\epsilon$. \end{proof} We wish to get rid of the $\epsilon$ term in this upper bound. This will be done in Lemma $\ref{bound for charge}$. For that, we will need the next proposition, which is the equivalent of Proposition \ref{mixed derivatives of r real} in the uncharged case. {\color{black}This proposition is proven} using a bootstrap argument.
Then $\partial_u\partial_vr \leq 0$ in {\color{black}$[u_0,u_*]\times[v_1,v_2]$} and {\color{black} $\delta(u):=\frac{r_2}{r_1}-1\leq\frac{1}{2}$} for $u\in[u_0,u_*]${\color{black}, where $u_*$ is defined such that $x(u_*)=\frac{3\delta_0}{1+\delta_0}$.} \end{proposition} \begin{proof} Let $x':=\inf\big\{x\in[\frac{3\delta_0}{1+\delta_0},1]\big|\delta(y)\leq\frac{1}{2}\text{ holds for } y\in[x,1]\big\}$. We will aim to show that $x'=\frac{3\delta_0}{1+\delta_0}$. {\color{black}This proves the claim that $\delta(u)\leq\frac{1}{2}$ for $u\in[u_0,u_*]$, as $x$ is monotonically decreasing with respect to $u$.}\\ Since we have $\delta(x')\leq\frac{1}{2}$, it follows that $\frac{r_2(x')}{r_1(x')}\leq\frac{3}{2}$ and hence we can apply Proposition \ref{A priori estimate on Q}. Thus, for every {\color{black}$x\in [x',1],v_a\in [v_1,v_2]$}, we have {\color{black} \begin{align*} \frac{Q^2}{r^2}(x,v_a)&\leq\frac{\omega}{4}\eta_a+\frac{2Q_1^2}{r^2}\leq\eta_a+\frac{2Q_1^2}{r^2}\\ &=\frac{2m_a}{r}-\frac{2m_1}{r}+\frac{2Q_1^2}{r^2}=\frac{2m_a}{r}-\frac{2}{r}\bigg(m_1-\frac{Q_1^2}{r}\bigg)\\ &\leq\frac{2m_a}{r}-\frac{2}{r}\bigg(m_1-\frac{Q_1^2}{r_1}\bigg), \hspace{0.5cm}\text{since } r\geq r_1. \end{align*}} \textcolor{black}{By the non-supercharged assumption, \eqref{m Q 2} from Remark $\ref{nonsupercharged}$ tells us that $m_1-\frac{Q_1^2}{r_1}\geq 0$}. Hence we have \begin{align*} \frac{Q^2}{r^2}(x,v_a)\leq \frac{2m}{r}(x,v_a). \end{align*}Now we rewrite \eqref{EMS1} into the following equivalent form: \begin{align}\label{EMS1mass}{\color{black} \partial_u\partial_vr=-\frac{\Omega^2}{4r}\bigg(\frac{2m}{r}-\frac{Q^2}{r^2}\bigg)}. \end{align} Since we have just shown that {\color{black}$\frac{2m}{r}\geq\frac{Q^2}{r^2}$} for $x\in [x',1]$, it follows that $\partial_u\partial_vr\leq 0$ in the region $[u_0,u']\times[v_1,v_2]$.\\ Integrating with respect to $u$, we get: \begin{align*} \partial_vr(u)-\partial_vr(u_0)\leq 0\implies\partial_vr(u)\leq\partial_vr(u_0).
\end{align*} Integrating the last inequality above with respect to $v$, we obtain \begin{align*} r_2(u)-r_1(u)\leq r_2(u_0)-r_1(u_0), \hspace{0.5cm}\text{for all } u\in [u_0,u']. \end{align*} {\color{black}Here $u'$ is defined so that $x(u')=x'$}. We can use {\color{black}this} to derive a bound for $\delta(u)$: \begin{align*} \delta(u) = \frac{r_2}{r_1}-1&=\frac{r_2-r_1}{r_2-(r_2-r_1)}\leq\frac{r_2(u_0)-r_1(u_0)}{r_2(u)-(r_2(u_0)-r_1(u_0))}\\ &\leq\frac{\delta_0}{\frac{r_2(u)}{r_1(u_0)}-\delta_0}=\frac{\delta_0}{x(u)(1+\delta_0)-\delta_0},\hspace{0.5cm}\text{ for all }u\in[u_0,u']. \end{align*} Hence{\color{black},} if $x'>\frac{3\delta_0}{1+\delta_0}$, we have $\delta(x')<\frac{\delta_0}{3\delta_0-\delta_0}=\frac{1}{2}$. By the continuity of the function $\delta(x)$, there exists some $x''<x'$ such that $\delta(x)<\frac{1}{2}$ for all $x\in[x'',1]$, which is a contradiction to the infimum property of $x'$. {\color{black}Therefore, we must have} $x' = \frac{3\delta_0}{1+\delta_0}$. \end{proof} {\color{black} \begin{remark} Since all subsequent Lemmas and Propositions depend on Proposition $\ref{A priori estimate on Q}$, the non-supercharged hypothesis is required for all of them. \end{remark} } {\color{black}Recall {\color{black} that} $\epsilon:=\sup_{\underline{C}}\frac{Q^2}{r^2}<1$}. Combining Proposition \ref{A priori estimate on Q} and Proposition \ref{mixed derivatives of r}, we get the following lemma: \begin{lemma}\label{bound for charge} Assume that {\color{black}the initial data along $\underline{C}$ is not super-charged} and that $\mathcal{R}$ is free of trapped surfaces. Then for every {\color{black}$(u,v_a)\in[u_0,u_*]\times[v_1,v_2]$}, we have the following estimate for the charge: \begin{align*} {\color{black}\frac{Q_a^2}{r_a^2}}\leq\frac{\omega}{4}{\color{black}\eta_a}+2\epsilon.
\end{align*} Furthermore, if {\color{black}$\eta_a\geq\frac{8\epsilon}{\omega}$, then $\frac{Q_a^2}{r_a^2}\leq\frac{\omega}{2}\eta_a$.} \end{lemma} \begin{proof} The first part of the lemma is almost immediate: since the hypotheses of this lemma include those of Proposition $\ref{mixed derivatives of r}$, we have $\delta(u)\leq\frac{1}{2}$ for all $u\in [u_0,u_*]$, which is the hypothesis of Proposition $\ref{A priori estimate on Q}${\color{black}. This gives} us the first part of the Lemma. The second part follows from a computation. {\color{black}$\eta_a\geq\frac{8\epsilon}{\omega}$} implies that {\color{black}$2\epsilon\leq \frac{\omega}{4}\eta_a.$ Hence, $$\frac{Q_a^2}{r_a^2}\leq \frac{\omega}{4}\eta_a+2\epsilon\leq \frac{\omega}{4}\eta_a+\frac{\omega}{4}\eta_a=\frac{\omega}{2}\eta_a.$$ } \end{proof} \subsection{Estimates for $D_u\phi$, $\partial_vr$} In this section, we will prove two lemmas which hold in the region {\color{black}$[u_0,u_*]\times[v_1,v_2]$ under} the premise that the region is free of trapped surfaces. These are the {\color{black}equivalents of Lemmas} \ref{keylemma1real} and \ref{keylemma2real} in the uncharged case. \begin{lemma}\label{keylemma1} Define $\Theta:=r_2|D_u\phi_2|-r_1|D_u\phi_1|$. Suppose that {\color{black}the initial data along $\underline{C}$ is not super-charged} and $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces. {\color{black}If $\eta\geq \frac{8\epsilon}{\omega}$, then } \begin{align*} \Theta(u)^2\leq\bigg(1+\frac{\omega}{2}\bigg) \frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)(u) \end{align*} for all $u\in [u_0,u_*]$.
\end{lemma} \begin{proof} By integrating equation $\eqref{complex wave equation}$, we get \begin{align} \Theta^2 &= \big(r_2|D_u\phi_2|-r_1|D_u\phi_1|\big)^2\nonumber\\ &\leq|r_2D_u\phi_2-r_1D_u\phi_1|^2=\bigg|\int_{v_1}^{v_2}-\partial_ur\partial_v\phi -i\mathfrak{e}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2\nonumber\\ &\leq(1+\kappa)\bigg|\int_{v_1}^{v_2}-\partial_ur|\partial_v\phi| dv\bigg|^2+\big(1+\frac{1}{\kappa}\big)\mathfrak{e}^2\bigg|\int_{v_1}^{v_2}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2\text{, for any }\kappa>0\nonumber\\ &\leq\frac{1+\kappa}{8\pi}\int_{v_1}^{v_2}-8\pi r^2\partial_ur\Omega^{-2}|\partial_v\phi|^2dv\int_{v_1}^{v_2}-\frac{\partial_ur}{r^2\Omega^{-2}}dv+\big(1+\frac{1}{\kappa}\big)\mathfrak{e}^2\bigg|\int_{v_1}^{v_2}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2,\label{lemma1eqn1} \end{align} {\color{black}where we used} H\"older's inequality for the last inequality.\\ {\color{black}We bound the first summand as in} Lemma \ref{keylemma1real}, starting with its first integral: \begin{align} \int_{v_1}^{v_2}-8\pi r^2\partial_ur\Omega^{-2}|\partial_v\phi|^2dv&=\int_{v_1}^{v_2}\partial_vm-\frac{Q^2\partial_vr}{2r^2}dv\nonumber\\ &\leq\int_{v_1}^{v_2}\partial_vm\text{ }dv, \text{ since }\partial_vr> 0\nonumber\\ &=m_2-m_1\label{lemma1eqn2}. \end{align} To bound the second integral of the first {\color{black}term}, we apply Proposition $\ref{mixed derivatives of r}$ to get $\partial_v\partial_ur\leq 0$, and hence $\partial_ur\geq\partial_ur_2$. Also, equation $\eqref{EMS4}$ implies that $\Omega_2^{-2}\partial_vr_2\leq\Omega^{-2}\partial_vr$.
Combining these two pieces of information, we have \begin{align} \int_{v_1}^{v_2}-\frac{\partial_ur}{r^2\Omega^{-2}}dv&={\color{black}\int_{r_1}^{r_2}-\frac{\partial_ur}{r^2\Omega^{-2}\partial_vr}dr}\nonumber\\ &{\color{black}\leq-\partial_ur_2\int_{r_1}^{r_2}\frac{1}{r^2\Omega^{-2}\partial_vr}dr}\nonumber\\ &\leq\frac{-\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\int_{r_1}^{r_2}\frac{1}{r^2}dr\nonumber\\ &=\frac{\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\big(\frac{1}{r_2}-\frac{1}{r_1}\big)\label{lemma1eqn3}. \end{align} Substituting $\eqref{lemma1eqn2}$ and $\eqref{lemma1eqn3}$ back into $\eqref{lemma1eqn1}$ gives us: \begin{align} \Theta(u)^2\leq (1+\kappa)\frac{\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_2}-\frac{1}{r_1}\bigg)+\big(1+\frac{1}{\kappa}\big)\mathfrak{e}^2\bigg|\int_{v_1}^{v_2}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2\label{Thetabound}. \end{align} To bound the remaining integral, {\color{black}we apply the Cauchy-Schwarz inequality and Lemma \ref{bound for charge}:} \begin{align} \bigg|\int_{v_1}^{v_2}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2&= \bigg|\frac{1}{4}\int_{v_1}^{v_2}\frac{Q}{r^\frac{3}{2}\Omega^{-2}}r^\frac{1}{2}\phi\text{ }dv\bigg|^2\nonumber\\ &\leq\frac{1}{16}\int_{v_1}^{v_2}\frac{Q^2}{r^2}\frac{1}{r}\frac{1}{\Omega^{-2}}\frac{1}{\Omega^{-2}}dv\cdot\int_{v_1}^{v_2}r|\phi|^2dv\nonumber\\ &\leq{\color{black}\frac{1}{16}\int_{r_1}^{r_2}\frac{\omega}{2}\eta_a\frac{1}{r}\frac{1}{\Omega^{-2}\partial_vr}\frac{-\partial_ur}{-\Omega^{-2}\partial_ur}dr\int_{v_1}^{v_2}r|\phi|^2dv}\label{lastterm}. 
\end{align} We bound the first integral: {\color{black} by Proposition \ref{mixed derivatives of r} we have $\partial_u\partial_v r\leq 0$, which implies $\partial_ur_2\leq\partial_ur$.} And with $\Omega_2^{-2}\partial_vr_2\leq\Omega^{-2}\partial_vr$ by \eqref{EMS4}, we derive \begin{align} \int_{r_1}^{r_2}\frac{\omega}{2}\eta_a\frac{1}{r}\frac{1}{\Omega^{-2}\partial_vr}\frac{-\partial_ur}{-\Omega^{-2}\partial_ur}dr&\leq\frac{-\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\int_{r_1}^{r_2}\frac{\omega}{2}\frac{2(m_a-m_1)}{r_a}\frac{1}{r}\frac{1}{-\Omega^{-2}\partial_ur}dr\nonumber\\ &\leq \frac{-2\omega}{1-\epsilon}\frac{\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\int_{r_1}^{r_2}(m_2-m_1)\frac{1}{r^2}dr, \nonumber\\&\hspace{0.5cm}{\color{black}\text{by Lemma \ref{negativeincomingcharged} (}\Omega^{-2}\partial_u r\leq-\tfrac{1-\epsilon}{2}\text{)}}\nonumber\\ &=-\frac{2\omega}{1-\epsilon}\frac{\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)\label{firstintegral}.
\end{align} Now we bound the second integral: \begin{align*} &\int_{v_1}^{v_2}r|\phi|^2dv= \int_{v_1}^{v_2}r\bigg|\int_{v_1}^{v'}\partial_v\phi\text{ }dv +\phi_1\bigg|^2dv'\\ \leq& 2\int_{v_1}^{v_2}\bigg(r\bigg|\int_{v_1}^{v'}\partial_v \phi\text{ }dv\bigg|^2+r|\phi_1|^2\bigg)dv'\\ \leq& 2\int_{v_1}^{v_2}\bigg(r(v'-v_1)\int_{v_1}^{v'}|\partial_v \phi|^2\text{ }dv+r|\phi_1|^2\bigg)dv'\\ \leq& 2r_2\int_{v_1}^{v_2}\bigg((v_2-v_1)\int_{v_1}^{v'}\frac{-8\pi\Omega^{-2}r^2\partial_ur|\partial_v\phi|^2}{-8\pi\Omega^{-2}\partial_urr^2}dv+|\phi_1|^2\bigg)dv'\\ \leq& 2r_2\int_{v_1}^{v_2}\bigg(\frac{2(v_2-v_1)}{(1-\epsilon)8\pi r_1^2}\int_{v_1}^{v'}\partial_vm\text{ }dv+|\phi_1|^2\bigg)dv'\\ \leq&2r_2\int_{v_1}^{v_2}\bigg(\frac{v_2-v_1}{(1-\epsilon)4\pi r_1^2}(m_2-m_1)+|\phi_1|^2\bigg)dv'\\ \leq& \frac{r_2^2}{r_1^2}\frac{(v_2-v_1)^2}{4\pi(1-\epsilon)}\frac{2(m_2-m_1)}{r_2}+2r_2(v_2-v_1)|\phi_1|^2. \end{align*} Using the no trapped surface {\color{black}or MOTS assumption, we have} $\frac{2(m_2-m_1)}{r_2}= \eta<1$. Using Proposition \ref{mixed derivatives of r}, $\frac{r_2}{r_1}\leq\frac{3}{2}$. Hence we have: \begin{align} \int_{v_1}^{v_2}r|\phi|^2dv\leq\frac{9(v_2-v_1)^2}{16\pi(1-\epsilon)}+2r_2(u_0)(v_2-v_1)|\phi_1|^2\leq\omega\frac{1-\epsilon}{320\mathfrak{e}^2\pi}, \label{secondintegral} \end{align} where we use the assumption in \eqref{second assumption on v2-v1}. Substituting \eqref{firstintegral} and \eqref{secondintegral} back into \eqref{lastterm}, we get {\color{black}\begin{align*} \bigg|\int_{v_1}^{v_2}\frac{Q\phi\Omega^2}{4r}dv\bigg|^2\leq-\frac{\omega^2}{160\mathfrak{e}^2\pi}\frac{\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg).
\end{align*}} Now, set $\kappa = \frac{\omega}{4}$ in \eqref{Thetabound} and utilize the inequality above: \begin{align*} \Theta^2 \leq&\big(1+\frac{\omega}{4}\big)\frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)\\ &+\big(1+\frac{4}{\omega}\big)\frac{\omega^2}{160\pi}\frac{-\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)\\ =&\bigg(1+\frac{\omega}{4}+\frac{\omega}{20}(4+\omega)\bigg)\frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg). \end{align*} Using the assumption that {\color{black}$\omega<\frac{2}{3}$}, we have $\omega+4<5$, and therefore \begin{align*} \Theta^2&\leq\big(1+\frac{\omega}{4}+\frac{5\omega}{20}\big)\frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg)\\ &=\bigg(1+\frac{\omega}{2}\bigg)\frac{-\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_1}-\frac{1}{r_2}\bigg). \end{align*} \end{proof} \begin{lemma}\label{keylemma2} Assume that $\eta\geq\frac{8\epsilon}{\omega}$, {\color{black}the initial data along $\underline{C}$ is not super-charged,} and $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces. Then \begin{align*} \frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}(u)\leq e^{-(1-\frac{\omega}{2})\eta(u)} \end{align*} for all $u\in[u_0,u_*]$. 
\end{lemma} \begin{proof} Dividing both sides of equation $\eqref{EMS4}$ by $\Omega^{-2}\partial_vr$ and integrating from $v_1$ to $v_2$, we get \begin{align*} \ln|\Omega_2^{-2}\partial_vr_2|-\ln|\Omega_1^{-2}\partial_vr_1| = \ln\bigg(\frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}\bigg)=-4\pi\int_{v_1}^{v_2}\frac{r|\partial_v\phi|^2}{\partial_vr}dv. \end{align*} By equation $\eqref{massv}$ and the definition of the Hawking mass, {\color{black}we have} \begin{align*} \frac{1}{r-2m}\big(\partial_vm-\frac{Q^2\partial_vr}{2r^2}\big)=\frac{-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2}{-4r\Omega^{-2}\partial_ur\partial_vr}=\frac{2\pi r|\partial_v\phi|^2}{\partial_vr}. \end{align*} Hence for any $u\in[u_0,u_*]$, \begin{align*} &\ln\bigg(\frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}\bigg)=-2\int_{v_1}^{v_2}\frac{1}{r-2m}\big(\partial_vm-\frac{Q^2\partial_vr}{2r^2}\big)dv\\ &\leq-2\int_{v_1}^{v_2}\frac{1}{r}\big(\partial_vm-\frac{Q^2\partial_vr}{2r^2}\big)dv\leq\frac{-2}{r_2}\int_{v_1}^{v_2}\big(\partial_vm-\frac{Q^2\partial_vr}{2r^2}\big)dv\\ &=-\frac{2(m_2-m_1)}{r_2}+\frac{2}{r_2}\int_{r_1}^{r_2}\frac{Q^2}{2r^2}dr=-\eta+\frac{2}{r_2}\int_{r_1}^{r_2}\frac{Q^2}{2r^2}dr. \end{align*} By Lemma $\ref{bound for charge}$, we have \begin{align*} \frac{Q(u,v)^2}{r(u,v)^2}\leq\frac{\omega}{2}\frac{2(m(u,v)-m(u,v_1))}{r(u,v)}. \end{align*} Hence, \begin{align*} \frac{2}{r_2}\int_{r_1}^{r_2}\frac{Q^2}{2r^2}dr&\leq\frac{\omega}{2r_2}\int_{r_1}^{r_2}\frac{2(m(u,v)-m(u,v_1))}{r(u,v)}dr\\ &\leq{\color{black}\omega\cdot}\frac{m_2-m_1}{r_2}\ln\big(\frac{r_2}{r_1}\big)\leq\frac{\omega}{2}\ln\big(\frac{3}{2}\big)\eta\leq\frac{\omega}{2}\eta. \end{align*} Combining the above estimates, we get: \begin{align*} \ln\bigg(\frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}\bigg)\leq-\bigg(1-\frac{\omega}{2}\bigg)\eta. \end{align*} Exponentiating both sides of the above inequality gives us the desired result.
\end{proof} \subsection{The proof of Theorem $\ref{thm1.3}$} {\color{black} We are finally ready to prove a lower bound on $\frac{d\eta}{du}$. The charged case poses some difficulties not present in the uncharged case: in order to apply Lemmas \ref{bound for charge}, \ref{keylemma1} and \ref{keylemma2}, we need to ensure that the assumption $\eta>\frac{8\epsilon}{\omega}$ always holds, so as to obtain a lower bound on $\frac{d\eta}{du}$. On the other hand, we need exactly this lower bound on $\frac{d\eta}{du}$ to propagate the assumption $\eta>\frac{8\epsilon}{\omega}$. Hence we once again resort to a bootstrap argument. }\\ \begin{figure} \centering \begin{tikzpicture} \draw[->] (0,0) -- (6,0) node[right] {$x$}; \draw[->] (0,0) -- (0,6) node[above] {$\eta$}; \draw[scale=1,domain=0.5:4,smooth,variable=\x,blue] plot ({\x},{(\x-3)*(\x-3)+1.3}); \draw [dashed] (4,0)node[anchor = north]{$x=1$} -- (4,2.2); \draw [dashed] (0,2.2)node[anchor = east]{$\eta = \eta_0$} -- (4,2.2); \draw [dashed] (0,1.3)node[anchor = east]{$\eta = \frac{12\epsilon}{\omega}$} -- (4,1.3); \draw [dashed] (1,0)node[anchor = north]{$x=x_*$} -- (1,5.35); \draw [dashed] (0,5.35)node[anchor = east]{$\eta \geq 1$} -- (1,5.35); \end{tikzpicture} \caption{Idea of proof of Lemma \ref{theoremlemma} and Theorem \ref{thm1.3}} \label{fig:graphcomplex} \end{figure} We will in fact prove something a little stronger: we show that $\eta(u)\geq \frac{12\epsilon}{\omega}$ for $u\in[u_0,u_*]$.
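As an illustration (and only a sketch, not part of the original argument), we record the elementary Gr\"onwall-type mechanism that converts a differential inequality of the kind derived below into a pointwise lower bound; here $f$ and $g$ stand for generic continuous functions on $(0,1]$.

```latex
% Illustrative model computation (a sketch with generic f and g; not part
% of the proof itself). Multiplying the differential inequality
%   d\eta/dx + \eta g(x)/x \le f(x)/x
% by the integrating factor e^{-\int_x^1 g(s)/s\,ds} turns the left-hand
% side into a total derivative:
\[
  \frac{d}{dx}\Big(e^{-\int_x^1\frac{g(s)}{s}\,ds}\,\eta(x)\Big)
    \;\le\; e^{-\int_x^1\frac{g(s)}{s}\,ds}\,\frac{f(x)}{x}.
\]
% Integrating over [x,1] and rearranging then gives the lower bound
\[
  \eta(x)\;\ge\; e^{\int_x^1\frac{g(s)}{s}\,ds}
    \bigg(\eta(1)-\int_x^1 e^{-\int_t^1\frac{g(s)}{s}\,ds}\,\frac{f(t)}{t}\,dt\bigg).
\]
```

With the specific $f$ and $g$ appearing in the proof of Lemma \ref{theoremlemma}, this is precisely the passage from \eqref{diffineq} to \eqref{maininequality}; the extra difficulty in the charged case is to verify that the hypothesis $\eta\geq\frac{8\epsilon}{\omega}$, under which the differential inequality is derived, persists, and this is the role of the bootstrap.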
{\color{black}We use a bootstrap argument to prove this in Lemma $\ref{theoremlemma}$}: assuming that the differential inequality holds for all $u\in [u_0,u']$, where $u'<u_*$, we show that $\eta(u)\geq\frac{12\epsilon}{\omega}$ holds on a slightly larger region as well.\\ \begin{lemma}\label{theoremlemma} Assume that the region $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces and {\color{black}the initial data along $\underline{C}$ is not super-charged.} Then if $\eta_0\geq\frac{13\epsilon}{\omega}+g_\omega(\delta_0)$, we have $\eta(x)\geq\frac{12\epsilon}{\omega}$ for all {\color{black}$x(u)\in \big[\frac{3\delta_0}{1+\delta_0},1\big]$}. Here, $g_\omega(x)$ is defined as: \begin{align}\label{g omega x} g_\omega(x):=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{1}{(1+x)^2}\bigg(\bigg(\frac{2^{1-\frac{\omega}{2}}}{\omega}+\frac{1}{2^{1+\frac{\omega}{2}}(1+\frac{\omega}{2})}\bigg)x^{1-\frac{\omega}{2}}-\frac{2}{\omega}x-\frac{1}{1+\frac{\omega}{2}}x^2\bigg). \end{align} \end{lemma} \begin{proof} We define $u' := \sup\{u\in [u_0,u_*]\big|\eta(s)\geq\frac{12\epsilon}{\omega}\text{ for all } s\in [u_0,{\color{black}u}]\}$. We are going to show that $u' = u_*$, where $x(u_*)=\frac{3\delta_0}{1+\delta_0}$. {\color{black}A sketch of this is provided in Figure \ref{fig:graphcomplex}.} We calculate $\frac{d\eta}{dx}$.
The following computations hold for all $u\in [u_0,u_*]$: \begin{align}\label{inequalities} &\frac{d\eta}{dx}= \frac{d\eta}{du}\bigg/\frac{dx}{du} = \frac{r_2(u_0)}{\partial_ur_2}\bigg(-\frac{2\partial_ur_2}{r_2^2}(m_2-m_1)+\frac{2}{r_2}\partial_u(m_2-m_1)\bigg)\nonumber\\ =&-\frac{\eta}{x}+\frac{2}{x\partial_ur_2}(-8\pi r_2^2\Omega_2^{-2}\partial_vr_2|D_u\phi_2|^2+8\pi r_1^2\Omega_1^{-2}\partial_vr_1|D_u\phi_1|^2+\frac{Q_2^2\partial_ur_2}{2r_2^2}-\frac{Q_1^2\partial_ur_1}{2r_1^2})\nonumber\\ {\color{black}\leq}&-\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\bigg(r_2^2|D_u\phi_2|^2-\frac{\Omega_1^{-2}\partial_vr_1}{\Omega_2^{-2}\partial_vr_2}r_1^2|D_u\phi_1|^2\bigg)+\frac{Q_2^2}{xr_2^2},\nonumber\\ &{\color{black}\hspace{0.5cm}\text{where we used that } \frac{Q_1^2\partial_ur_1}{xr_1^2\partial_ur_2}\geq 0.} \end{align} Now we focus our attention on the region $[u_0,u']$. Since $\eta\geq \frac{12\epsilon}{\omega}\geq\frac{8\epsilon}{\omega}$ for $x\in[x',1]$, where $x':=x(u')$, we can use Lemma \ref{keylemma2} to bound the factor in the second term: \begin{align*} r_2^2|D_u\phi_2|^2-\frac{\Omega_1^{-2}\partial_vr_1}{\Omega_2^{-2}\partial_vr_2}r_1^2|D_u\phi_1|^2&\leq r_2^2|D_u\phi_2|^2-e^{\eta(1-\frac{\omega}{2})} r_1^2|D_u\phi_1|^2\\ &=\Theta^2+2\Theta|D_u\phi_1|r_1+(1-e^{\eta(1-\frac{\omega}{2})})r_1^2|D_u\phi_1|^2. \end{align*} The last expression is a concave quadratic in $r_1|D_u\phi_1|$; maximizing over this quantity bounds it by a multiple of $\Theta^2$: \begin{align}\label{thetaquadraticbound} \Theta^2+2\Theta|D_u\phi_1|r_1+(1-e^{\eta(1-\frac{\omega}{2})})r_1^2|D_u\phi_1|^2&\leq\bigg(1+\frac{1}{e^{\eta(1-\frac{\omega}{2})}-1}\bigg)\Theta^2\nonumber\\ &\leq\bigg(1+\frac{1}{\eta(1-\frac{\omega}{2})}\bigg)\Theta^2, \end{align} where we used the fact that $e^y-1\geq y$ for $y=\eta(1-\frac{\omega}{2})\geq 0$ in the second inequality.
Then $\eqref{thetaquadraticbound}$ {\color{black}combined with $\eqref{inequalities}$ gives}: \begin{align*} \frac{d\eta}{dx}\leq-\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\bigg(1+\frac{1}{\eta(1-\frac{\omega}{2})}\bigg)\Theta^2+\frac{Q_2^2}{xr_2^2}. \end{align*} Applying Lemma \ref{keylemma1}, we have \begin{align}\label{inequalities2} \frac{d\eta}{dx}&\leq-\frac{\eta}{x}+\frac{\eta}{x}\bigg(1+\frac{\omega}{2}\bigg)\bigg(1+\frac{1}{\eta(1-\frac{\omega}{2})}\bigg)\bigg(\frac{r_2}{r_1}-1\bigg)+\frac{Q_2^2}{xr_2^2}. \end{align} Using Proposition \ref{mixed derivatives of r}, we get \begin{align*} \delta(u) = \frac{r_2(u)-r_1(u)}{r_2(u)-(r_2(u)-r_1(u))}\leq\frac{r_2(u_0)-r_1(u_0)}{r_2(u)-(r_2(u_0)-r_1(u_0))}=\frac{\delta_0}{x(u)(1+\delta_0)-\delta_0}. \end{align*} Combining with $\eqref{inequalities2}$, and using Lemma \ref{bound for charge} to bound the term involving $Q$, we obtain \begin{align*} \frac{d\eta}{dx}&\leq\eta\bigg(\big(1+\frac{\omega}{2}\big)\frac{\delta_0}{x^{{\color{black}2}}(1+\delta_0)-{\color{black}x}\delta_0}-\frac{1}{x}\bigg)+\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}{\color{black}\frac{1}{x}}\frac{\delta_0}{x(1+\delta_0)-\delta_0}+\frac{Q_2^2}{xr_2^2}\\ &\leq\eta\bigg(\big(1+\frac{\omega}{2}\big)\frac{\delta_0}{x^{{\color{black}2}}(1+\delta_0)-{\color{black}x}\delta_0}-\frac{1}{x}\bigg)+\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}{\color{black}\frac{1}{x}}\frac{\delta_0}{x(1+\delta_0)-\delta_0}+\frac{\eta}{x}\frac{\omega}{2}\\ &=-\frac{\eta}{x}\bigg(1-\frac{\omega}{2}-\big(1+\frac{\omega}{2}\big)\frac{\delta_0}{x(1+\delta_0)-\delta_0}\bigg)+\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{1}{x}\frac{\delta_0}{x(1+\delta_0)-\delta_0}.
\end{align*} \\ Defining $g(x):=1-\frac{\omega}{2}-\big(1+\frac{\omega}{2}\big)\frac{\delta_0}{x(1+\delta_0)-\delta_0}$ and $f(x):=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{x(1+\delta_0)-\delta_0}$, we obtain the following differential inequality, which holds for all $x\in [x',1]$: \begin{align*} \frac{d\eta}{dx}+\eta\frac{g(x)}{x}-\frac{f(x)}{x}\leq 0. \end{align*} To solve this differential inequality, we multiply by the integrating factor $e^{-\int_x^1\frac{g(s)}{s}ds}$ and integrate over $[x,1]$ {\color{black}to get}: \begin{gather*} \frac{d}{dx}\bigg(e^{-\int_x^1\frac{g(s)}{s}ds}\eta(x)\bigg)-e^{-\int_x^1\frac{g(s)}{s}ds}\frac{f(x)}{x}\leq 0\\ \implies\bigg[e^{-\int_{t}^1\frac{g(s)}{s}ds}\eta(t)\bigg]_{t=x}^{t=1}\leq \int_x^1e^{-\int_{t}^1\frac{g(s)}{s}ds}\frac{f(t)}{t}dt. \end{gather*} Denote $G(x):= \int_x^1\frac{g(s)}{s}ds$ and $F(x):=\int_x^1e^{-G(s)}\frac{f(s)}{s}ds$. Since $\eta_0 = \eta(x)|_{x=1}$ by definition and $F(1) = G(1) = 0$, in this notation we conclude that, in the {\color{black}interval $[x',1]$}, the following inequality holds: \begin{align}\label{maininequality} \eta_0-e^{-G(x)}\eta(x)\leq F(x). \end{align} Now {\color{black}we} compute explicit expressions for $G(x)$ and $F(x)$: \begin{align*} G(x) &= \int_x^1\frac{1-\frac{\omega}{2}}{s}-\frac{1+\frac{\omega}{2}}{s}\frac{\delta_0}{s(1+\delta_0)-\delta_0}ds\\ &=\int_x^1\frac{1-\frac{\omega}{2}}{s}+\frac{1+\frac{\omega}{2}}{s}-\frac{(1+\frac{\omega}{2})(1+\delta_0)}{s(1+\delta_0)-\delta_0}ds\\ &=\ln\bigg(\frac{s^2}{\big(s(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}\bigg)\bigg|^1_x = \ln\bigg(\frac{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}{x^2}\bigg).
\end{align*} \begin{align*} &\,\,F(x)= \int_x^1\frac{s^2}{\big(s(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}\frac{f}{s}ds = \int_x^1\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0s}{(s(1+\delta_0)-\delta_0)^{2+\frac{\omega}{2}}}ds\\ &=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{1+\delta_0}\int_x^1\frac{s(1+\delta_0)-\delta_0}{(s(1+\delta_0)-\delta_0)^{2+\frac{\omega}{2}}}+\frac{\delta_0}{(s(1+\delta_0)-\delta_0)^{2+\frac{\omega}{2}}}ds\\ &=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{1+\delta_0}\int_x^1\frac{1}{(s(1+\delta_0)-\delta_0)^{1+\frac{\omega}{2}}}+\frac{\delta_0}{(s(1+\delta_0)-\delta_0)^{2+\frac{\omega}{2}}}ds\\ &=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{(1+\delta_0)^2}\bigg[-\frac{2}{\omega}\frac{1}{\big(s(1+\delta_0)-\delta_0\big)^\frac{\omega}{2}}-\frac{1}{1+\frac{\omega}{2}}\frac{\delta_0}{\big(s(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}\bigg]\bigg|_x^1\\ &=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{(1+\delta_0)^2}\bigg(\frac{2}{\omega}\frac{1}{\big(x(1+\delta_0)-\delta_0\big)^\frac{\omega}{2}}-\frac{2}{\omega}+\frac{1}{1+\frac{\omega}{2}}\frac{\delta_0}{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}-\frac{\delta_0}{1+\frac{\omega}{2}}\bigg). \end{align*} Observe that $F(x)$ is monotonically decreasing and hence obtains its maximum at $x=\frac{3\delta_0}{1+\delta_0}$ on the interval $[\frac{3\delta_0}{1+\delta_0},1]$. 
Therefore, \begin{align*} F(x)\leq F\bigg(\frac{3\delta_0}{1+\delta_0}\bigg)&=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{\delta_0}{(1+\delta_0)^2}\bigg(\frac{2^{1-\frac{\omega}{2}}}{\omega}\delta_0^{-\frac{\omega}{2}}-\frac{2}{\omega}+\frac{1}{2^{1+\frac{\omega}{2}}(1+\frac{\omega}{2})}\delta_0^{-\frac{\omega}{2}}-\frac{\delta_0}{1+\frac{\omega}{2}}\bigg)\\ &=\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{1}{(1+\delta_0)^2}\bigg(\bigg(\frac{2^{1-\frac{\omega}{2}}}{\omega}+\frac{1}{2^{1+\frac{\omega}{2}}(1+\frac{\omega}{2})}\bigg)\delta_0^{1-\frac{\omega}{2}}-\frac{2}{\omega}\delta_0-\frac{1}{1+\frac{\omega}{2}}\delta_0^2\bigg)\\ &=g_\omega(\delta_0). \end{align*} Substituting the expressions for $F(x)$ and $G(x)$ into \eqref{maininequality}, for all $x\in [x',1]$, we get {\color{black}\begin{align}\label{gronwall1} &\eta(x)\geq e^{G(x)}\big(\eta_0-F(x)\big)\nonumber\\ \geq&\frac{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}{x^2}\nonumber\\ &\quad\cdot\bigg(\eta_0-\frac{1+\frac{\omega}{2}}{1-\frac{\omega}{2}}\frac{1}{(1+\delta_0)^2}\bigg(\bigg(\frac{2^{1-\frac{\omega}{2}}}{\omega}+\frac{1}{2^{1+\frac{\omega}{2}}(1+\frac{\omega}{2})}\bigg)\delta_0^{1-\frac{\omega}{2}}-\frac{2}{\omega}\delta_0-\frac{1}{1+\frac{\omega}{2}}\delta_0^2\bigg)\bigg)\nonumber\\ =&\frac{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}{x^2}\cdot \big(\eta_0-g_{\omega}(\delta_0)\big). \end{align}} For the last identity, we use the definition of $g_{\omega}(x)$ in \eqref{g omega x}. {\color{black}Since $\omega<\frac{2}{3}$ implies that $\frac{x^2}{(x(1+\delta_0)-\delta_0)^{1+\frac{\omega}{2}}}$ is monotonically increasing}, we get: \begin{align*} \sup_{x\in [x_*,1]}\frac{x^2}{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}} = 1. 
\end{align*} Combining this with the hypothesis that \begin{align*} \eta_0>\frac{13\epsilon}{\omega}+g_\omega(\delta_0), \end{align*} we {\color{black}obtain} the inequality: \begin{align*} \eta_0 &>\frac{13\epsilon}{\omega}+g_\omega(\delta_0)\geq\frac{13\epsilon}{\omega}\frac{x^2}{\big(x(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}+g_\omega(\delta_0) \end{align*} for $x\in[x',1]$. Substituting the above into \eqref{gronwall1} gives us: \begin{align*} \eta(x)\geq\frac{13\epsilon}{\omega},\hspace{0.5cm}\text{for all } x\in[x',1]. \end{align*} However, by the continuity of $\eta(x)$, we can find $x''<x'$ such that $\eta(x)\geq \frac{12\epsilon}{\omega}$ for all $x\in [x'',1]$, i.e. $\eta(u)\geq\frac{12\epsilon}{\omega}$ for all $u\in [u_0,u'']$, contradicting the supremum property of $u'$. \end{proof} We are now ready to prove the main theorem of this paper. \begin{proof}{\textit{(Theorem \ref{thm1.3})}} We prove the theorem by contradiction. Suppose that $\mathcal{R}$ contains no trapped surfaces {\color{black}or MOTS}, in particular $\partial_vr_2>0$ for $u\in [u_0,u_*]$. Then Lemma \ref{keylemma1} applies and $\eqref{maininequality}$ holds for $x\in[x_*,1]$. Rearranging $\eqref{maininequality}$ gives us \begin{align*} \eta_0\leq e^{-G(x)}\eta(x)+F(x)< e^{-G(x)}+F(x), \text{ for all } x\in[x_*,1] \end{align*} where we used the assumption that $\eta(x) = \frac{2(m_2-m_1)}{r_2}\leq\frac{2m_2}{r_2}<1$ in the second inequality. In particular, by letting $x = x_*=\frac{3\delta_0}{1+\delta_0}$, we get: \begin{align*} e^{-G(x_*)}+F(x_*)&\leq \frac{x_*^2}{\big(x_*(1+\delta_0)-\delta_0\big)^{1+\frac{\omega}{2}}}+g_\omega(\delta_0)\\ &=\frac{9}{2^{1+\frac{\omega}{2}}(1+\delta_0)^2}\delta_0^{1-\frac{\omega}{2}}+g_\omega(\delta_0), \end{align*} and hence $\eta_0<\frac{9}{2^{1+\frac{\omega}{2}}(1+\delta_0)^2}\delta_0^{1-\frac{\omega}{2}}+g_\omega(\delta_0)$, giving us the desired contradiction. 
\end{proof} \section{Appendix} \subsection{Trapped Surface Formation for the Einstein Scalar Field} Here we provide a proof of Christodoulou's sharp trapped surface formation criterion as in \cite{Chr.1}. In the case of the real scalar field, the system of equations \eqref{EMS1} to \eqref{EMS9} is reduced to \begin{gather} r\partial_v\partial_ur+\partial_vr\partial_ur=-\frac{\Omega^2}{4}\label{UEMS1},\\ \partial_u(\Omega^{-2}\partial_ur) = -4\pi r\Omega^{-2}|\partial_u\phi|^2\label{UEMS2},\\ \partial_v(\Omega^{-2}\partial_vr) = -4\pi r\Omega^{-2}|\partial_v\phi|^2\label{UEMS3},\\ r\partial_u\partial_v\phi+\partial_ur\partial_v\phi+\partial_vr\partial_u\phi=0\label{UEMS4}. \end{gather} Also, the derivatives of the Hawking mass become: \begin{align} \partial_um =-8\pi r^2\Omega^{-2}\partial_vr|\partial_u\phi|^2\label{massureal},\\ \partial_vm = -8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2\label{massvreal}. \end{align} For convenience, we restate Theorem \ref{thm1.1} here. \begin{customthm}{1.1} Define the function \begin{align*} E(x):=\frac{x}{(1+x)^2}\bigg[\ln\bigg(\frac{1}{2x}\bigg)+5-x\bigg]. \end{align*} Consider the system (\ref{ES}) with characteristic initial data along $u=u_0$ and $v=v_1$. For initial mass input $\eta_0$ along $u=u_0$, if the following lower bound holds: \begin{align*} \eta_0>E(\delta_0), \end{align*} then a trapped surface {\color{black}$S_{u,v}$, with properties $\partial_v r(u,v)<0$} and $\partial_u r(u,v)< 0$, {\color{black}forms} in the region $[u_0,u_*]\times[v_1,v_2]\subset\mathcal{R}$. \end{customthm} In this section, we will first give a few technical estimates on the dynamical quantities in the strip $[u_0,0]\times[v_1,v_2]$. These will be used in the proof of Theorem \ref{thm1.1}. We start off by showing that $\partial_ur$ is negative and bounded away from 0.
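Before turning to the estimates, we note that the threshold function $E$ admits a quick sanity check: at $x=\frac12$ the logarithm vanishes, so $E(\tfrac12)=\frac{1/2}{(3/2)^2}\cdot\frac92=1$ exactly, while for small $\delta_0$ the threshold behaves like $\delta_0\ln(1/\delta_0)$. A minimal Python sketch (the test values are our own illustrative choices):

```python
import math

def E(x):
    # Threshold function of Theorem 1.1: E(x) = x/(1+x)^2 * [ln(1/(2x)) + 5 - x].
    return x / (1 + x) ** 2 * (math.log(1 / (2 * x)) + 5 - x)

# At x = 1/2 the logarithm vanishes and E(1/2) = (1/2)/(9/4) * (9/2) = 1:
assert abs(E(0.5) - 1.0) < 1e-12
# For small delta0 the threshold is small, of order delta0 * ln(1/delta0):
assert 0 < E(0.01) < 0.1
```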
\begin{lemma}\label{negativeincoming} $\partial_ur\leq -\frac{1}{2}\Omega^2$ everywhere in {\color{black}$\mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$}. \begin{proof} Rewrite $\eqref{UEMS1}$ as \begin{align*} \partial_v\big(r\partial_ur\big)=-\frac{\Omega^2}{4}. \end{align*} \noindent {\color{black}Note that $\Omega^2=1$ along ${\color{black}C}$.} Integrating both sides from $0$ to $v$ and dividing by $r$: \begin{align} -\frac{v}{4r(u_0,v)} = \partial_ur(u_0,v)\label{partialur}. \end{align} Taking the limit $v \to 0$ in the above gives us \begin{align}\label{boundoutgoing} -\frac{1}{4\partial_vr(u_0,0)}=\partial_ur(u_0,0) = -\partial_vr(u_0,0)\implies\partial_vr(u_0,0)=\frac{1}{2}. \end{align} \noindent Since $\Omega^2 = 1$ on $C$ as well, $\eqref{UEMS3}$ gives us that $\partial_v\partial_vr\leq 0$, i.e. $r$ is concave with respect to $v$. Combining this with the fact that $r(u_0,0) = 0$, we have: \begin{align*} \frac{r}{v}(u_0,v)\leq\partial_vr(u_0,0). \end{align*} \noindent Hence, \begin{align*} \frac{r}{v}(u_0,v)\leq\frac{1}{2}. \end{align*} \noindent Substituting this into \eqref{partialur}, we get \begin{align*} \partial_ur(u_0,v)\leq -\frac{1}{2}. \end{align*} \noindent By $\eqref{UEMS2}$, $\Omega^{-2}\partial_ur$ is decreasing along incoming null geodesics. Hence for a general point in {\color{black}$\mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$}, we have $\Omega^{-2}\partial_ur\leq-\frac{1}{2}$. \end{proof} \end{lemma} \begin{remark} $m(u,v)\geq 0$ for all {\color{black}$(u,v)\in \mathcal{D}(0,v_1)\cup\big([u_0,0]\times[v_1,\infty)\big)$}. \end{remark} \begin{proof} Given any point $(u,v)\in \mathcal{R}$, we can extend the outgoing null geodesic backwards until it intersects $\Gamma$ at some coordinate $(u,v_c)$, so that $r(u,v_c) = 0$.
Using $\eqref{massvreal}$, we have \begin{align*} \partial_vm = -8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2, \end{align*} \noindent and since $\partial_ur\leq 0$ by Lemma $\ref{negativeincoming}$, we get that $\partial_vm\geq 0$. Combining with the fact that $m(u,v_c) = 0$, we obtain the desired result. \end{proof} Next, we show that the mixed derivative of $r$ is always negative. This places an upper bound on the growth of the ratio $\frac{r_2}{r_1}$. \begin{proposition}\label{mixed derivatives of r real} Assume that $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces. Then i) $\partial_u\partial_vr \leq 0$ in $\mathcal{R}$ and ii) $\delta(x):=\frac{r_2}{r_1}-1\leq\frac{1}{2}$ for $u\in[u_0,u_*]$. \end{proposition} \begin{proof} We rewrite \eqref{UEMS1} into the following equivalent form: \begin{align}\label{UEMS1mass} \partial_u\partial_vr=-\frac{\Omega^2}{{\color{black}2}}\frac{m}{r^2}. \end{align} Since $m\geq 0$, the right side of the above equation is {\color{black}non-positive}. This proves the first part of the proposition.\\ Integrating with respect to $u$, we get: \begin{align*} \partial_vr(u)-\partial_vr(u_0){\color{black}\leq 0}\implies\partial_vr(u)\leq\partial_vr(u_0). \end{align*} Integrating the above inequality with respect to $v$, \begin{align*} r_2(u)-r_1(u)\leq r_2(u_0)-r_1(u_0), \text{ for all } u\in [u_0,u_*]. \end{align*} Hence, we can use the above inequality to compute a bound for $\delta(u)$: \begin{equation}\label{delta inequality} \begin{split} \delta(u) = \frac{r_2}{r_1}-1&=\frac{r_2-r_1}{r_2-(r_2-r_1)}\leq\frac{r_2(u_0)-r_1(u_0)}{r_2(u)-(r_2(u_0)-r_1(u_0))}\\ &\leq\frac{\delta_0}{\frac{r_2(u)}{r_1(u_0)}-\delta_0}=\frac{\delta_0}{x(u)(1+\delta_0)-\delta_0},\hspace{0.5cm}\text{ for all }u\in[u_0,u_*], \end{split} \end{equation} where $x(u):=r_2(u)/r_2(u_0)$.
{\color{black}Recall that $r_2(u_*):=\frac{3\delta_0}{1+\delta_0}\cdot r_2(u_0)$.} Since $x(u)$ is monotonically decreasing, we have \begin{align*} x(u)\geq x(u_*) = \frac{3\delta_0}{1+\delta_0} \text{ for }u\in [u_0,u_*], \end{align*} and hence \begin{align*} \delta(u)\leq\frac{\delta_0}{2\delta_0} = \frac{1}{2}\text{ for all }u\in [u_0,u_*]. \end{align*} \end{proof} Next we prove two key lemmas. In the first one we bound the difference in $r\partial_u\phi$ between $v=v_1$ and $v=v_2$. Then in the second we bound the ratio of $\partial_vr$ between $v=v_1$ and $v=v_2$. \begin{lemma}\label{keylemma1real} Define {\color{black}$\Theta:=r_2\partial_u\phi_2-r_1\partial_u\phi_1$}. Suppose that $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces. Then \begin{align*} \Theta(u)^2\leq \frac{\partial_ur_2}{8\pi\Omega_2^{-2}\partial_vr_2}(m_2-m_1)\bigg(\frac{1}{r_2}-\frac{1}{r_1}\bigg)(u) \end{align*} for all $u\in [u_0,u_*]$. \end{lemma} \begin{proof} We can write the wave equation \eqref{UEMS4} as \begin{align*} \partial_v(r\partial_u\phi) = -\partial_ur\partial_v\phi. \end{align*} By integrating the above equation, we get {\color{black}\begin{align} \Theta^2 &= \big(r_2\partial_u\phi_2-r_1\partial_u\phi_1\big)^2\nonumber\\&=\bigg|\int_{v_1}^{v_2}-\partial_ur\partial_v\phi dv\bigg|^2\nonumber\leq\bigg(\int_{v_1}^{v_2}-\partial_ur|\partial_v\phi| dv\bigg)^2\nonumber\\ &\leq\frac{1}{8\pi}\int_{v_1}^{v_2}-8\pi r^2\partial_ur\Omega^{-2}|\partial_v\phi|^2dv\cdot\int_{v_1}^{v_2}-\frac{\partial_ur}{r^2\Omega^{-2}}dv\label{lemma1eqn1real} \end{align}} where we have applied H\"older's inequality for the last inequality. The first integral can be written in terms of the Hawking mass: \begin{align} \int_{v_1}^{v_2}-8\pi r^2\partial_ur\Omega^{-2}|\partial_v\phi|^2dv&=\int_{v_1}^{v_2}\partial_vm\text{ }dv\nonumber\\ &=m_2-m_1\label{lemma1eqn2real}.
\end{align} To bound the second integral, we apply Proposition $\ref{mixed derivatives of r real}$ to get $\partial_v\partial_ur\leq 0$, and hence $\partial_ur\geq\partial_ur_2$. Also, equation {\color{black}$\eqref{UEMS3}$} implies that $\Omega_2^{-2}\partial_vr_2\leq\Omega^{-2}\partial_vr$. Combining these two pieces of information, we have {\color{black} \begin{align} \int_{v_1}^{v_2}-\frac{\partial_ur}{r^2\Omega^{-2}}dv &=\int_{r_1}^{r_2}-\frac{\partial_ur}{r^2\Omega^{-2}\partial_vr}dr\nonumber\leq-\partial_ur_2\int_{r_1}^{r_2}\frac{1}{r^2\Omega^{-2}\partial_vr}dr\nonumber\\ &\leq\frac{-\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\int_{r_1}^{r_2}\frac{1}{r^2}dr=\frac{\partial_ur_2}{\Omega_2^{-2}\partial_vr_2}\big(\frac{1}{r_2}-\frac{1}{r_1}\big)\label{lemma1eqn3real} \end{align}} Substituting $\eqref{lemma1eqn2real}$ and $\eqref{lemma1eqn3real}$ back into $\eqref{lemma1eqn1real}$ gives us the desired result. \end{proof} \begin{lemma}\label{keylemma2real} Assume that $\mathcal{D}(0,v_1)\cup\mathcal{R}$ is free of trapped surfaces. Then \begin{align*} \frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}(u)\leq e^{-\eta(u)} \end{align*} for all $u\in[u_0,u_*]$. \end{lemma} \begin{proof} Dividing both sides of equation {\color{black}$\eqref{UEMS3}$} by $\Omega^{-2}\partial_vr$ and integrating from $v_1$ to $v_2$, we get \begin{align*} \ln|\Omega_2^{-2}\partial_vr_2|-\ln|\Omega_1^{-2}\partial_vr_1| = \ln\bigg(\frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}\bigg)=-4\pi\int_{v_1}^{v_2}\frac{r|\partial_v\phi|^2}{\partial_vr}dv. \end{align*} By equation $\eqref{massvreal}$ and the definition of the Hawking mass, {\color{black}we have} {\color{black} \begin{align*} \frac{\partial_vm}{r-2m}=\frac{-8\pi r^2\Omega^{-2}\partial_ur|\partial_v\phi|^2}{-4r\Omega^{-2}\partial_ur\partial_vr}=\frac{2\pi r|\partial_v\phi|^2}{\partial_vr}. 
\end{align*}} Hence for any $u\in[u_0,u_*]$ \begin{align*} \ln\bigg(\frac{\Omega_2^{-2}\partial_vr_2}{\Omega_1^{-2}\partial_vr_1}\bigg)&=-2\int_{v_1}^{v_2}\frac{\partial_vm}{r-2m}dv\leq-2\int_{v_1}^{v_2}\frac{1}{r}\partial_vm\text{ }dv\\ &\leq\frac{-2}{r_2}\int_{v_1}^{v_2}\partial_vm\text{ }dv=-\frac{2(m_2-m_1)}{r_2}=-\eta. \end{align*} Exponentiating both sides of the above inequality gives us the desired result. \end{proof} Now we are ready to prove Theorem \ref{thm1.1}. \begin{proof} (\textit{Theorem \ref{thm1.1}}) We consider the dimensionless length scale $x(u):=\frac{r_2(u)}{r_2(u_0)}$. Note that $x$ decreases as $u$ increases and $x(u_0) = 1$. {\color{black}We will show that $\frac{d\eta}{dx}$ is bounded from above, i.e. $\frac{d\eta}{du}$ is bounded from below, and hence obtain a lower bound for $\eta(u_*)$. If this lower bound is greater than $1$, this implies $S(u_*,v_2)$ is a trapped surface, for} \begin{align*} \eta(u_*)= \frac{2(m_2-m_1)}{r_2}(u_*)>1&\implies \frac{2m_2}{r_2}(u_*)>1\\ &\implies S(u_*,v_2) \text{ is a trapped surface.} \end{align*} See Figure \ref{fig:graphreal} for an illustration. \\ \begin{figure} \centering \begin{tikzpicture} \draw[->] (0,0) -- (6,0) node[right] {$x$}; \draw[->] (0,0) -- (0,6) node[above] {$\eta$}; \draw[scale=1,domain=0.5:4,smooth,variable=\x,blue] plot ({\x},{0.2*(\x-5)*(\x-5)+2}); \draw [dashed] (4,0)node[anchor = north]{$x=1$} -- (4,2.2); \draw [dashed] (0,2.2)node[anchor = east]{$\eta = \eta_0$} -- (4,2.2); \draw [dashed] (1,0)node[anchor = north]{$x=x_*$} -- (1,5.35); \draw [dashed] (0,5.35)node[anchor = east]{$\eta \geq 1$} -- (1,5.35); \end{tikzpicture} \caption{Idea of Proof of Theorem \ref{thm1.1}} \label{fig:graphreal} \end{figure} To be precise, we prove a Gronwall-like inequality under the assumption that there is no trapped surface formed before $u_*$. In particular, we assume that $\partial_vr_2(u)>0$ for all $u\in [u_0,u_*]$.
We show that this assumption will lead to a contradiction.\\ Assuming that $\partial_vr_2(u)>0$ for all $u\in [u_0,u_*]$, the following chain of {\color{black}identities} holds in the region $[u_0,u_*]\times [v_1,v_2]$: \begin{align} \frac{d\eta}{dx} &= \frac{d\eta}{du}\bigg/\frac{dx}{du} = \frac{r_2(u_0)}{\partial_ur_2}\bigg(-\frac{2\partial_ur_2}{r_2^2}(m_2-m_1)+\frac{2}{r_2}\partial_u(m_2-m_1)\bigg)\nonumber\\ &=-\frac{\eta}{x}+\frac{2}{x\partial_ur_2}(-8\pi r_2^2\Omega_2^{-2}\partial_vr_2|\partial_u\phi_2|^2+8\pi r_1^2\Omega_1^{-2}\partial_vr_1|\partial_u\phi_1|^2)\nonumber\\ &=-\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\bigg(r_2^2|\partial_u\phi_2|^2-\frac{\Omega_1^{-2}\partial_vr_1}{\Omega_2^{-2}\partial_vr_2}r_1^2|\partial_u\phi_1|^2\bigg)\label{inequalitiesreal}. \end{align} Using Lemma $\ref{keylemma2real}$, we can bound the factor in the second term: \begin{align*} r_2^2|\partial_u\phi_2|^2-\frac{\Omega_1^{-2}\partial_vr_1}{\Omega_2^{-2}\partial_vr_2}r_1^2|\partial_u\phi_1|^2&\leq r_2^2|\partial_u\phi_2|^2-e^\eta {\color{black}r_1^2}|\partial_u\phi_1|^2\\ &=\Theta^2+2\Theta\partial_u\phi_1r_1+(1-e^\eta)r_1^2|\partial_u\phi_1|^2. \end{align*} The last expression, being a quadratic in $\Theta$, can be bounded, after maximizing over $r_1\partial_u\phi_1$, by a multiple of $\Theta^2$: \begin{align*} \Theta^2+2\Theta\partial_u\phi_1r_1+(1-e^\eta)r_1^2|\partial_u\phi_1|^2&\leq\bigg(1+\frac{1}{e^\eta-1}\bigg)\Theta^2\leq\bigg(1+\frac{1}{\eta}\bigg)\Theta^2, \end{align*} since $\eta\geq 0$. This last inequality combines with $\eqref{inequalitiesreal}$ to give: \begin{align*} \frac{d\eta}{dx}\leq-\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\bigg(1+\frac{1}{\eta}\bigg)\Theta^2.
\end{align*} Applying Lemma $\ref{keylemma1real}$, we have \begin{align} \frac{d\eta}{dx}&\leq -\frac{\eta}{x}-\frac{2}{x}\bigg(1+\frac{1}{\eta}\bigg)\bigg(\frac{1}{r_2}-\frac{1}{r_1}\bigg)(m_2-m_1)\nonumber\\ &=-\frac{\eta}{x}+\frac{\eta}{x}\bigg(1+\frac{1}{\eta}\bigg)\bigg(\frac{r_2}{r_1}-1\bigg)\label{inequalities2real}. \end{align} Using {\color{black}(\ref{delta inequality})} we get \begin{align*} \delta = \frac{r_2-r_1}{r_2-(r_2-r_1)}\leq\frac{r_2(u_0)-r_1(u_0)}{r_2(u)-(r_2(u_0)-r_1(u_0))}=\frac{\delta_0}{x(1+\delta_0)-\delta_0}. \end{align*} Combining {\color{black}the above} with $\eqref{inequalities2real}$, we obtain \begin{align*} \frac{d\eta}{dx}\leq\eta\bigg(\frac{\delta_0}{x^{{\color{black}2}}(1+\delta_0)-{\color{black}x}\delta_0}-\frac{1}{x}\bigg)+\frac{\delta_0}{x^{{\color{black}2}}(1+\delta_0)-{\color{black}x}\delta_0}. \end{align*} Defining $g(x):=1-\frac{\delta_0}{x(1+\delta_0)-\delta_0}$ and $f(x):=\frac{\delta_0}{x(1+\delta_0)-\delta_0}$, we obtain the following differential inequality: \begin{align*} \frac{d\eta}{dx}+\eta\frac{g(x)}{x}-\frac{f(x)}{x}\leq 0. \end{align*} To solve this differential inequality, we multiply by an integrating factor {\color{black} then integrate with respect to $x$}: \begin{gather*} \frac{d}{dx}\bigg(e^{-\int_x^1\frac{g(s)}{s}ds}\eta(x)\bigg)-e^{-\int_x^1\frac{g(s)}{s}ds}\frac{f(x)}{x}\leq 0\\ \implies\bigg[e^{-\int_{{\color{black}x'}}^1\frac{g(s)}{s}ds}\eta(x')\bigg]_{x'=x}^{x'=1}\leq \int_x^1e^{-\int_{x'}^1\frac{g(s)}{s}ds}\frac{f}{x'}dx'. \end{gather*} We denote $G(x):= \int_x^1\frac{g(s)}{s}ds$ and $F(x):=\int_x^1e^{-G(x')}\frac{f}{x'}dx'$. In this notation, we get \begin{align*} \eta_0-e^{-G(x)}\eta(x)\leq F(x) \implies\eta(x)\geq e^{G(x)}\big(-F(x)+\eta_0\big). \end{align*} Hence, in the region $[u_0,u_*]\times [v_1,v_2]$ free of trapped surfaces, we conclude that the following inequality holds: \begin{align}\label{maininequalityreal} \eta(x)\geq e^{G(x)}(-F(x)+\eta_0). 
\end{align} {\color{black}Now, we} compute explicit expressions for $G(x)$ and $F(x)$: \begin{align*} G(x) &= \int_x^1\frac{1}{s}-\frac{1}{s}\frac{\delta_0}{s(1+\delta_0)-\delta_0}ds\\ &=\int_x^1\frac{1}{s}+\frac{1}{s}-\frac{1+\delta_0}{s(1+\delta_0)-\delta_0}ds\\ &=\ln\bigg(\frac{s^2}{s(1+\delta_0)-\delta_0}\bigg)\bigg|^1_x = \ln\bigg(\frac{x(1+\delta_0)-\delta_0}{x^2}\bigg). \end{align*} \begin{align*} F(x) &= \int_x^1\frac{s^2}{s(1+\delta_0)-\delta_0}\frac{f}{s}ds = \int_x^1\frac{\delta_0s}{(s(1+\delta_0)-\delta_0)^2}ds\\ &=\frac{\delta_0}{1+\delta_0}\int_x^1\frac{1}{s(1+\delta_0)-\delta_0}+\frac{\delta_0}{(s(1+\delta_0)-\delta_0)^2}ds\\ &=\frac{\delta_0}{(1+\delta_0)^2}\ln\bigg(s(1+\delta_0)-\delta_0\bigg)\bigg|_x^1 - \frac{\delta_0^{{\color{black}2}}}{(1+\delta_0)^2}\frac{1}{s(1+\delta_0)-\delta_0}\bigg|_x^1\\ &=\frac{\delta_0}{(1+\delta_0)^2}\bigg(\ln\big(\frac{1}{x(1+\delta_0)-\delta_0}\big)+\delta_0\big(\frac{1}{x(1+\delta_0)-\delta_0}-1\big)\bigg). \end{align*} Using the assumption that there is no trapped surface or MOTS, we have $\eta(x)=\frac{2(m_2-m_1)}{r_2}\leq\frac{2m_2}{r_2}<1$ for $x\in [\frac{3\delta_0}{1+\delta_0},1]$. Rearranging $\eqref{maininequalityreal}$ {\color{black}results in} \begin{align*} \eta_0\leq e^{-G(x)}\eta(x)+F(x)<e^{-G(x)}+F(x),\text{ for all }x\in \bigg[\frac{3\delta_0}{1+\delta_0},1\bigg]. \end{align*} In particular, we can substitute $x = \frac{3\delta_0}{1+\delta_0}$ into the above equation and get \begin{align*} \eta_0< E(\delta_0) = \frac{\delta_0}{(1+\delta_0)^2}\bigg[\ln\big(\frac{1}{2\delta_0}\big)+5-\delta_0\bigg]. \end{align*} This gives us the desired contradiction. \end{proof} \subsection{A Special Case of Minkowskian incoming characteristic initial data} Prescribing Minkowskian data along $v=v_1$, we can improve the lower bound required on $\eta_0$ in Theorem \ref{thm1.1}. \begin{customthm}{1.2} Assume that {\color{black}Minkowskian data are prescribed along $v=v_1$ and require $\phi(u,v_1)=0$}.
Suppose that the following lower bound on $\eta_0$ holds: \begin{align*} \eta_0>\frac{9}{2}\delta_0, \end{align*} then there exists a MOTS {\color{black}or a trapped surface} in $[u_0,u_*]\times[v_1,v_2]\subset\mathcal{R}$, i.e. $\partial_vr\leq 0$ at some point in $[u_0,u_*]\times[v_1,v_2]$. \end{customthm} \begin{proof}\textit{(Theorem \ref{thm1.2})} In this special case, we have $\phi_1\equiv 0$ and $m_1 \equiv 0$. Equation \eqref{inequalitiesreal} now reads: \begin{align*} \frac{d\eta}{dx} = -\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}r_2^2|\partial_u\phi_2|^2, \end{align*} and we also have \begin{align*} \Theta^2 = r_2^2|\partial_u\phi_2|^2. \end{align*} Combining the above equations, followed by applying Lemma \ref{keylemma1real}, we get: \begin{align*} \frac{d\eta}{dx} = -\frac{\eta}{x}-\frac{16\pi\partial_vr_2\Omega_2^{-2}}{x\partial_ur_2}\Theta^2&\leq-\frac{\eta}{x}-\frac{2}{x}\bigg(\frac{1}{r_2}-\frac{1}{r_1}\bigg)(m_2-m_1)\\ &=-\frac{\eta}{x}+\frac{\eta}{x}\bigg(\frac{r_2}{r_1}-1\bigg)\\ &\leq -\frac{\eta}{x}+\frac{\eta}{x}\frac{\delta_0}{x(1+\delta_0)-\delta_0}. \end{align*} Integrating the above inequality: \begin{align*} &\int_x^1\frac{1}{\eta}\frac{d\eta}{ds}ds\leq\int_x^1-\frac{1}{s}+\frac{1}{s}\frac{\delta_0}{s(1+\delta_0)-\delta_0}ds=\int_x^1-\frac{2}{s}+\frac{1+\delta_0}{s(1+\delta_0)-\delta_0}ds\\ &\implies \ln\bigg(\frac{\eta_0}{\eta(x)}\bigg)\leq\ln\bigg(\frac{x^2}{x(1+\delta_0)-\delta_0}\bigg)\\ &\implies \eta_0\leq\eta(x)\frac{x^2}{x(1+\delta_0)-\delta_0}. \end{align*} Under the assumption of no trapped surfaces or MOTS, we have $\eta(x)<1$ for all $x\in[\frac{3\delta_0}{1+\delta_0},1]$, hence \begin{align*} \eta_0\leq \frac{x^2}{x(1+\delta_0)-\delta_0}{\color{black}\text{, for all }x\in\bigg[\frac{3\delta_0}{1+\delta_0},1\bigg].} \end{align*} In particular, choosing $x=\frac{3\delta_0}{1+\delta_0}$ we have \begin{align*} \eta_0\leq \frac{9}{2}\delta_0. \end{align*} {\color{black}This gives us} the desired contradiction to the hypothesis.
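As a numerical illustration of this last step (not part of the proof; $\delta_0=0.05$ is an arbitrary illustrative value of our own), note that at $x_*=\frac{3\delta_0}{1+\delta_0}$ the denominator equals $2\delta_0$, so the right-hand side evaluates to $\frac{9\delta_0}{2(1+\delta_0)^2}$, which lies strictly below the stated threshold $\frac92\delta_0$:

```python
delta0 = 0.05  # illustrative value of our own; any 0 < delta0 < 1/2 works

def rhs(x):
    # Right-hand side x^2 / (x(1+delta0) - delta0) bounding eta_0 from above.
    return x ** 2 / (x * (1 + delta0) - delta0)

x_star = 3 * delta0 / (1 + delta0)
# At x = x_*, the denominator equals 2*delta0, giving (9/2)*delta0/(1+delta0)^2:
assert abs(rhs(x_star) - 4.5 * delta0 / (1 + delta0) ** 2) < 1e-9
# ...which is strictly below the stated threshold (9/2)*delta0:
assert rhs(x_star) < 4.5 * delta0
```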
\end{proof}
\section{Introduction} Mean-field theory is a widely used description of broken-symmetry ordered states of many-body systems. The functional integral formalism relates the quantum partition function $Z$ of a many-body system to the Euclidean action $S$, where the inverse temperature, $\beta={1}/{T}$, acts as imaginary Matsubara time \cite{agd,hertz}: \begin{align} &Z=\int{\cal{D}}\bar{\Psi}(\tau){\cal{D}}\Psi (\tau)\exp{(-S)},\label{Z}\; \\ &S=\int^{\beta}_{0}d\tau\sum_{\vec{r}}\left\{\bar{\Psi}(\partial_{\tau}-\mu)\Psi+H(\bar{\Psi},\Psi)\right\}, \label{S0} \end{align} \noindent Here $\bar{\Psi}, \Psi$ are Matsubara-time conjugated quantum field operators, and Planck's constant is the unit of action ($\hbar=1$). It is also known that degenerate states with classically broken symmetries could be connected by instantons, i.e. by Matsubara-time periodic solutions of the classical equations of motion extremizing the Euclidean action \cite{1, harrington}. One of us has proposed an example of such a solution for the quasi one-dimensional onsite repulsive-$U$ Hubbard model \cite{2,3,4}, introducing the notion of a {\it{quantum order parameter, QOP}}, that contains an "instantonic crystal", which breaks translational invariance along the Matsubara time axis. It was demonstrated \cite{3,4} that an instantonic crystal, e.g. of spin-density-wave type, possesses zero scattering cross-section for incident particles that couple to spin and thus forms a new kind of "hidden order" that could be relevant, e.g., for the pseudo-gap state of high-$T_c$ cuprates \cite{hidd}. It was suggested recently \cite{efetov} that, in the case of two competing spin- and charge-density-wave orders, an instantonic crystal, called there \cite{efetov} a "thermodynamic quantum time crystal", might form and realize the previously proposed "quantum time crystal" \cite{wilczek} as the ground state.
The purpose of the present paper is to prove analytically that, at least for two general classes of solutions, neither a single nor multiple (competing) charge-, spin- and superconducting "thermodynamic quantum time crystals" (instantonic crystals) can form a stable thermodynamic equilibrium state of an interacting Fermi system at any temperature, including absolute zero. To start a proof of the 'no-go' theorem we consider a simple two-band model with long-range interaction between electron-hole pairs in momentum space, which was recently considered in \cite{efetov} and discussed previously \cite{3,4} in relation to a quasi-1D model. Namely, we introduce a complex quantum order parameter, periodic in Matsubara time: \begin{align} \hat{H}_{M}=\begin{pmatrix} \epsilon_{q} +t_{q} & M(\tau) \\ M^*(\tau)& - \epsilon_{q} +t_{q} \end{pmatrix} \label{lduv} \end{align} \noindent The Hamiltonian $\hat{H}_{M}$ in Eq. (\ref{lduv}) acts on a two-component "spinor" of bare fermionic states assigned to each point in momentum space (Brillouin zone), $\vec{\psi}^T\equiv \{u_{+},u_{-}\}$. The two corresponding energy bands possess dispersions counted from the chemical potential $\mu$: $\epsilon_{\pm}=\pm\epsilon_{q} +t_{q}$. The amplitudes $u_{\pm}$ of the fermion wave function could be either electron and hole amplitudes in a two-band model \cite{efetov}, or amplitudes of "right" and "left" movers in a quasi-$1D$ model considered in \cite{3,4}. A simple example of an origin of a complex field $M(\tau)\equiv M_{1}(\tau)+iM_{2}(\tau)$ is provided e.g.
by decoupling the onsite repulsion term in the Hubbard-$U$ lattice model via the Hubbard-Stratonovich (HS) procedure, which leads to the spin- and charge-density fields, $M_1, M_2$ \cite{schulz}: \begin{align} &\exp{\left[-\int^{\beta}_{0}d\tau{U}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}\right]}=\frac{1}{\pi U}\int {\cal{D}}M_1{\cal{D}}M_2\nonumber\\ &\exp\left\{-\int^{\beta}_{0}d\tau \left[\frac{1}{U}(M_{1}^{2}+M_{2}^{2})+iM_2\hat{n}_{i}+M_1\hat{s}_{zi}\right]\right\}, \label{H} \end{align} \noindent where the onsite charge and spin density operators at site $i$ are defined as: \begin{align} \hat{n}_{i}=\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow},\;\hat{s}_{zi}=\hat{n}_{i\uparrow}-\hat{n}_{i\downarrow}. \label{def} \end{align} \noindent Hence, the partition function in Eqs. (\ref{Z}), (\ref{S0}), after HS decoupling in Eq. (\ref{H}) and changing from lattice coordinate to momentum representation, is expressed as follows (a single $U$ is generalized to a different $U_{\gamma}$ for the two fields $M_{\gamma=1,2}$): \begin{align} &Z=\displaystyle\int{\cal{D}}M_1{\cal{D}}M_2\prod_q{\cal{D}}\bar{\psi}_q(\tau){\cal{D}}\psi_q(\tau) \exp{\left\{-S\right\}}\label{euclid}\\ &S=\displaystyle\int_0^{\beta}d\tau \sum_q \left\{\bar{\psi}_q\left[\partial_{\tau}+\hat{H}_{M}\right]\psi_q+\sum_{\gamma=1}^{2}\dfrac{M_{\gamma}^2(\tau)}{4U_{\gamma}}\right\} \label{S} \end{align} \noindent In Eqs. (\ref{euclid}), (\ref{S}) path integration implements a trace over diagonal elements of the exponential, and hence it is performed over $\tau$-periodic Hubbard-Stratonovich fields: \begin{eqnarray} {M}_{\gamma}\left(\tau+{1}/{T}\right)={M}_{\gamma}(\tau). \label{SHM} \end{eqnarray} \noindent In the case of a classical phase transition the overwhelming contribution to the path integral $Z$ comes from the $\tau$-independent HS fields, i.e.
the minimum of the Euclidean action $S$ is achieved with some particular $\tau$-independent functions $M_{\gamma}(i)$, which constitute the well-known classical (mean-field) order parameters (COP). A condition for the minimum of $S$, from which the COP is found, is called the self-consistency mean-field equation, and was first introduced by P. Weiss \cite{weiss} for ferromagnetic domains. It was shown \cite{2,3,4} that besides the COP, there exist other minima of the Euclidean action $S$, described by Hubbard-Stratonovich fields, which were called {\it{quantum order parameters}} (QOP), being $\tau$-periodic functions with zero mean: \begin{eqnarray} \langle M(\tau)\rangle_{1/T}=0. \label{me} \end{eqnarray} \noindent The required integration over Grassmann fields $\bar{\psi}_q(\tau),\, \psi_q(\tau)$ in the partition function $Z$, (\ref{euclid}), leads to the functional determinant $Det[\partial_{\tau}+\hat{H}_{M}]=\prod_m \epsilon_m(M)$, where the eigenvalues $\epsilon_m(M)$ with the corresponding fermionic eigenvectors $\vec{\xi}(\tau)$ are defined as \cite{neveu,2}: \begin{eqnarray} (\partial_{\tau}+\hat{H}_{M})\vec{\xi}_m=\epsilon_m\vec{\xi}_m;\;\vec{\xi}_m(\tau+{1}/{T})=-\vec{\xi}_m(\tau) \label{fs} \end{eqnarray} \noindent The eigenvalues $\epsilon_m$ can be obtained using the following procedure. First, a spectrum $\{\alpha_q\}$ of the quasi-energies (Floquet indices) of the Matsubara time-dependent Hamiltonian Eq. (\ref{lduv}) is found \cite{neveu, 2}. The Floquet indices label 'Bloch' solutions of the corresponding Dirac-like equation with $\tau$-periodic potential $ M(\tau)$ in (\ref{lduv}): \begin{eqnarray} &(\partial_{\tau}+\hat{H}_{M})\vec{\psi}_q=0, \label{bloch} &\vec{{\psi}}_q(\tau+{1}/{T})=e^{-\alpha_q}\vec{\psi}_q(\tau).
\label{FQF} \end{eqnarray} \noindent Provided the 'Bloch' functions $\vec{\psi}_q$ and indices $\alpha_q$ are known, the eigenvalues follow: $\epsilon_{m,q}=\left[i(2m+1)\pi+\alpha_q\right]T$, ensuring antiperiodicity of the fermionic eigenfunctions in (\ref{fs}), which are constructed as: $\vec{\xi}_{m,q}=\exp\{i(2m+1)\pi T\tau+\alpha_qT\tau\}\vec{\psi}_q$. Calculating the product $\prod_{m,q}\epsilon_{m,q}$ in the partition function in Eq. (\ref{euclid}) one finds \cite{neveu,2}: \begin{eqnarray} Z=\int{\cal{D}}{M(\tau)}\exp{\{-S_{M}\}}\prod_q \cosh\left(\dfrac{\alpha_q}{2}\right) \label{zm} \end{eqnarray} \noindent where $S_{M}$ is the bare Gaussian action of the Hubbard-Stratonovich fields $M_{\gamma}(\tau)$ expressed by the last sum in Eq. (\ref{S}). \noindent Then, a QOP is a periodic function $M(\tau)$ that obeys Eqs. (\ref{SHM}), (\ref{me}) and extremizes the total action, being a saddle-point of the path integral (\ref{zm}) in the functional space of HS fields: \begin{align} \delta_{M(\tau)}{\left\{ S_M-\sum_q \ln\left\{\cosh\left(\dfrac{\alpha_q}{2}\right)\right\}\right\}}=0 \label{self} \end{align} \noindent The self-consistency equation is readily derived from (\ref{self}) using first-order perturbation theory \cite{neveu}: \begin{align} {T\partial_{M(\tau)}\alpha_q={\bar{\psi}}_q(\tau)\{\partial_{M(\tau)}\hat{H}_{M}\}{\psi}_q(\tau)} \label{DFQ} \end{align} \noindent where a normalization condition is assumed: \begin{align} \int_0^{\beta}d\tau{\bar{\psi}}_q(\tau){\psi}_q(\tau)=1 \label{nrmlz} \end{align} \noindent Substituting this condition and Eq. (\ref{DFQ}) into Eq. (\ref{self}) one obtains: \begin{align} \dfrac{M(\tau)}{U}= \sum_q \tanh\left(\dfrac{\alpha_q}{2}\right){\bar{\psi}}_q(\tau)\{\partial_{M(\tau)}\hat{H}_{M}\}{\psi}_q(\tau) \label{fins} \end{align} \noindent The rather nontrivial functional equation (\ref{fins}), where we have dropped the indices $\gamma=1,2$ of $M(\tau)$, has to be solved at each point of the Matsubara time interval $[0,1/T]$.
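For a concrete feel for the Floquet indices entering Eqs. (\ref{bloch})-(\ref{zm}), $\alpha_q$ can be obtained numerically as minus the logarithm of the eigenvalues of the monodromy matrix of Eq. (\ref{bloch}) over one period $1/T$. The Python sketch below is a toy check under our own assumptions (illustrative parameter values, real constant $M$, RK4 integration); for constant $M$ the exact answer $T\alpha_q=t_q\pm\sqrt{M^2+\epsilon_q^2}$ is available for comparison:

```python
import math

# Illustrative toy parameters (our own choices): dispersion eps_q, antinesting t_q,
# constant Hubbard-Stratonovich field M, and temperature T.
eps_q, t_q, M, T = 0.7, 0.3, 0.5, 0.1
H = [[eps_q + t_q, M], [M, -eps_q + t_q]]  # Hamiltonian matrix of Eq. (lduv), M real

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def shift(W, K, c):
    # Elementwise W + c*K for 2x2 matrices (RK4 helper).
    return [[W[i][j] + c * K[i][j] for j in range(2)] for i in range(2)]

def monodromy(n=2000):
    # RK4 integration of dW/dtau = -H W with W(0) = Id from tau = 0 to 1/T,
    # so that the columns of W are two independent solutions of Eq. (bloch).
    W = [[1.0, 0.0], [0.0, 1.0]]
    h = (1.0 / T) / n
    negH = [[-H[i][j] for j in range(2)] for i in range(2)]
    for _ in range(n):
        k1 = matmul(negH, W)
        k2 = matmul(negH, shift(W, k1, h / 2))
        k3 = matmul(negH, shift(W, k2, h / 2))
        k4 = matmul(negH, shift(W, k3, h))
        W = [[W[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
              for j in range(2)] for i in range(2)]
    return W

W = monodromy()
lam = math.sqrt(eps_q ** 2 + M ** 2)
trace = W[0][0] + W[1][1]
# The monodromy eigenvalues are e^{-alpha_{+/-}} with T*alpha = t_q +/- lam, so
# the trace must equal e^{-(t_q+lam)/T} + e^{-(t_q-lam)/T}:
exact = 2 * math.exp(-t_q / T) * math.cosh(lam / T)
assert abs(trace - exact) / exact < 1e-6
# Recover the larger Floquet multiplier and hence T*alpha = t_q - lam:
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
mu = (trace + math.sqrt(trace ** 2 - 4 * det)) / 2
assert abs(T * (-math.log(mu)) - (t_q - lam)) < 1e-4
```

For a genuinely $\tau$-dependent $M(\tau)$ the same monodromy construction applies, only the closed-form comparison is lost.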
\noindent Resuming consideration of the Dirac-type equation (\ref{bloch}) with the Hamiltonian matrix given by Eq. (\ref{lduv}), we gauge out the 'antinesting' part of the dispersion, $t_{q}$, by "rotation" in Matsubara time: \begin{eqnarray} \vec{\psi}\equiv \left(\begin{array}{c} u_{+} \\ u_{-} \end{array}\right)= e^{-\tau t_{q}}\vec{\phi}\equiv e^{-\tau t_{q}}\left(\begin{array}{c} g_{+}\\ g_{-} \end{array}\right) \label{ganest} \end{eqnarray} \noindent After that the famous "nesting" symmetry \cite{keldysh} $\epsilon_{+}= -\epsilon_{-}$ is restored in the resulting Dirac-type equation: \begin{align} \begin{pmatrix} \partial_{\tau}+\epsilon_{q} & M(\tau) \\ M^*(\tau)& \partial_{\tau}- \epsilon_{q} \end{pmatrix}\vec{\phi}_q(\tau)=0 \label{tnest} \end{align} \noindent Thus, the Floquet indices in the representation of the 'rotated' spinor $\vec{\phi}_q(\tau)$ become shifted: \begin{eqnarray} \alpha_{q}=T^{-1}t_{q}+\tilde{\alpha}_{q};\;\;\;\vec{\phi}_q(\tau+1/T)=e^{-\tilde{\alpha}_{q}}\vec{\phi}_q(\tau). \label{floqq} \end{eqnarray} \noindent We shall see below from the exact solution that the indices $\tilde{\alpha}_{q}$ occur in plus-minus pairs, obeying the symmetry relation: \begin{eqnarray} \tilde{\alpha}(-\epsilon_{q})=-\tilde{\alpha}(\epsilon_{q}) \label{conj} \end{eqnarray} \noindent Next, we assume an electron-hole symmetry of the bare spectrum $\epsilon_q$ and combine it with the symmetry relation (\ref{conj}). As a result, in the representation of $\tilde{\alpha}_{q}$, the Euclidean action from Eq.
(\ref{zm}) acquires the following form, using the identity $\cosh[(x+y)/2]\cosh[(x-y)/2]=(\cosh(x)+\cosh(y))/2$: \begin{eqnarray} &S^Q=-\frac{1}{2}\displaystyle\sum_q\ln\frac{1}{2}[\cosh(\tilde{\alpha}_{q})+\nonumber\\ &+\cosh(t_{q}/T)]+\displaystyle\sum_{\gamma=1}^{2}\dfrac{{P_\gamma}^2}{4\tilde{U}_{\gamma}}\label{smn} \end{eqnarray} \noindent where $\tilde{U}_{\gamma}$ stands for the parameters ${U}_{\gamma}$ in (\ref{S}), properly renormalized by the volume of the system, which is tacitly involved in the summation $\sum_q$ over the fermionic states in the Brillouin zone. Here we have also introduced notations for the 'mean square orders': \begin{eqnarray} {P_\gamma}^{2}= T\int\limits_0^{1/T} {M_{\gamma}^2(\tau) } d\tau \label{mmm} \end{eqnarray} \noindent First, we are going to prove the 'no-go' theorem for the action (\ref{smn}) for two general classes of HS fields: 1) $M_2(\tau)=$const, and 2) $M_1(\tau)+iM_2(\tau)\equiv M(\tau)e^{i\phi}$ with $\phi=$const and $M(\tau)$ a real function, see Fig.~\ref{nogo12}: \begin{figure}[h!!] \centerline{\includegraphics[width=1.\linewidth]{chords_12_07.pdf}} \caption{Schematic layout of QOP variation in Matsubara time. Chord (1), short dashes, corresponds to: $M_1(\tau+1/T)=M_1(\tau)$, $M_2(\tau)=$const. Chord (2), long dashes, corresponds to: $M_1(\tau)+iM_2(\tau)\equiv M(\tau)e^{i\phi}$, with $\phi=$const.} \label{nogo12} \end{figure} \noindent Namely, we are going to demonstrate that in both cases the Euclidean action $S$ in (\ref{smn}) achieves its minimum only under the conditions $M_1(\tau)=$const, $M_2(\tau)=$const, i.e. only the COP can be its thermodynamic equilibrium state at any temperature, including $T=0$. The case in \cite{efetov} corresponds to $\phi=\phi(\tau)\neq$const and, therefore, does not belong to the classes of HS fields that we consider in this work. \noindent Substituting the constant $M_\gamma(\tau)=P^{c}_\gamma$ into Eq.
(\ref{tnest}) and using the definitions (\ref{floqq}) we find the Floquet indices (i.e. the spectrum) of the Fermi system with the COP: $T\alpha_q=t_q\pm\sqrt{\sum_\gamma {P^{c2}_\gamma}+\epsilon_q^2}$. This leads to the Euclidean action: \begin{eqnarray} &&S^{C}=-\frac{1}{2}\sum_q\ln{\frac{1}{2}[\cosh\left(\frac{\sqrt{\sum_\gamma {P^{c2}_\gamma}+\epsilon_q^2}}{T}\right)}+ \nonumber\\ &&{ \cosh({t_{q}}/{T})]}+\displaystyle\sum_{\gamma=1}^{2}\dfrac{{P^{c2}_\gamma}}{4\tilde{U}_{\gamma}} \label{scop} \end{eqnarray} \noindent A direct comparison of expressions (\ref{smn}) and (\ref{scop}) indicates that for any COP and QOP states with equal 'mean square orders' $P^{c2}_\gamma$ and ${P_\gamma}^2$, the difference between their Euclidean actions (i.e. free energies) depends merely on the difference between their Floquet spectra $\tilde{\alpha}_{q}$ and $\sqrt{\sum_\gamma {P^{c2}_\gamma}+\epsilon_q^2}/T$. The key step of our proof is the demonstration that any Matsubara time-dependent order parameter (QOP) has a Floquet spectrum $\tilde{\alpha}_q < \sqrt{\sum_\gamma {P^{c2}_\gamma}+\epsilon_q^2}/T$ for all $q$. Hence, the corresponding QOP free energy is greater than that of the COP and, consequently, the QOP state is metastable. Consider case 1), corresponding to $M_2=$const and $M_1(\tau)$ periodic with period $1/T$ along the Matsubara time axis. After an extra unitary transformation: \begin{eqnarray} f_{\pm}=(2)^{-1/2}({g}_+\pm {g}_-) \label{efs} \end{eqnarray} \noindent the corresponding Floquet equation for the 'spinor' $\vec{\phi}(\tau)$ defined in Eq.
(\ref{ganest}) is readily obtained from (\ref{tnest}), allowing for the definition (\ref{mmm}): \begin{eqnarray} \label{ff} &&({\partial^2}_{\tau}-Q_{\pm}(\tau)-\epsilon_{q}^2-{P_2}^{2})f_{\pm}=0;\label{dbdgf}\;\\ &&Q_{\pm}(\tau)=M_1(\tau)^2\mp \partial_{\tau}M_1(\tau) \label{dbdg} \end{eqnarray} \noindent Rewriting equation (\ref{dbdgf}) in the following equivalent form and integrating over one period, $1/T$, along the Matsubara time axis, we obtain: \begin{eqnarray} T\int\limits_0^{1/T} {\frac{{\partial _\tau ^2 f_ \pm (\tau )}}{{f_ \pm (\tau )}}d\tau } = T\int\limits_0^{1/T} {\left[ {Q_ \pm (\tau ) + \epsilon _q^2+{P_2}^{2} } \right]} d\tau \label{fratioi} \end{eqnarray} \noindent The right-hand side of Eq. (\ref{fratioi}) can be simplified using the definitions of $Q(\tau)$ in Eq. (\ref{dbdg}) and ${P_\gamma}^{2}$ in Eq. (\ref{mmm}), together with the periodicity condition (\ref{SHM}). Simultaneously, the left-hand side of Eq. (\ref{fratioi}) can be directly expressed via $\tilde{\alpha}_{q}$ using Eq. (\ref{floqq}) in the form: $f_{\pm}(\tau)=e^{-\tilde{\alpha}_{q}\tau T}\theta_{\pm}(\tau)$ with $\theta_{\pm}(\tau+1/T)=\theta_{\pm}(\tau)$. Thus, after straightforward manipulations, Eq. (\ref{fratioi}) acquires the equivalent form: \begin{eqnarray} &\displaystyle T\int\limits_0^{1/T} {\frac{{\partial _\tau ^2 f_ \pm (\tau )}}{{f_ \pm (\tau )}}d\tau }\equiv(\tilde{\alpha}_{q} T)^2 - 2\tilde{\alpha}_{q}T^2 \int\limits_0^{1/T} {\frac{{\dot \theta_{\pm} (\tau )}}{{\theta_{\pm} (\tau )}}d\tau } \nonumber \\ & +\displaystyle T\int\limits_0^{1/T} {\frac{{\ddot \theta_{\pm} (\tau )}}{{\theta_{\pm} (\tau )}}d\tau } =\epsilon _q^2+\sum_\alpha {P_\alpha}^{2}.
\label{fratioi1} \end{eqnarray} \noindent Integrating by parts in the left-hand side of (\ref{fratioi1}) and allowing for the periodicity of the function $\theta_{\pm}(\tau )$ one finds: \begin{eqnarray} &(\tilde{\alpha}_{q} T)^2=-T\displaystyle\int\limits_0^{1/T} \frac{{\dot \theta_{\pm} ^2 (\tau )}}{{\theta_{\pm} ^2 (\tau )}}d\tau + \epsilon _q^2+\sum_\alpha {P_\alpha}^{2}\nonumber\\ &\leq \epsilon _q^2+\sum_\alpha {P_\alpha}^{2} \label{fratioi2} \end{eqnarray} \noindent Since both the Floquet indices and the Bloch functions in Eqs. (\ref{dbdgf}) are real (see below), the equality in Eq. (\ref{fratioi2}) can be achieved only in the COP case: \begin{eqnarray} T\int\limits_0^{1/T} \frac{{\dot \theta_{\pm} ^2 (\tau )}}{{\theta_{\pm} ^2 (\tau )}}d\tau=0;\;\theta_{\pm}(\tau )\equiv \text{const} \label{COP} \end{eqnarray} \noindent Hence, we have proven that the Euclidean action (free energy) of any QOP (thermodynamic quantum time crystal) state, $S^Q$, is always higher than that of the COP state, $S^C$, and therefore the QOP state is metastable. The proof for the second class of thermodynamic quantum time crystals, i.e. $M_1(\tau)+iM_2(\tau)\equiv M(\tau)e^{i\phi}$ with $\phi=$const and $M(\tau)$ a real function, is trivially reduced to case 1) considered above by the following transformation of the 'spinor' $\vec{\phi}_q(\tau)$ in Eq. (\ref{tnest}): \begin{align} \vec{\phi}_q(\tau)\equiv \begin{pmatrix} {g}_{+} \\ {g}_{-} \end{pmatrix}= \begin{pmatrix} e^{i\phi/2}\tilde{g}_{+} \\ e^{-i\phi/2}\tilde{g}_{-} \end{pmatrix} \label{gg} \end{align} \noindent Then, Eq. (\ref{tnest}) with the functions $M(\tau)e^{\pm i\phi}$ in place of $M(\tau)$, $M^*(\tau)$, transforms into the following equation: \begin{align} \begin{pmatrix} \partial_{\tau}+\epsilon_{q} &M(\tau) \\ M(\tau)& \partial_{\tau}- \epsilon_{q} \end{pmatrix}\vec{\tilde{\phi}}_q(\tau)=0 \label{tnest1} \end{align} \noindent with the real function $M(\tau)$. Here $(\vec{\tilde{\phi}})^T\equiv \{\tilde{g}_{+},\tilde{g}_{-}\}$.
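As an independent numerical illustration of the bound (\ref{fratioi2}) (not a part of the proof), one can integrate the Dirac-type equation (\ref{tnest}) with a real periodic test profile $M(\tau)$ over one period, extract $\tilde{\alpha}_{q}$ from the monodromy matrix, and check $(\tilde{\alpha}_{q})^2\leq (\epsilon_q^2+P^2)/T^2$. A minimal Python sketch; the profile $M(\tau)=a+b\cos(2\pi T\tau)$ and all parameter values are arbitrary choices for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Matsubara-time Dirac-type system phi' = A(tau) phi, from Eq. (tnest)
# with a real M(tau); the 'antinesting' part t_q is already gauged out.
T = 1.0                       # temperature (the period is 1/T)
a, b = 1.0, 0.5               # test profile M(tau) = a + b*cos(2*pi*T*tau)
M = lambda tau: a + b * np.cos(2 * np.pi * T * tau)

def rhs(tau, y, eps):
    # y holds the 2x2 fundamental matrix, flattened row-wise
    F = y.reshape(2, 2)
    A = np.array([[-eps, -M(tau)], [-M(tau), eps]])
    return (A @ F).ravel()

# mean square order: P^2 = T * int_0^{1/T} M(tau)^2 dtau = a^2 + b^2/2
P2 = a**2 + b**2 / 2

for eps in [0.0, 0.5, 2.0]:
    sol = solve_ivp(rhs, [0, 1 / T], np.eye(2).ravel(), args=(eps,),
                    rtol=1e-10, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(2, 2)
    # Floquet multipliers are exp(-+alpha_tilde); det(monodromy) = 1
    mu = np.linalg.eigvals(monodromy)
    alpha = np.log(np.max(np.abs(mu)))     # dimensionless alpha_tilde
    bound = eps**2 + P2                    # (alpha_tilde*T)^2 bound, Eq. (fratioi2)
    print(f"eps={eps}: (alpha*T)^2 = {alpha**2:.4f} <= {bound:.4f}")
    assert alpha**2 <= bound + 1e-8
```

For a constant profile ($b=0$) the bound is saturated, $(\tilde{\alpha} T)^2=\epsilon_q^2+a^2$, in agreement with the COP spectrum.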
Now, substituting $M_1(\tau)\rightarrow M(\tau)$ and $M_2(\tau)\equiv 0$ everywhere in the above proof of case 1), we arrive at the proof of the statement $S^Q>S^C$ for case 2) as well. Next, introducing also a superconducting order parameter $\Delta_q$ of d-wave symmetry in momentum space (relevant, e.g., for high-T$_c$ cuprates), we prove the metastability of thermodynamic quantum time crystals also in the case of multiple (competing) charge, spin and superconducting symmetry-breaking orders, described by the $4\times4$ matrix in the 'bispinor' space \cite{matmuk} $\vec{\Psi}^T\equiv\{u_+,u_-,v_+,v_-\}$: \begin{align} \partial_{\tau} \vec{\Psi}_q+ \begin{pmatrix}\epsilon_{q} &M& \Delta_{q} &0 \\ M^*& -\epsilon_{q} & 0& -\Delta_{q}\\ \Delta_{q}^* &0 & -\epsilon_{q}& M \\ 0 & -\Delta_{q}^* & M^*&\epsilon_{q} \end{pmatrix}\vec{\Psi}_q =0 \label{cmd} \end{align} \noindent It is now straightforward to reduce the Hamiltonian matrix in (\ref{cmd}) to two $2\times2$ block matrices of the kind (\ref{tnest}) by imposing the following linear relations between the bispinor components \cite{matmuk}: $v_{\pm}=\gamma_\pm u_\mp$ with constant coefficients: \begin{align} |\gamma_\pm|=1;\;\gamma_-=-\gamma_+^*\frac{\Delta^*}{\Delta} \label{lins} \end{align} \noindent This reduces Eq. (\ref{cmd}) to two equations of the kind of Eq. (\ref{tnest}), but with different composite order parameters $M_\pm$: \begin{align} M_+= M+\gamma_+\Delta; \; M_-=(M\gamma_-/\gamma_++\Delta^*/\gamma_+)^* \label{mms} \end{align} \noindent Hence, our proof of the metastability of thermodynamic quantum time crystals presented above applies also in the case of coexisting spin, charge and superconducting orders entering the Hamiltonian in Eq. (\ref{cmd}). Depending on which condition of the considered cases 1) or 2) applies to the composite complex order parameters $M_\pm$ in Eq. (\ref{mms}), the corresponding version of the 'no-go' theorem holds.
Finally, we discuss the analytic thermodynamic quantum time crystal solution obtained by one of us earlier \cite{2,3,4}, which provides a direct demonstration of the workings of the 'no-go' theorem presented above and of a 'pseudo-gap' thermodynamic behaviour. Comparing Eqs. (\ref{dbdgf}) and (\ref{dbdg}) with the well-known solitonic-lattice equations for the Peierls/polyacetylene \cite{ssh,br,maki,machida} and one-dimensional Hubbard \cite{matmuk} models, we conclude that, due to the replacement of the space coordinate by the imaginary Matsubara time $\tau$, equation (\ref{dbdgf}) differs from the solitonic-lattice equations only by the opposite sign in front of the squared dispersion $\epsilon_{q}^2$. Using this similarity one finds \cite{2,3,4} the QOP $M_1(\tau)$ that obeys the self-consistency equation (\ref{fins}): \begin{align} M_1(\tau)=4nKTk_1\, {\rm sn}\left(4nKT\tau;k_1\right),\,K=K(k_1) \label{norder} \end{align} \noindent Here ${\rm sn}(\tau,k_1)$ is the Jacobi snoidal elliptic function, with period $1/(nT)$ commensurate with the main period $1/T$: $M_1(\tau)=M_1(\tau+1/(nT))$; the integer $n=1,2,...$ counts the number of instanton-anti-instanton pairs. For simplicity, we consider here case 1) of the 'no-go' theorem with $M_2\equiv 0$. \begin{figure}[h!!] \centerline{\includegraphics[width=1.\linewidth]{graph_M_tau_1.pdf}} \caption{QOP function $M_1(\tau)$ as a function of Matsubara time $\tau$. Curves are marked according to different parameter sets: (1) T=0.51; k=0.99999; n=1; (2) T=0.8; k=0.999; n=1; (3) T=0.51, k=0.84, n=4.
Temperature $T$ is measured in arbitrary units and the parameter $k$ is the Landen-transformed parameter $k_1$.} \label{QOPM} \end{figure} \noindent Simultaneously, the spectrum of Floquet indices $\tilde{\alpha}_{q}$ takes the form \cite{2}: \begin{align} \tilde{\alpha}_q=2\tilde{\epsilon}_q\left(\dfrac{1-{{k}}^2+\tilde{\epsilon}_q^2}{1+\tilde{\epsilon}_q^2}\right)^{1/2} n\Pi\left(\dfrac{k^2}{1+\tilde{\epsilon}_q^2},{k}\right) \label{flon} \end{align} \noindent Here $\Pi(m,k)$ and $K(k_1)$ are the elliptic integrals of the third and first kind, respectively, and $k$ is the Landen-transformed parameter $k_1$ from (\ref{norder}): \begin{align} &\tilde{\epsilon}_q\equiv \dfrac{\epsilon_{q}}{2TnK(k)},\;{k}=2\sqrt{k_1}/(1+k_1);\,k'^2={1-k^2} \label{renorm} \end{align} \begin{figure}[h!!] \centerline{\includegraphics[width=1.\linewidth]{DOS_graph.pdf}} \caption{Density of Floquet states $\tilde{\alpha}_q$ (solid lines) for the same parameter sets (1)-(3) as indicated in Fig. \ref{QOPM}. Dashed lines show the COP density of states (\ref{scop}) with dispersion $\pm\sqrt{P^{c2}_1+\epsilon_q^2}$ under the equality condition (\ref{pgs}) between the COP and QOP 'mean square orders' defined in (\ref{mmm}).} \label{PSG} \end{figure} \noindent Hence, (\ref{flon}) proves the symmetry relation (\ref{conj}). The Jacobi function from Eq. (\ref{norder}) transforms the QOP self-consistency equation (\ref{fins}) into an algebraic equation for the parameters $k,n$: \begin{align} &\sum_{\vec{q}} \left[\tanh\dfrac{{\alpha}_q}{2}\right]\dfrac{\epsilon_{\vec{q}}}{\{(\epsilon^{2}_{\vec{q}}+ {\Delta^2}_{T})(\epsilon^{2}_{\vec{q}}+ k'^2{\Delta^2}_{T})\}^{1/2}}=\dfrac{1}{U} \label{selfinn}\\ &\alpha_{\vec{q}}=T^{-1}t_{\vec{q}}+\tilde{\alpha}_{\vec{q}},\;\Delta_{T}\equiv 2TnK(k) \label{tqop} \end{align} \noindent The result (\ref{flon}) is remarkable.
Namely, compare expressions (\ref{smn}) and (\ref{scop}) under the condition \cite{3}: \begin{align} P^{c2}_1=P_1^2\equiv(2nK(k)T)^2\left(1+k'^2-2\dfrac{E(k)}{K(k)}\right), \label{pgs} \end{align} \noindent while $ P^{c2}_2=P_2^2=0$. Then, the partition function in (\ref{smn}) maps the QOP state onto a Fermi gas with an effective dispersion $\varepsilon_{eff}(q)\equiv T\tilde{\alpha}_{\vec{q}}$ of a manifestly pseudo-gap type, while (\ref{scop}) possesses the dispersion $\varepsilon_{eff}(q)=\pm\sqrt{P^{c2}_1+\epsilon_q^2}$ with the usual Peierls-type gap in the density of states $g$. A comparison of the corresponding densities of states $g(\varepsilon_{eff}(q))=(\partial \varepsilon_{eff}(q)/\partial \epsilon_q)^{-1}$ in Fig. \ref{PSG} then makes our 'no-go' theorem rather obvious, as the energy gain of the fermions in the gapped state is manifestly greater than in the pseudo-gap state at the same temperature. Simultaneously, a comparison between the curves enumerated (1)-(3) in Figs. \ref{QOPM},\ref{PSG} indicates that instantonic crystals/thermodynamic quantum time crystals with a more rectangular shape (i.e. $k\rightarrow 1$) of the Jacobi snoidal function (\ref{norder}), see curves (1),(2) in Fig. \ref{QOPM}, create a deeper pseudogap in the density of Floquet states in Fig. \ref{PSG}, while instantonic crystals of sine-like shape, see curve (3) in Fig. \ref{QOPM}, create a shallow pseudogap in the density of Floquet states, according to curve (3) in Fig. \ref{PSG}. The authors acknowledge useful discussions with Serguey Brazovskii, Jan Zaanen and Konstantin Efetov. This research was supported by the Ministry of Science and Higher Education of the Russian Federation in the framework of the Increase Competitiveness Program of NUST "MISiS" grant K2-2017-085, and via 'Goszadaniye' grant 3.3360.2017/PH.
\section{Introduction} Recently, large efforts have been devoted to the construction of an Energy Density Functional (EDF) able to describe at best the properties of nuclei over the whole nuclear chart \cite{Ben03,Sto07}. The standard strategy to design an EDF for nuclei is to start with a single-reference EDF (SR-EDF), where an effective interaction (Skyrme or Gogny type) and a trial state (Slater Determinant or, more generally, quasi-particle vacuum) are chosen. This technique is able to describe short-range correlations like pairing and already provides a rather good description of observables such as masses, under the condition that some symmetries of the original Hamiltonian are broken. The SR-EDF is then extended to restore broken symmetries and/or incorporate long-range correlations through configuration mixing, leading to the so-called Multi-Reference EDF (MR-EDF) \cite{Rin80}. Recent applications of this technique have revealed important conceptual and practical difficulties \cite{Dob07} related to the absence of a constructive framework for multi-reference calculations. A solution to this problem has been recently proposed \cite{Lac08} and successfully tested in nuclei \cite{Ben08}. However, this cure does not apply to most of the functionals currently used \cite{Dug08}, i.e. those with fractional powers of the density. This motivates the search for new techniques to extend actual SR-EDFs. Density Matrix Functional Theory (DMFT) \cite{Gil75} appears as an alternative to configuration mixing \cite{Umr00}. Although this theory was proposed more than 30 years ago \cite{Gil75}, explicit forms of functionals and applications have only been explored rather recently. There is nowadays an increasing interest in proposing accurate DMFT functionals \cite{Kol06}. In this work, DMFT is applied to the two-level Lipkin model \cite{Lip65}.
In this model, the Hartree-Fock (HF) theory fails to reproduce the ground state energy \cite{Aga66}, while configuration mixing, such as the Generator Coordinate Method (GCM), provides a suitable tool \cite{Rin80}. Therefore, the two-level Lipkin model is perfectly suited both to illustrate that DMFT can be a valuable tool and to provide an example of a functional for a system with a "shape" phase transition. In the following, some aspects of DMFT are first recalled. Then a semi-empirical functional is constructed for the Lipkin model and applied for various particle numbers and two-body interaction strengths. It is shown to improve significantly on the HF theory. Finally, the interest of constructing more general functionals of natural orbitals and occupation numbers in the EDF context is outlined. \section{Discussion on DMFT} The concept of Density Matrix Functional Theory is a generalization of the Hohenberg-Kohn theorem \cite{Hoh64} due to Gilbert \cite{Gil75}. It relies on a theorem showing that the ground state energy can be written as a functional of the one-body density matrix (OBDM) $\gamma(\mathbf{r},\mathbf{r'})$ (instead of the local one-particle density $\rho(\mathbf{r}) \equiv \gamma(\mathbf{r},\mathbf{r})$ in the standard Hohenberg-Kohn theorem). Then, similarly to the Kohn-Sham orbitals \cite{Koh65}, the eigenvalues $n_i$ and eigenvectors $\varphi_i$ of the OBDM, called hereafter occupation numbers and natural orbitals respectively, are often used instead of $\gamma(\mathbf{r},\mathbf{r'})$, with the relation $\gamma = \sum_i | \varphi_i \rangle n_i \langle \varphi_i |$.
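In a discrete basis, extracting these quantities amounts to a Hermitian eigenvalue problem. A minimal numerical illustration (the entries of the toy OBDM below are hypothetical numbers, chosen only so that $0\le n_i\le 1$ and ${\rm Tr}\,\gamma=N=2$):

```python
import numpy as np

# Toy OBDM in a discrete 3-state basis (hypothetical numbers): Hermitian,
# eigenvalues (occupation numbers) in [0, 1], Tr(gamma) = N = 2.
gamma = np.array([[0.90, 0.08, 0.00],
                  [0.08, 0.85, 0.04],
                  [0.00, 0.04, 0.25]])

# Natural orbitals phi_i and occupation numbers n_i:
# gamma = sum_i |phi_i> n_i <phi_i|
n, phi = np.linalg.eigh(gamma)      # columns of phi are the natural orbitals

assert np.all((n > 0) & (n < 1))                 # 0 <= n_i <= 1 (Pauli principle)
assert np.isclose(n.sum(), np.trace(gamma))      # particle number conserved
# spectral-decomposition reconstruction check
assert np.allclose(phi @ np.diag(n) @ phi.T, gamma)
print("occupation numbers:", np.round(n, 4))
```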
The variation of the functional \begin{eqnarray} {\cal F}[\{\varphi_i \}, \{n_i \}] &=& {\cal E} [\{\varphi_i \}, \{n_i \} ] \nonumber \\ &-&\mu \{ Tr(\rho) -N \} -\sum_{ij} \lambda_{ij} (\langle \varphi_i | \varphi_j \rangle - \delta_{ij}) , \label{eq:dmft} \end{eqnarray} with respect to the one-particle state components $\varphi_i^*(\mathbf{r})$ and the occupation numbers (with the additional constraint $0 < n_i < 1$) is then performed to obtain the optimal $\varphi_i$, $n_i$ and the associated ground state energy. The set of Lagrange multipliers $\mu$ and $\{ \lambda_{ij} \}$ is introduced to ensure particle number conservation and orthogonality of the single-particle states. In Eq. (\ref{eq:dmft}), ${\cal E} [\{\varphi_i \}, \{n_i \} ]$ is nothing but the functional itself, which has to be found. In electronic systems, the functional is generally separated into the Hartree part, denoted by ${\cal E}_{H}$ (eventually Hartree-Fock, ${\cal E}_{HF}$), and the exchange-correlation part, denoted here by ${\cal E}_{XC}$ (eventually correlation only, ${\cal E}_C$). While DMFT has been studied theoretically for a rather long time \cite{Gil75,Val80a,Val80b,Zum85,Mul84}, only recently have explicit functionals of the OBDM, or directly of the natural orbitals, been proposed and applied to realistic situations \cite{Goe98,Csa00,Csa02,Yas02,Kol04,Cio03,Per04,Gri05,Lat05,Cio05,Lei05,Kol06,Mar08,Lat08}. There is nowadays extensive work on testing functionals, especially in infinite systems, the so-called Homogeneous Electron Gas (HEG) \cite{Cio99,Lat07}. \section{Application of the DMFT to the Lipkin model} The "Lipkin Model" \cite{Lip65} is an exactly solvable model that has often been used as a benchmark for approximations to the nuclear many-body problem \cite{Rin80}. In this model, one considers $N$ particles distributed in two $N$-fold degenerate shells separated by an energy $\varepsilon$.
The associated Hamiltonian is given by: \begin{eqnarray} H = \varepsilon J_0 - \frac{V}{2} (J_+ J_+ + J_- J_-) , \label{eq:hamillipkin} \end{eqnarray} where $V$ denotes the interaction strength while $J_0$, $J_\pm$ are the quasi-spin operators defined as \begin{eqnarray} J_0 &=& \frac{1}{2} \sum_{p=1}^{N} \left(c^\dagger_{+,p}c_{+,p} - c^\dagger_{-,p}c_{-,p}\right) , \nonumber \\ J_+ &=& \sum_{p=1}^{N} c^\dagger_{+,p}c_{-,p},~~~ J_- = J_+^\dagger ,\nonumber \end{eqnarray} $c^\dagger_{+,p}$ and $c^\dagger_{-,p}$ being creation operators associated with the upper and lower levels. The exact solution of this model is easily obtained by noting that $J^2$ (but not $J_0$) commutes with $H$. It is then convenient to introduce the basis of eigenstates of $J^2$ and $J_0$ and diagonalize the Hamiltonian in this particular space (for more details see for instance \cite{Sev06}). \subsection{Hartree-Fock approximation} In the Hartree-Fock (or Mean-Field) theory, the many-body wave function is replaced by a Slater Determinant (SD) given by $| \Phi \rangle = \Pi_{p=1}^N a^\dagger_{0,p} | - \rangle$. Here, a new single-particle basis, denoted by $\{\varphi_{0,p},\varphi_{1,p} \}$ and associated to the set of creation/annihilation operators $\{a^\dagger_{0,p},a^\dagger_{1,p} \}$, has been introduced through the relation \begin{eqnarray} \left( \begin{array} {c} a^\dagger_{1,p} \\ a^\dagger_{0,p} \end{array} \right) &=& \left( \begin{array} {cc} f^* & -g^* \\ g & f \end{array} \right) \left( \begin{array} {c} c^\dagger_{+,p} \\ c^\dagger_{-,p} \end{array} \right), \label{eq:matac} \end{eqnarray} where the choice \begin{eqnarray} f = \cos(\alpha), ~~~g = \sin(\alpha) e^{i\varphi} , \end{eqnarray} automatically ensures the orthogonality of the new states. Due to the simple structure of the Lipkin model, the variation with respect to the SD state is identical to the variation with respect to $\alpha$ and $\varphi$, i.e.
$| \Phi \rangle = | \Phi (\alpha , \varphi) \rangle$, and the HF energy becomes a functional of these parameters: \begin{eqnarray} {\cal E}_{MF}(\alpha,\varphi) \equiv \left\langle \Phi(\alpha, \varphi) | H | \Phi (\alpha, \varphi) \right\rangle \end{eqnarray} Anticipating the forthcoming discussion, we first write ${\cal E}_{MF}$ as a functional of the OBDM $\gamma$: \begin{eqnarray} {\cal E}_{MF}[\gamma] &=& \varepsilon {\rm Tr}(J_0 \gamma) \nonumber \\ &-& \frac{V(N-1)}{2 N} \Big\{ ({\rm Tr}[\gamma J_+ ])^2 + ({\rm Tr}[\gamma J_-])^2 \Big\} . \label{eq:dmft1lipkin} \end{eqnarray} In the Hartree-Fock limit, the OBDM contains all the information on the many-body state and simply reads $\gamma=\sum_{p=1}^N | \varphi_{0,p} \rangle \langle \varphi_{0,p} |$. Inserting the expression of $| \varphi_{0,p} \rangle$ in terms of $\alpha$ and $\varphi$, we recover the standard HF expression: \begin{eqnarray} {\cal E}_{MF}[\alpha , \varphi] &=& -\frac{\varepsilon N}{2} \left\{ \cos(2\alpha) + \frac{\chi}{2} \sin^2(2\alpha) \cos(2\varphi) \right\}. \label{eq:hflipkin} \end{eqnarray} where $\chi = V(N-1) / \varepsilon$. Minimizing with respect to $(\alpha,\varphi)$ leads to the HF energy, denoted by ${\cal E}^0_{HF}$ (in both cases with $\varphi=0$): \begin{eqnarray} {\cal E}^0_{HF} &=& - \frac{\varepsilon N}{2} ~~{\rm for} ~~\chi \leq 1 ~~~( {\rm at}~ \alpha =0) , \nonumber \\ {\cal E}^0_{HF} &=& - \frac{\varepsilon N} {4\chi} \left( 1+ \chi^2 \right) \nonumber ~{\rm for} ~\chi > 1 ~( {\rm at}~ \chi\cos(2\alpha) = 1) . \end{eqnarray} The HF solution for the Lipkin model has been extensively discussed in the literature \cite{Aga66,Rin80}. While it provides a rather good estimate of the exact energy in the weak-coupling or large-$N$ limits, it generally differs rather significantly from it, in particular for $\chi \sim 1$.
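The HF expressions above are easily benchmarked against the exact solution, obtained by diagonalizing $H$ in the maximal quasi-spin sector $J=N/2$, where the ground state lies. A short Python sketch (units $\varepsilon=1$; the particular values of $N$ and $\chi$ are arbitrary illustrative choices):

```python
import numpy as np

def lipkin_exact(N, chi, eps=1.0):
    """Exact ground state: diagonalize H = eps*J0 - V/2*(J+^2 + J-^2)
    in the maximal quasi-spin sector J = N/2 (basis |J, m>, m = -J..J)."""
    V = chi * eps / (N - 1)
    J = N / 2
    m = np.arange(-J, J + 1)
    H = np.diag(eps * m)
    # <J, m+2| J+^2 |J, m> = sqrt((J-m)(J+m+1)(J-m-1)(J+m+2))
    for i, mi in enumerate(m[:-2]):
        c = np.sqrt((J - mi) * (J + mi + 1) * (J - mi - 1) * (J + mi + 2))
        H[i, i + 2] = H[i + 2, i] = -0.5 * V * c
    return np.linalg.eigvalsh(H)[0]

def lipkin_hf(N, chi, eps=1.0):
    """Closed-form HF minimum of Eq. (hflipkin)."""
    if chi <= 1:
        return -eps * N / 2
    return -eps * N / (4 * chi) * (1 + chi**2)

for N, chi in [(8, 0.5), (8, 2.0), (14, 2.0)]:
    e_ex, e_hf = lipkin_exact(N, chi), lipkin_hf(N, chi)
    print(f"N={N}, chi={chi}: exact={e_ex:.4f}, HF={e_hf:.4f}")
    assert e_hf >= e_ex - 1e-9   # HF is variational: always above the exact energy
```

For $N=2$, $\chi=1$ the exact routine reproduces $E=-\varepsilon\sqrt{1+\chi^2}=-\sqrt{2}$, and the gap between HF and exact energies is largest around $\chi\sim 1$, as stated above.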
This discrepancy essentially reflects the failure of the HF method to account for configuration mixing in a single-reference framework for systems with a shape phase transition \cite{Rin80}. \subsection{Expression of the Hamiltonian for a general correlated state} Due to the two-body nature of the Hamiltonian (Eq. (\ref{eq:hamillipkin})), the most natural way to extend the mean-field framework to correlated systems is to introduce the two-body density matrix, denoted by $\Gamma_{12}$, and the associated correlation matrix $\sigma_{12}$ (see for instance \cite{Cio00}). Using the OBDM $\gamma$ of the correlated system, $\sigma_{12}$ is defined through the relation: \begin{eqnarray} \Gamma_{12} = \gamma_{1} \gamma_{2} (1-P_{12}) + \sigma_{12} , \end{eqnarray} where $P_{12}$ denotes the anti-symmetrization operator, while the label "$i$" in $\gamma_i$ refers to the particle on which the density is applied, i.e. $\langle ij |\gamma_{1} \gamma_{2}| kl \rangle = \langle i|\gamma| k \rangle \langle j |\gamma| l \rangle$ \cite{Lac04}. The expectation value of the energy then splits into a mean-field part and a correlated part as: \begin{eqnarray} {\cal E} = {\cal E}_{MF} [\gamma ] + {\cal E}_{C} [\sigma_{12}], \label{eq:emfec0} \end{eqnarray} where ${\cal E}_{MF} [\gamma ]$ is given by expression (\ref{eq:dmft1lipkin}), except that $\gamma$ now refers to the OBDM of the correlated system. ${\cal E}_{C}$ reads: \begin{eqnarray} {\cal E}_{C} [\sigma_{12}] &=& -\frac{V}{2} {\rm Tr} \left\{ [J_+J_+ + J_- J_-]_{12} \sigma_{12} \right\} \nonumber \\ &=& -V \sum_{p,p'} \Re\left( \left\langle +, p ; + ,p' ~ |\sigma_{12} | - , p ; - , p' \right\rangle \right) , \end{eqnarray} where the notation $[.]_{12}$ emphasizes that the trace is performed using the two-body matrix elements of the operators. Eq. (\ref{eq:emfec0}) is valid for any correlated system, including the exact ground state.
Its minimization with respect to any kind of OBDM and correlations will therefore lead to the ground state energy. Such a direct minimization is complex due to the number of degrees of freedom involved in the variation \cite{Cio00}. DMFT provides a practical solution to this problem. Indeed, according to this theory \cite{Gil75}, the total energy can be written as a functional of $\gamma$. Since ${\cal E}_{MF}[\gamma]$ is already written as a functional of the OBDM, we are left to find a functional for ${\cal E}_{C}$. In many cases, both ${\cal E}_{MF}$ and ${\cal E}_{C}$ are directly written as functionals of the natural orbitals $\varphi_i$ and occupation numbers $n_i$, i.e. \begin{eqnarray} {\cal E}[\{\varphi_i , n_i\}] = {\cal E}_{MF} [\{\varphi_i , n_i\}] + {\cal E}_{C} [\{\varphi_i , n_i\}] . \label{eq:emfec} \end{eqnarray} DMFT has some additional advantages compared to standard DFT. Besides the fact that the exchange contribution can be expressed exactly with the OBDM, at the minimum the optimal OBDM identifies with the ground state OBDM. Therefore, the expectation value of any one-body observable can be computed and should correspond to the ground state expectation value. Accordingly, if the mean-field prescription is used in ${\cal E}_{MF}$ for a given $H$, the value of ${\cal E}_{MF}$ at the minimum will truly correspond to the mean-field contribution to the total energy. Consequently, ${\cal E}_{C}$ will truly correspond to the contribution of correlations (this issue will be further discussed in section \ref{sec:obobs}). Recently, significant efforts have been made in the practical construction of such density matrix functionals \cite{Goe98,Csa00,Csa02,Yas02,Kol04,Cio03,Per04,Gri05,Lat05,Cio05,Lei05,Kol06,Mar08,Lat08}. In most cases, guided by general considerations \cite{Mul84} and/or constraints on the two-body density \cite{Csa00}, $\sigma_{12}$ is first written as a functional of the natural orbitals and occupation numbers.
This generally serves as a starting point for more elaborate functionals. Functionals are further enriched by incorporating additional terms, either to correct for the self-interaction problem or to better reproduce some specific limits in the infinite HEG or configuration interaction calculations in molecules (for a recent review see \cite{Klo07}). \subsection{Construction of a density matrix functional for a correlated state} \label{sec:semi} Due to the specific form of the Lipkin Hamiltonian given by Eq. (\ref{eq:hamillipkin}), $\gamma$ takes a simple form in the natural basis: \begin{eqnarray} \gamma = \sum_{p=1}^N \Big\{| \varphi_{0,p} \rangle n_0 \langle \varphi_{0,p} | + | \varphi_{1,p} \rangle n_1 \langle \varphi_{1,p} | \Big\} , \label{eq:obdm} \end{eqnarray} with $n_1 = (1-n_0)$. The single-particle states $| \varphi_{ i,p} \rangle$ (with $i=0,1$) now stand for the natural orbitals, while the $n_i$ correspond to occupation numbers. Similarly to the HF theory, the creation operators $a^\dagger_{i,p}$ associated to the natural orbitals are expressed from the $c^\dagger_{\pm,p}$ using Eq. (\ref{eq:matac}). The mean-field contribution is easily deduced from the HF case; using Eq. (\ref{eq:dmft1lipkin}) and the expression of $\gamma$ given above, one obtains: \begin{widetext} \begin{eqnarray} {\cal E}_{MF}(\{ \varphi_{i,p}, n_i \}) & = & {\cal E}_{MF}(\alpha,\varphi,n_0) \nonumber \\ &=& -\frac{\varepsilon}{2} N \Big\{ \cos(2\alpha) (2n_0 -1) + \frac{\chi}{2} \sin^2(2\alpha) \cos(2\varphi) (2n_0 -1)^2 \Big\} . \label{eq:emflipkin} \end{eqnarray} \end{widetext} Expressing ${\cal E}_C$ is less straightforward. A possible strategy to construct functionals is to identify specific limits at which explicit forms are known. For instance, the two-electron case, studied in \cite{Low56}, has largely influenced presently used DMFT functionals for molecules \cite{Klo07}. Following this idea, the $N=2$ case is first considered.
\subsubsection{The $N=2$ case} To study the $N = 2$ case, a basis of Slater Determinants is constructed using the natural orbitals, i.e. $| \Phi_{ij} \rangle = a^\dagger_{i,p}a^\dagger_{j,p'}| - \rangle$ with $i$ and $j$ either $0$ or $1$. The ground state wave-function $\Psi$ then reads: \begin{eqnarray} | \Psi \rangle = \sum_{ij} C_{ij} | \Phi_{ij} \rangle . \end{eqnarray} From the expression of the OBDM and using the fact that the single-particle states are natural orbitals, we obtain the simple relation $|C_{ij}|^2 = \delta_{ij} n_{i}$, from which we deduce $C_{ij} = \delta_{ij} e^{i\theta_{ii}} \sqrt{n_i}$. As illustrated below, the simplest choice $e^{i\theta_{ii}}=1$ is convenient and leads to \begin{eqnarray} | \Psi \rangle = \sqrt{n_0} | \Phi_{00} \rangle + \sqrt{n_1} | \Phi_{11} \rangle , \label{eq:phin2} \end{eqnarray} which is nothing but the exact ground state wave-function written as a functional of the occupation numbers and natural states. Inserting this expression into $\left\langle \Psi | H |\Psi \right\rangle$ leads to a total ground state energy ${\cal E}^{^{{N=2}}}$ given by: \begin{widetext} \begin{eqnarray} {\cal E}^{^{{N=2}}}(\alpha, \varphi, n_0) &=& -\varepsilon \cos(2\alpha) (2n_0 -1) - V \Big\{ \frac{1}{2} \sin^2(2\alpha) \cos(2\varphi) + 2\left( \sin^4(\alpha) \cos(4 \varphi) + \cos^4(\alpha) \right) \sqrt{n_0 (1-n_0)} \Big\} . \label{eq:funcn2} \end{eqnarray} Using the expression of the mean-field contribution (Eq. (\ref{eq:emflipkin})), we deduce \begin{eqnarray} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0) &=& - 2 V \Big\{\sin^2(2\alpha) \cos(2\varphi) n_0(1-n_0) +\left(\sin^4(\alpha) \cos(4 \varphi)+ \cos^4(\alpha)\right) \sqrt{n_0 (1-n_0)} \Big\}. \end{eqnarray} \end{widetext} Since the above functional is exact, the ground state energy should be recovered by minimization with respect to $n_0$, $\alpha$ and $\varphi$. As in the HF case, $\varphi=0$ can always be taken.
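Since the functional (\ref{eq:funcn2}) is exact, even a brute-force grid minimization over $(\alpha, n_0)$ at $\varphi=0$ must reproduce the exact $N=2$ ground state energy $E=-\varepsilon\sqrt{1+\chi^2}$. A numerical sketch (units $\varepsilon=1$; grid sizes and $\chi$ values are arbitrary choices):

```python
import numpy as np

def e_n2(alpha, n0, chi, eps=1.0):
    """N=2 DMFT functional, Eq. (funcn2), at phi = 0 (chi = V/eps for N=2)."""
    V = chi * eps
    return (-eps * np.cos(2 * alpha) * (2 * n0 - 1)
            - V * (0.5 * np.sin(2 * alpha)**2
                   + 2 * (np.sin(alpha)**4 + np.cos(alpha)**4)
                   * np.sqrt(n0 * (1 - n0))))

# The functional is invariant under (alpha -> pi/2 - alpha, n0 -> 1 - n0),
# so restricting n0 >= 1/2 loses no generality.
alphas = np.linspace(0, np.pi / 2, 400)
n0s = np.linspace(0.5, 1.0, 400)
A, N0 = np.meshgrid(alphas, n0s)

for chi in [0.5, 1.0, 2.0, 3.0]:
    e_min = e_n2(A, N0, chi).min()
    e_exact = -np.sqrt(1 + chi**2)     # exact N=2 ground state energy (eps = 1)
    print(f"chi={chi}: DMFT min={e_min:.5f}, exact={e_exact:.5f}")
    assert abs(e_min - e_exact) < 1e-3
```

Every grid point is an expectation value in the state (\ref{eq:phin2}), so the grid minimum approaches the exact energy strictly from above.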
The variation of $n_0$ should be made under the constraint $n_0 \in [0,1]$. A similar technique as in the HF+BCS case can be employed \cite{Rin80}. Writing $n_0 = \cos^2(\theta)$ with $\theta \in [0,\pi/2]$ gives at the minimum \begin{eqnarray} \tan(2\theta) = \chi \left(\frac{ \sin^4(\alpha) \cos(4 \varphi)+ \cos^4(\alpha)}{\cos(2\alpha)} \right). \end{eqnarray} Then, the variation with respect to $\alpha$ is performed to obtain the minimum energy. The result (filled circles) is displayed in Fig. \ref{fig:N2} and compared to the exact ground state energy given by $E=-\sqrt{1+\chi^2}$ \cite{Lip65} (solid line) and the Hartree-Fock energy (dashed line). Not surprisingly, while the HF curve deviates significantly from the solid line, the DMFT result cannot be distinguished from the exact one. \begin{figure}[t!] \includegraphics[height=5.cm]{fig1_nnn.eps} \caption{Comparison between the exact energy (solid line), Hartree-Fock (dashed line) and the energy obtained from the minimization of Eq. (\ref{eq:funcn2}) (filled circles) as a function of $\chi$ for the $N=2$ case.} \label{fig:N2} \end{figure} \subsubsection{DMFT for $N \ge 3$ and large-$N$ scaling} The Lipkin model with $N=2$ is an interesting pedagogical example of DMFT where the exact energy functional in terms of natural orbitals and occupation numbers is known. This limit is used here as a guide to provide a DMFT for $N \ge 3$. The simplest extension of the functional derived for $N=2$ consists in assuming that the interaction energy of the $N$ particles can be written as a sum of the interactions of the $N(N-1)/2$ pairs of particles, each pair contributing to the total energy as in the $N=2$ case. This prescription naturally leads to the mean-field energy given by (\ref{eq:emflipkin}) and amounts to taking, for all $N$, \begin{eqnarray} {\cal E}_C(\alpha, \varphi, n_0) = \frac{N(N-1)}{2} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0). \label{eq:corsimp} \end{eqnarray} \begin{figure}[t!]
\includegraphics[height=9.cm]{fig2_lipkinnref.eps} \caption{Exact ground state energy (solid lines) displayed as a function of $\chi$ for $N=5$ to $20$ resp. from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circles) are shown. The latter are obtained by minimization of the functional using the mean-field and correlation energy resp. given by Eq. (\ref{eq:emflipkin}) and Eq. (\ref{eq:corsimp}).} \label{fig:chiref} \end{figure} The minimal total energy obtained by varying both the occupation numbers and $\alpha$ (using $\varphi=0$) with this prescription is displayed in Fig. \ref{fig:chiref} (filled circles) as a function of $\chi$ for various particle numbers. In each case, the exact solution (solid line) and the Hartree-Fock prescription (dashed line) are shown. The simple scheme using Eq. (\ref{eq:corsimp}) clearly provides a very poor approximation for the ground state energy and always leads to an energy much below the exact one. In addition, the discrepancy increases as $N$ increases. This failure points out the complex many-body correlations present in the Lipkin model, which come from the mixing of 1 particle-1 hole (1p-1h), 2p-2h, \ldots, $n$p-$n$h excitations in the ground state. This leads to a much more complex situation than the $N=2$ case (Eq. (\ref{eq:phin2})). In particular, using the contracted Schr\"odinger equation (CSE), we do expect the two-body correlations to depend on higher-order correlation matrices \cite{Yas02}. The correlation energy given by Eq. (\ref{eq:corsimp}) clearly neglects these higher-order effects. From Fig. \ref{fig:chiref}, we see that the correlation energy is largely overestimated.
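For reference, the exact ground state energies used throughout this section (solid lines in the figures) can be generated by direct diagonalization in the quasi-spin basis. The sketch below assumes the standard two-level Lipkin Hamiltonian $H = \varepsilon J_0 - \frac{V}{2}(J_+^2 + J_-^2)$ restricted to the $J = N/2$ multiplet, with $\chi = V(N-1)/\varepsilon$ and $\varepsilon = 1$ (conventions not restated in this section):

```python
import numpy as np

def lipkin_ground_state(N, chi):
    """Exact ground state energy (units of eps) of the two-level Lipkin model,
    H = eps*J_0 - (V/2)*(J_+^2 + J_-^2), diagonalized in the J = N/2 multiplet.

    Assumes the standard convention chi = V*(N-1)/eps.
    """
    J = N / 2.0
    V = chi / (N - 1)                      # eps = 1
    m = np.arange(-J, J + 1)               # J_0 eigenvalues
    dim = len(m)
    H = np.diag(m)
    # <J, m+2| J_+^2 |J, m> = c(m) * c(m+1), with c(m) = sqrt(J(J+1) - m(m+1))
    c = np.sqrt(J * (J + 1) - m * (m + 1))
    for k in range(dim - 2):
        H[k + 2, k] = H[k, k + 2] = -0.5 * V * c[k] * c[k + 1]
    return np.linalg.eigvalsh(H)[0]

print(lipkin_ground_state(2, 1.0))   # N = 2, chi = 1: exact result is -sqrt(2)
```

Since $H$ only couples states with $\Delta m = \pm 2$, the matrix is block-diagonal in parity, which is the symmetry discussed later in this section.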
In the following, we use a slightly different prescription for the correlation energy given by: \begin{eqnarray} {\cal E}^{^{{N\ge 3}}}_C(\alpha, \varphi, n_0) = \eta(N) \frac{N(N-1)}{2} {\cal E}^{^{{N=2}}}_C(\alpha, \varphi, n_0) , \label{eq:coreta} \end{eqnarray} where $\eta(N)$ is a reduction factor introduced to mimic the effect of higher-order correlations. The optimal value of $\eta$ is determined empirically from the following procedure. For a given value of $N$ and $\eta$, the minimum energy of the corresponding DMFT, denoted by ${\cal E}^N_{min} (\eta, \chi)$, is computed for $\chi$ between 0 and $\chi_{max} = 3$. Then, the quantity $D^N(\eta)$, given by \begin{eqnarray} D^N (\eta) = \int_0^{\chi_{max}} \left \{ {\cal E}^N_{GS}(\chi) - {\cal E}^N_{min} (\eta, \chi)\right\}^2 d\chi , \end{eqnarray} where ${\cal E}^N_{GS}(\chi)$ denotes the exact ground state energy for a given $N$ and $\chi$, is computed. Obviously, $D^N (\eta)$ gives a measure of the deviation between the DMFT minimum energy and the exact energy over the interval $\chi \in [0,\chi_{max}]$. The optimal value of $\eta(N)$ is defined as the minimum of $D^N (\eta)$ as $\eta$ varies between 0 (the HF limit) and $1$ (the prescription of Eq. (\ref{eq:corsimp})). \begin{figure}[t!] \includegraphics[height=5.cm]{fig3_lipkinfit.eps} \caption{Values of the optimal quenching factor $\eta(N)$ as a function of the particle number $N$. The solid line represents the function $\eta(N) = c~N^{-2/3}$ with $c=1.5$.} \label{fig:fit} \end{figure} Values of the optimal reduction factors are reported as filled circles in Fig. \ref{fig:fit} as a function of $N$. As guessed from the increasing discrepancy with $N$ observed in Fig. \ref{fig:chiref}, the higher $N$ is, the smaller $\eta$ should be taken. The variation of $\eta(N)$ turns out to simply behave as $N^{-2/3}$.
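The search for the optimal $\eta$ can be organized as a one-dimensional scan. In the sketch below, the two energy curves are toy stand-ins (the real inputs would be the exact ground state energy ${\cal E}^N_{GS}$ and the DMFT minimum ${\cal E}^N_{min}$, which require the full functional of Eq. (\ref{eq:coreta})); only the bookkeeping of $D^N(\eta)$ and of its minimizer is illustrated:

```python
import numpy as np

def deviation(eta, e_gs, e_min, chi_max=3.0, n=301):
    """D^N(eta): integrated squared deviation between the exact energy
    e_gs(chi) and the DMFT minimum e_min(eta, chi) over [0, chi_max]."""
    chi = np.linspace(0.0, chi_max, n)
    y = (e_gs(chi) - e_min(eta, chi)) ** 2
    dx = chi[1] - chi[0]
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoidal rule

def optimal_eta(e_gs, e_min, etas=np.linspace(0.0, 1.0, 201)):
    """Grid scan of eta in [0, 1] (0 = HF limit, 1 = the pair prescription)."""
    return etas[int(np.argmin([deviation(t, e_gs, e_min) for t in etas]))]

# Toy stand-ins (assumptions, for illustration of the scan only):
e_gs = lambda chi: -np.sqrt(1.0 + chi ** 2)
e_min = lambda eta, chi: e_gs(chi) - (eta - 0.4) * chi   # best match at eta = 0.4
print(optimal_eta(e_gs, e_min))
```

For the real calculation, `e_min` would come from a two-parameter minimization over $(\alpha, n_0)$ at each $\chi$, repeated for each trial $\eta$.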
The solid line in Fig. \ref{fig:fit} represents the function \begin{eqnarray} \eta(N) = c ~ N^{-2/3}, \label{eq:etan} \end{eqnarray} with $c=1.5$ deduced by fitting the optimal values of $\eta$. In the following, we show that the semi-empirical density matrix functional theory constructed from the mean-field and correlation energy resp. given by Eq. (\ref{eq:emflipkin}) and Eq. (\ref{eq:coreta}) significantly improves on the HF theory. Combining expression (\ref{eq:etan}) with Eq. (\ref{eq:coreta}) shows that the correlation energy scales as ${\cal E}_C \propto N^{4/3}$ as $N$ increases. Since this correlation energy is proportional to $\langle J^2_x \rangle$, we observe that the $N^{-2/3}$ factor deduced empirically is nothing but the large $N$ scaling behavior obtained analytically in Ref. \cite{Dus04}, i.e. $\langle J^2_x \rangle/N^2 \propto N^{-2/3}$. \subsection{Results of the semi-empirical DMFT} The minimal energy deduced from the semi-empirical density matrix functional proposed above (filled circles) is systematically compared to the exact ground state energy (solid line) and the HF energy (dashed line) for different particle numbers and two-body interaction strengths in Figs. \ref{fig:chi} and \ref{fig:nn}. In all cases, the DMFT significantly improves the HF result and turns out to be very close to the exact one. As illustrated in Fig. \ref{fig:chi}, the HF energy generally deviates rather significantly from the exact energy around $\chi=1$ and does not provide the correct asymptotic behavior as $\chi$ increases. This deviation, which disappears as $N \rightarrow \infty$, is due to the failure of HF theory in the presence of a "shape" phase transition \cite{Aga66} between the spherical solution ($\chi < 1$) and the "deformed" solution ($\chi > 1$). The standard technique to properly account for this effect is to mix different Slater determinants, as in the GCM theory.
We see in Figs. \ref{fig:chi} and \ref{fig:nn} that, except for a small deviation around $\chi=1$ which seems to slightly increase as $N$ increases, both the asymptotic behavior at large $\chi$ and $N$ and the energy around $\chi=1$ are rather well reproduced. It is worth mentioning that the DMFT is much less demanding in terms of computational power than the GCM and thus provides a rather interesting alternative to the latter theory. \begin{figure}[t!] \includegraphics[height=9.cm]{fig4_lipkinn.eps} \caption{Exact ground state energy (solid lines) displayed as a function of $\chi$ for $N=5$ to $20$ resp. from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circles) minimum energy are shown. The DMFT calculation is performed using the mean-field and correlation energy resp. given by Eq. (\ref{eq:emflipkin}) and Eq. (\ref{eq:coreta}) with $\eta(N) =1.5 ~{N}^{-2/3}$.} \label{fig:chi} \end{figure} \begin{figure}[t!] \includegraphics[height=9.cm]{fig5_lipkinchi.eps} \caption{Exact ground state energy per particle (solid lines) displayed as a function of $N$ for $\chi =1$, $2$ and $3$ from top to bottom. In each case, the corresponding HF (dashed line) and DMFT (filled circles) minimum energy are shown. The DMFT calculation is performed using the mean-field and correlation energy resp. given by Eq. (\ref{eq:emflipkin}) and Eq. (\ref{eq:coreta}) with $\eta(N) =1.5~N^{-2/3}$.} \label{fig:nn} \end{figure} \subsection{Discussion on Density Matrix Functional Theory with broken symmetry} \label{sec:obobs} Similarly to the Hartree-Fock case, the one-body density solution of the functional developed here may violate some symmetries of the "true" ground state density. The invariance of the Hamiltonian (\ref{eq:hamillipkin}) with respect to parity imposes $\left\langle J_+ \right\rangle = \left\langle J_- \right\rangle = 0$. This implies that, at the minimum of the functional, $\alpha=0$.
This is indeed the case for the exact functional given by Eq. (\ref{eq:funcn2}) for $N=2$. However, solutions with $\alpha \neq 0$ are found for larger particle numbers. This is illustrated in Fig. \ref{fig:ejp}, where the value of $\Delta_{+-} \equiv (\left\langle J_+ \right\rangle + \left\langle J_- \right\rangle)/2N$ is displayed as a function of $\chi$. The value of $\chi$ for which $\alpha$ becomes non-zero, denoted by $\chi_c$, is infinite for $N=2$, around 1.6 for $N=5$, and tends to the HF value ($\chi_c = 1$) as $N$ goes to infinity. Note that, in this limit, the HF functional alone already provides a very good functional for the Lipkin model. \begin{figure}[t!] \includegraphics[height=9.cm]{fig6_ejp.eps} \caption{Expectation value of $\Delta_{+-}$ as a function of $\chi$ for $N=5$ (top), $N=10$ (middle) and $N=20$ (bottom). The DMFT result (filled circles) is systematically compared with the HF (open circles) value.} \label{fig:ejp} \end{figure} According to DMFT, the OBDM $\gamma$ obtained by minimizing the {\it exact} functional should match the exact OBDM at the minimum. Therefore, strictly speaking, the extracted one-body density and associated natural states and occupation numbers cannot be the exact ones when some of the symmetries of the system are broken. This has to be kept in mind when discussing expectation values of one-body observables. As an illustration, the expectation value of the single-particle part of the Hamiltonian, ${\cal E}_{J_0} \equiv \varepsilon Tr( \gamma J_0)$, obtained using the OBDM minimizing the DMFT, is displayed in Fig. \ref{fig:ej0} as a function of $\chi$ for different particle numbers (filled circles). The exact (solid line) and HF (open circles) prescriptions are also shown. The value of the total energy minus ${\cal E}_{J_0}$ is also shown in each panel for the DMFT (filled square), exact (dashed line) and HF (open square).
The density obtained in DMFT using the semi-empirical functional described in section \ref{sec:semi} always improves the estimate of ${\cal E}_{J_0}$ for small $\chi$. It generally gives an almost perfect result when $\alpha=0$ at the minimum, i.e. when the parity symmetry is respected by the OBDM. The behavior of ${\cal E}_{J_0}$ is also rather satisfactory at large $N$ and $\chi$. In the other cases, i.e. small particle number and large $\chi$, or large particle number and intermediate $\chi$ ($1 \le \chi \le 2$), some deviations from the exact result remain. Fig. \ref{fig:ej0} illustrates that, when the functional respects all symmetries of the original Hamiltonian, expectation values of one-body observables perfectly match the exact results. This is the case for the semi-empirical functional proposed here for all particle numbers and $\chi \leq 1$. Of course, it would be desirable to provide functionals that respect the symmetries in the first place. However, as is well known already at the HF level, the introduction of theories where symmetries are explicitly broken is a way to grasp some of the correlations which would otherwise have been very hard to incorporate. The success of the DMFT functional at large $N$ and $\chi$ can be attributed to the explicit parity symmetry breaking, as in the HF case. \begin{figure}[t!] \includegraphics[height=9.cm]{fig7_ejj.eps} \caption{Expectation value of the one-body part of the Hamiltonian, denoted by ${\cal E}_{J_0}$, obtained at the minimum of the DMFT as a function of $\chi$ for $N=5$ (top), $N=10$ (middle) and $N=20$ (bottom). The DMFT result (filled circles) is systematically compared with the exact (solid line) and HF (open circles) value. The value of the total energy minus ${\cal E}_{J_0}$ is also shown in each panel for the DMFT (filled square), exact (dashed line) and HF (open square).} \label{fig:ej0} \end{figure} \section{Summary and discussion on EDF} In this work, the Density Matrix Functional Theory is applied to the two-level Lipkin model.
Guided by the $N=2$ case, a semi-empirical functional of the natural wave-functions and occupation numbers is constructed. The minimization of the DMFT is shown to give a much better agreement with the exact ground state energy than the HF scheme over a wide range of particle numbers and two-body interaction strengths. The success of DMFT in the Lipkin model shows that this theory could be a valuable tool for many-body systems in the presence of a "shape" phase transition. Such transitions often occur, for instance, in nuclear physics \cite{Rin80,Ben03} and are generally treated by first introducing the Energy Density Functional of a single-reference vacuum (Slater determinant or quasi-particle state) and then using the GCM theory \cite{Ben03}. DMFT could be a powerful tool to improve current SR-EDF functionals, by writing directly the EDF in terms of natural orbitals and occupation numbers. The possibility to introduce occupation numbers has been promoted in Ref. \cite{Pap07} for the pairing Hamiltonian and in Ref. \cite{Ber08} for the three-level Lipkin model using a slightly different technique. However, the strategy based on DMFT seems quite natural to extend current EDFs. Indeed, following the strategy used here for the Lipkin model, we can write the nuclear EDF as \begin{eqnarray} {\cal E}_{EDF}[\{\varphi_i, n_i \} ] = {\cal E}_{MF} [\{\varphi_i, n_i \} ] + {\cal E}_{C} [\{\varphi_i, n_i \} ] . \label{eq:edfdmft} \end{eqnarray} The most natural choice for the mean-field part is to use the current Skyrme functional, which has been optimized for decades. Note that the nuclear problem differs from the electronic case and/or the Lipkin model presented here in that the coefficients of the functional are not directly linked to the bare interaction but adjusted on experimental data. Therefore, the mean-field contribution already contains a large fraction of the correlations. Nevertheless, the above decomposition (Eq.
(\ref{eq:edfdmft})) is already used in the nuclear context in SR-EDF calculations when a quasi-particle vacuum is retained for the trial state. Then, the correlation energy identifies with the pairing energy, which can, in the canonical basis, be written as a functional of occupation numbers and natural orbitals. Indeed, using the notation $(i,\bar i)$ for canonical pairs of single-particle states, the pairing energy reads \begin{eqnarray} {\cal E}_{C} [\{\varphi_i, n_i \} ] &\equiv& \frac{1}{4} \sum_{i,j} \bar v^{\kappa \kappa}_{i\bar i j \bar j} \sqrt{n_i (1-n_i)} \sqrt{n_j (1-n_j)} \nonumber \\ \end{eqnarray} where $\bar v^{\kappa \kappa}$ is the effective interaction in the pairing channel and where we have replaced the components of the anomalous density $\kappa$ in the natural basis by $\kappa_{i \bar i} = \sqrt{n_i (1-n_i)}$. This functional is adapted to pairing-like correlations. However, the use of a reference state written as a product of quasi-particle states clearly restricts the type of density matrix functional that can be guessed. This class appears not to be general enough to account for the diversity of phenomena occurring in nuclei. The possibility to use functionals different from the BCS-like ones was already discussed in the early days of the Skyrme EDF history \cite{Vau73}. The functional developed here for the Lipkin model, as well as functionals recently proposed for electronic systems \cite{Klo07}, clearly point out the possibility to use alternative functionals which could be of interest for nuclear systems. A crucial aspect of the present work is the introduction of DMFT functionals that explicitly break some of the symmetries of the original Hamiltonian to incorporate complex correlations. It should be kept in mind that broken symmetries imply that symmetries should a priori be restored.
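Before turning to symmetry restoration, the pairing-type functional above can be illustrated with a short numerical sketch. The constant pairing matrix element $\bar v^{\kappa\kappa}_{i\bar i j \bar j} = -G$ (a seniority-type force) and the occupation numbers used below are illustrative assumptions, not values taken from the text:

```python
import numpy as np

def pairing_energy(n, v):
    """E_C = (1/4) sum_{ij} v[i, j] kappa_i kappa_j, with the anomalous density
    in the canonical/natural basis written as kappa_i = sqrt(n_i (1 - n_i))."""
    kappa = np.sqrt(n * (1.0 - n))
    return 0.25 * kappa @ v @ kappa

# Illustrative seniority-type force: constant matrix element -G for all pairs,
# for which E_C reduces to -(G/4) * (sum_i kappa_i)^2.
G = 0.5
n = np.array([0.95, 0.9, 0.6, 0.4, 0.1, 0.05])
v = -G * np.ones((n.size, n.size))
print(pairing_energy(n, v))
```

Note that the correlation energy vanishes whenever every $n_i \in \{0, 1\}$, i.e. for a pure Slater determinant, consistently with $\kappa_{i\bar i} = \sqrt{n_i(1-n_i)}$.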
The problem of restoration of broken symmetries in functional theories is an important aspect which deserves specific studies in the near future \cite{Lac08,Ben08,Dug08}. \begin{acknowledgments} The author thanks M.~Assi\'e, B. Avez, T. Duguet, C. Simenel, O. Sorlin and P. Van Isacker for enlightening discussions at different stages of this work and T. Papenbrock for useful remarks on the scaling behavior in the Lipkin model. \end{acknowledgments}
\section{Introduction} Detection of double JPEG (DJPEG) compression is one of the most widely studied problems in image forensics, see for instance \cite{Jessica2008,Li2008,Barni2017}. The interest of researchers in this topic is motivated by the fact that double compression can reveal important information about the past history of an image. In particular, valuable information can be obtained by estimating the quality of the first JPEG compression and, moreover, the primary quantization matrix used for it. Given an image with several copy-pasted regions, it is possible to identify the different origins of the tampered regions by recognizing that they were first compressed with different JPEG qualities and, more generally, that they are characterized by different primary quantization matrices (while there is no standard definition of the JPEG quality, the concept of quantization matrix is a standard one \cite{pennebaker1992}). Several methods have been proposed in the literature for the estimation of the primary quantization matrix. Many of them exploit statistical modeling of DCT coefficients \cite{Bianchi2012,Cogranne2019,Galvan2014,Dalmia2018}. A common feature of all these approaches is that they work under particular operative conditions and settings regarding the relationship between the JPEG qualities of the first and second compression, and the alignment of the $8 \times 8$ grid of the first compression with that of the second one. For instance, the method in \cite{Galvan2014} works only when the two compressions are aligned and the second quantization step is lower than the first one, that is, when the quality of the second JPEG is higher than the quality of the first one (hence, $QF_{1} < QF_{2}$ for the standard quantization matrix case). Similarly, the method in \cite{Cogranne2019} is designed for the aligned JPEG case and cannot estimate the first quantization step when this is a divisor of the second one.
\CH{A more general approach for the estimation in the aligned scenario has been recently proposed in \cite{battiato2020indepth}.} The algorithm proposed in \cite{Bianchi2012} can work both in the aligned and non-aligned cases; however, the performance drops when $QF_{1} > QF_{2}$. Eventually, the method in \cite{Dalmia2018} works in the non-aligned case only. Another drawback of such model-based techniques and approaches relying on hand-crafted features is that their performance tends to decrease significantly when they are applied to small patches, which prevents the application of these methods for the local estimation of the quantization matrix of the first compression, useful for tampering localization. A modern method for primary quantization matrix estimation based on Convolutional Neural Networks (CNNs) has been recently proposed in \cite{niu2019SPL}. Such a method can work under very general operative conditions and on small (64$\times$64) patches. This approach has been shown to outperform previous approaches, both in terms of accuracy and mean square error (MSE) of the estimation. In particular, in \cite{niu2019SPL}, the CNN is trained to minimize the squared difference between the predicted values of the quantization coefficients and the true values, hence the MSE of the estimation is minimized. Some works in the deep learning literature, however, show that CNNs are better suited to classification than to regression problems. CNNs can in fact achieve remarkably accurate results when trained to predict categorical variables, drawn from discrete probability distributions of data \cite{bulat2016human,chen2017deeplab}. Whenever possible, switching to a classification problem or considering hybrid methods that combine classification with regression has been shown to yield better results \cite{alp2017densereg}.
In \cite{alp2017densereg}, for instance, soft values are estimated by using a quantized regression architecture that first obtains a quantized estimate (using the softmax followed by the cross-entropy loss) and then refines it through regression of the residual. Given the above, in this paper, we focus on improving the performance of CNN-based estimation of the primary quantization matrix by turning the regression into a classification-like problem, with the design of a suitable CNN architecture. Our approach starts from the observation that the quantization coefficients can only take integer values. Therefore, we design a structure such that the estimation of a vector of integer values, namely all the coefficients of the quantization matrix, can be performed in a classification-like fashion. For the implementation of the network (internal layers) we consider the same CNN architecture already considered in \cite{niu2019SPL}, yielding good results, namely DenseNet \cite{huang2017densely}. Similarly to \cite{niu2019SPL}, the CNN-based estimator is designed to work under very general operative conditions, i.e. when the second compression grid is either aligned or not aligned with that of the first compression, and for every combination of qualities of the first and second compression. The capability of the method to work under both aligned and non-aligned DJPEG, and for all possible combinations of JPEG qualities, is very relevant in practical applications, where this information is not known a priori, thus making the adoption of dedicated methods very impractical.
\CH{Like in \cite{niu2019SPL}, we focus on the case where the estimation is carried out on small patches, that represents the most challenging scenario.} \CH{As commonly done by the approaches from the literature of primary quantization matrix estimation, we assume that the test image is double compressed and do not consider the single JPEG scenario (in this case, quite reasonably, our method returns a quantization matrix that corresponds to a very high compression quality, the estimated coefficients being close to those of the quantization matrix for the case $QF=100$. Said differently, the network regards the single compression with $QF$ as a double compression with $QF_1 = 100$ and $QF_2 = QF$).} The rest of this paper is organized as follows: Section \ref{sec.background} recaps the main concepts of double compression and introduces the notation. The proposed method is described in Section \ref{sec.prop_method}. Then, Section \ref{sec.exp_method} details the experimental methodology and the results are reported and discussed in Section \ref{sec.results}. We conclude the paper with some final remarks in Section \ref{sec.conclusions}. \section{Basic concepts and notation} \label{sec.background} We denote by $Q$ the quantization matrix, that is, the $8 \times 8$ matrix with the quantization steps of the DCT coefficients considered for the compression.
A double compression occurs when an image compressed with a given $Q_1$ is decompressed (decompression involves de-quantization and inverse DCT) and compressed again with a second quantization matrix $Q_2$. The elements of $Q_1$ can be conveniently arranged in a vector of dimensionality 64, zig-zag ordered \cite{pennebaker1992}. We denote by ${\bf q}_1$ such a 64-dim vector built from $Q_1$. As commonly done in the literature \cite{Bianchi2012,Cogranne2019,Galvan2014, niu2019SPL}, we focus on the first elements of ${\bf q}_1$ and restrict the estimation to those coefficients. We denote with $({\bf q}_{1})_{N_c} = [q_{1,1}, q_{1,2},...,q_{1,N_c}]$ the vector of the first $N_c$ coefficients of ${\bf q}_1$. The coefficients at the medium-high DCT frequencies are in fact more difficult to estimate accurately, due to the stronger quantization usually applied to them; however, since these coefficients are not very discriminative (as they tend to be similar for most quantization matrices), their estimation is less important. When a JPEG image is compressed a second time, the second compression grid can be either aligned or non-aligned with the first compression grid. The case of non-aligned DJPEG corresponds to the most frequent scenario in practice. A grid misalignment occurs locally when image splicing is performed, that is, when a region of a single JPEG image is copy-pasted into another image, since in this case the alignment between the compression grids is rarely preserved. On a global level, we have non-aligned DJPEG when the image is cropped between the first and second compression stages, or when some processing is applied causing a de-synchronization. The quality of the JPEG compression is often summarized by many compression software tools by means of the JPEG Quality Factor ($QF$), whose value ranges from 0 to 100 ($QF$ values lower than 50, however, are seldom used in practice nowadays since they correspond to extremely low qualities).
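As an illustration of this bookkeeping, the zig-zag ordering and the extraction of $({\bf q}_1)_{N_c}$ can be sketched as follows. The table used in the example is the standard luminance quantization matrix for $QF = 50$ from Annex K of the JPEG standard; $N_c = 15$ is an arbitrary illustrative choice:

```python
def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG zig-zag order."""
    order = []
    for d in range(2 * n - 1):                    # anti-diagonals i + j = d
        cells = [(i, d - i) for i in range(max(0, d - n + 1), min(d, n - 1) + 1)]
        order.extend(cells if d % 2 else reversed(cells))
    return order

def q_to_vector(Q, n_c=15):
    """First n_c coefficients (q_1)_{N_c} of the zig-zag ordered 8x8 matrix."""
    return [Q[i][j] for i, j in zigzag_indices()][:n_c]

# Standard JPEG luminance quantization table for QF = 50 (ITU-T T.81, Annex K).
Q50 = [[16, 11, 10, 16, 24, 40, 51, 61],
       [12, 12, 14, 19, 26, 58, 60, 55],
       [14, 13, 16, 24, 40, 57, 69, 56],
       [14, 17, 22, 29, 51, 87, 80, 62],
       [18, 22, 37, 56, 68, 109, 103, 77],
       [24, 35, 55, 64, 81, 104, 113, 92],
       [49, 64, 78, 87, 103, 121, 120, 101],
       [72, 92, 95, 98, 112, 100, 103, 99]]

print(q_to_vector(Q50))
```

The zig-zag scan keeps the low-frequency coefficients, which are the discriminative ones discussed above, at the head of ${\bf q}_1$.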
A $QF$ value specifies a quantization matrix $Q$ (standard quantization matrix). For convenience, in the rest of the paper, we refer to the JPEG Quality Factor ($QF$). Note that, in principle, the proposed estimator can be applied to estimate any quantization matrix of the first compression, be it standard or non-standard. In the rest of the paper, we denote with $QF_2$ the second compression $QF$ and with $QF_1$ the first one. \CH{Like in \cite{niu2019SPL}, in this paper, we consider color DJPEG images (with 3 channels) and focus on the estimation of the primary quantization matrix $Q_1$ of the luminance channel, that is, the $Y$ channel of the $Y C_b C_r$ color space \cite{pennebaker1992}.} \section{Proposed classification-like CNN estimator} \label{sec.prop_method} The proposed method starts from the observation that the $q_{1,i}$'s are discrete values. In \cite{niu2019SPL}, where a regression problem is addressed to estimate $({\bf q}_{1})_{N_c}$, the values obtained at the output of the CNN are finally quantized to get the estimated vector $(\hat{{\bf q}}_{1})_{N_c}$ (specifically, rounding is performed on each element of the output vector independently, yielding $\hat{q}_{1,i}$, $i = 1,...,N_c$). However, it has been shown that for the estimation of discrete quantities (which then follow categorical distributions) it is often preferable to resort to the softmax followed by the cross-entropy loss, which is well suited for backpropagation (see \cite{alp2017densereg}). Therefore, we propose to turn the regression into a classification-like problem. To do so, we consider a custom output layer structure, first with a basic loss function and then with a refined loss function, described in the following.
\subsection{General structure} The architecture that we considered for the internal layers of the CNN, and in particular the feature extraction part, is a dense structure, namely the DenseNet backbone architecture \cite{huang2017densely}, and is described in Section \ref{sec.architecture}. In the following, we describe the specific structure of the proposed output layer. In the proposed structure, each to-be-estimated coefficient ${q}_{1,i}$ ($i=1,2,..,N_c$), which takes discrete values, is encoded as a one-hot vector. The dimensionality of the encoded vector is determined by all the possible values that ${q}_{1,i}$ may take. Assuming that the quality of the image cannot be too low, in fact, for every $i$, the estimated coefficient $\hat{q}_{1,i}$ may take a limited number of values, that is, $1 \le \hat{q}_{1,i} \le q_{1,i}^M$. For simplicity, we set $q_{1,i}^M$ equal to the corresponding value of the $i$-th coefficient when $QF_1 = 50$ (minimum quality of the JPEG considered).\footnote{This corresponds to assuming that the first JPEG quality is always higher than or equal to $QF_1 = 50$, which is often the case in practice, thus not representing a big restriction (as a consequence, $Q_1$ matrices corresponding to lower qualities are not correctly estimated).} To get the desired output, we set the logit-level output to size $q_{1,1}^M + q_{1,2}^M + \cdots + q_{1,N_c}^M$; then, the softmax is applied block-wise on each of the $N_c$ blocks, where block $i$ has $q_{1,i}^M$ inputs, $i=1,2,..,N_c$. \begin{figure*}[t!] \begin{center} \includegraphics{Fig1.pdf} \caption{Scheme of the CNN classification-like structure considered in this paper.
} \label{fig:setup} \end{center} \end{figure*} For training the network we consider the following basic custom loss: \begin{equation} \mathcal{L}({\bf x}) = - \frac{1}{N_c} \sum_{i=1}^{N_c} \left(\sum_{j \in [1: q_{1,i}^M]} y_{ij} \log(f_{ij}({\bf x})) \right), \label{eq.loss} \end{equation} where ${y}_{ij}$ denotes the ground-truth label corresponding to ${q}_{1,i}$. According to Eq. \eqref{eq.loss}, a cross-entropy loss is first computed on each block separately; then, the loss is defined as the average of all the cross-entropy loss terms. Figure \ref{fig:setup} illustrates the scheme of the CNN considered for the estimation, with specific focus on the output layer. In the figure, ${\bf y}$ denotes the ground-truth vector of the $N_c$ one-hot encoded vectors, having dimensionality $q_{1,1}^M + q_{1,2}^M + \cdots + q_{1,N_c}^M$, and ${f}({\bf x})$ the output soft vector of the CNN, having the same dimensionality as ${\bf y}$. Formally, ${\bf y} = {\bf y}_1 \oplus {\bf y}_2 \oplus \cdots \oplus {\bf y}_{N_c}$, where $\oplus$ denotes the horizontal concatenation, that is, \begin{equation} {\bf y} = [y_{11}, y_{12},...,y_{1 q_{1,1}^M}, y_{21}, \cdots y_{N_c 1}, ..., y_{N_c q_{1,N_c}^M}], \end{equation} and, similarly, $f({\bf x}) = f_1({\bf x}) \oplus \cdots \oplus f_{N_c}({\bf x})$, where $f_i({\bf x}) = (f_{ij}({\bf x}))_{j=1}^{q_{1,i}^M}$. As we said, $q_{1,i}^M$, for every $i$, is determined considering the value assumed by the $i$-th coefficient in the quantization matrix corresponding to the lowest $QF_1$ considered (which we set to $50$ in our experiments). Then, the final estimated vector $(\hat{\bf q}_{1})_{N_c}$ is given by \begin{equation} \hat{q}_{1,i} = \arg\max_j f_{ij}({\bf x}), \quad i=1,...,N_c.
\end{equation} \subsection{Refined loss function} For a given image ${\bf x}$ and final predicted vector $(\hat{{\bf q}}_{1})_{N_c}$, the accuracy of the estimation is averaged over all the $N_c$ coefficients, that is, Accuracy$({\bf x}) = (1/{N_c})\sum_{i=1}^{N_c}\delta({q}_{1,i}({\bf x}),\hat{{q}}_{1,i}({\bf x}))$, where $\delta$ is the Kronecker delta ($\delta(a,b) = 1$ if $a = b$, $0$ otherwise). The Mean Square Error (MSE) of the estimation is given by ${\text{MSE}}({\bf x}) = (1/N_c)\sum_{i=1}^{N_c} |q_{1,i}({\bf x}) - \hat{q}_{1,i}({\bf x})|^2$. The new classification-like structure trained with the loss function $\mathcal{L}$ attempts to maximize the accuracy of the estimation, without caring about the MSE of the estimation. From Eq. \eqref{eq.loss}, it is easy to argue that solutions yielding large MSE values are not penalized compared to those yielding lower values, as long as the soft values associated with the '1' positions in vector ${\bf y}$ are the same ($\mathcal{L}({\bf x})$ takes the same value). Said differently, an incorrect decision on the value of $q_{1,i}$ for some $i$, resulting in a wrong one-hot encoded vector, may lead to the same value of the loss function in Eq. \eqref{eq.loss} regardless of the estimated value, or rather, regardless of the difference between the true and estimated values, i.e., $|q_{1,i} - \hat{q}_{1,i}|$. Since both the accuracy and the MSE of the estimation are important in practice, we would like to achieve a high estimation accuracy without paying (much) in terms of MSE. In order to address this issue, we investigated two possible solutions, corresponding to two possible refinements of the loss function.
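The output-layer bookkeeping described above can be sketched as follows. The block sizes and logits below are toy values chosen for illustration; in the network, the logits come from the DenseNet backbone and the block sizes are the $q_{1,i}^M$'s derived from $QF_1 = 50$. The loss is written with the conventional minus sign of the cross-entropy:

```python
import numpy as np

def one_hot_blocks(q, q_max):
    """Concatenated one-hot encoding y: block i has q_max[i] slots and the
    '1' sits at position q[i] (coefficient values start at 1)."""
    y = []
    for qi, mi in zip(q, q_max):
        block = np.zeros(mi)
        block[qi - 1] = 1.0
        y.append(block)
    return np.concatenate(y)

def blockwise_softmax(logits, q_max):
    """Softmax applied independently on each of the N_c blocks."""
    out, start = [], 0
    for mi in q_max:
        z = logits[start:start + mi]
        e = np.exp(z - z.max())          # stabilized softmax
        out.append(e / e.sum())
        start += mi
    return np.concatenate(out)

def loss(f, y, q_max):
    """Basic custom loss: average of the per-block cross-entropies."""
    ce, start = 0.0, 0
    for mi in q_max:
        ce -= np.sum(y[start:start + mi] * np.log(f[start:start + mi]))
        start += mi
    return ce / len(q_max)

def decode(f, q_max):
    """Estimated coefficients: per-block argmax (positions count from 1)."""
    q_hat, start = [], 0
    for mi in q_max:
        q_hat.append(int(np.argmax(f[start:start + mi])) + 1)
        start += mi
    return q_hat

# Toy example with three blocks (illustrative sizes, not the actual q^M values)
q_max = [16, 12, 12]
q_true = [8, 6, 7]
y = one_hot_blocks(q_true, q_max)
f = blockwise_softmax(np.random.default_rng(0).normal(size=sum(q_max)), q_max)
print(decode(f, q_max), loss(f, y, q_max))
```

Each softmax block sums to one, so $f$ behaves as $N_c$ independent categorical distributions, one per coefficient.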
The first solution was to use a ``smooth'' categorical cross-entropy loss that keeps all the advantages of the standard cross-entropy loss, but at the same time assigns different weights to the errors depending on the position of the ``1'' inside each one-hot encoded vector, that is, depending on $|q_{1,i} - \hat{q}_{1,i}|$ for each $i$. The second solution, which gave better results, was to jointly perform classification and regression by considering a combined loss (as done by some approaches in the deep learning and standard machine learning literature \cite{joint1,joint2,joint3}). Given the classification-like architecture considered, defining a suitable loss function that takes into account the distance between the estimate and the true value, and penalizes large values of such distance, is not obvious. To do so, we define a vector ${\bf d}_y$ that reports in each position the distance from the '1' in the corresponding one-hot encoded vector ${\bf y}_{i}$ (see Figure \ref{fig:mapping-ytody}). Formally, let ${\bf d}_y = {\bf d}_{y,1} \oplus {\bf d}_{y,2} \oplus \cdots \oplus {\bf d}_{y,N_c}$. The $j$-th component of ${\bf d}_{y,i}$, $i=1,\cdots, N_c$, is given by \small \begin{align} & d_{y,i,j} = \nonumber\\ & \hspace{-0.5cm} \left\{\begin{array}{ll} |j - q_{1,1}| & 1 \le j \le q_{1,1}^M \\ |j - q_{1,1}^M - q_{1,2}| & q_{1,1}^M + 1 \le j \le q_{1,1}^M + q_{1,2}^M\\ \cdots & \cdots\\ \big|j - \left(\underset{i=1}{\overset{N_c - 1}{\sum}} q_{1,i}^M\right) - q_{1, N_c}\big| & \underset{i=1}{\overset{N_c - 1}{\sum}} q_{1,i}^M + 1 \le j \le \underset{i=1}{\overset{N_c}{\sum}} q_{1,i}^M, \\ \end{array}\right. \label{eq.dy} \end{align} \normalsize where ${\bf q}_1$ is the true vector of coefficients. \begin{figure}[t!]
\begin{center} \includegraphics[width=8cm]{Fig2.pdf} \caption{Vector ${\bf d}_y$ of the relative distances obtained from ${\bf y}$.} \label{fig:mapping-ytody} \end{center} \end{figure} Then, we define the combined loss function as follows: \begin{align} & \mathcal{L}^r({\bf x}) = - c \cdot \frac{1}{N_c} \sum_{i=1}^{N_c} \left(\sum_{j \in [1: q_{1,i}^M]} y_{ij} \log(f_{ij}({\bf x})) \right) + \hspace{0.5cm} \nonumber\\ & \hspace{3cm} + (1-c) \cdot \left(f^{T}({\bf x}) \cdot {\bf d}_y\right), \label{eq.loss_comb} \end{align} where $c$ is a constant, $0 < c < 1$, determining the trade-off between the two terms. We observe that, for each $i$, the contribution to the second term is large when $\arg\max_{j} f_{ij}({\bf x})$, that is, $\hat{q}_{1,i}$, is far from $q_{1,i}$, and small otherwise (the second term is 0 when $\arg\max_{j} f_{ij}({\bf x}) = q_{1,i}$ for every $i$, that is, in the case of ideal estimation). The refined loss, then, indirectly takes into account the MSE of the estimation via the second term. Moreover, the second term is continuously differentiable and therefore well suited for backpropagation. Preliminary experiments confirmed that, as expected, by adopting the loss function $\mathcal{L}^r$ instead of $\mathcal{L}$, a significantly lower MSE can be reached, at the price of a possible slight decrease in accuracy. In our experiments, by training the CNN model on a mixture of $QF_1$ values ($QF_1 \in \{60,65,70,75,80,85,90,95,98\}$) with $QF_2 = 90$ for the same number of epochs (100) with the $\mathcal{L}$ and $\mathcal{L}^r$ losses, we obtained an average accuracy and average MSE of 0.8511 and 1.990 in the first case, and 0.8516 and 0.9221 in the second case.
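As an illustration, the construction of the distance vector ${\bf d}_y$ and the combined loss can be sketched in NumPy as follows. This is a toy re-implementation under our own naming (\texttt{distance\_vector}, \texttt{refined\_loss}); inside each block, the ($1$-based) local position $j$ is assumed to correspond to the coefficient value, and the cross-entropy term is written as a minimization objective.

```python
import numpy as np

def distance_vector(q1, qmax):
    """d_y: inside block i, local position j (1-based) stores |j - q_{1,i}|;
    the per-block vectors are then concatenated, as in Figure (mapping-ytody)."""
    return np.concatenate(
        [np.abs(np.arange(1, m + 1) - q) for q, m in zip(q1, qmax)])

def refined_loss(f, y, q1, qmax, c=0.8):
    """Combined loss: c * (block-averaged cross-entropy) + (1-c) * f^T d_y."""
    ce, start = 0.0, 0
    for m in qmax:
        ce -= float(np.sum(y[start:start + m] * np.log(f[start:start + m])))
        start += m
    return c * ce / len(qmax) + (1 - c) * float(f @ distance_vector(q1, qmax))
```

Note how soft mass placed far from the true coefficient inflates the second term, while mass on the correct position contributes zero to it.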
The proposed solution is compared with the state-of-the-art approach in \cite{niu2019SPL} based on deep learning and, for the aligned case, also with those in \cite{Galvan2014,Bianchi2012} based on statistical analysis. In fact, while the method in \cite{niu2019SPL} always outperforms all the previous methods in the non-aligned scenario (e.g. \cite{Bianchi2012,Dalmia2018}), in the aligned case there are situations where the accuracies achieved by the methods in \cite{Galvan2014,Bianchi2012}, tailored for the aligned scenario, are superior to those of \cite{niu2019SPL}. \subsection{Backbone architecture} \label{sec.architecture} For the design of the internal layers of the CNN, we considered the DenseNet architecture \cite{huang2017densely}, which was also adopted in \cite{niu2019SPL}. Such a backbone architecture has recently been adopted for several image forensic tasks, see for instance \cite{chen2019multi,kamal2018applicationDense,yang2014effective}, yielding improved performance compared to that achieved with traditional CNN architectures (e.g. residual-based networks). The main feature of the dense structure is that it connects each layer to every other layer in a dense block in a feed-forward fashion. In this way, the features extracted by the various layers are reused by the subsequent layers throughout the same dense block (hierarchical structure). The dense connectivity has been shown in \cite{huang2017densely} to mitigate the gradient vanishing problem. The number of links in the network increases compared to traditional CNN architectures, passing from $l$ to $l(l+1)/2$ for each dense block, where $l$ is the number of layers in the block. However, as an advantage, the number of (to-be-trained) parameters is significantly reduced. Following the original dense structure (see \cite{huang2017densely}), we considered a network depth of $40$, with 3 dense blocks and growth rate $k = 12$.
Each dense block consists of 12 convolutional layers followed by a transition layer, where $2\times 2$ average pooling is performed to decrease the input size. All the convolutions have kernel size $3 \times 3 \times 12$. The default dropout of $0.2$ is used. An initial convolution with $24$ ($2k$) filters of size $3 \times 3$ is performed before the first dense block. For more details on the dense structure we refer to \cite{huang2017densely}. After the last dense block, global average pooling is performed and the feature vector is fed to the fully connected layer. The number of output nodes of the fully-connected layer is set to $(q_{1,1}^M + q_{1,2}^M + \cdots + q_{1,N_c}^M)$. A softmax is applied to each block of $q_{1,i}^M$ nodes independently, for a total of $N_c$ softmaxes, as illustrated in Figure \ref{fig:setup}. \subsection{Datasets} As in \cite{niu2019SPL}, a model for $Q_1$ estimation is trained for a fixed value of $QF_2$. This does not represent a limitation in practice, since the information on the second compression is always available. The final quantization matrix, in fact, can be recovered from the JPEG file, and is necessary to decompress the image, i.e., to get the image in the pixel domain. Moreover, when the image is re-saved in an uncompressed format, the quantization matrix of the last compression can be accurately estimated \cite{bestagini2012video}. As a drawback, a model has to be trained for every matrix of the second quantization, which may be time-consuming (the same happens with the method in \cite{niu2019SPL}). However, since training has to be performed only once, this does not represent a big issue.
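The per-block softmax applied at the output layer can be sketched as follows (an illustrative NumPy version under an assumed name, \texttt{multi\_softmax}; in the actual model this operates on the logits produced by the fully-connected layer):

```python
import numpy as np

def multi_softmax(logits, blocks):
    """Apply an independent softmax to each block of q_{1,i}^M logits,
    so that the soft outputs of every block sum to one."""
    out, start = [], 0
    for m in blocks:
        z = logits[start:start + m]
        e = np.exp(z - np.max(z))  # shift by the max for numerical stability
        out.append(e / e.sum())
        start += m
    return np.concatenate(out)
```

Each block thus yields a proper probability distribution over the candidate values of one quantization coefficient, independently of the other blocks.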
Moreover, our experiments show that a network trained for a given $QF_2$ generalizes fairly well to different $QF_2$'s or, more generally, to a different $Q_2$ matrix, as long as the difference is not too large (a $\pm 2$ mismatch in the QF results in a very small decrease of performance); hence, a limited number of models needs to be trained. The training and testing datasets are built as described in the following. We considered the RAISE dataset \cite{RAISE8K} with 8156 native (tiff) images, which we split into a training and a test set. Specifically, 7000 images were considered for training, while the remaining 1156 were reserved for testing. The images were then compressed first with several $QF_1$'s and then with the prescribed $QF_2$, thus obtaining several double compressed versions (both $QF_1$ values larger and smaller than $QF_2$ were considered). JPEG compression was performed with OpenCV. To simulate the misalignment, we applied a random grid shift $(r,c)$ between the two compressions, with $r$ and $c$ independently and uniformly drawn from the $[0:7]$ range. Therefore, the JPEG is non-aligned with probability 63/64, while the aligned scenario (which corresponds to the case $r = c = 0$) occurs with probability 1/64. To build the dataset of patches used for training, we proceeded as follows: for every $QF_1$, we cropped the DJPEG images in the training set into patches of size $64\times 64\times 3$; then, from each image we took 100 patches in random positions; we stopped collecting patches when a total number of $10^{5}$ patches was reached (thus coming from 1000 images) for each given $QF_1$.\footnote{For every $QF_1$, a random shuffle was applied to the 7000 DJPEG images compressed with $(QF_1, QF_2)$, so the subset of images considered for every $QF_1$ was never the same.} For our experiments we set $QF_2=90$ and $80$.
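The random grid misalignment described above can be simulated with a few lines (an illustrative sketch with an assumed function name; how the shift is applied to the pixels before the second compression is omitted here):

```python
import random

def sample_grid_shift(rng):
    """Draw the shift (r, c) between the two JPEG grids, each component
    uniform in [0:7]; (0, 0) corresponds to aligned recompression."""
    return rng.randrange(8), rng.randrange(8)

# Over the 64 equally likely shifts, exactly one is aligned, so the
# double compression is non-aligned with probability 63/64.
n_aligned = sum((r, c) == (0, 0) for r in range(8) for c in range(8))
```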
Specifically, for $QF_2=90$, we built the training dataset ${\mathcal{D}}^{90}$ by considering $QF_1 \in [60:98]$, for a total of $3.9 \times 10^{6}$ patches. For $QF_2=80$, we built the set ${\mathcal{D}}^{80}$ by considering $QF_1 \in [55:98]$, for a total of $4.4 \times 10^{6}$ patches. The test patch set was obtained in the same way. In this case, for every $QF_1$, all the 1156 images in the test set were considered, each one contributing 100 random patches (for a total of 115600 patches). \CH{Following \cite{niu2019SPL}, we directly feed the 3 ($R$, $G$ and $B$) channels to the CNN. Another possibility would be to first perform color space conversion from $RGB$ to $Y C_b C_r$ and then feed the transformed image to the CNN (since passing from $RGB$ to $Y C_b C_r$ corresponds to the application of a linear mapping, the network should in principle be able to learn the mapping itself, if it benefits the learning process).} The Dresden dataset \cite{Dresden}, consisting of 1491 raw images, was also considered to test the performance under dataset mismatch (hence, for each $QF_1$, the performance was tested on a total of 149100 patches). \CH{The size of the images is around 3600$\times$2700, which is a bit smaller than the size of the RAISE images (4928 $\times$ 3264). To further investigate the behavior of our method in the presence of a resolution mismatch, we also consider subsampled versions of the raw images from the Dresden database, corresponding to a resolution less than half of the one considered for training.} \CH{Finally, we also ran some tests in the case where the first compression is carried out with Photoshop (PS), which does not use standard quantization matrices, thus representing a case of strong mismatch between training and testing.} \subsection{Setting} In all the experiments, we set $N_c = 15$, which is the value considered in most prior works \cite{Bianchi2012,Cogranne2019,Galvan2014,Dalmia2018, niu2019SPL}.
Hence, we estimate the first 15 DCT coefficients, in zig-zag scan order. For the implementation of the proposed method and the custom loss, we used TensorFlow version 2.2. Model training and testing were carried out in Python via TensorFlow, using the Keras API. We ran our experiments on two Nvidia GeForce RTX 2080 Ti 11 GB GDDR6 GPUs. For the optimization, the Adam solver was used with learning rate $10^{-5}$. The batch size for training and testing was set to 32 images. We obtained our models by training the network with the $\mathcal{L}^r$ loss for 100 epochs. After this number of epochs we verified that the loss decreases very slightly (less than 0.01\% at every iteration) and the accuracy of the estimation cannot be improved further by letting the training go on (only incurring the risk of overfitting). The weight in the combined loss $\mathcal{L}^r$ in Eq. \eqref{eq.loss_comb} was set to $c=0.8$. The code is publicly available at the following link: \url{https://tinyurl.com/yxhl32w5}. \begin{figure}[th!] \begin{center} \includegraphics[width=8cm]{Fig3.pdf} \caption{Average accuracy (top) and average MSE (bottom) of the estimation for each of the 15 DCT coefficients, for $QF_2 = 90$, in the non-aligned DJPEG scenario. } \label{fig:AccuracyPerf_DCTcoeff} \end{center} \end{figure} \begin{figure}[th!] \begin{center} \includegraphics[width=8cm]{Fig4.pdf} \caption{Average accuracy (top) and average MSE (bottom) of the estimation for each of the 15 DCT coefficients, for $QF_2 = 90$, in the aligned DJPEG scenario. } \label{fig:MSEPerf_DCTcoeff} \end{center} \end{figure} \begin{figure}[th!]
\begin{center} \includegraphics[width=8cm]{Fig5.pdf} \caption{Average accuracy when $QF_1 < QF_2$ (top left) and when $QF_1 \ge QF_2$ (top right), and average MSE when $QF_1 < QF_2$ (bottom left) and when $QF_1 \ge QF_2$ (bottom right) of the estimation for each of the 15 DCT coefficients, for $QF_2 = 90$, in the aligned DJPEG scenario.} \label{fig:seperate_DCTcoeff} \end{center} \end{figure} \section{Results} \label{sec.results} \subsection{Comparison with existing methods} The average performance achieved by the CNN model with the new architecture, trained with the $\mathcal{L}^r$ loss on ${\mathcal{D}}^{90}$, is reported in Table \ref{tab:tableResStep1_avg}, where it is compared to that achieved by the CNN model in \cite{niu2019SPL}, trained for the same value of $QF_2 = 90$. The performance is averaged under the same setting considered for the training regarding the alignment of the DJPEG, i.e., the test patches are double JPEG compressed, aligned with probability 1/64 and non-aligned with probability 63/64. The performance of the new model is superior to that achieved by the method in \cite{niu2019SPL} both in terms of accuracy and of MSE. \begin{table}[t!] \begin{tabular}{l|c|c} \hline & Prop CNN & CNN in \cite{niu2019SPL} \\ \hline AvgAcc & {\bf 0.689} & 0.547 \\ AvgMSE & {\bf 0.731} & 0.882 \end{tabular}% \vspace{0.2cm} \caption{Average performance of the proposed CNN estimator and \cite{niu2019SPL} for $QF_2 = 90$. The DJPEG is non-aligned with probability 63/64, aligned with probability 1/64 (same setting considered for the training of the models).} \label{tab:tableResStep1_avg} \end{table} \begin{table}[t!]
\begin{tabular}{l|c|c|c|c} \hline & Prop CNN & CNN in \cite{niu2019SPL} & \cite{Bianchi2012} & \cite{Galvan2014}\\ \hline AvgAcc & {\bf 0.627} & 0.463 & 0.366 & 0.518 \\ AvgMSE & {\bf 1.120} & 1.326 & 28.016 & 9.091 \end{tabular}% \vspace{0.2cm} \caption{Average performance of the proposed CNN estimator in the aligned case for $QF_2 = 90$, and comparison with the state-of-the-art.} \label{tab:tableResStep1_avg_Al} \end{table} The performance achieved by the methods in the aligned DJPEG scenario is reported in Table \ref{tab:tableResStep1_avg_Al}. The performance in the aligned scenario is slightly inferior to that reported in Table \ref{tab:tableResStep1_avg} for the mixed case. From the results of Table \ref{tab:tableResStep1_avg_Al}, we see that the proposed method also outperforms \cite{Galvan2014}, which is specifically designed for the aligned case, both in terms of MSE and accuracy. The estimation accuracy and MSE for each DCT coefficient are reported for the non-aligned and aligned case in Figures \ref{fig:AccuracyPerf_DCTcoeff} and \ref{fig:MSEPerf_DCTcoeff}, where the results are averaged over all the $QF_1$'s. A comparison with the methods in \cite{Galvan2014,Bianchi2012} is also reported for the aligned case. It can be observed that the proposed method greatly outperforms these methods for all the 15 DCT coefficients. Figure \ref{fig:seperate_DCTcoeff} shows the averaged results when $QF_1 < QF_2$ and $QF_1 \ge QF_2$ respectively, in the case $QF_2 = 90$ for the aligned scenario. It can be noticed that when $QF_1 \ge QF_2$ the proposed method clearly outperforms the methods in \cite{Galvan2014,Bianchi2012}, both in terms of accuracy and MSE. When $QF_1 < QF_2$, the proposed method still outperforms the state-of-the-art methods.
The average performance obtained for the case $QF_2 = 80$ (model trained on ${\mathcal{D}}^{80}$) is reported in Table \ref{tab:tableResStep1_avg80} for the non-aligned scenario and Table \ref{tab:tableResStep1_avg80_Al} for the aligned scenario. A performance loss is experienced by all the methods. This was expected, since with a smaller $QF_2$ the second quantization tends to erase more of the traces of the first compression, thus making the estimation harder. Nevertheless, the proposed method has an advantage over the state-of-the-art. We see that the CNN model in \cite{niu2019SPL} does better than our method in terms of MSE for the aligned case; however, our method outperforms \cite{niu2019SPL} in terms of accuracy in the aligned case, and both in terms of accuracy and MSE in the non-aligned case, which is our main focus. Figures \ref{fig:AccuracyPerf_DCTcoeff80} and \ref{fig:MSEPerf_DCTcoeff80} report the results for the 15 DCT coefficients in the case $QF_2 = 80$, averaged over all the $QF_1$'s, for the non-aligned and aligned case respectively. We see that the gain of the proposed method is confirmed. \subsection{Generalization capability} The generalization capability of the model is tested by considering several sources of mismatch, i.e., the second compression quality $QF_2$ and the image database. The results in the presence of $QF_2$ mismatch are reported in Figure \ref{fig:Q2mismatch}, where the model trained on ${\mathcal{D}}^{90}$ is tested on images compressed with $QF_2 = 92$, in the same general setting regarding the alignment considered for training (that is, the DJPEG is aligned with probability 1/64). We see that the drop in performance is limited and similar for the two methods, proving a certain generalization capability. The performance of the proposed CNN remains superior to \cite{niu2019SPL}: the total AvgAcc and AvgMSE are respectively 0.591 and 0.859 for our method, and 0.500 and 0.944 for the method in \cite{niu2019SPL}.
\CH{Notably, in the aligned scenario, the performance remains superior to that of the state-of-the-art methods in \cite{Galvan2014} and \cite{Bianchi2012} designed for this case. Specifically, the AvgAcc and AvgMSE are 0.579 and 1.029 for our method, 0.331 and 15.8 for \cite{Galvan2014}, and 0.397 and 35.5 for \cite{Bianchi2012}.} To assess the impact that dataset mismatch has on the performance of our CNN-based estimator, we also evaluate the performance of the estimator on DJPEG images coming from the Dresden dataset. Figure \ref{fig:DBmismatch} reports the results of our tests for $QF_2 = 90$. {The total AvgAcc and AvgMSE are respectively 0.644 and 0.538 for our method, and 0.523 and 0.694 for the method in \cite{niu2019SPL}.} \CH{The results with the half-resolution images from the Dresden dataset (strong resolution mismatch) are reported in Figure \ref{fig:DBmismatch-low}. As expected, the performance decreases, but not seriously so, and the superiority of our method over \cite{niu2019SPL} is confirmed in this case as well.} \begin{figure}[th!] \begin{center} \includegraphics[width=8cm]{Fig6.pdf} \caption{Average accuracy (top) and MSE (bottom) of the estimation for each of the 15 DCT coefficients, for $QF_2 = 80$, in the non-aligned DJPEG scenario.} \label{fig:AccuracyPerf_DCTcoeff80} \end{center} \end{figure} \begin{figure}[th!] \begin{center} \includegraphics[width=8cm]{Fig7.pdf} \caption{Average accuracy (top) and MSE (bottom) of the estimation for each of the 15 DCT coefficients, for $QF_2 = 80$, in the aligned DJPEG scenario.} \label{fig:MSEPerf_DCTcoeff80} \end{center} \end{figure} \begin{table}[th!] \begin{tabular}{l|c|c} \hline & Prop CNN & CNN in \cite{niu2019SPL} \\ \hline AvgAcc & {\bf 0.440} & 0.375 \\ AvgMSE & {\bf 1.866} & 1.946 \end{tabular}% \vspace{0.2cm} \caption{Average performance of the proposed CNN estimator and \cite{niu2019SPL} for $QF_2 = 80$.
The DJPEG is non-aligned with probability 63/64 (same setting considered for the training of the models), aligned with probability 1/64.} \label{tab:tableResStep1_avg80} \end{table} \begin{table}[th!] \begin{tabular}{l|c|c|c|c} \hline & Prop CNN & CNN in \cite{niu2019SPL} & \cite{Bianchi2012} & \cite{Galvan2014}\\ \hline AvgAcc & {\bf 0.360} & 0.254 & 0.202 & 0.282 \\ AvgMSE & 5.594 & {\bf 4.970} & 35.453 & 15.061 \end{tabular}% \vspace{0.2cm} \caption{Average performance of the proposed CNN estimator in the aligned case for $QF_2 = 80$, and comparison with the state-of-the-art.} \label{tab:tableResStep1_avg80_Al} \end{table} \begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{Fig8.pdf} \caption{Performance of the CNN estimator under mismatched $QF_2$ ($QF_2 = 92$). Average accuracy (top) and MSE (bottom) of the estimation for each of the 15 DCT coefficients. } \label{fig:Q2mismatch} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{Fig9.pdf} \caption{Performance of the estimator on DJPEG images from a different database (Dresden), for $QF_2 = 90$. Average accuracy (top) and MSE (bottom) of the estimation for each of the 15 DCT coefficients.} \label{fig:DBmismatch} \end{center} \end{figure} \begin{figure}[t!] \begin{minipage}{0.95\linewidth} \begin{center} \includegraphics[width = 8cm]{Fig12.pdf} \end{center} \end{minipage} \begin{minipage}{0.95\linewidth} \begin{center} \includegraphics[width = 8cm]{Fig13.pdf} \end{center} \end{minipage} \caption{\CH{Performance of the estimator on DJPEG low resolution images from the Dresden dataset ($QF_2 = 90$). Average accuracy (top) and MSE (bottom) of the estimation for each of the 15 DCT coefficients.
}} \label{fig:DBmismatch-low} \end{figure} \CH{Finally, the results of our tests in the case where the first compression is performed with non-standard quantization matrices are provided in Figure \ref{fig:PSqualities}, where we report the accuracy and MSE achieved for some common medium-high Photoshop qualities (compression levels from 7 to 12), for the aligned and non-aligned case respectively. The performance is compared with \cite{niu2019SPL} and with the baseline model-based methods for this case. The performance of \cite{niu2019SPL} and the proposed method are very similar, with our method being only slightly superior on average and in terms of MSE. For the non-aligned case, both methods significantly outperform the method in \cite{Bianchi2012} and the one in \cite{Dalmia2018} for PS qualities above 8. In the aligned case, instead, the tailored method in \cite{Galvan2014} works better for low PS qualities, while, for higher qualities, the CNN-based estimators achieve much better performance (the case where the first quantization step is smaller than the second one is a scenario that model-based techniques cannot handle properly). Clearly, the performance of our CNN estimator for low PS qualities can be improved if the model is trained, or even just fine-tuned, considering examples of JPEG images compressed with non-standard quantization matrices.} \begin{figure}[t!] \begin{center} \includegraphics[width=8cm]{Fig11.pdf} \caption{\CH{Performance of the estimator when the first compression is performed with Photoshop for several PS qualities ($QF_2 = 90$), in the aligned (first column) and non-aligned (second column) scenario.}} \label{fig:PSqualities} \end{center} \end{figure} \subsection{Application to tampering localization} \label{sec.localization} Given the capability of our CNN estimator to work on small image patches, the method can be quite straightforwardly exploited to localize possibly tampered regions in a DJPEG image.
Specifically, given a DJPEG tampered image, the estimator can be applied on sliding windows to get a map with the estimated primary quantization coefficients $(\hat{{\bf q}}_1)_{N_c}$ for each $64\times 64$ block. Notably, by looking at those maps, the tampering can be exposed in the general scenario where both the background and the foreground (tampered areas) are DJPEG, that is, both the background and the copy-pasted region were originally JPEG (compressed with different qualities) and undergo a second JPEG compression after forging. This corresponds to a very common scenario in practice. In this scenario, methods that try to detect and localize tampering by looking at the presence or absence of typical double compression artifacts do not work, see for instance \cite{bianchi2011improved,amerini2014splicing,wang2016double,Barni2017}, just to mention a few. These methods, in fact, implicitly assume that the background is single compressed while the foreground is double compressed, or vice versa.\footnote{\CH{As we mentioned in the introduction, for a single compressed region, our estimator returns a quantization matrix of a very high compression quality (to give an insight, the coefficients of the quantization matrix for QF=100 are estimated with an accuracy larger than 0.8). Therefore, when the method is applied for tampering localization in this scenario, the manipulation can still be exposed based on the inconsistencies between the estimated coefficients of background and foreground.}} In order to get a localization map, we first divide the input image ${\bf x}$ of size $V \times L \times 3$ into overlapping blocks of size 64 $\times$ 64 with stride $s = 1$; then, each block is fed to the CNN, which returns a vector with the first $N_c$ estimated quantization coefficients.
Let \mbox{$QM(i,j,:) = f([{\bf x}_{ij}])$} be the network output when the input is the $(i,j)$-th image block of size 64 $\times$ 64 $\times$ 3; then, $QM(i,j,:) = ({\hat{\bf q}}_1([{\bf x}_{ij}]))_{N_c}$. In this way, for each $k = 1,...,N_c$, we obtain a map $QM(:,:,k)$ with the estimated values of the $k$-th coefficient for each block. \begin{figure*}[h!] \includegraphics[width=\textwidth]{Fig10.pdf} \caption{Examples of tampered (double JPEG) images and maps of the estimated $k$-th $Q_1$ coefficient, for some values of $k$. {For each example, we report, from left to right: the tampered image, the ground-truth tampering map, and the estimated maps for the DCT coefficients no. 1, 6 and 14, that is, $QM(:,:,1)$, $QM(:,:,6)$ and $QM(:,:,14)$}.} \label{tampered-imgs} \end{figure*} We performed a qualitative analysis. Figure \ref{tampered-imgs} shows two examples. {For both tampered images, we have two distinct tampered areas, where the copy-pasted regions have different first JPEG qualities, that is, $QF_{1,1}$ = 95 and $QF_{1,2}$ = 85 for the first example and $QF_{1,1}$ = 65 and $QF_{1,2}$ = 95 for the second one. The first JPEG quality for the background of the two examples is 75. The last quality factor for both examples is $QF_2 = 90$. None of the JPEG grids are aligned.} {For the sake of visualization, a color map is reported. The color map shows that the two tampered regions have a different $q_{1,k}$ from the background; interestingly, the color map also reveals that the $q_{1,k}$ value differs between them, hence that they correspond to two distinct tamperings (the copy-pasted regions come from different donor images, having different JPEG compression qualities).} In general, if the qualities of the former JPEG are close, it is harder to visualize and expose the tampering by simple inspection of the $N_c$ maps $QM(:,:,k)$ of the $q_{1,k}$ coefficients, especially since some of the coefficients might have the same value.
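The sliding-window construction of the maps $QM(:,:,k)$ described above can be sketched as follows (an illustrative NumPy version under assumed names, where \texttt{estimator} stands in for the trained CNN returning the $N_c$ estimated coefficients of a $64\times 64 \times 3$ block):

```python
import numpy as np

def quantization_maps(x, estimator, n_c=15, patch=64, stride=1):
    """Slide a patch x patch window over image x (V x L x 3) and stack the
    N_c estimated coefficients into an array QM of shape (V', L', N_c),
    so that QM[:, :, k] is the map of the k-th coefficient."""
    rows = (x.shape[0] - patch) // stride + 1
    cols = (x.shape[1] - patch) // stride + 1
    QM = np.zeros((rows, cols, n_c))
    for i in range(rows):
        for j in range(cols):
            block = x[i * stride:i * stride + patch,
                      j * stride:j * stride + patch]
            QM[i, j, :] = estimator(block)
    return QM
```

In practice the per-block predictions would be batched for efficiency; the double loop above only makes the indexing explicit.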
In these cases, we could resort to clustering to obtain a tampering localization map from the vectors of the estimated $q_{1,k}$ values at each position. This interesting analysis is left as future work. \section{Conclusions} \label{sec.conclusions} In this paper, we proposed a method for primary quantization matrix estimation via CNNs, which resorts to a classification-like architecture to perform the estimation of the quantization coefficients. Thanks to the adoption of a classification-like structure, the new CNN estimator achieves improved performance with respect to the CNN regression-based method in \cite{niu2019SPL}, both in terms of accuracy and MSE. Notably, the proposed method is a general one, which can work under a wide variety of operative conditions, i.e., when the second compression grid is either aligned or not with the one of the first compression, and for every combination of the qualities of the first and second compression. Regarding the JPEG alignment, the method is designed to work in particular for the case of non-aligned double JPEG compression (the aligned case is assumed to occur with probability 1/64). A method capable of dealing with primary quantization estimation in the non-aligned scenario is, in fact, very relevant when the proposed estimator is used for image tampering localization (when a region of a JPEG image is copy-pasted into another JPEG image, very likely, the alignment between the compression grids is not preserved and the final JPEG is non-aligned with the grid of the spliced area). Despite its generality, the proposed method also outperforms the existing dedicated state-of-the-art solutions for the aligned scenario in most of the cases. The method provides very good performance also in the challenging case of $QF_1 > QF_2$, where state-of-the-art methods based on statistical analysis often fail.
More importantly, the estimator works on small image patches, which opens the way to the application of the method to tampering localization (see Section \ref{sec.localization}). As future research, the application of the method to tampering localization in DJPEG images, possibly including the identification of the different tampering sources (donors), is an interesting direction. In addition, the robustness of the estimator in the presence of adversarial attacks could also be investigated. From a more general perspective, we believe that an architecture similar to the one proposed in this paper could also be exploited to address other estimation problems which are relevant in image forensics. \begin{backmatter} \section*{Funding} This work has been partially supported by the National Natural Science Foundation of China (Grant 61872244), the Guangdong Basic and Applied Basic Research Foundation (Grant 2019B151502001), the Shenzhen R\&D Program (Grants JCYJ20200109105008228 and JCYJ20180305124325555) and by the Italian Ministry of University and Research (MUR) under the PRIN 2017 2017Z595XS-001 program - PREMIER project. \section*{Abbreviations} JPEG: Joint Photographic Experts Group. DJPEG: double JPEG. DCT: discrete cosine transform. QF: quality factor. CNN: convolutional neural network. MSE: mean squared error. AvgAcc: average accuracy. AvgMSE: average mean squared error. API: application programming interface. \section*{Availability of data and materials} The datasets used for the experiments are publicly available in \cite{RAISE8K} and \cite{Dresden}. The code is available at the following Github link: \url{https://github.com/andreacos/BoostingCNN-Jpeg-Primary-Quantization-Matrix-Estimation}. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Author's contributions} BT conceived the presented idea and developed the theoretical formalism. BL supervised the study and helped interpret the results.
BT wrote the paper, with the support of BL. AC and BT conceived and planned the experiments. AC implemented the method and helped interpret the results. DH and AC carried out the experiments and positioned the work in the current state of research. \bibliographystyle{bmc-mathphys}
\section{Introduction} Traditional drug discovery relies on the development and exploration by expert chemists and pharmacologists, which is time-consuming due to the large chemical structure space \cite{walters2018virtual}. Effective methods for collecting chemical structures with desired properties will significantly reduce the number of candidates for wet-lab experiments and thus accelerate the development of novel drugs. Recently, several methods have been proposed to solve the molecular optimization problem within the deep learning framework \cite{jin2018junction, you2018graph,kajino2019molecular,kusner2017grammar,segler2018generating, gomez2018automatic}. The major challenges for molecular optimization mainly lie in generating valid molecular structures and efficiently exploring the vast chemical structure space. Although several methods, including \cite{weininger1988smiles,kusner2017grammar,jin2018junction,you2018graph}, have been proposed to solve the first challenge, they either involve complex network architectures or struggle to optimize properties due to the choices of molecular representations \cite{jin2018junction,kusner2017grammar}. The second challenge is addressed by Bayesian optimization (BO) \cite{jin2018junction,kusner2017grammar,movckus1975bayesian} and reinforcement learning (RL) \cite{you2018graph}. However, few of these methods have considered the high cost of evaluating molecular properties in real-world applications \cite{kajino2019molecular}. In fact, for most chemical and biological properties, such as antibacterial, anticancer and teratogenicity, there are no known explicit functions to directly interpret a chemical structure as a corresponding numerical property score. Hence, time-consuming wet-lab experiments or simulations are typically required to evaluate these properties of molecules, resulting in a limited number of molecules with validated properties.
Therefore, generating molecules with desired properties using only a small number of property evaluations and a small number of molecules with known properties is critical. To tackle these challenges, we propose MNCE-RL, an RL-based framework built on the proposed molecular neighborhood-controlled embedding grammars and a graph convolutional network (GCN). The molecular neighborhood-controlled embedding graph grammars extend neighborhood-controlled embedding (NCE) grammars \cite{fahmy1992survey,janssens1982graph}, a type of sequential context-free graph grammar. As shown in Figure \ref{procedure}, a molecular NCE grammar can be inferred from the input molecular graphs so that each molecule can be represented as a parse tree. In the generation process, an RL agent generates a sequence of production rules and receives a reward from the environment that measures the target property of the generated molecule; this reward is used to update the GCN policy network. Our proposed molecular NCE grammars guarantee chemical validity, and the RL agent can efficiently explore the vast chemical structure space. \begin{figure} \centering \includegraphics[width=\textwidth]{Procedule.png} \caption{Illustration of our framework. We first infer a molecular NCE grammar by representing molecules as molecular graphs, parsing the graphs using neighborhood-controlled rules, and extracting the production rules. In the generation process, a GCN-based policy network samples a sequence of productions from the action space and obtains rewards from the reward function.
The reward function measures the target property of the molecule decoded from the generated action sequence.} \label{procedure} \end{figure} Our major contributions include: 1) a novel molecular NCE grammar and an efficient algorithm to infer production rules from given molecules, which together simplify the generation of valid molecules; 2) a novel GCN architecture that updates both node and edge features to compute feature vectors for nodes in molecular graphs, where updating edge features allows the GCN to capture subtle physical differences between bonds with the same labels and thus leads to better node features for policy decision making; 3) experimental results showing that MNCE-RL significantly outperforms state-of-the-art methods in molecular optimization and has high potential to be useful in drug discovery. \section{Related work} Early methods \cite{segler2018generating,gomez2018automatic,dai2018syntax,guimaraes2017objective} represent molecules as SMILES strings \cite{weininger1988smiles}, where the generation of a molecule is modeled as a Markov decision process (MDP) and recurrent neural networks are used to generate the SMILES string. Compared to the graph representation, the SMILES representation is brittle: a small change in the string may yield a completely different molecule, which makes it hard to optimize molecular properties \cite{kajino2019molecular}. {Winter \textit{et al}. \cite{winter2019efficient} optimize molecular properties in a continuous latent space learned from SMILES strings to overcome this brittleness.} Li \textit{et al}. \cite{li2018learning} first attempted to generate molecules with the graph representation and achieved promising results in generating novel and realistic molecules, but their method cannot guarantee the validity of the generated molecules. To reduce the ratio of invalid molecules, Jin \textit{et al}.
\cite{jin2018junction} (JT-VAE) propose to represent molecules with junction trees, where each node in the tree represents a cluster of atoms, and to optimize properties in the latent space of a variational autoencoder (VAE) by BO. Although the chemical validity constraints are intrinsically satisfied by predefined connections within clusters, uncertainty in combining the generated clusters limits the model's ability to optimize molecular properties. You \textit{et al}. \cite{you2018graph} (GCPN) generate molecular graphs by iteratively adding atoms and edges using a graph convolutional policy network and guarantee chemical validity by imposing chemical constraints on the generated structures. Due to its complex model architecture, GCPN requires a large number of training iterations, which limits its applicability when property evaluation is costly. Kajino \cite{kajino2019molecular} (MHG-VAE) is the first to apply graph grammars to the molecular optimization problem. With a simple VAE architecture, MHG-VAE shows superiority in molecular optimization with a limited number of property evaluations. However, the performance of MHG-VAE is still far from satisfactory, perhaps due to its choice of grammars and its indirect optimization in a latent space. \section{Methods} As mentioned in \cite{kajino2019molecular}, molecular optimization can be formulated as follows: \begin{equation} m^*={\arg\max}_{m\in \mathcal{M}} f(m), \end{equation} where $\mathcal{M}$ is the set of all valid chemical molecules and $f$ is an evaluation function that measures a specific property score of molecule $m$.
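This black-box formulation can be illustrated with a minimal sketch (not the paper's method, which searches the space via grammar-guided RL rather than enumeration; the candidate set and scoring function below are purely hypothetical stand-ins):

```python
# Minimal sketch: molecular optimization as black-box maximization
# m* = argmax_{m in M} f(m). Here M is a toy list of SMILES-like
# strings and f is a stand-in scorer; in practice f is an expensive
# wet-lab experiment or simulation, so calls to f must be rationed.
def optimize(candidates, f, budget):
    """Return the best candidate found within `budget` evaluations of f."""
    best, best_score = None, float("-inf")
    for m in candidates[:budget]:  # each call to f is assumed costly
        score = f(m)
        if score > best_score:
            best, best_score = m, score
    return best, best_score

if __name__ == "__main__":
    toy_candidates = ["C", "CC", "CCC", "CCCC"]
    toy_f = len  # hypothetical score: longer carbon chain = higher score
    best, score = optimize(toy_candidates, toy_f, budget=3)
    print(best, score)
```

The evaluation budget is the binding constraint: with `budget=3`, the fourth (best-scoring) candidate is never examined, which is exactly why later sections measure performance under a limited number of property evaluations.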
We represent a molecule as a graph $H=(V, E, \sigma, \psi)$ by modeling atoms as nodes and bonds as edges, where $V$ is a finite set of nodes, $E$ is a finite set of edges, $\sigma: V\to \Sigma $ is a node-labeling function that projects $V$ to the node label set $\Sigma$, and similarly $\psi: E\to \Psi$ is an edge-labeling function that projects $E$ to the edge label set $\Psi$. Following \cite{kajino2019molecular}, we use the {Kekul\'e} structure of molecules and include the chirality tag in node labels. Using the proposed molecular NCE grammars, the generation of a novel molecule is interpreted as the generation of a parse tree, where each node in the tree represents a production rule. Furthermore, by traversing the parse trees in preorder, the molecular optimization problem is interpreted as the generation of an optimal production sequence, {\it i.e.,} \begin{equation} Prod^*={\arg\max}_{Prod\in \mathcal{P}}f\circ{Dec_\mathcal{P}}(Prod), \end{equation} where $\mathcal{P}$ is the set of all valid sequences of production rules and $Dec_\mathcal{P}:\mathcal{P}\to \mathcal{M} $ is the decoding function that transforms a production sequence into a molecule. The problem can be cast as an MDP and solved in the RL framework, where a GCN is used for node feature aggregation. Given an intermediate production sequence $Prod_t$ generated at time step $t$, the constraints of the molecular NCE grammar allow the next production rule to be selected only from a subset of the production rules. We say that a production rule is legal for $Prod_t$ if it satisfies these grammatical constraints. \subsection{Problem formulation as reinforcement learning} As discussed above, the generation of sequences of production rules can be formulated as a sequential decision problem. Below, we present the design of the state representation, action space, and reward function.
{\bf State.} We denote the state $s_t$ at time step $t$ as the intermediate sequence $Prod_t=p_1p_2...p_{t-1}$, from which a graph $H_t$ can be decoded and the non-terminal node $v_{t}$ to be rewritten at time step $t+1$ is determined. Note that at the first step, $Prod_1$ is an empty sequence and $H_1$ has only one node $v_1$, labeled with the starting symbol. {\bf Action.} The action space is the set of legal production rules for $Prod_t$. At time step $t$, the policy $\pi_\theta(a_t|s_t)$ samples a production rule from the action space, where \begin{equation} \pi_\theta(a_t|s_t)=softmax(\mathbf{F}_{\theta'}(H_t)_{v_t}\mathbf{W}+\mathbf{b}), \end{equation} in which $\mathbf{F}$ is the GCN described in Section \ref{aggnet}, $\theta'$ is the parameter set of $\mathbf{F}$, and $\theta=\{\mathbf{W}, \mathbf{b}\}\cup \theta'$. $\mathbf{F}_{\theta'}(H_t)$ is the computed node feature matrix of $H_t$ and $\mathbf{F}_{\theta'}(H_t)_{v_t}$ is the row corresponding to the node $v_t$. The intermediate molecular graph is updated with the sampled production rule. {\bf Reward.} As the generation process may take too many steps to terminate, we set a threshold $T_{max}$ and force the generation process to stop when the number of steps exceeds $T_{max}$. Assume that the length of the generated sequence is $T-1$. At each time step $t<T$, a small constant reward $r_\epsilon$ is assigned; at time step $T$, if there is no non-terminal node in $H_{T}$, a task-specific reward function assigns a reward based on $f\circ Dec_{\mathcal{P}}(Prod_{T})$. Otherwise, a constant non-positive reward $r_{incomp}$ is assigned. \subsection{Definition of molecular NCE grammars} An NCE graph grammar, proposed by Janssens et al. \cite{janssens1982graph}, is a system $G=(\Sigma, \Delta_\Sigma, P)$, where $\Sigma$ is the set of node labels, $\Delta_\Sigma\subset\Sigma$ is the terminal alphabet, and $P$ is the set of production rules.
A production rule is in the form of $p=(\alpha, \beta, \phi)$, where $\alpha$, $\beta$ are connected graphs. $\alpha$ is called the left-hand side (LHS) of $p$, $\beta$ is called the right-hand side (RHS), and $\phi:V_\alpha \times V_\beta\times \Sigma\to \{0,1\}$ is the embedding function. Directly applying NCE grammars to molecular graphs suffers from the following issues: 1) A molecular graph is both node-labeled and edge-labeled, while the NCE grammars are defined only on node-labeled graphs. 2) The connections between the neighbors of $V_\alpha$ and nodes in $V_\beta$ are not specified, which may cause valency invalidity in a molecular graph. 3) The number of production rules may explode, decreasing the generalization ability of the grammars. To extend NCE grammars to molecular graphs, we define molecular NCE grammars as follows. \begin{definition} A molecular NCE grammar is a system $G=(\Sigma, \Psi, \Delta_\Sigma, \Delta_\Psi, P)$, where $\Sigma$ is the set of node labels, $\Psi$ the set of edge labels, $\Delta_\Sigma=\Sigma\setminus\{x,n_\Sigma, s \}$ the terminal alphabet of nodes, $\Delta_\Psi=\Psi\setminus \{n_\Psi\}$ the terminal alphabet of edges, $s$ the starting symbol, and $n_\Sigma$ and $n_\Psi$ the empty labels for nodes and edges, respectively. Finally, $P$ is the set of production rules. 
A production rule is in the form of $p=(\alpha, \beta, \phi)$ where: \begin{itemize} \item $\alpha=(V_\alpha, E_\alpha, \sigma_\alpha, \psi_\alpha, L_\alpha)$ and $\beta=(V_\beta, E_\beta, \sigma_\beta, \psi_\beta, L_\beta)$ are ordered connected graphs, where $L_*$ defines a unique order for edges incident to each vertex in the graph \item $V_\alpha=\{X_p \}\cup \mathcal{B}_p$, $E_\alpha=\{X_p\}\times \mathcal{B}_p$, where $X_p$ is a non-terminal node with $\sigma_\alpha(X_p)=x$ and $\mathcal{B}_p$ is a set of nodes with $\forall v\in \mathcal{B}_p$, $\sigma_\alpha(v)=n_\Sigma$ \item $V_\beta=\mathcal{T}_p\cup \mathcal{N}_p$, $E_\beta\subset (\mathcal{T}_p\times\mathcal{T}_p)\cup (\mathcal{T}_p\times\mathcal{N}_p) $, where $\mathcal{T}_p$ and $\mathcal{N}_p$ are sets of nodes with $\forall v\in \mathcal{N}_p, \sigma_\beta(v)=x$ \begin{itemize} \item if $\|\mathcal{T}_p\|>1$, then $\forall u \in \mathcal{T}_p, \sigma_\beta(u)=n_\Sigma$, $\forall e\in E_\beta, \psi_\beta(e)=n_\Psi$, and $p$ is called a complex production rule \item if $\|\mathcal{T}_p\|=1$, then $\forall u \in \mathcal{T}_p$, $\sigma_\beta(u)\in\Delta_\Sigma$, $\forall e\in E_\beta, \psi_\beta(e)\in\Delta_\Psi$, and $p$ is called a simple production rule \end{itemize} \item $\phi: \mathcal{B}_p\times \mathcal{T}_p\to \Psi\cup \{0 \}$ is the embedding function \end{itemize} \end{definition} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{MNCE.png} \caption{(a) An example production rule and a derivation step. Here, $x$ and $n_\Sigma$ are non-terminal labels. The production rule is in the form of $(\alpha, \beta, \phi)$. In a derivation step, applying a production rule $p$ replaces a non-terminal node $v_t$ (with label $x$) in the intermediate graph $H_t$ with the RHS ($\beta$) of $p$, and the edges between the neighbors of $v_t$ and the nodes in $V_\beta$ are determined by the embedding function $\phi$. (b) Extraction of a production rule. $\mathcal{B}_p$, $\mathcal{T}_p$ and $\mathcal{N}_p$ are vertex sets.
$H'$ is a node-induced subgraph of $H$. The LHS is obtained by representing the nodes in $\mathcal{T}_p$ as a non-terminal node, removing the edges between the nodes in $\mathcal{B}_p$ and labeling the nodes in $\mathcal{B}_p$ as $n_\Sigma$. The RHS is obtained by removing the nodes in $\mathcal{B}_p$ from $H$ and replacing the connected subgraphs in $H'$ by non-terminal nodes.} \label{deriveprod} \end{figure} The first two issues mentioned above are addressed by specifying $\psi_\alpha$, $\psi_\beta$ and $\phi$. To alleviate the third issue, we introduce empty labels for nodes and edges, $n_\Sigma$ and $n_\Psi$, which can be matched arbitrarily: the labels of nodes in $\mathcal{B}_p$ are replaced by $n_\Sigma$, and for complex production rules, only the skeletons of $\beta$ are kept. Production rules predefine the edges incident to each vertex, so valency validity is guaranteed intrinsically. To specify the action space at each step, we define legal production rules as follows. \begin{definition} Let $T_t$ be an intermediate tree. If $T_t$ is an empty tree, the legal production rules for $T_t$ are the starting production rules. If $T_t$ is not empty and we need to sample a child production rule for a parent $p_{parent}$ that already has a set of child production rules $P_{sibling}$, then an intermediate graph $H_t$ with a non-terminal node $v_t$ to be rewritten in the next time step can be decoded from $T_t$. Suppose that the direct neighbors of $v_t$ are $\{ v_{n_1}, v_{n_2}, ..., v_{n_k}\}$ and that $L_t$ sorts the edge set $\{(v_t,v_{n_1}), (v_t,v_{n_2}), ..., (v_t,v_{n_k})\}$ in the order in which the $v_{n_i}$ were generated. We say that a production rule $p$ matches the context of $v_t$ if and only if the edge-induced subgraph of $H_t$ specified by $\{ (v_t,v_{n_1}), (v_t,v_{n_2}), ..., (v_t,v_{n_k})\}$ and ordered by $L_t$ is isomorphic to the LHS of $p$ \cite{jiang1999optimal}.
Then \begin{itemize} \item if $p_{parent}$ is complex and $v_t\in \mathcal{T}_{p_{parent}}$, any production rule having a positive empirical probability $P(p|p_{parent}, P_{sibling})$ and matching the context of $v_t$ is legal for $T_t$ \item otherwise, any production rule matching the context of $v_t$ is legal for $T_t$ \end{itemize} \end{definition} An example production rule and a derivation step are shown in Figure \ref{deriveprod}. Applying a production rule $p$ to an intermediate graph $H_t$ to rewrite a non-terminal node $v_t$ replaces $v_t$ with the RHS of $p$, and the edges between the direct neighbors of $v_t$ and the nodes in the RHS are specified by the embedding function. A formal notion of a derivation step is defined as follows. \begin{definition} \label{derivationstep} Let $T_t$ be an intermediate parse tree and let $p=(\alpha, \beta, \phi)$ be a production rule that is legal for $T_t$. An intermediate graph $H_t$ and a non-terminal node $v_t$ can be decoded from $T_t$. A derivation step applying $p$ to $H_t$ generates a new graph $H_{t+1}$ by rewriting the node $v_t$, where \begin{itemize} \item $V_{H_{t+1}}=V_{H_t}\cup V_\beta\setminus\{v_t\}$ \item $E_{H_{t+1}}=\{(u,v)|u,v\in V_{H_t}\setminus \{v_t\}\ and\ (u,v)\in E_{H_t} \}\cup E_\beta \cup \{(u,v)|\phi(u,v)\in \Psi \}$ \end{itemize} For a node $u\in V_{H_{t+1}}$ and an edge $(u,v)\in E_{H_{t+1}}$, the labeling functions are: \begin{eqnarray*} \left\{ \begin{array}{lr} \sigma_{H_{t+1}}(u)=\sigma_{H_t}(u),\ u\in V_{H_t}\\ \sigma_{H_{t+1}}(u)=\sigma_\beta(u),\ u\in V_\beta\\ \end{array} \right., \left\{ \begin{array}{lr} \psi_{H_{t+1}}\left( (u,v) \right)=\psi_{H_t}\left((u,v)\right),\ (u,v)\in E_{H_t}\\ \psi_{H_{t+1}}\left( (u,v) \right)=\psi_{\beta}\left((u,v)\right),\ (u,v)\in E_\beta\\ \psi_{H_{t+1}}\left( (u,v) \right)=\phi(u,v),\ u\in V_{H_t}, v\in V_\beta, \phi(u,v)\in \Psi\\ \end{array} \right.
\end{eqnarray*} \end{definition} {With this definition, by learning production rules from known molecules, any molecule sampled from the inferred grammar is chemically valid. A comparison of our proposed grammars and the MHGs \cite{kajino2019molecular} is shown in Appendix B.} \subsection{Inference of the molecular NCE grammars} The algorithm to parse molecular graphs and infer the production rules is shown in Appendix B. We sort the nodes of $H$ in the depth-first (DF) order, and for a node $v$ with first-hop neighbors $\{v_{n_1}, v_{n_2}, ..., v_{n_k}\}$, the edges $\{(v, v_{n_1}), (v, v_{n_2}), ..., (v, v_{n_k})\}$ are sorted to be consistent with the order of the $v_{n_i}$. The graph is parsed in the DF order, and the LHS and RHS extracted from $H$ inherit the edge orders. For a simple production rule (Figure \ref{deriveprod}), the LHS is obtained by representing the nodes in $\mathcal{T}_p$ as a non-terminal node, removing the edges between the nodes in $\mathcal{B}_p$ and labeling the nodes in $\mathcal{B}_p$ as $n_\Sigma$. The embedding function $\phi$ is obtained by recording the edges between the nodes in $\mathcal{T}_p$ and $\mathcal{B}_p$. Denoting the node-induced subgraph of $H$ specified by $V_H\setminus\left(\mathcal{B}_p\cup\mathcal{T}_p\right)$ as $H'$, the RHS is obtained by removing the nodes in $\mathcal{B}_p$ from $H$ and representing each connected subgraph of $H'$ with a non-terminal node. For complex production rules (Appendix A), the first steps are the same: computing the LHS, recording the embedding function, removing the nodes in $\mathcal{B}_p$, and substituting non-terminal nodes for the connected subgraphs in $H'$. In the final step, as discussed above, to reduce the number of production rules, we keep only the skeleton of the RHS, replacing the labels of all nodes in $\mathcal{T}_p$ with $n_\Sigma$ and the labels of all edges in the RHS with $n_\Psi$.
To preserve this information, we introduce an extra production rule for each node in $\mathcal{T}_{p}$. Examples of parsing a molecular graph and of sampling a molecule from a grammar are shown in Appendix A. \subsection{Graph convolutional network for node feature aggregation} \label{aggnet} Graph convolutional networks (GCNs) \cite{gilmer2017neural,Hu2019PretrainingGN,liao2019lanczosnet, li2018adaptive, gao2019graph, kim2019edge} have been widely applied to graph information aggregation. We represent both nodes and edges with feature vectors. In the forward pass, the GCN updates both the node features and the edge features and outputs the computed features for all nodes in the last layer. Assuming that the feature size of the edges is $S_E$, the node features are updated by \begin{equation} \mathbf{V}^{(l+1)}=AGG(\{Tanh(\mathbf{E}_{(i)}^{(l)}\mathbf{V}^{(l)}\mathbf{W}_{(i)}^{(l+1)}+\mathbf{b}^{(l+1)}_{(i)})+\mathbf{V}^{(l)}| i\in (1, ..., S_E)\}), \end{equation} where $AGG$ is the aggregation function, $\mathbf{V}^{(l)}$ is the node feature matrix in the $l$th layer, $\mathbf{E}^{(l)}_{(i)}$ is the $i$th feature matrix of edges, and $\mathbf{W}_{*}^{*}$ and $\mathbf{b}^{*}_{*}$ are parameters of the network. The edge features are updated in two steps. In the first step, we calculate a vector $\mathbf{e}_{ij}$, which encodes the relationship between the $i$-th node and the $j$-th node, using the following formula \begin{equation} \mathbf{e}_{ij}^{(l+1)}=ReLU(Concat(\mathbf{V}_i^{(l+1)}, \mathbf{V}_j^{(l+1)})\mathbf{W}_{e}^{(l+1)}+\mathbf{b}_{e}^{(l+1)}), \end{equation} where $\mathbf{V}_i^{(l+1)}$ is the feature vector of node $i$ in the $(l+1)$th layer. Then, the feature vector $\mathbf{E}_{ij}$ of the edge between node $i$ and node $j$ is updated by \begin{equation} \mathbf{E}_{ij}^{(l+1)}=ReLU(Concat(\mathbf{e}_{ij}^{(l+1)}, \mathbf{E}_{ij}^{(l)})\mathbf{W}_{E}^{(l+1)}+\mathbf{b}_{E}^{(l+1)}).
\end{equation} \subsection{Model training} To generate molecules with desired properties, the widely used RL algorithm Proximal Policy Optimization (PPO) \cite{schulman2017proximal} is adopted to train the model. The objective function of PPO is \begin{equation} L^{CLIP}(\theta)=\hat{E_t}\left[\min(r_t(\theta)\hat{A}_t, clip (r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t)\right], \end{equation} where $\epsilon$ is a hyperparameter, $\theta$ is the policy parameter, $\hat{E_t}$ denotes the empirical expectation over timesteps, and $r_t$ is the ratio of the action probability under the new and old policies, i.e., \begin{equation} r_t=\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}, \end{equation} where $\theta_{old}$ is the parameter set of the old policy. $\hat{A_t}$ is the estimated advantage \cite{schulman2015high} at time step $t$. We compute the critic $C_\omega (\cdot)$ in $\hat{A_t}$ as \begin{equation} C_\omega(H_t)=Avg(\mathbf{F}_{\omega'}(H_t))\mathbf{W_C}+\mathbf{b_C}, \end{equation} where $\mathbf{F}$ is a GCN with the parameter set $\omega'$, and $\omega=\{\mathbf{W_C}, \mathbf{b_C} \}\cup \omega'$ is the parameter set of the critic. The $Avg$ function computes the average over the node features. To encourage the model to generate graphs with high diversity, an entropy loss \cite{mnih2013playing} is added to the loss function, and to accelerate convergence, we take all the ground-truth molecules as expert trajectories and pre-train the model with them. {Details of model training and hyperparameter optimization are given in Appendix F}. \section{Experiments} \subsection{Datasets} The ZINC250k molecule dataset \cite{irwin2005zinc}, the {GuacaMol package \cite{brown2019guacamol}} and 2,337 drug molecules from \cite{stokes2020deep} are used in our experiments. The ZINC250k dataset contains 250,000 drug-like molecules whose maximum atom number is 38.
The work in \cite{stokes2020deep} provides 2,337 drug molecules and their inhibition effects on {\it E. coli} collected from wet-lab experiments. With a threshold of 0.2, the 120 of the 2,337 molecules that show strong {\it E. coli} growth inhibition are defined as the positive set, and the remaining molecules are considered the negative set. {GuacaMol is a comprehensive benchmark package for molecular optimization that provides more than one million molecules and covers not only single-objective but also constrained and multi-objective optimization tasks}. The validity of generated molecules is checked by RDKit \cite{landrum2013rdkit}. The statistics of the inferred molecular NCE grammars are provided in Appendix C\footnote{Link to code and datasets: \href{https://github.com/Zoesgithub/MNCE-RL}{https://github.com/Zoesgithub/MNCE-RL}}. \subsection{Molecular optimization results} To demonstrate the ability of MNCE-RL to optimize molecules in different application scenarios, we designed a series of experiments and compared MNCE-RL with current state-of-the-art methods. Detailed experimental settings of the baseline models \cite{winter2019efficient, kajino2019molecular, jin2018junction, you2018graph} are provided in Appendix D. {\bf Property optimization with unlimited evaluations and an ablation study.} In this experiment, we assume that the cost of property evaluation is negligible and the number of times the molecule properties can be queried is unlimited. The penalized logP score and the QED score are used to evaluate the performance of the models. Here, logP is an estimate of the octanol-water partition coefficient, and penalized logP additionally accounts for ring size and synthetic accessibility \cite{ertl2009estimation}. QED \cite{bickerton2012quantifying} is a computational score measuring the drug-likeness of a molecule. To measure the performance of each method, we report the top 3 property scores, the 50th best score, and the average score of the top 50 molecules.
The task-specific reward function we use in this experiment is a linear projection of the computed penalized logP or QED score. The results are shown in Table \ref{pop} and Appendix G. {To investigate the specific contributions of our proposed grammars and the GCN structure in this experiment, we build a model using the classical GCN \cite{gilmer2017neural,you2018graph} without edge feature updating (MNCE-RL$_{OEU}$). As shown in the tables, MNCE-RL$_{OEU}$ achieves state-of-the-art performance in optimizing both penalized logP and QED and significantly outperforms GCPN, indicating the effectiveness of our grammars. Moreover, compared with the MHGs, our proposed grammars achieve a higher coverage rate (Appendix C) and thus can represent more molecular structures and explore the chemical space more effectively. The utility of the edge feature updating mechanism is confirmed by the fact that MNCE-RL significantly outperforms MNCE-RL$_{OEU}$ in optimizing penalized logP.} \begin{table}[h] \setlength\tabcolsep{1.5pt} \caption{Results on property optimization with unlimited property evaluations} \label{pop} \centering \begin{tabular*}{1\textwidth}{ccccccccccccc} \toprule \multirow{3}{*}{Method} &\multicolumn{6}{c}{Penalized logP}& \multicolumn{6}{c}{QED} \\ \cmidrule(lr){2-7} \cmidrule(lr){8-13} & $1^{st}$ &$2^{nd}$ &$3^{rd}$&$50^{th}$&\vtop{\hbox{\strut Top 50}\hbox{\strut\ \ Avg.}} &Validity&$1^{st}$ &$2^{nd}$ &$3^{rd}$&$50^{th}$&\vtop{\hbox{\strut Top 50}\hbox{\strut\ \ Avg.}}&Validity\\ \cmidrule(lr){1-1}\cmidrule(lr){2-7} \cmidrule(lr){8-13} JT-VAE&5.30&4.93&4.49&3.50&3.93&{\bf 100\%}&0.942&0.934&0.930&0.896&0.912&{\bf 100\%}\\ GCPN&7.98&7.85&7.80&-&-&{\bf 100\%}&{\bf0.948}&0.947&0.946&-&-&{\bf 100\%}\\ MHG-VAE&5.56&5.40&5.34&4.12&4.49&{\bf 100\%}&0.947&0.946&0.944&0.920&0.929&{\bf 100\%}\\ MSO&14.44&14.20&13.95&13.49&13.67&-&{\bf0.948}&{\bf0.948}&{\bf0.948}&{\bf0.948}&{\bf0.948}&-\\ \midrule
MNCE-RL$_{OEU}$&{14.49}&{14.44}&{14.36}&{14.13}&{14.16}&{\bf100\%}&{\bf0.948}&{\bf0.948}&{\bf0.948}&{\bf 0.948}&{\bf 0.948}&{\bf 100\%}\\ MNCE-RL&{\bf18.33}&{\bf18.18}&{\bf18.16}&{\bf17.52}&{\bf17.76}&{\bf 100\%}&{\bf0.948}&{\bf0.948}&{\bf0.948}&{\bf 0.948}&{\bf 0.948}&{\bf 100\%}\\ \bottomrule \end{tabular*} \end{table} {\bf Constrained property optimization.} This task aims at generating molecules with an improved penalized logP score while keeping their structures similar to a given target molecule. Unlike previous methods such as GCPN, which generate novel molecules starting directly from the given molecule, we first train our model to maximize the log-likelihood of the target molecule and then optimize the penalized logP. The task-specific reward assigns a small constant score if the similarity drops below the threshold and a linear projection of the penalized logP score if the similarity is larger than the threshold. The results are shown in Table \ref{cop} and Appendix G, where $\delta$ is the similarity threshold. MNCE-RL is capable of optimizing all the molecules, with a success rate of 100\% under both thresholds, and for each threshold MNCE-RL achieves significantly higher improvements in penalized logP than all baseline models. Although the average similarity scores of the molecules generated by MNCE-RL are slightly lower than those of the baselines, the improvement in penalized logP achieved by MNCE-RL with similarity threshold 0.6 is significantly higher than that of the baselines with threshold 0.4, exhibiting the superiority of MNCE-RL.
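The thresholded reward used for constrained optimization can be sketched as follows (a simplified illustration with hypothetical constants and naming; fingerprints are shown as plain integer sets rather than actual Morgan fingerprints, which would be computed with a cheminformatics toolkit such as RDKit):

```python
# Minimal sketch of a similarity-constrained reward. The exact scaling
# constants used in the paper are not specified here; `penalty` and
# `scale` below are illustrative placeholders.
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprint sets: intersection over union."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def constrained_reward(penalized_logp, sim, delta, penalty=-1.0, scale=0.1):
    """Small constant score when similarity falls below the threshold
    `delta`; a linear projection of penalized logP otherwise."""
    return penalty if sim < delta else scale * penalized_logp

if __name__ == "__main__":
    target = {1, 2, 3, 4}      # fingerprint of the target molecule
    candidate = {2, 3, 4, 5}   # fingerprint of a generated molecule
    sim = tanimoto(target, candidate)  # 3 shared / 5 total = 0.6
    print(sim, constrained_reward(5.0, sim, delta=0.4))
```

The threshold acts as a hard gate: any structural drift below `delta` collapses the reward to the constant penalty regardless of how much the property score improved, which is what keeps the generated molecules anchored near the target structure.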
\begin{table}[h] \setlength\tabcolsep{1.9pt} \caption{Results on constrained property optimization} \label{cop} \centering \begin{tabular*}{0.9\textwidth}{ccccccc} \toprule \multirow{3}{*}{Method} &\multicolumn{3}{c}{$\delta=0.4$}& \multicolumn{3}{c}{$\delta=0.6$}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & Improvement &Similarity&Success &Improvement&Similarity&Success\\ \cmidrule(lr){1-1}\cmidrule(lr){2-4} \cmidrule(lr){5-7} JT-VAE&$0.84\pm 1.45$&{$ 0.51\pm 0.10$}&83.6\%&$0.21\pm0.71$&{$0.69\pm0.06$}&46.4\%\\ GCPN&$2.49\pm1.30$&$0.47\pm0.08$&{\bf 100\%}&$0.79\pm0.63$&$0.68\pm0.08$&{\bf 100\%}\\ MHG-VAE&$1.00\pm1.87$&$\bf 0.52\pm0.11$&43.5\%&$0.61\pm1.20$&$\bf 0.70\pm 0.06$&17.0\%\\ \midrule MNCE-RL &{$\bf5.29\pm1.58$}&$0.45\pm0.05$&{\bf 100\%}&{$\bf 3.87\pm1.43$}&$0.64\pm 0.04$&{\bf 100\%}\\ \bottomrule \end{tabular*} \end{table} {{\bf Comprehensive evaluations with GuacaMol.} These experiments comprehensively measure a model's ability to optimize properties with unlimited evaluations. The results are shown in Table \ref{guacamole}, where BNGM denotes the best result among the naive baselines provided in the GuacaMol paper \cite{brown2019guacamol}. The performance of MNCE-RL matches or exceeds the baselines on all benchmarks.
In particular, our method significantly outperforms the baselines in multi-objective optimization tasks, showing the superiority of MNCE-RL in complex scenarios.} \begin{table}[h] \setlength\tabcolsep{1.4pt} \caption{Results on the benchmarks provided by GuacaMol.} \label{guacamole} \centering \begin{tabular*}{1.0\textwidth}{cccccccc} \toprule \multirow{3}{*}{Benchmark}&\multicolumn{3}{c}{Methods}&\multirow{3}{*}{Benchmark}&\multicolumn{3}{c}{Methods}\\ \cmidrule(lr){2-4}\cmidrule(lr){6-8} & BNGM &MSO &MNCE-RL&& BNGM &MSO &MNCE-RL\\ \cmidrule(lr){1-4}\cmidrule(lr){5-8} Celecoxib rediscovery &{\bf 1.0}&{\bf 1.0}&{\bf 1.0}&Osimertinib MPO&0.953&0.966&{\bf 1.0}\\ Troglitazone rediscovery &{\bf 1.0}&{\bf 1.0}&{\bf 1.0}& Fexofenadine MPO&0.998&{\bf 1.0}&{\bf 1.0}\\ Thiothixene rediscovery &{\bf 1.0}&{\bf 1.0}&{\bf 1.0}&Ranolazine MPO&0.920&0.931&{\bf 0.990}\\ Aripiprazole similarity&{\bf 1.0}&{\bf 1.0}&{\bf 1.0}&Perindopril MPO&0.808&0.834&{\bf 0.882}\\ Albuterol similarity&{\bf 1.0}&{\bf 1.0}&{\bf 1.0}&Amlodipine MPO&0.894&0.900&{\bf 0.920}\\ Mestranol similarity&{\bf 1.0}&{\bf 1.0}&{\bf 1.0}&Sitagliptin MPO&0.891&0.868&{\bf 0.904}\\ C11H24&0.993&0.997&{\bf 1.0}&Zaleplon MPO&0.754&0.764&{\bf 0.781}\\ C9H10N2O2PF2Cl&0.982&{\bf 1.0}&{\bf 1.0}&Valsartan SMARTS&0.990&0.994&{\bf 1.0}\\ Median molecules 1&0.438&0.437&{\bf 0.455}&Scaffold Hop&{\bf 1.0}&{\bf 1.0}&{\bf 1.0}\\ Median molecules 2&0.432&0.395&{\bf 0.457}&Deco Hop&{\bf 1.0}&{\bf 1.0}&{\bf 1.0}\\ \bottomrule \end{tabular*} \end{table} {\bf Property range targeting.} This experiment measures the model's ability to generate diverse molecules with some specific property in a predefined range \cite{you2018graph}, where the diversity is defined as the average pairwise Tanimoto distance between the Morgan fingerprints of the generated molecules \cite{rogers2010extended}. Penalized logP and molecular weight (MW) are considered in this task where the predefined ranges are the same as those used in \cite{you2018graph}. 
The task-specific reward in our approach is inversely proportional to the distance between the property score of a generated molecule and the center of the predefined range. The results are shown in Table \ref{target}. Our model achieves over 90\% success rates in all four tasks with high diversity \cite{you2018graph}, and an over 99\% success rate in targeting the range $500\leq MW\leq 550$, significantly outperforming state-of-the-art methods. \begin{table}[h] \setlength\tabcolsep{1.0pt} \caption{Results on property range targeting} \label{target} \centering \begin{tabular*}{0.95\textwidth}{ccccccccc} \toprule \multirow{3}{*}{Method} &\multicolumn{2}{c}{$-2.5\leq logP\leq -2$}& \multicolumn{2}{c}{$5\leq logP\leq 5.5$}&\multicolumn{2}{c}{$150\leq MW\leq 200$}&\multicolumn{2}{c}{$500\leq MW \leq550$}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} & Success &Diversity &Success&Diversity&Success &Diversity&Success &Diversity\\ \cmidrule(lr){1-1}\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} JT-VAE&11.3\%&{\bf 0.846}&7.6\%&{\bf 0.907}&0.7\%&0.824&16.0\%&0.898\\ GCPN&85.5\%&0.392&54.7\%&0.855&76.1\%&0.921&74.1\%&{\bf0.920}\\ \midrule MNCE-RL &{\bf98.3\%}&0.836&{\bf 98.0\%}&0.842&{\bf91.8\%}&{\bf 0.928}&{\bf 99.6\%}&0.870\\ \bottomrule \end{tabular*} \end{table} {\bf Property optimization with limited property evaluations.} This task measures a model's ability to optimize molecules when property evaluation is expensive. As done in \cite{kajino2019molecular}, we limit the number of molecule property queries to 500. We repeat MNCE-RL ten times, each time taking the first 500 generated molecules as output, obtaining 5k molecules in total. The task-specific reward is the same as in property optimization with unlimited property evaluations. The top 3 property scores, the 50th best score and the average score of the top 50 molecules are recorded.
The results are shown in Table \ref{lpop} and Appendix G. Our model significantly outperforms all baseline methods. Interestingly, even with limited property evaluations, our method still performs better than JT-VAE and MHG-VAE with unlimited evaluations. Moreover, the top 50 molecules generated by MNCE-RL have a higher average penalized logP score than the best single molecule generated by any baseline, which demonstrates the superiority of MNCE-RL in situations where it is expensive to evaluate molecular properties. \begin{table}[h] \setlength\tabcolsep{1.9pt} \caption{Results on property optimization with limited property evaluations} \label{lpop} \centering \begin{tabular*}{0.52\textwidth}{ccccccc} \toprule \multirow{3}{*}{Method} &\multicolumn{6}{c}{Penalized logP}\\ \cmidrule(lr){2-7} & $1^{st}$ &$2^{nd}$ &$3^{rd}$&$50^{th}$&\vtop{\hbox{\strut Top 50}\hbox{\strut\ \ Avg.}} &Validity\\ \cmidrule(lr){1-1}\cmidrule(lr){2-7} JT-VAE&1.69&1.68&1.60&-9.93&-1.33&{\bf100\%}\\ GCPN&2.77&2.73&2.34&0.91&1.36&{\bf100\%}\\ MHG-VAE&5.24&5.06&4.91&4.25&4.53&{\bf100\%}\\ MSO&2.96&2.91&2.75&2.49&2.54&{\bf100\%}\\ \midrule MNCE-RL &{\bf9.88}&{\bf9.82}&{\bf9.75}&{\bf7.28}&{\bf8.31}&{\bf100\%}\\ \bottomrule \end{tabular*} \end{table} {\bf Generation of novel molecules with antibacterial property.} This experiment shows MNCE-RL's ability to assist drug discovery in a real-world application scenario when the number of experimentally validated molecules is limited and there is no known evaluation function. We first train a classifier on the 2,337 molecules from \cite{stokes2020deep} to distinguish positive and negative samples and use the classifier as a pseudo evaluation function. Then, we extract production rules from these molecules. The problem is modeled as a property optimization where we try to find molecules that receive high scores from the classifier.
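A minimal sketch of this classifier-as-reward loop is given below, including a bias-reduction step that refits the classifier with generated molecules treated as negatives. The one-dimensional nearest-centroid `classifier' and all numbers are stand-ins; the actual classifier architecture and molecular features are not specified here:

```python
def train_classifier(pos, neg):
    # Toy 1-D nearest-centroid classifier standing in for the real model.
    cp = sum(pos) / len(pos)
    cn = sum(neg) / len(neg)
    return lambda x: 1.0 if abs(x - cp) < abs(x - cn) else 0.0

pos, neg = [0.9, 1.0, 1.1], [0.0, 0.1]    # labeled molecules (toy features)
clf = train_classifier(pos, neg)

for step in range(3):
    generated = [0.4 + 0.1 * step]         # stand-in for sampled molecules
    rewards = [clf(x) for x in generated]  # task reward = classifier score
    neg.extend(generated)                  # assume generated molecules are negative
    clf = train_classifier(pos, neg)       # refit to reduce the classifier's bias

print(clf(1.0), clf(0.1))
```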
As the classifier is severely overfitted, when training the generation model, we assume that the generated novel molecules are negative and use these ``negative samples'' to update the classifier to reduce bias. After training, the kinase inhibitor scores, the protease inhibitor scores, and the enzyme inhibitor scores \cite{schaenzer2017screen, umezawa1982low,el2016synthesis, alodeani2015anti} (see Appendix E for details) of the top 10 molecules with the highest scores assigned by the classifier are reported. The results are shown in Appendix G and Table \ref{anti}. All ten molecules are bioactive, with at least one inhibitor score larger than 0.2 (see Table \ref{anti}), and six of them are highly bioactive, with at least one score larger than 0.5, which illustrates the ability of MNCE-RL to generate antibacterial candidate molecules with only limited labeled samples. \begin{table}[h] \setlength\tabcolsep{1.9pt} \caption{Properties of six molecules having high inhibitor scores.} \label{anti} \centering \begin{tabular*}{0.8\textwidth}{cccc} \toprule \multirow{3}{*}{Molecule}&\multicolumn{3}{c}{Computed properties}\\ \cmidrule(lr){2-4} & Kinase inhibitor (KI) &Protease inhibitor (PI) &Enzyme inhibitor (EI)\\ \cmidrule(lr){1-1}\cmidrule(lr){2-4} $M_1$&-0.38&0.56&0.25\\ $M_2$&-0.20&0.54&0.23\\ $M_3$&-0.34&0.55&0.15\\ $M_4$&-0.24&0.63&0.09\\ $M_5$&-0.16&0.66&0.16\\ $M_6$&-0.24&0.66&0.30\\ \bottomrule \end{tabular*} \end{table} \section{Conclusion and future work} In this paper, we propose a new method, MNCE-RL, based on novel molecular NCE grammars to solve the molecular optimization problem in the RL framework. MNCE-RL achieves state-of-the-art performance in a series of systematic experiments.
In a real-world application, where the molecules with known properties are limited and no numerical evaluation function is known, our method still exhibits a high potential to generate molecules with desired properties, showing its great utility in drug discovery. {Although our proposed grammar guarantees the valency validity of the generated structures, it struggles to capture high-level chemical properties such as bond orders. We leave this to future work.} \section*{Broader impact} Finding effective medicines for diseases has always been a challenge in the pharmaceutical industry, especially as precision medicine has attracted more and more attention in recent years. Our approach provides an efficient way to generate molecules with specific properties, which will help reduce the workload of pharmacists, accelerate the development of novel drugs, and decrease the cost of drug design. On the other hand, although the molecules generated by our method possess desirable biological or chemical properties, their safety and effectiveness on patients still need to be validated through the normal clinical trial process. \begin{ack} This work has been supported in part by the National Natural Science Foundation of China grant 61772197, the National Key Research and Development Program of China grant 2018YFC0910404 and the Guoqiang Institute of Tsinghua University with grant no. 2019GQG1. \end{ack} \bibliographystyle{abbrv} \begin{appendices} \section{Supplementary figures} \label{figures} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{ComputeProdC.png} \caption{Extraction of complex production rules. The LHS is computed by representing the nodes in $\mathcal{T}_p$ as a non-terminal node, removing the edges between nodes in $\mathcal{B}_p$ and labeling the nodes in $\mathcal{B}_p$ as $n_\Sigma$. The RHS is computed by removing the nodes in $\mathcal{B}_p$ and turning the connected subgraphs in $H'$ into non-terminal nodes.
To reduce the number of production rules, only the skeleton of the RHS is kept and a production rule for each node in $\mathcal{T}_p$ is introduced to maintain the information.} \label{production1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{ParseExam.png} \caption{An example of transforming a molecule into a parse tree and inferring molecular NCE grammar production rules.} \label{example1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{SampleExam.png} \caption{An example of sampling a molecule from a molecular NCE grammar. The production rules are shown in Figure \ref{example1}.} \label{example2} \end{figure} \newpage \section{Supplementary information of the proposed grammars} \label{algopi} The algorithm to infer production rules of the molecular NCE grammar and parse molecular graphs into parse trees is shown in Algorithm \ref{Parsegraph}, where $v_T$ is a node of the parse tree $T$ and $Neigh(v)_H$ is the set of first-hop neighbors of a node $v$ in the graph $H$. For a molecular graph $H$ with $\|V_H\|$ nodes, the time complexity of Algorithm \ref{Parsegraph} is $O(\|V_H\|^2)$. {Compared with MHG, our proposed grammars have better generalization ability. MHG is an extension of hyperedge replacement grammar, which is based on the clique tree decomposition of graphs. In molecular hypergraphs, the clique tree decomposition might introduce a large number of rare substructures and cause a low coverage rate. For instance, in the MHG inferred from the ZINC250k dataset, 1,424 of the 2,031 production rules are starting rules and 2/3 of the starting rules are used by fewer than ten molecules. At the same time, 16/5,000 molecules in the testing set cannot be covered by these inferred production rules. In comparison, our grammar is based on neighboring relationships.
In molecular graphs, the degree and neighbors of each node are limited by chemical rules, thus the substructures involved in our grammar are relatively simpler and smaller, which leads to fewer production rules and a higher coverage rate (see Appendix \ref{bsmg}).} {In the generation process, to be consistent with the inference process, the non-terminals with labels of $n_\Sigma$ have higher priority than non-terminals with labels of $x$, and the non-terminals that are generated later have higher priority. The non-terminal with the highest priority in the intermediate graph is rewritten each time.} \begin{algorithm} \label{Parsegraph} \SetAlgoLined \KwIn{$H$, $P$, $\mathcal{B}_p$, $\mathcal{T}_p$, $v_T$} \KwOut{$T$, $P$} \SetKwFunction{FMain}{ParseMolecularGraph} \SetKwProg{Fn}{Function}{:}{} \Fn{\FMain{$H$, $P$, $\mathcal{B}_p$, $\mathcal{T}_p$, $v_T$}} { \If{$\mathcal{B}_p$ is empty} { Initialize tree $T$\; Initialize $v_T$ as the root of $T$\; Add the initial node to $\mathcal{B}_p$\; Arbitrarily select a node from $H$ and add it to $\mathcal{T}_{p}$\; } Compute $LHS$\; Record the embedding function $\phi$\; Denote $H'$ as a node-induced subgraph of $H$ where $V_{H'}=V_H\setminus\left(\mathcal{B}_p\cup\mathcal{T}_p\right)$\; Remove nodes in $\mathcal{B}_p$ from $H$\; Represent connected subgraphs in $H'$ by non-terminal nodes\; Obtain the $RHS$\; \If{$\|\mathcal{T}_p\|>1$} { \For{$v$ in $\mathcal{T}_p$} { Add a child node $v_c$ to $v_T$\; Extract a production rule $p_v$ for $v$\; Label $v_c$ as $p_v$\; Add $p_v$ to $P$\; } $RHS\longleftarrow$The skeleton of $RHS$\; } $p\longleftarrow\left(LHS, RHS, \phi \right)$\; Label $v_T$ as $p$\; Add $p$ to $P$\; $\mathcal{T}^{(descent)}\longleftarrow \cup_{v\in\mathcal{T}_p}Neigh(v)_H$\; \For{connected subgraph $h$ in $H'$} { Add a child node $v_c$ to $v_T$\; $\mathcal{B}^{(h)}\longleftarrow \left(\cup_{v\in V_{h}}Neigh(v)_H\right)\setminus V_{h}$\; $\mathcal{B}_{p}^{(h)}\longleftarrow \mathcal{T}_{p}\cap
\mathcal{B}^{(h)}$\; $\mathcal{T}_{p}^{(h)}\longleftarrow \mathcal{T}^{(descent)}\cap V_{h}$\; Denote $H^{(h)}$ as an induced subgraph of $H$, where $V_{H^{(h)}}=V_h\cup \mathcal{B}_{p}^{(h)}$\; \FMain{$H^{(h)}$, $P$, $\mathcal{B}_{p}^{(h)}$, $\mathcal{T}_{p}^{(h)}$, $v_c$}\; } \textbf{return} $T,P$\; } \textbf{End Function} \caption{Inference of molecular NCE grammar production rules} \end{algorithm} \newpage \section{Basic statistics of the inferred molecular NCE grammars} \label{bsmg} First, we report the basic statistics of the molecular NCE grammars inferred from the ZINC250k dataset. To check the generalization ability of the molecular NCE grammars, we parsed the molecules in the training data. From the 220,011 training molecules, we obtained 1,775 production rules. To investigate the coverage rate of the grammar, we parsed the 5,000 molecules in the test data using the production rules inferred from the training data to estimate the percentage of molecules that cannot be represented by the inferred grammar. The result shows that only 3 out of the 5,000 molecules cannot be parsed. Our coverage rate is higher than the one achieved by the MHGs and the number of our production rules is smaller. Next, we inferred grammatical production rules from all 250k ZINC250k molecules, resulting in 1,838 production rules in total. Each molecule is associated with 28 production rules on average. The maximum number of production rules associated with a molecule is 51. For the antibiotic dataset, we extracted production rules from all known molecules. We parsed the molecules starting from different nodes to extend the number of training production sequences. 3,897 production rules were obtained from the dataset. For the data provided by GuacaMol, 7,256 production rules were obtained from the training set, leaving 293/238,708 molecules in the test set uncovered.
In comparison, 13,110 production rules were obtained for the MHGs and 1,088/238,708 molecules were not covered by the inferred MHG. When training the generation model, we set $L_{max}$ as the maximum number of production rules that a molecule in the dataset may be associated with. During the test, we set $L_{max}$ as $\infty$ in the experiments on ZINC250k and GuacaMol, but in the antibacterial experiment, considering the application scope of the classifier, we set $L_{max}$ to the same value as in the training process. \section{Experimental settings of the baseline methods} \label{baseline} Four state-of-the-art methods are compared with our method. 1) Junction tree VAE (JT-VAE) is a state-of-the-art algorithm for generating molecular graphs under the VAE framework. The basic idea of JT-VAE is to generate molecular graphs cluster by cluster and join each generated cluster using a greedy search. JT-VAE can generate molecules with 100\% validity and it outperformed the previous methods such as Syntax-directed VAE and grammar VAE in property optimization and constrained property optimization. 2) Graph convolutional policy network (GCPN) aims to generate molecules atom by atom and optimize the properties of molecules by RL. As chemical validity cannot be guaranteed intrinsically in GCPN, it checks the validity of the graph in each step and discards invalid parts. A beam search is used in GCPN to improve sampling efficiency. GCPN achieved much better performance in property optimization, property range targeting and constrained optimization than the previous methods including JT-VAE. 3) Molecular hypergraph grammar variational autoencoder (MHG-VAE) uses molecular hypergraph grammars (MHGs) to assist the generation of molecular graphs and focuses on generating molecules with limited property evaluations.
MHG-VAE uses an MHG as the prior of its VAE model, and achieved better performance than GCPN and JT-VAE under the limited property evaluation setting, but showed no advantage over other methods when property evaluation was unlimited. 4) Molecule Swarm Optimization (MSO) is a state-of-the-art algorithm in multi-objective molecular optimization with the particle swarm optimization algorithm and achieved excellent performance on the benchmarks provided by GuacaMol. The code of the baselines was downloaded from \href{https://github.com/bowenliu16/rl_graph_generation}{GCPN}, \href{https://github.com/wengong-jin/icml18-jtnn}{JT-VAE}, \href{https://github.com/ibm-research-tokyo/graph_grammar}{MHG-VAE} and \href{https://github.com/jrwnter/mso}{MSO}. {\bf Property optimization with unlimited property evaluations.} The results of GCPN were copied from \cite{you2018graph}. As JT-VAE provided the molecules it generated when optimizing the penalized logP, we obtained the results directly by scoring the provided molecules. As for the task of optimizing QED, we set the objective function as the QED score and ran the code of JT-VAE with the default setting ten times to generate novel molecules. The results were obtained by summarizing all the molecules generated in the ten runs. For MHG-VAE, we copied its results in optimizing penalized logP from \cite{kajino2019molecular} and obtained the results in optimizing QED by running its code in the default setting with the QED score as the objective function. For MSO, to fairly compare with our method, the results in Table 1 were obtained by constraining the maximum number of atoms to 51, and the best hyperparameters used in the corresponding paper \cite{winter2019efficient} were adopted in our experiments. We ran MSO 100 times and merged all the obtained molecules as the results.
As a comparison, the results of MSO without constraints on the number of atoms as well as the results of our method under relaxed constraints are shown in Table \ref{ablation}. {\bf Constrained property optimization.} The results of all baselines were copied from the corresponding papers \cite{you2018graph,kajino2019molecular,jin2018junction}. {\bf Comprehensive evaluations with GuacaMol.} The results of all baselines were copied from the corresponding papers \cite{winter2019efficient, brown2019guacamol}. {\bf Property range targeting.} The results of GCPN and JT-VAE were directly copied from \cite{you2018graph}. {\bf Property optimization with limited evaluations.} The results of JT-VAE and GCPN were copied from \cite{kajino2019molecular}. For MHG-VAE, we ran the code ten times and took the first 250 molecules each time, with the same hyperparameters used in \cite{kajino2019molecular}. For MSO, we ran their code ten times and took the first 500 molecules each time, with the default hyperparameters. \section{Evaluation of antibacterial properties} \label{ins} Enzymes are biological catalysts. A protease is an enzyme that performs proteolysis, that is, it triggers protein catabolism by hydrolysis of the peptide bonds that link amino acids together in a polypeptide chain. A kinase is an enzyme that catalyzes the transfer of phosphate groups from high-energy, phosphate-donating molecules to specific substrates. As enzymes play an important role in bacterial activities, molecules with high enzyme inhibitor scores, protease inhibitor scores or kinase inhibitor scores are thought to be high-potential candidates for antibiotics. The inhibitor scores were computed by using the \href{https://www.molinspiration.com/cgi-bin/properties}{Molinspiration online server}. The larger the score, the higher the probability that the molecule is active. In particular, molecules with positive scores are usually thought to be active.
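Applying the bioactivity thresholds adopted in this paper (scores above 0.2 active, above 0.5 highly active) to the six tabulated molecules can be sketched as follows; the scores are copied from Table \ref{anti}, which lists only six of the ten reported molecules:

```python
# (KI, PI, EI) inhibitor scores of the six molecules listed in Table (anti).
scores = {
    "M1": (-0.38, 0.56, 0.25),
    "M2": (-0.20, 0.54, 0.23),
    "M3": (-0.34, 0.55, 0.15),
    "M4": (-0.24, 0.63, 0.09),
    "M5": (-0.16, 0.66, 0.16),
    "M6": (-0.24, 0.66, 0.30),
}
active = [m for m, s in scores.items() if any(x > 0.2 for x in s)]
highly_active = [m for m, s in scores.items() if any(x > 0.5 for x in s)]
print(len(active), len(highly_active))  # → 6 6
```

Every molecule in the table clears both thresholds through its protease inhibitor score.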
In our experiment, we adopted the thresholds used by Molinspiration and regarded those with scores larger than 0.2 as active molecules and those with scores larger than 0.5 as highly active molecules. \newpage \section{Supplementary details of model training} \label{ModelTrain} The model is pre-trained with known molecules by maximizing the likelihood and then trained for each optimization task. The hyperparameters in reward functions are optimized for each task independently. For tasks with unlimited property evaluations, the other hyperparameters are optimized on the penalized logP optimization task. The hyperparameters for the optimization with limited property evaluations are optimized independently. For each task, the best molecules found by the policy are used as known trajectories to train the model to accelerate convergence. With a 1080Ti GPU, the pre-training on ZINC250k took around 27 hours and the optimization stages took 30 minutes to 24 hours depending on the task. \newpage \section{Supplementary results} \label{sresult} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{LogpUnlimited.png} \caption{The 20 molecules with the highest penalized logP scores generated by MNCE-RL in optimizing the penalized logP score with unlimited property evaluations. The diversity of 5000 molecules is 0.722.} \label{optlogp} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1.0\textwidth]{QED.png} \caption{The 20 molecules with the highest QED scores generated by MNCE-RL in optimizing the QED score with unlimited property evaluations.
The diversity of 5000 molecules is 0.870.} \label{optqed} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{ConOpt.png} \caption{Five target molecules (the first column) in constrained optimization and their corresponding optimized molecules generated by MNCE-RL (the second and the third columns).} \label{constrain} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{limited.png} \caption{The best penalized logP scores of the molecules found by different methods depending on the number of function evaluations.} \label{limited} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{LogpLimited.png} \caption{The 20 molecules with the highest penalized logP scores generated by MNCE-RL in optimizing the penalized logP score with limited property evaluations.} \label{optloplimited} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Antibiotic.png} \caption{The ten molecules with the highest scores assigned by the classifier in generating candidates of antibiotics and their corresponding property scores.} \label{antibio} \end{figure} \begin{table}[h] \setlength\tabcolsep{1.9pt} \caption{The maximum penalized logP scores with different $L_{max}$ values. 
The results of MSO are copied from the corresponding paper \cite{winter2019efficient}.} \label{ablation} \centering \begin{tabular*}{0.9\textwidth}{ccccccc} \toprule \multirow{3}{*}{Method} &\multicolumn{6}{c}{Penalized logP}\\ \cmidrule(lr){2-7} & $1^{st}$ &$2^{nd}$ &$3^{rd}$&$50^{th}$&\vtop{\hbox{\strut Top 50}\hbox{\strut\ \ Avg.}} &Validity\\ \cmidrule(lr){1-1}\cmidrule(lr){2-7} MSO (no constraints on the number of atoms)&26.10&-&-&-&-&-\\ \midrule MNCE-RL ($L_{max}=51$) &18.33&18.18&18.16&17.52&17.76&{\bf 100\%}\\ MNCE-RL ($L_{max}=90$) &{ 28.09}&{28.04}&{28.00}&{26.52}&{26.99}&{\bf 100\%}\\ MNCE-RL ($L_{max}=110$) &{\bf34.06}&{\bf34.04}&{\bf33.92}&{\bf32.96}&{\bf33.33}&{\bf100\%}\\ \bottomrule \end{tabular*} \end{table} \end{appendices} \newpage \bibliographystyle{abbrv}
\section{Introduction} The synthesis of economics and physics has given rise to the new subject of Econophysics \cite{stanley}. Most studies in econophysics have focused on the financial markets \cite{voit,bouchaud} and on financial instruments and their derivatives \cite{bebcup,bebcup2}. Microeconomics is one of the pillars of modern economic theory and studies the interaction of consumers and producers of commodities \cite{varian}, \cite{jehle}, \cite{green}. There is increasing research on the application of statistical physics to economics \cite{chak2,gall,haven1,chak1} and this paper is a continuation of such studies. Let $\textbf{q}=(q_1,q_2,..,q_N)$ be the quantity vector, where $q_i>0$ is the quantity of the commodity labeled by $i$, with $i=1,2,..,N$; it can be kilograms of wheat or the number of automobiles. The commodity price vector is $\textbf{p}=(p_1,p_2,..,p_N)$, where $p_i>0$ is the price of a unit of the commodity; it can be dollars per kilogram or dollars per automobile. One of the fundamental problems of microeconomics is to determine the dependence of the quantities $\textbf{q}$ purchased on the market prices $\textbf{p}$. In most studies of microeconomics, at a given instant, the quantity and price of a commodity are taken to be determinate quantities. Microeconomics studies the (deterministic) equilibrium values of the quantities and prices of commodities as well as their time evolution. A statistical generalization of microeconomics is made by considering the quantities ${q}_i(t)$ and prices ${p}_i(t)$ to be independent random variables for each instant of time, namely \textit{stochastic variables}. A possible reason for prices to be random is that, similar to the prices of equities, the prices of commodities incorporate all the market information and result in the traded prices. In the absence of new information, any departures from the traded prices should hence be indeterminate, random and uncertain.
Furthermore, market prices are not in equilibrium, but rather have a (random) evolution in time $t$ that can have an overall drift reflecting market sentiment. Market prices may not contain all the market information and the source of randomness of market prices may have other explanations, such as the existence of `sticky' prices \cite{sticky}. In statistical microeconomics, the supply $\mathcal{S}[\textbf{p}]$ and demand $\mathcal{D}[\textbf{p}]$ of commodities at market prices $\textbf{p}$ are the starting point for analyzing the behavior of the producers and consumers of commodities. The competing tendencies of demand and supply, namely that demand increases when prices fall whereas supply increases when prices rise, are reflected in the traded prices. In fact, in most microeconomics texts, the market commodity price is taken to be the value for which supply is equal to demand. Supply and demand are inseparable, with each determining the other. The view taken in this paper is that supply and demand are two facets of the same entity, namely a microeconomic \textit{potential function} $\mathcal{V}[\textbf{p}]$. Using the analogy from mechanics, a potential function $\mathcal{V}[\textbf{p}]$ is postulated that \textit{combines} supply and demand into a single entity and embodies the competing effects of both supply and demand. As will be discussed later, both the supply and demand functions are dimensionless and hence can be consistently added together. The potential is chosen to be the sum of supply and demand, namely \begin{eqnarray} \label{defpot} \mathcal{V}[\textbf{p}]=\mathcal{D}[\textbf{p}]+\mathcal{S}[\textbf{p}] \end{eqnarray} The potential function $\mathcal{V}[\textbf{p}]$, similar to mechanics, drives the evolution of market prices.
For the special case when the prices are constant (time independent) -- given by the constant prices $\textbf{p}_0=(p_{01},p_{02},..,p_{0N})$ -- the prices \textit{minimize the value} of the potential; namely, $\mathcal{V}[\textbf{p}_0]$ is a minimum of $\mathcal{V}[\textbf{p}]$. In other words, in the framework of statistical microeconomics, stationary prices are determined by the minimization of the microeconomic potential, which replaces the standard microeconomic procedure of setting supply equal to demand \cite{varian}. The full dynamics of market prices is determined by assigning a \textbf{joint probability distribution} for all possible evolutions of the stochastic market prices. In analogy with quantum mechanics and classical statistical mechanics, it is \textit{postulated} that the probability of the stochastic evolution of market prices is proportional to the Boltzmann distribution, namely \begin{eqnarray} \label{boltzprop} \text{Joint probability distribution}~~\propto~~\exp\{-\mathcal{A}[\textbf{p}]\} \end{eqnarray} where the action functional $\mathcal{A}[\textbf{p}]$ determines the likelihood of the evolution of all the different values taken by all the prices.
In analogy with mechanics, the action functional is taken to be the sum of the potential term $\mathcal{V}[p]$ with a \textit{kinetic term} $\mathcal{T}$, namely \begin{eqnarray} \label{microaction} \mathcal{A}[\textbf{p}]=\int_{-\infty}^{+\infty} dt\,\mathcal{L}(t)=\int_{-\infty}^{+\infty} dt\,\Big(\mathcal{T}[\textbf{p}(t)]+\mathcal{V}[\textbf{p}(t)]\Big) \end{eqnarray} with the Lagrangian given by \begin{eqnarray} \label{microlagrang} &&\mathcal{L}(t) = \mathcal{T}[\textbf{p}(t)]+\mathcal{V}[\textbf{p}(t)] \end{eqnarray} The kinetic term $\mathcal{T}[\textbf{p}(t)]$ contains the time derivatives of the prices and, together with the potential function, determines the time dependence of the stochastic prices; in particular, $\exp\{-\mathcal{A}[\textbf{p}]\}$ determines the likelihood of the different random trajectories of the random prices. Note that for all values of the prices $\mathcal{A}[\textbf{p}]>0$; the minimum value of $\mathcal{A}[\textbf{p}]$ has no significance, with the only requirement being that the minimum value is finite; by adding a constant, the minimum value of $\mathcal{A}[\textbf{p}]$ can always be taken to be zero. To examine the specific characteristics of the statistical formulation of microeconomics, the total budget $m$ of a typical aggregate consumer is introduced as an expansion parameter. In particular, the correlation of the prices is studied as a perturbative expansion in a power series in $1/m$. The perturbation expansion shows that the average prices of the model, to leading order in $1/m$, are equal to the time independent stationary prices $\textbf{p}_0=(p_{01},p_{02},..,p_{0N})$ that minimize the potential. The series expansion of the unequal time price correlator -- in a power series in $1/m$ -- can be generated using the technique of Gaussian path integration. The model can be calibrated by comparing the model's unequal time correlation function with the empirical correlation of market commodity prices.
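To make the action functional concrete, the action of a discretized single-commodity price path can be evaluated numerically. The quadratic kinetic term and the toy potential below are illustrative assumptions, since the text leaves $\mathcal{T}$ and $\mathcal{V}$ general at this point:

```python
import math

def action(path, dt, potential):
    # Discretized action: sum_t dt * ( (1/2) ((p_{t+1}-p_t)/dt)^2 + V(p_t) ).
    kinetic = sum(0.5 * ((path[t + 1] - path[t]) / dt) ** 2
                  for t in range(len(path) - 1))
    pot = sum(potential(p) for p in path)
    return dt * (kinetic + pot)

p0 = 2.0                      # hypothetical stationary price minimizing V
V = lambda p: (p - p0) ** 2   # toy potential with its minimum at p0

flat = [p0] * 50
noisy = [p0 + 0.3 * math.sin(0.5 * t) for t in range(50)]
# The stationary path has the smallest action, hence the largest Boltzmann weight.
print(action(flat, 0.1, V) < action(noisy, 0.1, V))  # → True
```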
\section{The Utility Function}\label{sec:utility} The utility function $\mathcal{U}$ is one of the fundamental concepts in microeconomics and depends on the consumption quantity vector $\textbf{q}=(q_1,q_2,..,q_N)$ of commodities, that is, $\mathcal{U}=\mathcal{U}[\textbf{q}]$. The utility function is a \textit{dimensionless real number} that quantifies the utility of a commodity to the consumer, which is necessarily subjective. In all discussions in this paper, the utility function $\mathcal{U}[\textbf{q}]$ refers to an `aggregate' consumer that reflects the norms of consumption of a given society -- and is not related to the subjective preferences of any specific individual. A fundamental property of a utility function results from the intuitive expectation that a consumer gets more satisfaction by consuming greater quantities of a commodity; namely \begin{eqnarray} \label{fundproputility} q'_i>q_i~~~ \text{if and only if}~~~ \mathcal{U}[q_1,q_2,..,q'_i,..q_N]>\mathcal{U}[q_1,q_2,..,q_i,..q_N] \end{eqnarray} Marginal utility is defined by the change in utility due to a change in the quantity consumed and is required to be positive, namely \begin{eqnarray} \label{marginalutility} \text{Marginal utility}:~~\frac{\partial \mathcal{U}[\textbf{q}]}{\partial q_i} >0 \end{eqnarray} The fact that marginal utility is a positive quantity follows from Eq. \ref{fundproputility}. The utility function is required to yield the so-called \textit{diminishing marginal utility}, namely that consuming larger and larger quantities yields less and less marginal utility to the consumer. Hence \begin{eqnarray} \label{dimmarginalutility} \frac{\partial^2 \mathcal{U}[\textbf{q}]}{\partial q_i^2} <0~~:~~\text{Diminishing marginal utility} \end{eqnarray} There is a \textit{measurable consequence} of the utility that a consumer derives from a commodity -- namely, the \textit{price} the consumer is willing to pay for the said commodity.
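The two defining properties, positive and diminishing marginal utility, can be checked numerically for a simple power-law utility; the functional form and numbers are purely illustrative:

```python
def numderiv(f, x, h=1e-5):
    # Central finite-difference estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

a, d = 0.5, 2.0                 # illustrative single-commodity parameters
U = lambda q: 0.5 * d * q ** a  # U(q) = (d/2) q^a with 0 < a < 1

qs = [0.5, 1.0, 2.0, 4.0]
marginal = [numderiv(U, q) for q in qs]
print(all(mu > 0 for mu in marginal))                          # positive marginal utility
print(all(m2 < m1 for m1, m2 in zip(marginal, marginal[1:])))  # diminishing marginal utility
```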
Let the total money available to the consumer be $m$; the consumer then has the following constraint on the quantities $q_i$ that are consumed, namely that \begin{eqnarray} \label{pqconstraint} \sum_{i=1}^Np_iq_i=m~~:~~\text{Budget constraint} \end{eqnarray} Given the finite budget of every consumer, the preferences of the consumer are reflected in the allocation of resources made by the consumer and result in different prices for different commodities. The utility function literally compares apples with oranges, since the consumer might prefer one commodity to another; to compare qualitatively different commodities, the utility function can only be a dimensionless function of the dimensionless quantities $A_iq_i$, where $A_i$ has the inverse dimension of the quantity $q_i$; for example, if $q_i$ is the number of automobiles, the parameter $A_i$ has the dimension of per automobile; the numerical value of $A_i$ (per automobile) represents the importance of this commodity in the utility function. \section{The Demand Function} \label{sec:demand} The demand function $\mathcal{D}[\textbf{p}]$ yields the prices $p_i$ for commodity $i$ that the consumer is willing to pay, given the budget constraint. Clearly, the demand function must be a decreasing function of prices, since, due to the consumers' budget constraint, the higher the price of a commodity, the smaller the quantity that is bought by consumers. Hence \begin{eqnarray} \label{fundpropdemand} p'_i>p_i~~~ \text{if and only if}~~~ \mathcal{D}[p_1,p_2,..,p'_i,..p_N]<\mathcal{D}[p_1,p_2,..,p_i,..p_N] \end{eqnarray} The demand function $\mathcal{D}[\textbf{p}]$ can be derived from the utility function and describes the empirical market demand of commodities. The aggregate consumer will consume quantities of commodities that maximize (optimize) the value of his (or her) utility function, subject to the budget constraint given in Eq. \ref{pqconstraint}.
This yields, with $\lambda$ a Lagrange multiplier enforcing the budget constraint, \begin{eqnarray} \label{utilitydemandconst1} \frac{\partial \mathcal{U}[\textbf{q}]}{\partial q_i}\Big{|}_{\textbf{q}=\bar{\textbf{q}}}=\lambda p_i\\ \label{utilitydemandconst2} \text{Constraint}:~\sum_{i=1}^Np_iq_i=m \end{eqnarray} Simultaneously solving Equations \ref{utilitydemandconst1} and \ref{utilitydemandconst2} yields the value of $\bar{\textbf{q}}$ that maximizes the consumer's utility function for a given budget, namely \begin{eqnarray} \label{utidemandderiv} \bar{\textbf{q}}=\bar{\textbf{q}}(\textbf{p},m)~~\Rightarrow ~~ \mathcal{D}[\textbf{p},m]=\mathcal{U}[\bar{\textbf{q}}(\textbf{p},m)] \end{eqnarray} Note that the demand function is dimensionless since the utility function is dimensionless. An example of a utility function and its corresponding demand function is analyzed in the Appendix; starting from a utility function, the demand is derived using the procedure of constrained optimization. \subsection{Duality} One can equivalently start from the demand function, using the concept of \textbf{duality}. The demand function, together with the budget constraint, yields the following maximization problem (with $\lambda$ again a Lagrange multiplier) \begin{eqnarray} \label{demandconst} \frac{\partial \mathcal{D}[\textbf{p},m]}{\partial p_i}\Big{|}_{\textbf{p}=\bar{\textbf{p}}}=\lambda q_i\\ \label{demandconst2} \text{Constraint}:~\sum_{i=1}^Np_iq_i=m \end{eqnarray} Simultaneously solving Equations \ref{demandconst} and \ref{demandconst2} yields the optimizing price $\bar{\textbf{p}}$ \begin{eqnarray} \bar{\textbf{p}}=\bar{\textbf{p}}(\textbf{q},m) \end{eqnarray} and yields the utility function \begin{eqnarray} \label{demdndualuti} \mathcal{U}[\textbf{q}]=\mathcal{D}[\bar{\textbf{p}}(\textbf{q},m),m] \end{eqnarray} One can view the demand function $\mathcal{D}[\textbf{p},m]$ as an \textit{indirect utility function}. In deriving the utility function from the demand function, as given in Eq.
\ref{demdndualuti}, the quantities of commodities $q_i$ were taken to be fixed and one maximized the demand function over all prices $p_i$. In contrast, in deriving the demand function from the utility function, as in Eq. \ref{utidemandderiv}, the commodity prices $p_i$ were taken to be given and the quantities $q_i$ were varied to maximize utility. \section{A Model Demand Function} \label{sec:demandutily} Consider the following model for the demand function, namely \begin{eqnarray} \label{modeldemand} \mathcal{D}[\textbf{p}]=\frac{m}{2}\sum_{i=1}^N\frac{d_i}{p_i^{a_i}} ~~;~~a_i,~d_i>0 \end{eqnarray} The coefficient $a_i$ is an index that characterizes the demand for a specific commodity; the coefficients $d_i$ are determined by the relative importance of quantity $q_i$ in the demand for the total collection of $N$ commodities. All the coefficients $d_i$ are positive since the demand function is positive, namely $\mathcal{D}[\textbf{p}]>0$. The form of the demand function given in Eq. \ref{modeldemand} is quite realistic and, for example, has been used in an empirical study \cite{gas1} on the dependence of the demand for gasoline on its price; for the US market it was found that the index $a_\text{petrol}=0.075$, and the coefficient $md_\text{petrol}$ was taken to be a function of interest rates, inflation, per capita disposable income and so on. The demand function $\mathcal{D}[\textbf{p}]$ is dimensionless and $m$ has the dimension of \$. The demand function clearly satisfies the condition stated in Eq. \ref{fundpropdemand}. The total demand is taken to be linearly proportional to the total budget $m$; this fulfills the requirement that if the consumer has no buying power there is no demand. The concept of `latent demand' that exists in the absence of buying power can be incorporated into the model by giving a time dependence to the budget constraint, namely $m=m(t)$; this is a feature that can be included in a more elaborate analysis of the model.
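The monotonicity requirement of Eq. \ref{fundpropdemand} can be checked numerically for this model demand function; the Python sketch below uses illustrative parameter values (assumptions for the sake of the check, not taken from the empirical study cited above).

```python
def demand(p, d, a, m):
    """Model demand function D[p] = (m/2) * sum_i d_i / p_i^{a_i}."""
    return (m/2.0)*sum(di/pi**ai for di, pi, ai in zip(d, p, a))

# illustrative (assumed) parameters for three commodities
m = 100.0
d = [2.0, 1.0, 0.5]
a = [0.5, 1.0, 1.5]
p = [1.0, 2.0, 4.0]

D0 = demand(p, d, a, m)
# raising any single price must lower total demand (Eq. fundpropdemand)
for i in range(len(p)):
    p_hi = list(p)
    p_hi[i] *= 1.1
    assert demand(p_hi, d, a, m) < D0
```

The check passes for any positive $d_i$, $a_i$, since each term $d_i/p_i^{a_i}$ is strictly decreasing in $p_i$.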
The utility function is obtained from the demand function using the duality given in Eqs. \ref{demandconst} and \ref{demandconst2}. Using the method of the Lagrange multiplier, define an auxiliary function $B$ by \begin{eqnarray*} B= \sum_{i=1}^N\frac{d_i}{p_i^{a_i}} +\lambda(\sum_{i=1}^Np_iq_i-m) \end{eqnarray*} Extremizing $B$ with respect to both $p_i$ and $\lambda$, that is \begin{eqnarray*} \frac{\partial B}{\partial p_i}=0=\frac{\partial B}{\partial \lambda} \end{eqnarray*} yields \begin{eqnarray} \label{lambdacon} \frac{a_id_i}{p_i^{a_i}}=\lambda p_iq_i~~;~~\sum_{i=1}^Np_iq_i=m \end{eqnarray} From the above equations \begin{eqnarray*} \lambda=\frac{1}{m} \sum_{i=1}^N\frac{a_id_i}{p_i^{a_i}} \end{eqnarray*} and then, substituting $\lambda$ back into Eq. \ref{lambdacon} yields the minimizing value of the prices $\bar{p}_i$ given by \begin{eqnarray} \label{eqnutility} m\frac{a_id_i}{\bar{p}_i^{a_i}}=\bar{p}_iq_i\sum_{j=1}^N\frac{a_jd_j}{\bar{p}_j^{a_j}} \end{eqnarray} \subsection{Model Utility Function} To obtain the utility function, the value of $\bar{p}_i$ is substituted into the demand function given in Eq. \ref{modeldemand}. To explicitly obtain the minimizing value of the prices $\bar{p}_i$, assume for simplicity that $a_i=a$; then solving Eq. \ref{eqnutility} yields the following $\bar{p}_i$ \begin{eqnarray} \label{finalpr} \bar{p}_i=C \left(\frac{d_i}{q_i} \right)^{1/(a+1)}~~;~~C=\frac{m}{\sum_i {d_i^{1/(a+1)} q_i^{a/(a+1)}}} \end{eqnarray} and yields the utility function \begin{eqnarray} \label{utilitysqrt} &&\mathcal{U}[\textbf{q}]= \mathcal{D}[\bar{\textbf{p}}(\textbf{q},m)]=\frac{m^{1-a}}{2}\left(\sum_i d_i^{1/(a+1)} q_i^{a/(a+1)}\right)^{a+1}~~;~~a,d_i>0 \end{eqnarray} For a single commodity, the utility function is given by \begin{eqnarray} \label{utilityone} &&\mathcal{U}[q]= \mathcal{D}[\bar{p}(q,m)]=\frac{m^{1-a}d}{2} q^{a} \end{eqnarray} Note that the utility function depends on the budget constraint $m$ for all values of $a$ except the special case of $a=1$.
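The closed-form results of Eqs. \ref{finalpr} and \ref{utilitysqrt} can be verified numerically: the optimizing prices $\bar{p}_i$ must exhaust the budget $m$, and substituting them into the model demand function must reproduce the utility function. The parameter values below are illustrative assumptions.

```python
# illustrative (assumed) parameters: budget m, common index a, and d_i, q_i
m, a = 100.0, 0.5
d = [2.0, 1.0, 0.5]
q = [3.0, 1.0, 4.0]

# S = sum_i d_i^{1/(a+1)} q_i^{a/(a+1)}, so that C = m/S   (Eq. finalpr)
S = sum(di**(1/(a + 1))*qi**(a/(a + 1)) for di, qi in zip(d, q))
C = m/S
pbar = [C*(di/qi)**(1/(a + 1)) for di, qi in zip(d, q)]

# the optimizing prices exhaust the budget: sum_i pbar_i q_i = m
assert abs(sum(pi*qi for pi, qi in zip(pbar, q)) - m) < 1e-9

# substituting pbar into the demand function reproduces Eq. utilitysqrt
D = (m/2.0)*sum(di/pi**a for di, pi in zip(d, pbar))
U = (m**(1 - a)/2.0)*S**(a + 1)
assert abs(D - U) < 1e-9
```

Both identities hold exactly (up to floating-point error), since $\bar{p}_iq_i=C\,d_i^{1/(a+1)}q_i^{a/(a+1)}$ sums to $CS=m$.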
The utility function is sometimes considered to be independent of the budget constraint; however, the utility function being a function of $m$ is also consistent, since the budget constraint clearly influences the preferences of the aggregate consumer. The model utility function given in Eq. \ref{utilitysqrt} clearly fulfills the requirement given in Eq. \ref{fundproputility}, namely that \begin{eqnarray} &&\frac{m^{1-a}}{2}\left(\sum_i d_i^{1/(a+1)} q_i^{a/(a+1)}\right)^{a+1} >\frac{m^{1-a}}{2}\left(\sum_i d_i^{1/(a+1)} \tilde{q}_i^{a/(a+1)}\right)^{a+1} \\ &&~~~~~~~~~~~~~\text{If and only if}~~q_i>\tilde{q}_i~~ \text{for any}~i \end{eqnarray} Keeping in mind that $q_i>0$, the marginal utility, as expected, is positive: it can be shown that for the utility function given in Eq. \ref{utilitysqrt} \begin{eqnarray} &&\frac{\partial \mathcal{U}[\textbf{q}]}{\partial q_i}>0 \end{eqnarray} The utility function given in Eq. \ref{utilitysqrt} exhibits diminishing marginal utility since \begin{eqnarray} \label{marutility} &&\frac{\partial^2 \mathcal{U}[\textbf{q}]}{\partial q_i^2}<0 \end{eqnarray} \section{The Supply function} \label{sec:supply} The supply function $\mathcal{S}$ depends on the prices of commodities, and determines the quantity of a commodity that producers are willing and able to sell for a given price. Hence $\mathcal{S}=\mathcal{S}[\textbf{p}]$, where $\textbf{p}=(p_1,p_2,..,p_N)$. Clearly, the supply function must be an increasing function of prices, since the higher the prices, the more is the producer of a commodity willing to supply the said commodity. Hence \begin{eqnarray} \label{fundpropsupply} p'_i>p_i~~~ \text{if and only if}~~~ \mathcal{S}[p_1,p_2,..,p'_i,..p_N]>\mathcal{S}[p_1,p_2,..,p_i,..p_N] \end{eqnarray} The supply function $\mathcal{S}[\textbf{p}]$ must be dimensionless since the relative supply of qualitatively different commodities is aggregated into a single supply function.
Hence, the prices of commodities need to enter the supply function in dimensionless combinations. The supply function is taken to be a function \textit{independent} of the demand function, and is fixed by the drive of capital in seeking returns by engaging in production. This assumption could change in a planned economy and is not explored in this paper. The total \textit{supply function} in terms of commodity quantities $q_i$ is given by \begin{eqnarray} \label{consump} \mathcal{F}[\textbf{q}]=\frac{1}{2}\sum_{i=1}^N \alpha_iq_i \end{eqnarray} The coefficients $\alpha_i$ are determined by the relative importance of quantity $q_i$ in the supply to the market of the collection of $N$ commodities. The total profit from the production of commodities is given by \begin{eqnarray} \label{tporfit} \pi[\textbf{q}]=\sum_{i=1}^N p_iq_i-C[\textbf{q}] \end{eqnarray} where $C[\textbf{q}]$ is the cost function. The supply function $\mathcal{F}[\textbf{q}]$, namely the quantities produced, is fixed by the company producing only such quantities of commodities that maximize its profit. More precisely, given the value of $p_i$, the quantity $q_i$ is determined by maximizing $\pi[\textbf{q}]$. Hence \begin{eqnarray} \label{maxprofit} \frac{\partial \pi[\textbf{q}]}{\partial q_i}\Big{|}_{\textbf{q}=\bar{\textbf{q}}}=0~~\Rightarrow~~\bar{\textbf{q}}=\bar{\textbf{q}}(\textbf{p}) \end{eqnarray} The supply function in terms of market prices $\textbf{p}$, denoted by $\mathcal{S}[\textbf{p}]$, is given by the following \begin{eqnarray} \label{supplqp} \mathcal{S}[\textbf{p}]=\mathcal{F}[\bar{\textbf{q}}(\textbf{p})] \end{eqnarray} Consider the cost function \begin{eqnarray} \label{modelconsump} &&C[\textbf{q}]=\sum_{i=1}^N \frac{b_i}{1+b_i} \beta_iq_i^{1+1/b_i} \end{eqnarray} where $\beta_i$ and $b_i$ are related to the cost of producing the commodities, the price of risk in undertaking production, plus the expected return on invested capital.
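For a single commodity, the profit implied by this cost function, $\pi(q)=pq-\frac{b}{1+b}\beta q^{1+1/b}$, is concave in $q$, so its maximizer can be located by a simple ternary search; the sketch below (with illustrative, assumed parameter values) recovers the first-order condition $p=\beta q^{1/b}$, i.e. $q=(p/\beta)^b$, derived in the following paragraph.

```python
# illustrative (assumed) parameters: price p, cost coefficients beta, b
p, beta, b = 3.0, 1.5, 2.0

def profit(q):
    """pi(q) = p q - b/(1+b) * beta * q^{1+1/b}; concave since pi'' < 0."""
    return p*q - (b/(1.0 + b))*beta*q**(1.0 + 1.0/b)

# ternary search for the maximum of the concave profit function
lo, hi = 0.0, 100.0
for _ in range(200):
    m1, m2 = lo + (hi - lo)/3.0, hi - (hi - lo)/3.0
    if profit(m1) < profit(m2):
        lo = m1
    else:
        hi = m2
q_num = 0.5*(lo + hi)
q_exact = (p/beta)**b      # first-order condition p = beta * q^{1/b}
assert abs(q_num - q_exact) < 1e-6
```

Concavity follows from $\pi''(q)=-(\beta/b)q^{1/b-1}<0$, so the ternary search converges to the unique maximizer.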
The profit of the company is the following \begin{eqnarray} \label{modelcost} && \pi[\textbf{q}]=\sum_{i=1}^N p_iq_i-\sum_{i=1}^N \frac{b_i}{1+b_i} \beta_iq_i^{1+1/b_i} \end{eqnarray} Maximizing profit yields \begin{eqnarray} \label{modelmaxprofit} \frac{\partial \pi[\textbf{q}]}{\partial q_i}\Big{|}_{\textbf{q}=\bar{\textbf{q}}}=0=p_i-\beta_i\bar{q}_i^{1/b_i}~~\Rightarrow ~~\bar{q}_i=\left(\frac{p_i}{\beta_i}\right)^{b_i} \end{eqnarray} Hence, from Eqs. \ref{consump}, \ref{supplqp} and \ref{modelmaxprofit}, the supply function is given by \begin{eqnarray} \label{supplyexplicit} \mathcal{S}[\textbf{p}]=\mathcal{F}[\bar{\textbf{q}}(\textbf{p})]=\frac{1}{2}\sum_{i=1}^N\alpha_i\bar{q}_i =\frac{1}{2}\sum_{i=1}^N \alpha_i\left(\frac{p_i}{\beta_i}\right)^{b_i} \end{eqnarray} Let $\alpha_i/\beta_i^{b_i}=ms_i$; the supply function is then given by \begin{eqnarray} \label{modelsupply} \mathcal{S}[\textbf{p}]=\frac{m}{2}\sum_{i=1}^N s_ip_i^{b_i} ~~;~~b_i,s_i>0 \end{eqnarray} The supply function is \textit{scaled} by the budget constraint $m$ of the aggregate consumer. The scaling is done with the view that the price offered for a commodity is meaningful only if the consumer has non-zero buying power. In the absence of consumer buying power, the effective supply of all commodities is zero. The supply function is dimensionless and positive valued, that is $\mathcal{S}[\textbf{p}]>0$, with the parameter $ms_i$ determining the relative quantity of supply of commodity $i$ with price $p_i$. \section{Price versus quantity in standard microeconomics} To avoid mixing up two different results: in standard microeconomics \cite{varian}, market prices are denoted by $\textbf{p}^*$ and the quantity traded by $\textbf{q}^*$; the market price and quantity traded are found by equating the supply of commodities (by the producers) to the demand for these commodities (by the consumers).
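The market-clearing prescription just described can be made concrete for a single commodity: Eq. \ref{finalpr} then reduces to the demand price $\bar{p}=m/q$ (the entire budget is spent on the one commodity), while the supply price is $p=\beta q^{1/b}$, so the clearing condition $m/q=\beta q^{1/b}$ has the closed-form solution $q^*=(m/\beta)^{b/(b+1)}$. A bisection sketch, with illustrative (assumed) parameter values:

```python
# illustrative (assumed) parameters: budget m, cost coefficients beta, b
m, beta, b = 100.0, 2.0, 3.0

def excess(q):
    """Demand price minus supply price for a single commodity."""
    return m/q - beta*q**(1.0/b)

# excess(q) is strictly decreasing, positive for small q and negative for
# large q, so the market-clearing quantity can be found by bisection
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = 0.5*(lo + hi)
    if excess(mid) > 0:
        lo = mid
    else:
        hi = mid
q_star = 0.5*(lo + hi)
p_star = m/q_star                          # market price
q_exact = (m/beta)**(b/(b + 1.0))          # closed form of m/q = beta q^(1/b)
assert abs(q_star - q_exact) < 1e-6
```

For several commodities the same bisection idea applies component-wise after fixing the normalization sum in Eq. \ref{finalpr} self-consistently.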
In contrast to standard microeconomics, in the statistical microeconomic approach, market prices are denoted by $\textbf{p}_0$ and the quantity traded is denoted by $\textbf{q}_0$; the relation of prices to quantities is not given by equating supply with demand, but instead is given by minimizing the microeconomic potential $\mathcal{V}[\textbf{p}]$ given in Eq. \ref{defpot} -- and is discussed in Section \ref{micromarket}. \begin{figure}[h] \centering \epsfig{file=supplydemond2D_2.eps, height=7cm, angle=0} \caption{The supply and demand for price and quantity, for one commodity. For large $q_i$, the demand price as a function of quantity goes as $p_i\simeq q_i^{-1/(a_i+1)}$ and the supply price goes as $p_i\simeq q_i^{1/b_i}$.} \label{microsupplydemandquantity} \end{figure} For the special case of $a_i=a$, recall from Eq. \ref{finalpr} that the price for a given quantity of a commodity that the consumer is willing to pay is the following \begin{eqnarray*} \bar{p}_i=\frac{m}{\sum_j {d_j^{1/(a+1)} q_j^{a/(a+1)}}}\left(\frac{d_i}{q_i} \right)^{1/(a+1)}~~:~~\text{Demand price versus quantity} \end{eqnarray*} Furthermore, the price at which a supplier is willing to sell a quantity of commodities, from Eq. \ref{modelmaxprofit}, is the following \begin{eqnarray*} p_i=\beta_i\bar{q}_i^{1/b}~~:~~\text{Supply price versus quantity} \end{eqnarray*} Figure \ref{microsupplydemandquantity} shows the relation of quantity to price for the supply and demand of a single commodity, with the intersection of the two yielding the market price and quantity $\textbf{q}^*,\textbf{p}^*$. \begin{figure}[h] \centering \epsfig{file=supplydemond_both.eps, height=6cm, angle=0} \caption{(a) Microeconomic supply and demand functions for one commodity, with $p_1=e^x$. (b) Supply and demand functions for two commodities, with $p_1=e^x$ and $p_2=e^y$.
The unique intersection point of the supply and demand curve is at the minimum of the line of intersection of the supply and demand surfaces.} \label{microsupplydemanddiag3} \end{figure} Hence, setting the quantity demanded of a commodity $q_i$, at market price $\textbf{p}^*$, to be equal to the supply given by $\bar{q}_i$, namely $q_i=q_i^*=\bar{q}_i$, the above Eqs. \ref{finalpr} and \ref{modelmaxprofit} yield that the quantities $q_i^*$ sold in the market are given by \begin{eqnarray} \label{pricquant} &&\frac{m}{\sum_i {d_i^{1/(a+1)} (q^*_{i})^{a/(a+1)}}} \left(\frac{d_i}{q^*_{i}} \right)^{1/(a+1)}=p^*_{i}=\beta_i(q^*_{i})^{1/b} \end{eqnarray} Solving the nonlinear equation given in Eq. \ref{pricquant} yields the following \begin{eqnarray} \label{pricquantgen} && \textbf{q}^*=\textbf{q}^*(\beta,\textbf{d})~~;~~ \textbf{p}^*=\textbf{p}^*(\beta,\textbf{d}) \end{eqnarray} Eq. \ref{pricquant} yields the quantities $\textbf{q}^*$ that are bought by the aggregate consumer at prices $\textbf{p}^*$. A graph of the supply and demand as a function of price, for one commodity and for two commodities, is shown in Figure \ref{microsupplydemanddiag3}. \section{Microeconomic potential} \label{micromarket} It is \textit{postulated} that the interplay of the supply and demand functions determines the stationary prices of commodities. The trade-off between supply and demand is encoded in the microeconomic potential $\mathcal{V}[\textbf{p}]$, given in Eq. \ref{defpot} and defined in terms of the market prices of commodities as the \textit{sum} of the demand and supply functions \begin{eqnarray*} \mathcal{V}[\textbf{p}]=\mathcal{D}[\textbf{p}]+\mathcal{S}[\textbf{p}] \end{eqnarray*} The potential $\mathcal{V}[\textbf{p}]$ is dimensionless since both $\mathcal{D}[\textbf{p}]$ and $\mathcal{S}[\textbf{p}]$ are dimensionless. The dependence of the demand and supply function on the market prices of commodities, given in Eqs.
\ref{fundpropdemand} and \ref{fundpropsupply}, yields the following general limiting behavior for the microeconomic potential \begin{eqnarray} \label{microvlimit} \mathcal{V}[\textbf{p}] \to \left\{ \begin{array}{l} \mathcal{D}[\textbf{p}] \to \infty ~~;~~p_i \to 0\\ \mathcal{S}[\textbf{p}] \to \infty~~;~~p_i \to \infty \end{array} \right. \end{eqnarray} The asymptotic behavior given in Eq. \ref{microvlimit} is due to the competing dependence of demand and supply on market prices. Hence, a \textit{minimum} value for $\mathcal{V}[\textbf{p}]$ always exists, for some prices $\textbf{p}_0$. It is shown in Eq. \ref{microcoavgprconstant} that $\textbf{p}_0$, to leading order for the model chosen, is equal to the average value of market prices. The value of the minimizing price vector $\textbf{p}_0$ is given by the following \begin{eqnarray} \label{micropotmin} \frac{\partial \mathcal{V}[\textbf{p}]}{\partial p_i}\Big{|}_{\textbf{p}=\textbf{p}_0}=0\\ \nonumber\\ \label{micropotminsupdem} \Rightarrow \frac{\partial \mathcal{D}[\textbf{p}]}{\partial p_i}\Big{|}_{\textbf{p}=\textbf{p}_0}=-\frac{\partial \mathcal{S}[\textbf{p}]}{\partial p_i}\Big{|}_{\textbf{p}=\textbf{p}_0} \end{eqnarray} In other words, as can be seen from Eq. \ref{micropotminsupdem}, the minimum value of the potential $\mathcal{V}[\textbf{p}]$ is attained at the price vector $\textbf{p}_0$ at which a small variation of prices yields a change of demand that is exactly opposite to the change of supply.
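For a single commodity, the minimization of the potential can be illustrated numerically with the model demand and supply functions of Eqs. \ref{modeldemand} and \ref{modelsupply}, for which $\mathcal{V}(p)=(m/2)(d/p^a+sp^b)$; the first-order condition gives $p_0=(ad/(bs))^{1/(a+b)}$, which can be compared with the price $(d/s)^{1/(a+b)}$ obtained by instead equating demand and supply. All parameter values below are illustrative assumptions.

```python
# illustrative (assumed) single-commodity parameters
m, d, s, a, b = 10.0, 4.0, 1.0, 0.5, 2.0

def V(p):
    """Model microeconomic potential V(p) = (m/2)(d/p^a + s p^b)."""
    return (m/2.0)*(d/p**a + s*p**b)

# ternary search for the minimum of the convex potential
lo, hi = 1e-6, 1e3
for _ in range(300):
    m1, m2 = lo + (hi - lo)/3.0, hi - (hi - lo)/3.0
    if V(m1) > V(m2):
        lo = m1
    else:
        hi = m2
p0_num = 0.5*(lo + hi)
p0_exact = (a*d/(b*s))**(1.0/(a + b))   # solves V'(p) = 0
p_star = (d/s)**(1.0/(a + b))           # from equating demand and supply instead
assert abs(p0_num - p0_exact) < 1e-6
assert abs(p0_exact - p_star) > 0.1     # the two prescriptions differ when a != b
```

The two prices coincide only when $a=b$, since the factor $(a/b)^{1/(a+b)}$ is then unity.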
\begin{figure}[h] \centering \epsfig{file=supplydemond3D.eps, height=7cm, angle=0} \caption{Microeconomic potential $\mathcal{V}[\textbf{p}]$ for two prices $p_1=e^x$ and $p_2=e^y$ showing a unique minimum value, given by the dot at the minimum of the surface.} \label{micrpotdiag} \end{figure} As can be seen from Figure \ref{micrpotdiag}, the microeconomic potential defines a surface whose minimization yields the market prices $\textbf{p}_0$; in contrast, in standard microeconomics, only the intersection of the supply and demand curves is relevant, as shown in Figure \ref{microsupplydemanddiag3}(b). Whether the minimizing prices $\textbf{p}_0$ are unique or not depends on the model chosen for $\mathcal{V}[\textbf{p}]$; since market prices are known to be unique, a requirement for all models in microeconomics is that they yield a unique value for $\textbf{p}_0$. Note that the price vector $\textbf{p}_0$ that minimizes the potential $\mathcal{V}[\textbf{p}]$ has no relation to the minimization carried out in Section \ref{sec:demandutily}, since in going from the demand to the utility function one is maximizing the demand function constrained by the budget $m$. In contrast, the minimization of $\mathcal{V}[\textbf{p}]$ is \textit{unconstrained} and fixes the market price as a function of the parameters of the model potential. One would like to have a time independent potential since then one can make unique predictions of the future movement of market prices of commodities. One can also introduce explicit time dependence in the potential $\mathcal{V}[\textbf{p}]$ to reflect major scheduled announcements, such as quarterly industrial output, employment figures, yearly budgets and so on. \section{Model of microeconomic potential} For the model chosen for the demand and supply functions given in Eqs.
\ref{modeldemand} and \ref{modelsupply} respectively, the microeconomic potential is given by \begin{eqnarray} \label{micropotenmodel} \mathcal{V}[\textbf{p}]&=&\mathcal{D}[\textbf{p}]+\mathcal{S}[\textbf{p}]\nonumber\\ &=&\frac{m}{2}\left[\sum_{i=1}^N\frac{d_i}{p_i^{a_i}} + \sum_{i=1}^N s_ip_i^{b_i}\right]~;~~d_i,s_i>0 ~;~~a_i,b_i>0 \end{eqnarray} The model microeconomic potential has the expected asymptotic behavior of Eq. \ref{microvlimit}, which is realized in the following manner for the model chosen \begin{eqnarray} \mathcal{V}[\textbf{p}] \to \left\{ \begin{array}{l} \mathcal{D}[\textbf{p}] \simeq 1/p_i^{a_i} \to \infty ~~;~~p_i \to 0\\ \mathcal{S}[\textbf{p}] \simeq p_i^{b_i} \to \infty~~;~~p_i \to \infty \end{array} \right. \end{eqnarray} Figure \ref{micrpotdiag} shows the shape of $\mathcal{V}[\textbf{p}]$ for the model given in Eq. \ref{micropotenmodel}; note the important feature that $\mathcal{V}[\textbf{p}]$ has a (unique) global minimum at $\textbf{p}_0$. The value of $\textbf{p}_0$ is obtained by minimizing $\mathcal{V}[\textbf{p}]$ and, from Eqs.
\ref{micropotenmodel} and \ref{micropotmin}, yields the following \begin{eqnarray} &&\frac{\partial \mathcal{V}[\textbf{p}]}{\partial p_i}\Big{|}_{\textbf{p}=\textbf{p}_0}=0 ~~\Rightarrow~~ -a_i\frac{d_i}{p_{0i}^{a_i+1}} + b_i s_ip_{0i}^{b_i-1}=0\nonumber\\ \label{minimamicropotenmodel} &&p_{0i}=\left(\frac{a_id_i}{b_is_i}\right)^{1/(a_i+b_i)} \end{eqnarray} In standard microeconomic theory, the market prices $\textbf{p}^*$ are fixed by equating demand to supply, shown graphically in Figure \ref{microsupplydemanddiag3}; for the model being considered, this yields the following \begin{eqnarray} \label{demandsuppleq} &&\mathcal{D}[\textbf{p}^*]=\mathcal{S}[\textbf{p}^*] ~~\Rightarrow~~\frac{d_i}{(p^*_{i})^{a_i}} = s_i(p^*_{i})^{b_i}~~\Rightarrow~~ p^*_{i}=\left(\frac{d_i}{s_i}\right)^{1/(a_i+b_i)} \end{eqnarray} Equating the supply and demand functions, shown graphically in Figure \ref{microsupplydemanddiag3}, yields a price $\textbf{p}^*$ different from the result $\textbf{p}_0$ obtained by minimizing the potential $\mathcal{V}[\textbf{p}]$, as given in Eq. \ref{minimamicropotenmodel}. It is only for the very special case of $a_i=b_i$, for example $a=1=b$, that the two approaches yield the same answer. Note that the expressions obtained for $\textbf{p}_0$ and $\textbf{p}^*$ are very different from the relation of market price and quantity obtained in Eq. \ref{pricquantgen}: in Eqs. \ref{minimamicropotenmodel} and \ref{demandsuppleq} the market price is obtained in terms of the parameters of the supply and demand function, whereas Eq. \ref{pricquantgen} yields the locus $\textbf{p}^*(\textbf{q}^*)$ -- namely, the market price of commodities and the quantity of these commodities sold on the market. \subsection{Potential versus supply and demand} The concept of a potential carries more information about prices than the intersection of the supply and demand curve. The following are some of the reasons.
\begin{itemize} \item Figure \ref{microsupplydemanddiag3}(b) shows two surfaces, namely the supply and demand surfaces, whereas, in contrast, Figure \ref{micrpotdiag} shows only a single surface. Both figures can, in principle, be used for determining the stationary prices. But, as can be seen by comparing Eqs. \ref{minimamicropotenmodel} and \ref{demandsuppleq}, these prices are quite different. \item Equating the supply and demand functions considered separately yields only a single `market' price, with no information about the possible variations about the market price. In contrast, since the market price is the minimum of the potential, the potential also contains information about the commodity prices in the \textit{neighborhood} of the market prices $\textbf{p}_0$, as well as commodity prices that are far from the market prices. \item The statistical variation of market prices needs information on how demand and supply compete to set prices far from equilibrium, since all values of prices are allowed in computing the expected value of observed market prices. The potential, by combining supply and demand into one function, models the competing influences that supply and demand have on commodity prices. \item Together with the kinetic term for prices, discussed in the next Section, the potential plays a central role in determining the statistical evolution of commodity prices near -- as well as far from -- its average value. \end{itemize} \section{Microeconomic kinetic term} The dynamics of market prices is encoded in the kinetic component $\mathcal{T}[\textbf{p}]$ of the Lagrangian given in Eq. \ref{defpot}. Market prices undergo a dynamical evolution and hence depend on time, namely $p_i=p_i(t)$. Furthermore, similar to the microeconomic potential, the kinetic component is \textit{assumed} to depend linearly on the budget $m$ -- since one expects no dynamics for a market in which the consumer has no buying power.
The linear dependence of $\mathcal{T}[\textbf{p}]$ on the budget $m$ is taken for simplicity and can be generalized. A detailed empirical study of both interest rates \cite{bebcup2} and of equity prices \cite{bebcyeqt} shows that the Lagrangian depends on both the \textit{velocity} and the \textit{acceleration} of the underlying security. Since the markets for commodities are connected to the financial and capital markets, one expects that the dynamics of commodities should have the same behavior as interest rates and equities. Since prices are always positive, consider the exponential parametrization \begin{eqnarray} \label{defexpvar} p_i(t)=p_{0}e^{x_i(t)}~~;~~-\infty \le x_i \le +\infty \end{eqnarray} where $p_0$ is a constant quantity with the dimension of \$. The kinetic term for the market prices of commodities, in tandem with the capital and debt markets, is taken to be the following \begin{eqnarray} \label{microkinetic} &&\mathcal{T}[\textbf{p}] =\frac{m}{2}\sum_{i,j=1}^N \left[L_{ij}\frac{\partial^2 x_i}{\partial t^2}\frac{\partial^2 x_j}{\partial t^2}+\tilde{L}_{ij}\frac{\partial x_i}{\partial t}\frac{\partial x_j}{\partial t}\right] \end{eqnarray} The form of the time dependence given in Eq. \ref{microkinetic} yields many features for the dynamics of microeconomics that are not present in quantum and classical mechanics, for which the Lagrangian depends on only the particle's velocity. The kinetic term for market prices has the following cardinal properties. \begin{itemize} \item The kinetic term $\mathcal{T}[\textbf{p}]$ does not depend on $p_0$, but rather, depends only on the relative instantaneous changes in the price vector. \item The second order derivative term in the kinetic term requires \textit{four} boundary conditions to specify a classical solution of the Lagrangian \cite{cythesis}. \item The market prices undergo a correlated time evolution that is determined by the matrix of parameters given by $L_{ij}$ and $\tilde{L}_{ij}$.
\item The budget constraint influences the correlation of market prices, which in turn determines the uptake of commodities $q_i$ by the aggregate consumer. \end{itemize} \section{Microeconomic Feynman Path Integral} The Lagrangian determines the evolution of a dynamical system, and for market prices it represents all the factors determining their evolution. In particular, the interplay and competition of demand and supply with the `kinetic energy' of market prices is encoded in the Lagrangian. The Lagrangian, from Eq. \ref{microlagrang}, is given by the sum of the kinetic and potential factors and yields \begin{eqnarray*} &&\mathcal{L}(t) = \mathcal{T}[\textbf{p}(t)]+\mathcal{V}[\textbf{p}(t)] \end{eqnarray*} The action functional determines the dynamics (time evolution) of market prices and, from Eq. \ref{microaction}, is given by \begin{eqnarray*} \mathcal{A}[\textbf{p}]=\int_{-\infty}^{+\infty} dt\mathcal{L}(t)=\int_{-\infty}^{+\infty} dt \Big(\mathcal{T}[\textbf{p}(t)]+\mathcal{V}[\textbf{p}(t)]\Big) \end{eqnarray*} The model chosen for the potential and kinetic parts of the Lagrangian yields, from Eqs. \ref{micropotenmodel} and \ref{microkinetic}, the following \begin{eqnarray} \label{microlagrangmodel} \mathcal{L}(t) = \frac{m}{2}\sum_{i,j=1}^N \left[L_{ij}\frac{\partial^2 x_i}{\partial t^2}\frac{\partial^2 x_j}{\partial t^2}+\tilde{L}_{ij}\frac{\partial x_i}{\partial t}\frac{\partial x_j}{\partial t}\right]+\frac{m}{2}\sum_{i=1}^N\frac{d_i}{p_i^{a_i}} + \frac{m}{2}\sum_{i=1}^N s_ip_i^{b_i}~~ \end{eqnarray} The Lagrangian given in Eq. \ref{microlagrangmodel} is nonlinear, since prices (and quantities) are always positive -- and hence represented by exponential variables as in Eq. \ref{defexpvar}. For the case of a single commodity, let the price be $p=p_0e^x$; the Lagrangian given in Eq.
\ref{microlagrangmodel} reduces to the following \begin{eqnarray} \label{microlagrangmodelonedoff} \mathcal{L}(t) = \frac{m}{2}\left[L\left(\frac{\partial^2 x}{\partial t^2}\right)^2+\tilde{L}\left(\frac{\partial x}{\partial t}\right)^2\right]+\frac{m}{2}\left[\frac{d}{p_0^a}e^{-ax} + sp_0^be^{bx}\right]~~;~~p=p_0e^x>0 \end{eqnarray} The stochastic processes driving the market prices are modeled in analogy with statistical mechanics, for which the particles' deterministic positions and velocities are generalized to random positions and velocities. Similarly, in statistical microeconomics, it is postulated that all prices are random variables; the \textbf{joint probability distribution} for the market prices to have a particular evolution $\{\textbf{p}(t): -\infty \le t \le +\infty\}$, given in Eq. \ref{boltzprop}, has the following properly normalized form \begin{eqnarray} \label{micropdf} &&\frac{e^{-\mathcal{A}[\textbf{p}]}}{Z} \end{eqnarray} Note that the statistical weight provided by $\exp\{-\mathcal{A}[\textbf{p}]\}/Z$ determines which random histories of prices are important and which are not. The normalization $Z$ is given by the Feynman path integral \begin{eqnarray} \label{micropartion} &&Z=\prod_{i=1}^{N}\prod_{t=-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{dp_i(t)}{p_i(t)}e^{-\mathcal{A}[\textbf{p}]}\equiv \int \frac{Dp}{p} ~e^{-\mathcal{A}[\textbf{p}]}~~:~~\text{Feynman path integral} \end{eqnarray} The correlation function of market prices is given by the expectation value of the product of prices, computed by summing over all possible histories of market prices using the path integral, and is given by the following \begin{eqnarray} \label{microcorrefns} &&E[p_{i_1}(t_1)p_{i_2}(t_2)..p_{i_N}(t_N)]=\frac{1}{Z}\int \frac{Dp}{p} e^{-\mathcal{A}[\textbf{p}]}p_{i_1}(t_1)p_{i_2}(t_2)..p_{i_N}(t_N) \end{eqnarray} The path integral, for the exponential variables $x_i$ defined in Eq.
\ref{defexpvar}, is given by \begin{eqnarray} \label{micropartionxx} &&Z=\prod_{i=1}^{N}\prod_{t=-\infty}^{+\infty}\int_{-\infty}^{+\infty}dx_i(t)e^{-\mathcal{A}[p_{0},x_i]} \equiv \int DX e^{-\mathcal{A}[p_{0}e^{x_i}]} \end{eqnarray} with the correlations given by \begin{eqnarray*} &&E[p_{i_1}(t_1)p_{i_2}(t_2)..p_{i_N}(t_N)]=\frac{p_0^N}{Z}\int DX e^{-\mathcal{A}[p_{0},x_i]}e^{x_{i_1}(t_1)}e^{x_{i_2}(t_2)}..e^{x_{i_N}(t_N)} \end{eqnarray*} Note that the expression for $Z$ given in Eq. \ref{micropartionxx} is \textit{exact} and no approximation has been made; rather, a change of variables has been made from $p_i$ to $x_i$, where the new set of variables is more suitable for the perturbative study of the microeconomic path integral. The path integral given in Eq. \ref{microcorrefns} is nonlinear and nontrivial. The path integral can be studied numerically using Monte Carlo and other well known methods. In many cases, the numerical approach is necessary for studying non-perturbative features that are inaccessible to other methods. \section{Perturbation expansion} Analytic computation of the path integral is one of the standard methods for understanding the main qualitative features of a system represented by a path integral. Given that the path integral for prices is nonlinear, an exact solution is close to impossible; the best that one can do is to develop an approximation scheme, with the standard approach being a perturbation expansion in some small parameter -- with successive terms in the expansion yielding more and more accurate results. In the action functional $\mathcal{A}[\textbf{p}]$, all the parameters were scaled so that the inverse of the total budget, namely $1/m$, provides a small expansion parameter, for the following reason. For the case of $m>>1$, the path integral given in Eq.
\ref{microcorrefns} is dominated by the values of $p(t)$ for which the integrand $\exp\{-\mathcal{A}[\textbf{p}]\}$ is a \textit{maximum}, or equivalently, for which $\mathcal{A}[\textbf{p}]$ is a minimum. In particular, the path integral can be expanded as a power series in $1/m$, and this generates an expansion in powers of $1/m$ for all quantities of interest. The inverse of the budget $1/m$ behaves like Planck's constant of quantum mechanics, and a power series expansion in terms of $1/m$ is called a semi-classical expansion. The path (historical evolution) of $p(t)$ that minimizes $\mathcal{A}[\textbf{p}]$ is given by\footnote{In the context of quantum mechanics, the path of the prices that minimizes the action, namely $p_c(t)$, is called the classical solution.} \begin{eqnarray} \label{classical} &&\frac{\delta \mathcal{A}[\textbf{p}_c]}{\delta p_i(t)}\equiv \frac{\delta \mathcal{A}[\textbf{p}]}{\delta p_i(t)}\Big{|}_{(\textbf{p}(t)=\textbf{p}_c(t))}=0 \end{eqnarray} Non-linear actions, such as the one for prices given in Eq. \ref{microlagrangmodel}, can have a minimum value for solutions $\textbf{p}_c(t)$ that have non-trivial time dependence, and are called kinks or instantons. Kinks are time dependent solutions of Eq. \ref{classical} that connect nontrivial initial and final boundary conditions. For simplicity, consider the case where the prices $\textbf{p}_c(t)$ that minimize the action are \textit{time independent} (constant). For time independent $\textbf{p}_c$, the action functional $\mathcal{A}[\textbf{p}_c]$ is equal to the time integral of the potential $\mathcal{V}[\textbf{p}_c]$ and the minimum of the action functional is given by the minimum of the potential $\mathcal{V}[\textbf{p}_c]$.
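As noted above, the path integral can also be studied numerically by Monte Carlo. The sketch below discretizes the single-commodity action (keeping, for simplicity, only the velocity part of the kinetic term, with periodic boundary conditions and $p_0=1$) and samples price paths with the Metropolis algorithm; the lattice size, number of sweeps, and all parameter values are illustrative assumptions. The sampled paths should fluctuate about the constant classical solution $\bar{x}$.

```python
import math, random

# illustrative (assumed) parameters: budget m, potential indices/coefficients
m, a, b, d, s = 50.0, 1.0, 1.0, 2.0, 1.0
Lt, T, eps = 0.1, 12, 1.0               # kinetic coefficient, time slices, time step
xbar = math.log((a*d/(b*s))**(1.0/(a + b)))   # minimum of the potential (p_0 = 1)

def action(x):
    """Discretized Euclidean action: velocity term plus the model potential."""
    A = 0.0
    for t in range(T):
        v = (x[(t + 1) % T] - x[t])/eps        # periodic boundary conditions
        A += eps*(m/2.0)*(Lt*v*v + d*math.exp(-a*x[t]) + s*math.exp(b*x[t]))
    return A

random.seed(7)
x = [0.0]*T
A = action(x)
means = []
for sweep in range(1500):
    for t in range(T):                          # Metropolis update of each time slice
        old = x[t]
        x[t] = old + random.uniform(-0.15, 0.15)
        Anew = action(x)
        if Anew < A or random.random() < math.exp(A - Anew):
            A = Anew                            # accept the proposed move
        else:
            x[t] = old                          # reject and restore
    if sweep >= 300:                            # discard burn-in sweeps
        means.append(sum(x)/T)
mean_x = sum(means)/len(means)
# the sampled paths fluctuate about the constant classical solution xbar
assert abs(mean_x - xbar) < 0.15
```

With $a=b$ the potential is symmetric about $\bar{x}$, so the sample mean should agree with $\bar{x}$ up to Monte Carlo error.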
As shown in Figure \ref{micrpotdiag}, the microeconomic potential $\mathcal{V}[\textbf{p}]$ has a \textit{minimum} at $\textbf{p}_c(t)=\textbf{p}_0$; hence, $\exp\{-\mathcal{V}[\textbf{p}]\}$ has a \textit{maximum} for the value of market prices $\textbf{p}_c(t)=\textbf{p}_0$, as shown in Figure \ref{micropotmax}. Hence, similar to Figure \ref{micropotmax}, $\exp\{-\mathcal{A}[\textbf{p}]\}$ -- the integrand of the path integral -- has a maximum where the potential has a minimum, namely at $\textbf{p}_c(t)=\textbf{p}_0$. \begin{figure}[h] \centering \epsfig{file=expV.eps, height=10cm, angle=0} \caption{The function $\exp\{-\mathcal{V}[\textbf{p}]\}$ near the minimum value of the microeconomic potential.} \label{micropotmax} \end{figure} A perturbative expansion of the path integral is based on the statistical fluctuations of the price vector in the neighborhood of the minimum value of the microeconomic potential, namely in the neighborhood of $\textbf{p}=\textbf{p}_0$. In terms of the exponential variables given in Eq. \ref{defexpvar}, the minimum of the microeconomic potential is given by the value $\bar{x}_i$; the minimum $\textbf{p}_0$ given by Eq. \ref{minimamicropotenmodel} yields the following \begin{eqnarray} \label{pox} &&p_i=p_{0}e^{x_i}~~;~~p_{0i}=p_0e^{\bar{x}_i}=\left(\frac{a_id_i}{b_is_i}\right)^{1/(a_i+b_i)} \end{eqnarray} By expanding the path integral about the value of $\bar{x}_i$ as given in Eq. \ref{pox}, the path integral in Eq. \ref{micropartionxx} yields a perturbation expansion in terms of $1/m$, for the following reason. Under normal market conditions it is expected that prices have \textit{small fluctuations} about the minimum value of the action $\mathcal{A}[\bar{x}_i]$. In fact, it is shown later in Eq.
\ref{avgx2} that, near the minimum of the action given by $\mathcal{A}[\bar{x}_i]$, the magnitude of the integration variables is given by \begin{eqnarray} \label{flucm} x_i=\bar{x}_i+O(\sqrt{\frac{1}{m}}) \end{eqnarray} The result given in Eq. \ref{flucm} is intuitively the following: for a consumer with a large budget $m$, prices are very near $p_0\exp\{\bar{x}_i\}$ and fluctuate very little since the large budget allows the consumer to buy any commodity he or she wishes. However, as the budget becomes smaller and smaller, the fluctuations in the prices become larger and larger since the consumer now has to make a choice, buying some commodities and foregoing others; this leads to large changes in the uptake of different commodities and hence introduces large random variations in the prices. Make a functional change of variables from $x_i(t)$ to $y_i(t)$ \begin{eqnarray} \label{chngxtoy} x_i(t)=\bar{x}_i+y_i(t) \end{eqnarray} The path integral measure is invariant under a shift, so that $DX=DY$; hence, Eq. \ref{micropartionxx} yields the path integral \begin{eqnarray} \label{zpathy} &&Z=\int DY e^{-\mathcal{A}[\bar{x}_i+y_i(t)]} \end{eqnarray} The path integral given in Eq. \ref{zpathy} allows for an expansion of the action $\mathcal{A}[\bar{x}_i+y_i(t)]$ in a Taylor power series of $y_i$ about the minimum of $\mathcal{A}$ at $\bar{x}_i$ and will be shown to yield a convergent expansion of all the correlation functions in powers of $1/m$. In the expansion of $\mathcal{A}[\bar{x}_i+y_i(t)]$ about its minimum value $\mathcal{A}[\bar{x}_i]$, there are no terms that are linear in $y_i$ due to Eq. \ref{classical}; the first term is a constant and the next leading term is quadratic in $y_i$, with the remaining terms in the expansion of the action all having powers that are $y_i^3$ and higher.
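The scaling in Eq. \ref{flucm} can be illustrated with a toy Monte Carlo computation; the single-variable Gaussian weight and the value $\gamma=1$ below are illustrative assumptions, not part of the model:

```python
import numpy as np

# Toy illustration of Eq. (flucm): for a single variable with Gaussian weight
# exp(-m*gamma*y^2/2), the typical fluctuation about the minimum scales as
# 1/sqrt(m*gamma). The value gamma = 1 is an illustrative assumption.
def typical_fluctuation(m, gamma=1.0, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, 1.0 / np.sqrt(m * gamma), size=n_samples)
    return np.sqrt(np.mean(y ** 2))  # empirical sqrt(E[y^2])

for m in (10, 100, 1000):
    print(m, typical_fluctuation(m))
```

Increasing the budget $m$ by a factor of $100$ shrinks the typical fluctuation by a factor of $10$, in line with the discussion above.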
Hence, the action functional has the following expansion \begin{eqnarray} \label{actionexpans} &&\mathcal{A}[p_0; x_i]=\mathcal{A}_1[\bar{x}]+\mathcal{A}_2[\bar{x};y^2]+\mathcal{A}_I[\bar{x};y^3] \end{eqnarray} with \begin{eqnarray*} &&\mathcal{A}_1[\bar{x}]:~~~~~\text{Constant independent of}~y_i\\ &&\mathcal{A}_2[\bar{x};y^2]:~\text{Quadratic function of}~y_i\\ &&\mathcal{A}_I[\bar{x};y^3]:~\text{Cubic and higher order function of}~y_i \end{eqnarray*} The integration variables $y_i(t)$ are of $O(\sqrt{\frac{1}{m}})$ and hence the successive terms in the expansion of the action $\mathcal{A}[p_0; x_i]$ in Eq. \ref{actionexpans} are of smaller and smaller magnitude. Note that the expansion of the action about $\mathcal{A}[\bar{x}]$ is valid \textit{only} for $m>>1$; for $m\le 1$, the perturbative approach is invalid since there is no longer any sharp and well localized domain of the path integral that gives the dominant contribution. If the budget becomes small, such that $m\simeq O(1)$, the statistical fluctuations in the prices $\textbf{y}(t)$ become large and the perturbation expansion becomes invalid. Of course, the path integral given in Eq. \ref{micropartion} is well defined and convergent for all $m>0$. For the case when $m\simeq 1$, the path integral has to be studied using non-perturbative techniques, which include numerically evaluating the path integral. To illustrate the expansion of the path integral, consider the partition function given in Eq. \ref{micropartionxx}.
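A one-variable toy version of this expansion can be carried out explicitly. In the sketch below, a quartic interaction $g\,y^4$ stands in for the higher-order terms $\mathcal{A}_I$; the value of $g$ and the integration grid are illustrative assumptions:

```python
import numpy as np

# One-variable toy analogue of the 1/m expansion of the partition function.
# A quartic interaction g*y^4 stands in for the higher-order terms A_I:
#   Z(m) = \int exp(-m*(y^2/2 + g*y^4)) dy
#        = sqrt(2*pi/m) * (1 - 3*g/m + O(1/m^2))
def Z_numeric(m, g=0.1):
    y = np.linspace(-10.0, 10.0, 400_001)
    f = np.exp(-m * (0.5 * y ** 2 + g * y ** 4))
    return np.sum(f) * (y[1] - y[0])

def Z_first_order(m, g=0.1):
    return np.sqrt(2.0 * np.pi / m) * (1.0 - 3.0 * g / m)

for m in (5, 20, 100):
    zn = Z_numeric(m)
    print(m, abs(zn - Z_first_order(m)) / zn)
```

The relative error of the truncated series falls roughly like $1/m^2$, while for $m\simeq 1$ the truncation degrades, in line with the remarks above on the validity of the expansion.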
Expanding the action functional as given above yields the following expansion for the partition function \begin{eqnarray} &&Z=\int \frac{Dp}{p}e^{-\mathcal{A}[\textbf{p}]}=e^{-\mathcal{A}_1[\bar{x}]}\int DYe^{-\mathcal{A}_2[\bar{x};y^2]-\mathcal{A}_I[\bar{x};y^3]}\nonumber\\ &&~~~=e^{-\mathcal{A}_1[\bar{x}]}\int DYe^{-\mathcal{A}_2[\bar{x};y^2]}\left[1-\mathcal{A}_I[\bar{x};y^3]+\frac{1}{2!}\mathcal{A}_I^2[\bar{x};y^3]+...\right]\nonumber\\ \label{expanpartion} &&~~~=z_0+\frac{1}{m}z_1+\frac{1}{m^2}z_2+... \end{eqnarray} \section{Expansion of microeconomic potential} Note that the minimum of the potential chosen in Eq. \ref{micropotenmodel} fixes the price of all $N$ commodities. Writing the potential in terms of variables defined in Eq. \ref{pox} that are appropriate for studying the action functional near its minimum \begin{eqnarray} \label{poxy} &&p_i=p_{0}e^{\bar{x}_i+y_i} \end{eqnarray} yields the following \begin{eqnarray} \label{micropotenmodelmin} \mathcal{V}[\textbf{p}] &=&\frac{m}{2}\sum_{i=1}^N\left[\frac{d_i}{p_i^{a_i}} + s_ip_i^{b_i}\right]~;~~a_i,b_i,d_i,s_i>0\nonumber \\ \mathcal{V}[\bar{x}; y]&=& \frac{m}{2}\sum_{i=1}^N \left[\frac{d_i}{p_0^{a_i}}e^{-a_i\bar{x}_i}e^{-a_iy_i}+s_ip_0^{b_i}e^{b_i\bar{x}_i}e^{b_iy_i}\right] \end{eqnarray} Note Eq. \ref{micropotenmodelmin} is an exact expression for the potential $\mathcal{V}[\textbf{p}]$. Expanding the potential given in Eq.
\ref{micropotenmodelmin} as a power series in $y_i$ yields the following \begin{eqnarray} \label{micropotenquadra} && \mathcal{V}[\bar{x}; y] \simeq \mathcal{V}_0+\frac{m}{2}\sum_{i=1}^N \gamma_i y_i^2+O(y^3)\\ && \gamma_i=\frac{1}{2}\left[\frac{a_i^2d_i}{p_0^{a_i}}e^{-a_i\bar{x}_i}+b^2_is_ip_0^{b_i}e^{b_i\bar{x}_i}\right]\nonumber \end{eqnarray} with the constant value of the potential given by \begin{eqnarray} &&\mathcal{V}_0= \frac{m}{2}\sum_{i=1}^N \left[\frac{d_i}{p_0^{a_i}}e^{-a_i\bar{x}_i}+s_ip_0^{b_i}e^{b_i\bar{x}_i}\right]\nonumber \end{eqnarray} Note that there is no linear dependence on $y_i$ in the expansion in Eq. \ref{micropotenquadra} since the minimum condition, given in Eq. \ref{minimamicropotenmodel}, ensures this to be the case. Since $\bar{x}$ is a constant, Eqs. \ref{actionexpans} and \ref{micropotenquadra} yield \begin{eqnarray*} \mathcal{A}_1[\bar{x}]&=&\mathcal{V}_0\int_{-\infty}^{+\infty}dt=\text{constant independent of~}y \end{eqnarray*} \section{Model of the kinetic term} For simplicity and tractability, a special choice of the coupling of the first and second order time derivatives is made that, in matrix notation, is given by \begin{eqnarray} \label{couplingmatrix} &&L=D^T\text{diag}(\alpha_1,\alpha_2, ..,\alpha_N) D~~;~~\tilde{L}=D^T\text{diag}(\beta_1,\beta_2, ..,\beta_N) D~~;~~ DD^T=\mathcal{I}~~~~~~ \end{eqnarray} The kinetic piece of the action functional in Eq. \ref{microaction} is given by the time integral of $\mathcal{T}[\textbf{p}]$, namely \begin{eqnarray} \label{microkineticaction} &&\int_{-\infty}^{+\infty}dt\, \mathcal{T}[\textbf{p}(t)] \end{eqnarray} To express the kinetic piece in terms of $x=\bar{x}+y$, we perform an integration by parts for the derivative terms in $\mathcal{T}[\textbf{p}]$ in Eq.
\ref{microkineticaction}, and setting all the boundary terms to zero, obtain the following \begin{eqnarray} \label{microlagrangexct} &&~~~~~~~~~~~~~~~~~~~~~~~\int_{-\infty}^{+\infty}dt\, \mathcal{T}[\textbf{p}(t)]\nonumber\\ &&=\frac{m}{2}\sum_{i,j,k=1}^N \int_{-\infty}^{+\infty}dt\,y_i(t)D^T_{ij}\Big(\alpha_j\frac{\partial^4}{\partial t^4}-\beta_j\frac{\partial^2 }{\partial t^2}\Big)D_{jk}y_k(t) \end{eqnarray} Note that Eq. \ref{couplingmatrix} has been used to obtain Eq. \ref{microlagrangexct}. \section{Gaussian path integration: propagator}\label{sec:pertexpan} As illustrated in Eq. \ref{expanpartion}, $1/m$ provides a small expansion parameter for the path integral. It should be noted that the exact path integral given in Eq. \ref{micropartionxx} has a Gaussian expansion only for $m>>1$; the reason being that only in this case is the quadratic piece of the action functional $\mathcal{A}_2[\bar{x};y^2]$ the dominant part in the full expansion of the action functional as a power series in $y_i$. This Section evaluates the propagator for the prices, which is a central ingredient of the $1/m$ expansion. Expanding the action around the minimum to terms of order $y^2$ yields, from Eqs.
\ref{actionexpans} and \ref{micropotenquadra} and \ref{microkinetic}, the following \begin{eqnarray} \label{microlagrang5} &&\mathcal{A}_2[\textbf{p}_{0};y^2] =\frac{m}{2}\sum_{i,j,k=1}^N \int_{-\infty}^{+\infty}dt\,y_i(t)\left[D^T_{ij}\Big(\alpha_j\frac{\partial^4}{\partial t^4}-\beta_j\frac{\partial^2 }{\partial t^2}\Big)D_{jk}\right]y_k(t)\nonumber\\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\frac{m}{2}\sum_{i=1}^N \int_{-\infty}^{+\infty}dt\,\gamma_iy_i^2(t)\nonumber\\ &&\Rightarrow \mathcal{A}_2[\textbf{p}_{0};y^2] \equiv \frac{1}{2}\sum_{i,k=1}^N \int_{-\infty}^{+\infty}dt\,dt'\,y_i(t)G^{-1}_{ik}(t,t')y_k(t') \end{eqnarray} where the inverse of the propagator is given by \begin{eqnarray} \label{definvprop} &&~~~G^{-1}_{ik}(t,t')=m\sum_{j=1}^N \left[D^T_{ij}\Big(\alpha_j\frac{\partial^4}{\partial t^4}-\beta_j\frac{\partial^2 }{\partial t^2}\Big)D_{jk}+ \delta_{j-k}\delta_{i-k}\gamma_k\right]\delta(t-t')~ \end{eqnarray} The propagator $G_{ij}(t,t')$ is given by \begin{eqnarray} \label{propdef} &&~~~\int_{-\infty}^{+\infty}d\xi \sum_{k=1}^N G^{-1}_{ik}(t,\xi)G_{kj}(\xi,t')=\delta_{i-j}\delta(t-t')~\\ && \Rightarrow~~G_{ij}(t,t')= O\Big(\frac{1}{m}\Big) \end{eqnarray} The propagator $G_{ij}(t,t')$ has been explicitly worked out in \cite{bebcup2,cythesis} and applied to the study of equity prices in \cite{bebcyeqt}. For the quadratic action given in Eq.
\ref{microlagrang5}, Gaussian path integration yields the following generating functional \begin{eqnarray} \label{gausspint} Z[h]&=&\frac{1}{Z}\int DY \exp\left\{-\mathcal{A}_2[\textbf{p}_{0};y^2]+\sum_i\int dt h_i(t)y_i(t)\right\} \nonumber\\ &=&\frac{1}{Z}\int DY \exp\left\{- \frac{1}{2}\sum_{i,k=1}^N \int_{-\infty}^{+\infty}dt\,dt'\,y_i(t)G^{-1}_{ik}(t,t')y_k(t')+\sum_i\int dt h_i(t)y_i(t)\right\} \nonumber\\ &=&\exp\left\{\frac{1}{2}\sum_{ij}\int dt\,dt'\, h_i(t)G_{ij}(t,t')h_j(t')\right\} \end{eqnarray} To $O(y^2)$, the expectation value of market prices is given by the following \begin{eqnarray} \label{microcoavgpr} E[p_i(t)]&=&E[p_{0}e^{x_i(t)}]=\frac{p_{0}}{Z}\int DY e^{-\mathcal{A}[\bar{x}+y(t)]}e^{\bar{x}_i+y_i(t)} \nonumber \\ &\simeq&\frac{p_{0}e^{\bar{x}_i}}{Z}\int DY e^{-\mathcal{A}_2[\bar{x};y^2]}e^{y_i(t)} \end{eqnarray} Since $ p_{0i}=p_{0}e^{\bar{x}_i}$, using the rules of Gaussian path integration given in Eq. \ref{gausspint} yields, from Eq. \ref{microcoavgpr}, the following \begin{eqnarray} \label{microcoavgpr5} E[p_i(t)]&\simeq& p_{0i}e^{G_{ii}(t,t)/2} \end{eqnarray} All the parameters of the model are constants and hence \begin{eqnarray} \label{proptimeinv} G_{ij}(t,t')=G_{ij}(t-t') \end{eqnarray} From Eqs. \ref{microcoavgpr5} and \ref{proptimeinv}, the average price of the $i$th commodity is hence given by \begin{eqnarray} \label{microcoavgprconstant} E[p_i(t)]&\simeq&p_{0i}e^{G_{ii}(0)/2}=p_{0i}+O(\frac{1}{m}):~~\text{constant} \end{eqnarray} Note, as stated earlier in Eq. \ref{micropotmin}, $p_{0i}$ -- to leading order in $1/m$ -- is the average value of the market price and is a constant. To $O(1/m^2)$, the \textit{propagator} (correlation function of prices at two different times), using the rules of Gaussian path integration given in Eq.
\ref{gausspint}, yields the following \begin{eqnarray} \label{microcorreprop} E[\ln(\frac{p_i(t)}{p_{0i}})\ln(\frac{p_j(t')}{p_{0j}})]&=&\frac{1}{Z}\int DY e^{-\mathcal{A}[\bar{x}+y]}y_i(t)y_j(t') \nonumber\\ &\simeq&\frac{1}{Z}\int DY e^{-\mathcal{A}_2[\bar{x};y^2]}y_i(t)y_j(t') \nonumber\\ &=&G_{ij}(t,t')+O(1/m^2) \end{eqnarray} where $G_{ij}(t,t')$ is given in Eq. \ref{propdef}. Eq. \ref{microcorreprop} yields the following special case \begin{eqnarray} E[y_i(t)^2]&=&G_{ii}(0)=O(\frac{1}{m})\nonumber \end{eqnarray} Hence, the average range of the integration variables where the integrand $e^{-\mathcal{A}_2[\bar{x};y^2]}$ has significant values is given by \begin{eqnarray} \label{avgx2} y_i&=&O(\sqrt{E[y_i(t)^2]})=O(\sqrt{\frac{1}{m}}) \end{eqnarray} Eq. \ref{avgx2} shows that the average magnitude of the fluctuations of $y_i^2$ is $O(1/m)$. This is the reason that a perturbation expansion can be generated for all the correlation functions of $p_i$ in increasing powers of $O(1/m)$ (by expanding, in a power series, all the terms in the action functional $\mathcal{A}$ that are of $O(y^3)$ and higher) and leads to an expansion of the path integral as given in Eq. \ref{expanpartion}. The rules of Gaussian path integration, using the technique of Feynman diagrams, yield a perturbation expansion for all the correlation functions of commodity prices. In particular, the terms $z_0,z_1, ..$ in the expansion for the partition function $Z$ given in Eq. \ref{expanpartion} can all be evaluated using the rules of Gaussian path integration. \section{Model calibration and testing} Every observed market price is taken to be a random sample of the random price; hence the correlation functions of market prices are taken to be equal to the average values of market prices, which are empirically calculated by summing over the historical time series of market prices. To empirically test and calibrate the statistical microeconomic formulation, the left hand side of Eq.
\ref{microcorreprop} is computed using market data for prices. A best fit is then done for all the parameters of the model by using the right hand side of Eq. \ref{microcorreprop}. Market prices are taken as independent inputs to the model and, from Eq. \ref{minimamicropotenmodel}, are given by \begin{eqnarray*} p_{0i}=\left(\frac{a_id_i}{b_is_i}\right)^{1/(a_i+b_i)} \end{eqnarray*} Both indices $a_i$ and $b_i$ are taken as input, for example as obtained in \cite{gas1}. Hence, market prices fix the ratios $d_i/s_i$. The coefficient $\gamma_i$ and the correlation matrices $L_{ij}$ and $\tilde{L}_{ij}$ are fixed by empirically determining the propagator $G_{ij}(t-t')$. Empirically evaluating $\gamma_i^2=d_is_i$ then yields the values of $s_i$ and $d_i$. Prices and quantities $\textbf{p}_0$ and $\textbf{q}_0$, similar to those given in Eq. \ref{pricquant}, can be derived for statistical economics and yield \begin{eqnarray*} \textbf{p}_0=\textbf{p}_0(\beta,\textbf{d})~~;~~\textbf{q}_0=\textbf{q}_0(\beta,\textbf{d}) \end{eqnarray*} Taking the values $\textbf{p}_0$ and $\textbf{q}_0$ as \textit{input} fixes the parameters $\beta_i$, which in turn, together with the empirical values determined for $s_i$, yield the values for $\alpha_i$. Hence all the parameters for the model can be empirically determined. Recall that the model for the demand and supply of commodity prices was fairly simple, given by Eq. \ref{micropotenmodel} as follows \begin{eqnarray*} \mathcal{V}[\textbf{p}]=\frac{m}{2}\left[\sum_{i=1}^N\frac{d_i}{p_i^{a_i}} + \sum_{i=1}^N s_ip_i^{b_i}\right]~;~~a_i,b_i,d_i,s_i>0 \end{eqnarray*} The simple form was chosen for the potential $\mathcal{V}[\textbf{p}]$ so that the analysis for computing the propagator as well as the general technique for calibrating this model could be carried out explicitly.
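For a single decoupled mode, the propagator entering this calibration can be evaluated numerically. The sketch below assumes the diagonalized form of the kernel in Eq. \ref{definvprop}, for which $G(\omega)=1/\big(m(\alpha\omega^4+\beta\omega^2+\gamma)\big)$ in frequency space; the parameter values are illustrative:

```python
import numpy as np

# Equal-time propagator for one decoupled mode (illustrative parameters):
# in frequency space G(w) = 1/(m*(alpha*w**4 + beta*w**2 + gamma)), so
# G(t=0) = (1/2pi) * integral of G(w) dw, which is manifestly O(1/m).
def G0(m, alpha=1.0, beta=1.0, gamma=1.0):
    w = np.linspace(-100.0, 100.0, 400_001)
    Gw = 1.0 / (m * (alpha * w ** 4 + beta * w ** 2 + gamma))
    return np.sum(Gw) * (w[1] - w[0]) / (2.0 * np.pi)

print(G0(10), G0(100))  # the ratio is 10: G(0) scales like 1/m
```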
The precise form of the potential is not very important since, to leading order in $1/m$, the entire potential only contributes the parameter $\gamma_i$ to the value of the propagator. In fact, one can take the potential to be of the form \begin{eqnarray*} \mathcal{V}[\textbf{p}]=\frac{m}{2}\left[\sum_{i=1}^Nf(p_i) + \sum_{i=1}^N g(p_i)\right] \end{eqnarray*} Any empirical demand and supply functions $f(p_i)$ and $g(p_i)$, respectively, that yield a unique set of market prices $\textbf{p}_0$ are equally good for modeling the microeconomic potential. Such a general potential would, to leading order in $1/m$, result in a different relation of $\gamma_i$ to the model's parameters, with all other results -- including the form of the propagator -- remaining unchanged. \section{Summary} A statistical generalization of microeconomic modeling is proposed in this paper in which all commodity prices are considered to be stochastic processes. The demand and supply functions are interpreted as being components of a single underlying microeconomic potential, and the average market price, to lowest order, is given by minimizing the microeconomic potential. A simple model for both the demand and supply functions has been proposed so that a concrete analysis could be carried out. The utility function was evaluated from the demand function using the principle of duality. A Feynman path integral was defined for the random evolution of commodity prices and provides a theoretical framework for the study of commodity prices considered as stochastic processes. The choice for the microeconomic kinetic term $\mathcal{T}[\textbf{p}]$ is based on a detailed empirical study of equity markets; the form chosen for $\mathcal{T}[\textbf{p}]$ has been shown by empirical evidence to be very accurate for a wide range of equities \cite{bebcyeqt}.
The kinetic term driving the time dependence of commodity prices was proposed, in analogy with the behavior of equity prices, to be determined by the acceleration of commodity prices. This form of the kinetic energy leads to many new features not present in quantum mechanics. Furthermore, since commodities undergo a classical random evolution, many of the problems related to the lack of unitarity due to the acceleration term in the Lagrangian do not appear in microeconomics. The microeconomic potential term $\mathcal{V}[\textbf{p}]$ combines the demand and supply of commodities into a single entity and provides an entirely new perspective on the mode of competition between supply and demand. The microeconomic potential needs to be studied for the major commodities; as mentioned earlier, the empirical study of gasoline prices \cite{gas1} supports the form of the microeconomic potential chosen in this paper. The Lagrangian that combines the kinetic and potential terms for commodity prices shows the central role being played by the kinetic term; this term is absent in the standard treatments of microeconomic analysis that are focused almost solely on supply and demand. Of course, whether the kinetic term is in fact important in the dynamics of commodity prices is an empirical question and needs to be further studied. A well defined perturbation expansion about the minimum of the potential was defined and the propagator was explicitly evaluated. The expansion of the path integral in terms of the inverse of the budget constraint is valid only for a large budget; if the budget becomes small, the statistical fluctuations become large and numerical methods are then necessary for evaluating the path integral. The expansion of market prices and its correlators in a power series in the inverse of the total budget, which has been introduced in this paper, needs to be studied empirically to ascertain whether in fact market data provides evidence of such an expansion.
The calibration and testing of the proposed statistical model of microeconomics is based on comparing the model's prediction with the empirical values of market prices as well as by comparing the model's propagator (unequal time correlation function) of market prices with the empirical propagator obtained from market data. \section{Acknowledgment} I am deeply indebted to Emmanuel Haven for having introduced me to the subject of Microeconomics, and I thank him for many useful, stimulating and enjoyable discussions and for a careful reading of a draft of this paper. I thank Arzish Baaquie for a careful reading of the paper and making many valuable suggestions. I thank the School of Management, University of Leicester, for their warm hospitality during my sabbatical visit in 2011, where the bulk of the work of this study was carried out. \section{Appendix: Utility function} The model considered in this paper starts with the demand function since the main focus is on market prices. For many theoretical studies in micro- and macro-economics the utility function plays a central role. A model for the utility function that could prove useful in such studies is the following \begin{eqnarray} \label{microutiltymodl} \mathcal{U}&=&\frac{1}{2}\sum_{ij=1}^Nq_iM_{ij}q_j+\sum_{i=1}^Nh_iq_i~~;~q_i>0\\ &\equiv&\frac{1}{2}qMq+hq \nonumber \end{eqnarray} where the last equation has been written in matrix notation. Taking $M_{ij}, h_i>0$ fulfills the requirement for utility functions given in Section \ref{sec:utility}. To obtain the demand function, the utility function is maximized, using a Lagrange multiplier $\zeta$, with the constraint that the budget is fulfilled, namely \begin{eqnarray} \label{utilitydemandconst5} \frac{\partial }{\partial q_i}\Big(\mathcal{U}[\textbf{q}]-\zeta\big(\sum_{j=1}^Np_jq_j-m\big)\Big)\Big{|}_{\textbf{q}=\bar{\textbf{q}}}=0~~;~~ \text{Constraint}:~\sum_{i=1}^Np_iq_i=m \end{eqnarray} Simultaneously solving the equations given in Eq.
\ref{utilitydemandconst5} yields the value of $\bar{\textbf{q}}$ that maximizes the utility function for a given budget, namely \begin{eqnarray*} \bar{\textbf{q}}=\bar{\textbf{q}}(\textbf{p},m)~~\Rightarrow~~\mathcal{D}[\textbf{p},m]=\mathcal{U}[\bar{\textbf{q}}(\textbf{p},m)] \end{eqnarray*} Using the technique employed in Section \ref{sec:demandutily} to obtain the utility function from the demand function, it can be shown that, in matrix notation \begin{eqnarray} \label{demandutiaquad} \bar{q}=M^{-1}(\zeta p-h)~~;~~\zeta=\frac{m+pM^{-1}h}{pM^{-1}p}\nonumber\\ \mathcal{D}[\textbf{p},m]=\frac{1}{2}\frac{(m+pM^{-1}h)^2}{pM^{-1}p}-\frac{1}{2}hM^{-1}h \end{eqnarray} The demand function derived in Eq. \ref{demandutiaquad} is not suitable for modeling the behavior of the market. When it is combined with the supply function to define the microeconomic potential $\mathcal{V}[\textbf{p},m]$, it can be shown that $\mathcal{D}[\textbf{p},m]$ given in Eq. \ref{demandutiaquad} does not result in a unique minimum for the potential and hence does not yield a set of unique average market prices. \bibliographystyle{is-unsrt}
\section{Introduction} The study of equations over algebraic structures has a long history in mathematics. Some of the first explicit decidability results in group theory are due to Makanin \cite{mak77}, who showed that equations over free groups are decidable. Subsequently several other decidability and undecidability results as well as complexity results on equations over infinite groups emerged (see \cite{DiekertE17icalpshort,GarretaMO20,LohSen06,Romankov79} for a random selection). For a fixed group $G$, the equation satisfiability problem $\EQNSAT$ is as follows: given an expression $\alpha \in (G\cup \cX \cup \cX^{-1})^*$ where $\cX$ is some set of variables, the question is whether there exists some assignment $\sigma: \cX \to G$ such that $\sigma(\alpha)=1$ (here $\sigma$ is extended to expressions in the natural way~-- $\cX^{-1}$ is a disjoint copy of $\cX$ representing the inverses of $\cX$). Likewise, \EQNID is the problem of deciding, given an expression, whether it evaluates to 1 under \emph{all} assignments. Henceforth, all groups we consider are finite. In this case, equation satisfiability and related questions are clearly decidable by an exhaustive search. Still the complexity is an interesting topic of research: its study was initiated by Goldmann and Russell \cite{GoldmannR02}, who showed that satisfiability of systems of equations can be decided in $\P$ if and only if the group is abelian (assuming $\P \neq \NP$)~-- otherwise, the problem is \NP-complete. They also obtained some results for single equations: $\EQNSAT$ is \NP-complete for non-solvable groups, while for nilpotent groups it is in \P. This left the case of solvable but non-nilpotent groups open. Indeed, Burris and Lawrence raised the question whether $\EQNID(G) \in \P$ for all finite solvable groups $G$ \cite[Problem 1]{BurrisL04}. Moreover, Horváth \cite{Horvath11} conjectured a positive answer.
\subparagraph*{Contribution.} In this work we give a negative answer to this question assuming the exponential time hypothesis by showing the following result: \theoremstyle{plain} \newtheorem{corollaryA}{Corollary} \renewcommand*{\thecorollaryA}{\Alph{corollaryA}} \begin{corollaryA}\label{cor:mainIntro} Let $G$ be a finite solvable group and assume that either \begin{itemize} \item the Fitting length of $G$ is at least four, or \item the Fitting length of $G$ is three and there is no Fitting-length-two normal subgroup whose index is a power of two. \end{itemize} Then $\EQNSAT(G)$ and $\EQNID(G)$ are not in \P under the exponential time hypothesis. \end{corollaryA} To the best of our knowledge, this constitutes the first hardness result for $\EQNSAT(G)$ and $\EQNID(G)$ if $G$ is solvable.\footnote{Recently (a preprint appeared only days after the submission of this paper), in \cite{IdziakKK20} Idziak, Kawa\l{}ek, and Krzaczkowski succeeded in showing that $\EQNSAT(S_4)$ is not in \P under the exponential time hypothesis ($S_4$ denotes the symmetric group over four elements). Moreover, they proved similar results as in this work for the case of algebras from congruence modular varieties. This complements our main result Corollary~\ref{cor:mainIntro}. Indeed, a joint paper proving a quasipolynomial lower bound on \EQNSAT and \EQNID for \emph{all} finite groups of Fitting length three can be found in \cite{IdziakKKW20arxiv}.} The Fitting length of a group $G$ is the minimal $d$ such that there is a sequence $1 = G_0 \Nleq \cdots \Nleq G_d = G$ with all quotients $G_{i+1} /G_i$ nilpotent. Moreover, we show that if $S$ is a semigroup with a group divisor (\ie a group which is a quotient of a subsemigroup of $S$) meeting the requirements of Corollary~\ref{cor:mainIntro}, $\EQNSAT(S)$ (here the input consists of two expressions) is also not in \P under the exponential time hypothesis.
Finally, using the same ideas as for our main result, we derive an upper bound of $2^{\Oh(n^{1/(d-1)})}$ for the length of the shortest $G$-program (definition see below) for the $n$-input \AND function in a finite solvable group of Fitting length $d \geq 2$. Notice that a corresponding $2^{n^{\Omega(1)}}$ lower bound would imply that $\EQNSAT(G)$ and $\EQNID(G)$ can be solved in quasipolynomial time for finite solvable groups $G$. \subparagraph*{General approach.} The complexity of \EQNSAT is closely related to the complexity of the satisfiability problem for $G$-programs (denoted by \ProgSAT~-- for a definition see \cref{sec:programs}). Indeed, \cite{BarringtonMMTT00} gives a reduction from \EQNSAT to \ProgSAT (be aware that, while the problems \EQNSAT and \ProgSAT are well-defined for finitely generated infinite groups, in general, such a reduction exists only in the case of finite groups). Moreover, also \ProgSAT is in \P for nilpotent groups and \NP-complete for non-solvable groups \cite{BarringtonST90}. In order to show hardness of these problems, one usually reduces some \NP-complete problem like \SAT or \KColoring{C} to them. Typically, this requires encoding large logical conjunctions into the group $G$. Therefore, the complexity of these problems is linked to the length of the shortest $G$-program for the \AND function. Indeed, \cite[Theorem 4]{BarringtonMMTT00} shows that, if the \AND function can be computed by a \P-uniform family of $G$-programs of polynomial length, then $\ProgSAT(G\wr C_k)$ for $k\geq 4$ is \NP-complete (here $C_k$ denotes the cyclic group of order $k$; \P-uniform means that the $n$-input $G$-program can be computed in time polynomial in $n$). Thus, if there exists a solvable group with efficiently computable polynomial-length $G$-programs for the \AND function, then there is a solvable group with an \NP-complete \ProgSAT problem.
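As a toy illustration of why commutators are a natural tool for building \AND-like behavior into group expressions (the group $S_3$ and the concrete elements below are our own illustrative choices, not a construction from the literature): a commutator $[a^{x},b^{y}]$ collapses to the identity as soon as one of the two input bits is $0$:

```python
# Illustrative 2-input AND gadget via a commutator, with permutations of
# {0,1,2} as tuples; compose(p, q) applies p first, then q.
def compose(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(x, y):  # [x,y] = x^{-1} y^{-1} x y
    return compose(compose(inverse(x), inverse(y)), compose(x, y))

E = (0, 1, 2)                    # identity of S_3
A, B = (1, 0, 2), (0, 2, 1)      # two non-commuting transpositions

def and_gadget(x_bit, y_bit):
    # substitute the identity for an element whenever its input bit is 0
    return comm(A if x_bit else E, B if y_bit else E) != E

print([and_gadget(x, y) for x in (0, 1) for y in (0, 1)])  # [False, False, False, True]
```

The gadget is non-trivial exactly when both bits are $1$; the actual constructions for solvable groups in the literature are, of course, considerably more involved.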
It is well-known that $G$-programs describe the circuit complexity class \CC \cite{McKenziePT91} with the depth of the circuit relating to the Fitting length of the group. One can make a depth size trade-off for the \AND function using a divide-and-conquer approach: Assume there is a circuit of depth two and size $2^n$ for the $n$-input \AND (which is the case by \cite{Barrington85}). Since the $n$-input \AND can be decomposed as $\sqrt{n}$-input \AND of $\sqrt{n}$ many $\sqrt{n}$-input \AND{}s, we obtain a \CC circuit of depth $4$ and size roughly $2^{\sqrt{n}}$. This observation plays a crucial role for our results: it allows us to reduce an $m$-edge \KColoring{C} instance to an equation of size roughly $2^{\sqrt{m}}$. We compare this to the exponential time hypothesis (ETH), which conjectures that $n$-variable \SAT cannot be solved in time $2^{o(n)}$. ETH implies that \KColoring{C} cannot be solved in time $2^{o(m)}$, which gives us a quasipolynomial lower bound on \EQNSAT and \EQNID. Notice that in the literature there are several other quasipolynomial lower bounds building on the exponential time hypothesis~-- see \cite{AaronsonIM14,BravermanKRW17,BravermanKW15} for some examples. \subparagraph*{Outline.} In \cref{sec:prelims}, we fix our notation and state some basic results on inducible and atomically universally definable subgroups. Some of these observations are well-known, while others, to the best of our knowledge, have not been stated explicitly. \Cref{sec:programs} gives a little excursion to the complexity of the \AND-function in terms of $G$-programs over finite solvable groups deriving an upper bound $2^{\Oh(n^{1/(d-1)})}$ if $d \geq 2$ is the Fitting length of $G$. \Cref{sec:reduction} and \cref{sec:consequences} are the main part of our paper: we reduce the \KColoring{C} problem to \EQNSAT and \EQNID. For the reduction, we need some special requirements on the group $G$. 
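The divide-and-conquer bookkeeping behind this trade-off can be sketched with a toy cost model; the assumption that a $k$-input \AND costs $2^k$ at the base level mirrors the depth-two circuit mentioned above, while the concrete accounting below is ours:

```python
import math

# Toy cost model: a k-input AND at the base level costs 2**k. Splitting an
# n-input AND (n a perfect square) into sqrt(n) blocks of sqrt(n) inputs
# costs sqrt(n)*2**sqrt(n) for the inner ANDs plus 2**sqrt(n) for the outer
# one -- i.e. 2**O(sqrt(n)) instead of 2**n, at the price of doubled depth.
def direct_cost(n):
    return 2 ** n

def split_cost(n):
    k = math.isqrt(n)
    return k * 2 ** k + 2 ** k

for n in (16, 64, 256):
    print(n, direct_cost(n), split_cost(n))
```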
In \cref{sec:consequences} we show that actually the requirements of Corollary~\ref{cor:mainIntro} are enough using the concept of inducible and atomically universally definable subgroups. Finally, in \cref{cor:semigroup} we examine consequences to \EQNSAT in semigroups. \subparagraph*{Related work on equations.} Since the work of Goldmann and Russell \cite{GoldmannR02} and Barrington et~al.\ \cite{BarringtonMMTT00}, a long list of literature has appeared investigating \EQNID and \EQNSAT in groups and other algebraic structures. In \cite{BurrisL04} it is shown that \EQNID is in \P for nilpotent groups as well as for dihedral groups $D_k$ where $k$ is odd. Horváth resp.\ Horváth and Szabó \cite{Horvath15,HorvathS06} extended these results by showing, among other results, the following: $\EQNSAT(G)$ is in $\P$ for $G = C_n \rtimes B$ with $B$ abelian, $n=p^k$ or $n=2p^k$ for some prime $p$ and \EQNID is in \P for semidirect products $G = C_{n_1} \rtimes (C_{n_2} \rtimes \cdots \rtimes ( C_{n_k} \rtimes(A\rtimes B)))$ with $A,B$ abelian (be aware that such a group is two-step solvable). Furthermore, in \cite{Foldvari17} it is proved that $\EQNSAT(G) \in \P$ for so-called semi-pattern groups. Finally, in \cite{FoldvariH19} Földvári and Horváth established that \EQNSAT is in \P for the semidirect product of a $p$-group and an abelian group and that \EQNID is in \P for the semidirect product of a nilpotent group with an abelian group. Notice that all these groups have in common that their Fitting length is at most two. In \cite{HorvathS11,HorvathS12} the \EQNSAT and \EQNID problems for generalized terms are introduced. Here a generalized term means an expression which may also use commutators or even more complicated terms inside the input expression. Using commutators is a more succinct representation, which allows for showing that \EQNSAT is \NP-complete and \EQNID is \coNP-complete in the alternating group $A_4$ \cite{HorvathS12}.
In \cite{Kompatscher19} this result is extended by showing that, with commutators and the generalized term $w(x,y_1,y_2,y_3) = x^8[x,y_1,y_2,y_3]$, \EQNSAT is \NP-complete and \EQNID is \coNP-complete for all non-nilpotent groups. There is also extensive literature on equations in other algebraic structures~-- for instance, \cite{AlmeidaVG09,BarringtonMMTT00,JacksonM06,Kisielewicz04,KlimaTT07,Klima09,Seif05,SeifS06,SzaboV04} in semigroups. We only mention two of them explicitly: \cite{Kisielewicz04} showed that identity checking (\EQNID without constants in the input) in semigroups is \coNP-complete. Moreover, among other results, \cite{AlmeidaVG09} reduces the identity checking problem in the direct product of maximal subgroups to identity checking in some semigroup. \section{Preliminaries}\label{sec:prelims} The set of words over some alphabet $\Sigma$ is denoted by $\Sigma^*$. The length of a word $w \in \Sigma^*$ is denoted by $\abs{w}$. We denote the interval of integers $\oneset{i, \dots, j}$ by $\interval{i}{j}$. \subparagraph*{Complexity.} We use standard notation from complexity theory. In several cases we use the notion of \AC many-one reductions (denoted by $\leq_{\mathrm{m}}^{\AC}$) meaning that the reducing function can be computed in \AC (\ie by a polynomial-size, constant-depth Boolean circuit). The reader unfamiliar with this terminology may think about logspace or polynomial time reductions. Also be aware that in order to obtain \AC many-one reductions in most cases we need the presence of a letter representing the group identity for padding reasons. \subparagraph*{Exponential time hypothesis.} The exponential time hypothesis (ETH) is the conjecture that there is some $\delta > 0$ such that every algorithm for $\SAT$ needs time $\Omega(2^{\delta n})$ in the worst case where $n$ is the number of variables of the given $\SAT$ instance.
By the sparsification lemma \cite[Thm.~1]{ImpagliazzoPZ01} this is equivalent to the existence of some $\epsilon > 0$ such that every algorithm for $\SAT$ needs time $\Omega(2^{\epsilon (m+n)})$ in the worst case where $m$ is the number of clauses of the given $\SAT$ instance (see also \cite[Thm.~14.4]{CyganFKLMPPS15}). In particular, under ETH there is no algorithm for \SAT running in time $2^{o(n+m)}$. \subparagraph*{\KColoring{C}.} A $C$-coloring for $C\in \N$ of a graph $\Gamma = (V,E)$ is a map $\chi:V \to \interval{1}{C}$. A coloring $\chi$ is called \emph{valid} if $\chi(u) \neq \chi(v)$ whenever $\oneset{u,v }\in E$. The problem \KColoring{C} is as follows: given an undirected graph $\Gamma = (V,E)$, the question is whether there is a valid $C$-coloring of $\Gamma$. The \KColoring{C} problem is one of the classical \NP-complete problems for $C\geq 3$. Moreover, by \cite[Thm.~14.6]{CyganFKLMPPS15}, \KColoring{3} cannot be solved in time $2^{o(\abs{V} + \abs{E})}$ unless ETH fails. Since \KColoring{3} can be reduced to \KColoring{C} for fixed $C \geq 3$ by introducing only a linear number of additional edges and a constant number of vertices, it follows for every $C\geq 3$ that also \KColoring{C} cannot be solved in time $2^{o(\abs{V} + \abs{E})}$ unless ETH fails. \subparagraph*{Commutators and Fitting series.} Throughout, we only consider finite groups $G$. We use notation similar to \cite{Robinson96book}. We write $[x,y] = x^{-1}y^{-1}xy$ for the commutator and $x^y = y^{-1}xy$ for the conjugation. Moreover, we write $[x_1, \dots, x_n] = [[x_1, \dots, x_{n-1}],x_n]$ for $n\geq 3$. As usual for subsets $X,Y \sse G$, we write $\gen{X}$ for the subgroup generated by $X$ and we define $[X,Y] = \genr{[x,y]}{x \in X, y \in Y}$ and $[X_1,\dots, X_k] = [[X_1,\dots,X_{k-1}], X_k]$ for $X_1,\dots, X_k \sse G$. 
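These bracketing and sign conventions are easy to get wrong. As a quick sanity check, here is a short Python sketch in $S_3$, with permutations encoded as tuples of images; all helper names (`mul`, `inv`, `conj`, `comm`, `left_normed`) are our own ad-hoc choices, not from any library:

```python
def mul(p, q):
    # Compose permutations left to right: first apply p, then q.
    return tuple(q[i] for i in p)

def inv(p):
    # Inverse permutation.
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj(x, y):
    # x^y = y^{-1} x y
    return mul(mul(inv(y), x), y)

def comm(x, y):
    # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(mul(inv(x), inv(y)), x), y)

def left_normed(xs):
    # [x_1, ..., x_n] = [[x_1, ..., x_{n-1}], x_n]
    acc = xs[0]
    for x in xs[1:]:
        acc = comm(acc, x)
    return acc

e = (0, 1, 2)   # identity of S_3
a = (1, 2, 0)   # the 3-cycle (0 1 2)
b = (1, 0, 2)   # the transposition (0 1)

assert comm(a, b) != e                         # a and b do not commute
assert comm(a, b) == mul(inv(a), conj(a, b))   # [x, y] = x^{-1} x^y
assert left_normed([a, b, b]) == comm(comm(a, b), b)
```

The second assertion checks the identity $[x,y] = x^{-1}x^y$, which is used repeatedly in the proofs below.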
In contrast, we write $[X,Y]_{\Set} = \set{[x,y]}{x \in X, y \in Y}$ (thus, $[X,Y] = \gen{[X,Y]_{\Set}}$) and $[X_1,\dots, X_k]_{\Set} = [[X_1,\dots,X_{k-1}]_{\Set}, X_k]_{\Set}$. Finally, we denote the set $\set{g^x}{x\in X}$ by $g^X$ (be aware that here we differ from \cite{Robinson96book}) and define $X^Y = \set{x^y}{x\in X ,y \in Y}$. \begin{lemma}\label{lem:setcommutator} If $X_i^G=X_i \sse G$ for $i= 1, \dots, k$, then \[ [\gen{X_1},\dots,\gen{ X_k}] = \gen{[X_1,\dots, X_k]_{\Set}}.\] \end{lemma} \begin{proof} By \cite[5.1.7]{Robinson96book}, we have $[\gen{X}, \gen{Y}] = \gen{[X, Y]^{\gen{X}\gen{Y}}}$ for arbitrary $X,Y \sse G$. Thus, if $X=X^G$ and $Y = Y^G$, we have $[\gen{X}, \gen{Y}] = [X, Y]$. We use this to show the lemma by induction: \begin{align*} [\gen{X_1},\dots,\gen{ X_k}] &= \big[[\gen{X_1},\dots,\gen{X_{k-1}}], \gen{ X_k}\big]\\ &=\big[ \gen{[X_1,\dots, X_{k-1}]_{\Set}}, \gen{ X_k}\big]\tag{by induction}\\ &=\big[ [X_1,\dots, X_{k-1}]_{\Set}, X_k\big] \tag{by \cite[5.1.7]{Robinson96book}}\\ &=\gen{[X_1,\dots, X_k]_{\Set}}\qedhere \end{align*} \end{proof} For $x,y \in G$, we write $\mcomm{x}{k}{y} = [x,\underbrace{y,\dots,y}_{k\text{ times}}]$ and likewise for $X,Y \sse G$, we write $\mcomm{X}{k}{Y} = [X,\underbrace{Y,\dots,Y}_{k\text{ times}}]$ and $\smcomm{k}{Y} = [\underbrace{Y,\dots,Y}_{k\text{ times}}]$ and analogously $\mcomm{X}{k}{Y}_{\Set} $ and $\smcomm{k}{Y}_{\Set}$. Since $G$ is finite, there is some $M = M(G) \in \N$ such that $\mcomm{X}{M}{Y} = \mcomm{X}{i}{Y}$ for all $i \geq M$ and all $X,Y\sse G$ with $X^G=X$ and $Y^G=Y$ (notice that $\mcomm{X}{i}{Y} \leq \mcomm{X}{j}{Y}$ for $j\leq i$ due to the normality of $[X,Y]$). It is clear that $M =\abs{G}$ is large enough, but typically much smaller values suffice. \begin{lemma}\label{lem:addGincomm} For all $X, Y \sse G$ with $X^G = X$ and $Y = Y^G$ we have $\fitcomm{X}{Y} = \fitcomm{[X,G]}{Y}$. \end{lemma} \begin{proof} We have $[X,G] \leq \gen{X}$ because $[x,g] = x^{-1} x^g \in \gen{X}$.
Thus, the inclusion from right to left follows. The other inclusion holds because $\fitcomm{X}{Y} = \mcomm{X}{M+1}{Y} \leq [X,G, \kern.1em_{M}\kern.1em Y] = \fitcomm{[X,G]}{Y}$. \end{proof} The $k$-th term of the lower central series is $\gamma_kG = \mcomm{G}{k}{G}$. The \emph{nilpotent residual} of $G$ is defined as $\gamma_\infty G = \gamma_MG$ where $M$ is as above (\ie $\gamma_\infty G = \gamma_iG$ for every $i \geq M$). Recall that a finite group $G$ is nilpotent if and only if $\gamma_\infty G = 1$. The \emph{Fitting} subgroup $\Fit(G)$ is the union of all nilpotent normal subgroups. Let $G$ be a finite solvable group. It is well-known that $\Fit(G)$ itself is a nilpotent normal subgroup (see \eg \cite[Satz 4.2]{Huppert67}). The \emph{upper Fitting series} \[1 = \cU_0G \Nle \cU_1G \Nle \cdots \Nle \cU_k G = G \] is defined by $\cU_{i+1}G/\cU_iG = \Fit(G/\cU_iG)$. The \emph{lower Fitting series} \[1 = \cL_dG \Nle \cdots \Nle \cL_1 G \Nle \cL_0 G = G \] is defined by $\cL_{i+1}G = \gamma_\infty(\cL_{i}G)$. We have $d=k$ (see \eg \cite[Satz 4.6]{Huppert67}) and this number is called the \emph{Fitting length} $\FitL(G)$ (sometimes also referred to as \emph{nilpotent length}). The following fact can be derived by a straightforward induction from the characterization of $\Fit(G)$ as the largest nilpotent normal subgroup (for a proof see \eg \cite{mathoverflowW20}): \begin{lemma}\label{lem:Fitting} Let $H \Nleq G$ be a normal subgroup. Then for all $i$, we have $\cU_i H = \cU_iG \cap H$. In particular, \begin{enumerate} \item if $\FitL(H) = i$, then $H \leq \cU_iG$, \item if $g \in \cU_{i}G \setminus \cU_{i-1}G$, then $\FitL(\gen{g^G}) = i$. \end{enumerate} \end{lemma} \subparagraph*{Equations in groups.} An \emph{expression} (also called a \emph{polynomial} in \cite{SeifS06,HorvathS06,Kompatscher19}) over a group $G$ is a word $\alpha$ over the alphabet $G \cup \cX \cup \cX^{-1}$ where $\cX$ is a set of variables.
Here $\cX^{-1} $ denotes a formal set of inverses of the variables. Since we are dealing with finite groups only, a variable $X^{-1}\in \cX^{-1}$ for $X\in \cX$ can be considered as an abbreviation for $X^{\abs{G}-1}$. Sometimes we write $\alpha(X_1, \dots, X_n)$ for an expression $\alpha$ to indicate that the variables occurring in $\alpha$ are from the set $\oneset{X_1, \dots, X_n}$. Moreover, if $\beta_1, \dots, \beta_n$ are other expressions, we write $\alpha(\beta_1, \dots, \beta_n)$ for the expression obtained by substituting each occurrence of a variable $X_i$ by the expression $\beta_i$. An assignment for an expression $\alpha$ is a mapping $\sigma:\cX \to G$~-- here $\sigma$ is canonically extended by $\sigma( X^{-1}) = \sigma(X)^{-1}$ and $\sigma(g) = g$ for $g \in G$. An assignment $\sigma$ is \emph{satisfying} if $\sigma(\alpha)=1$ in $G$. The problems $\EQNSAT(G)$ and $\EQNID(G)$ are as follows: for both of them the input is an expression $\alpha$. For $\EQNSAT(G)$ the question is whether there \emph{exists} a satisfying assignment, for $\EQNID(G)$ the question is whether \emph{all} assignments are satisfying. Notice that in the literature \EQNSAT is also denoted by POL-SAT \cite{SeifS06,HorvathS06} or $\mathsf{Eq}$ \cite{Kompatscher19}, while \EQNID is also referred to as POL-EQ (\eg in \cite{SeifS06,HorvathS06,KlimaTT07}) or $\mathsf{Id}$ \cite{Kompatscher19}. If $\cX = \cY \cup \cZ$ with $\cY \cap \cZ = \emptyset$ and we are given assignments $\sigma_1: \cY \to G$ and $\sigma_2: \cZ \to G$, we obtain a new assignment $\sigma_1\cup \sigma_2$ defined by $(\sigma_1\cup \sigma_2) (X) = \sigma_1(X)$ if $X \in \cY$ and $(\sigma_1\cup \sigma_2) (X) = \sigma_2(X)$ if $X \in \cZ$. We write $[X \mapsto g]$ for the assignment $\oneset{X} \to G$ mapping $X $ to $g$. 
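For intuition, both problems are decidable by exhaustive search over all assignments, in time exponential in the number of variables. The following Python sketch over $S_3$ illustrates the definitions; the token encoding of expressions is our own ad-hoc choice:

```python
from itertools import permutations, product

def mul(p, q):
    # Compose permutations: first p, then q.
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def evaluate(expr, sigma, n):
    # expr: list of tokens ('const', g), ('var', X) or ('varinv', X).
    val = tuple(range(n))
    for kind, x in expr:
        g = x if kind == 'const' else (sigma[x] if kind == 'var' else inv(sigma[x]))
        val = mul(val, g)
    return val

def eqn_sat(expr, variables, n):
    # Is there an assignment with sigma(expr) = 1?
    G, e = list(permutations(range(n))), tuple(range(n))
    return any(evaluate(expr, dict(zip(variables, vs)), n) == e
               for vs in product(G, repeat=len(variables)))

def eqn_id(expr, variables, n):
    # Do all assignments satisfy sigma(expr) = 1?
    G, e = list(permutations(range(n))), tuple(range(n))
    return all(evaluate(expr, dict(zip(variables, vs)), n) == e
               for vs in product(G, repeat=len(variables)))

# The commutator expression [X, Y] = X^{-1} Y^{-1} X Y over S_3:
commutator = [('varinv', 'X'), ('varinv', 'Y'), ('var', 'X'), ('var', 'Y')]
assert eqn_sat(commutator, ['X', 'Y'], 3)      # e.g. X = Y = 1 works
assert not eqn_id(commutator, ['X', 'Y'], 3)   # S_3 is not abelian
```

The equation $[X,Y]=1$ is satisfiable over every group (set both variables to $1$) but is an identity only over abelian groups.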
\subparagraph*{Inducible subgroups.} According to \cite{GoldmannR02}, we call a subset $S \sse G$ \emph{inducible} if there is some expression $\alpha \in (G \cup \cX \cup \cX^{-1})^*$ such that $S = \set{\sigma(\alpha)}{\sigma\colon \cX \to G}$. In this case we say that $\alpha$ \emph{induces} $S$. Notice that in a finite group every verbal subgroup is inducible. (A subgroup is called \emph{verbal} if it is generated by a set of the form $\set{\sigma(\alpha)}{\sigma\colon \cX \to G, \alpha \in \cA}$ where $\cA \sse (\cX \cup \cX^{-1})^*$ is a \emph{finite} set of expressions without constants.) This shows the first three points of the following lemma (for $\gamma_1G$, see also \cite[Lemma 5] {GoldmannR02}): \begin{lemma}\label{lem:inducible} Let $G$ be a finite group. Then \begin{enumerate} \item for every $k \in \N$, the subgroup generated by all $k$-th powers is inducible, \item every element $\gamma_kG$ of the lower central series is inducible, \item every element $\cL_kG$ of the lower Fitting series is inducible, \item if $K\leq H \leq G$ and $K$ is inducible in $H$ and $H$ inducible in $G$, then $K$ is also inducible in $G$, \item if $H\leq G$ with $H = [G,H]$, then $H$ is inducible. \end{enumerate} \end{lemma} The fourth point follows simply by ``plugging in'' an expression for $H$ inside an expression for $K$. The last point follows from the proof of \cite[Lemma 9 ]{Kompatscher19}. The notion of inducible subgroup turns out to be very useful for proving lower bounds on the complexity. Indeed, the following facts are straightforward: \begin{lemma}[{\,\!\cite[Lemma 8]{GoldmannR02}, \cite[Lemma 9, 10]{HorvathS11}}]\label{lem:inducibleEQN} Let $H \leq G$ be an inducible subgroup. Then \begin{itemize} \item $\EQNSAT(H) \leq_{\mathrm{m}}^{\Ac0} \EQNSAT(G)$, and \item $\EQNID(H) \leq_{\mathrm{m}}^{\Ac0} \EQNID(G)$. \item If, moreover, $H$ is normal in $G$, then $\EQNSAT(G/H) \leq_{\mathrm{m}}^{\Ac0} \EQNSAT(G)$. 
\end{itemize} \end{lemma} Let us briefly sketch the ideas behind this lemma: Fix an expression $\beta$ inducing $H$. For the first and second reductions, replace every variable occurring in a given equation by a copy of $\beta$ with disjoint variables. The third reduction simply appends $\beta$ to an input equation. \subparagraph*{Atomically universally definable subgroups.} The situation for reducing $\EQNID(G/H)$ to $ \EQNID(G)$ is slightly more complicated. For this we need a new definition: We call a subset $S \sse G$ \emph{atomically universally definable} if there is some expression $\alpha \in (G \cup \cX \cup \cX^{-1})^*$ where $\cX = \oneset{X} \cup \oneset{Y_1, Y_2,\dots }$ such that \[S = \set{g \in G}{(\sigma \cup [X \mapsto g])(\alpha) = 1 \text{ for all } \sigma\colon \oneset{Y_1, Y_2,\dots } \to G}.\] In this case we say that $\alpha$ \emph{atomically universally defines} $S$. (Notice that \emph{universally definable} is usually defined analogously, but instead of a single equation $\alpha$ one allows a Boolean formula of equations.) It is clear that the center of a group is atomically universally definable by the expression $[X,Y]$. This generalizes as follows: \begin{lemma}\label{lem:universallyd} Let $G$ be a finite group. \begin{itemize} \item The Fitting subgroup $\Fit(G)$ is atomically universally definable. \item If $N \leq H\leq G$ and $N$ is normal in $G$ and $H/N$ is atomically universally definable in $G/N$ and $N$ is atomically universally definable in $G$, then $H$ is atomically universally definable in $G$. \item All terms $\cU_iG$ of the upper Fitting series are atomically universally definable. \item If $H\leq G$ is inducible, then the centralizer $C_G(H) = \set{g\in G}{gh = hg \text{ for all } h \in H}$ is atomically universally definable. \end{itemize} \end{lemma} \begin{proof} By \cref{lem:Fitting}, the normal subgroup $\gen{g^G}$ generated by $g \in G$ is nilpotent if and only if $g \in \Fit(G)$.
Therefore, $g \in \Fit(G)$ if and only if $\smcomm{M}{\gen{g^G}} = 1$ ($M$ as in \cref{sec:prelims} large enough), which, by \cref{lem:setcommutator}, is the case if and only if $\smcomm{M}{g^G}_{\Set} = 1$. Hence, the expression $[X^{Y_1}, \dots, X^{Y_{\!M}}]$ atomically universally defines $\Fit(G)$. Now, suppose that $\beta \in (G \cup \cX_\beta \cup \cX_\beta^{-1})^*$ with $\cX_\beta = \oneset{X,Y_1, \dots, Y_k}$ atomically universally defines $H/N$ in $G/N$ and that $\alpha \in (G \cup \cX_\alpha \cup \cX_\alpha^{-1})^*$ with $\cX_\alpha = \oneset{Z,Y_{k+1}, \dots, Y_m}$ atomically universally defines $N$ in $G$. Thus, $g \in H$ if and only if $\beta(g, Y_1, \dots, Y_k) \in N$ for all $Y_1, \dots, Y_k \in G$ and $h \in N$ if and only if $\alpha(h, Y_{k+1}, \dots, Y_m) =1$ for all $Y_{k+1}, \dots, Y_m \in G$. Hence, $\alpha(\beta(g, Y_1, \dots, Y_k), Y_{k+1}, \dots, Y_m) =1$ for all $Y_1, \dots, Y_m \in G$ if and only if $g \in H$ and so $H$ is atomically universally definable. The third point follows by induction from the first and second point. The fourth point is essentially due to \cite[Lemma 10]{HorvathS11}: if $\beta$ is an expression inducing $H$, then $[X,\beta]$ atomically universally defines $C_G(H)$. \end{proof} \begin{lemma}\label{lem:univTAUT} Let $H \Nleq G$ be an atomically universally definable normal subgroup. Then \[\EQNID(G/H) \leq_{\mathrm{m}}^{\Ac0} \EQNID(G).\] \end{lemma} \begin{proof} Denote $Q= G/H$. Let $\beta \in (G \cup \cX_\beta \cup \cX_\beta^{-1})^*$ with $\cX_\beta = \oneset{Z,Y_1, \dots, Y_k}$ atomically universally define $H$ and let $\alpha \in (Q \cup \cX \cup \cX^{-1})^*$ be an instance for $\EQNID(Q)$ (with $\cX \cap \cX_\beta = \emptyset$). Let $\tilde \alpha$ denote the expression obtained from $\alpha$ by replacing every constant of $Q$ by an arbitrary preimage in $G$. 
Then $\sigma(\alpha) = 1$ in $Q$ for all assignments $\sigma: \cX \to Q$ if and only if $\tilde \sigma(\tilde \alpha) \in H$ for all assignments $\tilde\sigma: \cX \to G$. By the choice of $\beta$, the latter is the case if and only if $\hat \sigma(\beta(\tilde \alpha,Y_1, \dots, Y_k)) =1$ for all assignments $\hat\sigma: \cX \cup \oneset{Y_1, \dots, Y_k} \to G$. \end{proof} \section{$G$-programs and AND-weakness}\label{sec:programs} Let $G$ be a finite group. An $n$-input $G$-program of length $\ell$ with variables (input bits) from $\oneset{B_1, \dots, B_n}$ is a sequence \[ P = \langle B_{i_1},a_1,b_1\rangle \langle B_{i_2},a_2,b_2\rangle \cdots \langle B_{i_\ell},a_\ell,b_\ell\rangle \in (\oneset{B_1, \dots, B_n} \times G \times G)^*. \] For a mapping $\sigma : \oneset{B_1, \dots, B_n} \to \{0,1\}$ (called an assignment) we define $\sigma(P) \in G$ as the group element $c_1 c_2 \cdots c_\ell$, where $c_j = a_j$ if $\sigma(B_{i_j}) = 0$ and $c_j = b_j$ if $\sigma(B_{i_j}) = 1$ for all $1 \leq j \leq \ell$. We say that an $n$-input $G$-program $P$ \emph{computes} a function $f: \{0,1\}^n \to \{0,1\}$ if $P$ is over the variables $B_1, \dots, B_n$ and there is some $S\sse G$ such that $\sigma(P) \in S$ if and only if $f(\sigma) = 1$. \ProgSAT is the following problem: given a $G$-program $P$ with variables $B_1, \dots, B_n$, decide whether there is an assignment $\sigma:\oneset{B_1, \dots, B_n} \to \oneset{0,1}$ such that $\sigma(P)=1$. \subparagraph*{The \AND-weakness conjecture.} In \cite{BarringtonST90}, Barrington, Straubing and Thérien conjectured that, if $G$ is finite and solvable, every $G$-program computing the $n$-input \AND requires length exponential in $n$. This is called the \emph{\AND-weakness conjecture}. Unfortunately, the term ``exponential'' seems to be a source of a possible misunderstanding: while often it means $2^{\Omega(n)}$, on other occasions it is used for $2^{n^{\Omega(1)}}$.
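To make the definition of $\sigma(P)$ concrete, here is a minimal Python evaluator (our own ad-hoc sketch), together with a two-instruction program over $C_2$, realized as permutations of two points, that computes the PARITY of its input bits with accepting set $S = \oneset{g}$:

```python
def mul(p, q):
    # Compose permutations: first p, then q.
    return tuple(q[i] for i in p)

def run(program, sigma, n):
    # program: list of instructions (B, a, b); pick a if sigma[B] = 0, else b.
    val = tuple(range(n))
    for B, a, b in program:
        val = mul(val, b if sigma[B] else a)
    return val

e, g = (0, 1), (1, 0)             # C_2 inside S_2
P = [('B1', e, g), ('B2', e, g)]  # <B1, 1, g><B2, 1, g>

for b1 in (0, 1):
    for b2 in (0, 1):
        out = run(P, {'B1': b1, 'B2': b2}, 2)
        # sigma(P) = g^(b1 + b2), so P computes PARITY via S = {g}.
        assert (out == g) == ((b1 + b2) % 2 == 1)
```

PARITY thus has programs of constant length per input bit over $C_2$; the conjecture asserts that, over solvable groups, \AND admits no comparably short programs.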
Indeed, in \cite{GoldmannR02,BarringtonMMTT00}, the conjecture is restated as its \emph{strong version}: ``every $G$-program over a solvable group $G$ for the $n$-input \AND requires length $2^{\Omega(n)}$.'' However, already in the earlier paper \cite{BarringtonBR94}, it is remarked that the $n$-input \AND can be computed by depth-$k$ \CC circuits of size $2^{\Oh(n^{1/(k-1)})}$ for every $k\geq 2$ (a \CC circuit is a circuit consisting only of $\MOD{m}$ gates for some $m \in \N$)~-- thus disproving the strong version of the \AND-weakness conjecture. For a recent discussion about the topic also referencing the cases where the conjecture actually has been proved, we refer to \cite{Kompatscher19CC}. In this section we provide a more detailed upper bound on the length of $G$-programs for the \AND function in terms of the Fitting length of $G$. We can view our upper bound as a refined version of the $2^{\Oh(n^{1/(k-1)})}$ upper bound for depth-$k$ \CC circuits. This is because, by \cite[Theorem 2.8]{McKenziePT91}, for every depth-$k$ \CC circuit family there is a fixed group $G$ of Fitting length $k$ (indeed, of derived length $k$) such that the $n$-input circuit can be transformed into a $G$-program of length polynomial in $n$. \iffull The easiest way to disprove the strong version of the \AND-weakness conjecture is a divide-and-conquer approach: Assume we can compute the $n$-input \AND by a \CC-circuit of size $2^n$ and depth $2$ (which is true by \cite{Barrington85}). Since we can decompose the $n$-input \AND as a $\sqrt{n}$-input \AND of $\sqrt{n}$ many $\sqrt{n}$-input \AND{}s, we obtain a \CC circuit of depth $4$ and size roughly $2^{\sqrt{n}}$~-- or, more generally, a \CC circuit of depth $2k$ and size roughly $2^{\sqrt[k]{n}}$.
The proof of \cref{prop:notANDweakrefined} uses a similar divide-and-conquer approach: \fi \begin{proposition}\label{prop:notANDweakrefined} Let $G$ be a finite solvable group and consider a strictly ascending series $1= H_0 \Nle H_1 \Nle \cdots \Nle H_m = G$ of normal subgroups where $H_i = \gamma_{k_i}(H_{i+1})$ with $k_i \in \N \cup\oneset{\infty}$ for $i \in \interval{1}{m-1}$ and $k_0 =\infty$. Denote $c = \abs{\set{i \in \interval{1}{m-1}}{k_i = \infty}}$ and $C = \prod_{k_i < \infty} (k_i + 1)$. Then the $n$-input \AND function can be computed by a $G$-program of length $\Oh(2^{D n^{1/c}})$ where $D = \frac{c}{C^{1/c}}$. More precisely, for every $n \in \N$ there is some $1\neq g \in G$ and a $G$-program $Q_n$ of length $\Oh(2^{D n^{1/c}})$ such that \[\sigma(Q_n) = \begin{cases} g&\text{if } \sigma(B_1) = \cdots = \sigma(B_n) = 1,\\ 1& \text{otherwise.} \end{cases}\] \end{proposition} Clearly we have $c \leq d-1$ if $d$ is the Fitting length of $G$. The lower Fitting series is the special example of such a series where $H_i = \cL_{d-i}G$ and $k_i = \infty$ for all $i \in \oneset{0, \dots, d}$. Thus, we get the following corollary: \begin{corollary}\label{cor:notANDweak} Let $G$ be a finite solvable group of Fitting length $d \geq 2$. Then the $n$-input \AND function can be computed by a $G$-program of length $2^{\Oh(n^{1/(d-1)})}$. \end{corollary} \begin{example} The symmetric group on four elements $S_4$ has Fitting length 3 with $S_4 \geq A_4 \geq C_2 \times C_2 \geq 1$ being both the upper and lower Fitting series. Therefore, we obtain a length-$\Oh(2^{2\sqrt{n}})$ program for the $n$-input \AND by \cref{prop:notANDweakrefined}. In particular, the strong version of the \AND-weakness conjecture does not hold for the group $S_4$. Note that according to \cite{BarringtonST90}, $S_4$ is the smallest group for which the $2^{\Omega(n)}$ lower bound from \cite{BarringtonST90} does not apply.
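The claimed series for $S_4$ can be verified mechanically. The following brute-force Python sketch (permutations as tuples; all helper names are ours) computes the lower Fitting series by iterating the nilpotent residual:

```python
def mul(p, q):
    # Compose permutations: first p, then q.
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(x, y):
    # [x, y] = x^{-1} y^{-1} x y
    return mul(mul(mul(inv(x), inv(y)), x), y)

def closure(gens, n):
    # Subgroup of S_n generated by gens (finite group, so closing
    # under right multiplication by the generators suffices).
    e = tuple(range(n))
    H, frontier = {e}, [e]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = mul(g, s)
            if h not in H:
                H.add(h)
                frontier.append(h)
    return H

def commutator_subgroup(X, Y, n):
    # [X, Y] = <[x, y] : x in X, y in Y> for subgroups X, Y.
    return closure([comm(x, y) for x in X for y in Y], n)

def nilpotent_residual(H, n):
    # gamma_inf(H): iterate K -> [K, H] until it stabilizes.
    K = H
    while True:
        K2 = commutator_subgroup(K, H, n)
        if K2 == K:
            return K
        K = K2

S4 = closure([(1, 0, 2, 3), (1, 2, 3, 0)], 4)  # a transposition and a 4-cycle
series = [S4]
while len(series[-1]) > 1:
    series.append(nilpotent_residual(series[-1], 4))

# S_4 > A_4 > C_2 x C_2 > 1, so the Fitting length of S_4 is 3.
assert [len(H) for H in series] == [24, 12, 4, 1]
```

The three nontrivial steps of the computed series match the chain $S_4 \geq A_4 \geq C_2 \times C_2 \geq 1$ above.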
On the other hand, consider the group $G = (C_3 \times C_3)\rtimes D_4$ where $D_4$ (the dihedral group of order eight) acts faithfully on $C_3 \times C_3$\footnote{This group can be found in the GAP small group library under the index $[72,40]$. It has been suggested as an example by Barrington (private communication).}. It has Fitting length two. Moreover, its derived subgroup $G' = (C_3 \times C_3)\rtimes C_2$ still has Fitting length two. Hence, we have a series $H_3 = G$, $H_2 = G' = \gamma_1 G$, $\,H_1 = \gamma_\infty G' = C_3 \times C_3$, and $H_0 = 1$. Therefore, we get an upper bound of $\Oh(2^{n/2})$ for the length of a program for the $n$-input \AND. \end{example} \begin{proof}[Proof of \cref{prop:notANDweakrefined}] We choose $K = (n/C)^{1/c}$. For simplicity, let us first assume that $K$ is an integer. Moreover, we assume that $K$ is large enough such that $H_{i} = \smcomm{K}{ H_{i+1}}$ holds whenever $k_i = \infty$ and that $K \geq k_i +1$ for all $k_i < \infty$. We define sets $A_i \sse G$ inductively by $A_{m} = G$ and $A_{i} = \smcomm{K}{A_{i+1}}_{\Set}$ if $k_i = \infty$ and $A_{i} = \smcomm{k_i+1}{A_{i+1}}_{\Set}$ if $k_i < \infty$. By \cref{lem:setcommutator} and induction it follows that $H_i = \gen{A_{i}}$ for all $i\in\oneset{0, \dots, m}$. Since $H_1 \neq 1$, we find a non-trivial element $g \in A_{1}$. We can decompose $g$ recursively. For this, we need some more notation: for $\ell \in \interval{1}{m}$ consider the set of words \[V_\ell = \set{v = v_1\cdots v_{\ell-1} \in \interval{1}{K}^{\ell-1}}{v_i \leq k_{i} + 1 \text{ for all } i \in \interval{1}{\ell-1}}.\] We have $\abs{V_{m}} = C \cdot K^{c} = n$, so we can fix a bijection $\kappa\colon V_{m} \to \interval{1}{n}$. Now, we can describe the recursive decomposition of $g = g_\epsilon$: \begin{itemize} \item $g_{v} = [g_{v1}, \dots, g_{vK}]$ for $v \in V_\ell$ with $k_\ell = \infty$, and \item $g_{v} = [g_{v1}, \dots, g_{v(k_\ell +1)}]$ for $v \in V_\ell$ with $k_\ell < \infty$.
\end{itemize} Thus, in particular, we can view $g_\epsilon$ as a word over the $g_v$ for $v \in V_{m}$. For $v \in V_\ell$ we have $\abs{g_v} \leq \sum_{i=1}^{K} 2^{K+1-i}\abs{g_{vi}} \leq 2^{K+1} \max_{i}\abs{g_{vi}}$ whenever $k_\ell = \infty$ and $\abs{g_v} \leq 2^{k_\ell+2} \max_{i}\abs{g_{vi}}$ if $k_\ell < \infty$. Therefore, setting $D = \frac{c}{C^{1/c}}$ we obtain by induction \[\abs{g_\eps} \leq 2^{\sum_{k_\ell < \infty} (k_\ell+2)} (2^{K+1})^{c} \in \Oh( 2^{Dn^{1/c}}).\] In order to obtain a $G$-program for the $n$-input \AND, we define $G$-programs $P_v$ for $v \in \bigcup_{\ell \leq m} V_\ell$. In the commutators we also need programs for inverses: for a $G$-program $P = \langle B_{i_1},a_1,b_1\rangle \langle B_{i_2},a_2,b_2\rangle \cdots \langle B_{i_\ell},a_\ell,b_\ell\rangle$ we set $P^{-1} = \langle B_{i_\ell},a_\ell^{-1},b_\ell^{-1}\rangle \cdots \langle B_{i_1},a_1^{-1},b_1^{-1}\rangle $. Clearly $(\sigma(P))^{-1} = \sigma(P^{-1})$ for all assignments $\sigma$. \begin{itemize} \item for $v \in V_m$ we set $P_v = \langle B_{\kappa(v)}, 1,g_v\rangle$, \item for $v \in V_\ell $ with $1 \leq \ell < m$ we set $P_{v} = [P_{v1}, \dots, P_{vK}]$ if $k_{\ell} = \infty$, and \item for $v \in V_\ell $ with $1 \leq \ell < m$ we set $P_{v} = [P_{v1}, \dots, P_{v(k_\ell +1)}]$ if $k_\ell < \infty$. \end{itemize} For $v \in V_\ell$ let $V(v)$ denote the set of those words $w \in V_{m}$ having $v$ as a prefix. By induction we see that \[\sigma(P_v) = \begin{cases} g_v&\text{if } \sigma(B_{\kappa(w)}) = 1 \text{ for all } w\in V(v),\\ 1& \text{otherwise.} \end{cases}\] This shows the correctness of our construction. It remains to consider the case that $(n/C)^{1/c}$ is not an integer. Then we set $K = \ceil{(n/C)^{1/c}}$. It follows that $\abs{V_{m}} = C \cdot K^{c} \geq n$, so we can fix a bijection $\kappa \colon U\to \interval{1}{n}$ for some subset $U \sse V_{m}$.
We still have $\abs{g_\eps} \leq 2^{\sum_{k_i < \infty} (k_i+1)} (2^{K+1})^{c} \in \Oh (2^{cK}) = \Oh( 2^{Dn^{1/c}})$ with $D$ as above. This concludes the proof of \cref{prop:notANDweakrefined}. \end{proof} \begin{remark} In the light of \cref{prop:notANDweakrefined} it is natural to ask for a refined version of the \AND-weakness conjecture. A natural candidate would be to conjecture that every $G$-program for the $n$-input \AND has length $2^{\Omega(n^{1/(d-1)})}$ where $d$ is the Fitting length of $G$. However, this weaker version of the \AND-weakness conjecture is also wrong! Indeed, in \cite[Section 2.4]{BarringtonBR94} Barrington, Beigel and Rudich show that the $n$-input \AND can be computed by circuits using only $\MOD{m}$ gates of depth 3 and size $2^{\Oh(n^{1/r} \log n)}$ where $r$ is the number of different prime factors of $m$. Translating the circuit into a $G$-program yields a group $G$ of Fitting length 3. Since there is no bound on $r$, we see that there is no lower bound on the exponent $\delta$ such that there are $G$-programs of length $2^{\Oh(n^\delta)}$ for the $n$-input \AND in groups of Fitting length 3. \iffull While this does not yield smaller \CC circuits or shorter $G$-programs than the approach of \cref{prop:notANDweakrefined} allows, it shows that the divide-and-conquer technique on which \cref{prop:notANDweakrefined} relies is not always the best way for constructing small programs for \AND. \fi \end{remark} In \cite{HansenK10} it is shown that the \AND function can be computed by probabilistic \CC circuits using only a logarithmic number of random bits, which ``may be viewed as evidence contrary to the conjecture'' \cite{HansenK10}. In the light of this, we do not feel confident judging which form of the \AND-weakness conjecture might be true. The following version seems possible. \begin{conjecture}[\AND-weakness \cite{BarringtonST90}]\label{conj:andweak} Let $G$ be finite solvable.
Then every $G$-program for the $n$-input \AND has length $2^{n^{\Omega(1)}}$. \end{conjecture} Notice that \cite[Theorem 2]{BarringtonMMTT00} (if $G$ is \AND-weak, \ProgSAT over $G$ can be decided in quasi-polynomial time) still holds with this version of the \AND-weakness conjecture. \section{Reducing \KColoring{C} to equations}\label{sec:reduction} In this section we describe the reduction of \KColoring{C} to $\EQNSAT(G)$ and $\EQNID(G)$ in the spirit of \cite{GoldmannR02,Kompatscher19}. For this, we rely on the fact that $G$ has some normal subgroups meeting some special requirements. In \cref{sec:consequences}, we show that all sufficiently complicated finite solvable groups meet the requirements of \cref{thm:main}. For a normal subgroup $H \Nleq G$ and $g\in G$, we define $\eta_g(H) = \fitcomm{H}{g^G}$. Recall that $M$ is chosen large enough such that $\mcomm{X}{M}{Y} = \mcomm{X}{i}{Y}$ for all $i \geq M$ and all $X,Y\sse G$ with $X^G=X$ and $Y^G=Y$. Since $H$ is normal, we have $\eta_g(H)\leq H$ and $\eta_g(H)$ is normal in $G$. \begin{lemma}\label{lem:repeateta}\label{lem:Hsubgroup} Let $H\Nleq G$ be a normal subgroup and $g,h \in G$. Then \begin{enumerate} \item $\eta_g(\eta_g(H)) = \eta_g(H)$, and \item $\eta_{gh}(H) \leq \eta_g(H)\eta_h(H)$, and \item $\FitL(\eta_{gh}(H)) \leq \max\oneset{\FitL(\eta_g(H)), \FitL(\eta_h(H))}$. \end{enumerate} \end{lemma} \begin{proof} We use the fact that $M$ is chosen such that $\mcomm{X}{M}{Y} = \mcomm{X}{i}{Y}$ for all $i \geq M$ and all $X,Y \sse G$ with $X^G=X$ and $Y^G=Y$: \begin{align*} \eta_g(H) &= \mcomm{H}{M}{g^G}= \mcomm{H}{2M}{g^G}= \mcomm{\mcomm{H}{M}{g^G}}{M}{g^G\vphantom{k^k}} = \eta_g(\eta_g(H)). \end{align*} The second point follows with the same kind of argument: \begin{align*} \eta_{gh}(H) &= [H,\,_{2M} (gh)^G]\leq [H,\,_{2M} \gen{g^G \cup h^G}]\\ & = \gen{[H,\,_{2M} g^G \cup h^G]_{\Set}} \tag{by \cref{lem:setcommutator}}\\ &\leq \eta_g(H)\eta_h(H).
\end{align*} The last step is because each of the commutators in $[H,\,_{2M} g^G \cup h^G]_{\Set}$ either contains at least $M$ terms from $g^G$ and, thus, lies in $\eta_g(H)$, or it contains at least $M$ terms from $h^G$ and, thus, lies in $\eta_h(H)$. The third point is an immediate consequence of the second point and \cref{lem:Fitting}. \end{proof} \begin{lemma}\label{lem:Kinducible} Suppose that $K \Nleq G$ is a normal subgroup satisfying $\eta_g(K) = K$ for some $g \in G$. Then $K$ is inducible. \end{lemma} \begin{proof} Since $\eta_{g}(K) = K$ for some $g\in G$ implies that $K = [K,G]$, it follows from \cref{lem:inducible} that $K$ is inducible. \end{proof} \begin{theorem}\label{thm:main} Let $G$ be a finite solvable group of Fitting length three and assume there are normal subgroups $K\Nleq H\Nleq G$ such that $\FitL(K) = 2$, $\cU_{2}G \leq H$, and $\abs{G/H} \geq 3$. Moreover, assume that \begin{enumerate}[(I)] \item for all $g \in G \setminus H$ we have $\eta_g(K) = K$,\label{assumption1} \item for all $h \in H$ we have $\FitL(\eta_h(K)) \leq 1$.\label{assumption2} \end{enumerate} Then $\EQNSAT(G)$ and $\EQNID(G)$ cannot be decided in deterministic time $2^{o(\log^2N)}$ under ETH where $N$ is the length of the input expression. In particular, $\EQNSAT(G)$ and $\EQNID(G)$ are not in \P under ETH. \end{theorem} \newcommand{\numBatches}{R} \newcommand{\indexBatchOut}{r} \newcommand{\indexBatchIn}{s} \newcommand{\indexK}{k} \newcommand{\indexMgamma}{\nu} \newcommand{\indexMdelta}{\mu} \newcommand{\numKInd}{T} \newcommand{\indexKInd}{t} \subparagraph*{Proof outline.} The crucial observation for this theorem is the same as for \cref{prop:notANDweakrefined}: that, roughly speaking, the $n$-input \AND can be decomposed into the conjunction of $\sqrt{n}$ many $\sqrt{n}$-input \AND{}s. We use this observation in order to reduce the \KColoring{C} problem to \EQNSAT.
More precisely, given a graph $\Gamma$ with $n$ vertices and $m$ edges, we construct an expression $\delta$ and an element $\tilde h \in G$ such that \begin{enumerate}[(A)] \item the length of $\delta$ is in $2^{\Oh(\sqrt{m+n})}$, \label{pointA} \item $\delta$ can be computed in time polynomial in its length,\label{pointB} \item $\delta =\tilde h$ is satisfiable if and only if $\Gamma$ has a valid $C$-coloring, and\label{pointC} \item $\sigma(\delta) = 1$ holds for all assignments $\sigma$ if and only if $\Gamma$ does \emph{not} have a valid $C$-coloring. \label{pointD} \end{enumerate} For the number of colors we use $C = \abs{G/H}$. Let $N$ denote the input length for \EQNSAT (resp.\ \EQNID). A $2^{o(\log^2N)}$-time algorithm for \EQNSAT (resp.\ \EQNID) would thus imply a $2^{o(n+m)}$-time algorithm for \KColoring{C}, contradicting ETH. Hence, it is enough to show points (\ref{pointA})--(\ref{pointD}). In order to construct the expression $\delta$, we assign a variable $X_i$ to every vertex $v_i$ of $\Gamma$. Every assignment $\sigma$ to the variables $X_i$ will give us a coloring $\chi_\sigma$ of $\Gamma$ (to be defined later). During the proof, we also introduce some auxiliary variables. The aim is to construct $\delta$ in such a way that an assignment $\sigma$ to the variables $X_i$ can be extended to a satisfying assignment for $\delta =\tilde h$ if and only if $\chi_\sigma$ is a valid coloring of $\Gamma$ (see \cref{lem:reductioncorrect}). We start by grouping the edges into roughly $\sqrt{m}$ batches of $\sqrt{m}$ edges each.
For each batch of edges, we construct an expression $\gamma_\indexBatchOut$ (where $\indexBatchOut$ is the number of the batch) such that for every assignment $\sigma$ to the variables $X_i$ we have \begin{itemize} \item if $\chi_\sigma$ assigns the same color to two endpoints of an edge in the $\indexBatchOut$-th batch, then for every assignment to the auxiliary variables, $\gamma_\indexBatchOut$ evaluates to something in $\cU_1K$, \item otherwise, for every element $h \in K$, there is an assignment to the auxiliary variables such that $\gamma_\indexBatchOut$ evaluates to $h$. \end{itemize} A more formal statement of this can be found in \cref{lem:assignmentextension1}. The expression $\delta$ combines all the $\gamma_\indexBatchOut$ as an iterated commutator such that if one of the $\gamma_\indexBatchOut$ evaluates to something in $\cU_1K$, then $\delta$ evaluates to $1$, and, otherwise, there is some assignment to the auxiliary variables such that $\delta$ evaluates to the fixed element $\tilde h$. \begin{proof} Let $C = \abs{G/H}$. Let us describe how the \KColoring{C} problem for a given graph $\Gamma=(V,E)$ is reduced to an instance of \EQNSAT (resp.\ \EQNID). We denote $V= \oneset{v_1, \dots, v_n}$. For every vertex $v_i$ we introduce a variable $X_i$ and we set $\cX = \oneset{X_1, \dots, X_n}$. By fixing a bijection $G/H \to \interval{1}{C}$, we obtain a correspondence between assignments $\cX \to G$ and colorings $V \to \interval{1}{C}$ (be aware that it is \emph{not} one-to-one). During the construction we will also introduce a set $\cY$ of auxiliary variables. As outlined above, the idea is that an assignment $\cX \to G$ represents a valid coloring if and only if there is an assignment to the auxiliary variables under which the equation evaluates to a non-identity element. For each edge $\oneset{v_i,v_j} \in E$, we introduce one edge gadget $X_iX_j^{-1}$ (it does not matter which one is the positive variable).
Now, we group these gadgets into $\numBatches$ batches of $\numBatches$ elements each (if the number of gadgets is not a square, we duplicate some gadgets)~-- \ie we choose $\numBatches = \ceil{\sqrt{m}\;\!}$. Exactly how the gadgets are grouped together does not matter. For $\indexBatchOut \in \interval{1}{\numBatches}$ and $\indexK\in \interval{1}{\abs{K}}$ let $\alpha_{\indexBatchOut,\indexK}$ be an expression which induces $K$ (\ie all $\alpha_{\indexBatchOut,\indexK}$ are the same expressions but with disjoint sets of variables). Such expressions exist by \cref{lem:Kinducible}. Let the variables of $\alpha_{\indexBatchOut,\indexK}$ be $Y_{\indexBatchOut,\indexK,\indexKInd}$ for $\indexKInd\in \interval{1}{\numKInd} $ for some $\numKInd \in \N$. Moreover, we introduce more auxiliary variables $Z_{\indexBatchOut,\indexK,\indexBatchIn,\indexMgamma}$ for $\indexBatchOut \in \interval{1}{\numBatches}$, $\indexK\in \interval{1}{\abs{K}}$, $\indexBatchIn \in \interval{1}{\numBatches}$, and $ \indexMgamma \in \interval{1}{M}$ (recall that $M$ is chosen such that, in particular, $\fitcomm{H_1}{H_2} = \mcomm{H_1}{M+1}{H_2}$ for arbitrary normal subgroups $H_1, H_2$ of $G$) and we set \[\cY'_\indexBatchOut = \set{\vphantom{\big(} Z_{\indexBatchOut,\indexK,\indexBatchIn,\indexMgamma},\; Y_{\indexBatchOut,\indexK,\indexKInd}}{\indexK\in \interval{1}{\abs{K}}, \indexBatchIn \in \interval{1}{\numBatches}, \indexMgamma \in \interval{1}{M}, \indexKInd \in \interval{1}{\numKInd} }.\] Let $\beta_{\indexBatchOut,1}, \dots, \beta_{\indexBatchOut,\numBatches}$ be the gadgets of the $\indexBatchOut$-th batch for some $\indexBatchOut \in \interval{1}{\numBatches}$. 
We define \begin{align} \gamma_\indexBatchOut = \prod_{\indexK=1}^{\abs{K}} \left[ \alpha_{\indexBatchOut,\indexK}, \beta_{\indexBatchOut,1}^{Z_{\indexBatchOut,\indexK,1,1}}, \dots, \beta_{\indexBatchOut,1}^{Z_{\indexBatchOut,\indexK,1,M}}, \dots,\beta_{\indexBatchOut,\numBatches}^{Z_{\indexBatchOut,\indexK,\numBatches,1}}, \dots, \beta_{\indexBatchOut,\numBatches}^{Z_{\indexBatchOut,\indexK,\numBatches,M}}\right].\label{eq:gammak} \end{align} We do this for every batch of gadgets. The following observation is crucial: \begin{lemma}\label{lem:assignmentextension1} Let $\sigma\colon \cX \to G$ be an assignment and let $\indexBatchOut \in \interval{1}{\numBatches}$. \begin{itemize} \item If $\sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in G\setminus H$ for all $\indexBatchIn$, then $\displaystyle \set{\vphantom{k^h}(\sigma \cup \sigma')(\gamma_\indexBatchOut)}{\sigma' : \cY_\indexBatchOut' \to G} = K,$ \item Otherwise, $\displaystyle\set{\vphantom{k^h}(\sigma \cup \sigma')(\gamma_\indexBatchOut)}{\sigma' : \cY_\indexBatchOut' \to G} \leq \cU_1K.$ \end{itemize} \end{lemma} \newcommand{\comMu}{, \kern.1em_{M}\,\kern.1em} \begin{proof} By construction, we have $(\sigma \cup \sigma')(\alpha_{\indexBatchOut,\indexK}) \in K$ for all $\indexBatchOut$ and $\indexK$ and all assignments $\sigma $ and $\sigma'$. Since $K$ is normal, it follows that $(\sigma \cup \sigma')(\gamma_\indexBatchOut) \in K$ for all assignments $\sigma $ and $\sigma'$. Consider the case that $g_\indexBatchIn\coloneqq \sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in G\setminus H$ for all $\indexBatchIn \in \interval{1}{\numBatches}$. By assumption (\ref{assumption1}), we have $K = \eta_{g_1}(K) = \eta_{g_2}(\eta_{g_1}(K)) = \cdots = \eta_{g_\numBatches} \dots \eta_{g_2}(\eta_{g_1}(K))\cdots) $. 
By \cref{lem:setcommutator}, it follows that $K = \gen{ [K\comMu g_1^G, \dots\comMu g_\numBatches^G]_{\Set}}.$ Since $1 \in [K\comMu g_1^G, \dots\comMu g_\numBatches^G]_{\Set}$ and every element in $K$ can be written as a product of length at most $\abs{K}$ over any generating set, we conclude $K = \left([K\comMu g_1^G,\dots\comMu g_\numBatches^G]_{\Set}\right)^{\abs{K}}$. This is exactly the form in which $\gamma_\indexBatchOut$ was defined in \cref{eq:gammak} (recall that $\alpha_{\indexBatchOut,\indexK}$ can evaluate to every element of $K$). Therefore, for each $h \in K$, there is an assignment $\sigma'\colon \cY_\indexBatchOut' \to G$ such that $(\sigma \cup \sigma')(\gamma_\indexBatchOut) = h$. On the other hand, let $g_\indexBatchIn\coloneqq \sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in H$ for some $\indexBatchIn$. Then, by assumption (\ref{assumption2}) we have $\FitL(\eta_{g_\indexBatchIn}(K)) \leq 1$. Since $(\sigma \cup \sigma')(\gamma_\indexBatchOut) \in \eta_{g_\indexBatchIn}(K)$, we obtain $(\sigma \cup \sigma')(\gamma_\indexBatchOut) \in \cU_1K$ by \cref{lem:Fitting}. \end{proof} Now, for every set of auxiliary variables $\cY_\indexBatchOut'$ we introduce $M$ disjoint copies, which we call $\cY_\indexBatchOut^{(\indexMdelta)}$ for $\indexMdelta \in \interval{1}{M}$. We write $\gamma_\indexBatchOut^{(\indexMdelta)}$ for the copy of $\gamma_\indexBatchOut$ where the variables of $\cY_\indexBatchOut'$ are substituted by the corresponding ones in $\cY_\indexBatchOut^{(\indexMdelta)}$ (the variables $\cX$ are shared over all $\gamma_\indexBatchOut^{(\indexMdelta)}$). We set \[\delta = \bigl[\gamma_1^{(1)}, \dots , \gamma_1^{(M)}, \dots , \gamma_\numBatches^{(1)}, \dots , \gamma_\numBatches^{(M)} \bigr].\] Finally, fix some $\tilde h \in K \setminus 1$ with $\tilde h \in \smcomm{M\kern-.05em\cdot\kern-.05em\numBatches}{K}_{\Set}$ and set $\cY = \bigcup_{\indexBatchOut,\indexMdelta} \cY_\indexBatchOut^{(\indexMdelta)}$. 
\begin{lemma}\label{lem:assignmentextension2} Let $\sigma\colon \cX \to G$ be an assignment. If $\sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in G\setminus H$ for all $\indexBatchOut$ and $\indexBatchIn$, then there is some assignment $\sigma'\colon \cY \to G$ such that $(\sigma \cup \sigma')(\delta) = \tilde h$. Otherwise $(\sigma \cup \sigma')(\delta) = 1$ for all $\sigma'\colon \cY \to G$. \end{lemma} \begin{proof} If $\sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in G\setminus H$ for all $\indexBatchOut$ and $\indexBatchIn$, then by \cref{lem:assignmentextension1}, $\big\{(\sigma \cup \sigma')(\gamma_\indexBatchOut^{(\indexMdelta)}) \:\big|\: \sigma' : \cY_\indexBatchOut^{(\indexMdelta)} \to G \big\} = K$ for all $\indexBatchOut\in \interval{1}{\numBatches}$ and $\indexMdelta\in \interval{1}{M}$. Hence, since we chose the auxiliary variables $\cY_\indexBatchOut^{(\indexMdelta)}$ to be all disjoint, we obtain \[ \tilde h \in \smcomm{M\kern-.05em\cdot\kern-.05em\numBatches}{K}_{\Set} \sse \set{(\sigma \cup \sigma')(\delta)}{\sigma' : \cY \to G}.\] On the other hand, if $\sigma(\beta_{\indexBatchOut,\indexBatchIn}) \in H$ for some $\indexBatchOut$ and $\indexBatchIn$, then, by \cref{lem:assignmentextension1}, for all $\sigma'\colon \cY \to G$ and all $\indexMdelta\in \interval{1}{M}$ we have $(\sigma \cup \sigma')(\gamma_\indexBatchOut^{(\indexMdelta)}) \in \cU_1K$. Hence, $(\sigma \cup \sigma')(\delta ) \in \smcomm{M}{\cU_1K} = 1$. \end{proof} Now we are ready to define our equation as $\delta\tilde h^{-1} $ for the reduction of \KColoring{C} to $ \EQNSAT(G)$ and $\delta $ for the reduction to $ \EQNID(G)$. The final step is to show points (\ref{pointA})--(\ref{pointD}) from above. For (\ref{pointA}) observe that the length of $\gamma_\indexBatchOut$ is $\Oh(2^{M\cdot\numBatches})$ for all $\indexBatchOut$. Thus, the length of $\delta$ is $\Oh(2^{M\cdot\numBatches})\cdot \Oh(2^{M\cdot\numBatches}) \sse 2^{\Oh(\numBatches)} = 2^{\Oh(\sqrt{m})}$ as desired. 
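The exponential bound on the lengths of $\gamma_\indexBatchOut$ and $\delta$ rests on the following routine estimate for iterated commutators, which we record as a sanity check (this elementary computation is ours, not spelled out in the text): since $[\alpha,\beta]=\alpha^{-1}\beta^{-1}\alpha\beta$, we have $\abs{[\alpha,\beta]}=2(\abs{\alpha}+\abs{\beta})$, and hence for expressions $x_1,\dots,x_k$ of length at most $\ell$,

```latex
\[
  L_k \coloneqq \abs{[x_1,\dots,x_k]}
  \;\leq\; 2\,(L_{k-1}+\ell)
  \;\leq\; \bigl(3\cdot 2^{k-1}-2\bigr)\,\ell
  \;\in\; 2^{\Oh(k)}\,\ell .
\]
```

Applied with $k = M\numBatches+1$ arguments of constant length (and a product of $\abs{K}$ such commutators), this gives the bound $\Oh(2^{M\cdot\numBatches})$ on the length of $\gamma_\indexBatchOut$; one further application to the $(M\cdot\numBatches)$-fold commutator defining $\delta$ yields $2^{\Oh(\sqrt{m})}$.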
Point (\ref{pointB}) is straightforward from the construction of $\delta$. In order to see (\ref{pointC}) and (\ref{pointD}), we use \cref{lem:assignmentextension2} to prove another lemma. We fix a bijection $\xi: G/H\to \interval{1}{C}$. For an assignment $\sigma: \cX \to G$, we define a corresponding coloring $\chi_\sigma : V \to \interval{1}{C}$ by $\chi_\sigma(v_i) = \xi(\sigma(X_i)H)$. \begin{lemma}\label{lem:reductioncorrect} Let $\sigma: \cX \to G$ be an assignment. Then \begin{itemize} \item if $\chi_\sigma$ is valid, then there is an assignment $\sigma' :\cY \to G$ such that $(\sigma \cup \sigma')(\delta) = \tilde h\neq 1$, \item if $\chi_\sigma$ is \emph{not} valid, then for all assignments $\sigma' :\cY \to G$ we have $(\sigma \cup \sigma')(\delta) = 1$. \end{itemize} \end{lemma} \begin{proof} Let $\chi_\sigma$ be a valid coloring. First, observe that under $\sigma$ all gadgets evaluate to elements outside of $H$. Indeed, if there is a gadget $X_iX_j^{-1}$, then $\oneset{v_i, v_j} \in E$ and so $\chi_\sigma(v_i) \neq \chi_\sigma(v_j)$; hence, $\sigma(X_i)H \neq \sigma(X_j)H$ (since $\xi$ is a bijection). Therefore, by \cref{lem:assignmentextension2}, it follows that $\delta$ evaluates to $\tilde h$ under some suitable assignment for $\cY$. On the other hand, if $\chi_\sigma$ is not a valid coloring, then there is an edge $\oneset{v_i, v_j} \in E$ with $\chi_\sigma(v_i) = \chi_\sigma(v_j)$. Then we have $\sigma(X_i)H = \sigma(X_j)H$. Hence, by \cref{lem:assignmentextension2}, we obtain that $(\sigma\cup \sigma')(\delta)=1$ in $G$ for every $\sigma': \cY \to G$. \end{proof} This concludes the proof of \cref{thm:main}. \end{proof} \section{Consequences}\label{sec:consequences} In this section we derive our main result Corollary~\ref{cor:mainIntro}. We start again with a lemma. 
\begin{lemma}\label{lem:KHG} For every finite solvable, non-nilpotent group $G$ of Fitting length $d$, there are proper normal subgroups $K\Nleq H \Nle G$ with $\FitL(K) = d -1$ and $\cU_{d -1}G \leq H$ such that \begin{itemize} \item for all $g \in G \setminus H$ we have $\eta_g(K) = K$, \item for all $h \in H$ we have $\FitL(\eta_h(K)) < \FitL(K)$. \end{itemize} \end{lemma} The construction for \cref{lem:KHG} resembles the ones in Lemmas 5 and 6 of \cite{Kompatscher19}. However, while in \cite{Kompatscher19} a minimal normal subgroup $N$ of a quotient $G/K$ is constructed such that $r_g$ with $r_g(x) = [x,g]$ is an automorphism of $N$ (and $N$ is abelian), in our case this is not enough since we need to apply commutator constructions to our analog of $N$ in the spirit of the divide-and-conquer approach of \cref{prop:notANDweakrefined}. \begin{proof} Let $g_1 \in G \setminus\cU_{d-1} G$ where $d$ is the Fitting length of $G$. We construct a sequence of normal subgroups $K_1, K_2, \ldots $ of $ G$ as follows: we set $K_1 = \eta_{g_1}(G) $. By \cref{lem:addGincomm}, $ K_1= \gamma_\infty\gen{g_1^G}$, so it has Fitting length $d-1$. Now, while there is some $g_i \in G$ such that $\eta_{g_i}(K_{i-1}) < K_{i-1}$ and $\FitL(\eta_{g_i}(K_{i-1})) = \FitL(K_{i-1})$, we set $K_i = \eta_{g_i}(K_{i-1})$ and continue. Since $K_i$ is a proper subgroup of $K_{i-1}$, this process eventually terminates. We call the last term $K$. We claim that $K$ satisfies the statement of \cref{lem:KHG}. By construction for every $g \in G$ one of the two cases \begin{itemize} \item $\eta_g(K) = K$ or \item $\FitL(\eta_{g}(K)) < \FitL(K)$ \end{itemize} applies. Moreover, since $K = \eta_g(K')$ for some $K'\leq G$ and some $g \in G$, we have $K = \eta_g(K') = \eta_g(\eta_g(K')) = \eta_g(K)$ by \cref{lem:repeateta} (i). By \cref{lem:Hsubgroup} (iii), the elements $\set{h \in G}{\FitL(\eta_h(K)) < \FitL(K)}$ form a subgroup $H$ of $G$. 
Clearly $H$ is normal (by the definition of $\eta_h$) and $K \leq \cU_{d -1}G \leq H$ because $\FitL(\mcomm{K}{M}{\cU_{d -1}G}) = \FitL(K) - 1$. Since there is some $g \in G$ with $K = \eta_g(K)$, we have $H \neq G$. \end{proof} Be aware that $K$ depends on the order in which the $g_i$ were chosen. Indeed, if $G$ is a direct product of two groups $G_1$ and $G_2$ of equal Fitting length, then $K$ will be contained either in $G_1$ or in $G_2$~-- in which factor depends on the choice of the $g_i$. \begin{theorem}[Corollary~\ref{cor:mainIntro}]\label{thm:main2} \iffull Let $G$ be a finite solvable group meeting one of the following conditions: \begin{enumerate} \item $\FitL(G) = 3$ and $\abs{G/\cU_2G}$ has a prime divisor 3 or greater (\ie $G/\cU_2G$ is not a 2-group), \item $\FitL(G) \geq 4$. \end{enumerate} \else Let $G$ be a finite solvable group such that either $\FitL(G) = 3$ and $\abs{G/\cU_2G}$ has a prime divisor 3 or greater (\ie $G/\cU_2G$ is not a 2-group) or $\FitL(G) \geq 4$. \fi Then $\EQNSAT(G)$ and $\EQNID(G)$ cannot be decided in deterministic time $2^{o(\log^2N)}$ under ETH. In particular, $\EQNSAT(G)$ and $\EQNID(G)$ are not in \P under ETH. \end{theorem} \begin{proof} Consider the case that $G$ has Fitting length 3 and $\abs{G/\cU_2G}$ has a prime divisor 3 or greater. Let $2^\nu$ for some $\nu \in \N$ be the greatest power of two dividing $\abs{G/\cU_2G}$. Then, the subgroup $\wt G$ generated by all $2^\nu$-th powers is normal and it is not contained in $\cU_2G$. Therefore, by \cref{lem:Fitting} it has Fitting length $3$ as well. Also, by \cref{lem:Fitting}, we know that $\cU_2\wt G =\wt G \cap \cU_2G $. Hence, $\wt G/\cU_2\wt G$ is a subgroup of $G/\cU_2G$. Moreover, since $\wt G$ is generated by $2^\nu$-th powers, the generators of $\wt G$ have odd order in $\wt G/\cU_2\wt G$. Since $\wt G/\cU_2\wt G$ is nilpotent, it follows that $|\wt G/\cU_2\wt G|$ is odd (recall that a nilpotent group is a direct product of $p$-groups). 
Since $\wt G$ is inducible in $G$, by \cref{lem:inducibleEQN}, it suffices to show that $\wt G$ satisfies the requirements of \cref{thm:main}. For this, we use \cref{lem:KHG}, which gives us normal subgroups $K \Nleq H \Nle \wt G$ with $\cU_2\wt G \leq H$, $\FitL(K) = 2$ and such that for all $g \in\wt G \setminus H$ we have $\eta_g(K) = K$, and for all $h \in H$ we have $\FitL(\eta_h(K)) \leq 1$. It only remains to show that $|\wt G/H| \geq 3$. Since $H\neq \wt G$ and $|\wt G/H|$ is odd, this holds trivially. Thus, both $\EQNSAT(G)$ and $\EQNID(G)$ are not in \P under ETH if $G$ has Fitting length 3 and $\abs{G/\cU_2G}$ has a prime divisor 3 or greater. \medskip The second case can be reduced to the first case as follows: Assume that $G$ has Fitting length $d \geq 4$. If $\abs{G/\cU_{d-1} G}$ has a prime factor $3$ or greater, we can apply the Fitting length 3 case to $G /\cL_3 G$ for \EQNSAT and to $G /\cU_{d-3} G$ for \EQNID. By \cref{lem:inducible} and \cref{lem:inducibleEQN} this implies the corollary for \EQNSAT. For \EQNID, the statement follows from \cref{lem:universallyd} and \cref{lem:univTAUT}. On the other hand, if $\abs{G/\cU_{d-1} G} = 2^\nu$ for some $\nu \geq 1$, as in the first case, we consider the subgroup $\wt G$ generated by all $2^\nu$-th powers. Then the index of $\wt G$ in $G$ is again a power of two (since the order of every element in $G/\wt G$ is a power of two). Moreover, $\wt G \leq \cU_{d-1} G$ and, by \cref{lem:Fitting}, we have \[ \wt G/\cU_{d-2} \wt G = \wt G/(\cU_{d-2}G \cap \wt G ) \cong (\wt G \cdot \cU_{d-2} G)/\cU_{d-2} G \leq \cU_{d-1} G/\cU_{d-2} G. \] Now, $\abs{\cU_{d-1} G/\cU_{d-2} G}$ cannot be a power of two because, otherwise, $G/\cU_{d-2} G$ would be a 2-group and, thus, nilpotent~-- contradicting the fact that the upper Fitting series is a shortest Fitting series. 
Since the index of $ \wt G $ in $\cU_{d-1} G$ is a power of two, we see that $\wt G \not\sse \cU_{d-2} G$ and that the index of $\cU_{d-2} \wt G$ in $\wt G$ has a prime factor other than 2. Therefore, we can apply the Fitting length 3 case to $\wt G /\cL_3 \wt G$ (resp.\ $\wt G /\cU_{d-3} \wt G$). \end{proof} \subparagraph*{The case that $G/\cU_2G$ is a 2-group.} As mentioned above, in the recent paper \cite{IdziakKK20} Idziak, Kawa\l ek, and Krzaczkowski proved a $2^{\Oh(\log^2(n))}$ lower bound under ETH for $\EQNSAT(S_4)$. They apply a reduction of \SAT to $\EQNSAT(S_4)$. Instead of using commutators to simulate conjunctions in the group, the more complicated logical function $(X,Y_1,Y_2,Y_3) \mapsto X\land(Y_1\lor Y_2\lor Y_3)$ is encoded into the group. Indeed, under suitable assumptions on the group and the range of the variables, both the expressions $w(X,Y_1,Y_2,Y_3) = X^8[X,Y_1,Y_2,Y_3]$ (see \cite{Kompatscher19}) and $s(X,Y_1,Y_2,Y_3) = X\;\![X,Y_1,Y_2,Y_3]^{-1}$ (see \cite{GorazdK10}~-- referred to by \cite{IdziakKK20}) simulate this logical function. A new paper unifying our approaches and proving \cref{thm:main2} for \emph{all} groups of Fitting length 3 can be found in \cite{IdziakKKW20arxiv}. \subparagraph*{Consequences for \ProgSAT.} We have $ \EQNSAT(G)\leq_{\mathrm{m}}^{\Ac0} \ProgSAT(G)$ for every finite group $G$ by \cite[Lem.~1]{BarringtonMMTT00} (while not explicitly stated, it is clear that this reduction is an \Ac0-reduction). Thus, by \cref{thm:main2}, $\ProgSAT(G)$ is not in \P under ETH if $G$ is of Fitting length at least 4 or $G$ is of Fitting length 3 and $G/\cU_{2}G$ is not a $2$-group. \subparagraph*{Small groups for which \cref{thm:main2} gives a lower bound.} In \cite{Horvath15} lists of groups are given where the complexity of $\EQNSAT$ and $\EQNID$ is unknown. The paper refers to a more comprehensive list available on the author's website \url{http://math.unideb.hu/horvath-gabor/research.html}. 
We downloaded the lists of groups and ran tests in GAP to determine for which of these groups \cref{thm:main2} provides lower bounds. In the list with unknown complexity for $\EQNID$ there are 2331 groups of order less than 768 out of which 1559 are of Fitting length three or greater. \cref{thm:main2} applies to 22 of them: 3 groups of Fitting length 4 and 19 groups $G$ of Fitting length 3 where $G/\cU_{2}G$ is not a 2-group. A list of the groups for which we could prove lower bounds can be found in \cref{tab:GAPresults}. \begin{table} \caption{Groups up to order 767 for which \cref{thm:main2} gives lower bounds.}\label{tab:GAPresults} {\small \begin{tabular}{|c|c|l|} \parbox{2.2cm}{Index in Small Groups Library} & \parbox{.92cm}{Fitting length} & GAP Structure description\\ \hline [ 168, 43 ] & 3 & (C2 x C2 x C2) : (C7 : C3)\\{} [ 216, 153 ] & 3 & ((C3 x C3) : Q8) : C3\\{} [ 324, 160 ] & 3 & ((C3 x C3 x C3) : (C2 x C2)) : C3\\{} [ 336, 210 ] & 3 & C2 x ((C2 x C2 x C2) : (C7 : C3))\\{} [ 432, 734 ] & 4 & (((C3 x C3) : Q8) : C3) : C2\\{} [ 432, 735 ] & 3 & C2 x (((C3 x C3) : Q8) : C3)\\{} [ 504, 52 ] & 3 & (C2 x C2 x C2) : (C7 : C9)\\{} [ 504, 158 ] & 3 & C3 x ((C2 x C2 x C2) : (C7 : C3))\\{} [ 600, 150 ] & 3 & (C5 x C5) : SL(2,3)\\{} [ 648, 531 ] & 3 & C3 . (((C3 x C3) : Q8) : C3) = (((C3 x C3) : C3) : Q8) . 
C3\\{} [ 648, 532 ] & 3 & (((C3 x C3) : C3) : Q8) : C3\\{} [ 648, 533 ] & 3 & (((C3 x C3) : C3) : Q8) : C3\\{} [ 648, 534 ] & 3 & ((C3 x C3) : Q8) : C9\\{} [ 648, 641 ] & 3 & ((C3 x C3 x C3) : Q8) : C3\\{} [ 648, 702 ] & 3 & C3 x (((C3 x C3) : Q8) : C3)\\{} [ 648, 703 ] & 4 & (((C3 x C3 x C3) : (C2 x C2)) : C3) : C2\\{} [ 648, 704 ] & 4 & (((C3 x C3 x C3) : (C2 x C2)) : C3) : C2\\{} [ 648, 705 ] & 3 & (S3 x S3 x S3) : C3\\{} [ 648, 706 ] & 3 & C2 x (((C3 x C3 x C3) : (C2 x C2)) : C3)\\{} [ 672, 1049 ] & 3 & C4 x ((C2 x C2 x C2) : (C7 : C3))\\{} [ 672, 1256 ] & 3 & C2 x C2 x ((C2 x C2 x C2) : (C7 : C3))\\{} [ 672, 1257 ] & 3 & (C2 x C2 x C2 x C2 x C2) : (C7 : C3) \end{tabular}} \end{table} \subsection{Equations in finite semigroups} For a semigroup $S$, the problems $\EQNSAT(S)$ and $\EQNID(S)$ both receive two expressions as input. The question is whether the two expressions evaluate to the same element under some (resp.\ all) assignments. For semigroups $R, S$ we say that $R$ \emph{divides} $S$ if $R$ is a quotient of a subsemigroup of $S$. The following lemmas are straightforward to prove using basic semigroup theory. For the proofs, we need Green's relations $\cH$ and $\cJ$. For a definition, we refer to \cite[Appendix A]{rs09qtheory}. For a semigroup $S$ we write $S^1$ for $S$ with an identity adjoined if there is none. \begin{lemma}\label{lem:maximalsgEQN} If $G$ is a maximal subgroup of a finite semigroup $S$, then $\EQNSAT(G) \leq_{\mathrm{m}}^{\AC} \EQNSAT(S)$. \end{lemma} \begin{proof} Let $e \in G$ denote the identity of $G$. Clearly, $G = eGe \leq eSe$ and $eSe$ is a submonoid of $S$ with identity $e$. The reduction simply replaces every variable $X$ by $eXe$ (and likewise for constants). Let $\tilde \alpha$ denote the equation we obtain from an input equation $\alpha$ this way. Now the question is whether $\tilde \alpha = e$ in $S$. Clearly, if $\alpha$ has a solution in $G$, the resulting equation $\tilde \alpha$ has a solution in $S$. 
On the other hand, if $\tilde \alpha$ has a solution in $S$, we obtain a solution of $\alpha = e$ in $S$ where every variable takes values in $eSe$. Assume we have $\sigma(X) = x \not\in G$ for a satisfying assignment $\sigma$ and some variable $X$ of $\alpha$. Since $\sigma(\alpha) = e$, we have that $e$ is in the two-sided ideal $S^1xS^1$ generated by $x=exe$. By point 2. of \cite[Exercise A.2.2]{rs09qtheory} it follows that $x\in H_e = G$ where $H_e$ denotes the $\cH$-class of $e$ under Green's relations (for a definition, we refer to \cite{rs09qtheory}) and $G$ agrees with $H_e$ because $G$ is a maximal subgroup. \end{proof} \begin{lemma}\label{dividesmaximal} If a group $G$ divides a semigroup $S$, then $G$ divides already one of the maximal subgroups (\ie regular $\cH$-classes) of $S$. \end{lemma} \begin{proof} Let $U \leq S$ be a subsemigroup and $\phi: U \to G$ a surjective semigroup homomorphism. Pick some arbitrary element $s \in U$ and let $e= s^{\omega}$ be the idempotent generated by $s$. Clearly, we have $\phi(e) = 1$. Now, the subsemigroup $eUe \leq U$ still maps surjectively onto $G$ under $\phi$: by assumption, for every $g \in G$ there is some $u_g \in U$ with $\phi(u_g) = g$; hence, $g = 1 g 1 = \phi(e) \phi(u_g) \phi(e) \in \phi(eUe)$. If $eUe$ is not contained in a maximal subgroup, then by point 2. of \cite[Exercise A.2.2]{rs09qtheory}, there is some $t \in eUe$ which is not $\cJ$-equivalent to $e$. Now, we can repeat the above process starting with $t$. This will decrease the size of $U$, so it eventually terminates. \end{proof} \begin{corollary}\label{cor:semigroup} Let $S$ be a finite semigroup and $G$ a group dividing $S$. If $\FitL(G) \geq 4 $ or $\FitL(G) =3 $ and $G/\cU_2G$ is not a 2-group, then $\EQNSAT(S)$ is not in \P under ETH. 
\end{corollary} \begin{proof} If a group $G$ with $\FitL(G) \geq 4$, or with $\FitL(G) = 3$ and $G/\cU_2G$ not a 2-group, divides $S$, then it follows from \cref{dividesmaximal} that there is a group $\wt G$ with the same properties and which is a maximal subgroup of $S$. Hence, the statement follows from \cref{lem:maximalsgEQN}. \end{proof} \cite[Theorem 1]{AlmeidaVG09} states that identity checking over $\wt G$ reduces to identity checking over $S$ where $\wt G$ is the direct product of all maximal subgroups of $S$. However, be aware that in this context the identity checking problem does not allow constants. Since the proof of \cref{thm:main} essentially relies on the fact that the subgroup $K$ is inducible and this can only be shown using constants, this does not allow us to show hardness of $\EQNID(S)$. \section{Conclusion} We have shown that, assuming the exponential time hypothesis, there are solvable groups whose equation satisfiability problem is not decidable in polynomial time. Thus, under standard assumptions from complexity theory, this gives a negative answer to \cite[Problem 1]{BurrisL04} (also conjectured in \cite{Horvath11}). \cref{thm:main2} yields a quasipolynomial time lower bound under ETH. Thus, a natural weakening of \cite[Problem 1]{BurrisL04} is as follows: \begin{conjecture}\label{conj:quasipoly} If $G$ is a finite solvable group, then $\EQNSAT(G)$ and $\EQNID(G)$ are decidable in quasipolynomial time. \end{conjecture} In \cite[Theorem 2]{BarringtonMMTT00} it is proved that $\ProgSAT(G) $ and, hence, also $ \EQNSAT(G)$ can be decided in quasipolynomial time given that $G$ is \AND-weak. As remarked in \cref{sec:programs} this theorem remains valid with our slightly less restrictive definition of \AND-weakness in \cref{conj:andweak}. Thus, \cref{conj:andweak} implies \cref{conj:quasipoly}. 
In particular, under the assumption of both ETH and the \AND-weakness conjecture (\cref{conj:andweak}), for every finite solvable group $G$ meeting the requirements of \cref{thm:main2} there are quasipolynomial upper and lower bounds for $\EQNSAT(G)$ and $\EQNID(G)$~-- so under these assumptions both problems are neither in \P nor \NP-complete. This contrasts with the situation for solving systems of equations: there is a clear \P versus \NP-complete dichotomy \cite{GoldmannR02}. \cref{thm:main2} proves lower bounds on \EQNSAT and \EQNID for all sufficiently complicated finite solvable groups. Together with the authors of \cite{IdziakKK20} we can extend this to \emph{all} groups of Fitting length three \cite{IdziakKKW20arxiv}. Possible further research might address the complexity of \EQNSAT and \EQNID in groups of Fitting length two. Another direction for future work is the complexity of \EQNID for expressions without constants.
\section{Introduction} It is well-known that the standard model based on the $G_{SM}=SU(3)_c\times SU(2)_W\times U(1)_Y$ gauge symmetry is a quite successful theory describing the interactions of the particles. However, several points remain unsolved or unverifiable; in particular, the following issues remain to be clarified: \begin{enumerate} \item The electroweak symmetry breaking scale $M_W\sim 10^2$ ${\rm GeV}$ is unnaturally small in comparison with the fundamental energy scale such as the Planck scale $M_P\sim 10^{18}$ ${\rm GeV}$. \item The number of Yukawa coupling constants is too large to give predictions for the quark and lepton mass matrices. \item There is no understanding of the meaning of generations. \end{enumerate} It is believed that the first point is solved by introducing SUSY \cite{SUSY}, but a naturalness problem still remains in the MSSM. The superpotential of the MSSM has a $\mu$-term: \eqn{ \mu H^U H^D. } The parameter $\mu$ has to be fine-tuned to $O(1\ {\rm TeV})$ in order to give the appropriate electroweak breaking scale, but this is unnatural. This problem is elegantly solved by introducing an additional U(1) gauge symmetry. This extra U(1) model was proposed in the context of the superstring-inspired $E_6$ model \cite{extra-u1}. In this model, the bare $\mu$-term is forbidden by the new $U(1)_X$ symmetry, but the trilinear term including the $G_{SM}$ singlet superfield $S$ is allowed: \eqn{ \lambda SH^UH^D. } When this singlet field $S$ develops a vacuum expectation value (VEV), the $U(1)_X$ gauge symmetry is spontaneously broken and an effective $\mu$-term, $\mu_{{\rm eff}}H^UH^D$, is generated from this term, where $\mu_{{\rm eff}}=\lambda \left<S\right>$ \cite{mu-problem}. A promising solution for the second point is a flavor symmetry \footnote{The $E_6$ inspired supersymmetric extension of the SM with a discrete flavor symmetry has been considered in \cite{f-extra-u1}.}. In fact, a flavor symmetry strongly reduces the number of Yukawa coupling constants. 
Here, we introduce a non-Abelian discrete flavor symmetry with triplet representations, reflecting the expectation that the number of generations of leptons and quarks is {\it three}. Triplet representations are contained in several non-Abelian discrete symmetry groups \cite{review}, for example, $S_4$ \cite{s4}, $A_4$ \cite{a4}, $T'$ \cite{t-prime}, $\Delta(27)$ \cite{d27} and $\Delta(54)$ \cite{d54}. In our work, we consider $S_4\times Z_2$. A promising solution for the third point can arise from the cooperation of the flavor symmetry and supersymmetry. In the MSSM, the R-parity conserving operators such as $QQQL, E^cU^cU^cD^c$ induce proton decay at an unacceptable level. But in the extra U(1) model, these operators are forbidden by the additional gauge symmetry. However, since the extra U(1) model contains additional exotic fields, the Yukawa interactions of the exotic quarks with leptons and quarks again reduce the proton lifetime to an unacceptable level. With the $S_4$ flavor symmetry, such dangerous proton decay is sufficiently suppressed. Hence, it might be expected that the generation structure can be understood as a new mechanism that stabilizes the proton \cite{s4e6}. Turning to the Higgs sector of our model, there is a serious problem of flavor changing neutral currents (FCNCs). The interactions of the multiple Higgs bosons with leptons and quarks induce too large FCNCs if the mass scale of the Higgs bosons is in the $O({\rm TeV})$ region \cite{e6fcnc}. In this paper, we show that the Higgs contributions to FCNCs may be cancelled by SUSY FCNC contributions. This cancellation softens the FCNC constraint on the Higgs masses. Because the resulting bound on the Higgs masses is in the $O({\rm TeV})$ region, our model is testable at the LHC or future colliders. The paper is organized as follows. In section 2, we explain the basic structure of the $S_4$ flavor symmetric extra U(1) model. We give the superpotential of the quark and lepton sector in section 3, and of the Higgs sector in section 4. 
In section 5, we discuss the Higgs and SUSY contributions to FCNC. Finally, we give a brief summary in section 6. Experimental values of the mixing matrices and masses of quarks and leptons, which are used to test our models, are given in the appendix. \section{The Extra U(1) Model with $S_4$ Flavor Symmetry} \subsection{The Extra U(1) Model} The basic structure of the extra U(1) model is as follows. At high energy scales, the gauge symmetry of the model contains two extra U(1)s, which form a maximal subgroup of $E_6$: $G_2=G_{SM}\times U(1)_X\times U(1)_Z\subset E_6$. The MSSM superfields and additional superfields are embedded in three 27 multiplets of $E_6$ to cancel anomalies, as illustrated in Table 1. The 27 multiplets are decomposed as ${\bf 27}\supset \left\{Q,U^c,E^c,D^c,L,N^c,H^D,g^c,H^U,\right.$ $\left.g,S\right\}$, where the $N^c$ are right-handed neutrinos (RHNs), $g$ and $g^c$ are exotic quarks, and the $S$ are $G_{SM}$ singlets. We introduce $G_{SM}\times U(1)_X$ singlets $\Phi$ and $\Phi^c$ to break $U(1)_Z$, which prevents the RHNs from having Majorana mass terms. If the $G_{SM}\times U(1)_X$ singlets develop intermediate-scale VEVs along the D-flat direction $\left<\Phi\right>=\left<\Phi^c\right>$, then $U(1)_Z$ is broken and the RHNs obtain mass terms through the trilinear terms $Y^M\Phi N^cN^c$ in the superpotential. After the symmetry is broken, the R-parity symmetry \eqn{ R=\exp\left[\frac{i\pi}{20}(3x-8y+15z)\right] } remains unbroken, so $G_1=G_{SM}\times U(1)_X\times R$ survives at low energy. This is the symmetry of the low energy extra U(1) model. Within the renormalizable operators, the full $G_2$ symmetric superpotential is given as follows: \eqn{ W_1&=&W_0+W_S+W_B, \\ W_0&=&Y^UH^UQU^c+Y^DH^DQD^c+Y^EH^DLE^c+Y^N H^ULN^c+Y^M\Phi N^cN^c, \\ W_S&=&kSgg^c+\lambda SH^UH^D, \\ W_B&=&\lambda_1 QQg+\lambda_2 g^cU^cD^c+\lambda_3 gE^cU^c+\lambda_4 g^cLQ+\lambda_5gD^cN^c. } For simplicity, we drop gauge and generation indices. 
Here $W_0$ is the same as the superpotential of the MSSM with RHNs apart from the absence of the $\mu$-term, and $W_S$ and $W_B$ are the new interactions. In $W_S$, the term $kSgg^c$ drives the soft SUSY breaking scalar squared mass of $S$ negative through the renormalization group equations (RGEs), thereby breaking $U(1)_X$ and generating the mass terms of the exotic quarks, and $\lambda SH^UH^D$ is the source of the effective $\mu$-term. Therefore, $W_0$ and $W_S$ are phenomenologically necessary. In contrast, $W_B$ breaks baryon number and leads to very rapid proton decay, which is phenomenologically unacceptable, so it must be forbidden. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c||c|c|} \hline &$Q$ &$U^c$ &$E^c$&$D^c$ &$L$ &$N^c$&$H^D$&$g^c$ &$H^U$&$g$ &$S$ &$\Phi$&$\Phi^c$\\ \hline $SU(3)_c$&$3$ &$3^*$ &$1$ &$3^*$ &$1$ &$1$ &$1$ &$3^*$ &$1$ &$3$ &$1$ &$1$ &$1$ \\ \hline $SU(2)_W$&$2$ &$1$ &$1$ &$1$ &$2$ &$1$ &$2$ &$1$ &$2$ &$1$ &$1$ &$1$ &$1$ \\ \hline $y=6Y$ &$1$ &$-4$ &$6$ &$2$ &$-3$&$0$ &$-3$ &$2$ &$3$ &$-2$&$0$ &$0$ &$0$ \\ \hline $x$ &$1$ &$1$ &$1$ &$2$ &$2$ &$0$ &$-3$ &$-3$ &$-2$ &$-2$&$5$ &$0$ &$0$ \\ \hline $z$ &$-1$&$-1$ &$-1$ &$2$ &$2$ &$-4$ &$-1$ &$-1$ &$2$ &$2$ &$-1$&$8$ &$-8$ \\ \hline $R$ &$-$ &$-$ &$-$ &$-$ &$-$ &$-$ &$+$ &$+$ &$+$ &$+$ &$+$ &$+$ &$+$ \\ \hline \end{tabular} \end{center} \caption{$G_2$ assignment of fields. Here $x$, $y$ and $z$ are the charges of $U(1)_X$, $U(1)_Y$ and $U(1)_Z$, and $Y$ is the hypercharge.} \end{table} \subsection{$S_4$ Flavor Symmetry} We show how the $S_4$ flavor symmetry forbids the baryon number violating superpotential $W_B$. The non-Abelian group $S_4$ has two singlet representations ${\bf 1}$, ${\bf 1'}$, one doublet representation ${\bf 2}$ and two triplet representations ${\bf 3}$, ${\bf 3'}$, where ${\bf 1}$ is the trivial representation. 
As the generation number of quarks and leptons is three, at least one superfield of $\left\{Q,U^c,E^c,D^c,L,N^c,H^D,g^c,H^U,g,S\right\}$ must be assigned to a triplet of $S_4$ in order to address the flavor puzzle. As we assume that the full $E_6$ symmetry is not realized at the Planck scale, there is no need to assign all superfields to the same $S_4$ representations. The multiplication rules of these representations are as follows: \eqn{ \begin{tabular}{lcl} ${\bf 3}\times {\bf 3}={\bf 1}+{\bf 2}+{\bf 3}+{\bf 3'}$, & & ${\bf 3'}\times {\bf 3'}={\bf 1}+{\bf 2}+{\bf 3}+{\bf 3'}$, \\ ${\bf 3}\times {\bf 3'}={\bf 1'}+{\bf 2}+{\bf 3}+{\bf 3'}$, & & ${\bf 2}\times {\bf 3}={\bf 3}+{\bf 3'}$, \\ ${\bf 2}\times {\bf 3'}={\bf 3}+{\bf 3'}$, & & ${\bf 2}\times {\bf 2}={\bf 1}+{\bf 1'}+{\bf 2}$, \\ ${\bf 1'}\times {\bf 3}={\bf 3'}$, & & ${\bf 1'}\times {\bf 3'}={\bf 3}$, \\ ${\bf 1'}\times {\bf 2}={\bf 2}$, & & ${\bf 1'}\times {\bf 1'}={\bf 1}$. \end{tabular} } With these rules, it is easily shown that all the $S_4$ invariants consisting of two or three non-trivial representations are given by \eqn{ &&{\bf 1'}\cdot{\bf 1'},\quad {\bf 2}\cdot{\bf 2},\quad {\bf 3}\cdot{\bf 3},\quad {\bf 3'}\cdot{\bf 3'},\quad {\bf 1'}\cdot{\bf 2}\cdot{\bf 2},\quad {\bf 1'}\cdot{\bf 3}\cdot{\bf 3'},\quad {\bf 2}\cdot{\bf 2}\cdot{\bf 2}, \quad {\bf 2}\cdot{\bf 3}\cdot{\bf 3}, \nonumber \\ &&{\bf 2}\cdot{\bf 3}\cdot{\bf 3'},\quad {\bf 2}\cdot{\bf 3'}\cdot{\bf 3'},\quad {\bf 3}\cdot{\bf 3}\cdot{\bf 3},\quad {\bf 3}\cdot{\bf 3}\cdot{\bf 3'},\quad {\bf 3}\cdot{\bf 3'}\cdot{\bf 3'},\quad {\bf 3'}\cdot{\bf 3'}\cdot{\bf 3'}. } From these, one can see that there is no invariant containing only one triplet \footnote{$T'$ does not have this property, but $A_4$, $\Delta(27)$ and $\Delta(54)$ do.}. Therefore, if $g$ and $g^c$ are assigned to triplets and the others to singlets or doublets, then $W_B$ is forbidden. This provides a solution to the proton lifetime problem.
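The absence of invariants with a single triplet can be verified by character theory. The sketch below (our own illustration, using the standard $S_4$ character table over its five conjugacy classes) counts the multiplicity of the trivial representation in every product of two or three factors and checks that it vanishes whenever exactly one factor is a triplet:

```python
# Character-theoretic check: no S4 invariant of 2 or 3 factors contains
# exactly one triplet (3 or 3'). Classes of S4: e, (12), (12)(34), (123), (1234).
from itertools import combinations_with_replacement

sizes = [1, 6, 3, 8, 6]  # conjugacy class sizes, |S4| = 24
chi = {
    "1":  [1,  1,  1,  1,  1],
    "1p": [1, -1,  1,  1, -1],
    "2":  [2,  0,  2, -1,  0],
    "3":  [3,  1, -1,  0, -1],
    "3p": [3, -1, -1,  0,  1],
}

def n_singlets(reps):
    """Multiplicity of the trivial rep in the tensor product of reps."""
    total = 0
    for k, size in enumerate(sizes):
        prod = 1
        for r in reps:
            prod *= chi[r][k]
        total += size * prod
    return total // 24

triplets = {"3", "3p"}
for n in (2, 3):
    for reps in combinations_with_replacement(chi, n):
        if sum(r in triplets for r in reps) == 1:
            assert n_singlets(reps) == 0, reps
print("no S4 invariant of 2 or 3 factors contains exactly one triplet")
```

One can also confirm the listed invariants, e.g. `n_singlets(("3", "3")) == 1` and `n_singlets(("1p", "3", "3p")) == 1`.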
\subsection{Exotic Quark Decay and Proton Decay Suppression} The absence of $W_B$ makes the exotic quarks and the proton stable, but the existence of exotic quarks with lifetimes longer than 0.1 second spoils the success of Big Bang nucleosynthesis. In order to evade this problem, the $S_4$ symmetry must be broken. We therefore assume that the $S_4$ breaking terms are induced from non-renormalizable terms. We introduce a $G_2$ singlet $T$ as a triplet of $S_4$ and add the quartic terms: \eqn{ W_{NRB}=\frac{1}{M_P}T\left(QQg+g^cU^cD^c+gE^cU^c+g^cLQ+gD^cN^c\right). } Here the order-one coefficients in front of each term are omitted for simplicity. When $T$ develops a VEV with \eqn{ \frac{\left<T\right>}{M_P}\sim 10^{-12}, \label{condition} } the phenomenological constraints on the lifetimes of the proton and the exotic quarks are satisfied at the same time \cite{f-extra-u1}. The violation of the $S_4$ symmetry gives $S_4$ breaking corrections to the effective superpotential through non-renormalizable terms, which are expressed in the same manner as Eq.(10): \eqn{ W_{NRFV}=\frac{1}{M^2_P}T^2\left(H^UQU^c+H^DQD^c+H^DLE^c+ H^ULN^c+M'N^cN^c+SH^UH^D\right)+\frac{1}{M_P}TSgg^c. } Since the above corrections are negligibly small, the $S_4$ flavor symmetry approximately holds in the low energy effective theory. One finds that the most economical flavon sector is obtained by replacing $T$ with the superfield product $\Phi\Phi^c/M_P$, embedding $\Phi^c$ in an $S_4$ triplet (hereafter, we call $\Phi$ and $\Phi^c$ flavons, which trigger the flavor violation). In this case, the condition of Eq.
(\ref{condition}) corresponds to the following relation: \eqn{ \frac{\langle\Phi\rangle \langle\Phi^c\rangle }{M^2_P}\sim 10^{-12}, } and then the right-handed neutrino mass scale is predicted as follows: \eqn{M_R\sim \langle\Phi\rangle\sim 10^{-6}M_P\sim 10^{12}\ {\rm GeV}.} Hence, by applying the above relation to measurements of the lifetimes of the proton and the exotic quarks (in our model, we call the exotic quarks $g$-quarks), it is expected that one can determine the right-handed neutrino mass scale. \section{Quark and Lepton Sector} First, we define $W_0$, which contributes to the mass matrices of quarks and leptons. Although the $S_4$ symmetry reduces the number of Yukawa coupling constants, there is still an overabundance of parameters. In order to reduce the Yukawa coupling constants further, we extend the flavor symmetry to $S_4\times Z_2$ \cite{lepton}. In our model, all chiral superfields are assigned to the representations of $S_4\times Z_2$ as in Table 2. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &$Q_i$ &$Q_3$ &$U^c_i$ &$U^c_3$ &$E^c_1$ &$E^c_2$ &$E^c_3$ &$D^c_i$ &$D^c_3$ &$L_i$ &$L_3$ &$N^c_i$\\ \hline $S_4$&${\bf 2}$&${\bf 1}$&${\bf 2}$&${\bf 1}$&${\bf 1}$&${\bf 1}$&${\bf 1'}$&${\bf 2}$&${\bf 1}$&${\bf 2}$&${\bf 1}$&${\bf 2}$\\ \hline $Z_2$&$+$ &$-$ &$-$ &$+$ &$+$ &$-$ &$+$ &$-$ &$+$ &$+$ &$+$ &$+$\\ \hline &$N^c_3$ &$H^D_i$ &$H^D_3$ &$H^U_i$ &$H^U_3$ &$S_i$ &$S_3$ &$g_a$ &$g^c_a$ &$\Phi_i$ &$\Phi_3$ &$\Phi^c_a$\\ \hline $S_4$&${\bf 1}$&${\bf 2}$&${\bf 1}$&${\bf 2}$&${\bf 1}$&${\bf 2}$&${\bf 1}$&${\bf 3}$ &${\bf 3}$&${\bf 2}$&${\bf 1}$&${\bf 3}$\\ \hline $Z_2$&$+$ &$+$ &$-$ &$+$ &$-$ &$-$ &$+$ &$+$ &$+$ &$-$ &$+$ &$+$\\ \hline \end{tabular} \end{center} \caption{$S_4\times Z_2$ assignment of superfields (the index $i$ of the $S_4$ doublets runs over $i=1,2$, and the index $a$ of the $S_4$ triplets runs over $a=1,2,3$).} \end{table} The superpotential $W_0$ which is consistent with $G_2$ and the symmetries of Table 2 is given by \eqn{
W_0&=&Y^U_1H^U_3(Q_1U^c_1+Q_2U^c_2)+Y^U_3H^U_3Q_3U^c_3\nonumber \\ &+&Y^U_4Q_3(H^U_1U^c_1+H^U_2U^c_2)+Y^U_5(H^U_1Q_1+H^U_2Q_2)U^c_3\nonumber \\ &+&Y^D_1H^D_3(Q_1D^c_1+Q_2D^c_2)+Y^D_3H^D_3Q_3D^c_3\nonumber \\ &+&Y^D_4Q_3(H^D_1D^c_1+H^D_2D^c_2)+Y^D_5(H^D_1Q_1+H^D_2Q_2)D^c_3\nonumber \\ &+&Y^N_2\left[H^U_1(L_1N^c_2+L_2N^c_1)+H^U_2(L_1N^c_1-L_2N^c_2)\right] \nonumber \\ &+&Y^N_3H^U_3L_3N^c_3+Y^N_4L_3(H^U_1N^c_1+H^U_2N^c_2) \nonumber \\ &+&Y^E_1E^c_1(H^D_1L_1+H^D_2L_2)+Y^E_2E^c_2H^D_3L_3+Y^E_3E^c_3(H^D_1L_2-H^D_2L_1) \nonumber \\ &+&\frac12 Y^M_1\Phi(N^c_1N^c_1+N^c_2N^c_2)+\frac12 Y^M_3\Phi N^c_3N^c_3. } There are sixteen complex Yukawa coupling constants in this superpotential. Twelve of their phases can be absorbed by redefining five of the six quark superfields $\{Q_i,Q_3,U^c_i,U^c_3,D^c_i,D^c_3\}$ and the seven lepton superfields $\{L_i,L_3,E^c_1,E^c_2,E^c_3,N^c_i,N^c_3\}$. Without loss of generality, we can take $Y^U_{3,4,5},Y^D_{4,5},Y^N_{2,4},Y^E_{1,2,3},Y^M_{1,3}$ to be real. We define the phases of the remaining complex Yukawa couplings as follows: \eqn{ Y^U_1=e^{i\alpha}|Y^U_1|,\quad Y^D_1=e^{i\beta}|Y^D_1|,\quad Y^D_3=e^{i\gamma}|Y^D_3|,\quad Y^N_3=e^{i\delta}|Y^N_3|. } We write the VEV of the flavon as \eqn{ \left<\Phi\right>=V, } and the VEVs of the $SU(2)_W$ doublet Higgses as \eqn{ &&\left<H^U_1\right>=v_u\cos\theta_u,\quad \left<H^U_2\right>=v_u\sin\theta_u,\quad \left<H^U_3\right>=v'_u, \nonumber \\ &&\left<H^D_1\right>=v_d\cos\theta_d,\quad \left<H^D_2\right>=v_d\sin\theta_d,\quad \left<H^D_3\right>=v'_d, } where we assume these VEVs are real, the parameters $V, v_{u,d}, v'_{u,d}$ are non-negative, and the relation \eqn{ \sqrt{v^2_u+v'^2_u+v^2_d+v'^2_d}=174\ {\rm GeV} } is satisfied.
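The neutrino texture that follows from the $Y^N$ and $Y^M$ terms of $W_0$ can be cross-checked numerically. The sketch below (our own illustration, with random parameter values standing in for the couplings) builds the Dirac and Majorana mass matrices implied by these terms and verifies that the seesaw combination $M_DM_R^{-1}M_D^t$ has the expected texture, with equal (1,1) and (2,2) entries and a vanishing (1,2) entry:

```python
# Seesaw texture check with random parameters: m2, m3, m4 stand for
# m^nu_2, |m^nu_3|, m^nu_4; M1, M3 are the Majorana masses.
import numpy as np

rng = np.random.default_rng(1)
m2, m3, m4 = rng.uniform(0.1, 1.0, 3)
M1, M3 = rng.uniform(1.0, 10.0, 2)
theta_u = rng.uniform(0, 2 * np.pi)
delta = rng.uniform(0, 2 * np.pi)
cu, su = np.cos(theta_u), np.sin(theta_u)

# Dirac and Majorana matrices read off from the Y^N and Y^M terms of W_0
M_D = np.array([[m2 * su,  m2 * cu, 0],
                [m2 * cu, -m2 * su, 0],
                [m4 * cu,  m4 * su, m3 * np.exp(1j * delta)]])
M_R = np.diag([M1, M1, M3])

M_nu = M_D @ np.linalg.inv(M_R) @ M_D.T
assert np.isclose(M_nu[0, 0], M_nu[1, 1])   # rho_2^2 on both diagonal entries
assert np.isclose(M_nu[0, 1], 0)            # texture zero
assert np.isclose(M_nu[0, 2], m2 * m4 * np.sin(2 * theta_u) / M1)
assert np.isclose(M_nu[1, 2], m2 * m4 * np.cos(2 * theta_u) / M1)
print("seesaw texture reproduced")
```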
If we define the non-negative mass parameters as follows: \eqn{ \begin{tabular}{llll} $M_1=Y^M_1V$, & $M_3=Y^M_3V$, & & \\ $m^u_1=|Y^U_1|v'_u$, & $m^u_3=Y^U_3v'_u$, & $m^u_4=Y^U_4v_u$, & $m^u_5=Y^U_5v_u$, \\ $m^d_1=|Y^D_1|v'_d$, & $m^d_3=|Y^D_3|v'_d$, & $m^d_4=Y^D_4v_d$, & $m^d_5=Y^D_5v_d$, \\ $m^\nu_2=Y^N_2v_u$, & $m^\nu_3=|Y^N_3|v'_u$, & $m^\nu_4=Y^N_4v_u$, & \\ $m^l_1=Y^E_1v_d$, & $m^l_2=Y^E_2v'_d$, & $m^l_3=Y^E_3v_d$, & \end{tabular} } then the mass matrices of up-type quarks ($M_u$), down-type quarks ($M_d$), charged leptons ($M_l$), Dirac neutrinos ($M_D$) and Majorana neutrinos ($M_R$) are given by \eqn{ \begin{tabular}{ll} $M_u=\Mat3{e^{i\alpha}m^u_1}{0}{m^u_5\cos\theta_u} {0}{e^{i\alpha}m^u_1}{m^u_5\sin\theta_u} {m^u_4\cos\theta_u}{m^u_4\sin\theta_u}{m^u_3}$, & $M_d=\Mat3{e^{i\beta}m^d_1}{0}{m^d_5\cos\theta_d} {0}{e^{i\beta}m^d_1}{m^d_5\sin\theta_d} {m^d_4\cos\theta_d}{m^d_4\sin\theta_d}{e^{i\gamma}m^d_3}$, \\ $M_l=\Mat3{m^l_1\cos\theta_d}{0}{-m^l_3\sin\theta_d} {m^l_1\sin\theta_d}{0}{m^l_3\cos\theta_d} {0}{m^l_2}{0}$, & $M_D=\Mat3{m^\nu_2\sin\theta_u}{m^\nu_2\cos\theta_u}{0} {m^\nu_2\cos\theta_u}{-m^\nu_2\sin\theta_u}{0} {m^\nu_4\cos\theta_u}{m^\nu_4\sin\theta_u}{e^{i\delta}m^\nu_3}$, \\ $M_R=\Mat3{M_1}{0}{0}{0}{M_1}{0}{0}{0}{M_3}$. & \end{tabular} } After the seesaw mechanism, the light neutrino mass matrix is given by \eqn{ M_\nu&=&M_DM^{-1}_RM^t_D=\Mat3{\rho^2_2}{0}{\rho_2\rho_4\sin2\theta_u} {0}{\rho^2_2}{\rho_2\rho_4\cos2\theta_u} {\rho_2\rho_4\sin2\theta_u}{\rho_2\rho_4\cos2\theta_u}{\rho^2_4+e^{2i\delta}\rho^2_3}, } where \eqn{ \rho_2=\frac{m^\nu_2}{\sqrt{M_1}},\quad \rho_4=\frac{m^\nu_4}{\sqrt{M_1}},\quad \rho_3=\frac{m^\nu_3}{\sqrt{M_3}}. 
} In the lepton sector, the mass eigenvalues and diagonalization matrix of charged leptons are given by \eqn{ V^\dagger_{lR}M^t_l V_{lL}&=&diag(m_e,m_\mu, m_\tau)=(m^l_2,m^l_3,m^l_1), \\ V_{lL}&=&\Mat3{0}{-\sin\theta_d}{\cos\theta_d} {0}{\cos\theta_d}{\sin\theta_d} {-1}{0}{0}, \\ V_{lR}&=&\Mat3{0}{0}{1}{1}{0}{0}{0}{1}{0}, } and those of the light neutrinos are given by \eqn{ V^t_\nu M_\nu V_\nu&=&diag(e^{i(\phi_1-\phi)}m_{\nu_1},e^{i(\phi_2+\phi)}m_{\nu_2},m_{\nu_3}), \\ V_\nu&=&\Mat3{\sin2\theta_u}{-\cos2\theta_u}{0} {\cos2\theta_u}{\sin2\theta_u}{0} {0}{0}{1} \Mat3{-\sin\theta_\nu}{e^{i\phi}\cos\theta_\nu}{0} {0}{0}{1} {e^{-i\phi}\cos\theta_\nu}{\sin\theta_\nu}{0}, } from Eq.(25) and Eq.(28), the Maki-Nakagawa-Sakata (MNS) matrix is given by \eqn{ V'_{MNS}&=&V^\dagger_{lL}V_\nu P_\nu=\Mat3{-e^{-i\phi}\cos\theta_\nu}{-\sin\theta_\nu}{0} {-\cos\bar{\theta}\sin\theta_\nu}{e^{i\phi}\cos\bar{\theta}\cos\theta_\nu}{\sin\bar{\theta}} {-\sin\bar{\theta}\sin\theta_\nu}{e^{i\phi}\sin\bar{\theta}\cos\theta_\nu}{-\cos\bar{\theta}}P_\nu, } where \eqn{ &&\bar{\theta}=\theta_d+2\theta_u, \\ &&P_\nu=diag(e^{-i(\phi_1-\phi)/2},e^{-i(\phi_2+\phi)/2},1). } Following ref. \cite{lepton}, we get \eqn{ \tan^2\theta_\nu&=&\frac{\sqrt{m^2_{\nu_2}-m^2_{\nu_3}\sin^2\phi}-m_{\nu_3}|\cos\phi|} {\sqrt{m^2_{\nu_1}-m^2_{\nu_3}\sin^2\phi}+m_{\nu_3}|\cos\phi|}, \\ \sin(\phi_1-\phi_2)&=&\frac{m_{\nu_3}\sin\phi}{m_{\nu_1}m_{\nu_2}} \left[\sqrt{m^2_{\nu_2}-m^2_{\nu_3}\sin^2\phi}+\sqrt{m^2_{\nu_1}-m^2_{\nu_3}\sin^2\phi}\right], \\ \sin(\phi_1-\phi)&=&\frac{\sin\phi}{m_{\nu_1}} \left[m_{\nu_3}\sqrt{1-\sin^2\phi}+\sqrt{m^2_{\nu_1}-m^2_{\nu_3}\sin^2\phi}\right]. } After the redefinition of the fields, the MNS matrix is transformed to the standard form in Eq.(106) where the parameters are given by \eqn{ \theta_{13}=0,\quad \theta_{12}=\theta_\nu,\quad \theta_{23}=\bar{\theta},\quad \alpha'=\frac{\phi_1-\phi_2}{2},\quad \beta'=\frac{\phi_1-\phi}{2}. 
} If the neutrino masses have been measured, the two Majorana phases $\alpha'$ and $\beta'$ would be predicted by Eqs.(32), (33), (34) and (35). In addition, $\theta_{13}=0$ is predicted, so totally three predictions are given in the lepton sector. In the quark sector, the mass eigenvalues and diagonalization matrices of quarks are given as follows: \eqn{ V^\dagger_{uR}M^t_uV_{uL}&=&diag(m_u,m_c,m_t), \\ V_{uL}&=&V_u\Mat3{1}{0}{0}{0}{1}{0}{0}{0}{e^{i\phi_{uL}}} \Mat3{\cos\theta_{uL}}{0}{\sin\theta_{uL}}{0}{1}{0}{-\sin\theta_{uL}}{0}{\cos\theta_{uL}}S_{12} , \\ V_{uR}&=&V_u\Mat3{1}{0}{0}{0}{1}{0}{0}{0}{e^{i\phi_{uR}}} \Mat3{\cos\theta_{uR}}{0}{\sin\theta_{uR}}{0}{1}{0}{-\sin\theta_{uR}}{0}{\cos\theta_{uR}}S_{12} , \\ V_u&=&\Mat3{\cos\theta_u}{-\sin\theta_u}{0}{\sin\theta_u}{\cos\theta_u}{0}{0}{0}{1}, \\ m^2_u&=&(m^u_1)^2 , \\ m^2_c&=&\frac12\left[(m^u_1)^2+(m^u_3)^2+(m^u_4)^2+(m^u_5)^2-\mu^2_u\right] , \\ m^2_t&=&\frac12\left[(m^u_1)^2+(m^u_3)^2+(m^u_4)^2+(m^u_5)^2+\mu^2_u\right] , \\ \mu^2_u&=&\sqrt{\left((m^u_3)^2+(m^u_4)^2-(m^u_1)^2-(m^u_5)^2\right)^2+4L^2_u} , \\ L_u&=&\sqrt{(m^u_1m^u_4\cos\alpha+m^u_3m^u_5)^2+(m^u_1m^u_4\sin\alpha)^2} , \\ R_u&=&\sqrt{(m^u_1m^u_5\cos\alpha+m^u_3m^u_4)^2+(m^u_1m^u_5\sin\alpha)^2} , \\ \tan2\theta_{uL}&=&\frac{2L_u}{(m^u_3)^2+(m^u_4)^2-(m^u_1)^2-(m^u_5)^2}, \\ \tan\phi_{uL}&=&\frac{m^u_1m^u_4\sin\alpha}{m^u_1m^u_4\cos\alpha+m^u_3m^u_5}, \\ \tan2\theta_{uR}&=&\frac{2R_u}{(m^u_3)^2+(m^u_5)^2-(m^u_1)^2-(m^u_4)^2}, \\ \tan\phi_{uR}&=&\frac{-m^u_1m^u_5\sin\alpha}{m^u_1m^u_5\cos\alpha+m^u_3m^u_4}, \\ V^\dagger_{dR}M^t_dV_{dL}&=&diag(m_d,m_s,m_b), \\ V_{dL}&=&V_d\Mat3{1}{0}{0}{0}{1}{0}{0}{0}{e^{i\phi_{dL}}} \Mat3{\cos\theta_{dL}}{0}{\sin\theta_{dL}}{0}{1}{0}{-\sin\theta_{dL}}{0}{\cos\theta_{dL}}S_{12} , \\ V_{dR}&=&V_d\Mat3{1}{0}{0}{0}{1}{0}{0}{0}{e^{i\phi_{dR}}} \Mat3{\cos\theta_{dR}}{0}{\sin\theta_{dR}}{0}{1}{0}{-\sin\theta_{dR}}{0}{\cos\theta_{dR}}S_{12} , \\ 
V_d&=&\Mat3{\cos\theta_d}{-\sin\theta_d}{0}{\sin\theta_d}{\cos\theta_d}{0}{0}{0}{1}, \\ m^2_d&=&(m^d_1)^2 , \\ m^2_s&=&\frac12\left[(m^d_1)^2+(m^d_3)^2+(m^d_4)^2+(m^d_5)^2-\mu^2_d\right] , \\ m^2_b&=&\frac12\left[(m^d_1)^2+(m^d_3)^2+(m^d_4)^2+(m^d_5)^2+\mu^2_d\right] , \\ \mu^2_d&=&\sqrt{\left((m^d_3)^2+(m^d_4)^2-(m^d_1)^2-(m^d_5)^2\right)^2+4L^2_d} , \\ L_d&=&\sqrt{(m^d_1m^d_4\cos\beta+m^d_3m^d_5\cos\gamma)^2+(m^d_1m^d_4\sin\beta-m^d_3m^d_5\sin\gamma)^2} , \\ R_d&=&\sqrt{(m^d_1m^d_5\cos\beta+m^d_3m^d_4\cos\gamma)^2+(m^d_1m^d_5\sin\beta-m^d_3m^d_4\sin\gamma)^2} , \\ \tan2\theta_{dL}&=&\frac{2L_d}{(m^d_3)^2+(m^d_4)^2-(m^d_1)^2-(m^d_5)^2} , \\ \tan\phi_{dL}&=&\frac{m^d_1m^d_4\sin\beta-m^d_3m^d_5\sin\gamma}{m^d_1m^d_4\cos\beta+m^d_3m^d_5\cos\gamma} , \\ \tan2\theta_{dR}&=&\frac{2R_d}{(m^d_3)^2+(m^d_5)^2-(m^d_1)^2-(m^d_4)^2} , \\ \tan\phi_{dR}&=&\frac{-m^d_1m^d_5\sin\beta+m^d_3m^d_4\sin\gamma}{m^d_1m^d_5\cos\beta+m^d_3m^d_4\cos\gamma} , \\ S_{12}&=&\Mat3{0}{1}{0}{-1}{0}{0}{0}{0}{1} , } from which the Cabibbo-Kobayashi-Maskawa (CKM) matrix is given by \eqn{ &&V_{CKM}=V^\dagger_{uL} V_{dL}= \nonumber \\ &&\Mat3{\cos\tilde{\theta}}{-\sin\tilde{\theta}\cos\theta_{dL}}{-\sin\tilde{\theta}\sin\theta_{dL}} {\sin\tilde{\theta}\cos\theta_{uL}} {\cos\tilde{\theta}\cos\theta_{uL}\cos\theta_{dL}+e^{i\bar{\phi}}\sin\theta_{uL}\sin\theta_{dL}} {\cos\tilde{\theta}\cos\theta_{uL}\sin\theta_{dL}-e^{i\bar{\phi}}\sin\theta_{uL}\cos\theta_{dL}} {\sin\tilde{\theta}\sin\theta_{uL}} {\cos\tilde{\theta}\sin\theta_{uL}\cos\theta_{dL}-e^{i\bar{\phi}}\cos\theta_{uL}\sin\theta_{dL}} {\cos\tilde{\theta}\sin\theta_{uL}\sin\theta_{dL}+e^{i\bar{\phi}}\cos\theta_{uL}\cos\theta_{dL}}, \nonumber \\ } where \eqn{ \tilde{\theta}=\theta_d-\theta_u,\quad \bar{\phi}=\phi_{dL}-\phi_{uL}. 
} The experimental values of the matrix elements and the Jarlskog invariant in Eq.(172) are reproduced by putting \eqn{ \tilde{\theta}=13.3^\circ,\quad \theta_{uL}=2.05^\circ,\quad \theta_{dL}=0.99^\circ,\quad \bar{\phi}=-83.9^\circ. } In ref. \cite{lepton}, it is assumed that the VEVs of the Higgs $S_3$ doublets are fixed in the direction $\theta_u=\theta_d=\frac{\pi}{4}$, which enforces $\tilde{\theta}=0$ (and predicts a maximal atmospheric neutrino mixing angle). This means the Cabibbo angle vanishes. In contrast, there is no such condition on the vacuum directions in this model. Due to an overabundance of free parameters, there is no prediction in the quark sector, but we can show that consistent parameter sets exist. For example, if we put \eqn{ \begin{tabular}{llll} $\alpha=0.00^\circ$, & $\beta=-83.9^\circ$, & $\gamma=83.9^\circ$, & \\ $m^u_1=1.28\ {\rm MeV}$, & $m^u_3=172\ {\rm GeV}$, & $m^u_4=17.2\ {\rm GeV}$, & $m^u_5=6.23\ {\rm GeV}$, \\ $m^d_1=2.91\ {\rm MeV}$, & $m^d_3=1.94\ {\rm GeV}$, & $m^d_4=2.14\ {\rm GeV}$, & $m^d_5=74.2\ {\rm MeV}$, \\ $m^l_1=1.75\ {\rm GeV}$, & $m^l_2=487\ {\rm keV}$, & $m^l_3=103\ {\rm MeV}$, & \end{tabular} } then the quark masses in Eq.(171) and the parameters of the CKM matrix in Eq.(67) are reproduced. In this case, the unknown mixing angles $\theta_{uR},\theta_{dR}$ and phases $\phi_{uR},\phi_{dR}$ are given by \eqn{ \theta_{uR}=5.70^\circ,\quad \theta_{dR}=47.8^\circ,\quad \phi_{uR}=\phi_{uL}=0.00^\circ,\quad \phi_{dR}=-\phi_{dL}=83.9^\circ.
} These parameters can be expressed in terms of the perturbative Yukawa coupling constants and the VEVs of the Higgs fields through Eq.(20), for example as follows: \eqn{ \begin{tabular}{llll} $v_u=41.4\ {\rm GeV}$, & $v'_u=150\ {\rm GeV}$, & $v_d=60.0\ {\rm GeV}$, & $v'_d=49.5\ {\rm GeV}$, \\ $\left|Y^U_1\right|=8.53\times 10^{-6}$,& $\left|Y^U_3\right|=1.15$, & $\left|Y^U_4\right|=0.415$,& $\left|Y^U_5\right|=0.150$, \\ $\left|Y^D_1\right|=5.87\times 10^{-5}$,& $\left|Y^D_3\right|=0.0392$, & $\left|Y^D_4\right|=0.0357$,& $\left|Y^D_5\right|=1.23\times 10^{-3}$, \\ $\left|Y^E_1\right|=0.0292$, & $\left|Y^E_2\right|=9.84\times 10^{-6}$,& $\left|Y^E_3\right|=1.72\times 10^{-3}$. & \end{tabular} } As all the coupling constants of the model are perturbative, it is consistent for the fundamental energy scale to be much larger than the electroweak scale, which is the basis of the naturalness problem. \section{Higgs sector} Next, we define the Higgs potential and solve its minimization conditions approximately. With the gauge symmetry of Table 1 and the flavor symmetry of Table 2, the superpotential of the Higgs sector is given by \eqn{ W_H &=& \lambda_1S_3 (H^U_1H^D_1 + H^U_2H^D_2) + \lambda_3S_3 H^U_3H^D_3\nonumber\\ &+& \lambda_4 H^U_3 (S_1H^D_1 + S_2H^D_2) + \lambda_5 H^D_3 (S_1H^U_1 + S_2H^U_2)\subset W_S, } where one can take $\lambda_{1,3,4,5}$ real without any loss of generality, by redefining four of the fields $\{S_i, S_3, H^U_i, H^U_3, H^D_i, H^D_3\}$. However, this superpotential could have would-be Goldstone bosons when all of the Higgs fields acquire VEVs, because of an accidental $O(2)$ symmetry induced by a common rotation of the $S_4$ doublets. In order to avoid this problem, we assume that the flavor symmetry is explicitly broken in the soft scalar mass terms, which provide controllable parameters for the direction of the $SU(2)$ doublet Higgs VEVs. As the Higgs potential has too many unknown parameters, we make several assumptions.
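As a numerical aside, the sample point quoted in the previous section can be verified directly: the quoted Yukawa couplings and VEVs should reproduce the mass parameters of Eq.(20) and satisfy the 174 GeV electroweak constraint. The following sketch (our own check, with a 1\% tolerance for the rounding of the quoted numbers) confirms this:

```python
# Consistency check of the sample Yukawa couplings and Higgs VEVs.
import math

v_u, vp_u, v_d, vp_d = 41.4, 150.0, 60.0, 49.5          # GeV
assert abs(math.sqrt(v_u**2 + vp_u**2 + v_d**2 + vp_d**2) - 174) < 0.5

# (coupling, VEV [GeV], quoted mass parameter [GeV])
checks = [
    (8.53e-6, vp_u, 1.28e-3),   # m^u_1
    (1.15,    vp_u, 172.0),     # m^u_3
    (0.415,   v_u,  17.2),      # m^u_4
    (0.150,   v_u,  6.23),      # m^u_5
    (5.87e-5, vp_d, 2.91e-3),   # m^d_1
    (0.0392,  vp_d, 1.94),      # m^d_3
    (0.0357,  v_d,  2.14),      # m^d_4
    (1.23e-3, v_d,  74.2e-3),   # m^d_5
    (0.0292,  v_d,  1.75),      # m^l_1
    (9.84e-6, vp_d, 487e-6),    # m^l_2
    (1.72e-3, v_d,  103e-3),    # m^l_3
]
for y, v, m in checks:
    assert abs(y * v / m - 1) < 0.01, (y, v, m)
print("sample Yukawas and VEVs reproduce the quoted mass parameters")
```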
In the superpotential, we assume the parameters $\lambda_i$ are hierarchical, for example as follows: \eqn{ \lambda_5\ll \lambda_1=0.03\ll \lambda_3=\lambda_4=0.3. } Then we can neglect the first and fourth terms in $W_H$. Note that too small a $\lambda_1$ is inconsistent with the chargino mass bound $M_{\rm chargino}>94\ {\rm GeV}$, and too large $\lambda_{3,4}$ drive $Y^U_3$ to a Landau pole below $M_P$. With this assumption, the F-term and D-term contributions to the Higgs potential are given by \eqn{ V_{SUSY}&=&\left|\lambda_3H^U_3H^D_3 \right|^2 +\left|\lambda_4H^U_3H^D_1\right|^2+\left|\lambda_4H^U_3H^D_2\right|^2 \nonumber \\ &+&\left|\lambda_3S_3H^D_3+\lambda_4(S_1H^D_1+S_2H^D_2)\right|^2 \nonumber \\ &+&\left|\lambda_3S_3H^U_3\right|^2 +\left|\lambda_4H^U_3S_1\right|^2+\left|\lambda_4H^U_3S_2\right|^2 \nonumber \\ &+&\frac18g^2_2\sum^3_{A=1}\left[(H^U_a)^\dagger\sigma_AH^U_a+(H^D_a)^\dagger\sigma_AH^D_a\right]^2 +\frac18g^2_Y\left[|H^U_a|^2-|H^D_a|^2\right]^2 \nonumber \\ &+&\frac12 g^2_x\left[x_{H^U}|H^U_a|^2+x_{H^D}|H^D_a|^2+x_S|S_a|^2\right]^2, } where the index $a$ runs over $a=1,2,3$, and the flavor symmetric SUSY breaking terms are given by \eqn{ V_{SB}&=&m^2_{H^U}(|H^U_1|^2+|H^U_2|^2)-m^2_{H^U_3}|H^U_3|^2 +m^2_{H^D}(|H^D_1|^2+|H^D_2|^2)+m^2_{H^D_3}|H^D_3|^2 \nonumber \\ &+&m^2_S(|S_1|^2+|S_2|^2)-m^2_{S_3}|S_3|^2 \nonumber \\ &-&\left\{\lambda_3A_3S_3H^U_3H^D_3 +\lambda_4A_4H^U_3(S_1H^D_1+S_2H^D_2)+h.c. \right\}, } where all parameters in $V_{SB}$ can be taken real in some SUSY breaking scenarios; for example, when the A-parameters are induced by the gaugino mass through the RGEs, they become real. In order to avoid Goldstone bosons, we assume flavor violation in the soft scalar mass terms and add the following flavor violating terms: \eqn{ V_{SBFB}&=&-m^2_{BH^U}(H^U_3)^\dagger (H^U_1c_{H^U}+H^U_2s_{H^U}) -m^2_{BS}(S_3)^\dagger (S_1c_S+S_2s_S)+h.c. , } where we assume the flavor violation is induced by the VEV of an $S_4$-doublet, $Z_2$-odd auxiliary field in the hidden sector.
In this paper, we do not consider the hidden sector, which is beyond the scope of this work. Strictly speaking, under this assumption the term $m^2_{BH^D}(H^D_3)^\dagger(H^D_1c_{H^D}+H^D_2s_{H^D})$ should also be included in $V_{SBFB}$; here we assume this term is approximately negligible. In this approximation, all parameters of the potential $V=V_{SUSY}+V_{SB}+V_{SBFB}$ are real, because we can remove the phases of $m^2_{BH^U}$ and $m^2_{BS}$ by field redefinitions. After the redefinition, the three phases of $m^2_{BH^U,BH^D,BS}$ are transferred to $\lambda_{1,5}$ and $m^2_{BH^D}$, which are assumed to be small and negligible. From the potential defined above, the minimum conditions are given by \eqn{ 0=\frac{\partial V}{\partial H^U_1}/(v_uc_u) &=&m^2_{H^U}-m^2_{BH^U}c_{H^U}(v'_u/v_uc_u)+g^2_xx_{H^U}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{\frac14(g^2_Y+g^2_2)(v^2_u+(v'_u)^2-v^2_d-(v'_d)^2)\right. \nonumber \\ &+&\left. g^2_xx_{H^U}[x_{H^U}(v^2_u+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)]\right\}, \\ 0=\frac{\partial V}{\partial H^U_2}/(v_us_u) &=&m^2_{H^U}-m^2_{BH^U}s_{H^U}(v'_u/v_us_u)+g^2_xx_{H^U}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{\frac14(g^2_Y+g^2_2)(v_u^2+(v'_u)^2-v_d^2-(v'_d)^2) \right. \nonumber \\ &+&\left. g^2_xx_{H^U}[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)]\right\} , \\ 0=\frac{\partial V}{\partial H^U_3}/v'_u &=&-m^2_{H^U_3}-\lambda_3A_3v'_s(v'_d/v'_u)-\lambda_4A_4[v_sc_s(v_dc_d/v'_u)+v_ss_s(v_ds_d/v'_u)] \nonumber \\ &-&m^2_{BH^U}[c_{H^U}(v_uc_u/v'_u)+s_{H^U}(v_us_u/v'_u)] \nonumber \\ &+&\lambda^2_3(v'_s)^2+\lambda^2_4v_s^2+g^2_xx_{H^U}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{\lambda^2_3(v'_d)^2+\lambda^2_4v_d^2 +\frac14(g^2_Y+g^2_2)(v_u^2+(v'_u)^2-v_d^2-(v'_d)^2)\right. \nonumber \\ &+&\left.
g^2_xx_{H^U}[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)]\right\}, \\ 0=\frac{\partial V}{\partial H^D_1}/(v_dc_d) &=&m^2_{H^D}-\lambda_4A_4v'_u(v_sc_s/v_dc_d) +\lambda_4(v_sc_s/v_dc_d)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)] \nonumber \\ &+&g^2_xx_{H^D}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{\lambda^2_4(v'_u)^2 -\frac14(g^2_Y+g^2_2)(v_u^2+(v'_u)^2-v_d^2-(v'_d)^2)\right. \nonumber \\ &+&\left. g^2_xx_{H^D}[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)] \right\}, \\ 0=\frac{\partial V}{\partial H^D_2}/(v_ds_d) &=&m^2_{H^D}-\lambda_4A_4v'_u(v_ss_s/v_ds_d) +\lambda_4(v_ss_s/v_ds_d)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)] \nonumber \\ &+&g^2_xx_{H^D}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{\lambda^2_4(v'_u)^2 -\frac14(g^2_Y+g^2_2)(v_u^2+(v'_u)^2-v_d^2-(v'_d)^2)\right. \nonumber \\ &+&\left. g^2_xx_{H^D}[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)] \right\}, \\ 0=\frac{\partial V}{\partial H^D_3}/v'_d &=&m^2_{H^D_3}-\lambda_3A_3v'_s(v'_u/v'_d) +\lambda_3(v'_s/v'_d)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)] \nonumber \\ &+&g^2_xx_{H^D}x_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{ \lambda^2_3(v'_u)^2 -\frac14(g^2_Y+g^2_2)(v_u^2+(v'_u)^2-v_d^2-(v'_d)^2)\right. \nonumber \\ &+&\left. g^2_xx_{H^D}[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)]\right\} , \\ 0=\frac{\partial V}{\partial S_1}/(v_sc_s) &=&m^2_S-m^2_{BS}(v'_s/v_sc_s)c_S+g^2_xx^2_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{-\lambda_4A_4v'_u(v_dc_d/v_sc_s)+\lambda^2_4(v'_u)^2 +\lambda_4(v_dc_d/v_sc_s)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)]\right. \nonumber \\ &+&\left. g^2_xx_S[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)] \right\}, \\ 0=\frac{\partial V}{\partial S_2}/(v_ss_s) &=&m^2_S-m^2_{BS}(v'_s/v_ss_s)s_S+g^2_xx^2_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{-\lambda_4A_4v'_u(v_ds_d/v_ss_s)+\lambda^2_4(v'_u)^2 +\lambda_4(v_ds_d/v_ss_s)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)]\right. \nonumber \\ &+&\left. 
g^2_xx_S[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)] \right\}, \\ 0=\frac{\partial V}{\partial S_3}/v'_s &=&-m^2_{S_3}-m^2_{BS}[(v_sc_s/v'_s)c_S+(v_ss_s/v'_s)s_S] +g^2_xx^2_S(v^2_s+(v'_s)^2) \nonumber \\ &+&\left\{ -\lambda_3A_3v'_u(v'_d/v'_s)+\lambda^2_3(v'_u)^2 +\lambda_3(v'_d/v'_s)[\lambda_3v'_sv'_d+\lambda_4(v_sc_sv_dc_d+v_ss_sv_ds_d)]\right. \nonumber \\ &+&\left. g^2_xx_S[x_{H^U}(v_u^2+(v'_u)^2)+x_{H^D}(v_d^2+(v'_d)^2)] \right\}, } where we define the VEVs of the $G_{SM}$ singlet as \eqn{ \left<S_1\right>=v_s\cos\theta_s,\quad \left<S_2\right>=v_s\sin\theta_s,\quad \left<S_3\right>=v'_s, } and hereafter we neglect the terms in bracket $\left\{\quad \right\}$, because those are very small (Note that $v_{u,d},v'_{u,d}\ll v_s,v'_s$.). Solving the Eqs.(76)-(84), we get \eqn{ &&m^2_{BS}(v'_s/v_s)\left(\frac{c_S}{c_s}-\frac{s_S}{s_s}\right)=0\quad (\therefore \quad \theta_s=\theta_S), \\ &&m^2_S-m^2_{BS}(v'_s/v_s)+g^2_xx^2_S(v^2_s+(v'_s)^2)=0, \\ &&-m^2_{S_3}-m^2_{BS}(v_s/v'_s)+g^2_xx^2_S(v^2_s+(v'_s)^2)=0, \\ &&\lambda_4\left[\lambda_3v'_sv'_d+\lambda_4v_sv_d(c_sc_d+s_ss_d) -A_4v'_u\right]\left(\frac{c_s}{c_d}-\frac{s_s}{s_d}\right)\frac{v_s}{v_d}=0 \quad (\therefore\quad \theta_d=\theta_s), \\ &&m^2_{H^D}-\lambda_4A_4v_s(v'_u/v_d)+\lambda_3\lambda_4v_sv'_s(v'_d/v_d) +\lambda^2_4v^2_s+g^2_xx_Sx_{H^D}(v^2_s+(v'_s)^2)=0, \\ &&m^2_{H^D_3}-\lambda_3A_3v'_s(v'_u/v'_d)+\lambda^2_3(v'_s)^2 +\lambda_3\lambda_4v_sv'_s(v_d/v'_d) +g^2_xx_Sx_{H^D}(v^2_s+(v'_s)^2)=0, \\ &&m^2_{BH^U}(v'_u/v_u)\left(\frac{c_{H^U}}{c_u}-\frac{s_{H^U}}{s_u}\right)=0\quad (\therefore\quad \theta_u=\theta_{H^U}), \\ &&m^2_{H^U}-m^2_{BH^U}(v'_u/v_u)+g^2_xx_Sx_{H^U}(v^2_s+(v'_s)^2)=0, \\ &&-m^2_{H^U_3}-\lambda_3A_3v'_s(v'_d/v'_u)-\lambda_4A_4v_s(v_d/v'_u) -m^2_{BH^U}(v_u/v'_u)\nonumber \\ &&\quad +\lambda^2_3(v'_s)^2+\lambda^2_4v^2_s +g^2_xx_Sx_{H^U}(v^2_s+(v'_s)^2)=0. 
} Using Eqs.(86)-(94), the mass matrices of the neutral CP-even ($\phi$) and CP-odd ($\rho$) Higgs bosons are given by \eqn{ M^2_\phi&=& \Mat3{M^2_{uu}}{M^2_{ud}}{0} {M^2_{du}}{M^2_{dd}}{0} {0}{0}{M^2_{ss}}, \\ M^2_\rho&=& \Mat3{M^2_{uu}}{-M^2_{ud}}{0} {-M^2_{du}}{M^2_{dd}}{0} {0}{0}{\bar{M}^2_{ss}}, \\ M^2_{ss}&=& \Mat3{m^2_{BS}(v'_s/v_s)+2g^2_xx^2_S(v_sc_s)^2}{2g^2_xx^2_Sv^2_sc_ss_s}{-m^2_{BS}c_s+2g^2_xx^2_Sv_sv'_sc_s} {2g^2_xx^2_Sv^2_sc_ss_s}{m^2_{BS}(v'_s/v_s)+2g^2_xx^2_S(v_ss_s)^2}{-m^2_{BS}s_s+2g^2_xx^2_Sv_sv'_ss_s} {-m^2_{BS}c_s+2g^2_xx^2_Sv_sv'_sc_s}{-m^2_{BS}s_s+2g^2_xx^2_Sv_sv'_ss_s}{m^2_{BS}(v_s/v'_s)+2g^2_xx^2_S(v'_s)^2},\nonumber \\ && \\ \bar{M}^2_{ss}&=& \Mat3{m^2_{BS}(v'_s/v_s)}{0}{-m^2_{BS}c_s} {0}{m^2_{BS}(v'_s/v_s)}{-m^2_{BS}s_s} {-m^2_{BS}c_s}{-m^2_{BS}s_s}{m^2_{BS}(v_s/v'_s)}, \\ M^2_{uu}&=& \Mat3{m^2_{BH^U}(v'_u/v_u)}{0}{-m^2_{BH^U}c_u} {0}{m^2_{BH^U}(v'_u/v_u)}{-m^2_{BH^U}s_u} {-m^2_{BH^U}c_u}{-m^2_{BH^U}s_u}{m^2_{BH^U}(v_u/v'_u)+\lambda_3A_3v'_s(v'_d/v'_u) +\lambda_4A_4v_s(v_d/v'_u)}, \\ M^2_{dd}&=& \Mat3{-\lambda^2_4(v_ss_s)^2-\lambda_3\lambda_4v_sv'_s(v'_d/v_d)}{\lambda^2_4v^2_sc_ss_s}{\lambda_3\lambda_4v_sv'_sc_s} {\lambda^2_4v^2_sc_ss_s}{-\lambda^2_4(v_sc_s)^2-\lambda_3\lambda_4v_sv'_s(v'_d/v_d)}{\lambda_3\lambda_4v_sv'_ss_s} {\lambda_3\lambda_4v_sv'_sc_s}{\lambda_3\lambda_4v_sv'_ss_s}{-\lambda_3\lambda_4v_sv'_s(v_d/v'_d)} \nonumber \\ &+& \Mat3{\lambda_4A_4v_s(v'_u/v_d)}{0}{0} {0}{\lambda_4A_4v_s(v'_u/v_d)}{0} {0}{0}{\lambda_3A_3v'_s(v'_u/v'_d)}, \\ M^2_{ud}&=& \Mat3{0}{0}{0} {0}{0}{0} {-\lambda_4A_4v_sc_s}{-\lambda_4A_4v_ss_s}{-\lambda_3A_3v'_s}=(M^2_{du})^t, } where $v_u,v'_u,v_d,v'_d\ll v_s,v'_s$ is assumed and we define \eqn{ (H^U_i)^0=\frac{\phi_{u,i}+i\rho_{u,i}}{\sqrt{2}},\quad (H^D_i)^0=\frac{\phi_{d,i}+i\rho_{d,i}}{\sqrt{2}},\quad S_i=\frac{\phi_{s,i}+i\rho_{s,i}}{\sqrt{2}}\quad (i=1,2,3). } Hereafter we do not consider $\phi_{s},\rho_{s}$, because these fields do not mix with $\phi_{u,d},\rho_{u,d}$ and never contribute to FCNC.
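As a numerical sanity check of the structure of $M^2_{dd}$, one can rotate it by $V_d$ with $\theta_d=\theta_s$ (as required by the minimum conditions) and see the second row and column decouple. The sketch below (our own check, with random parameter values) does this explicitly:

```python
# Rotate M^2_dd by V_d (theta_d = theta_s) and check that the second
# row and column decouple, exhibiting the mass eigenstate phi'_{d,2}.
import numpy as np

rng = np.random.default_rng(7)
l3, l4, A3, A4, vs, vps, vd, vpd, vpu = rng.uniform(0.1, 1.0, 9)
th = rng.uniform(0, 2 * np.pi)                  # theta_d = theta_s
c, s = np.cos(th), np.sin(th)

M2dd = np.array([
    [-l4**2 * (vs * s)**2 - l3 * l4 * vs * vps * vpd / vd, l4**2 * vs**2 * c * s, l3 * l4 * vs * vps * c],
    [l4**2 * vs**2 * c * s, -l4**2 * (vs * c)**2 - l3 * l4 * vs * vps * vpd / vd, l3 * l4 * vs * vps * s],
    [l3 * l4 * vs * vps * c, l3 * l4 * vs * vps * s, -l3 * l4 * vs * vps * vd / vpd],
]) + np.diag([l4 * A4 * vs * vpu / vd, l4 * A4 * vs * vpu / vd, l3 * A3 * vps * vpu / vpd])

Vd = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
M2p = Vd.T @ M2dd @ Vd

assert np.allclose([M2p[0, 1], M2p[1, 0], M2p[1, 2], M2p[2, 1]], 0)
assert np.isclose(M2p[1, 1], l4 * vs * (A4 * vpu / vd - l4 * vs - l3 * vps * vpd / vd))
assert np.isclose(M2p[0, 2], l3 * l4 * vs * vps)
print("phi'_{d,2} decouples with the expected mass")
```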
Because the mass matrices are partially diagonalized as follows \eqn{ &&(M^2_{uu})'=V^t_uM^2_{uu}V_u \nonumber \\ &&= \Mat3{m^2_{BH^U}(v'_u/v_u)}{0}{-m^2_{BH^U}} {0}{m^2_{BH^U}(v'_u/v_u)}{0} {-m^2_{BH^U}}{0}{m^2_{BH^U}(v_u/v'_u)+\lambda_3A_3v'_s(v'_d/v'_u)+\lambda_4A_4v_s(v_d/v'_u)}, \\ &&(M^2_{dd})'=V^t_dM^2_{dd}V_d =\Mat3{\lambda_4v_s[A_4(v'_u/v_d)-\lambda_3v'_s(v'_d/v_d)]}{0}{\lambda_3\lambda_4v_sv'_s} {0}{M^2_{\phi'_{d,2}}}{0} {\lambda_3\lambda_4v_sv'_s}{0}{\lambda_3v'_s[A_3(v'_u/v'_d)-\lambda_4v_s(v_d/v'_d)]}, \nonumber \\ &&\quad M^2_{\phi'_{d,2}}=\lambda_4v_s[A_4(v'_u/v_d)-\lambda_4v_s-\lambda_3v'_s(v'_d/v_d)], \\ &&(M^2_{ud})'=V^t_uM^2_{ud}V_d =\Mat3{0}{0}{0} {0}{0}{0} {-\lambda_4A_4v_s}{0}{-\lambda_3A_3v'_s}, } where $V_u$ and $V_d$ are defined in Eq. (39) and Eq. (53), respectively, one can see that the mixed states \eqn{ &&\phi'_{u,2}=-\phi_{u,1}s_u+\phi_{u,2}c_u,\quad \rho'_{u,2}=-\rho_{u,1}s_u+\rho_{u,2}c_u \\ &&\phi'_{d,2}=-\phi_{d,1}s_d+\phi_{d,2}c_d,\quad \rho'_{d,2}=-\rho_{d,1}s_d+\rho_{d,2}c_d } are mass eigenstates. Note that CP-even Higgs bosons $\phi'_{u,2},\phi'_{d,2}$ and CP-odd Higgs bosons $\rho'_{u,2},\rho'_{d,2}$ have the same mass eigenvalues in this approximation, respectively. \section{Cancellation of Higgs and SUSY-FCNC Contributions} Finally, we evaluate the Higgs and SUSY contributions to FCNC. Here we calculate $K^0-\bar{K}^0$, $B^0-\bar{B}^0$ and $D^0-\bar{D}^0$ mass differences. \subsection{Higgs contributions} First, we explain how Higgs bosons mediate FCNCs. 
Yukawa coupling interactions of quarks and charged leptons between neutral Higgs bosons are given by \eqn{ -{\cal L}_Y&=& (\bar{u}_1,\bar{u}_2,\bar{u}_3)_R \Mat3{Y^U_1(H^U_3)^0}{0}{Y^U_4(H^U_1)^0} {0}{Y^U_1(H^U_3)^0}{Y^U_4(H^U_2)^0} {Y^U_5(H^U_1)^0}{Y^U_5(H^U_2)^0}{Y^U_3(H^U_3)^0} \3tvec{u_1}{u_2}{u_3}_L \nonumber \\ &+&(\bar{d}_1,\bar{d}_2,\bar{d}_3)_R \Mat3{Y^D_1(H^D_3)^0}{0}{Y^D_4(H^D_1)^0} {0}{Y^D_1(H^D_3)^0}{Y^D_4(H^D_2)^0} {Y^D_5(H^D_1)^0}{Y^D_5(H^D_2)^0}{Y^D_3(H^D_3)^0} \3tvec{d_1}{d_2}{d_3}_L \nonumber \\ &+&(\bar{l}_1,\bar{l}_2,\bar{l}_3)_R \Mat3{Y^E_1(H^D_1)^0}{Y^E_1(H^D_2)^0}{0} {0}{0}{Y^E_2(H^D_3)^0} {-Y^E_3(H^D_2)^0}{Y^E_3(H^D_1)^0}{0} \3tvec{l_1}{l_2}{l_3}_L+h.c. . } With the basis that quark and lepton mass matrices are diagonal, these terms are rewritten by \eqn{ -{\cal L}_Y&=& \frac{1}{\sqrt{2}}(\bar{u},\bar{c},\bar{t})_R \Mat3{Y^U_1(\phi_{u,3}+i\rho_{u,3})}{Y^U_4s_{uL}(\phi'_{u,2}+i\rho'_{u,2})}{-Y^U_4c_{uL}(\phi'_{u,2}+i\rho'_{u,2})} {Y^U_5s_{uR}(\phi'_{u,2}+i\rho'_{u,2})}{H^U_{22}}{H^U_{23}} {-Y^U_5c_{uR}(\phi'_{u,2}+i\rho'_{u,2})}{H^U_{32}}{H^U_{33}} \3tvec{u}{c}{t}_L \nonumber \\ &+&\frac{1}{\sqrt{2}}(\bar{d},\bar{s},\bar{b})_R \Mat3{Y^D_1(\phi_{d,3}+i\rho_{d,3})}{\eta Y^D_4s_{dL}(\phi'_{d,2}+i\rho'_{d,2})}{-\eta Y^D_4c_{dL}(\phi'_{d,2}+i\rho'_{d,2})} {\eta Y^D_5s_{dR}(\phi'_{d,2}+i\rho'_{d,2})}{H^D_{22}}{H^D_{23}} {-\eta Y^D_5c_{dR}(\phi'_{d,2}+i\rho'_{d,2})}{H^D_{32}}{H^D_{33}} \3tvec{d}{s}{b}_L \nonumber \\ &+&\frac{1}{\sqrt{2}}(\bar{e},\bar{\mu},\bar{\tau})_R \Mat3{-Y^E_2(\phi_{d,3}+i\rho_{d,3})}{0}{0} {0}{Y^E_3(\phi'_{d,1}+i\rho'_{d,1})}{-Y^E_3(\phi'_{d,2}+i\rho'_{d,2})} {0}{Y^E_1(\phi'_{d,2}+i\rho'_{d,2})}{Y^E_1(\phi'_{d,1}+i\rho'_{d,1})} \3tvec{e}{\mu}{\tau}_L +h.c. 
,\\ H^U_{22}&=&[Y^U_1c_{uR}(\phi_{u,3}+i\rho_{u,3})-Y^U_5s_{uR}(\phi'_{u,1}+i\rho'_{u,1})]c_{uL} \nonumber \\ &-&[Y^U_4c_{uR}(\phi'_{u,1}+i\rho'_{u,1})-|Y^U_3|s_{uR}(\phi_{u,3}+i\rho_{u,3})]s_{uL}, \\ H^U_{23}&=&[Y^U_1c_{uR}(\phi_{u,3}+i\rho_{u,3})-Y^U_5s_{uR}(\phi'_{u,1}+i\rho'_{u,1})]s_{uL} \nonumber \\ &+&[Y^U_4c_{uR}(\phi'_{u,1}+i\rho'_{u,1})-|Y^U_3|s_{uR}(\phi_{u,3}+i\rho_{u,3})]c_{uL} , \\ H^U_{32}&=&[Y^U_1s_{uR}(\phi_{u,3}+i\rho_{u,3})+Y^U_5c_{uR}(\phi'_{u,1}+i\rho'_{u,1})]c_{uL} \nonumber \\ &-&[Y^U_4s_{uR}(\phi'_{u,1}+i\rho'_{u,1})+|Y^U_3|c_{uR}(\phi_{u,3}+i\rho_{u,3})]s_{uL} , \\ H^U_{33}&=&[Y^U_1s_{uR}(\phi_{u,3}+i\rho_{u,3})+Y^U_5c_{uR}(\phi'_{u,1}+i\rho'_{u,1})]s_{uL} \nonumber \\ &+&[Y^U_4s_{uR}(\phi'_{u,1}+i\rho'_{u,1})+|Y^U_3|c_{uR}(\phi_{u,3}+i\rho_{u,3})]c_{uL} ,\\ H^D_{22}&=&[Y^D_1c_{dR}(\phi_{d,3}+i\rho_{d,3})-\eta Y^D_5s_{dR}(\phi'_{d,1}+i\rho'_{d,1})]c_{dL} \nonumber \\ &-&\eta [Y^D_4c_{dR}(\phi'_{d,1}+i\rho'_{d,1})-\eta |Y^D_3|s_{dR}(\phi_{d,3}+i\rho_{d,3})]s_{dL} ,\\ H^D_{23}&=&[Y^D_1c_{dR}(\phi_{d,3}+i\rho_{d,3})-\eta Y^D_5s_{dR}(\phi'_{d,1}+i\rho'_{d,1})]s_{dL} \nonumber \\ &+&\eta [Y^D_4c_{dR}(\phi'_{d,1}+i\rho'_{d,1})-\eta |Y^D_3|s_{dR}(\phi_{d,3}+i\rho_{d,3})]c_{dL} , \\ H^D_{32}&=&[Y^D_1s_{dR}(\phi_{d,3}+i\rho_{d,3})+\eta Y^D_5c_{dR}(\phi'_{d,1}+i\rho'_{d,1})]c_{dL} \nonumber \\ &-&\eta [Y^D_4s_{dR}(\phi'_{d,1}+i\rho'_{d,1})+\eta |Y^D_3|c_{dR}(\phi_{d,3}+i\rho_{d,3})]s_{dL} , \\ H^D_{33}&=&[Y^D_1s_{dR}(\phi_{d,3}+i\rho_{d,3})+\eta Y^D_5c_{dR}(\phi'_{d,1}+i\rho'_{d,1})]s_{dL} \nonumber \\ &+&\eta [Y^D_4s_{dR}(\phi'_{d,1}+i\rho'_{d,1})+\eta |Y^D_3|c_{dR}(\phi_{d,3}+i\rho_{d,3})]c_{dL}, \\ \eta&=&e^{-i\gamma}, } where \eqn{ &&\phi'_{u,1}=\phi_{u,1}c_u+\phi_{u,2}s_u,\quad \rho'_{u,1}=\rho_{u,1}c_u+\rho_{u,2}s_u, \\ &&\phi'_{d,1}=\phi_{d,1}c_d+\phi_{d,2}s_d,\quad \rho'_{d,1}=\rho_{d,1}c_d+\rho_{d,2}s_d. } From these interactions, we can evaluate FCNC processes. 
For example, one can see that $\phi'_{d,2}$ and $\rho'_{d,2}$ mediate flavor changing operators such as $(\bar{d}_Rs_L)(\bar{d}_Ls_R)$, which contributes to the $K^0-\bar{K}^0$ mass difference $\Delta m_K$. Note that the terms $(\bar{d}_Rs_L)(\bar{d}_Rs_L)$ and $(\bar{d}_Ls_R)(\bar{d}_Ls_R)$ are not induced, because the contributions to them from $\phi'_{d,2}$ and $\rho'_{d,2}$ cancel due to the mass degeneracy. However, lepton flavor changing processes such as $\mu\to e\gamma$, $\tau\to\mu\gamma$ and $\tau\to e\gamma$ are not induced. The flavor violating effective interactions are given by \eqn{ {\cal L}_{Higgs-FCNC}&=&\frac{Y^U_4Y^U_5s_{uL}s_{uR}}{m^2_{\phi'_{u,2}}} (\bar{u}_{R,\alpha}c_L^\alpha)(\bar{u}_{L,\beta}c_R^\beta) +\frac{Y^D_4Y^D_5s_{dL}s_{dR}}{m^2_{\phi'_{d,2}}} (\bar{d}_{R,\alpha}s_L^\alpha)(\bar{d}_{L,\beta}s_R^\beta) \nonumber \\ &+&\frac{Y^D_4Y^D_5c_{dL}c_{dR}}{m^2_{\phi'_{d,2}}} (\bar{d}_{R,\alpha}b_L^\alpha)(\bar{d}_{L,\beta}b_R^\beta) , } where $\alpha$ and $\beta$ are color indices.
From this Lagrangian, we can evaluate the Higgs contributions to $\Delta m_K$, $\Delta m_B$ and $\Delta m_D$ as follows: \eqn{ (\Delta m_K)_{Higgs}&=&2Re\left<K^0|(-{\cal L}_{Higgs-FCNC})|\bar{K}^0\right> \nonumber \\ &=&-2\frac{Y^D_4Y^D_5s_{dL}s_{dR}}{m^2_{\phi'_{d,2}}} \left<K^0\left|(\bar{d}_{R,\alpha}s_L^\alpha)(\bar{d}_{L,\beta}s_R^\beta)\right|\bar{K}^0\right>, \\ (\Delta m_B)_{Higgs}&=&2Re\left<B^0|(-{\cal L}_{Higgs-FCNC})|\bar{B}^0\right> \nonumber \\ &=&-2\frac{Y^D_4Y^D_5c_{dL}c_{dR}}{m^2_{\phi'_{d,2}}} \left<B^0\left|(\bar{d}_{R,\alpha}b_L^\alpha)(\bar{d}_{L,\beta}b_R^\beta)\right|\bar{B}^0\right>, \\ (\Delta m_D)_{Higgs}&=&2Re\left<D^0|(-{\cal L}_{Higgs-FCNC})|\bar{D}^0\right> \nonumber \\ &=&-2\frac{Y^U_4Y^U_5s_{uL}s_{uR}}{m^2_{\phi'_{u,2}}} \left<D^0\left|(\bar{u}_{R,\alpha}c_L^\alpha)(\bar{u}_{L,\beta}c_R^\beta)\right|\bar{D}^0\right>, } where \eqn{ \left<K^0\left|(\bar{d}_{R,\alpha}s_L^\alpha)(\bar{d}_{L,\beta}s_R^\beta)\right|\bar{K}^0\right> &=&\left[\frac{1}{24}+\frac14\left(\frac{m_{K^0}}{m_s(2GeV)+m_d(2GeV)}\right)^2\right]m_{K^0}f^2_K \nonumber \\ &=&6.56\times 10^7 MeV^3, \\ \left<B^0\left|(\bar{d}_{R,\alpha}b_L^\alpha)(\bar{d}_{L,\beta}b_R^\beta)\right|\bar{B}^0\right> &=&\left[\frac{1}{24}+\frac14\left(\frac{m_{B^0}}{m_b(m_b)+m_d(m_b)}\right)^2\right]m_{B^0}f^2_B =9.21\times 10^7 MeV^3, \\ \left<D^0\left|(\bar{u}_{R,\alpha}c_L^\alpha)(\bar{u}_{L,\beta}c_R^\beta)\right|\bar{D}^0\right> &=&\left[\frac{1}{24}+\frac14\left(\frac{m_{D^0}}{m_c(m_c)+m_u(m_c)}\right)^2\right]m_{D^0}f^2_D =4.99\times 10^7 MeV^3, } which are evaluated by using the parameters given in the appendix. Requiring $|(\Delta m_M)_{Higgs}|< \Delta m_M (M=K,B,D)$, we get \eqn{ &&m_{\phi'_{d,2}}>4.6TeV\quad (\Delta m_K), \\ &&m_{\phi'_{d,2}}>4.0TeV\quad (\Delta m_B), \\ &&m_{\phi'_{u,2}}>37.6TeV\quad (\Delta m_D). } These constraints are too strong. In our model, the SUSY contributions to FCNC may be used to cancel these Higgs contributions.
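As an illustration, the vacuum-insertion formula for the $K^0$ matrix element above is easy to evaluate numerically. The sketch below uses illustrative PDG-like input values (the quark masses and the decay constant are our assumptions, not the appendix values of the paper), and reproduces the quoted $6.56\times 10^7\,MeV^3$ up to the choice of inputs:

```python
# Numerical check of the K^0 mixing matrix element
# <K^0|(d_R s_L)(d_L s_R)|Kbar^0> = [1/24 + (1/4)(m_K/(m_s+m_d))^2] m_K f_K^2.
# Input values below are illustrative PDG-like numbers (assumed here).

m_K = 497.6   # MeV, neutral kaon mass
f_K = 156.0   # MeV, kaon decay constant (assumed value)
m_s = 95.0    # MeV, strange quark mass at 2 GeV (assumed value)
m_d = 4.7     # MeV, down quark mass at 2 GeV (assumed value)

def kaon_matrix_element(m_M, f_M, m_q1, m_q2):
    """Vacuum-insertion estimate of the scalar matrix element, in MeV^3."""
    chiral_factor = (m_M / (m_q1 + m_q2)) ** 2
    return (1.0 / 24.0 + chiral_factor / 4.0) * m_M * f_M ** 2

me_K = kaon_matrix_element(m_K, f_K, m_s, m_d)
print(f"matrix element ~ {me_K:.3e} MeV^3")  # same order as the quoted 6.56e7 MeV^3
```

The result is of the same order of magnitude as the value quoted in the text; the exact number depends on the quark-mass and decay-constant inputs.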
However, in order to suppress $\Delta m_{K,B,D}$, we must impose three cancellation conditions: \eqn{ |(\Delta m_M)_{Higgs}+(\Delta m_M)_{SUSY}| \ll |(\Delta m_M)_{Higgs}| \quad (M=K,B,D), } which is unnatural. In the next subsection, we show that the number of cancellation conditions is reduced from three to two. \subsection{Squark and gluino contributions} Now we evaluate the SUSY-FCNC contributions. As we assume that $\Delta m_D$ is suppressed by the cancellation: \eqn{ |(\Delta m_D)_{Higgs}+(\Delta m_D)_{SUSY}| \ll |(\Delta m_D)_{Higgs}|, } we consider only the $K^0-\bar{K}^0$ and $B^0-\bar{B}^0$ mass differences induced by squark and gluino box diagrams. These contributions depend only on the down-type squark mass matrices. Considering the following squark Lagrangian \eqn{ -{\cal L}_{squark}&=&m^2_Q(|Q_1|^2+|Q_2|^2)+m^2_{Q_3}|Q_3|^2 +m^2_D(|D^c_1|^2+|D^c_2|^2)+m^2_{D_3}|D^c_3|^2 \nonumber \\ &+&\left\{e^{-i\phi_Q}|m^2_{BQ}|Q^\dagger_3(c_QQ_1+s_QQ_2) +e^{i\phi_D}|m^2_{BD}|(D^c_3)^\dagger(c_DD^c_1+s_DD^c_2)+h.c. \right\} \nonumber \\ &+&(D-terms) , } one can see that the down-type squark mass matrix is given by \eqn{ -{\cal L}_{down-squark}&=&(D^\dagger, D^c) \mat2{M^2_{LL}}{0}{0}{M^2_{RR}} \2tvec{D}{(D^c)^\dagger}, \\ M^2_{LL}&=&\Mat3{m^2_Q}{0}{e^{i\phi_Q}|m^2_{BQ}|c_Q} {0}{m^2_Q}{e^{i\phi_Q}|m^2_{BQ}|s_Q} {e^{-i\phi_Q}|m^2_{BQ}|c_Q}{e^{-i\phi_Q}|m^2_{BQ}|s_Q}{m^2_{Q_3}}, \\ M^2_{RR}&=&\Mat3{m^2_D}{0}{e^{i\phi_D}|m^2_{BD}|c_D} {0}{m^2_D}{e^{i\phi_D}|m^2_{BD}|s_D} {e^{-i\phi_D}|m^2_{BD}|c_D}{e^{-i\phi_D}|m^2_{BD}|s_D}{m^2_{D_3}} . } Here the D-term contributions are absorbed into $m^2_{Q,Q_3,D,D_3}$.
In the super-CKM basis, the squark mass matrices are given by \eqn{ (M^2_{LL})'&=&V^\dagger_{dL} M^2_{LL}V_{dL} \nonumber \\ &=&\Mat3{m^2_Q}{\eta_Q |m^2_{BQ}|s_{Qd}s_{dL}}{-\eta_Q |m^2_{BQ}|s_{Qd}c_{dL}} {\eta^*_Q |m^2_{BQ}|s_{Qd}s_{dL}}{(M^2_{Ld})_{22}}{(M^2_{Ld})_{23}} {-\eta^*_Q |m^2_{BQ}|s_{Qd}c_{dL}}{(M^2_{Ld})^*_{23}}{(M^2_{Ld})_{33}}, \\ (M^2_{Ld})_{22}&=&m^2_Q(c_{dL})^2+m^2_{Q_3}(s_{dL})^2-|m^2_{BQ}|c_{Qd}s_{dL}c_{dL}(\eta_Q+\eta^*_Q) , \\ (M^2_{Ld})_{23}&=&(m^2_Q-m^2_{Q_3})c_{dL}s_{dL}+|m^2_{BQ}|c_{Qd}(\eta_Q(c_{dL})^2-\eta^*_Q (s_{dL})^2) , \\ (M^2_{Ld})_{33}&=&m^2_Q(s_{dL})^2+m^2_{Q_3}(c_{dL})^2+|m^2_{BQ}|c_{Qd}s_{dL}c_{dL}(\eta_Q + \eta^*_Q) , \\ s_{Qd}&=&\sin(\theta_Q-\theta_d) , \\ \eta_Q&=&\eta e^{i\phi_Q}, \\ (M^2_{RR})'&=&V^\dagger_{dR} M^2_{RR}V_{dR} \nonumber \\ &=&\Mat3{m^2_D}{\eta_D |m^2_{BD}|s_{Dd}s_{dR}}{-\eta_D |m^2_{BD}|s_{Dd}c_{dR}} {\eta^*_D |m^2_{BD}|s_{Dd}s_{dR}}{(M^2_{Rd})_{22}}{(M^2_{Rd})_{23}} {-\eta^*_D |m^2_{BD}|s_{Dd}c_{dR}}{(M^2_{Rd})^*_{23}}{(M^2_{Rd})_{33}} , \\ (M^2_{Rd})_{22}&=&m^2_D(c_{dR})^2+m^2_{D_3}(s_{dR})^2-|m^2_{BD}|c_{Dd}s_{dR}c_{dR}(\eta_D+\eta^*_D), \\ (M^2_{Rd})_{23}&=&(m^2_D-m^2_{D_3})c_{dR}s_{dR}+|m^2_{BD}|c_{Dd}(\eta_D(c_{dR})^2-\eta^*_D(s_{dR})^2), \\ (M^2_{Rd})_{33}&=&m^2_D(s_{dR})^2+m^2_{D_3}(c_{dR})^2+|m^2_{BD}|c_{Dd}s_{dR}c_{dR}(\eta_D+\eta^*_D), \\ s_{Dd}&=&\sin(\theta_D-\theta_d),\\ \eta_D&=&\eta^* e^{i\phi_D}. } Here we assume degenerate mass-squared parameters, \eqn{ m^2_Q=m^2_{Q_3},\quad m^2_D=m^2_{D_3}, } which is an essential assumption for realizing the cancellation between the Higgs and SUSY-FCNC contributions. These relations are realized if gaugino mass contributions dominate in the RGEs. With this assumption, the diagonal elements of the mass-squared matrices are also approximately degenerate, as follows: \eqn{ &&(M^2_{Ld})_{22}\simeq (M^2_{Ld})_{33}\simeq m^2_Q, \\ &&(M^2_{Rd})_{22}\simeq (M^2_{Rd})_{33}\simeq m^2_D. } Here we assume that the contributions from $m^2_{BD,BQ}$ are negligible.
Furthermore, we assume $\eta_Q=\eta_D=1$ to suppress \eqn{ Im\left<K^0|{\cal L}_{SUSY-FCNC}|\bar{K}^0\right>. } The flavor changing effective interactions induced by squark and gluino box diagrams are calculated in the mass-insertion approximation \cite{box} as follows: \eqn{ {\cal L}_{SUSY-FCNC}&=&\frac{\alpha^2_3}{216M^2_{Q,K}} \left\{(\delta_{12})^2_{LL}\left[24xf_1(x)+66f_2(x)\right]O_1 +(\delta_{12})^2_{RR}\left[24xf_1(x)+66f_2(x)\right]O_2 \right. \nonumber \\ &+&\left. (\delta_{12})_{LL}(\delta_{12})_{RR} \left[(504xf_1(x)-72f_2(x))O_3+(24xf_1(x)+120f_2(x))O_4\right] \right\} \nonumber \\ &+&\frac{\alpha^2_3}{216M^2_{Q,B}} \left\{(\delta_{13})^2_{LL}\left[24yf_1(y)+66f_2(y)\right]P_1 +(\delta_{13})^2_{RR}\left[24yf_1(y)+66f_2(y)\right]P_2 \right. \nonumber \\ &+&\left. (\delta_{13})_{LL}(\delta_{13})_{RR} \left[(504yf_1(y)-72f_2(y))P_3+(24yf_1(y)+120f_2(y))P_4\right] \right\}, } where $\alpha_3$ is the $SU(3)_c$ gauge coupling, $M_{Q,K}$ and $M_{Q,B}$ are averaged squark masses, and the other parameters are defined as \eqn{ &&f_1(x)=\frac{6(1+3x)\ln x+x^3-9x^2-9x+17}{6(x-1)^5}, \\ &&f_2(x)=\frac{6x(1+x)\ln x-x^3-9x^2+9x+1}{3(x-1)^5}, \\ &&O_1=(\bar{d}_{L,\alpha}\gamma_\mu s_L^\alpha)(\bar{d}_{L,\beta}\gamma^\mu s_L^\beta), \quad P_1=(\bar{d}_{L,\alpha}\gamma_\mu b_L^\alpha)(\bar{d}_{L,\beta}\gamma^\mu b_L^\beta), \\ &&O_2=(\bar{d}_{R,\alpha}\gamma_\mu s_R^\alpha)(\bar{d}_{R,\beta}\gamma^\mu s_R^\beta), \quad P_2=(\bar{d}_{R,\alpha}\gamma_\mu b_R^\alpha)(\bar{d}_{R,\beta}\gamma^\mu b_R^\beta), \\ &&O_3=(\bar{d}_{R,\alpha}s_L^\alpha)(\bar{d}_{L,\beta}s_R^\beta), \quad P_3=(\bar{d}_{R,\alpha}b_L^\alpha)(\bar{d}_{L,\beta}b_R^\beta), \\ &&O_4=(\bar{d}_{R,\alpha}s_L^\beta)(\bar{d}_{L,\beta}s_R^\alpha), \quad P_4=(\bar{d}_{R,\alpha}b_L^\beta)(\bar{d}_{L,\beta}b_R^\alpha), \\ &&(\delta_{12})_{LL}=\frac{|m^2_{BQ}|s_{Qd}s_{dL}}{M^2_{Q,K}} , \quad (\delta_{13})_{LL}=\frac{-|m^2_{BQ}|s_{Qd}c_{dL}}{M^2_{Q,B}}, \\ &&(\delta_{12})_{RR}=\frac{|m^2_{BD}|s_{Dd}s_{dR}}{M^2_{Q,K}},
\quad (\delta_{13})_{RR}=\frac{-|m^2_{BD}|s_{Dd}c_{dR}}{M^2_{Q,B}}, \\ &&x=\frac{M^2_3}{M^2_{Q,K}},\quad y=\frac{M^2_3}{M^2_{Q,B}},} where $M_3$ is the gluino mass. With the assumptions of Eqs.(150) and (151), one can set $M^2_{Q,K}=M^2_{Q,B}$. Note that the dominant contributions to $\Delta m_K$ and $\Delta m_B$ come from $O_3$ and $P_3$ in Eq.(153) due to their large coefficients. The total contributions to $O_3$ and $P_3$ from the Higgs (Eq.(121)) and SUSY (Eq.(153)) sectors are written as follows: \eqn{ {\cal L}_{O_3}&=&\left[\frac{Y^D_4Y^D_5}{m^2_{\phi'_{d,2}}} +\frac{\alpha^2_3 |m^2_{BQ}m^2_{BD}|s_{Qd}s_{Dd}}{216M^6_{Q,K}}(504xf_1(x)-72f_2(x))\right]s_{dL}s_{dR} (\bar{d}_{R,\alpha}s_L^\alpha)(\bar{d}_{L,\beta}s_R^\beta), \\ {\cal L}_{P_3}&=&\left[\frac{Y^D_4Y^D_5}{m^2_{\phi'_{d,2}}} +\frac{\alpha^2_3 |m^2_{BQ}m^2_{BD}|s_{Qd}s_{Dd}}{216M^6_{Q,K}}(504xf_1(x)-72f_2(x))\right]c_{dL}c_{dR} (\bar{d}_{R,\alpha}b_L^\alpha)(\bar{d}_{L,\beta}b_R^\beta). } If an accidental cancellation occurs between the terms in the brackets $[\quad]$, the new physics contributions to $\Delta m_K$ and $\Delta m_B$ are well suppressed at the same time. Assuming $x=1$ and $|m_{BQ}|s_{Qd}=-2|m_{BD}|s_{Dd}$, we get \eqn{ f_1(1)=\frac{1}{20},\quad f_2(1)=-\frac{1}{30}, } and the cancellation condition: \eqn{ \frac{Y^D_4Y^D_5}{m^2_{\phi'_{d,2}}} =13.8\frac{\alpha^2_3(|m^2_{BQ}|s_{Qd})^2}{216M^6_{Q,K}}. } One finds that Eq.(166) is satisfied, for example, if we put \eqn{ \alpha_3=0.12,\quad m^2_{\phi'_{d,2}}=M^2_{Q,K},\quad \frac{|m^2_{BQ}s_{Qd}|}{M^2_{Q,K}}=0.218.
} Then the sub-dominant contributions from Eq.(153) are evaluated as follows: \eqn{ (\Delta m_K)_{SUSY}&=&-\frac{Y^D_4Y^D_5}{13.8M^2_{Q,K}} \times 2Re\left<K^0\left|\left\{-\left[s^2_{dL}O_1+\frac14s^2_{dR}O_2\right] +\frac{2.8}{2}s_{dL}s_{dR}O_4\right\}\right|\bar{K}^0\right> \nonumber \\ &=&1.06\times 10^{-12}\left(\frac{TeV}{M_{Q,K}}\right)^2 MeV , \\ (\Delta m_B)_{SUSY}&=&-\frac{Y^D_4Y^D_5}{13.8M^2_{Q,K}} \times 2Re\left<B^0\left|\left\{-\left[c^2_{dL}P_1+\frac14 c^2_{dR}P_2\right]+\frac{2.8}{2}c_{dR}c_{dL}P_4\right\}\right|\bar{B}^0\right> \nonumber \\ &=&1.74\times 10^{-10}\left(\frac{TeV}{M_{Q,K}}\right)^2 MeV , } where \eqn{ &&\left<K^0|O_1|\bar{K}^0\right>=\left<K^0|O_2|\bar{K}^0\right>=\frac13 m_Kf^2_K=4.25\times 10^6MeV^3, \\ &&\left<K^0|O_4|\bar{K}^0\right>=\left[\frac18+\frac{1}{12}\left(\frac{m_{K^0}}{m_s(2GeV)+m_d(2GeV)}\right)^2\right]m_Kf^2_K =2.33\times 10^7MeV^3,\\ &&\left<B^0|P_1|\bar{B}^0\right>=\left<B^0|P_2|\bar{B}^0\right>=\frac13 m_Bf^2_B=7.04\times 10^7MeV^3, \\ &&\left<B^0|P_4|\bar{B}^0\right>=\left[\frac18+\frac{1}{12}\left(\frac{m_{B^0}}{m_b(m_b)+m_d(m_b)}\right)^2\right]m_Bf^2_B =5.41\times 10^7MeV^3, } are used. Requiring $|(\Delta m_M)_{SUSY}|< \Delta m_M (M=K,B)$, we get \eqn{ &&M_{Q,K}=m_{\phi'_{d,2}}>0.6TeV\quad (\Delta m_K), \\ &&M_{Q,K}=m_{\phi'_{d,2}}>0.7TeV\quad (\Delta m_B). } One finds that these constraints are weaker than Eqs. (128) and (129). Therefore, the three cancellation conditions of Eq.(131) are reduced to the two conditions Eq.(132) and Eq.(166). \section{Summary} In this paper, we have considered the Higgs-FCNC problem in the $S_4\times Z_2$ flavor symmetric extra U(1) model, and have shown that the Higgs mass bounds from FCNCs are weakened by the cancellation between the Higgs and SUSY contributions. As a result, the SUSY breaking scale may be in the $O(TeV)$ region. It might be expected that the new gauge symmetry and the flavor symmetry can be tested at the LHC or future colliders. \section*{Acknowledgments} H.O. thanks the Kanazawa Institute for Theoretical Physics at Kanazawa University for its great hospitality. Discussions during the visit were fruitful in finalizing this project. H. O. acknowledges partial support from the Science and Technology Development Fund (STDF) project ID 437 and the ICTP project ID 30.
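For reference, the limiting values $f_1(1)=1/20$ and $f_2(1)=-1/30$ used in the cancellation analysis can be checked numerically from the definitions of the loop functions quoted in the text (a minimal sketch; $x$ is evaluated slightly away from $1$ because the expressions are removable $0/0$ forms there):

```python
import math

# Gluino box-diagram loop functions as defined in the text; both are
# 0/0 at x = 1, with limits f1(1) = 1/20 and f2(1) = -1/30.

def f1(x):
    num = 6.0 * (1.0 + 3.0 * x) * math.log(x) + x**3 - 9.0 * x**2 - 9.0 * x + 17.0
    return num / (6.0 * (x - 1.0) ** 5)

def f2(x):
    num = 6.0 * x * (1.0 + x) * math.log(x) - x**3 - 9.0 * x**2 + 9.0 * x + 1.0
    return num / (3.0 * (x - 1.0) ** 5)

# Evaluate just off the degenerate point x = 1 (taking x too close to 1
# would run into floating-point cancellation in the numerators).
x = 1.01
print(f1(x), f2(x))  # approximately 1/20 and -1/30
```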
\section*{Supplementary information} \subsection{A\quad Surface wave function for semi-infinite system} \label{sec:semi-infinite} Here we give the full surface wave function of a semi-infinite Weyl semimetal (WSM) with a tilted cone. We start from the low-energy Hamiltonian \begin{align} \label{eq:Hamiltonian} H = -i\vec{\nabla} \cdot \br{\chi \boldsymbol{\sigma} + \vec{u} \sigma_0} \end{align} and restrict it to $z \leq 0$ such that we get a surface with normal $\vec{n} = \hat{z}$. This makes it convenient to split every vector into a parallel and a perpendicular part, e.g.\ $\vec{u} = \vec{u}_\parallel + u_z \vec{n}$ with $\vec{u}_\parallel \perp \vec{n}$. The Hamiltonian is translationally invariant parallel to the surface, which allows us to replace $-i\vec{\nabla} \rightarrow {\vec{k}_\parallel} -i\vec{n}\partial_z$. By partial integration one can show that hermiticity $\left< \psi_1,H \psi_2\right> = \left<H \psi_1, \psi_2\right>$ of the Hamiltonian requires the boundary term at $z=0$ to vanish \cite{PhysRevB.92.201107, witten2016three, 10.1093/ptep/ptx053, PhysRevB.97.075132, PhysRevB.100.155131}, i.e. \begin{align} \label{eq:psi_0} \psi_1^\dagger \br{\chi \sigma_z + u_z}\psi_2 |_{z=0} = 0 \end{align} for all $\psi_1,\psi_2$. Here we look at the special case where $\psi_1=\psi_2 \equiv \psi$, $\abs{\psi}^2=1$, are $z$-independent spinors, so that the expectation value $\vec{s}$ of the pseudo-spin for $\psi \equiv \psi(\alpha)$ can be written as \begin{align} \label{eq:s} \vec{s} = \left(\sqrt{1-u_z^2} \cos (\alpha), \sqrt{1-u_z^2} \sin(\alpha), - \chi u_z \right) \end{align} with a free real parameter $\alpha$ labeling all possible boundary conditions.
For the surface wave function $\Psi$ we make the ansatz \begin{align} \label{eq:ansatz} \Psi\br{\vec{k_\parallel},z} = c\br{{\vec{k}_\parallel}} e^{i{\vec{k}_\parallel}\cdot\vec{r}_\parallel + \lambda\br{{\vec{k}_\parallel}} z} \psi\br{\alpha} \end{align} where $c({\vec{k}_\parallel})$ is the (real) normalization constant. Since $\Psi \propto \psi(\alpha)$ for all $z$, the boundary condition Eq.\ (\ref{eq:psi_0}) is always fulfilled. In order to obtain a surface state the function $\lambda({\vec{k}_\parallel})$ must have a non-vanishing real part. More precisely, normalization $1 = \int_{-\infty}^0 \text{d} z \abs{\Psi}^2$ gives $c^2 = 2$Re$(\lambda)$ and requires Re$(\lambda) > 0$. Applying the Hamiltonian of Eq.\ (\ref{eq:Hamiltonian}) to the wave function $\Psi$ yields the effective Hamiltonian \begin{align} H_{\text{eff}} = {\vec{k}_\parallel}\cdot \br{\chi\boldsymbol{\sigma} + \vec{u}_\parallel\sigma_0} -i\lambda\br{\chi\sigma_z+u_z\sigma_0}. \end{align} Note that we are not dealing with non-hermitian Hamiltonians since $\br{\chi\sigma_z+u_z\sigma_0}\Psi = 0$ due to the boundary condition. Thus, the energy $H\Psi = H_\text{eff}\Psi = E \Psi$ becomes \begin{align} \label{eq:energy_inf} E({\vec{k}_\parallel}) = {\vec{k}_\parallel} \cdot \br{\chi\vec{s}+\vec{u}}. \end{align} The function $\lambda({\vec{k}_\parallel})$ can be obtained by solving $-\chi u_zE({\vec{k}_\parallel}) = \bra{\psi} H_\text{eff} \sigma_z\ket{\psi}$ for $\lambda$. This yields \begin{align} \label{eq:lambda} \lambda = -\frac{{\vec{k}_\parallel}\cdot \br{\vec{n}\times\vec{s} + i \chi u_z \vec{s}}}{1-u_z^2}. \end{align} Finally, from $\psi^\dagger \boldsymbol{\sigma} \psi = \vec{s}$ we find \begin{align} \label{eq:psi_inf} \psi\br{\alpha} = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{-i\alpha/2} \sqrt{1-\chi u_z} \\ e^{+i\alpha/2} \sqrt{1+\chi u_z} \end{pmatrix}.
\end{align} $~$ \subsection{B\quad Explicit interface between vacuum and WSM} It is possible to determine the free parameter $\alpha$ if we consider a WSM at $z \leq0 $ and also the insulating phase at $z>0$. This can be achieved by modifying the Hamiltonian from Eq. (\ref{eq:Hamiltonian}) with $\vec{u}=u_z \hat{z}$ and $\chi=1$ by replacing $k_y$ with \begin{align} \tilde{k}_y(z) = \begin{cases} \frac{+k_0^2-k_y^2}{2k_0}, & z\leq 0\\ \frac{-\Delta^2-k_y^2}{2\Delta}, & z>0 \end{cases}, \end{align} where $2k_0$ is the distance between the Weyl nodes and $\Delta>0$ the gap of the insulator. On both sides the wave function has to decay exponentially, i.e. we have to fulfill the conditions (I) Re$(\lambda)> 0$ for the WSM and (II) Re$(\lambda)< 0$ on the gapped side for all ${\vec{k}_\parallel}$. From Eqs.\ (\ref{eq:s}) and (\ref{eq:lambda}) we find \begin{align} \text{Re}(\lambda) \propto -\sin(\alpha) k_x + \cos(\alpha) \tilde{k}_y(z). \end{align} Thus, condition (II) is only satisfied for all ${\vec{k}_\parallel}$ if $\sin(\alpha)=0$ and $\cos(\alpha)>0$, i.e. $\alpha = 0$. Condition (I) then requires the surface state to be located at $|k_y|<k_0$ and the Fermi arc becomes a straight line between the Weyl nodes. This also holds for the perfect vacuum $\Delta\rightarrow\infty$. \subsection{C\quad Wave function for finite system} \begin{figure} \label{fig:FA_HL} \begin{minipage}{0.238\textwidth} \includegraphics[width=\textwidth]{FA_HL_L=3.png} \end{minipage} \begin{minipage}{0.238\textwidth} \includegraphics[width=\textwidth]{FA_HL_L=10.png} \end{minipage} \caption{Fermi arc and hot-line in a finite slab with width $L=3$ (left) and $L=10$ (right). 
The dashed lines mark the Fermi arc and hot-line for $L\rightarrow\infty$.} \end{figure} For the finite system we take the same Hamiltonian and ansatz as in the semi-infinite case (Eqs.\ (\ref{eq:Hamiltonian}) and (\ref{eq:ansatz})), restrict them to $-L/2 \leq z \leq L/2$, and solve Schr\"odinger's equation for $\lambda$ first. This results in \begin{align} \lambda_s &= \frac{s\sqrt{k_\parallel^2-\epsilon^2} - i u_z \epsilon}{\sqrt{1-u_z^2}}\\ \psi_s &= \frac{1}{\sqrt{2}} \begin{pmatrix} \frac{\epsilon - i s \sqrt{k_\parallel^2-\epsilon^2}}{k_x + i k_y} \sqrt{1-\chi u_z} \\ \sqrt{1+\chi u_z} \end{pmatrix} \end{align} with $\epsilon({\vec{k}_\parallel})$ defined via the energy $E({\vec{k}_\parallel})$ of the surface state as $\chi\sqrt{1-u_z^2}\epsilon({\vec{k}_\parallel}) = E({\vec{k}_\parallel}) - {\vec{k}_\parallel}\cdot\vec{u}$ and $s=\pm 1$. Note that this is consistent with the results of the semi-infinite system. The full wave function is a superposition of both solutions of $\lambda$, i.e. \begin{align} \Psi\br{{\vec{k}_\parallel},z} = \frac{e^{i{\vec{k}_\parallel}\cdot\vec{r}_\parallel}}{\sqrt{2}}\sum_{s=\pm 1} s c_s e^{\lambda_s z} \psi_s \end{align} The condition for hermiticity now is \begin{align} \label{eq:psi_L} \psi_1^\dagger \br{\chi \sigma_z + u_z}\psi_2 |_{z=-L/2} = \psi_1^\dagger \br{\chi \sigma_z + u_z}\psi_2 |_{z=+L/2}. \end{align} Since our wave function will in general have different weights $\left|\Psi(z=+L/2)\right|^2 \neq \left|\Psi(z=-L/2)\right|^2$ on both surfaces, Eq.\ (\ref{eq:psi_L}) can only be satisfied if both sides of the equation are equal to zero. This leads to the same pseudo-spin polarization and wave function at the surface as Eqs.\ (\ref{eq:s}) and (\ref{eq:psi_inf}), but the free parameter $\alpha_{t/b}$ can be different on top and bottom surface. 
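As a consistency check, the explicit boundary spinor of Eq.\ (\ref{eq:psi_inf}) indeed reproduces the pseudo-spin polarization of Eq.\ (\ref{eq:s}). A minimal numerical verification (plain Python; the function names are ours):

```python
import cmath
import math

def surface_spinor(alpha, uz, chi):
    """Boundary spinor psi(alpha), normalized to |psi|^2 = 1."""
    a = cmath.exp(-1j * alpha / 2) * math.sqrt(1 - chi * uz)
    b = cmath.exp(+1j * alpha / 2) * math.sqrt(1 + chi * uz)
    return (a / math.sqrt(2), b / math.sqrt(2))

def pseudo_spin(psi):
    """Expectation values (s_x, s_y, s_z) = psi^dagger sigma psi."""
    p1, p2 = psi
    sx = 2 * (p1.conjugate() * p2).real
    sy = 2 * (p1.conjugate() * p2).imag
    sz = abs(p1) ** 2 - abs(p2) ** 2
    return (sx, sy, sz)

alpha, uz, chi = 0.7, 0.3, 1
s = pseudo_spin(surface_spinor(alpha, uz, chi))
expected = (math.sqrt(1 - uz**2) * math.cos(alpha),
            math.sqrt(1 - uz**2) * math.sin(alpha),
            -chi * uz)
assert all(abs(a - b) < 1e-12 for a, b in zip(s, expected))
```

In particular $s_z=-\chi u_z$ holds for either chirality, as required by the boundary condition.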
By using these boundary conditions we find an equation for the energy $\epsilon({\vec{k}_\parallel})$: \begin{align} \label{eq:E_finite} \tanh\br{L \sqrt{\frac{k_\parallel^2-\epsilon^2}{1-u_z^2}}} = \frac{\sqrt{k_\parallel^2-\epsilon^2}\cos(\gamma)}{k_y\cos(\theta) - k_x \sin(\theta) - \epsilon \sin(\gamma)} \end{align} where $\alpha_t = \theta + \gamma$ and $\alpha_b -\pi = \theta - \gamma$. For $\theta = 0$ normalization yields \begin{widetext} \begin{align} c_s^2 &= c_0 \frac{\br{\br{(k_x+ik_y)\sin(\gamma)-i\epsilon} \br{i\epsilon - s \sqrt{k_\parallel^2-\epsilon^2}}-k_x(k_x+ik_y)}}{L \epsilon c_0^2 - \sqrt{1-u_z^2}\cos(\gamma)\br{k_y \epsilon - k_\parallel^2\sin(\gamma)}}\\ c_0^2&= \br{\epsilon - k_y\sin(\gamma)}^2 - \br{k_x \cos(\gamma)}^2. \end{align} \end{widetext} Fig.\ (3) shows the resulting Fermi arc and hot-line as a function of the system size $L$. To compute the BC, we obtained the wave function by numerically solving Eq.\ (\ref{eq:E_finite}) and taking a finite symmetric difference quotient for the derivatives. \subsection{D\quad Berry curvature dipole for $\text{TaAs}$ and $\text{TaP}$} In order to access the parameters for TaAs and TaP we performed a density-functional (DFT) calculation for these materials using a generalized gradient approximation \cite{PhysRevLett.77.3865} as implemented in the FPLO code version 48.00-52 \cite{PhysRevB.59.1743}. Using $21^3$ k-points in a box of length $7\cdot 10^{-3}\,\text{\AA}^{-1}$ around the Weyl point, we fitted the DFT data to the generalized model Hamiltonian \begin{align} H_G = V\vec{k}\cdot \boldsymbol{\sigma} + \br{\mu + \vec{u}\cdot \vec{k}} \sigma_0 \end{align} where $\mu$ is the chemical potential and $V=V^T$ a symmetric velocity matrix with chirality $\chi=\text{sign}(\det(V))$.
With an accuracy of $10\,meV$, this yields for TaAs $\mu = -8 \,meV$ and (in units of $10^5\, \frac{m}{s}$) \begin{align} \vec{u} = \begin{pmatrix} -0.86\\0.86\\1.44 \end{pmatrix} ,~~~~ V = \begin{pmatrix} 3.38 & 0.84 & 0.88 \\ 0.84 & 2.53 & 0.98 \\ 0.88 & 0.98 & 2.95 \end{pmatrix}, \end{align} and the Weyl node is located at \begin{align} \vec{k}_{W} = \begin{pmatrix} 0.039 \\ 0.511 \\ 0.309 \end{pmatrix}\,\text{\AA}^{-1}. \end{align} For TaP we find $\mu = 6 \,meV$ and \begin{align} \vec{k}_W=\begin{pmatrix} 0.032\\0.515\\0.314 \end{pmatrix}\,\text{\AA}^{-1}, ~\vec{u} = \begin{pmatrix} -0.86\\0.61\\1.50 \end{pmatrix} ,~V = \begin{pmatrix} 3.08 & 0.69 & 0.74\\ 0.69 & 2.29 & 1.11\\ 0.74 & 1.11 & 2.82 \end{pmatrix}. \end{align} The surface state can be solved in the same manner as in Section A. The surface condition changes to $\vec{n}\cdot V\vec{s}=-u_z$ and we get \begin{align} \lambda = {\vec{k}_\parallel} \cdot \frac{V\br{\vec{s}\times V\vec{n}} - iV\br{V\vec{n}+u_z\vec{s}}}{\left| V\vec{n} \right|^2 - u_z^2}. \end{align} A straightforward calculation yields the BC \begin{align} \vec{\Omega} &= \frac{\text{Im}(\nabla \lambda)\times\text{Re}(\nabla \lambda)}{2\text{Re}^2(\lambda)}\\ &= \frac{ \br{\left| V\vec{n} \right|^2 - u_z^2} \br{V^{-1}\vec{s}\cdot \vec{n}}}{2 \det(V) \br{{\vec{k}_\parallel} \cdot \br{V^{-1}\vec{s} \times \vec{n}}}^2} \vec{n} \end{align} and finally the BCD \begin{align} \vec{D} = \frac{\br{\left| V\vec{n} \right|^2 - u_z^2} \br{V^{-1}\vec{s}\cdot \vec{n}}}{8\pi^2\det(V)k_c \left| V^{-1}\vec{s}\times \vec{n}\right|}\cdot \frac{V\vec{s}+\vec{u}}{1+V^{-1}\vec{s}\cdot \vec{u}}. \end{align}
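As a quick sanity check on the fitted parameters, the chirality $\chi=\text{sign}(\det(V))$ can be read off directly from the TaAs velocity matrix quoted above (a minimal sketch with a hand-written $3\times 3$ determinant):

```python
def det3(M):
    """Determinant of a 3x3 matrix given as nested lists (cofactor expansion)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Fitted symmetric velocity matrix for TaAs, in units of 1e5 m/s.
V_TaAs = [[3.38, 0.84, 0.88],
          [0.84, 2.53, 0.98],
          [0.88, 0.98, 2.95]]

chi = 1 if det3(V_TaAs) > 0 else -1
print(det3(V_TaAs), chi)  # positive determinant, so chirality chi = +1
```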
\section{Introduction} % The present work is a continuation of our previous paper \cite{Uniform}, where we introduced a stick-breaking construction for $\mathcal{D}$-trees (the uniform tree with fixed degree sequence $\mathcal{D}$) to prove that, under natural conditions, $\mathcal{D}$-trees converge, in a GP and a GHP sense, toward either $\P$-trees or ICRT. Here, we derive from \cite{Uniform} similar limits for graph versions of those trees, which have applications to multiplicative graphs and to the configuration model. \subsection{Motivations} Computer scientists have introduced multiplicative graphs \cite{HistoMultA,HistoMultC,HistoMultD} and the configuration model \cite{HistoConfigA,HistoConfigB} as natural generalizations of the Erd\H os--R\'enyi model. They are studied for two main reasons: first, many tools introduced for the Erd\H{o}s--R\'enyi model can also be used to study those graphs; second, those models seem closer to real-life networks thanks to the "inhomogeneity in their degree distribution" (see e.g. Newman \cite{Newman}). For those reasons, they are currently great models for studying the evolution of random networks. A natural question for any model of evolution is to study its potential phase transitions. It appears that those graphs have an intriguing phase transition at which a giant component is born. We refer the reader to \cite{Dhara} Chapter 1 and references therein for an elaborate discussion of the nature of this transition, and an overview of the literature it generated. From the point of view of precise asymptotics, a main goal is to study the geometry of the connected components of those graphs in the critical regime. To this end, Addario-Berry, Broutin, and Goldschmidt \cite{ABG} have developed a general approach in the case of the Erd\H os--R\'enyi model.
This approach is divided into two main steps: \begin{compactitem} \item[(a)] First, one encodes the random graphs into stochastic processes, and studies those processes to deduce several limits for relevant quantities of the largest connected components, such as the size, surplus, and degrees. This was noticed in the ground-breaking work of Aldous \cite{Aldous_exc_ER}. \item[(b)] Then, one uses those convergences to reduce the problem to the study of a single connected component conditioned on those quantities. \end{compactitem} This approach has been further developed for multiplicative graphs and the configuration model in many different regimes. We refer the reader to \cite{ABG,MST2,HomogeneousCase2} for the homogeneous case, \cite{StableConfig, HeavyConfig, StableMarchal} for the power-law case, and \cite{P-graph-2,P-graph-1} for a unified approach for multiplicative graphs. In this paper, we focus on solving (b), under what we believe to be the weakest assumptions. So we reduce the study of the largest connected components to solving (a), which tends to be simpler. Moreover, we give a universal point of view on those models, which can be summarized in the following three points: we describe multiplicative graphs as a degenerate configuration model, we extend the unified point of view of Broutin, Duquesne, and Wang \cite{P-graph-2,P-graph-1} to the configuration model, and we remove the omnipresent randomness assumption on the degree sequence. \subsection{Overview of the proof} \label{1.2} Fix $k\in \mathbb{N}$. Fix $\{V_i\}_{i\in \mathbb{N}}$ a set of vertices. We say that a multigraph $G$ has degree sequence $\mathcal{D}=(d_1,\dots, d_s)$ if $G$ has vertices $(V_1,\dots, V_s)$ and for every $1\leq i \leq s$, $V_i$ has degree $d_i+1$. (This shift of $+1$ will be convenient to simplify many expressions, and to be coherent with \cite{Uniform}.)
The surplus of a connected multigraph $(V,E)$ is $|E|-|V|+1$; informally, it is the number of edges that one needs to delete to transform the multigraph into a tree. A $(\mathcal{D},k)$-graph is a uniform connected multigraph with degree sequence $\mathcal{D}$ and surplus $k$. Our goal is to study the connected components of the configuration model conditioned on having degree sequence $\mathcal{D}$ and surplus $k$, which are close to $(\mathcal{D},k)$-graphs (see Lemma \ref{Connections}). To this end, we rely on two algorithms: the stick-breaking construction of $\mathcal{D}$-trees of \cite{Uniform}, along with the cycle-breaking algorithm introduced by Addario-Berry, Broutin, Goldschmidt, and Miermont \cite{MST}, which we invert to construct a $(\mathcal{D},k)$-graph by adding $k$ edges to a biased $\mathcal{D}$-tree. We use the cycle-breaking algorithm in the following form. Take a connected multigraph with surplus $k$ and repeat $k$ times: choose an edge uniformly among all the edges that can be removed without disconnecting the graph, then cut this edge in the middle. By doing so, we add $2k$ named leaves $(\star_i)_{1\leq i \leq 2k}$, and keep the degrees of $(V_i)_{i\in \mathbb{N}}$. Note that to invert this algorithm we can intuitively repair the broken edges by gluing the different pairs in $(\star_i)_{1\leq i \leq 2k}$. Note, however, that this algorithm is not a bijection, since for each multigraph there are many corresponding trees. To bypass this, we bias each tree by the probability that it is obtained from its corresponding multigraph. This way, we construct a $(\mathcal{D},k)$-graph from a biased $\mathcal{D}$-tree with $k$ additional edges. Thus, to study the geometry of a $(\mathcal{D},k)$-graph, it is enough to study jointly the geometry of a $\mathcal{D}$-tree, the positions of $(\star_i)_{1\leq i \leq 2k}$, and the previous bias, which is a function of $(d(\star_i,\star_j))_{1\leq i,j \leq 2k}$.
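The cycle-breaking step described above is straightforward to simulate. The sketch below (our own illustrative code; connectivity is tested by breadth-first search) cuts $k$ uniformly chosen removable edges of a multigraph in the middle, adding the named leaves $\star_1,\dots,\star_{2k}$ while preserving the degrees of the original vertices:

```python
import random
from collections import deque

def is_connected(vertices, edges):
    """BFS connectivity test on a multigraph given as an edge list."""
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

def cycle_breaking(vertices, edges, k, rng=random):
    """Cut k uniformly chosen removable edges 'in the middle',
    creating the named leaves *1, ..., *2k."""
    vertices, edges = set(vertices), list(edges)
    for i in range(k):
        # Edges whose removal keeps the multigraph connected.
        removable = [e for e in edges
                     if is_connected(vertices, [f for f in edges if f is not e])]
        e = rng.choice(removable)
        edges.remove(e)
        s1, s2 = f"*{2*i+1}", f"*{2*i+2}"
        vertices |= {s1, s2}
        edges += [(e[0], s1), (e[1], s2)]
    return vertices, edges
```

For instance, cutting once in a triangle (surplus $1$) yields a connected graph on $5$ vertices with $4$ edges, i.e. a tree with two extra named leaves.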
Therefore, it is enough to study precisely the distance matrix between specific vertices of a $\mathcal{D}$-tree. If the bias were a continuous function of this matrix, then our main results would directly follow from \cite{Uniform}, since the GP convergence of $\mathcal{D}$-trees implies the convergence of this matrix. However, some extra care is needed since the bias diverges when $(\star_i)_{1\leq i \leq 2k}$ are close. Therefore, we need to prove that $(\star_i)_{1\leq i \leq 2k}$ cannot be too close. More precisely, we show, using the structure of $\mathcal{D}$-trees and of the bias, that it is enough to lower bound $(d(\star_0,\star_i))_{1\leq i \leq k}$, where $\star_0$ is a root leaf. We then use our construction of $\mathcal{D}$-trees, also introduced independently by Addario-Berry, Donderwinkel, Maazoun, and Martin in \cite{Steal1}, to lower bound those distances using the first $k$ repetitions in a random tuple. Finally, since the bias is a function of the subtree spanned by $(\star_i)_{1\leq i \leq 2k}$, it is also a function of the first branches of the stick-breaking construction. This allows us to consider the limit of the bias, and to directly construct the limits of $(\mathcal{D},k)$-graphs by biasing the $\P$-trees and ICRT, introduced by Aldous, Camarri and Pitman \cite{IntroICRT1,IntroICRT2}, and then by gluing the first $k$ pairs of leaves. \paragraph{Plan of the paper:} In Section \ref{TOPOsection} we introduce the topologies that we use in this paper. In Section \ref{2}, we construct $\mathcal{D}$-trees, $\P$-trees, and $\Theta$-ICRT. In Section \ref{ModGraph}, we construct $(\mathcal{D},k)$-graphs, $(\P,k)$-graphs, and $(\Theta,k)$-ICRG. We state our main results in Section \ref{MAINsection}. We study the bias in Section \ref{Bias}. We deduce our main results in Section \ref{May the proof section be removed?}.
Finally, we discuss in Section \ref{ALTEsection} some connections between $(\mathcal{D},k)$-graphs, $(\P,k)$-graphs, the configuration model and multiplicative graphs. \paragraph{Notations:} Throughout the paper, similar variables for $\mathcal{D}$-trees, $(\mathcal{D},k)$-graphs, $\P$-trees, $(\P,k)$-graphs, $\Theta$-ICRT, $(\Theta,k)$-ICRG share similar notations. To avoid any ambiguity, the models that we are using and their parameters are indicated by superscripts $\mathcal{D}$, $(\mathcal{D},k)$, $\P$, $(\P,k)$, $\Theta$, $(\Theta,k)$. We often drop those superscripts when the context is clear. \paragraph{Acknowledgment} Thanks are due to Nicolas Broutin for much advice on the configuration model and on multiplicative graphs. \section{Notions of convergence} \label{TOPOsection} \subsection{Gromov--Prokhorov (GP) topology} \label{GPdef} A measured metric space is a triple $(X,d,\mu)$ such that $(X,d)$ is a Polish space and $\mu$ is a Borel probability measure on $X$. Two such spaces $(X,d,\mu)$, $(X',d',\mu')$ are called isometry-equivalent if there exists an isometry $f:X\to X'$ such that $f_\star \mu=\mu'$, where $f_\star \mu$ denotes the image of $\mu$ by $f$. Let $\mathbb{K}_{\text{GP}}$ be the set of isometry-equivalence classes of measured metric spaces. Given a measured metric space $(X,d,\mu)$, we write $[X,d,\mu]$ for the isometry-equivalence class of $(X,d,\mu)$ and frequently use the notation $X$ for either $(X,d,\mu)$ or $[X,d,\mu]$. We now recall the definition of the Prokhorov distance. Consider a metric space $(X,d)$. For every $A\subset X$ and $\varepsilon>0$ let $A^\varepsilon:= \{x\in X, d(x,A)<\varepsilon\}$.
Then, given two (Borel) probability measures $\mu$, $\nu$ on $X$, the Prokhorov distance between $\mu$ and $\nu$ is defined by \[d_P(\mu, \nu):= \inf\{\text{ $\varepsilon>0$: $\mu\{A\}\leq \nu \{A^\varepsilon\}$ and $\nu\{A\}\leq \mu\{A^\varepsilon\}$, for all Borel set $A\subset X$} \}.\] The Gromov--Prokhorov (GP) distance is an extension of the Prokhorov distance: for every $(X,d,\mu),(X',d',\mu')\in \mathbb{K}_{\text{GP}}$ the Gromov--Prokhorov distance between $X$ and $X'$ is defined by \[ d_{\text{GP}}((X,d,\mu),(X',d',\mu')):=\inf_{S,\phi,\phi'} d_P(\phi_\star \mu, \phi'_\star\mu'),\] where the infimum is taken over all metric spaces $S$ and isometric embeddings $\phi :X\to S$, $\phi' :X'\to S$. $d_{\text{GP}}$ is indeed a distance on $\mathbb{K}_{\text{GP}}$ and $(\mathbb{K}_{\text{GP}},d_{\text{GP}})$ is a Polish space (see e.g. \cite{GHP}). We use another convenient characterization of the GP topology: for every measured metric space $(X,d^X,\mu^X)$ let $(x_i^X)_{i\in \mathbb{N}}$ be a sequence of i.i.d. random variables with common distribution $\mu^X$ and let $M^X:=(d^X(x_i^X,x_j^X))_{(i,j)\in \mathbb{N}^2}$. We prove the following result in \cite{Uniform} (see also \cite{EquivGP}). \begin{lemma} \label{equivGP2} Let $(X^n)_{n\in \mathbb{N}}\in \mathbb{K}_{\text{GP}}^\mathbb{N}$ and let $X\in \mathbb{K}_{\text{GP}}$. Let $(y^X_i)_{i\in \mathbb{N}}$ be a sequence of random variables on $X$ and let $N^X:= (d^X(y_i^X,y_j^X))_{(i,j)\in \mathbb{N}^2}$. If \[ M^{X_n} \limit^{(d)} N^X \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^n \delta_{y^X_i} \limit^{(d)} \mu^X, \] then $X^n \limit^{\text{GP}} X$. \end{lemma} \subsection{Gromov--Hausdorff (GH) topology} \label{GH} Let $\mathbb{K}_{\text{GH}}$ be the set of isometry-equivalence classes of compact metric spaces. For every metric space $(X,d)$, we write $[X,d]$ for the isometry-equivalence class of $(X,d)$, and frequently use the notation $X$ for either $(X,d)$ or $[X,d]$.
For every metric space $(X,d)$, the Hausdorff distance between $A,B\subset X$ is given by \[d_H(A,B):= \inf\{\varepsilon>0, A\subset B^\varepsilon, B\subset A^\varepsilon \}. \] The Gromov--Hausdorff distance between $(X,d)$, $(X',d')\in \mathbb{K}_{\text{GH}}$ is given by \[ d_{\text{GH}}((X,d),(X',d')):=\inf_{S,\phi,\phi'} \left (d_H(\phi(X), \phi'(X')) \right ),\] where the infimum is taken over all metric spaces $S$ and isometric embeddings $\phi :X\to S$, $\phi' :X'\to S$. $d_{\text{GH}}$ is indeed a distance on $\mathbb{K}_{\text{GH}}$ and $(\mathbb{K}_{\text{GH}},d_{\text{GH}})$ is a Polish space (see e.g. \cite{GHP}). \subsection{Pointed Gromov--Hausdorff ($\text{GH}^n$) topology} \label{PointedGH} Let $n\in \mathbb{N}$. Let $(X,d,(x_1,\dots, x_n))$ and $(X',d',(x'_1,\dots, x'_n))$ be metric spaces, each equipped with an ordered sequence of $n$ distinguished points (we call such spaces $n$-pointed metric spaces). We say that these two $n$-pointed metric spaces are isometric if there exists an isometry $\phi$ from $(X,d)$ onto $(X',d')$ such that for every $1\leq i \leq n$, $\phi(x_i)=x'_i$. Let $\mathbb{K}^n_{\text{GH}}$ be the set of isometry-equivalence classes of $n$-pointed compact metric spaces. As before, we write $[X,d,(x_1,x_2,\dots, x_n)]$ for the isometry-equivalence class of $(X,d,(x_1,\dots, x_n))$, and denote either by $X$ when there is little chance of ambiguity. The $n$-pointed Gromov--Hausdorff distance between $X,X'\in \mathbb{K}^n_{\text{GH}}$ is given by \[ d_{\text{GH}}^n((X,d,(x_1,\dots, x_n)),(X',d',(x'_1,\dots, x'_n))):=\inf_{S,\phi,\phi'} \left (d_H(\phi(X), \phi'(X')) \right ),\] where the infimum is taken over all metric spaces $S$ and isometric embeddings $\phi :X\to S$, $\phi' :X'\to S$ such that for every $1\leq i \leq n$, $\phi(x_i)=\phi'(x'_i)$. $d^n_{\text{GH}}$ is indeed a distance on $\mathbb{K}^n_{\text{GH}}$ and $(\mathbb{K}^n_{\text{GH}},d^n_{\text{GH}})$ is a Polish space (see \cite{MST} Section 2.1).
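Since $d_H$ and $d_P$ are both defined through an infimum over enlargements $A^\varepsilon$, a small computational illustration may help. The following Python sketch (helper names and data layout are ours; brute force, so only feasible on spaces with a handful of points) computes the Hausdorff distance between finite subsets, and the Prokhorov distance in its classical L\'evy--Prokhorov form $\inf\{\varepsilon>0: \mu\{A\}\leq \nu\{A^\varepsilon\}+\varepsilon \text{ and } \nu\{A\}\leq \mu\{A^\varepsilon\}+\varepsilon \text{ for all } A\}$ between two probability measures on a small finite metric space, by bisecting over $\varepsilon$.

```python
from itertools import combinations

def hausdorff(A, B, d):
    # d_H(A, B) = max( sup_{a in A} d(a, B), sup_{b in B} d(b, A) )
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

def prokhorov(mu, nu, d):
    # Brute-force Levy-Prokhorov distance on a tiny finite metric space:
    # inf{eps > 0 : mu{A} <= nu{A^eps} + eps and nu{A} <= mu{A^eps} + eps
    # for every A}, with A^eps = {x : d(x, A) < eps}.  The measures are
    # dicts mapping points to masses; every nonempty subset A is checked.
    pts = sorted(set(mu) | set(nu))
    subsets = [s for r in range(1, len(pts) + 1) for s in combinations(pts, r)]

    def ok(eps):
        for A in subsets:
            a_eps = [x for x in pts if min(d(x, a) for a in A) < eps]
            m_a = sum(mu.get(x, 0.0) for x in A)
            n_a = sum(nu.get(x, 0.0) for x in A)
            m_ae = sum(mu.get(x, 0.0) for x in a_eps)
            n_ae = sum(nu.get(x, 0.0) for x in a_eps)
            if m_a > n_ae + eps or n_a > m_ae + eps:
                return False
        return True

    lo, hi = 0.0, 1.0  # between probability measures, eps = 1 always works
    for _ in range(50):  # ok(eps) is monotone in eps, so bisect
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi
```

For instance, with $d(x,y)=|x-y|$ on the line, the Hausdorff distance between $\{0,1\}$ and $\{0,3\}$ is $2$, and the Prokhorov distance between two Dirac masses at $0$ and $1$ is $1$, as expected.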
\subsection{Extension to pseudo metric spaces} Note that the previous topologies naturally extend to pseudo metric spaces. Indeed, one may say that a pseudo metric space $(X,d)$ is isometry-equivalent to the metric space given by quotienting $X$ by the equivalence relation $d(a,b)=0$ (see Burago, Burago, Ivanov \cite{Glue} for details). It is then enough to extend the equivalence classes to pseudo metric spaces. \section{Constructions of $\mathcal{D}$-trees, $\P$-trees and $\Theta$-ICRT} \label{2} \subsection{$\mathcal{D}$-trees} \label{2.2} Recall that a sequence $(d_1,\dots, d_{s})$ is a degree sequence of a tree if and only if $\sum_{i=1}^s d_i=s-2$, and by convention $d_1\geq d_2\geq \dots \geq d_{s}$. Let $\Omega_{\D}$ be the set of such sequences. For convenience, we want to label our leaves by a set $\{\star_i\}_{i\in \mathbb{N}}$ disjoint from $\{V_i\}_{i\in \mathbb{N}}$. So let us slightly change our definition of $\mathcal{D}$-trees. Note that a tree with degree sequence $\mathcal{D}$ must have $N^\mathcal{D}+2:=\sum_{i=1}^s \mathbf{1}_{d_i=0}$ leaves. We say that a tree $T$ is a $\mathcal{D}$-tree if it is uniform among all trees with vertices $\{V_i\}_{i:d_i>0}\cup \{\star_i\}_{0\leq i \leq N+1}$ and such that for every $i$ with $d_i>0$, $\deg(V_i)=d_i+1$. We now recall the construction of $\mathcal{D}$-trees of \cite{Uniform}. For simplicity, for every graph $G=(V,E)$ and edge $e=\{v_1,v_2\}$, $G\cup e$ denotes the graph $(V\cup \{v_1,v_2\},E\cup \{e\})$.
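Before stating the algorithm, here is a minimal Python transcription of the stick-breaking idea behind Algorithm \ref{D-tree} below (our own illustrative code, not the implementation of \cite{Uniform}): the uniform $\mathcal{D}$-tuple is scanned once, a repeated vertex closes the current branch with a fresh leaf, and one last leaf ends the construction.

```python
import random

def d_tree(degrees, rng=random.Random(0)):
    # Illustrative stick-breaking sketch.  'degrees' is (d_1, ..., d_s) with
    # sum(d_i) = s - 2; the output is the edge list of a tree in which V_i
    # has degree d_i + 1 and the vertices *0, *1, ... are leaves.
    s = len(degrees)
    assert sum(degrees) == s - 2
    # uniform D-tuple: V_i appears d_i times, in uniformly random order
    tup = [f"V{i}" for i, d in enumerate(degrees, start=1) for _ in range(d)]
    rng.shuffle(tup)
    edges, seen, leaf, prev = [], set(), 0, "*0"
    for a in tup:
        if a not in seen:            # new vertex: extend the current branch
            edges.append((prev, a))
            seen.add(a)
        else:                        # repeat: close the branch with a fresh leaf
            leaf += 1
            edges.append((prev, f"*{leaf}"))
        prev = a
    edges.append((prev, f"*{leaf + 1}"))   # final leaf ends the construction
    return edges
```

On $\mathcal{D}=(1,2,1,3,3,0,\dots,0)$, whatever the shuffled tuple is, the sketch returns a tree with $s-1=11$ edges in which $V_4$ and $V_5$ have degree $4$ and there are $7$ leaves.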
\begin{algorithm}[Algorithm 7 from \cite{Uniform}] \label{D-tree} \emph{Stick-breaking construction of a $\mathcal{D}$-tree $T^\mathcal{D}$ (see Figure \ref{explore1}).} \begin{compactitem} \item[-] Let $A^\mathcal{D}=(A^\mathcal{D}_i)_{1\leq i \leq s-2}$ be a uniform $\mathcal{D}$-tuple (tuple such that for every $i\in \mathbb{N}$, $V_i$ appears $d_i$ times). \item[-] Let $T^\mathcal{D}_1:=(\{\star_0,A_1\},\{\{\star_0,A_1\}\})$ then for every $2\leq i \leq s-1$ let \[ T^\mathcal{D}_i:=\begin{cases} T_{i-1}\cup \{A_{i-1},A_{i}\} & \text{if } A_{i}\notin T_{i-1}, \\ T_{i-1}\cup \{A_{i-1},\star_{\inf\{k,\, \star_k\notin T_{i-1}\}}\}& \text{if } A_{i} \in T_{i-1} \text{ or } i=s-1. \end{cases} \] \item[-] Let $T^\mathcal{D}=T_{s-1}$. \end{compactitem} \end{algorithm} \begin{figure}[!h] \centering \includegraphics[scale=0.5]{Uniform_Exploration4.eps} \caption{Stick breaking construction of a $\mathcal{D}$-tree with $\mathcal{D}=(1,2,1,3,3,0,0,\dots)$ and $(A^\mathcal{D}_i)_{1\leq i \leq s-2}=(V_4,V_5,V_2 ,V_5,V_3,V_4,V_5,V_4,V_1,V_2)$. The exploration starts at $\star_0$ then follows the white-black arrow toward $\star_1$, then jumps to $V_5$ to follow the path toward $\star_2$ and so on\dots } \label{explore1} \end{figure}% \subsection{$\P$-trees} \label{2.3} Let $\{V_{\infty,i}\}_{i\in \mathbb{N}}$ be a set of vertices disjoint from $\{V_i\}_{i\in \mathbb{N}}$ and $\{\star_i\}_{i\geq 0}$. Let $\Omega_{\P}$ be the set of sequences $(p_i)_{i\in \mathbb{N}\cup \{\infty\}}$ in $\mathbb{R}^+$ such that $\sum_{i=1}^\infty p_i+p_{\infty}=1$, $p_1>0$ and $p_1\geq p_2\geq \dots$. For every $\P\in \Omega_{\P}$, the $\P$-tree is the random tree constructed as follows: \begin{algorithm} \label{P-tree} \emph{Definition of the $\P$-tree for $\P\in \Omega_{\P}$.} \begin{compactitem} \item[-] Let $(A^\P_i)_{i\in \mathbb{N}}$ be a family of i.i.d. random variables such that for all $i\in \mathbb{N}\cup \{\infty\}$, $\mathbb{P}(A^\P_1=V_i)=p_i$.
\item[-] For every $i\in \mathbb{N}$, let $B^\P_i=A_i$ if $A_i\neq V_\infty$, and let $B^\P_i=V_{\infty,i}$ otherwise. \item[-] Let $T^\P_1:=(\{\star_0,B_1\},\{\{\star_0,B_1\}\})$ then for every $i\geq 2$ let \begin{equation*} T^\P_i:=\begin{cases} T_{i-1}\cup \{B_{i-1},B_{i}\} & \text{if } B_{i}\notin T_{i-1}, \\ T_{i-1} \cup \{B_{i-1},\star_{\inf\{k,\, \star_k\notin T_{i-1}\}}\}& \text{if } B_{i} \in T_{i-1}. \end{cases} \end{equation*} \item[-] Let $T^\P:=\bigcup_{n\in \mathbb{N}}T_n$. \end{compactitem} \end{algorithm} \begin{remark} Usually, the leaves $\{\star_i\}_{i\in \mathbb{N}}$ are omitted in the formal definition of $\P$-trees. We consider them to clarify the intuition that $\P$-trees are degenerate $\mathcal{D}$-trees with an infinite number of leaves. \end{remark} \subsection{ICRT} \label{2.4} \begin{figure}[!h] \centering \includegraphics[scale=0.6]{ICRT_glue5.eps} \caption{A typical step of the stick-breaking construction: the "gluing" of $(y_i,y_{i+1}]$ at $z_i$. } \label{SB} \end{figure} First let us introduce a generic stick breaking construction. It takes as input two sequences in $\mathbb{R}^+$ called cuts ${\textbf y}=(y_i)_{i\in \mathbb{N}}$ and glue points ${\textbf z}=(z_i)_{i\in \mathbb{N}}$, which satisfy \begin{equation} \forall i<j,\ \ y_i<y_j \qquad ; \qquad y_i\limit \infty \qquad ; \qquad \forall i\in \mathbb{N},\ \ z_i\leq y_i, \label{2609} \end{equation} and creates an $\mathbb{R}$-tree (loopless geodesic metric space) by recursively "gluing" the segment $(y_i,y_{i+1}]$ at position $z_i$, or rigorously, by constructing a consistent sequence of distances $(d_n)_{n\in \mathbb{N}}$ on $([0,y_n])_{n\in \mathbb{N}}$. \begin{algorithm} \label{Alg1} \emph{Generic stick-breaking construction of $\mathbb{R}$-trees.} \begin{compactitem} \item[--] Let $d_0$ be the trivial metric on $[0,0]$.
\item[--] For each $i\geq 0$ define the metric $d_{i+1}$ on $[0, y_{i+1}]$ such that for each $x\leq y$: \[ d_{i+1}(x,y):= \begin{cases} d_{i}(x,y) & \text{if } x,y\in [0, y_i] \\ d_{i}(x,z_i)+|y-y_i| & \text{if } x \in [0, y_i], \, y \in (y_i, y_{i+1}] \\ |x-y| & \text{if } x,y\in (y_i, y_{i+1}], \end{cases} \] where by convention $y_0:=0$ and $z_0:=0$. \item[--] Let $d$ be the unique metric on $\mathbb{R}^+$ which agrees with $d_i$ on $[0, y_i]$ for each $i\in \mathbb{N}$. \item[--] Let $\SBB({\textbf y},{\textbf z})$ be the completion of $(\mathbb{R}^+,d)$. \end{compactitem} \end{algorithm} Now, let $\Omega_{\Theta}$ be the space of sequences $(\theta_i)_{i\in \{0\}\cup \mathbb{N}}$ in $\mathbb{R}^+$ such that $\sum_{i=0}^\infty \theta_i^2=1$ and such that $\theta_1\geq \theta_2\geq \dots$. For every $\Theta\in \Omega_{\Theta}$, the $\Theta$-ICRT is the random $\mathbb{R}$-tree constructed as follows: \begin{algorithm} \label{ICRT} \emph{Construction of the $\Theta$-ICRT (from \cite{ICRT1}).} \begin{compactitem} \item[-] Let $(X_i)_{i\in \mathbb{N}}$ be a family of independent exponential random variables with respective rates $(\theta_i)_{i\in \mathbb{N}}$. \item[-] Let $\mu$ be the measure on $\mathbb{R}^+$ defined by $\mu=\theta_0^2\, dx+\sum_{i=1}^{\infty} \theta_i \delta_{X_i}$. \item[-] Let $(Y_i,Z_i)_{i\in \mathbb{N}}$ be a Poisson point process on $\{(y,z)\in \mathbb{R}^{+2}: y\geq z \}$ of intensity $dy\times d\mu$. \item[-] Let $\textbf{Y}:=(Y_i)_{i\in \mathbb{N}}$ and let $\textbf{Z}:=(Z_i)_{i\in \mathbb{N}}$. Let $(Y_0,Z_0):=(0,0)$. \item[-] The $\Theta$-ICRT is defined as $(T,d)=\SBB(\textbf Y,\textbf Z)$ (see Algorithm \ref{Alg1}). \end{compactitem} \end{algorithm} \section{Constructions of $(\mathcal{D},k)$-graphs, $(\P,k)$-graphs and $(\Theta,k)$-ICRG} \label{ModGraph} \subsection{Generic gluing and cycle-breaking of discrete multigraphs (see Figure \ref{explore2})} \label{DefGraph} In the entire section, $G=(V,E)$ denotes a multigraph.
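Before turning to multigraphs, the generic stick-breaking of Algorithm \ref{Alg1} can be made concrete: unfolding the three cases of $d_{i+1}$ gives a short recursion for the distance on finitely many sticks. The following Python sketch (illustrative only; the function names and the convention that points beyond the last cut sit on the last stick are ours) computes this distance and can be checked against the four-point condition of a tree metric. In Algorithm \ref{ICRT}, $\textbf Y$ would be the Poisson cuts and $\textbf Z$ the glue points; here any finite admissible pair can be fed in.

```python
import bisect

def sb_distance(Y, Z):
    # Metric of the stick-breaking tree built from finitely many cuts Y and
    # glue points Z, with Y[0] = Z[0] = 0 and Z[i] <= Y[i]: the segment
    # (Y[i], Y[i+1]] is glued at Z[i]; points beyond Y[-1] lie on the last stick.
    def seg(t):
        # index i such that t belongs to the stick (Y[i], Y[i+1]]
        return bisect.bisect_left(Y, t) - 1 if t > 0 else 0

    def d(x, y):
        i, j = seg(x), seg(y)
        if i == j:                       # same stick: distance along the stick
            return abs(x - y)
        if i < j:                        # make x the point on the later stick
            x, y, i, j = y, x, j, i
        return (x - Y[i]) + d(Z[i], y)   # go down through the glue point
    return d
```

For example, with cuts $(0,1,2)$ and glue points $(0,0.5,1.5)$, the point $2.5$ hangs at height $0.5$ above $1.5$, which itself hangs at height $0.5$ above $0.5$, so its distance to $0.3$ is $0.5+0.5+0.2=1.2$.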
Let $\cyc(G)$ be the set of all edges $e\in E$ such that $G\backslash \{e\}:=(V,E\backslash\{e\})$ is connected. (For multiple edges, the operation $\backslash$ removes only one edge at a time.) Let $\square(G):=\#\cyc(G)$. For every pair of leaves $L_1\neq L_2\in G$, we define the operation of gluing $L_1$ and $L_2$ in $G$ as follows: For every leaf $L\in G$, let the father of $L$ be the only vertex $F\in G$ such that $\{F,L\}\in E$. Let $F_1$, $F_2$ be the fathers of $L_1$ and $L_2$. The multigraph obtained by gluing $L_1$ and $L_2$ in $G$ is \[ \mathcal{G}_{L_1,L_2}(G):=(V/\{L_1,L_2\}, E\backslash \{ \{F_1,L_1\},\{F_2,L_2\} \}\cup \{ \{F_1,F_2\}\}), \] and intuitively corresponds to the graph obtained by fusing $\{F_1,L_1\}$ and $\{F_2,L_2\}$. Similarly, for all distinct leaves $L_1, L_2, \dots, L_{2k-1}, L_{2k}$, the multigraph obtained by gluing $L_1$ and $L_2$, \dots, $L_{2k-1}$ and $L_{2k}$ in $G$ is \[ \mathcal{G}_{(L_i)_{1\leq i \leq 2k}}(G)=\mathcal{G}_{(L_1,L_2),(L_3,L_4)\dots, (L_{2k-1},L_{2k})}(G):= \mathcal{G}_{L_{1},L_{2}}\circ \mathcal{G}_{L_3,L_4} \circ \dots \circ \mathcal{G}_{L_{2k-1},L_{2k}}(G). \] Note that this multigraph does not depend on the order in which we glue the different leaves. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{Uniform_Exploration5.eps} \caption{Gluing leaves of the tree $T$ from Figure \ref{explore1} to form a graph $G$ with surplus 2. $\cyc(G)$ is in red. $\square(G)=5$. $G=\mathcal{G}_{(\star_1,\star_2,\star_3,\star_4)}(T)$. Also, $\mathbb{P}(\CB(G)=T)=\frac{2!}{2^2\square(G\backslash\{V_4,V_5\})\square(G)}=\frac{2}{2^2\times 3\times 5}$.} \label{explore2} \end{figure}% Now recall Section \ref{1.2}. Let us give a formal definition of the cycle-breaking algorithm: \begin{algorithm} \label{CBalg} Cycle-breaking of a multigraph $G=(V,E)$ with $V\subset \{V_i\}_{i\in \mathbb{N}}$ and surplus $k$.
\begin{compactitem} \item[-] For $1\leq i \leq k$, let $e_i=(W_{2i-1},W_{2i})$ be a uniform oriented edge in $\cyc(G\backslash \{e_j\}_{1\leq j<i})$. \item[-] Let $\CB(G):= (V\cup\{\star_i\}_{1 \leq i \leq 2k},(E\backslash \{e_i\}_{1\leq i \leq k})\cup \{\{W_i, \star_{2k+1-i}\}\}_{1\leq i \leq 2k})$. \end{compactitem} \end{algorithm} To simplify our notations, for every multigraph $G=(V,E)$ and $v,w\in V$, we write $\#_{v,w}(G)$ for the number of edges $\{v,w\}$ in $G$. Also, let $\circ(G):=\prod_{v\in V}2^{\#_{v,v}(G)}\prod_{v,w\in V}\#_{v,w}(G)!$. \begin{lemma} \label{CBapp} For every connected multigraph $G$ with $V\subset \{V_i\}_{i\in \mathbb{N}}$ and surplus $k$, we have: \begin{compactitem} \item[(a)] $\CB(G)$ is almost surely a tree with vertices $V\cup \{\star_i\}_{1\leq i \leq 2k}$. \item[(b)] For every $v\in V$, $\deg_{\CB(G)}(v)=\deg_G(v)$. For every $1\leq i \leq 2k$, $\star_i$ is a leaf in $\CB(G)$. \item[(c)] Almost surely, $\mathcal{G}_{(\star_1, \star_2),\dots, (\star_{2k-1},\star_{2k})}(\CB(G))=G$. \item[(d)] For every tree $T$ satisfying (a) and (b), \begin{equation} \mathbb{P}(\CB(G)=T)= \frac{\circ(G)}{2^k\prod_{i=1}^{k} \square(G\backslash \{e_j\}_{1\leq j<i})}. \label{1910} \end{equation} \end{compactitem} \end{lemma} \begin{proof} (a) and (b) follow from a quick enumeration. (c) is easy to prove from the definition of $\mathcal{G}$. (d) follows from an induction. Indeed, the right hand side of \eqref{1910} is just the product over each step of the probability that $(W_{2i+1},W_{2i+2})$ satisfies $\{ W_{2i+1},\star_{2k-2i}\},\{W_{2i+2},\star_{2k-2i-1}\}\in T$. \end{proof} \subsection{$(\mathcal{D},k)$-graph} \label{DkDef} Note that $(d_1,\dots, d_{s})$ is a degree sequence of a connected multigraph with surplus $k$ if and only if $\sum_{i=1}^s d_i=s+2k-2$, and by convention $d_1\geq d_2\geq \dots \geq d_{s}$. Note that by adding $2k$ numbers $0$, this holds if and only if $(d_1,\dots, d_{s},0,\dots, 0)\in \Omega_{\D}$.
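To make the definitions of $\cyc$, $\square$ and $\CB$ concrete, here is a brute-force Python sketch (the names and data layout are ours, and the uniform oriented edge is drawn by picking an edge of $\cyc$ and an orientation uniformly): an edge belongs to $\cyc(G)$ when removing one copy of it keeps the multigraph connected, and cycle-breaking repeatedly removes such an edge, hanging a pendant leaf on each of its two endpoints.

```python
import random
from collections import defaultdict

def is_connected(vertices, edges):
    # DFS connectivity test; 'edges' is a list, so multi-edges are allowed
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def cyc(vertices, edges):
    # indices of edges whose removal (one copy at a time) keeps G connected
    return [i for i in range(len(edges))
            if is_connected(vertices, edges[:i] + edges[i + 1:])]

def cycle_break(vertices, edges, k, rng=random.Random(1)):
    # Sketch of cycle-breaking: k times, remove a uniform 'cycle' edge with a
    # uniform orientation and hang a pendant leaf on each of its endpoints.
    vertices, edges = set(vertices), list(edges)
    star = 2 * k
    for _ in range(k):
        u, v = edges.pop(rng.choice(cyc(vertices, edges)))
        if rng.random() < 0.5:
            u, v = v, u
        vertices |= {f"*{star - 1}", f"*{star}"}
        edges += [(u, f"*{star}"), (v, f"*{star - 1}")]
        star -= 2
    return vertices, edges
```

Each removal decreases the surplus by one while the two pendant edges are bridges, so after $k$ steps the result is a tree, which is the content of Lemma \ref{CBapp} (a).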
For convenience, let us slightly extend our definition of $(\mathcal{D},k)$-graphs. For $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq 2k$ we say that $G$ is a $(\mathcal{D},k)$-graph if it is uniform among all multigraphs with vertices $\{V_i\}_{i:d_i>0}\cup \{\star_i\}_{i\in\{0\}\cup \{2k+1,\dots, N^\mathcal{D}+1\}}$ and such that for every $i$ with $d_i>0$, $\deg(V_i)=d_i+1$. The following result follows from Lemma \ref{CBapp} and constructs a $(\mathcal{D},k)$-graph from a biased $\mathcal{D}$-tree. \begin{lemma} \label{constructDK} Let $T^{\mathcal{D},k}$ be a random tree. Assume that for every tree $T$ which has vertices $\{V_i\}_{1\leq i\leq s}\cup \{\star_i\}_{1\leq i \leq 2k}$, which satisfies $\deg_T(V_i)=d_i+1$ for every $1\leq i \leq s$, and whose vertices $\{\star_i\}_{1\leq i \leq 2k}$ are leaves, \begin{equation} \mathbb{P}(T^{\mathcal{D},k}=T)\propto \frac{\circ(\mathcal{G}_{(\star_i)_{1\leq i \leq 2k}}(T))}{\prod_{i=1}^{k} \square(\mathcal{G}_{(\star_{1},\star_{2}),\dots, (\star_{2i-1},\star_{2i})}(T))}. \label{19102}\end{equation} Then $\mathcal{G}_{(\star_i)_{1\leq i \leq 2k}}(T^{\mathcal{D},k})$ is a $(\mathcal{D},k)$-graph. \end{lemma} To simplify our notations, we write for every $i\in \mathbb{N}$, $\square_i(\cdot):=\square(\mathcal{G}_{(\star_{1},\star_{2}),\dots, (\star_{2i-1},\star_{2i})}(\cdot))$ and $\square_{\square,k}(\cdot):= \circ(\mathcal{G}_{(\star_i)_{1\leq i \leq 2k}}(\cdot))/\prod_{i=1}^{k} \square_i(\cdot)$, so that the right hand side of \eqref{19102} is $\square_{\square,k}(T)$. \subsection{$(\P,k)$-graph} \label{PkDef} Since $\P$-trees appear as the limit of $\mathcal{D}$-trees, it is natural to adapt Lemma \ref{constructDK} to construct limits for $(\mathcal{D},k)$-graphs from $\P$-trees. Thus we informally define the $(\P,k)$-graph as a $\P$-tree biased by \eqref{1910} where we glued $\{\star_{2i-1},\star_{2i}\}_{1\leq i \leq k}$. Below we formally define the $(\P,k)$-graph. Fix $\P\in \Omega_{\P}$.
First note that Algorithm \ref{P-tree} can be seen as a function $\text{AB}$ (Aldous--Br\"oder) which takes a tuple $A^\P$ in $\Omega_\text{AB}:=(\{V_i\}_{i\in \mathbb{N}\cup\{\infty\}})^\mathbb{N}$ and returns a tree $T^\P$. We equip $\Omega_\text{AB}$ with the weak topology and let $\mathcal{B}_\text{AB}$ be the Borel algebra of this space. Also, we equip $\Omega_\text{AB}$ with the distribution $\mathbb{P}^\P$ of $(A_i^\P)_{i\in \mathbb{N}}$, and complete the space so that events of null $\mathbb{P}^\P$-measure are measurable. Then note that $\square_{\square,k}\circ \text{AB}$ is a measurable function from $\Omega_\text{AB}$ to $\mathbb{R}^+$ since it is locally constant on the subspace of tuples that have at least $2k$ repetitions. Also, note that $\square_{\square,k}\circ \text{AB}\leq (k+1)!2^k$. Thus we may define $\mathbb{P}^{\P,k}$ on $(\Omega_\text{AB},\mathcal{B}_\text{AB})$ such that for every Borel set $B\in \mathcal{B}_\text{AB}$, \[ \mathbb{P}^{\P,k}(B)=\mathbb{E}[\mathbf{1}_{A^\P\in B}\square_{\square,k}(\text{AB}(A^\P))]/\mathbb{E}[\square_{\square,k}(\text{AB}(A^\P))]. \] Now let $A^{\P,k}$ be a random variable with distribution $\mathbb{P}^{\P,k}$. Then let $T^{\P,k}:=\text{AB}(A^{\P,k})$. The $(\P,k)$-graph is the random graph $G^{\P,k}:=\mathcal{G}_{(\star_i)_{1\leq i \leq 2k}}(T^{\P,k})\backslash \{\star_i\}_{i\in \mathbb{N}}$. \subsection{$(\Theta,k)$-ICRG} \label{4.4} \label{TkDef} Since the $\Theta$-ICRT appears as the limit of $\mathcal{D}$-trees, it is natural to adapt Lemma \ref{constructDK} to construct limits for $(\mathcal{D},k)$-graphs from the $\Theta$-ICRT. Thus we informally define the $(\Theta,k)$-ICRG as the $\Theta$-ICRT biased by \eqref{1910} where we glued $\{\star_{2i-1},\star_{2i}\}_{1\leq i \leq k}$. Below we formally define the $(\Theta,k)$-ICRG. We keep the presentation brief and refer to Chapter 3 of \cite{Glue} or to the $\mathbb{R}$-graph theory of \cite{MST} for more details.
First we formally define the gluing of two points: For every pseudo metric space $(X,d)$ and $x_1,x_2\in X$ let $\mathfrak{G}_{x_1,x_2}((X,d))$ be the pseudo metric space $(X,d')$ where for every $a_1,a_2\in X$, \[ d'(a_1,a_2):=\inf \{d(a_1,a_2)\,; \,d(a_1,x_1)+d(a_2,x_2) \, ;\, d(a_1,x_2)+d(a_2,x_1)\}. \] Also for every $k\in \mathbb{N}$ and $x_1, x_2, \dots, x_{2k}\in X$ let \[ \mathfrak{G}_{(x_i)_{1\leq i \leq 2k}}((X,d))=\mathfrak{G}_{(x_1,x_2),\dots, (x_{2k-1},x_{2k})}((X,d)):=\mathfrak{G}_{x_1,x_2}\circ \mathfrak{G}_{x_3,x_4}\circ \dots \circ \mathfrak{G}_{x_{2k-1},x_{2k}}((X,d)). \] One can check that $\mathfrak{G}_{(x_i)_{1\leq i \leq 2k}}((X,d))$ does not depend on the order of the pairs $(x_{2i-1},x_{2i})_{1\leq i \leq k}$. Recall Section \ref{2.4}. Let $\mathbb{K}_{\text{\textbf{yz}}}$ be the set of couples of sequences $\textbf y$ and $\textbf z$ satisfying \eqref{2609}. In Section \ref{2.4} we defined the stick breaking construction as a function $\text{SB}:(\textbf y, \textbf z)\in \mathbb{K}_{\text{\textbf{yz}}}\mapsto\text{SB}(\textbf y, \textbf z)$. For every $n\in \mathbb{N}$ and $(\textbf y,\textbf z)=((y_i)_{i\in \mathbb{N}}, (z_i)_{i\in \mathbb{N}})\in \mathbb{K}_{\text{\textbf{yz}}}$ let $\cyc_n(\textbf y, \textbf z)$ be the set of $x\in \mathbb{R}$ such that $\mathfrak{G}_{(y_i)_{1\leq i \leq 2n}}(\text{SB}(\textbf y,\textbf z))\backslash\{x\}$ is connected. Note that $\cyc_n(\textbf y, \textbf z)$ is a finite union of intervals and is therefore measurable. Let $\square_n(\textbf y, \textbf z)$ be its Lebesgue measure. Note that $\square_n(\textbf y, \textbf z)$ only depends on $\{y_i\}_{1\leq i \leq n},\{z_i\}_{1\leq i \leq n}$, and is a measurable function of $(\{y_i\}_{1\leq i \leq n},\{z_i\}_{1\leq i \leq n} )$ (see Lemma \ref{CHIANTa}). Let $\square_{\square,k}(\textbf y, \textbf z):=1/\prod_{n=1}^k \square_n(\textbf y, \textbf z)$. Let $\mathbb{M}$ be the set of all positive locally finite measures on $\mathbb{R}^+$.
Let $\mathbb{K}_{\text{SB}}:= \mathbb{M}\times \mathbb{K}_{\text{\textbf{yz}}}$. We equip $\mathbb{K}_{\text{SB}}$ with the weak topology and let $\mathcal{B}_\text{SB}$ be the Borel algebra of this space. Let $\Theta\in \Omega_{\Theta}$. We will prove in Lemma \ref{Followers} that $\mathbb{E}[\square_{\square,k}(\textbf Y^\Theta, \textbf Z^\Theta)]<\infty$. Thus we may define $ \mathbb{P}^{\Theta,k}$ on $(\mathbb{K}_{\text{SB}},\mathcal{B}_\text{SB})$ such that for every Borel set $B\in \mathcal{B}_\text{SB}$, \[ \mathbb{P}^{\Theta,k}(B)=\mathbb{E} \left [\mathbf{1}_{(\mu^\Theta, \textbf Y^\Theta, \textbf Z^\Theta)\in B}\,\square_{\square,k} \left ( \textbf Y^\Theta, \textbf Z^\Theta \right ) \right ]/\mathbb{E}[\square_{\square,k}(\textbf Y^\Theta, \textbf Z^\Theta)]. \] Now let $(\mu^{\Theta,k}, \textbf Y^{\Theta,k} , \textbf Z^{\Theta,k})$ be a random variable with distribution $\mathbb{P}^{\Theta,k}$. Write $\textbf Y^{\Theta,k}=(Y_i^{\Theta,k})_{i\in \mathbb{N}}$. Then let $(T^{\Theta,k},\bar d^{\Theta,k}):=\text{SB}( \textbf Y^{\Theta,k} , \textbf Z^{\Theta,k})$. The $(\Theta,k)$-ICRG is the random pseudo metric space $(G^{\Theta,k},d^{\Theta,k}):=\mathfrak{G}_{(Y_i^{\Theta,k})_{1\leq i \leq 2k}}(T^{\Theta,k},\bar d^{\Theta,k})$. \section{Main results} \label{MAINsection} In this section $({\D_n})_{n\in \mathbb{N}}$, $({\P_n})_{n\in \mathbb{N}}$, $({\Theta_n})_{n\in \mathbb{N}}$ denote fixed sequences in $\Omega_{\D}$, $\Omega_{\P}$, $\Omega_{\Theta}$ respectively. For every $\mathcal{D}=(d_1,\dots, d_{s^\mathcal{D}})\in \Omega_{\D}$, let $(\sigma^\mathcal{D})^2:=\sum_{i=1}^{s^\mathcal{D}} d_i (d_i-1)$ and then let $\lambda^\mathcal{D}:=\sigma^\mathcal{D}/s^\mathcal{D}$. Also, for every $\P=(p_i)_{i\in \mathbb{N}\cup \{\infty\}}$ let $s^\P:=\max\{i\in \mathbb{N}\cup \{\infty\}: p_i>0\}$ and let $(\sigma^\P)^2:=\sum_{i=1}^{\infty} (p_i)^2$.
We always work under one of the following regimes: \begin{Hypo}[${\D_n} \Rightarrow \P$] \label{Hypo1} For all $i\geq 1$, $d_i^{\D_n}/s^{\D_n}\to p^\P_i$ and $s^{\D_n}\to \infty$. \end{Hypo} \begin{Hypo}[${\D_n} \Rightarrow \Theta$] \label{Hypo2} For all $i\geq 1$, $d_i^{\D_n}/\sigma^{\D_n}\to \theta^\Theta_i$ and $d_1^{\D_n}/s^{\D_n}\to 0$. \end{Hypo} \begin{Hypo}[${\P_n} \Rightarrow \Theta$]\label{Hypo2P} For all $i\geq 1$, $p_i^{\P_n}/\sigma^{\P_n}\to \theta^\Theta_i$ and $p_1^{\P_n}\to 0$. \end{Hypo} \begin{Hypo}[${\Theta_n} \Rightarrow \Theta$]\label{Hypo2T} For all $i\geq 1$, $\theta_i^{{\Theta_n}}\to \theta^\Theta_i$. \end{Hypo} \paragraph{A few words on $\Rightarrow$.} One can put a topology on $\Omega:=\Omega_{\D}\cup \Omega_{\P}\cup \Omega_{\Theta}$ such that $\Rightarrow$ corresponds to the notion of convergence on $\Omega$. This has several advantages (see \cite{Uniform} Section 8.1 for details). First, $(\Omega,\Rightarrow)$ is a Polish space. Moreover, our results can be seen as continuity results for the function which associates a metric space to a set of parameters. Hence, our results can be used to study graphs with random degree distributions. Furthermore, $\Omega_\mathcal{D}$ is dense in $\Omega$, so our results on $(\mathcal{D},k)$-graphs imply the others. \subsection{The bias does not diverge} As explained in the introduction, our approach relies entirely on the stick breaking construction of \cite{Uniform} and on the study of the bias corresponding to the cycle-breaking construction. More precisely, given the following result, our main results are applications of \cite{Uniform}. \begin{proposition} \label{MainMainMain} For every $x,m\in \mathbb{R}^+$ let $h_m(x):=x\mathbf{1}_{x\geq m}$. We have, \[ \lim_{m\to \infty} \max_{\mathcal{D}\in \Omega_{\D}:N^\mathcal{D}\geq 2k }\mathbb{E}\left [h_{m}\left (\frac{\square_{\square,k}(T^{\mathcal{D}})}{(\lambda^\mathcal{D})^k}\right )\right ]=0.
\] \end{proposition} \subsection{Gromov--Prokhorov convergence} \label{resultGP} First let us specify the measures that we consider. Let $\Omega_{\M}$ be the set of measures on $\{V_i\}_{i\geq 1}\cup\{\star_i\}_{i\geq 0}$. We say that a sequence $(\mathfrak{p}_n)_{n\in \mathbb{N}}\in \Omega_{\M}^\mathbb{N}$ converges toward $\mathfrak{p}\in \Omega_{\M}$ if $\max_{i\in \mathbb{N}} |\mathfrak{p}_n(V_i)- {\mathfrak p}(V_i) | \to 0$ and $\max_{i\in \mathbb{N}} |\mathfrak{p}_n(\star_i)- {\mathfrak p}(\star_i) | \to 0$. In the whole paper, for every $\mathcal{D}\in \Omega_{\D}$, $\mathfrak{p}^{\mathcal{D},k}$ denotes a probability measure with support on $\mathcal{V}^{\mathcal{D},k}:=\{V_i,i:d_i\geq 1\}\cup \{ \star_i, i\in \{0\}\cup \{2k+1,\dots ,N^\mathcal{D}+1\}\}$. Similarly, for every $\P\in \Omega_{\P}$, $\mathfrak{p}^\P$ denotes a probability measure with support on $\mathcal{V}^\P:=\{V^\P_i\}_{i:p_i>0}$. Also, we sometimes let $0$ denote the null measure. Then we recall the probability measure on the ICRT from \cite{ICRT1}. To simplify our expressions, we write $\mu^\Theta=\infty$ when either $\theta^\Theta_0>0$ or $\sum_{i=1}^\infty \theta^\Theta_i=\infty$ (since this holds iff a.s. $\mu^\Theta[0,\infty)=\infty$). \begin{definition*}[\cite{ICRT1} Proposition 3.2] Let $\Theta\in \Omega_{\Theta}$ be such that $\mu^\Theta=\infty$. Almost surely, as $n\to \infty$, $\frac{1}{n}\sum_{i=1}^n \delta_{Y_i^\Theta}$ converges weakly toward a probability measure ${\mathfrak p}^\Theta$ on $\mathcal{T}^\Theta$. \end{definition*} \begin{remark} When $\mu^\Theta<\infty$, $\frac{1}{n}\sum_{i=1}^n \delta_{Y_i^\Theta}$ does not converge. For this reason, although we prove the convergence of the distance matrices, one cannot define a proper measure for the GP convergence. \end{remark} Then let us define a probability measure on $G^{\Theta,k}$. It directly follows from \cite{ICRT1} Proposition 3.2 that a.s.
$\frac{1}{n}\sum_{i=1}^n \delta_{Y^{\Theta,k}_i}$ converges weakly toward a probability measure ${\mathfrak p}^{\Theta,k}$ on $T^{\Theta,k}$. Since convergence in $T^{\Theta,k}$ implies convergence in $G^{\Theta,k}$, it still makes sense to define ${\mathfrak p}^{\Theta,k}$ on $G^{\Theta,k}$. We now state the main result of this section. In what follows, $d^{\mathcal{D},k}$ is the graph distance on $G^{\mathcal{D},k}$ and similarly $d^{\P,k}$ is the graph distance on $G^{\P,k}$. \begin{theorem} \label{GP} The following convergences hold weakly for the GP topology. \begin{compactitem} \item[(a)] If ${\D_n}\Rightarrow \P$ and $\mathfrak{p}^{{\D_n},k}\to \mathfrak{p}^\P$ then \[ \left (G^{{\D_n},k},d^{{\D_n},k},\mathfrak{p}^{{\D_n},k} \right) \limit^{\text{WGP}} (G^{\P,k}, d^{\P,k},\mathfrak{p}^\P).\] \item[(b)] If ${\D_n} \Rightarrow \Theta$, $\mathfrak{p}^{{\D_n},k} \to 0$, and $\mu^\Theta=\infty$ then \[ \left ( G^{{\D_n},k},\lambda^{\D_n} d^{{\D_n},k},\mathfrak{p}^{{\D_n},k} \right) \limit^{\text{WGP}} (G^{\Theta,k},d^{\Theta,k},{\mathfrak p}^{\Theta,k}). \] \item[(c)] If ${\P_n} \Rightarrow \Theta$, $\mathfrak{p}^{\P_n} \to 0$, and $\mu^\Theta=\infty$ then \[ \left ( G^{{\P_n},k},\sigma^{\P_n} d^{{\P_n},k},\mathfrak{p}^{\P_n} \right) \limit^{\text{WGP}} (G^{\Theta,k},d^{\Theta,k},{\mathfrak p}^{\Theta,k}). \] \item[(d)] If ${\Theta_n} \Rightarrow \Theta$, $\mu^{\Theta_n}=\infty$ for every $n\in \mathbb{N}$, and $\mu^\Theta=\infty$ then \[(G^{{\Theta_n},k},d^{{\Theta_n},k},{\mathfrak p}^{{\Theta_n},k}) \limit^{\text{WGP}} (G^{\Theta,k},d^{\Theta,k},{\mathfrak p}^{\Theta,k}). \] \end{compactitem} \end{theorem} \subsection{Gromov--Hausdorff convergence} \label{4.3} GH convergence requires additional assumptions. In \cite{Uniform} we give quantitative assumptions; here, we simply state rudimentary ones. We proved in Section 7.3 of \cite{Uniform} that the assumptions of \cite{Uniform} imply the following ones.
To simplify the notations, for every tree (and every $\mathbb{R}$-tree) $T$ and $v_1,\dots, v_a\in T$, we write $T(\{v_i\}_{1\leq i\leq a})$ for the subtree spanned by $v_1, \dots, v_a$. \begin{Hypo} \label{Hypo3} For every $\varepsilon>0$, \[ \lim_{a \to \infty} \limsup_{n\to +\infty} \mathbb{P} \left (\lambda^{\D_n} d_H \left (T^{\D_n}(\{\star_i\}_{0\leq i \leq a}),T^{\D_n} \right )>\varepsilon \right )=0. \] \end{Hypo} \begin{Hypo} \label{Hypo3P} For every $\varepsilon>0$, \[ \lim_{a \to \infty} \limsup_{n\to +\infty} \mathbb{P} \left (\sigma^{\P_n} d_H \left (T^{\P_n}(\{\star_i\}_{0\leq i \leq a}),T^{\P_n} \right )>\varepsilon \right )=0. \] \end{Hypo} \begin{Hypo} \label{Hypo3T} For every $\varepsilon>0$, \[ \lim_{a \to \infty} \limsup_{n\to +\infty} \mathbb{P} \left (d_H \left (T^{\Theta_n}(\{Y^{{\Theta_n}}_i\}_{0\leq i \leq a}),T^{\Theta_n} \right )>\varepsilon \right )=0. \] \end{Hypo} \pagebreak[2] \begin{theorem} \label{THM2} The following convergences hold weakly for the GH-topology. \begin{compactitem} \item[(a)] If ${\D_n}\Rightarrow \Theta$, $\mathfrak{p}^{\D_n} \to 0$, and Assumption \ref{Hypo3} is satisfied then \[ \left ( G^{{\D_n},k}, \lambda^{\D_n} d^{{\D_n},k} \right) \limit^{\text{WGH}} (G^{\Theta,k},d^{\Theta,k}). \] \item[(b)] If ${\P_n} \Rightarrow \Theta$, $\mathfrak{p}^{\P_n} \to 0$, and Assumption \ref{Hypo3P} is satisfied then \[ \left ( G^{{\P_n},k},\sigma^{\P_n} d^{{\P_n},k} \right) \limit^{\text{WGH}} (G^{\Theta,k},d^{\Theta,k}). \] \item[(c)] If ${\Theta_n} \Rightarrow \Theta$, and Assumption \ref{Hypo3T} is satisfied then \[(G^{{\Theta_n},k},d^{{\Theta_n},k}) \limit^{\text{WGH}} (G^{\Theta,k},d^{\Theta,k}). \] \end{compactitem} \end{theorem} \begin{remark} $\bullet$ Unlike the assumptions of \cite{Uniform}, Assumptions \ref{Hypo3}, \ref{Hypo3P} and \ref{Hypo3T} are sufficient and necessary.
\\ $\bullet$ By \cite{Uniform} Lemma 4, one can deduce the GHP convergence from the GP and GH convergences and the fact that, since ${\mathfrak p}^\Theta$ a.s. has full support on $\mathcal{T}^\Theta$ (see \cite{ICRT1}), ${\mathfrak p}^{\Theta,k}$ a.s. has full support on $G^{\Theta,k}$. \end{remark} \section{Study of the bias} \label{Bias} \subsection{Proof of Proposition \ref{MainMainMain} in the typical case} \label{Bias1} Recall that for every $x,m\in \mathbb{R}^+$, $h_m(x)=x\mathbf{1}_{x\geq m}$. Recall the definitions of $(\square_i)_{1\leq i \leq k}$ and $\square_{\square,k}$ from Section \ref{DkDef}. For every $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq 2k$ and $m\in \mathbb{R}^+$ let \[f^\mathcal{D}(m):=\mathbb{E}\left [h_{m}\left (\frac{\square_{\square,k}(T^{\mathcal{D}})}{(\lambda^\mathcal{D})^k}\right )\right ].\] In this section we estimate $f^\mathcal{D}$ under the additional assumption $2N^\mathcal{D}\geq s^\mathcal{D}/\sigma^\mathcal{D}$, which is satisfied when there are not too many vertices with degree 2. \begin{proposition} \label{MainMain} There exist $c,C>0$ such that for every $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq \max(2k,s^\mathcal{D}/(2\sigma^\mathcal{D}))$, and $m>0$, we have $f^\mathcal{D}(m)\leq Cm^{-c}$. \end{proposition} Our proof is organized as follows: We first upper bound $\square_{\square,k}$. Then we use H\"older's inequality to upper bound $f^\mathcal{D}$ in terms of the numbers of leaves in some open balls around $\star_0$. Then we use Algorithm \ref{D-tree} to upper bound those numbers with $(Y_i)_{1\leq i \leq k}$. Finally we use the continuum $\mathcal{D}$-tree construction of \cite{Uniform} to study $(Y_i)_{1\leq i \leq k}$ through a Poisson point process. Let $d^\mathcal{D}$ be the graph distance in $T^\mathcal{D}$. Let $d'^\mathcal{D}(\cdot, \cdot):=\lambda^\mathcal{D} d^\mathcal{D}(\cdot,\cdot)$. We have: \begin{lemma} \label{MIAOUH1} Let $C=2^{2k}(k+1)!$.
For every $\varepsilon>0$, for every $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq 2k$, \[ f^\mathcal{D}(C\varepsilon^{-k})/(kC)\leq g^\mathcal{D}(\varepsilon):=\mathbb{E}\left [\frac{\mathbf{1}_{d'(\star_{1},\star_{2})\leq \varepsilon}}{\prod_{i=1}^k d'(\star_{2i-1},\star_{2i})} \right ]. \] \end{lemma} \begin{proof} First, by definition of $\square_{\square,k}$, $\square_{\square,k}\leq (k+1)!2^k/ \prod_{i=1}^k \square_i$. Then note that for every $1\leq i \leq k$, $\square_i(T^\mathcal{D})\geq d(\star_{2i-1},\star_{2i})-1\geq d(\star_{2i-1},\star_{2i})/2$. Indeed, the path between the father of $\star_{2i-1}$ and the father of $\star_{2i}$, together with the edge connecting those two fathers, forms a cycle. Thus, \[ f^\mathcal{D}(C\varepsilon^{-k})/C\leq \mathbb{E}\left [\frac{\mathbf{1}_{\prod_{i=1}^k d'(\star_{2i-1},\star_{2i})\leq \varepsilon^k}}{\prod_{i=1}^k d'(\star_{2i-1},\star_{2i})} \right ] . \] The desired result then follows from the symmetry of the leaves $(\star_i)_{1\leq i \leq 2k}$. (That is, the fact that permuting the labels of the leaves of $T^\mathcal{D}$ independently of $T^\mathcal{D}$ does not change the law of $T^\mathcal{D}$.) \end{proof} For the rest of the section, $\varepsilon>0$ and $\mathcal{D}$ are fixed. We have to estimate $\prod_{i=1}^k d'(\star_{2i-1},\star_{2i})$. However, this quantity is hard to estimate directly since it depends on $k$ separate parts of the tree. For this reason, we instead upper bound $g$ in terms of the numbers of leaves in some open balls around $\star_0$. For every $n\geq 1$, let $M_n$ be the proportion of leaves $L\in T\backslash\{\star_0\}$ such that $2^{-n-1}< d'(\star_0,L)\leq 2^{-n}$ and let $M_0$ be the proportion such that $d'(\star_0,L)> 1/2$. Let $K_\varepsilon:=\inf\{n\in \mathbb{N},2^{-n}\leq \varepsilon\}$.
We have: \begin{lemma} \label{bias1} There exists $C>0$ which depends only on $k$ such that, \[ g(\varepsilon)\leq C \mathbb{E}\left [ \sum_{n=K_\varepsilon}^\infty 2^{nk} n^{3k}M^k_n \right ]^{1/k} \mathbb{E}\left [ \sum_{n=0}^\infty 2^{nk} n^{3k} M_n^k\right ]^{(k-1)/k}. \] \end{lemma} \begin{proof} In this proof $C$ denotes a real depending only on $k$ which may vary from line to line. First, let $(L_i)_{1\leq i\leq 2k}$ be uniform random variables in $\{\star_i\}_{0\leq i \leq N+1}$. Note that by symmetry of the leaves, \[ g(\varepsilon) = \mathbb{E} \left [ \left . \frac{\mathbf{1}_{d'(L_{1},L_{2}) \leq \varepsilon}}{\prod_{i=1}^k d'(L_{2i-1},L_{2i})} \right | \forall i\neq j, L_i\neq L_j\right ]. \] Then, roughly speaking, by slightly modifying $(L_{i})_{1\leq i \leq 2k}$ so as to allow some equalities to hold, \begin{align*} g(\varepsilon) & \leq C\mathbb{E} \left [\left . \frac{\mathbf{1}_{d'(L_{1},L_{2}) \leq \varepsilon}}{\prod_{i=1}^k d'(L_{2i-1},L_{2i})}\right | \forall 1\leq i \leq k, L_{2i-1}\neq L_{2i} \right ] \\ & = C\mathbb{E} \left [ \mathbb{E}\left [ \left . \frac{\mathbf{1}_{d'(L_{1},L_{2}) \leq \varepsilon}}{d'(L_1,L_2)} \right | L_1,L_1\neq L_2, T \right ] \prod_{i=2}^{k} \mathbb{E}\left [ \left . \frac{1}{d'(L_{2i-1},L_{2i})} \right | L_{2i-1},L_{2i-1}\neq L_{2i},T \right ] \right ]. \end{align*} Furthermore, by H\"older's inequality, and by symmetry of the leaves, \begin{align} g(\varepsilon)^k& \leq C\mathbb{E} \left [ \mathbb{E}\left [ \left . \frac{\mathbf{1}_{d'(L_{1},L_{2}) \leq \varepsilon}}{d'(L_1,L_2)} \right | L_1,L_1\neq L_2,T \right ]^k \right ] \prod_{i=2}^{k} \mathbb{E}\left [\mathbb{E}\left [ \left . \frac{1}{d'(L_{2i-1},L_{2i})} \right | L_{2i-1},L_{2i-1}\neq L_{2i},T \right ]^k \right ] \notag \\ & = C\mathbb{E} \left [ \mathbb{E}\left [ \left . \frac{\mathbf{1}_{d'(\star_0,L_{2}) \leq \varepsilon}}{d'(\star_0,L_2)} \right |\star_0\neq L_2, T \right ]^k \right ] \mathbb{E}\left [\mathbb{E}\left [ \left .
\frac{1}{d'(\star_0,L_{2})} \right |\star_0\neq L_{2}, T \right ]^k \right ]^{k-1}. \notag \end{align} Therefore, we have by definition of $(M_n)_{n\in \{0\}\cup \mathbb{N}}$, \begin{equation} g^\mathcal{D}(\varepsilon)^k\leq C \mathbb{E}\left [ \left ( \sum_{n=K_\varepsilon}^\infty 2^n M_n \right )^k \right ] \mathbb{E}\left [ \left (\sum_{n=0}^\infty 2^n M_n\right )^k \right ]^{k-1}. \label{OUAFOUAF} \end{equation} If $k=1$ the desired result follows from \eqref{OUAFOUAF}. If $k\geq 2$ then we have a.s., by H\"older's inequality, \[ \sum_{n=K_\varepsilon}^\infty 2^n M_n \leq \left(\sum_{n=K_\varepsilon}^\infty \left ( 2^n n^3 M_n \right )^k \right )^{1/k} \left(\sum_{n=K_\varepsilon}^\infty \left ( \frac{1}{n^3} \right )^{k/(k-1)} \right )^{(k-1)/k} , \] and similarly for $\sum_{n=0}^\infty 2^n M_n$. And the desired result follows from \eqref{OUAFOUAF}. \end{proof} Recall Section \ref{2.2}. We now upper bound $\mathbb{E}[M_n^k]$ for $n\in \mathbb{N}$ using Algorithm \ref{D-tree}. Recall the definition of $A^\mathcal{D}$. Let $Y_1<Y_2<\dots$ be the indices $i$ such that $A^\mathcal{D}_i\in \{A^\mathcal{D}_1,\dots, A^\mathcal{D}_{i-1}\}$. \begin{lemma} \label{masse d'une truite} For every $n\in \mathbb{N}$, \[ \mathbb{E}[M_n^k]\leq k^k \sum_{a=1}^k \frac{1}{N^{k-a}} \mathbb{P} \left (Y_a\leq \frac{a}{2^n} \frac{s}{ \sigma} \right ).\] \end{lemma} \begin{proof} First, let $(L_i)_{1\leq i\leq 2k}$ be uniform random variables in $\{\star_i\}_{1\leq i \leq N+1}$. By definition of $M_n$, \begin{equation*} \mathbb{E}[M_n^k] =\mathbb{P}\left [\frac{1}{2^{n+1}} < d'(\star_0,L_1),\dots, d'(\star_0,L_k)\leq \frac{1}{2^n}\right ]. \end{equation*} We want the leaves to be distinct in order to use Algorithm \ref{D-tree}. To this end, we expand the right hand side above by distinguishing the cases of equality. Let $\mathfrak{P}(k)$ be the set of partitions of $\{1,\dots, k\}$.
For every $I=\{I_1,\dots, I_a\}\in \mathfrak{P}(k)$, let $\mathfrak{E}_I$ be the event that for every $x,y \in \{1,\dots, k\}$, $L_x=L_y$ iff they are in the same $I_i$. For every $I\subset \{1,\dots, k\}$ let $m_{I}:=\min(I)$. We have, \begin{align*}& \mathbb{E}\left [M_n^k \right ] = \sum_{I=\{I_1,\dots, I_a\}\in \mathfrak{P}(k)} \mathbb{P} \left [ \mathfrak{E}_I \,, \, \frac{1}{2^{n+1}} < d'(\star_0,L_1),\dots, d'(\star_0,L_k)\leq \frac{1}{2^n}\right ] \\ & =\sum_{\{I_1,\dots, I_a\}\in \mathfrak{P}(k)} \frac{1}{(N+1)^{k-a} } \mathbb{P} \left [L_{m_{I_1}}\neq \dots \neq L_{m_{I_a}} \, , \, \frac{1}{2^{n+1}} < d'(\star_0,L_{m_{I_1}}),\dots, d'(\star_0,L_{m_{I_a}})\leq \frac{1}{2^n} \right ]. \end{align*} Then by symmetry of the leaves, \begin{equation*} \mathbb{E}\left [M_n^k \right ] \leq \sum_{\{I_1,\dots, I_a\}\in \mathfrak{P}(k)} \frac{1}{(N+1)^{k-a}} \mathbb{P} \left [\frac{1}{2^{n+1}} <d'(\star_0,\star_1),\dots, d'(\star_0,\star_a)\leq \frac{1}{2^n}\right ]. \end{equation*} So since there are at most $k^k$ partitions of $\{1,\dots, k\}$, \begin{equation} \mathbb{E}\left [M_n^k \right ] \leq k^k\sum_{a=1}^k \frac{1}{N^{k-a}} \mathbb{P} \left [\frac{1}{2^{n+1}} < d'(\star_0,\star_1),\dots, d'(\star_0,\star_a)\leq \frac{1}{2^n}\right ].\label{KProject} \end{equation} Finally we use Algorithm \ref{D-tree}. It is direct from the construction that, writing $Y_0=0$, \[ Y_a= \sum_{i=1}^a (Y_i-Y_{i-1})\leq \sum_{i=1}^a (d(\star_0,\star_i)-1)\leq (s/\sigma)\sum_{i=1}^a d'(\star_0,\star_i). \] So the desired result follows from \eqref{KProject}. \end{proof} We now upper bound $Y_a$ using a part of the continuum $\mathcal{D}$-tree construction of \cite{Uniform}: \begin{compactitem} \item[-] Let $(X_i)_{1\leq i \leq s}$ be a family of independent exponential random variables of parameters $(d_i/\sigma)_{1\leq i \leq s}$. \item[-] Let $\mu$ be the measure on $\mathbb{R}^+$ defined by $\mu=\sum_{i=1}^{s} \frac{d_i-1}{\sigma}\,\delta_{X_i}$.
\item[-] Let $(\hat Y_i)_{i\in \mathbb{N}}$ be a Poisson point process on $\mathbb{R}^+$ of rate $\mu[0,y]dy$. \item[-] Let $(E_i)_{1\leq i \leq s-1}$ be a family of exponential random variables of means $(\sigma/(s-i))_{1\leq i \leq s-1}$. \end{compactitem} By \cite{Uniform} Lemma 10 there exists a coupling such that $Y_a$ is independent of $(E_i)_{1\leq i \leq s-1}$ and such that a.s. $\hat Y_a\leq \sum_{i=1}^{Y_a} E_i$. Moreover, we have: \begin{lemma} \label{truecutbound} For every $a,n\in \mathbb{N}$ with $n\leq s/2$, \[ \mathbb{P} \left (Y_a\leq n\right ) \leq 2\,\mathbb{P} (\hat Y_a\leq 4n\sigma/s ) .\] \end{lemma} \begin{proof} Fix $n\leq s/2$. Since for every $1\leq i \leq n$ we have $\mathbb{E}[E_i]=\sigma/(s-i)\leq 2\sigma/s$, Markov's inequality yields, \begin{align*} \mathbb{P}\left ( \sum_{i=1}^{n} E_i \leq 4 n(\sigma/s) \right ) \geq 1/2. \end{align*} So since $Y_a$ and $(E_i)_{1\leq i \leq s-1}$ are independent, \begin{equation*} \mathbb{P} \left (\hat Y_a\leq 4n\frac{\sigma}{s} \right ) \geq \mathbb{P} \left (\sum_{i=1}^{Y_a} E_i \leq 4n\frac{\sigma}{s} \right ) \geq \mathbb{P} \left (Y_a \leq n, \sum_{i=1}^{n} E_i \leq 4n\frac{\sigma}{s} \right )\geq \frac{1}{2} \mathbb{P}(Y_a \leq n). \qedhere \end{equation*} \end{proof} Hence, to upper bound $Y_a$ it is enough to upper bound $\hat Y_a$. To this end, we first upper bound $\mu$. \begin{lemma} \label{dumb} \begin{compactitem} \item[(a)] For every $x,t>0$, $\mathbb{P}( \mu[0,x]> t) \leq x/t$. \item[(b)] For every $0\leq x\leq 1\leq t$, $\mathbb{P}( \mu[0,x]> t) \leq e^{-t/4}$. \end{compactitem} \end{lemma} \begin{proof} Note that by definition of $\mu$, $(X_i)_{1\leq i \leq s}$ and $\sigma$, \begin{equation*} \mathbb{E}[\mu[0,x]]=\sum_{i=1}^{s} \frac{d_i-1}{\sigma} \mathbb{P}(X_i\leq x)\leq \sum_{i=1}^{s} \frac{d_i-1}{\sigma} \frac{xd_i}{\sigma} \leq x. \end{equation*} So (a) follows from Markov's inequality.
Also $\mu[0,x]$ is a sum of independent random variables bounded by 1 so (b) follows from Bernstein's inequality (see \cite{Massart} Section 2.8). \end{proof} \begin{lemma} \label{falsecutbound} For every $a\in \mathbb{N}$ and $0\leq x\leq e^{-9}$, $\mathbb{P} (\hat Y_a\leq x ) \leq 3x^{a+1}(-4a \log x)^a$. \end{lemma} \begin{proof} By definition of $( \hat Y_i)_{i\in \mathbb{N}}$, conditionally on $\mu$, $\max \{i\in \mathbb{N}, \hat Y_i\leq x\}$ is a Poisson random variable of mean $\int_0^x \mu[0,t]dt \leq x\mu[0,x]$. So, by basic inequalities on the Poisson distribution, \begin{equation} \mathbb{P}(\hat Y_a\leq x)= \mathbb{E}[ \mathbb{P}(\hat Y_a\leq x|\mu) ] \leq \mathbb{E}[(x\mu[0,x])^a ]. \label{2310} \end{equation} Then we have by the layer-cake formula and Lemma \ref{dumb}, \begin{align*} \mathbb{E}[\mu[0,x]^a] & = \int_0^\infty \mathbb{P}(\mu[0,x] \geq t) (a t^{a-1} dt) \\ & \leq \int_0^x a t^{a-1} dt+ \int_x^{-4\log x} (x/t) (a t^{a-1} dt) + \int_{-4 \log x}^\infty e^{-t/4} (a t^{a-1} dt) \\ & \leq 3x(-4a \log x)^a, \end{align*} using basic calculus for the last inequality. This concludes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{MainMain}] We now complete our upper bound for $f^\mathcal{D}$. In this proof, $c, C$ denote reals which depend only on $k$ and which may vary from line to line. First by Lemmas \ref{falsecutbound} and \ref{truecutbound} we have for every $1\leq a \leq k$ and $0\leq x <1/16$, \[ \mathbb{P} \left (Y_a\leq xs/\sigma \right )\leq Cx^{a+1}(-\log(x))^c. \] Then by Lemma \ref{masse d'une truite}, and $2N\geq s/\sigma$, for every $n\in \mathbb{N}$ with $(s/\sigma)/2^n \geq 1$, \begin{align} \mathbb{E}[M_n^k] & \leq k^k \sum_{a=1}^k \frac{1}{N^{k-a}} \mathbb{P} \left (Y_a\leq \frac{a}{2^n} \frac{s}{ \sigma} \right )\notag \\ & \leq k^k \sum_{a=1}^k \left (\frac{2\sigma}{s } \right )^{k-a} C \left ( \frac{a}{2^n} \right)^{a+1} n^c \notag \\ & \leq \frac{Cn^c}{2^{(k+1)n}}.
\label{rhume d'enfant} \end{align} Note that \eqref{rhume d'enfant} naturally extends to the $n\in \mathbb{N}$ with $(s/\sigma)/2^n < 1$ since for those $n$ almost surely for $1\leq a \leq k$, we have $Y_a\geq a >\frac{a}{2^n} \frac{s}{ \sigma}$. Next, since $K_\varepsilon=\inf\{n\in \mathbb{N},2^{-n}\leq \varepsilon\}$, \[ \mathbb{E}\left [ \sum_{n=K_\varepsilon}^\infty 2^{nk} n^{3k}M^k_n \right ] \leq C2^{-K_\varepsilon c} \leq C(2\varepsilon)^c, \] and \[ \mathbb{E}\left [ \sum_{n=0}^\infty 2^{nk} n^{3k}M^k_n \right ] \leq C.\] Thus by Lemma \ref{bias1}, \begin{equation} g^\mathcal{D}(\varepsilon)\leq C\mathbb{E}\left [ \sum_{n=K_\varepsilon}^\infty 2^{nk} n^{3k}M^k_n \right ]^{1/k} \mathbb{E}\left [ \sum_{n=0}^\infty 2^{nk} n^{3k}M^k_n \right ]^{(k-1)/k} \leq C (2\varepsilon)^c. \label{BlueArchive} \end{equation} Finally, Proposition \ref{MainMain} follows from Lemma \ref{MIAOUH1}. \end{proof} Along the way, by \eqref{BlueArchive}, we have the following result, which we extend in the next section. \begin{lemma} \label{BlueArchive2} There exist $c,C>0$ which depend only on $k$ such that for every $\varepsilon>0$, $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq \max(2k,s^\mathcal{D}/(2\sigma^\mathcal{D}))$, $g^\mathcal{D}(\varepsilon)\leq C \varepsilon^c$. \end{lemma} \subsection{Proof of Proposition \ref{MainMainMain} when there are many vertices of degree 2} \label{Bias2} This section is organized as follows. We first detail how to remove or add vertices of degree 2. We then prove from those constructions a connection between the $\mathcal{D}$-trees that do not have any vertex of degree 2 and the others. Finally we use this connection to prove Proposition \ref{MainMainMain}. First, for every graph $G$ and $x\in G$, we call $x$ an edgepoint if $x$ has degree 2.
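To fix ideas, let us record an elementary illustration (not used in the proofs). \begin{remark} Every internal vertex of a path is an edgepoint. By contrast, a tree without edgepoints that has $N$ leaves has at most $2N$ vertices, since each of its internal vertices then has at least two children. \end{remark}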
A simple way to remove the edgepoints is to shortcut them: Formally if $T=(V,E)$ is a tree, then let $\nabla T$ be the tree $(V',E')$ such that $V'=\{v\in V,\deg_T(v)\neq 2\}$ and for every $v,w\in V'$, $\{v,w\}\in E'$ iff there exists a path between $v$ and $w$ that only passes through $v$, $w$ and vertices of degree $2$. Note that $\nabla$ preserves the degrees: for every $v\in T$ with $\deg_T(v)\neq 2$, we have $v\in \nabla T$ and $\deg_T(v)=\deg_{\nabla T}(v)$. \begin{remark} One may extend $\nabla$ to general graphs. However, the natural way to preserve the degrees is then to work with multigraphs. We avoid this issue by working with trees. \end{remark} Reciprocally, one may construct any tree by adding some edgepoints along the oriented edges of a tree without edgepoints: For every $T=(V,E)$ let $( \vec e_i(T))_{1\leq i \leq \# E}$ be some fixed oriented edges of $T$ such that each edge of $E$ appears in one and only one direction. Let $((W_{i,j})_{1\leq j \leq r_i})_{1\leq i \leq \# E}$ be some vertices that are not in $V$. For every $1\leq i \leq \#E$ let $(W_{i,0},W_{i,r_i+1}):= \vec e_i(T)$. Let \[ \Delta(T,((W_{i,j})_{1\leq j \leq r_i})_{1\leq i \leq \# E}):=\left (V\cup \{W_{i,j}\}_{1\leq i \leq \#E, 1\leq j \leq r_i}, \{ \{W_{i,j},W_{i,j+1}\} \}_{1\leq i \leq \#E, 0\leq j \leq r_i} \right ). \] We now use $\Delta, \nabla$ to study $\mathcal{D}$-trees. Beforehand, let us introduce some notations. For every $\mathcal{D}=(d_1,\dots, d_s)\in \Omega_\mathcal{D}$, let $s_{\geq 2}^\mathcal{D}:=\#\{a\in \mathbb{N}, d_a\geq 2\}$, let $s_{\geq 1}^\mathcal{D}:=\#\{a\in \mathbb{N}, d_a\geq 1\}$, and let $s^\mathcal{D}_1:=\#\{a\in \mathbb{N}, d_a=1\}$. Also let $\nabla \mathcal{D}$ be the sequence $(d_1,d_2,\dots, d_{s_{\geq 2}}, d_{s_{\geq 1}+1},\dots, d_s)$.
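For instance (this example is only illustrative), if $\mathcal{D}=(3,1,1,0,0,0)$, then $s_{\geq 2}^\mathcal{D}=1$, $s_{\geq 1}^\mathcal{D}=3$, $s_1^\mathcal{D}=2$, and $\nabla \mathcal{D}=(3,0,0,0)$: the entries equal to $1$, which correspond to the edgepoints, are removed. A $\nabla \mathcal{D}$-tree is then a star with three leaves, and any $\mathcal{D}$-tree may be recovered from it by inserting the two edgepoints along its oriented edges (see Lemma \ref{Nabla1} below).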
Also we say that $((W_{i,j})_{1\leq j \leq r_i})_{1\leq i \leq n}$ is an ordered partition of size $n\in \mathbb{N}$ of a finite set $E$ iff for $1\leq i \leq n$, $r_i\in \{0\} \cup \mathbb{N}$, and $(i,j)\mapsto W_{i,j}$ is a bijection from $\{(i,j):1\leq i \leq n, 1\leq j \leq r_i\}$ to $E$. We have the following connections between $\mathcal{D}$-trees and $\nabla \mathcal{D}$-trees: \begin{lemma} \label{Nabla1} Let $\mathcal{D}\in \Omega_{\D}$. Let $W$ be a uniform ordered partition of size $s^{\nabla \mathcal{D}}-1$ of $\{V_i\}_{i:d^\mathcal{D}_i=1}$. Then, a) $\nabla (T^\mathcal{D})$ is a $\nabla \mathcal{D}$-tree, and b) $\Delta(T^{\nabla \mathcal{D}},W)$ is a $\mathcal{D}$-tree. \end{lemma} \begin{proof} First note that $\nabla ( \Delta(T^{\nabla \mathcal{D}},W))=T^{\nabla \mathcal{D}}$, since this tree is obtained by adding some edgepoints on $T^{\nabla \mathcal{D}}$, which has no edgepoint, and then by removing all edgepoints. So b) implies a). Toward b), simply note that $\Delta$ may be seen as a bijection from pairs formed by a tree with degree sequence $\nabla \mathcal{D}$ and an ordered partition of size $s^{\nabla \mathcal{D}}-1$ of $\{V_i^\mathcal{D}\}_{d_i=1}$ toward trees with degree sequence $\mathcal{D}$. (Indeed, one may recover the initial tree by applying $\nabla$, and then read the ordered partition by, roughly speaking, following each oriented edge of the initial tree in the image tree.) \end{proof} We now prove Proposition \ref{MainMainMain}. To this end, it is enough to remove the assumption $2N^\mathcal{D}\geq s^\mathcal{D}/\sigma^\mathcal{D}$ of Proposition \ref{MainMain}. Note that it is satisfied when $s_{1}^\mathcal{D}=0$ since in this case, $\sigma^\mathcal{D}\geq 1$ and $s^\mathcal{D}=N^\mathcal{D}+s_{\geq 2}^\mathcal{D}\leq 2N^\mathcal{D}$. For this reason, our goal for the rest of the section will be to prove the following result, which together with Lemmas \ref{BlueArchive2} and \ref{MIAOUH1} yields Proposition \ref{MainMainMain}.
\begin{proposition} \label{MainMain'} Recall the definition of $g$ from Lemma \ref{MIAOUH1}. There exists $C>0$, which depends only on $k$, such that for every $\mathcal{D}\in \Omega_{\D}$ with $N^\mathcal{D}\geq 2k$ and $\varepsilon>0$, \[ g^\mathcal{D}(\varepsilon)\leq C\varepsilon \left (\int_{\varepsilon}^1 g^{\nabla \mathcal{D}}(\delta)/\delta^2 d\delta+kg^{\nabla \mathcal{D}}(1)+1 \right ). \] \end{proposition} To this end, it is enough to lower bound $(d^\mathcal{D}(\star_{2i-1},\star_{2i}))_{1\leq i \leq k}$ using $(d^{\nabla\mathcal{D}}(\star_{2i-1},\star_{2i}))_{1\leq i \leq k}$. To do so, by Lemma \ref{Nabla1} (b), it suffices to study uniform ordered partitions. More precisely, we have to lower bound the sizes of the blocks of those partitions, which correspond to the numbers of edgepoints added on each edge. This is done in the following lemma. \begin{lemma} \label{Nabla2} Let $((W_{i,j})_{1\leq j \leq R_i})_{1\leq i \leq n}$ be a uniform ordered partition of size $n$ of a finite set $E$. \begin{compactitem} \item[(a)] $(R_i)_{1\leq i\leq n}$ is uniform among all sequences of nonnegative integers such that $\sum_{i=1}^n R_i=\# E$. \item[(b)] Let $(S_i)_{1\leq i\leq n}$ be independent geometric random variables of mean $\#E/n$ conditioned on $\sum_{i=1}^n S_i\leq \# E$. Then there exists a coupling between $(R_i)_{1\leq i \leq n}$ and $(S_i)_{1\leq i\leq n}$ such that almost surely for every $1\leq i \leq n$, $R_i\geq S_i$. \end{compactitem} \end{lemma} \begin{proof} Toward (a), simply note that given $(R_i)_{1\leq i\leq n}$, there are exactly $\# E!$ possible ways to label $((W_{i,j})_{1\leq j \leq R_i})_{1\leq i \leq n}$ to form an ordered partition of size $n$ of $E$. Then (b) is an easy exercise. \end{proof} Next, in order to use the independence of Lemma \ref{Nabla2} (b), we will use the following lemma: \begin{lemma} \label{Nabla3} Let $T$ be a tree. Assume that $(\star_i)_{1\leq i \leq 2k}$ are leaves of $T$.
For every $1\leq i \leq k$ let $\mathcal{E}_i$ be the set of edges that are on the minimal path between $\star_{2i-1}$ and $\star_{2i}$. Then there exists $(\mathcal E'_i)_{1\leq i \leq k}$ disjoint subsets of $(\mathcal{E}_i)_{1\leq i \leq k}$ such that for every $1\leq i \leq k$, $\# \mathcal E'_i\geq \max(\#\mathcal E_i/k,2)$. \end{lemma} \begin{proof} Consider the following informal construction of $(\mathcal E'_i)_{1\leq i \leq k}$: \begin{compactitem} \item[-] First let for $1\leq i \leq k$, $\mathcal E'_i:= \{ \{\star_{2i-1},F_{2i-1}\}, \{\star_{2i},F_{2i}\}\}$, where for $1\leq i \leq 2k$, $F_{i}$ is the father of $\star_i$ in $T$. \item[-] Then while $\bigcup_{i=1}^k \mathcal E'_i \neq \bigcup_{i=1}^k \mathcal E_i$: \begin{compactitem} \item[-] For $1\leq i \leq k$: If possible add to $\mathcal E'_i$ an arbitrary edge in $\mathcal{E}_i$ that is not yet in $\bigcup_{j=1}^k \mathcal E'_j$. \end{compactitem} \end{compactitem} It is easy to check that $(\mathcal E'_i)_{1\leq i \leq k}$ are disjoint subsets of $(\mathcal{E}_i)_{1\leq i \leq k}$. Also for $1\leq i \leq k$, $\# \mathcal E'_i \geq 2$. Finally a quick enumeration gives that at the end of the algorithm $\#\mathcal E'_i \geq \#\mathcal{E}_i/k$. \end{proof} \begin{proof}[Proof of Proposition \ref{MainMain'}.] Let $\varepsilon>0$. Let $\mathcal{D}\in \Omega_{\D}$. Let $W$ be a uniform ordered partition of size $s^{\nabla \mathcal{D}}-1$ of $\{V_i^\mathcal{D}\}_{i:d_i=1}$ and independent of $T^{\nabla \mathcal{D}}$. Let $d^{\nabla \mathcal{D}, W}$ be the graph distance on $\Delta(T^{\nabla \mathcal{D}},W)$. Then by Lemma \ref{Nabla1} (b), $\Delta(T^{\nabla \mathcal{D}},W)$ is a $\mathcal{D}$-tree. So, by definition of $g$, it is enough to upper bound \begin{equation} G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}}):=\mathbb{E}\left [\left .
\frac{\mathbf{1}_{\lambda^\mathcal{D} d^{\nabla \mathcal{D}, W}(\star_{1},\star_{2})\leq \varepsilon}}{\prod_{i=1}^k\left (\lambda^\mathcal{D} d^{\nabla \mathcal{D}, W}(\star_{2i-1},\star_{2i})\right )} \right | T^{\nabla \mathcal{D}}\right ] . \label{3011.17h} \end{equation} To this end, let us use Lemmas \ref{Nabla2} and \ref{Nabla3}. Let $\mathcal{E}$ be the set of edges of $T^{\nabla \mathcal{D}}$. Let $(S_e)_{e\in \mathcal{E}}$ be independent geometric random variables of mean $s_{1}^\mathcal{D}/\# \mathcal{E}$ conditioned on $\sum_{e\in \mathcal{E}} S_e\leq s^\mathcal{D}_1$. For $1\leq i \leq k$ let $\mathcal{E}_i$ be the set of edges that are on the minimal path between $\star_{2i-1}$ and $\star_{2i}$ in $T^{\nabla \mathcal{D}}$. By definition of $\Delta$, and by Lemma \ref{Nabla2}, note that there exists a coupling between $W$ and $(S_e)_{e\in \mathcal{E}}$ such that a.s. for $1\leq i \leq k$, \begin{equation} d^{\nabla \mathcal{D}, W}(\star_{2i-1},\star_{2i}) \geq \sum_{e\in \mathcal{E}_i}(1+S_e). \label{3011.18h}\end{equation} Then, by Lemma \ref{Nabla3}, let $(\mathcal E'_i)_{1\leq i \leq k}$ be disjoint subsets of $(\mathcal{E}_i)_{1\leq i \leq k}$ such that for every $1\leq i \leq k$, $\# \mathcal E'_i\geq \max(\#\mathcal E_i/k,2)$. It directly follows from \eqref{3011.18h} that a.s. for $1\leq i \leq k$, \begin{equation*} d^{\nabla \mathcal{D}, W}(\star_{2i-1},\star_{2i}) \geq \sum_{e\in \mathcal{E}'_i}(1+S_e) . \label{3011.18hb}\end{equation*} Therefore, \[G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})\leq \mathbb{E}\left [\left . \frac{\mathbf{1}_{\lambda^\mathcal{D}\sum_{e\in \mathcal{E}'_1}(1+S_e)\leq \varepsilon}}{\prod_{i=1}^k \left (\lambda^\mathcal{D}\sum_{e\in \mathcal{E}'_i}(1+S_e)\right )} \right | T^{\nabla \mathcal{D}}\right ] . \] Hence, if $(S'_e)_{e\in \mathcal{E}}$ are independent geometric random variables of mean $s_{1}^\mathcal{D}/\# \mathcal{E}$, \[G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})\leq \frac{1}{\mathbb{P}\left (\left .
\sum_{e\in \mathcal{E}} S'_e\leq s_1^\mathcal{D}\right | T^{\nabla \mathcal{D}} \right )}\mathbb{E}\left [\left . \frac{\mathbf{1}_{\lambda^\mathcal{D}\sum_{e\in \mathcal{E}'_1}(1+S'_e)\leq \varepsilon}}{\prod_{i=1}^k \left (\lambda^\mathcal{D}\sum_{e\in \mathcal{E}'_i}(1+S'_e)\right )} \right | T^{\nabla \mathcal{D}}\right ]. \] Then note that there exists a constant $C<\infty$ that does not depend on $k,\mathcal{D}$ such that a.s. $\mathbb{P}\left (\left . \sum_{e\in \mathcal{E}} S'_e\leq s_1^\mathcal{D}\right | T^{\nabla \mathcal{D}} \right ) \geq 1/C$. So, since $(\mathcal E'_i)_{1\leq i \leq k}$ are disjoint and $(S'_e)_{e\in \mathcal{E}}$ are independent, \[G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})\leq C\left (\lambda^\mathcal{D}\right )^{-k}\mathbb{E}\left [\left . \frac{\mathbf{1}_{\sum_{e\in \mathcal{E}'_1}(1+S'_e)\leq \varepsilon/\lambda^\mathcal{D}}}{\sum_{e\in \mathcal{E}'_1}(1+S'_e)} \right | T^{\nabla \mathcal{D}}\right ] \prod_{i=2}^k \mathbb{E}\left [\left . \frac{1}{\sum_{e\in \mathcal{E}'_i}(1+S'_e)} \right | T^{\nabla \mathcal{D}}\right ] . \] Therefore we have using Lemma \ref{Concentretoi} below, and the fact that for every $1\leq i \leq k$, $\# \mathcal E'_i\geq 2$, \begin{equation}G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})\leq C(2e)^{k}\left (\lambda^\mathcal{D}\right )^{-k} \min\left (1,\frac{e\varepsilon/\lambda^\mathcal{D}}{\#\mathcal{E}'_1(1+s_{1}^\mathcal{D}/\# \mathcal{E})}\right ) \prod_{i=1}^k \frac{1}{\#\mathcal{E}'_i (1+s_{1}^\mathcal{D}/\# \mathcal{E})}. \label{0112.11h} \end{equation} Next, let us rewrite \eqref{0112.11h}. First, note that for every $1\leq i \leq k$, \[ \#\mathcal{E}'_i \geq \#\mathcal{E}_i/k=d^{\nabla \mathcal{D}}(\star_{2i-1},\star_{2i})/k.
\] Also, \[ 1+\frac{s_{1}^\mathcal{D}}{\# \mathcal{E}}= 1+\frac{s_{1}^\mathcal{D}}{s^{\nabla \mathcal{D}}-1}=\frac{s^{\nabla \mathcal{D}}+s_{1}^\mathcal{D}-1}{s^{\nabla \mathcal{D}}-1}=\frac{s^{\mathcal{D}}-1}{s^{\nabla \mathcal{D}}-1}\geq \frac{s^{\mathcal{D}}}{s^{\nabla \mathcal{D}}}=\frac{\lambda^{\nabla \mathcal{D}}}{\lambda^{\mathcal{D}}},\] noting for the last equality that $\sigma^\mathcal{D}=\sigma^{\nabla\mathcal{D}}$. Then by elementary calculus it is easy to prove that, \begin{align*} \min\left (1,\frac{e\varepsilon/\lambda^\mathcal{D}}{\#\mathcal{E}'_1(1+s_{1}^\mathcal{D}/\# \mathcal{E})}\right )& \leq ke \min\left (1,\frac{\varepsilon}{\lambda^{\nabla \mathcal{D}} d^{\nabla \mathcal{D}}(\star_1,\star_2)}\right ) \\ &= ke \varepsilon \int_{\varepsilon}^\infty \mathbf{1}_{\lambda^{\nabla \mathcal{D}}d^{\nabla \mathcal{D}}(\star_1,\star_2) \leq \delta} \frac{d\delta}{\delta^2}. \end{align*} Therefore by \eqref{0112.11h}, \begin{equation*}G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})\leq C(2ek)^{k+1}\left (\lambda^{\nabla \mathcal{D}}\right )^{-k} \varepsilon \int_{\varepsilon}^\infty \mathbf{1}_{\lambda^{\nabla \mathcal{D}}d^{\nabla \mathcal{D}}(\star_1,\star_2) \leq \delta} \frac{d\delta}{\delta^2} \prod_{i=1}^k \frac{1}{d^{\nabla \mathcal{D}}(\star_{2i-1},\star_{2i})}. \end{equation*} Finally by taking the expectation and by Fubini's theorem, we have, \[ \mathbb{E}[G^{\mathcal{D}}(\varepsilon,T^{\nabla \mathcal{D}})] \leq C(2ek)^{k+1}\left (\lambda^{\nabla\mathcal{D}}\right )^{-k} \varepsilon \int_{\varepsilon}^\infty \mathbb{E}\left [ \mathbf{1}_{\lambda^{\nabla \mathcal{D}}d^{\nabla \mathcal{D}}(\star_1,\star_2)\leq \delta} \prod_{i=1}^k \frac{1}{d^{\nabla \mathcal{D}}(\star_{2i-1},\star_{2i})} \right ] \frac{d\delta}{\delta^2}, \] which yields by definition of $G$ and $g$, \begin{equation} g^\mathcal{D}(\varepsilon) \leq C(2ek)^{k+1} \varepsilon \int_{\varepsilon}^\infty g^{\nabla \mathcal{D}}(\delta) \frac{d\delta}{\delta^2}.
\label{WOUAFWOUAF} \end{equation} To conclude the proof, note that for $\delta\geq 1$, \begin{align*} g^{\nabla \mathcal{D}}(\delta) & \leq \mathbb{E}\left [ \prod_{i=1}^k \frac{1}{\lambda^{\nabla \mathcal{D}}d^{\nabla \mathcal{D}}(\star_{2i-1},\star_{2i})} \right ] \\ & \leq \mathbb{E}\left [1+ \sum_{j=1}^k \mathbf{1}_{\lambda^{\nabla\mathcal{D}}d^{\nabla \mathcal{D}}(\star_{2j-1},\star_{2j}) \leq 1} \prod_{i=1}^k \frac{1}{\lambda^{\nabla \mathcal{D}} d^{\nabla \mathcal{D}}(\star_{2i-1},\star_{2i})} \right ]=1+kg^{\nabla \mathcal{D}}(1). \end{align*} So, since $\int_1^\infty \delta^{-2}d\delta=1$, the desired result follows from \eqref{WOUAFWOUAF}. \end{proof} \begin{lemma} \label{Concentretoi} Let $n\geq 2$, $m \geq 0$. Let $(S_i)_{1\leq i \leq n}$ be independent geometric random variables of mean $m$. Then, \[ \mathbb{E}\left [\frac{1}{\sum_{i=1}^n (1+S_i)} \right ]\leq \frac{2e}{n(1+m)}. \] Also, for every $\varepsilon>0$, \[ \mathbb{E} \left [\frac{\mathbf{1}_{\sum_{i=1}^n (1+S_i)\leq \varepsilon}}{\sum_{i=1}^n (1+S_i)} \right ]\leq \frac{2e}{n(1+m)} \min\left (1,\frac{e\varepsilon}{(1+m)n}\right ). \] \end{lemma} \begin{proof} Note that $\sum_{i=1}^n (1+S_i)$ is the number of Bernoulli trials with success probability $1/(1+m)$ needed to obtain $n$ successes. Thus for every $x>0$, \[ \mathbb{P} \left (\sum_{i=1}^n (1+S_i)\leq x \right )\leq \binom{\lfloor x\rfloor }{n}\frac{1}{(1+m)^n} \leq \left (\frac{x}{1+m}\right )^n/n! \leq \left (\frac{ex}{(1+m)n}\right )^n. \] It directly follows by integration by parts that, \begin{align*} \mathbb{E}\left [\frac{1}{\sum_{i=1}^n (1+S_i)} \right ] & = \int_0^\infty \mathbb{P} \left (\sum_{i=1}^n (1+S_i) \leq x \right ) x^{-2} dx\notag \\ & \leq \int_0^{(1+m)n/e} \left (\frac{ex}{(1+m)n}\right )^n x^{-2} dx +\int_{(1+m)n/e}^{\infty} x^{-2}dx \notag \\ & = \frac{e}{(1+m)n(n-1)}+\frac{e}{(1+m)n} \notag \\ & \leq \frac{2e}{(1+m)n}. \end{align*} The second inequality is proved in a similar way.
\end{proof} \subsection{Bias of $\P$-trees and ICRT} Recall the definitions of Sections \ref{DkDef} and \ref{TkDef} of $(\square_i)_{1\leq i \leq k}$ and $\square_{\square,k}$. Recall that for every $m\in \mathbb{R}^+$, $h_m :x \mapsto x\mathbf{1}_{x\geq m}$. \begin{lemma} \label{Followers} We have the following assertions: \begin{compactitem} \item[(a)] \[ \lim_{m\to \infty} \max_{\P\in \Omega_{\P}}\mathbb{E}\left [h_{m}\left (\square_{\square,k}(T^{\P})/(\sigma^\P)^k\right )\right ]=0. \] \item[(b)] \[ \lim_{m\to \infty} \max_{\Theta \in \Omega_{\Theta}}\mathbb{E}\left [h_{m}\left (\square_{\square,k}(\textbf Y^\Theta, \textbf Z^\Theta)\right )\right ]=0. \] \end{compactitem} \end{lemma} \begin{proof} We focus only on (a) as (b) can be proved in the exact same way. Fix $\P\in \Omega_{\P}$. Let $({\D_n})_{n\in \mathbb{N}}\in \Omega_{\D}^\mathbb{N}$ such that ${\D_n} \Rightarrow \P$ (see the start of Section \ref{MAINsection} or \cite{Uniform} Section 8.1 for existence). By \cite{Uniform} Theorem 5, we have the following weak convergence, \[ (d^{\D_n}(\star_{i},\star_{j}))_{1\leq i,j \leq 2k}\limit^{(d)} (d^\P(\star_i,\star_j))_{1\leq i,j\leq 2k}. \] Then by Lemma \ref{CHIANTa} (see also \cite{MST} Corollary 6.6), $\square_{\square,k}(T^{{\D_n}})$ converges weakly toward $\square_{\square,k}(T^{\P})$ as $n\to \infty$. Furthermore, by Fubini's theorem, \[(\lambda^{\D_n})^2=(\sigma^{\D_n}/s^{\D_n})^2= \sum_{i=1}^\infty \frac{(d^{\D_n}_i)(d^{\D_n}_i-1)}{(s^{\D_n})^2}\limit \sum_{i=1}^\infty p_i^2=(\sigma^\P)^2. \] Therefore, for every $m\geq 0$, \begin{equation} \limsup \mathbb{E}[h_m ( \square_{\square,k}(T^{{\D_n}})/(\lambda^{\D_n})^k) ]\geq \mathbb{E} [ h_{m+1} ( \square_{\square,k}(T^{\P} ) /(\sigma^\P)^k)].\label{21h/4/12} \end{equation} Finally, Proposition \ref{MainMainMain} concludes the proof.
\end{proof} \section{Proof of the main theorems} \label{May the proof section be removed?} Theorems \ref{GP} and \ref{THM2} directly follow from three facts: the trees converge, the operation of gluing leaves is a continuous application, and the bias converges. In this section, we detail the proofs. \subsection{Proof of Theorem \ref{GP}} \begin{proof}[Proof of Theorem \ref{GP} (a)] Let $({\D_n})_{n\in \mathbb{N}}\in \Omega_{\D}^\mathbb{N}$ and $\P=(p_i)_{i\in \mathbb{N}\cup \{\infty\}}\in \Omega_{\P}$ such that ${\D_n}\Rightarrow \P$. Let $a\in \mathbb{N}$ be such that $p_a>0$. For all $1\leq i \leq a$ let $W_i=V_i$. For all $1\leq i \leq 2k$, let $W_{a+i}:= \star_i$. By \cite{Uniform} Theorem 5, it is easy to check that we have the following joint convergence, \begin{equation} (d^{\D_n}(W_i,W_j))_{1\leq i,j\leq a+2k}\limit^{(d)} (d^\P(W_i,W_j))_{1\leq i,j\leq a+2k}, \label{10/12/8h} \end{equation} writing $d^{\D_n}$ for the graph distance on $T^{\D_n}$, and $d^\P$ for the graph distance on $T^\P$. Then by the Skorokhod representation theorem, we may assume that \eqref{10/12/8h} holds a.s. Furthermore, since we work with discrete trees, note that a.s. for every $n$ large enough equality holds in \eqref{10/12/8h}. Hence, by Lemma \ref{CHIANTa} a.s. for every $n$ large enough $\square_{\square,k}(T^{\D_n})=\square_{\square,k}(T^\P)$. Thus, by dominated convergence, for any continuous bounded function $f:\mathbb{R}^{(a+2k)^2}\to \mathbb{R}^+$, \[ \frac{\mathbb{E}[f((d^{\D_n}(W_i,W_j))_{1\leq i,j\leq a+2k})\square_{\square,k}(T^{\D_n})]}{\mathbb{E}[\square_{\square,k}(T^{\D_n})]}\limit \frac{\mathbb{E}[f((d^\P(W_i,W_j))_{1\leq i,j\leq a+2k})\square_{\square,k}(T^\P)]}{\mathbb{E}[\square_{\square,k}(T^\P)]}. \] Therefore, writing $\bar d^{{\D_n},k}$ for the graph distance on $T^{{\D_n},k}$ and $\bar d^{\P,k}$ for the graph distance on $T^{\P,k}$, \begin{equation} (\bar d^{{\D_n},k}(W_i,W_j))_{1\leq i,j\leq a+2k}\limit^{(d)} (\bar d^{\P,k}(W_i,W_j))_{1\leq i,j\leq a+2k}.
\label{Tardif} \end{equation} Finally by gluing $(\star_1,\star_2),\dots, (\star_{2k-1},\star_{2k})$, which is a continuous map for the matrix distance, \begin{equation*} (d^{{\D_n},k}(V_i,V_j))_{1\leq i,j\leq a}\limit^{(d)} (d^{\P,k}(V_i,V_j))_{1\leq i,j\leq a}.\end{equation*} And Theorem \ref{GP} (a) follows from Lemma \ref{equivGP2}. \end{proof} \begin{proof}[Proof of Theorem \ref{GP} (b)] Let $({\D_n})_{n\in \mathbb{N}}\in \Omega_{\D}^\mathbb{N}$ such that ${\D_n}\Rightarrow \Theta\in \Omega_{\Theta}$. For every $n\in \mathbb{N}$ let $\mathfrak{p}^{{\D_n},k}$ be a probability measure on $\mathcal{V}^{{\D_n},k}$ such that $\mathfrak{p}^{{\D_n},k}\to 0$. For every $n\in \mathbb{N}$ and $1\leq i \leq 2k$, let $W^{\D_n}_{i}:= \star_i$. Also, let $(W_i^{\D_n})_{i>2k}$ be a family of independent random variables with law $\mathfrak{p}^{{\D_n},k}$. Fix $a>2k$. By \cite{Uniform} Theorem 6 (b) and Lemma 14, we have \begin{equation} \left (\lambda^{\D_n} d^{\D_n}(W^{\D_n}_i,W^{\D_n}_j) \right)_{1\leq i,j\leq a}\limit^{(d)} (d^\Theta(Y^\Theta_i,Y^\Theta_j))_{1\leq i,j\leq a}. \label{10/12/8hX} \end{equation} Then by the Skorokhod representation theorem we may assume that \eqref{10/12/8hX} holds almost surely. Hence, by Lemma \ref{CHIANTa} a.s. $ \square_{\square,k}(T^{{\D_n}})/(\lambda^{\D_n})^k\to \square_{\square,k}(Y^\Theta,Z^\Theta)$ as $n\to \infty$. Thus, by Proposition \ref{MainMainMain} and dominated convergence, we have for every continuous bounded function $f:\mathbb{R}^{a^2}\to \mathbb{R}$, \[ \frac{\mathbb{E}[f((\lambda^{\D_n} d^{\D_n}(W^{\D_n}_i,W^{\D_n}_j))_{1\leq i,j\leq a})\square_{\square,k}(T^{\D_n})]}{\mathbb{E}[\square_{\square,k}(T^{\D_n})]}\limit \frac{\mathbb{E}[f((d^\Theta(Y^\Theta_i,Y^\Theta_j))_{1\leq i,j\leq a})\square_{\square,k}(Y^\Theta,Z^\Theta)]}{\mathbb{E}[\square_{\square,k}(Y^\Theta,Z^\Theta)]}.
\]
Therefore,
\begin{equation*} (\lambda^{\D_n}\bar d^{{\D_n},k}(W^{\D_n}_i,W^{\D_n}_j))_{1\leq i,j\leq a}\limit^{(d)} (\bar d^{\Theta,k}(Y^\Theta_i,Y^\Theta_j))_{1\leq i,j\leq a}.\end{equation*}
Finally, gluing the first $k$ pairs of vertices is a continuous map for the matrix distance, so
\begin{equation*} (\lambda^{\D_n} d^{{\D_n},k}(W^{\D_n}_i,W^{\D_n}_j))_{2k+1\leq i,j\leq a}\limit^{(d)} (d^{\Theta,k}(Y^\Theta_i,Y^\Theta_j))_{2k+1\leq i,j\leq a}.\end{equation*}
Theorem \ref{GP} (b) then follows from Lemma \ref{equivGP2}. \end{proof}

\begin{proof}[Proof of Theorem \ref{GP} (c,d)] Since $\mathbb{K}_{\text{GP}}$ is a Polish space and $\Omega_{\mathcal{D}}$ is dense in $(\Omega,\Rightarrow)$, the results directly follow from Theorem \ref{GP} (a,b) (see \cite{Uniform} Section 8.1 for details). They can also be proved similarly. \end{proof}

\subsection{Proof of Theorem \ref{THM2}}

\begin{proof}[Proof of Theorem \ref{THM2} (a)] Let $({\D_n})_{n\in \mathbb{N}}\in\Omega_{\D}^\mathbb{N}$ be such that ${\D_n}\Rightarrow \Theta\in \Omega_{\Theta}$. By \cite{Uniform} Theorem 6 (b),
\begin{equation*} (\lambda^{\D_n} d^{\D_n}(\star_i,\star_j))_{i,j\in \mathbb{N}}\limit^{(d)} (d^\Theta(Y^\Theta_i,Y^\Theta_j))_{i,j\in \mathbb{N}}. \end{equation*}
Thus, by Lemma \ref{reconstructTHM}, for every $a\in \mathbb{N}$ we have for the $a$-pointed GH topology (see Section \ref{PointedGH}),
\begin{equation*} (T^{\D_n}(\{\star_i\}_{1\leq i \leq a}),\lambda^{\D_n} d^{\D_n},\{\star_i\}_{1\leq i \leq a}) \limit^{\text{WGH}^a} (T^\Theta(\{Y_i^\Theta\}_{1\leq i \leq a}),d^\Theta,\{Y_i^\Theta\}_{1\leq i \leq a}). \end{equation*}
Therefore, by Assumption \ref{Hypo3}, we have for the $2k$-pointed GH topology,
\begin{equation} (T^{\D_n},\lambda^{\D_n} d^{\D_n},\{\star_i\}_{1\leq i \leq 2k}) \limit^{\text{WGH}^{2k}} (T^\Theta,d^\Theta,\{Y_i^\Theta\}_{1\leq i \leq 2k}).
\label{10/12/16h} \end{equation}
Then, by the Skorohod representation theorem, we may assume that the above convergence holds almost surely. Thus, by Lemma \ref{CHIANTa}, a.s. $\square_{\square,k}(T^{\D_n})\to \square_{\square,k}(\textbf Y^\Theta, \textbf Z^\Theta)$. Then, for every continuous bounded function $f$ on $\mathbb{K}^{2k}_{\text{GH}}$, we have by Proposition \ref{MainMainMain} and dominated convergence,
\[ \frac{\mathbb{E}[f(T^{\D_n},\lambda^{\D_n} d^{\D_n},\{\star_i\}_{1\leq i \leq 2k}) \square_{\square,k}(T^{\D_n})]}{\mathbb{E}[\square_{\square,k}(T^{\D_n})]}\limit \frac{\mathbb{E}[f(T^\Theta,d^\Theta,\{Y_i^\Theta\}_{1\leq i \leq 2k}) \square_{\square,k}(\textbf Y^\Theta, \textbf Z^\Theta)]}{\mathbb{E}[\square_{\square,k}(\textbf Y^\Theta,\textbf Z^\Theta)]}. \]
Therefore,
\[ (T^{{\D_n},k},\lambda^{\D_n} \bar d^{{\D_n},k},\{\star_i\}_{1\leq i \leq 2k})\limit^{\text{WGH}^{2k}} (T^{\Theta,k}, \bar d^{\Theta,k},\{Y_i^{\Theta,k}\}_{1\leq i \leq 2k}). \]
Finally, since gluing $k$ pairs of points is a continuous operation for the $2k$-pointed GH topology, the desired result follows. \end{proof}

\begin{proof}[Proof of Theorem \ref{THM2} (b,c)] The results can be proved in exactly the same way. \end{proof}

\section{Configuration model and multiplicative graphs} \label{ALTEsection}

The main objective of this section is to explain the connections between the configuration model and multiplicative graphs, and between those models and $(\mathcal{D},k)$-graphs and $(\P,k)$-graphs.

\subsection{Definitions}

For every multigraph $G$ on $\{V_i\}_{i\in \mathbb{N}}$ and $i,j\in \mathbb{N}$, let $\#_{i,j}(G)$ be the number of edges $\{V_i,V_j\}$ in $G$, so that a multigraph on $\{V_i\}_{i\in \mathbb{N}}$ may be seen as a matrix. We call a function $f:I\to I$ a matching if $f\circ f=\Id$ and for every $x\in I$, $f(x)\neq x$. Let $\Omega_{{\CMmm}}$ be the set of decreasing sequences $(d_1,\dots, d_s)$ in $\{0\}\cup \mathbb{N}$ such that $\sum_{i=1}^s d_i$ is even.
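For concreteness, the uniform matching of half-edges just defined can be sampled by shuffling the half-edges and pairing them consecutively. The following minimal Python sketch is ours and not part of the paper; the function name and the toy degree sequence are illustrative.

```python
import random

def uniform_matching(items):
    """Sample a uniform matching f of `items` (even length):
    f(f(x)) = x and f(x) != x for every half-edge x."""
    assert len(items) % 2 == 0
    items = list(items)
    random.shuffle(items)  # a uniform permutation, paired consecutively,
    f = {}                 # yields a uniform matching
    for a, b in zip(items[::2], items[1::2]):
        f[a], f[b] = b, a
    return f

# Half-edges {(i, j) : 1 <= i <= s, 1 <= j <= d_i} for degrees (3, 2, 2, 1).
degrees = [3, 2, 2, 1]  # the sum is even, as required
half_edges = [(i, j) for i, d in enumerate(degrees, start=1)
              for j in range(1, d + 1)]
f = uniform_matching(half_edges)
assert all(f[f[x]] == x and f[x] != x for x in half_edges)
```

Counting the matched pairs between vertices $i$ and $j$ then yields the edge multiplicities $\#_{i,j}$ of the configuration model constructed below.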
\begin{algorithm} \label{ConfigAlgo} Construction of the configuration model from $\mathcal{D}=(d_1,\dots, d_s)\in\Omega_{{\CMmm}}$:
\begin{compactitem}
\item[-] Let $f=(f_1,f_2)$ be a uniform matching of $\{(i,j) \}_{1\leq i \leq s, 1\leq j\leq d_i}$.
\item[-] The configuration model is the random multigraph ${\CMmm}^\mathcal{D}$ with vertices $(V_i)_{1\leq i \leq s}$ such that for every $1\leq i \leq s$, $\#_{i,i}({\CMmm}^\mathcal{D}):=\frac{1}{2} \sum_{a=1}^{d_i} \mathbf{1}_{f_1(i,a)=i}$, and for every $1\leq i \neq j\leq s$,
\[ \#_{i,j}({\CMmm}^\mathcal{D}): =\sum_{a=1}^{d_i} \mathbf{1}_{f_1(i,a)=j}=\sum_{a=1}^{d_j} \mathbf{1}_{f_1(j,a)=i}.\]
\end{compactitem}
\end{algorithm}

Let $\Omega_{{\MGmm}}$ be the set of sequences $(\lambda,p_1,\dots, p_s)$ in $\mathbb{R}^{+*}$ with $p_1\geq \dots\geq p_s$.

\begin{algorithm} Construction of the multiplicative graph from $\P=(\lambda,p_1,\dots, p_s)\in\Omega_{{\MGmm}}$:
\begin{compactitem}
\item[-] Let $(X^\P_{i,j})_{1\leq i\neq j \leq s}$ be independent Bernoulli random variables with mean $1-e^{-\lambda p_i p_j}$.
\item[-] The multiplicative graph is the random graph ${\MGmm}^\P$ with vertices $(V_1,\dots, V_s)$ and with edges $\{1\leq i,j \leq s: X^\P_{i,j}=1 \}$.
\end{compactitem}
\end{algorithm}

Next, we introduce multiplicative multigraphs, which are augmented multiplicative graphs.

\begin{algorithm} Construction of the multiplicative multigraph from $\P=(\lambda,p_1,\dots, p_s)\in\Omega_{{\MGmm}}$:
\begin{compactitem}
\item[-] Let $(N^\P_{i,j})_{1\leq i\leq j \leq s}$ be independent Poisson random variables such that for every $1\leq i \leq s$, $N^\P_{i,i}$ has mean $\lambda p_i^2/2$, and for every $1\leq i< j \leq s$, $N^\P_{i,j}$ has mean $\lambda p_ip_j$.
\item[-] The multiplicative multigraph is the random multigraph ${\MGmm}^{\P+}$ with vertices $(V_i)_{1\leq i \leq s}$ and such that for every $1\leq i\leq j\leq s$, $\#_{i,j}({\MGmm}^{\P+}):=N^\P_{i,j}$.
\end{compactitem}
\end{algorithm}

\begin{lemma} There exists a coupling such that ${\MGmm}^\P$ is the graph obtained from ${\MGmm}^{\P+}$ by merging its multiple edges and removing its loops. That is, for every $i\neq j$, $\{i,j\}$ is an edge of ${\MGmm}^\P$ iff $\#_{i,j}({\MGmm}^{\P+})\geq 1$. \end{lemma}

\begin{proof} It is easy to check that there exists a coupling such that a.s. for every $1\leq i \neq j \leq s$, $X^\P_{i,j}=0$ iff $N^\P_{i,j}=0$. The result follows. \end{proof}

\subsection{Multiplicative multigraphs as local limits of the configuration model}

\begin{lemma} \label{CM=>MG} Let $\P=(\lambda,p_1,\dots, p_s)\in\Omega_{{\MGmm}}$. For $n\in \mathbb{N}$, let $\mathcal{D}^n=(d_i^n)_{1\leq i \leq s^n}\in \Omega_{{\CMmm}}$. Assume that $s^n\to \infty$, that for every $1\leq i \leq s$, $d^n_i\sim \sqrt{ s^n \lambda}\, p_i$, and that for every $n\in \mathbb{N}$ and $i> s$, $d_i^n=1$. Then,
\[\left (\#_{i,j}({\CMmm}^{{\D_n}}) \right )_{1\leq i,j\leq s}\limit^{(d)}\left (\#_{i,j}({\MGmm}^{\P+})\right )_{1\leq i,j\leq s}. \]
\end{lemma}

\begin{remark} From this result, one may see the LIFO-queue algorithm of Broutin, Duquesne, and Wang \cite{P-graph-1,P-graph-2} as a limit of a recursive construction, based on a DFS exploration, of a uniform matching. \end{remark}

\begin{proof} Let $(\mathcal{D}^n)_{n\in \mathbb{N}}$ and $\P$ be as in the statement. For $n\in \mathbb{N}$, let $f^n=(f^n_1,f^n_2)$ be a uniform matching of $\{(i,j) \}_{1\leq i \leq s^n, 1\leq j\leq d^n_i}$. We may assume that ${\CMmm}^{{\D_n}}$ is constructed from $f^n$ by Algorithm \ref{ConfigAlgo}. The main idea is that for $n$ large enough the variables $\{f_1(i,j)\}_{1\leq i \leq s^n, 1\leq j\leq d^n_i}$ are mostly independent. Since Poisson random variables appear as limits of Bernoulli trials, this explains the convergence. From there, there are many standard ways to justify the convergence. Below we briefly present a method based on random point processes.
We refer the reader to Kallenberg \cite{Kallenberg}, Section 4, for more details on the convergence of point processes. Let $\nu^n$ be the random measure on $\mathbb{K}:=\{(i,j)\}_{1\leq i\leq j \leq s}\times \mathbb{R}^2$ defined by
\[ \nu^n:=\sum_{1\leq i <j\leq s}\sum_{\substack{1\leq a \leq d_i\\ 1\leq b \leq d_j } } \mathbf{1}_{f(i,a)=(j,b)}\delta_{(i,j,a/\sqrt{\lambda s^n},b/\sqrt{\lambda s^n})} +\sum_{1\leq i\leq s}\sum_{1\leq a<b\leq d_i } \mathbf{1}_{f(i,a)=(i,b)} \delta_{(i,i,a/\sqrt{\lambda s^n},b/\sqrt{\lambda s^n})} . \]
It is enough to prove that $\{\nu^n\}_{n\in \mathbb{N}}$ converges vaguely toward a Poisson point process of rate
\begin{equation} d\nu:=\sum_{1\leq i< j\leq s} \lambda \mathbf{1}_{0\leq x\leq p_i}\mathbf{1}_{0\leq y\leq p_j} \delta_{i,j}dx dy+\sum_{1\leq i\leq s} \lambda \mathbf{1}_{0\leq x\leq y \leq p_i}\delta_{i,i}dx dy. \label{11/12/13h} \end{equation}
Indeed, provided this convergence holds, the desired result directly follows by integration over $dxdy$. To this end, first note that for every $n\in \mathbb{N}$, writing $m^n:=\sum_{i=1}^{s^n} d_i^n$,
\begin{align*} \mathbb{E}[\nu^n(\mathbb{K})] & =\sum_{1\leq i <j\leq s}\sum_{\substack{1\leq a \leq d_i\\ 1\leq b \leq d_j } } \mathbb{P}(f(i,a)=(j,b))+\sum_{1\leq i\leq s}\sum_{1\leq a<b\leq d_i } \mathbb{P}(f(i,a)=(i,b)) \\ & = \sum_{1\leq i <j\leq s} \frac{d_i d_j}{m^n-1}+\sum_{1\leq i\leq s} \frac{d_i(d_i-1)/2}{m^n-1} \\ & \to \sum_{1\leq i <j\leq s} \lambda p_i p_j+\sum_{1\leq i\leq s} \lambda p_i^2/2, \end{align*}
where the last convergence comes from the assumptions of the lemma on $(\mathcal{D}^n)_{n\in \mathbb{N}}$. Thus, $\{\nu^n\}_{n\in \mathbb{N}}$ is tight for the vague topology. Let $\nu$ be a sub-sequential limit of $\{\nu^n\}_{n\in \mathbb{N}}$.
By a similar computation, for every $1\leq i<j\leq s$ and $0\leq a\leq a' \leq p_i$, $0\leq b\leq b' \leq p_j$,
\[\mathbb{E}[\nu(\{i,j\} \times [a,a']\times [b,b'])]=\lim_{n\to\infty} \mathbb{E}[\nu^n(\{i,j\}\times [a,a']\times [b,b'] )]= \lambda (a'-a)(b'-b). \]
Similarly, for every $1\leq i \leq s$ and $0\leq a\leq a'\leq b\leq b'\leq p_i$,
\[\mathbb{E}[\nu(\{i,i\} \times [a,a']\times [b,b'])]=\lim_{n\to\infty} \mathbb{E}[\nu^n(\{i,i\}\times [a,a']\times [b,b'] )]= \lambda (a'-a)(b'-b). \]
Next, we prove that $\nu$ satisfies the independence criterion. Beforehand, let us introduce some notation. Let $\cov(\cdot,\cdot)$ denote the covariance of two random variables. Let
\begin{align*}S := \left \{(i,j,a,b)\in \mathbb{N}^4: 1\leq i<j\leq s, \substack{ 1\leq a \leq d_i \\ 1\leq b\leq d_j } \right \} \cup \{(i,i,a,b)\in \mathbb{N}^4: 1\leq i\leq s, 1\leq a<b \leq d_i \}. \end{align*}
For all disjoint compact sets $K_1, K_2\subset \mathbb{K}$ and every $n\in \mathbb{N}$, $\cov(\nu^n(K_1),\nu^n(K_2))$ equals
\[ \sum_{\substack{(i,j,a,b)\in S\\(i',j',a',b')\in S }}\mathbf{1}_{\left (i,j,\frac{a}{\sqrt{\lambda s^n}},\frac{b}{\sqrt{\lambda s^n}} \right )\in K_1}\mathbf{1}_{\left (i',j',\frac{a'}{\sqrt{\lambda s^n}},\frac{b'}{\sqrt{\lambda s^n}}\right )\in K_2} \cov(\mathbf{1}_{f(i,a)=(j,b)},\mathbf{1}_{f(i',a')=(j',b')}). \]
Then, by distinguishing whether it is possible to have both $f(i,a)=(j,b)$ and $f(i',a')=(j',b')$, note that in the last sum there are $O(\#S)$ terms that are equal to $0-1/(m^n)^2$, $O((\#S)^2)$ terms that are equal to $1/(m^n(m^n-2))-(1/m^n)^2=O(1/(m^n)^3)$, and the other terms are null. Therefore,
\[ |\cov(\nu^n(K_1),\nu^n(K_2))| =O(\#S)O(1/(m^n)^2)+O((\#S)^2)O(1/(m^n)^3)=O(1/m^n)\to 0.\]
Since the last convergence holds for all disjoint compact sets $K_1,K_2\subset \mathbb{K}$, we have that for all disjoint compact sets $K'_1,K'_2\subset \mathbb{K}$, $\cov(\nu(K'_1),\nu(K'_2))=0$.
Finally, to prove that $\nu$ is a Poisson point process of rate \eqref{11/12/13h}, it is enough to check that a.s. for every $x\in \mathbb{K}$, $\nu(\{x\})\in \{0,1\}$. To this end, one may adapt the previous argument to show that there exists $C>0$ such that for every $x\in \mathbb{K}$ and $\varepsilon>0$, writing $B(x,\varepsilon)$ for the closed ball centered at $x$ of radius $\varepsilon$ for $\|\cdot \|_{\infty}$, if $B(x,\varepsilon)$ does not intersect $\{(i,i,1/2,1/2)\}_{1\leq i\leq s}$ then
\[ \mathbb{E}[\nu(B(x,\varepsilon))(\nu(B(x,\varepsilon))-1)]\leq C\varepsilon^2. \]
This implies the desired property, and so concludes the proof. \end{proof}

\subsection{Connections with $(\mathcal{D},k)$-graphs and $(\P,k)$-graphs}

Recall that for every multigraph $G$ on $\{V_i\}_{i\in \mathbb{N}}$, $\circ(G):=\prod_{i\in \mathbb{N}} 2^{\#_{i,i}(G)}\prod_{i,j\in \mathbb{N}}\#_{i,j}(G)!$.

\begin{lemma} \label{Connections} Let $k\in \mathbb{N}$. We have the following assertions:
\begin{compactitem}
\item[(a)] Let $\mathcal{D}=(d_1,\dots, d_s)\in \Omega_{{\CMmm}}$ be such that $\sum_{i=1}^s d_i=2s+k-2$. Then ${\CMmm}^\mathcal{D}$ biased by $\circ({\CMmm}^\mathcal{D})$ and conditioned on being connected is a $((d_1-1,\dots, d_s-1),k)$-graph.
\item[(b)] Let $\mathcal W=(\lambda,w_1,\dots, w_s)\in \Omega_{{\MGmm}}$. For every $1\leq i \leq s$, let $p_i:=w_i/\sum_{j=1}^s w_j$. Let $\P=(p_1, \dots, p_s,0,0,\dots)$. Then ${\MGmm}^{\mathcal W+}$ biased by $\circ({\MGmm}^{\mathcal W+})$ and conditioned on being connected and having surplus $k$ is a $(\P,k)$-graph.
\end{compactitem}
\end{lemma}

\begin{remark} The bias is not really important, as those graphs are typically studied in a regime where, with high probability, the multigraph is a graph. Moreover, removing this bias only removes the term $\circ(\mathcal{G}_{(\star_i)_{1\leq i \leq 2k}}(T))$ in Section \ref{DkDef}, which does not change our proofs. \end{remark}

\begin{proof} (a) is classical and is easy to obtain from a quick enumeration.
So we focus on (b). The main idea is that, on the one hand, multiplicative multigraphs are limits of the configuration model, and, on the other hand, $(\P,k)$-graphs are limits of $(\mathcal{D},k)$-graphs. Thus, by identification, (b) follows. Let us give the details. Fix $k$, $\mathcal W$, and $\P$ as in (b). Let $(\mathcal{D}^n)_{n\in \mathbb{N}}$ be a sequence of $\Omega_{{\CMmm}}$ as in Lemma \ref{CM=>MG}. Write ${\MGmm}^{\mathcal W,k}$ for the random multigraph ${\MGmm}^{\mathcal W+}$ biased by $\circ({\MGmm}^{\mathcal W+})$ and conditioned on being connected and having surplus $k$. Also, for $n\in \mathbb{N}$, write ${\CMmm}^{\mathcal{D}^n, k}$ for the random multigraph ${\CMmm}^{\mathcal{D}^n}$ biased by $\circ({\CMmm}^{\mathcal{D}^n})$ and conditioned on the fact that the subgraph of ${\CMmm}^{\mathcal{D}^n}$ spanned by $(V_i)_{1\leq i \leq s}$ is connected and has surplus $k$. By Lemma \ref{CM=>MG}, we have
\begin{equation} (\#_{i,j}({\CMmm}^{\mathcal{D}^n,k}) )_{1\leq i,j\leq s}\limit^{(d)} (\#_{i,j}({\MGmm}^{\mathcal W,k}))_{1\leq i,j\leq s}. \label{9/12/18h} \end{equation}
Then, for every $n\in \mathbb{N}$, let $S^n+s$ be the number of vertices that are in the connected component of $(V_i)_{1\leq i \leq s}$ in ${\CMmm}^{\mathcal{D}^n}$, and let $\mathcal{D}^{n-}:=(d_1^n,\dots, d_s^n,1,\dots, 1 )$ with $S^n$ ones at the end. It is well known that for every $n\in \mathbb{N}$, conditioned on $S^n$, ${\CMmm}^{\mathcal{D}^n,k}$ has the same law as ${\CMmm}^{\mathcal{D}^{n-}}$ (where the vertices outside $(V_i)_{1\leq i \leq s}$ in ${\CMmm}^{\mathcal{D}^n}$ have been relabeled). More precisely,
\[ (\#_{i,j}({\CMmm}^{\mathcal{D}^n,k}) )_{1\leq i,j\leq s} \egale^{(d)} (\#_{i,j}({\CMmm}^{\mathcal{D}^{n-}}) )_{1\leq i,j\leq s}.
\]
Therefore, it directly follows from \eqref{9/12/18h} that if, for $n\in \mathbb{N}$, ${\CMmm}^{\mathcal{D}^{n-},k}$ denotes the random multigraph ${\CMmm}^{\mathcal{D}^{n-}}$ biased by $\circ({\CMmm}^{\mathcal{D}^{n-}})$ and conditioned on being connected, then
\begin{equation} (\#_{i,j}({\CMmm}^{\mathcal{D}^{n-},k}) )_{1\leq i,j\leq s}\limit^{(d)} (\#_{i,j}({\MGmm}^{\mathcal W,k}))_{1\leq i,j\leq s}. \label{9/12/19h} \end{equation}
Next, for $n\in \mathbb{N}$, let ${\D_n}\in \Omega_{\D}$ be the sequence $(d_1^n-1,\dots, d_s^n-1,0,\dots, 0)$, where we added $S^n+2k$ zeros at the end. By (a), we have for every $n\in \mathbb{N}$,
\[ (\#_{i,j}(G^{{\D_n},k}) )_{1\leq i,j\leq s} \egale^{(d)} (\#_{i,j}({\CMmm}^{\mathcal{D}^{n-},k}) )_{1\leq i,j\leq s}.\]
Therefore, by \eqref{9/12/19h},
\begin{equation} (\#_{i,j}(G^{{\D_n},k}) )_{1\leq i,j\leq s} \limit^{(d)} (\#_{i,j}({\MGmm}^{\mathcal W,k}) )_{1\leq i,j\leq s}. \label{9/12/21hb} \end{equation}
Finally, note that ${\D_n}\Rightarrow \P$. So, by \eqref{Tardif} and Lemma \ref{reconstructTHM}, as $n\to \infty$, the subtree of $T^{{\D_n},k}$ spanned by $\{V_i\}_{1\leq i \leq s}\cup\{\star_i\}_{1\leq i \leq 2k}$ converges weakly toward the subtree of $T^{\P,k}$ spanned by the same vertices. Therefore, by gluing $(\star_1,\star_2),\dots, (\star_{2k-1},\star_{2k})$ and then counting the edges, we have
\begin{equation*} (\#_{i,j}(G^{{\D_n},k}) )_{1\leq i,j\leq s} \limit^{(d)} (\#_{i,j}(G^{\P,k}) )_{1\leq i,j\leq s}.\end{equation*}
Together with \eqref{9/12/21hb}, this concludes the proof. \end{proof}

To conclude the section, let us compute the law of the $(\P,k)$-graph.

\begin{lemma} \label{BIGFINAL} Let $k\in \mathbb{N}$ and let $(p_1,\dots, p_s,0,0,\dots)\in \Omega_{\P}$. For every connected multigraph $G$ on $\{V_i\}_{1\leq i \leq s}$ with surplus $k$,
\[ \mathbb{P}(G^{\P,k}=G)\propto \prod_{1\leq i\leq j \leq s} (p_i p_j)^{\#_{i,j}(G)}.
\]
\end{lemma}

\begin{proof} We keep the notation of Lemma \ref{Connections} (b). By definition of ${\MGmm}^{\mathcal W+}$, we have
\[ \mathbb{P}({\MGmm}^{\mathcal W+}=G)=\prod_{1\leq i<j\leq s} \frac{(\lambda p_i p_j)^{\#_{i,j}(G)}e^{-\lambda p_i p_j}}{\#_{i,j}(G)!}\prod_{1\leq i \leq s} \frac{(\lambda p_i^2/2)^{\#_{i,i}(G)}e^{-\lambda p_i^2/2}}{\#_{i,i}(G)!}.\]
So the result follows from Lemma \ref{Connections} (b). \end{proof}

\begin{remark} $\bullet$ When $k=0$, the result is well known and is a classical definition of $\P$-trees. \\ $\bullet$ When the weight of the edges is not multiplicative, one can still construct similar multigraphs. Moreover, Lemma \ref{BIGFINAL} is still true in this case. For $k=0$, this relates those models to the general spanning trees constructed by the Aldous--Broder algorithm \cite{AaldousBroder,AldousBbroder}. \end{remark}

\bibliographystyle{plain}
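To make the Poisson construction of multiplicative multigraphs concrete, here is a minimal Python sketch. It is ours and not part of the paper; the function names and the toy parameters $\lambda=2$, $p=(0.5,0.3,0.2)$ are illustrative, and the inverse-transform Poisson sampler is only meant for small means.

```python
import math
import random

def poisson(mean):
    """Inverse-transform sampling of a Poisson variable (fine for small means)."""
    u, k = random.random(), 0
    term = acc = math.exp(-mean)  # acc = P(X <= k)
    while u > acc:
        k += 1
        term *= mean / k
        acc += term
    return k

def multiplicative_multigraph(lam, p):
    """Edge multiplicities of MG^{P+}: independent Poisson counts with mean
    lam * p[i] * p[j] for i < j, and lam * p[i]**2 / 2 on the diagonal."""
    s = len(p)
    N = [[0] * s for _ in range(s)]
    for i in range(s):
        N[i][i] = poisson(lam * p[i] ** 2 / 2)  # loops at V_i
        for j in range(i + 1, s):
            N[i][j] = N[j][i] = poisson(lam * p[i] * p[j])
    return N

N = multiplicative_multigraph(2.0, [0.5, 0.3, 0.2])
# Per the coupling lemma, the simple graph MG^P keeps edge {i, j}, i != j,
# iff the multiplicity N[i][j] is at least 1.
edges = {(i, j) for i in range(3) for j in range(i + 1, 3) if N[i][j] >= 1}
```

Conditioning such a sample on connectedness and surplus $k$ (and biasing by $\circ$) would then produce a $(\P,k)$-graph in the sense of Lemma \ref{Connections} (b).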